A SYSTEM FOR MONITORING AND REPORTING AN INDIVIDUAL’S EMOTIONAL STABILITY

ORDINARY APPLICATION

Status: Published

Filed on 12 November 2024

Abstract

ABSTRACT The present invention relates to a system 100 for monitoring and reporting an individual's emotional stability, comprising: an imaging unit 102 configured to capture one or more images of an individual during an interview; at least one microphone 104 synchronously coupled with the imaging unit 102 and configured to record audio signals generated by the individual during the interview; an artificial intelligence (AI) module 106 configured to receive a dataset containing the one or more images and the audio signals, analyse the dataset using at least one AI model, determine one or more physical parameters and one or more vocal parameters associated with the individual, and calculate a steadiness score of the individual during the interview; and a user interface 108 configured to display the steadiness score along with the number of correctly answered questions to an interviewer. Refer to Figure 1 and Figure 2.

Patent Information

Application ID: 202411087062
Invention Field: BIO-MEDICAL ENGINEERING
Date of Application: 12/11/2024
Publication Number: 48/2024

Inventors

Name: Pushkal
Address: 18/410, Indira Nagar, Lucknow, Uttar Pradesh, India 226016
Country: India
Nationality: India

Applicants

Name: HOUSE OF COUTON PRIVATE LIMITED
Address: 18/410, Indira Nagar, Lucknow, Uttar Pradesh, India 226016
Country: India
Nationality: India

Specification

Description: "A SYSTEM FOR MONITORING AND REPORTING AN INDIVIDUAL'S EMOTIONAL STABILITY"
FIELD OF THE INVENTION
[0001] The present invention relates to a system for monitoring and reporting an individual's emotional stability that is capable of evaluating an individual's performance during interviews, leveraging artificial intelligence for comprehensive analysis of both visual and vocal parameters, thereby enhancing the efficiency and accuracy of candidate evaluation processes.
BACKGROUND OF THE INVENTION
[0002] In an increasingly competitive job market, the ability to effectively evaluate candidates during interviews is paramount for organizations seeking to identify the best fit for their teams. Traditional interview methods often rely heavily on interviewer intuition and subjective assessments, which can introduce biases and inconsistencies. As a result, the evaluation process may not accurately reflect a candidate's true potential or capabilities.

[0003] Conventional approaches to interview assessment primarily focus on verbal communication and may overlook critical non-verbal cues, such as body language and facial expressions. Additionally, audio recordings alone cannot provide a comprehensive understanding of a candidate's vocal dynamics or emotional steadiness throughout the interview. This fragmented assessment approach can lead to inadequate feedback and misinformed hiring decisions, ultimately impacting the quality of talent acquisition.

[0004] Moreover, existing evaluation tools frequently lack real-time analytical capabilities. The absence of objective metrics hinders interviewers from gaining insights into a candidate's performance while the interview is in progress, limiting opportunities for immediate feedback and improvement. This gap in real-time assessment tools can result in missed opportunities for candidates to demonstrate their abilities more effectively.

[0005] To address these challenges, there is a growing need for a unified system that combines imaging and audio capture with advanced artificial intelligence. Such a system would enable the simultaneous evaluation of both visual and vocal parameters, providing a holistic assessment of candidate performance. By calculating different metrics and analyzing responses in real-time, the proposed invention aims to enhance the interview process, ensuring that organizations make more informed hiring decisions based on objective data rather than subjective impressions.

SUMMARY OF THE INVENTION
[0006] In view of the foregoing disadvantages inherent in the prior art, the general purpose of the present disclosure is to provide a system for monitoring and reporting an individual's steadiness score to evaluate the performance of the individual during an interview using artificial intelligence, to include all advantages of the prior art, and to overcome the drawbacks inherent in the prior art.
[0007] An objective of the present invention is to develop a system that monitors comprehensive data of an individual and evaluates the performance of the individual during interviews.
[0008] Another objective of the present invention is to develop a system that identifies signs of stress, anxiety, confidence, and overall emotional stability.
[0009] Another objective of the present invention is to develop a system that provides data-driven feedback on the individual's emotional and psychological steadiness.
[0010] Another objective of the present invention is to develop a system that is capable of operating without distracting the individual.
[0011] Yet another objective of the present invention is to develop a system that supports an interviewer in making more informed hiring decisions.
[0012] Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
[0013] The present invention relates to a system for monitoring and reporting an individual's emotional stability that includes an imaging unit designed to capture one or more images of an individual during an interview. The system further includes at least one microphone synchronously coupled with the imaging unit, which records audio signals generated by the individual during the interview.

[0014] The system further includes an artificial intelligence (AI) module communicatively coupled with the imaging unit and the at least one microphone. The AI module is designed to receive a dataset containing the one or more images and the audio signals, and to analyse the dataset using at least one AI model. The AI model is configured to determine one or more physical parameters and one or more vocal parameters associated with the individual. The one or more physical parameters include, but are not limited to, eye contact, facial expressions, posture changes, and gesticulations. The one or more vocal parameters include, but are not limited to, voice tone, speech pace, linguistic patterns, and correct answers. The AI module then calculates a steadiness score of the individual during the interview.
[0015] Finally, a user interface coupled with the AI module is configured to display the steadiness score along with the number of correctly answered questions to an interviewer. The steadiness score and the number of correctly answered questions are displayed by the user interface in real time.
[0016] In another embodiment of the present disclosure, a method for operating the system for monitoring and reporting an individual's emotional stability is disclosed. The method comprises the steps of: capturing, via an imaging unit, one or more images of an individual during an interview; recording, via at least one microphone synchronously coupled with the imaging unit, audio signals generated by the individual during the interview; receiving, via an artificial intelligence (AI) module communicatively coupled with the imaging unit and the at least one microphone, a dataset containing the one or more images and the audio signals; analysing, via the AI module, the dataset using at least one AI model; determining, via the AI module, one or more physical parameters associated with the individual (including but not limited to eye contact, facial expressions, posture changes, and gesticulations) and one or more vocal parameters associated with the individual (including but not limited to voice tone, speech pace, linguistic patterns, and correct answers); calculating, via the AI module, a steadiness score of the individual during the interview; and displaying, via a user interface coupled with the AI module, the steadiness score along with the number of correctly answered questions to an interviewer.
BRIEF DESCRIPTION OF DRAWING
[0017] The foregoing summary, as well as the following detailed description of various embodiments, is better understood when read in conjunction with the drawings provided herein. For the purposes of illustration, there are shown in the drawings exemplary embodiments; however, the presently disclosed subject matter is not limited to the specific methods and instrumentalities disclosed.
[0018] Figure 1 illustrates a block diagram of a system for monitoring and reporting an individual's emotional stability as disclosed in an embodiment of the present disclosure; and
[0019] Figure 2 illustrates a flowchart showing a method for operating the system for monitoring and reporting an individual's emotional stability as disclosed in an embodiment of the present disclosure.
[0020] Like reference numerals refer to like parts throughout the description of several views of the drawing.
DETAILED DESCRIPTION OF THE INVENTION
[0021] Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components, and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
[0022] The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "comprises," "comprising," "including," and "having," are open-ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, modules, units and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
[0023] The following detailed description should be read with reference to the drawings, in which similar elements in different drawings are identified with the same reference numbers. The drawings, which are not necessarily to scale, depict illustrative embodiments and are not intended to limit the scope of the disclosure.
[0024] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed. In this application, the use of the singular includes the plural, the word "a" or "an" means "at least one", and the use of "or" means "and/or", unless specifically stated otherwise. Furthermore, the use of the term "including", as well as other forms, such as "includes" and "included", is not limiting. Also, terms such as "element" or "component" encompass both elements and components comprising one unit and elements or components that comprise more than one unit unless specifically stated otherwise.
[0025] Furthermore, the term "module", as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, Java, C, C++, python, or assembly. One or more software instructions in the modules can be embedded in firmware, such as in an EPROM. The modules described herein can be implemented as either software and/or hardware modules and can be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives.
[0026] Referring to Figures 1 and 2, a block diagram of a system 100 for monitoring and reporting an individual's emotional stability is illustrated, comprising an imaging unit 102, at least one microphone 104 synchronously coupled with the imaging unit 102, an artificial intelligence (AI) module 106 communicatively coupled with the imaging unit 102 and the at least one microphone 104, a user interface 108 coupled with the AI module 106, a database 110 communicatively coupled with the AI module 106, and a communication module 112.
[0027] In an embodiment of the present disclosure, the system 100 comprises the imaging unit 102. The imaging unit 102 is installed within a premises; the premises include, but are not limited to, a conference room, an office, a cubicle, a hall, etc. The imaging unit 102 is configured to capture one or more images of an individual present within the premises during an interview. The imaging unit 102 operates through an integrated module having optical components and digital processing units. Upon activation, the imaging unit 102, containing an array of sensors, detects the presence of individuals within its field of view. The imaging unit 102 then adjusts parameters such as focus, exposure, and white balance to ensure optimal image quality. In an exemplary embodiment, the imaging unit 102 is equipped with a user-friendly interface that allows operators to initiate image capture with minimal intervention, ensuring that the process remains unobtrusive during interviews.
[0028] Further, the one or more captured images are processed in real-time using the digital processing units that enhance clarity and detail, while also enabling storage of images in various formats. The imaging unit 102 is connected with external storage solutions or networks, facilitating easy retrieval and management of images for future reference. Additionally, the imaging unit 102 includes security features to protect the one or more captured images to ensure compliance with privacy regulations.
[0029] The system 100 further comprises the at least one microphone 104, which is installed within the premises and synchronously coupled with the imaging unit 102. The at least one microphone 104 is configured to record audio signals generated by the individual during the interview. The synchronous operation of the at least one microphone 104 with the imaging unit 102 ensures that the audio signals align with the one or more images captured during the interview. In an exemplary embodiment, the at least one microphone 104 utilizes a noise-cancellation module that filters out ambient sounds and focuses on clear voice capture, thereby enhancing the quality of the audio signals.
[0030] The at least one microphone 104 employs digital signal processing (DSP) techniques configured to optimize sound fidelity and clarity. Upon activation, the at least one microphone 104 records audio signals in real time and compresses the audio data for efficient storage. The audio signals are further synchronized with the one or more images captured by the imaging unit 102, creating a comprehensive record of the interview that combines both visual and auditory components.
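The synchronization of paragraph [0030] can be sketched as a timestamp mapping between video frames and audio samples. This is a minimal illustration only: the frame rate and sample rate below are assumed values, as the specification fixes neither.

```python
# Minimal sketch: aligning captured image frames with recorded audio.
# FRAME_RATE_HZ and SAMPLE_RATE_HZ are hypothetical; the specification
# does not state the camera or microphone parameters.

FRAME_RATE_HZ = 30        # camera frames per second (assumed)
SAMPLE_RATE_HZ = 16_000   # audio samples per second (assumed)

def audio_window_for_frame(frame_index: int) -> tuple[int, int]:
    """Return the [start, end) audio-sample indices covering one video frame."""
    samples_per_frame = SAMPLE_RATE_HZ // FRAME_RATE_HZ
    start = frame_index * samples_per_frame
    return start, start + samples_per_frame

# Frame 0 covers the first ~533 samples; frame 30 begins one second in.
print(audio_window_for_frame(0))
print(audio_window_for_frame(30))
```

In a real capture pipeline the mapping would be driven by hardware timestamps rather than assumed constant rates, but the indexing idea is the same.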
[0031] The system 100 includes the artificial intelligence (AI) module 106. Further, the AI module 106 is communicatively coupled with the imaging unit 102 and the at least one microphone 104. The AI module 106 is configured within a processor. Further, the processor is configured to perform each operation associated with the system 100. The AI module 106 first establishes a communicative link with both the imaging unit 102 and the at least one microphone 104 to receive a dataset. Further, the dataset comprises the one or more images and the corresponding audio signals that are recorded during the interview. Further, the integration of visual and auditory data is crucial for a holistic analysis. In an exemplary embodiment, the AI module 106 receives high-resolution images captured at key moments of the interview alongside audio recordings that include the individual's responses and ambient sounds.
[0032] Once the dataset is received, the AI module 106 employs at least one AI model to conduct a thorough analysis. The AI module 106 first pre-processes the dataset to determine verbal and non-verbal cues. The at least one AI model includes, but is not limited to, convolutional neural networks (CNNs) for image analysis and recurrent neural networks (RNNs) or transformers for audio processing. The CNN processes the visual data to extract physical parameters such as facial expressions, posture, and gestures, identifying emotions such as happiness or anxiety. Concurrently, the audio signals are analysed using RNNs or transformer models, which are adept at understanding temporal sequences, enabling the detection of vocal parameters such as tone, pitch variation, speech rate, and intonation patterns.
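The specification calls for CNN/RNN models, but the underlying idea of turning raw audio into vocal parameters can be illustrated with two classic hand-crafted features: short-time energy (a volume proxy) and zero-crossing rate (a rough voicing/pitch proxy). This sketch is not the claimed AI model, only a stand-in for the feature-extraction step.

```python
# Illustrative substitute for the audio branch of paragraph [0032]:
# simple per-window features that a downstream model could consume.

def short_time_energy(samples: list[float]) -> float:
    """Mean squared amplitude of an audio window (loudness proxy)."""
    return sum(s * s for s in samples) / len(samples)

def zero_crossing_rate(samples: list[float]) -> float:
    """Fraction of adjacent sample pairs that change sign (voicing proxy)."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

# A louder window scores higher on energy than a quiet one.
quiet = [0.01, -0.01, 0.01, -0.01]
loud = [0.5, -0.5, 0.5, -0.5]
assert short_time_energy(loud) > short_time_energy(quiet)
```

A production system would feed sequences of such features (or raw spectrograms) into the RNN or transformer the specification names.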
[0033] Following the analysis, the AI module 106 trains the at least one AI model using the datasets. The AI module 106 identifies and quantifies both the one or more physical parameters and the one or more vocal parameters associated with the individual. The one or more physical parameters include metrics such as the frequency of smiling, the stability of posture, and eye contact duration. The one or more vocal parameters include, but are not limited to, measures such as average pitch, speech clarity, and variations in volume. For instance, a high frequency of rapid speech along with a tense posture could indicate nervousness, while a steady tone with relaxed body language might suggest confidence.
[0034] Furthermore, the AI module 106 synthesizes the identified parameters to calculate a steadiness score for the individual during the interview. The steadiness score is derived from a weighted combination of the physical and vocal metrics, reflecting the individual's overall composure and engagement during the interview. The AI module 106 is configured to weight vocal consistency more heavily and minor physical movements less heavily, depending on their relevance to perceived steadiness. The steadiness score provides evaluators with quantifiable insights into the individual's emotional stability.
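The weighted combination of paragraph [0034] can be sketched as follows. The metric names and weight values are illustrative assumptions; the specification only states that vocal consistency is weighted more heavily than minor physical movements, and claim 9 fixes the output range at 0 to 10.

```python
# Hypothetical weighting scheme for the steadiness score of [0034].
# Only the *relative* ordering (vocal consistency high, minor movements
# low) comes from the specification; the exact numbers are assumed.

WEIGHTS = {
    "vocal_consistency": 0.4,    # weighted higher per [0034]
    "eye_contact": 0.25,
    "posture_stability": 0.25,
    "minor_movements": 0.1,      # weighted lower per [0034]
}

def steadiness_score(metrics: dict[str, float]) -> float:
    """Weighted sum of normalized (0..1) metrics, scaled to the 0-10
    range recited in claim 9."""
    raw = sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)
    return round(raw * 10, 1)

candidate = {
    "vocal_consistency": 0.9,
    "eye_contact": 0.8,
    "posture_stability": 0.7,
    "minor_movements": 0.5,
}
print(steadiness_score(candidate))
```

Because the weights sum to 1 and each metric is normalized, the score is guaranteed to stay within 0 to 10.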
[0035] Moreover, the steadiness score calculated by the AI module 106 provides insight to interviewers about how likely the individual is to remain committed to an interviewing organization over time. For instance, if the AI module 106 assigns a higher steadiness score (e.g., 8 out of 10), this indicates that the individual is likely to stay committed to the organization for several years before moving on, suggesting to interviewers that the individual is likely to be more loyal and invested in the organization. Conversely, if the AI module 106 assigns a lower steadiness score (e.g., 3 out of 10), this indicates that the individual is likely to leave the organization within 1-2 years and move on to another organization, suggesting that the individual is unlikely to be loyal to the organization.
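The interpretation in paragraph [0035] amounts to banding the 0-10 score. The cut-off values below are assumptions chosen only to be consistent with the two worked examples (8 and 3); the specification does not define thresholds.

```python
# Hypothetical banding of the steadiness score per the examples in [0035].
# The thresholds 7 and 4 are assumed, not taken from the specification.

def interpret_steadiness(score: float) -> str:
    if score >= 7:
        return "likely to stay committed for several years"
    if score >= 4:
        return "moderate retention outlook"
    return "likely to leave within 1-2 years"

print(interpret_steadiness(8))  # the high-score example from [0035]
print(interpret_steadiness(3))  # the low-score example from [0035]
```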
[0036] The system 100 further comprises the user interface 108. The user interface 108 is coupled with the AI module 106 and installed within a computing unit. The computing unit includes, but is not limited to, a mobile phone, a tablet, a hand-held terminal, etc. The user interface 108 is configured to display the steadiness score along with the number of correctly answered questions to an interviewer, providing a means for the interviewer to access and interpret the results generated by the AI module 106. The user interface 108 is designed to be intuitive and responsive, adapting to different screen sizes and orientations, and is developed using a graphical framework suitable for the computing unit. The user interface 108 includes various elements such as buttons, sliders, and display areas. Upon launching the computing unit, the user interface 108 initializes and establishes a wireless connection with the AI module 106 via the communication module 112.
[0037] Once connected, the user interface 108 sends a request to the AI module 106 to retrieve relevant data, including the steadiness score and the number of correctly answered questions. The interaction between the user interface 108 and the AI module 106 is facilitated by the communication module 112, which encodes the request in a format suitable for transmission over a wireless network, such as JSON or XML. The communication module 112 utilizes standard protocols (e.g., HTTP/HTTPS, WebSocket) to ensure reliable data transfer. Upon receiving the response from the AI module 106, the user interface 108 processes the incoming data and displays the steadiness score in a clear and accessible format. The steadiness score is presented prominently, possibly as a numerical value along with a visual representation, such as a gauge or color-coded indicator, to facilitate quick comprehension. Additionally, the number of correctly answered questions is displayed alongside the score, allowing the interviewer to evaluate the individual's performance holistically.
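The JSON exchange described in paragraph [0037] can be sketched with the standard library. The field names (`action`, `session_id`, `steadiness_score`, `correct_answers`) are illustrative assumptions; the specification does not define a message schema.

```python
# Sketch of the UI <-> AI-module exchange of [0037], using the JSON
# format the paragraph mentions. All field names are hypothetical.
import json

# Request the user interface 108 might send via the communication module 112:
request = json.dumps({"action": "get_results", "session_id": "interview-42"})

# Response the AI module 106 might return over HTTP/HTTPS or WebSocket:
response_body = '{"steadiness_score": 7.5, "correct_answers": 9}'
payload = json.loads(response_body)

# The UI then renders both values side by side, per [0037].
print(f"Steadiness: {payload['steadiness_score']}/10, "
      f"correct answers: {payload['correct_answers']}")
```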
[0038] The system 100 further comprises the database 110, which is communicatively coupled with the AI module 106 through the communication module 112. The database 110 is configured to store the calculated steadiness score for future reference, comparison, and further evaluation. The database 110 includes, but is not limited to, a physical memory, cloud storage, etc.
[0039] The system 100 may include a method 200 for operating the system 100 for monitoring and reporting an individual's emotional stability. The method 200 comprises the steps of: capturing, via the imaging unit 102, one or more images of an individual during an interview; recording, via the at least one microphone 104 synchronously coupled with the imaging unit 102, audio signals generated by the individual during the interview; receiving, via the artificial intelligence (AI) module 106 communicatively coupled with the imaging unit 102 and the at least one microphone 104, a dataset containing the one or more images and the audio signals; analysing, via the AI module 106, the dataset using at least one AI model; determining, via the AI module 106, one or more physical parameters associated with the individual (including but not limited to eye contact, facial expressions, posture changes, and gesticulations) and one or more vocal parameters associated with the individual (including but not limited to voice tone, speech pace, linguistic patterns, and correct answers); calculating, via the AI module 106, a steadiness score of the individual during the interview; and displaying, via the user interface 108 coupled with the AI module 106, the steadiness score along with the number of correctly answered questions to an interviewer.
[0040] While considerable emphasis has been placed herein on the specific features of the preferred embodiment, it will be appreciated that many additional features can be added and that many changes can be made in the preferred embodiment without departing from the principles of the disclosure. These and other changes in the preferred embodiment of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0041] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements.
[0042] The embodiments described above are intended only to illustrate and teach one or more ways of practicing or implementing the present invention, not to restrict its breadth or scope. The actual scope of the invention, which embraces all ways of practicing or implementing the teachings of the invention, is defined only by the following claims and their equivalents.
CLAIMS:
1. A system 100 for monitoring and reporting an individual's emotional stability, comprising:
an imaging unit 102 configured to capture one or more images of an individual during an interview;
at least one microphone 104 synchronously coupled with the imaging unit 102 configured to record audio signals generated by the individual during the interview;
an artificial intelligence (AI) module 106 communicatively coupled with the imaging unit 102 and the at least one microphone 104, wherein the AI module 106 is configured to:
receive a dataset containing the one or more images and the audio signals,
analyse the dataset using at least one AI model,
determine one or more physical parameters associated with the individual and one or more vocal parameters associated with the individual, and
calculate a steadiness score of the individual during the interview; and
a user interface 108 coupled with the AI module 106 configured to display the steadiness score along with number of correctly answered questions to an interviewer.

2. The system 100 as claimed in claim 1, further comprises a database 110 communicatively coupled with the AI module 106.
3. The system 100 as claimed in claim 2, wherein the database 110 is configured to store the calculated steadiness score for future reference, comparison, and further evaluation.
4. The system 100 as claimed in claim 1, wherein the one or more physical parameters include but are not limited to eye contact, facial expressions, posture changes, and gesticulations.
5. The system 100 as claimed in claim 1, wherein the one or more vocal parameters include but are not limited to voice tone, speech pace, linguistic patterns, and correct answers.
6. The system 100 as claimed in claim 1, wherein the AI module 106 is configured to pre-process the dataset to determine verbal and non-verbal cues.
7. The system 100 as claimed in claim 1, wherein the imaging unit 102 is connected with the at least one microphone 104 through a communication module 112.
8. A method 200 for operating the system 100 for monitoring and reporting an individual's emotional stability, comprising:
capturing, via an imaging unit 102, one or more images of an individual during an interview;
recording, via at least one microphone 104 synchronously coupled with the imaging unit 102, audio signals generated by the individual during the interview;
receiving, via an artificial intelligence (AI) module 106 communicatively coupled with the imaging unit 102 and the at least one microphone 104, a dataset containing the one or more images and the audio signals;
analysing, via the AI module 106, the dataset using at least one AI model;
determining, via the AI module 106, one or more physical parameters associated with the individual and one or more vocal parameters associated with the individual;
calculating, via the AI module 106, a steadiness score of the individual during the interview; and
displaying, via a user interface 108 coupled with the AI module 106, the steadiness score along with number of correctly answered questions to an interviewer.
9. The method 200 for operating the system 100 for monitoring and reporting an individual's emotional stability as claimed in claim 8, wherein the steadiness score ranges from 0 to 10.
10. The method 200 for operating the system 100 for monitoring and reporting an individual's emotional stability as claimed in claim 9, wherein the user interface 108 is installed within a computing unit accessed by the interviewer.

Documents

Name | Date
202411087062-FORM 18A [20-11-2024(online)].pdf | 20/11/2024
202411087062-FORM28 [20-11-2024(online)].pdf | 20/11/2024
202411087062-STARTUP [20-11-2024(online)].pdf | 20/11/2024
202411087062-COMPLETE SPECIFICATION [12-11-2024(online)].pdf | 12/11/2024
202411087062-DECLARATION OF INVENTORSHIP (FORM 5) [12-11-2024(online)].pdf | 12/11/2024
202411087062-DRAWINGS [12-11-2024(online)].pdf | 12/11/2024
202411087062-EDUCATIONAL INSTITUTION(S) [12-11-2024(online)].pdf | 12/11/2024
202411087062-EVIDENCE FOR REGISTRATION UNDER SSI [12-11-2024(online)].pdf | 12/11/2024
202411087062-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [12-11-2024(online)].pdf | 12/11/2024
202411087062-FORM 1 [12-11-2024(online)].pdf | 12/11/2024
202411087062-FORM FOR SMALL ENTITY(FORM-28) [12-11-2024(online)].pdf | 12/11/2024
202411087062-FORM-9 [12-11-2024(online)].pdf | 12/11/2024
202411087062-POWER OF AUTHORITY [12-11-2024(online)].pdf | 12/11/2024
202411087062-REQUEST FOR EARLY PUBLICATION(FORM-9) [12-11-2024(online)].pdf | 12/11/2024
