SYSTEM AND METHOD FOR INTERPRETING COGNITIVE AND EMOTIONAL STATES OF A USER

ORDINARY APPLICATION

Published

Filed on 9 November 2024

Abstract

ABSTRACT SYSTEM AND METHOD FOR INTERPRETING COGNITIVE AND EMOTIONAL STATES OF A USER The present disclosure relates, in general, to the field of artificial intelligence and human psychology. More specifically, embodiments of the present invention relate to a system (100) for interpreting cognitive and emotional states of a user, the system (100) comprising an eye-tracking device (102), a mind voice analysis module (104), an ai-processing module (106), a personalized profiling module (108), and a user interface (110). The eye-tracking device (102) is configured to capture real-time eye movement data, and a mind voice analysis module (104) processes this data using an AI-based image processing model to identify eye movement patterns. The AI-based image processing module (106), trained on psychological models, correlates these patterns with the user’s cognitive and emotional states. The personalized profiling module (108) adapts the interpretation of cognitive and emotional states based on the user’s unique profile, continuously refining analysis over time. The user interface (110) then delivers the adapted interpretation based on the eye movement data and personalized profile.

Patent Information

Application ID: 202441086466
Invention Field: COMPUTER SCIENCE
Date of Application: 09/11/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
ARLA GOPALA KRISHNA | Department of Computer Science and Engineering, SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
PANDU SOWKUNTLA | Department of Computer Science and Engineering, SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India
SANJAY KUMAR | Department of Computer Science and Engineering, SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur-522502, Andhra Pradesh, India | India | India

Applicants

Name | Address | Country | Nationality
SRM UNIVERSITY | Amaravati, Mangalagiri, Andhra Pradesh-522502, India | India | India

Specification

Description:
FIELD
[0001] The present disclosure relates, in general, to the field of artificial intelligence and human psychology.
BACKGROUND
[0002] The background information herein below relates to the present disclosure but is not necessarily prior art.
[0003] Recent advancements in artificial intelligence (AI), human-computer interaction (HCI), and mental health monitoring underscore the growing importance of technologies capable of interpreting and responding to human emotions. Emotion recognition technologies, which apply machine learning algorithms to analyze facial expressions, vocal tones, physiological signals, and behavioral cues, play a crucial role in these fields by facilitating the detection and interpretation of emotional states. This capability is becoming increasingly relevant in applications such as user experience enhancement, therapeutic monitoring, and real-time assistance.
[0004] Traditional emotion recognition methods primarily rely on facial and vocal analysis. These systems typically utilize neural networks trained to interpret facial expressions or voice modulations associated with emotions like happiness, anger, and sadness. However, achieving high accuracy in real-world environments remains challenging due to factors such as environmental variability, individual differences, and the subjective nature of emotional expression, which often limit the reliability of these systems.
[0005] Eye movement tracking presents an additional promising approach for understanding emotional and cognitive states. Eye-tracking technology can capture user engagement, attention, and stress indicators, as specific gaze patterns and pupillary responses often correlate with cognitive load and emotional arousal. Existing applications of eye-tracking range from cognitive assessment and usability studies to clinical diagnostics. However, these systems are often used in isolation and lack integration with complementary methods that could provide a more holistic emotional and mental profile.
[0006] A further emerging area in emotion and cognitive state analysis involves interpreting the "mind voice," or the user's internal or silent voice linked to subvocalized thoughts. Advances in sensors and AI algorithms have enabled the initial capture and decoding of this subtle communication channel by tracking neuromuscular signals or markers of subvocal speech. Mind voice analysis offers potential insights into unspoken intentions, cognitive states, and stress levels, representing an innovative pathway for mental health and well-being monitoring.
[0007] Despite these advancements, current systems lack an integrated framework that combines emotion recognition, eye-tracking, and mind voice interpretation. There is, therefore, a felt need for a system that can leverage the power of artificial intelligence and human psychology to interpret the cognitive and emotional states of a user.
OBJECTS
[0008] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows.
[0009] It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
[0010] The main object of the present disclosure is to provide a system for interpreting cognitive and emotional states of a user.
[0011] An object of the present disclosure is to leverage artificial intelligence, psychological insights, eye movement tracking, and mind voice analysis to offer a unique way of gaining deeper insights into the cognitive and emotional states of a user.
[0012] Another object of the present disclosure is to leverage the power of computational intelligence to optimize the allocation of resources, such as classrooms, faculty, time slots, etc. while considering various constraints and objectives.
[0013] Yet another object of the present disclosure is to solve critical limitations in emotion detection, mental health monitoring, and human-computer interaction by offering a more accurate, personalized, and deeply insightful system for understanding human thoughts and emotions.
[0014] Still another object of the present disclosure is to address the gaps in superficial emotion recognition, inconsistent mental health diagnosis, and the lack of emotional intelligence in machines, providing solutions that are more responsive to the complexity of human cognition and emotions.
[0015] Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
[0016] This summary is provided to introduce concepts related to the field of artificial intelligence and human psychology. More specifically, embodiments of the present invention relate to the concepts of interpreting cognitive and emotional states of a user. The concepts are further described hereinbelow in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0017] The present disclosure envisages a system for interpreting cognitive and emotional states of a user. The system broadly comprises an eye-tracking device, a mind voice analysis module, an AI-based image processing module, a personalized profiling module, and a user interface.
[0018] The eye-tracking device is configured to capture real-time eye movement data, and the mind voice analysis module processes this real-time eye movement data using the AI-based image processing model to identify eye movement patterns. The AI-based image processing module, trained on psychological models, correlates these eye movement patterns with the user's cognitive and emotional states. Further, the personalized profiling module adapts the interpretation of cognitive and emotional states based on a user's personalized profile and continuously refines analysis over time. Lastly, the user interface delivers the adapted interpretation based on the eye movement data and the personalized profile.
[0019] In an embodiment, the eye-tracking device is a high-precision eye-tracking device capable of detecting pupil dilation, saccadic movement, fixation duration, and gaze direction to enhance the interpretation of the user's cognitive and emotional states.
[0020] In another embodiment, the mind voice analysis module is further configured to analyze multiple eye movement patterns, including gaze trajectory, blink frequency, and fixation points, to derive insights into a user's decision-making processes, emotional state, and memory recall patterns.
[0021] In yet another embodiment, the ai-processing module is configured to implement machine learning algorithms to continuously optimize the interpretation of emotional and cognitive states based on accumulated user data, allowing the system to dynamically adapt to user-specific behavioral patterns.
[0022] In still another embodiment, the personalized profiling module is further configured to adaptively refine the user's personalized profile by continuously analyzing ongoing eye movement data and responses over time to improve the system's interpretation accuracy.
[0023] In yet another embodiment, the system further comprises a psychological model database containing various psychological models associated with specific eye movement patterns, wherein the AI-processing module utilizes the psychological models to correlate eye movements with corresponding cognitive states.
[0024] In still another embodiment, the user interface is configured to communicate insights into the user's cognitive states and emotional states through visual, auditory, or haptic feedback.
[0025] In yet another embodiment, the mind voice analysis module includes a natural language processing (NLP) model to convert inferred cognitive signals into a textual or other interpretable output, allowing for readable insights into the user's mental state.
[0026] In still another embodiment, the personalized profiling module is configured to notify a mental health professional if certain emotional states indicative of psychological distress are consistently detected over a predefined period.
[0027] In yet another embodiment, the AI-based image processing module and the personalized profiling module enable the system to be applied in virtual reality environments to modify user interaction based on detected emotional states and cognitive engagement levels.
[0028] The present disclosure further envisages a method for interpreting cognitive and emotional states of a user, comprising the steps of:
• capturing, by an eye-tracking device, real-time eye movement data from the user;
• receiving, by a mind voice analysis module, the eye movement data from the eye-tracking device for identifying eye movement patterns from the received eye movement data by means of an artificial intelligence (AI) based image processing model;
• correlating, by an AI processing module, the identified eye movement patterns with the user's cognitive and emotional states by processing the data in trained psychological models;
• adapting, by a personalized profiling module, the interpretation of voices based on the eye movement patterns specific to the user's personalized profile;
• continuously updating, by the personalized profiling module, the user's personalized profile to refine the analysis of the user's cognitive and emotional states over time; and
• delivering, by a user interface, the adapted interpretation of the user's cognitive and emotional states, based on the eye movement data and the personalized profile.
[0029] In an aspect, the method further comprises dynamically adjusting genetic parameters for the timetable generation process based on the received user inputs.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
[0030] A system for interpreting cognitive and emotional states of a user will now be described with the help of the accompanying drawings, in which:
[0031] Figure 1 depicts a high-level block diagram of a system for interpreting cognitive and emotional states of a user, in accordance with an embodiment of the present disclosure;
[0032] Figure 2 is a flowchart illustrating the sequence of steps in decoding the internal mental voices of a human, in accordance with an embodiment of the present disclosure; and
[0033] Figure 3 illustrates a method flow diagram of a method for interpreting cognitive and emotional states of a user, in accordance with an embodiment of the present disclosure.
LIST OF REFERENCE NUMERALS USED IN THE DESCRIPTION AND DRAWINGS :
100 SYSTEM
102 EYE-TRACKING DEVICE
104 MIND VOICE ANALYSIS MODULE
106 AI-PROCESSING MODULE
108 PERSONALIZED PROFILING MODULE
110 USER INTERFACE
112 PSYCHOLOGICAL MODEL DATABASE

DETAILED DESCRIPTION
[0001] Embodiments of the present disclosure will now be described with reference to the accompanying drawings.
[0002] Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods to provide a complete understanding of the embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known apparatus structures, and well-known techniques are not described in detail.
[0003] The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a", "an", and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms, "comprises", "comprising", "including" and "having" are open-ended transitional phrases and therefore, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0004] When an element is referred to as being "embodied thereon", "engaged to", "coupled to" or "communicatively coupled to" another element, it may be directly on, engaged, connected, or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
[0005] In recent years, advancements in artificial intelligence (AI), human-computer interaction (HCI), and mental health monitoring have shown significant potential in enhancing understanding and response to human emotions. As an essential element of these technologies, emotion recognition leverages machine learning algorithms to interpret facial expressions, vocal tones, physiological signals, and other behavioral cues, allowing systems to detect emotional states. This capability is increasingly relevant in areas such as user experience optimization, therapeutic monitoring, and real-time assistance.
[0006] Traditional emotion recognition methods largely rely on facial recognition and vocal analysis. For example, systems utilize neural networks trained to interpret facial movements or voice modulations associated with particular emotions, such as joy, anger, or sadness. However, these systems often struggle to achieve high accuracy in real-world applications, where environmental conditions, individual variations, and the subjective nature of emotions pose challenges to reliable emotion recognition.
[0007] Another promising approach to understanding user emotions and mental states involves tracking eye movement patterns. Eye movement analysis is valuable in detecting user attention, engagement, and stress levels, as specific gaze patterns and pupillary responses often correlate with cognitive load and emotional arousal. Existing eye-tracking systems are employed in various applications, including cognitive assessment, usability studies, and clinical diagnostics. However, while eye-tracking contributes valuable insights, it is often used in isolation and lacks integration with complementary technologies that could provide a more comprehensive emotional and mental state profile.
[0008] A further emerging area involves interpreting the "mind voice," a term used to describe the user's internal or silent voice associated with subvocalized thoughts. Advances in sensors and AI algorithms have led to initial breakthroughs in capturing and decoding this subtle form of communication by tracking neuromuscular signals or subvocal speech markers. Mind voice interpretation can provide insights into unspoken intentions, stress, or cognitive processes, presenting an innovative approach to monitoring mental health.
[0009] Despite these advancements, current systems still lack an integrated approach that combines emotion recognition, eye-tracking, and mind voice interpretation into a single framework. An integrated solution could more accurately capture a user's emotional and cognitive states in real-time, improving human-computer interactions by enabling adaptive responses based on a multi-dimensional understanding of the user's condition. Such a solution could be transformative in fields such as mental health monitoring, HCI, and adaptive learning technologies, ultimately leading to more empathetic and responsive AI systems.
[0010] The system disclosed is a combination of multiple elements, incorporating both software and system components. It leverages AI technology trained with psychological knowledge to interpret the mental voices of a human through the analysis of eye movements. This system attempts to interpret these internal cognitive processes by analysing patterns in eye movements and correlating them with psychological models to infer emotional states and thoughts.
[0011] The core innovation in the system disclosed herein lies in the ability to decode thoughts and emotions through real-time analysis of internal cognitive patterns and eye movements, a field that is typically constrained to surface-level emotion recognition techniques, such as facial expressions or tone of voice.
[0012] Furthermore, by combining personalized psychological profiles with advanced AI, this invention has the potential to adapt to each user's unique patterns of thought and behaviour, offering highly personalized insights. This could revolutionize fields such as mental health care, where early detection of emotional or cognitive disorders could be achieved through continuous monitoring of a user's emotional state, or in communication technologies, where emotional recognition can enhance the accuracy and effectiveness of virtual assistants, customer service bots, or even in personalized therapy sessions.
[0013] Ultimately, the disclosure not only represents an improvement upon existing AI and HCI technologies but also introduces a new paradigm shift for interpreting human cognition and emotions. Its capacity to interpret unspoken thoughts and mental states pushes the boundaries of what current technologies can achieve, creating new opportunities in fields ranging from mental health, therapy, and customer interaction, to research on human behaviour.
[0014] A preferred embodiment of a system 100 for interpreting cognitive and emotional states of a user will now be described in detail with reference to Figures 1 to 3. The preferred embodiment does not limit the scope and ambit of the present disclosure.
[0034] Figure 1 depicts a block diagram of a system 100 for interpreting cognitive and emotional states of a user, in accordance with an embodiment of the present disclosure.
[0015] The system 100 of the present disclosure broadly comprises an eye-tracking device 102, a mind voice analysis module 104, an AI-processing module 106, a personalized profiling module 108, and a user interface 110.
[0016] The eye-tracking device 102 is configured to capture real-time eye movement data from the user. The mind voice analysis module 104 is configured to receive the eye movement data from the eye-tracking device 102 and is further configured to identify eye movement patterns from the received eye movement data using an artificial intelligence (AI) based image processing model. The AI-based image processing module 106, in communication with the eye-tracking device 102 and the mind voice analysis module 104, is trained on psychological models to correlate the eye movement patterns with the cognitive and emotional states of the user. The personalized profiling module 108 is configured to adapt the interpretation of inner voices based on the eye movement patterns specific to the user's personalized profile, and is further configured to update continuously to refine the analysis of the user's cognitive and emotional states over time. The user interface 110 is configured to deliver the adapted interpretation of the user's cognitive and emotional states based on the eye movement data and the personalized profile.
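For illustration only, the data flow just described can be sketched as a short processing pipeline. The sketch below is not part of the disclosure; the function and type names, and the use of plain callables to stand in for the modules 102 to 110, are assumptions made purely for readability.
```python
from typing import Callable, Dict, List, Tuple

# One raw observation from the eye-tracking device (102):
# (timestamp_s, gaze_x, gaze_y, pupil_diameter_mm)
EyeSample = Tuple[float, float, float, float]

def run_system_100(
    capture_eye_data: Callable[[], List[EyeSample]],      # eye-tracking device (102)
    identify_patterns: Callable[[List[EyeSample]], Dict],  # mind voice analysis module (104)
    correlate_states: Callable[[Dict], Dict],              # AI-processing module (106)
    adapt_to_profile: Callable[[str, Dict], Dict],         # personalized profiling module (108)
    deliver: Callable[[Dict], None],                       # user interface (110)
    user_id: str,
) -> None:
    samples = capture_eye_data()                 # real-time eye movement data
    patterns = identify_patterns(samples)        # gaze trajectory, fixations, blinks
    states = correlate_states(patterns)          # cognitive/emotional state estimates
    adapted = adapt_to_profile(user_id, states)  # interpretation adapted to this user
    deliver(adapted)                             # visual / auditory / haptic output
```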
[0017] In an exemplary embodiment, the eye-tracking device 102 is a high-precision eye-tracking device capable of detecting pupil dilation, saccadic movement, fixation duration, and gaze direction to enhance the interpretation of the user's cognitive and emotional states.
[0018] In an exemplary embodiment, the mind voice analysis module 104 is further configured to analyze multiple eye movement patterns, including gaze trajectory, blink frequency, and fixation points, to derive insights into a user's decision-making processes, emotional state, and memory recall patterns.
[0019] In an exemplary embodiment, the mind voice analysis module 104 includes a natural language processing (NLP) model to convert inferred cognitive signals into a textual or other interpretable output, allowing for readable insights into the user's mental state.
[0020] In an exemplary embodiment, the ai-processing module 106 is configured to implement machine learning algorithms to continuously optimize the interpretation of emotional and cognitive states based on accumulated user data, allowing the system to dynamically adapt to user-specific behavioral patterns.
[0021] In an exemplary embodiment, the personalized profiling module 108 is configured to adaptively refine the user's personalized profile by continuously analyzing ongoing eye movement data and responses over time to improve the system's interpretation accuracy.
[0022] In an exemplary embodiment, the ai-processing module 106 and the personalized profiling module 108 enable the system 100 to be applied in virtual reality environments to modify user interaction based on detected emotional states and cognitive engagement levels.
[0023] In an exemplary embodiment, the system 100 further comprises a psychological model database 112 containing various psychological models associated with specific eye movement patterns, wherein the AI-processing module 106 utilizes the psychological models to correlate eye movements with corresponding cognitive states.
[0024] In an exemplary embodiment, the user interface 110 is configured to communicate insights on the user's cognitive states and emotional states through visual, auditory, or haptic feedback.
[0025] The system 100 is a sophisticated apparatus designed to interpret the cognitive and emotional states of a user with an approach that combines multiple hardware and software components. This system 100 is composed of the eye-tracking device 102, the mind voice analysis module 104, the AI-based image processing module 106, the personalized profiling module 108, and the user interface 110. Each component works cohesively to capture, analyze, and interpret real-time eye movement data, producing a highly individualized and nuanced understanding of the user's mental state. The system 100 goes beyond traditional emotion detection mechanisms by incorporating advanced artificial intelligence (AI), machine learning, and psychological modeling to interpret both explicit and implicit cognitive cues.
[0026] The eye-tracking device 102 is a core component of system 100, configured to capture real-time eye movement data from the user, such as pupil dilation, saccadic movement, fixation duration, and gaze direction. Hardware components for this purpose may include high-precision infrared cameras and optical sensors that detect subtle variations in eye behavior. This raw data, reflecting a user's visual attention and engagement levels, forms the basis of further analysis. Captured eye data is then transmitted to the mind voice analysis module 104, which is equipped to receive this data and identify significant eye movement patterns using an AI-based image processing model. This mind voice analysis module 104 can track gaze trajectory, blink frequency, and fixation points, which provide insights into decision-making processes, emotional states, and memory recall patterns.
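As an illustration of the kind of pattern identification described above, the following sketch derives blink rate, saccade count, mean fixation duration, and mean pupil size from raw gaze samples. The sample format, the crude velocity-threshold segmentation, and all numeric thresholds are assumptions, not features recited in the disclosure.
```python
import math
from typing import List, NamedTuple

class GazeSample(NamedTuple):
    t: float          # timestamp in seconds
    x: float          # horizontal gaze position (normalised screen units)
    y: float          # vertical gaze position
    pupil_mm: float   # pupil diameter
    blink: bool       # True if the sample falls inside a blink

def extract_eye_features(samples: List[GazeSample],
                         velocity_threshold: float = 1.5) -> dict:
    """Derive coarse eye-movement features with a velocity-threshold segmentation."""
    if len(samples) < 2:
        return {}
    blinks = sum(1 for s in samples if s.blink)
    saccades = 0
    fixation_durations: List[float] = []
    fixation_start = samples[0].t
    for prev, cur in zip(samples, samples[1:]):
        dt = max(cur.t - prev.t, 1e-6)
        velocity = math.hypot(cur.x - prev.x, cur.y - prev.y) / dt
        if velocity > velocity_threshold:        # rapid movement: close the current fixation
            saccades += 1
            fixation_durations.append(prev.t - fixation_start)
            fixation_start = cur.t
    duration = max(samples[-1].t - samples[0].t, 1e-6)
    return {
        "blink_rate_hz": blinks / duration,
        "saccade_count": saccades,
        "mean_fixation_s": (sum(fixation_durations) / len(fixation_durations)
                            if fixation_durations else duration),
        "mean_pupil_mm": sum(s.pupil_mm for s in samples) / len(samples),
    }
```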
[0027] To further refine these insights, the AI-based image processing module 106, in communication with both the eye-tracking device 102 and the mind voice analysis module 104, plays a critical role in correlating eye movement patterns with cognitive and emotional states. The AI-based image processing module 106 leverages a psychological model database 112, containing trained models of eye movement patterns correlated with specific cognitive states, enabling it to perform a comprehensive and psychologically-grounded analysis. Using machine learning algorithms, the AI-based image processing module 106 continually optimizes its interpretive accuracy based on accumulated user data, creating a responsive system that dynamically adapts to user-specific behaviors. Over time, this AI processing ensures that the system's interpretations remain relevant and responsive to subtle changes in user behavior.
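The correlation step itself is attributed in the disclosure to models trained on psychological data. As a hedged stand-in, the placeholder below maps the extracted features to coarse state estimates with rule-of-thumb thresholds, purely so the surrounding data flow can be followed; it is not the trained model referred to above.
```python
def correlate_states(features: dict) -> dict:
    """Map eye-movement features to coarse cognitive/emotional estimates.

    Thresholds are illustrative placeholders, not values from the disclosure.
    """
    load = 0.0
    if features.get("mean_pupil_mm", 0.0) > 4.5:    # dilated pupils: higher arousal/load
        load += 0.4
    if features.get("mean_fixation_s", 1.0) < 0.2:  # very short fixations: scanning under load
        load += 0.3
    if features.get("blink_rate_hz", 0.0) > 0.5:    # elevated blink rate: possible stress
        load += 0.3
    return {
        "cognitive_load": min(load, 1.0),
        "emotional_state": "stressed" if load >= 0.6 else "calm",
    }
```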
[0028] Another essential element is the personalized profiling module 108, which adapts the interpretation based on each user's unique patterns of eye movements and inferred inner voices. The personalized profiling module 108 continuously updates the user's personalized profile by analyzing ongoing eye movement data and feedback from prior responses, making the system more accurate over time. This adaptive profiling helps in capturing unique behavioral nuances and aligning the system's interpretations with the individual user's cognitive and emotional signatures. For example, if a user's eye patterns frequently correlate with high-stress indicators in certain situations, the system 100 will learn to interpret these patterns with higher accuracy and context sensitivity.
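One plausible, but assumed, realisation of this continuous refinement is a per-user running baseline, for example an exponential moving average of each feature, against which new readings are normalised, as sketched below. The class, its update rule, and the pupil-based adjustment are illustrative only.
```python
class PersonalizedProfile:
    """Per-user running baselines against which new readings are interpreted."""

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha            # how quickly the baseline tracks new data
        self.baseline: dict = {}      # per-feature exponential moving averages

    def update(self, features: dict) -> None:
        """Fold a new observation into the user's baseline (repeated over time)."""
        for name, value in features.items():
            old = self.baseline.get(name, value)
            self.baseline[name] = (1 - self.alpha) * old + self.alpha * value

    def personalize(self, states: dict, features: dict) -> dict:
        """Re-weight a generic state estimate relative to this user's own norm."""
        adjusted = dict(states)
        pupil_norm = self.baseline.get("mean_pupil_mm")
        if pupil_norm:
            # A pupil reading far above this user's usual level matters more
            # than the population-level threshold used by the generic model.
            ratio = features.get("mean_pupil_mm", pupil_norm) / pupil_norm
            adjusted["cognitive_load"] = min(1.0, adjusted.get("cognitive_load", 0.0) * ratio)
        return adjusted
```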
[0029] The results of these complex analyses are then delivered to the user through the user interface 110. This user interface 110 provides insights into the user's cognitive and emotional states, potentially through visual, auditory, or haptic feedback. Such feedback mechanisms could involve a graphical display or an audio notification system that conveys the user's emotional state or alerts related to high cognitive workload. In some cases, haptic devices, like a vibrating smartwatch, might be employed to communicate subtle signals to the user without visual or auditory disruptions.
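A minimal sketch of this delivery step follows; the channel names and handlers are hypothetical stubs standing in for real display, audio, or wearable devices and are not specified by the disclosure.
```python
from typing import Callable, Dict

def deliver_feedback(states: dict,
                     channels: Dict[str, Callable[[str], None]]) -> None:
    """Route the adapted interpretation to whichever feedback channels exist."""
    message = (f"state={states.get('emotional_state', 'unknown')}, "
               f"load={states.get('cognitive_load', 0.0):.2f}")
    for name in ("visual", "auditory", "haptic"):
        handler = channels.get(name)
        if handler is not None:
            handler(message)

# Example: show the message on screen and buzz a (hypothetical) wearable.
deliver_feedback(
    {"emotional_state": "stressed", "cognitive_load": 0.7},
    {"visual": print, "haptic": lambda m: print(f"[haptic] {m}")},
)
```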
[0030] The disclosed system addresses multiple limitations in existing emotion recognition and mental health monitoring technologies by using a combination of AI, eye movement tracking, and mind voice interpretation to enhance accuracy. It overcomes the limitations of superficial emotion detection, which typically relies on external cues such as facial expressions or voice tone, often yielding inaccurate results as these cues may not truly reflect a person's internal state. This invention goes beyond external indicators by using eye movement data and inner cognitive analysis, allowing for a deeper and more reliable understanding of a user's true emotional state and unspoken thoughts. Additionally, the invention provides access to internal cognitive processes, enabling the system to interpret non-verbal and subconscious cues that were previously inaccessible to human-computer interaction systems. As a result, interactions between the system and the user become more effective, as responses can be tailored based on unspoken cognitive and emotional states.
[0031] This system also moves beyond standardized emotion recognition models that often generalize emotions based on universal models, ignoring individual variations. By creating personalized profiles for each user, based on unique eye movement and cognitive patterns, the system tailors its analysis to the user, overcoming the limitations associated with generalized models. This capability allows the system to more accurately identify and interpret emotional and cognitive states, particularly in users whose expressions may deviate from standardized models due to cultural or psychological differences. Furthermore, the invention enhances early mental health monitoring by analyzing subtle changes in eye movement patterns and mind voice over time. By detecting early signs of mental health conditions like anxiety, depression, or cognitive decline, the system enables timely intervention and personalized treatment, helping to address mental health issues before they fully manifest.
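The persistent-distress notification described here (and recited in claim 9) could, under assumed values for the monitoring window and threshold, be sketched as follows; the notification target is a placeholder for whatever channel reaches a mental health professional.
```python
from collections import deque

class DistressMonitor:
    """Raise an alert only when distress-like readings persist over a window."""

    def __init__(self, window: int = 200, threshold: float = 0.8,
                 notify=lambda msg: print(f"[alert] {msg}")):
        self.readings = deque(maxlen=window)   # most recent stress flags
        self.threshold = threshold             # fraction of window that must be flagged
        self.notify = notify                   # e.g. a message to a clinician's dashboard

    def observe(self, states: dict) -> None:
        self.readings.append(states.get("emotional_state") == "stressed")
        window_full = len(self.readings) == self.readings.maxlen
        if window_full and sum(self.readings) / len(self.readings) >= self.threshold:
            self.notify("sustained indicators of distress detected over the monitoring window")
```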
[0032] The system also bridges the emotional gap in human-computer interaction by allowing machines to understand the emotional and cognitive context behind user inputs, creating more empathetic and effective communication. This function is particularly useful in fields requiring high emotional intelligence, such as customer service, therapy, and virtual assistance, where understanding user emotions is essential for meaningful engagement. The invention's use of eye movement tracking and mind voice analysis enhances this interaction by providing deeper insights into the emotional and cognitive state of the user, thereby supporting a more human-like and emotionally responsive interaction.
[0033] Through its integration of eye tracking, AI-based psychological analysis, and personalized profiling, the system provides a deeply insightful approach to understanding human cognition and emotions. It addresses gaps in traditional emotion recognition, inconsistent mental health diagnosis, and emotionally detached machine interactions. By offering a more responsive and individualized interpretation of cognitive and emotional states, the system significantly advances the capability of machines to understand and engage with the complexities of human emotion and thought. This unique combination of components and processing steps results in a system that is more aligned with the intricacies of human psychology, paving the way for applications across mental health, enhanced HCI, and personalized therapy or wellness solutions.
[0034] Figure 2 illustrates a flowchart of an exemplary embodiment of the system for interpreting cognitive and emotional states of a user. Figure 2 illustrates the operational flow of system 100, which begins with the eye-tracking device 102 capturing real-time eye movement data and proceeds through various analytical stages. First, eye tracking captures movements such as fixation, saccades, and blinks, which are core indicators of a user's visual attention and cognitive processing. These data points provide crucial visual cues that inform subsequent analysis. The mind voice analysis stage follows, where the mind voice analysis module 104 identifies cognitive patterns by interpreting subconscious mental cues reflected in eye movements. The term "mind voice" refers to inferred mental states, such as underlying thoughts or feelings, derived from eye behavior.
[0035] The flow then diverges into AI-psychological analysis, wherein AI-based models and psychological principles are combined to interpret complex cognitive and emotional states. Here, the AI-based image processing module 106 applies its trained psychological models to map specific eye movement patterns to cognitive and emotional conditions. By using a blend of psychological insights and machine learning algorithms, this step aims to deliver a comprehensive and nuanced interpretation that reflects the user's internal state with higher fidelity than conventional methods.
[0036] Results from this process are then presented in the final step, where detected emotional or cognitive states are summarized. This output may range from simple emotion labels (e.g., "calm" or "stressed") to more complex metrics, such as cognitive workload or engagement levels, based on the analyses conducted by the prior stages.
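By way of example only, the sketch below turns such a numeric estimate into a one-line textual summary. A real NLP model, as mentioned for the mind voice analysis module (104), is assumed to sit behind this step; a simple template stands in here.
```python
def states_to_text(states: dict) -> str:
    """Turn a numeric state estimate into a short readable summary."""
    load = states.get("cognitive_load", 0.0)
    mood = states.get("emotional_state", "neutral")
    band = "high" if load > 0.66 else "moderate" if load > 0.33 else "low"
    return f"The user currently appears {mood}, with {band} cognitive load ({load:.0%})."

print(states_to_text({"cognitive_load": 0.7, "emotional_state": "stressed"}))
# The user currently appears stressed, with high cognitive load (70%).
```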
[0037] As shown in Figure 2, the flowchart illustrates a system for detecting emotional or cognitive states through a series of steps that combine eye tracking, AI-based analysis, and psychological interpretation.
[0038] Eye Tracking :
• The process begins with an image of an eye at the top, representing eye tracking.
• This step is labeled "Eye Tracking" and is described as capturing real-time eye movements. The system presumably collects data on eye movement patterns, like fixations, saccades, or blinks, to gather relevant visual cues.
[0039] Mind Voice Analysis :
• Below eye tracking is an image of a brain with digital elements, indicating cognitive analysis.
• This stage is labeled "Mind Voice Analysis" and involves AI identifying patterns based on the collected eye movement data. The term "mind voice" suggests that AI interprets subconscious or underlying mental patterns inferred from eye behavior.
[0040] AI-Psychological Analysis :
• The flow splits here into two paths: one leading to an image labeled "AI" and the other to an image labeled "Psychology."
• This step is labeled "AI-Psychological Analysis," where the AI integrates both technological and psychological models to interpret cognitive and emotional states.
• The system appears to use both AI algorithms and psychological principles to achieve a holistic analysis of the individual's mental state.
[0041] Results :
• The final step displays an icon labeled as "Results."
• This stage presents the detected emotional or cognitive states. This output could be a summary of emotions, cognitive workload, or other mental states as determined by the previous analyses.
[0042] Figure 3 illustrates a method 300 for interpreting cognitive and emotional states of a user. The method 300 includes the following steps:
[0043] In method step 302, the method 300 includes capturing, by an eye-tracking device 102, real-time eye movement data from the user;
[0044] In method step 304, the method 300 includes receiving, by a mind voice analysis module 104, the eye movement data from the eye-tracking device 102 for identifying eye movement patterns from the received eye movement data by means of an artificial intelligence (AI) based image processing model;
[0045] In method step 306, the method 300 includes correlating, by an AI-processing module 106, the identified eye movement patterns with the user's cognitive and emotional states by processing the data in trained psychological models;
[0046] In method step 308, the method 300 includes adapting, by a personalized profiling module 108, the interpretation of inner voices based on the eye movement patterns specific to the user's personalized profile;
[0047] In method step 310, the method 300 includes continuously updating, by the personalized profiling module 108, the user's personalized profile to refine the analysis of the user's cognitive and emotional states over time; and
[0048] In method step 312, the method 300 includes delivering, by a user interface 110, the adapted interpretation of the user's cognitive and emotional states, based on the eye movement data and the personalized profile.
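For readers who prefer code to prose, the six steps can be read as one straight-line pass. The function below is a sketch under the assumption that the modules are available as simple callables or objects (for example, one profiler instance per user, shaped like the hypothetical PersonalizedProfile sketched earlier); none of its names appear in the disclosure.
```python
def method_300_once(eye_tracker, mind_voice, ai_processor, profiler, ui) -> None:
    data = eye_tracker()                               # step 302: capture real-time eye movement data
    patterns = mind_voice(data)                        # step 304: identify eye movement patterns
    states = ai_processor(patterns)                    # step 306: correlate with cognitive/emotional states
    adapted = profiler.personalize(states, patterns)   # step 308: adapt to the personalized profile
    profiler.update(patterns)                          # step 310: continuously refine the profile
    ui(adapted)                                        # step 312: deliver the adapted interpretation
```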
[0049] The disclosed system herein addresses several key problems in existing technologies related to emotion recognition, human-computer interaction, and mental health monitoring. By combining AI, eye-movement tracking, and mind voice interpretation, it solves the following challenges:
[0050] Superficial Emotion Detection:
Current technologies primarily rely on surface-level cues like facial expressions, voice tones, or gestures to detect emotions. These methods are often inaccurate and limited because they only capture explicit, observable behaviours, which may not reflect a person's true internal state. For example, someone may mask their emotions, or their external expressions may not align with their feelings. The proposed invention solves this problem by going beyond external cues to analyse eye movements and mind voice (internal thoughts), providing a more accurate understanding of a person's true emotional state and unspoken thoughts.
[0051] Lack of Access to Inner Cognitive Processes:
One of the major challenges in current human-computer interaction systems is the inability to access or interpret a user's internal cognitive processes. Technologies today typically respond to explicit commands (such as spoken words, typed text, or gestures), but they cannot read or interpret unexpressed thoughts or subconscious emotions. Our invention addresses this problem by interpreting subtle eye movement patterns and mind voice to gain access to non-verbal, subconscious cues. This allows for a deeper interaction between humans and machines, where the system can respond to unspoken cognitive and emotional states, improving the effectiveness of interactions and responses.
[0052] Standardized Emotion Recognition Models:
Existing emotion recognition systems often use standardized models to interpret emotions, as if all individuals express emotions in similar ways. However, emotions and their expression vary widely across different individuals due to cultural, psychological, and personal differences. This leads to inaccuracies when trying to generalize emotions from a broad population. Our invention solves this by creating personalized profiles based on a user's unique patterns of eye movements and mind voice, enabling the system to provide tailored insights. This helps in accurately identifying emotional and cognitive states, overcoming the limitations of one-size-fits-all models.
[0053] Inability to Detect Early Signs of Mental Health Issues:
Current mental health technologies are often limited to diagnosing conditions based on self-reported symptoms or observable behaviours, which are often detected at a later stage when the mental health issue has already manifested. This delay can result in missed opportunities for early intervention and personalized treatment. Our invention addresses this problem by analysing subtle changes in eye movement patterns and mind voice to detect early signs of mental health conditions, such as anxiety, depression, or cognitive decline. This real-time monitoring enables proactive intervention, improving the potential for early diagnosis and treatment.
[0054] Lack of Emotional Understanding in Human-Computer Interaction (HCI):
In current HCI technologies, machines respond based on explicit input (e.g., commands or gestures), but they lack the ability to understand the emotional state or intentions behind these inputs. As a result, interactions are often mechanical and impersonal. This can be particularly problematic in fields like customer service, therapy, or virtual assistant applications, where understanding emotions is key to effective communication. Our invention solves this problem by allowing the system to interpret the emotional context behind user interactions through the analysis of eye movements and inner thoughts, making the interaction more empathetic and human-like.
[0055] The system disclosed herein solves critical limitations in emotion detection, mental health monitoring, and human-computer interaction by offering a more accurate, personalized, and deeply insightful system for understanding human thoughts and emotions. It addresses the gaps in superficial emotion recognition, inconsistent mental health diagnosis, and the lack of emotional intelligence in machines, providing solutions that are more responsive to the complexity of human cognition and emotions.
[0056] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
[0057] The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but, are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS AND ECONOMIC SIGNIFICANCE
[0058] The present disclosure described herein above introduces several novel and distinctive features that make it stand out from current technologies in artificial intelligence, human psychology, and human-computer interaction :
[0059] Integration of Human Psychology and Artificial Intelligence:
The fusion of psychological knowledge with artificial intelligence (AI) is one of the core innovations of the system disclosed. Typically, AI systems used in emotion recognition or human-computer interaction focus on analysing external behaviours, such as facial expressions, voice tone, or gestures. The proposed approach is different: by training AI with psychological models and data, the system is enabled to understand internal emotional states and cognitive processes that go beyond surface-level interactions. This integration allows the AI to interpret unspoken thoughts and feelings, using human psychological models. The result is an AI system capable of more holistic and deeper insights into the human mind, bridging the gap between psychological understanding and technological capability in a way that has not been explored before.
[0060] Interpretation of Inner Voices through Eye Movements and Mind Voice:
A particularly novel aspect of the system disclosed herein is its ability to interpret a human's mental voices through eye movements and mind voice analysis. While eye-tracking technology has been used in fields like behavioural research, it is generally applied to understand visual attention or cognitive load. The proposed system goes beyond that by analysing eye movement patterns in real time to decode a person's emotional state and unspoken thoughts. The mind voice, i.e., the internal dialogue or silent thoughts that individuals experience, plays a key role here. By correlating subtle eye movements with patterns associated with different thought processes and emotional states, our AI system can interpret what a person might be thinking or feeling without them having to express it verbally. This represents a significant departure from traditional emotion-recognition systems, which rely on explicit behaviours like facial expressions or speech.
[0061] Complex Data Processing for Cognitive and Emotional Insights:
One of the major challenges and innovations of the proposed system is its ability to process the complex data involved in analyzing eye movements and mind voice simultaneously. Eye movements are intricately tied to cognitive processes such as decision-making, memory recall, and emotional reactions, and translating these into actionable insights requires advanced AI algorithms capable of handling high-dimensional data. Our system uses sophisticated data-processing techniques to capture subtle nuances in eye movement patterns, linking them to specific cognitive and emotional states. This type of processing requires not only cutting-edge machine learning models but also the ability to adapt to individual differences, making it highly customizable and dynamic. Unlike other systems that analyze fixed, surface-level behaviours, the system proposed herein goes deeper, analyzing how these internal thought patterns are reflected in eye movements to deliver rich insights into human emotions and cognition.
[0062] Personalized Interpretation of Inner Voices:
A key innovative feature of the proposed system is its ability to provide personalized insights into each user's unique internal voices. Unlike existing technologies that use standardized models to recognize emotions (which may not be accurate across all users), our system adapts to each user's distinct patterns of eye movements and mind voice. By learning from a person's behaviour over time, the AI system builds a personalized profile, which allows it to offer tailored insights into their emotional and cognitive states. This customization could be applied in fields like mental health monitoring, where continuous, real-time analysis of a person's internal thoughts could help identify changes in emotional well-being, potentially flagging early signs of mental health conditions. It also opens up possibilities in therapy and personalized interaction systems, where understanding an individual's unique thought patterns could lead to more effective interventions and emotional understanding. This highly individualized approach makes the proposed system significantly more powerful than current technologies that rely on generic models.
[0063] The foregoing disclosure has been described with reference to the accompanying embodiments which do not limit the scope and ambit of the disclosure. The description provided herein is purely by way of example and illustration.
[0064] The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0065] The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
[0066] Throughout this specification, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, or group of elements, but not the exclusion of any other element, or group of elements.
[0067] Any discussion of documents, acts, materials, devices, articles, or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
[0068] The numerical values mentioned for the various physical parameters, dimensions, or quantities are only approximations and it is envisaged that the values higher/lower than the numerical values assigned to the parameters, dimensions or quantities fall within the scope of the disclosure, unless there is a statement in the specification specific to the contrary.
[0069] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.

Claims:

WE CLAIM:
1. A system (100) for interpreting cognitive and emotional states of a user, comprising:
an eye-tracking device (102) configured to capture real-time eye movement data from the user;
a mind voice analysis module (104) configured to receive the eye movement data from the eye-tracking device (102) and further configured to identify the eye movement patterns from the received eye movement data using an artificial intelligence (AI) based image processing model;
an AI-based image processing module (106) in communication with the eye-tracking device (102) and the mind voice analysis module (104), the AI-based image processing module (106) trained on psychological models to correlate eye movement patterns with the cognitive and emotional states of the user;
a personalized profiling module (108) configured to adapt the interpretation of voices based on the eye movement patterns specific to the user's personalized profile, and further configured to update continuously to refine the analysis of the user's cognitive and emotional states over time; and
a user interface (110) configured to deliver the adapted interpretation of the user's cognitive and emotional states based on the eye movement data and the personalized profile.
2. The system (100) as claimed in claim 1, wherein the eye-tracking device (102) is a high-precision eye-tracking device capable of detecting pupil dilation, saccadic movement, fixation duration, and gaze direction to enhance interpretation of the user's cognitive and emotional states.
3. The system (100) as claimed in claim 1, wherein the mind voice analysis module (104) is further configured to analyze multiple eye movement patterns, including gaze trajectory, blink frequency, and fixation points, to derive insights into a user's decision-making processes, emotional state, and memory recall patterns.
4. The system (100) as claimed in claim 1, wherein the AI-based image processing module (106) is configured to implement machine learning algorithms to continuously optimize the interpretation of emotional and cognitive states based on accumulated user data, allowing the system to dynamically adapt to user-specific behavioral patterns.
5. The system (100) as claimed in claim 1, wherein the personalized profiling module (108) is further configured to adaptively refine the user's personalized profile by continuously analyzing ongoing eye movement data and responses over time to improve the system's interpretation accuracy.
6. The system (100) as claimed in claim 1, further comprises a psychological model database (112) containing various psychological models associated with specific eye movement patterns, wherein the ai-processing module (106) utilizes the psychological models to correlate eye movements with corresponding cognitive states.
7. The system (100) as claimed in claim 1, wherein the user interface (110) is configured to communicate insights on the user's cognitive states and emotional states through visual, auditory, or haptic feedback.
8. The system (100) as claimed in claim 1, wherein the mind voice analysis module (104) includes a natural language processing (NLP) model to convert inferred cognitive signals into a textual or other interpretable output, allowing for readable insights into the user's mental state.
9. The system (100) as claimed in claim 1, wherein the personalized profiling module (108) is further configured to notify a mental health professional if certain emotional states indicative of psychological distress are consistently detected over a predefined period.
10. The system (100) as claimed in claim 1, wherein the AI-based image processing module (106) and the personalized profiling module (108) enable the system to be applied in virtual reality environments to modify user interaction based on detected emotional states and cognitive engagement levels.
11. A method (300) for interpreting cognitive and emotional states of a user, comprising :
capturing (302), by an eye-tracking device (102), real-time eye movement data from the user;
receiving (304), by a mind voice analysis module (104), the eye movement data from the eye-tracking device (102) for identifying eye movement patterns from the received eye movement data by means of an artificial intelligence (AI) based image processing model;
correlating (306), by an AI-based image processing module (106), the identified eye movement patterns with the user's cognitive and emotional states by processing the data in trained psychological models;
adapting (308), by a personalized profiling module (108), the interpretation of voices based on the eye movement patterns specific to the user's personalized profile;
continuously updating (310), by the personalized profiling module (108), the user's personalized profile to refine the analysis of the user's cognitive and emotional states over time; and
delivering (312), by a user interface (110), the adapted interpretation of the user's cognitive and emotional states, based on the eye movement data and the personalized profile.

Dated this 09th Day of November, 2024

_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA - 25
OF R. K. DEWAN & CO.
AUTHORIZED AGENT OF APPLICANT

TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, AT CHENNAI

Documents

Name | Date
202441086466-FORM-26 [11-11-2024(online)].pdf | 11/11/2024
202441086466-COMPLETE SPECIFICATION [09-11-2024(online)].pdf | 09/11/2024
202441086466-DECLARATION OF INVENTORSHIP (FORM 5) [09-11-2024(online)].pdf | 09/11/2024
202441086466-DRAWINGS [09-11-2024(online)].pdf | 09/11/2024
202441086466-EDUCATIONAL INSTITUTION(S) [09-11-2024(online)].pdf | 09/11/2024
202441086466-EVIDENCE FOR REGISTRATION UNDER SSI [09-11-2024(online)].pdf | 09/11/2024
202441086466-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [09-11-2024(online)].pdf | 09/11/2024
202441086466-FORM 1 [09-11-2024(online)].pdf | 09/11/2024
202441086466-FORM 18 [09-11-2024(online)].pdf | 09/11/2024
202441086466-FORM FOR SMALL ENTITY(FORM-28) [09-11-2024(online)].pdf | 09/11/2024
202441086466-FORM-9 [09-11-2024(online)].pdf | 09/11/2024
202441086466-PROOF OF RIGHT [09-11-2024(online)].pdf | 09/11/2024
202441086466-REQUEST FOR EARLY PUBLICATION(FORM-9) [09-11-2024(online)].pdf | 09/11/2024
202441086466-REQUEST FOR EXAMINATION (FORM-18) [09-11-2024(online)].pdf | 09/11/2024
