A PERSONALIZED AFFECTIVE TOUCH SYSTEM FOR EMOTION DETECTION AND EMOTION REGULATION AND ITS METHOD THEREOF
ORDINARY APPLICATION
Published
Filed on 21 November 2024
Abstract
The present invention relates to a system and method for real-time emotion detection and regulation based on identified emotion. The system includes biosensors to read physiological signals associated with a user's emotional state, and uses machine learning to identify/classify the user's emotions. A central unit with actuators provides tactile feedback with varying pressure and vibrations to regulate emotions. The system detects speech conversations, identifies conversation objects, determines the user's emotion state during conversations, and generates emotion records. A graphical user interface enables remote emotion monitoring, while an alert generation unit notifies caretakers of abnormal emotional conditions. The invention provides real-time emotion identification, personalized emotion regulation through tactile feedback, speech conversation emotion tracking, and remote monitoring/alerts - offering a comprehensive, adaptive solution for emotional awareness and support.
Patent Information
Field | Value |
---|---|
Application ID | 202441090589 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 21/11/2024 |
Publication Number | 48/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Gayathri Soman | Department Of Computer Applications CUSAT 28WG+C9W, University Road, South Kalamassery, Kalamassery, Ernakulam, Kochi, Kerala 682022 | India | India |
Dr. M.V. Judy | Department Of Computer Applications CUSAT 28WG+C9W, University Road, South Kalamassery, Kalamassery, Ernakulam, Kochi, Kerala 682022 | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Gayathri Soman | Department Of Computer Applications CUSAT 28WG+C9W, University Road, South Kalamassery, Kalamassery, Ernakulam, Kochi, Kerala 682022 | India | India |
Dr. M.V. Judy | Department Of Computer Applications CUSAT 28WG+C9W, University Road, South Kalamassery, Kalamassery, Ernakulam, Kochi, Kerala 682022 | India | India |
Specification
Description:FIELD OF THE INVENTION
The present disclosure relates to a personalized affective touch system and method for emotion detection and emotion regulation. More particularly, the system and method are configured to facilitate real-time emotion monitoring and emotion regulation based on the identified emotion.
BACKGROUND OF THE INVENTION
Affective computing encompasses the understanding, analysis, and recognition of human emotions, sentiments, and feelings. It has significantly contributed to computers recognizing, communicating, and intelligently responding to human emotions. Emotion is a multifaceted phenomenon involving action, motivation, expression, information processing, feelings, and social interaction. It influences various cognitive processes such as attention, memory, and decision-making, and can be regulated through different mechanisms.
Emotion regulation refers to the conscious or unconscious control of the intensity and duration of positive or negative emotional states to achieve specific goals. Inadequate emotion regulation can lead to exaggerated, inappropriate, or inadequate emotional responses, as observed in various psychological disorders. There is mounting evidence linking deficiencies in adaptively coping with difficult emotions to depression, borderline personality disorder, substance abuse, eating disorders, and other conditions, highlighting the importance of emotion regulation not only in psychopathology but also in overall well-being.
Touch has been identified as an effective method for eliciting and modulating human emotions. Interpersonal touch contributes to physical, emotional, social, and spiritual well-being. Researchers have offered evidence of the favorable impacts of human touch on health, particularly for lowering anxiety levels, through studies on married couples. Recent advancements in affective computing have shown that rhythmic sensory stimuli such as sound, vibration, and light can help in regulating emotions by altering physiological processes and interoceptive awareness.
Affective touch involves tactile processing with an emotional component, going beyond mere sensory discrimination to evoke positive emotions. It has the potential to deliver non-invasive peripheral nerve stimulation, offering therapeutic benefits for conditions like loneliness, which is a significant public mental health concern that contributes to the emergence of depression and other mental health issues. Affective touch, particularly slow touch optimized for the C-tactile nerve fibers, has been found to alleviate feelings of social exclusion.
Therapeutic non-invasive peripheral nerve stimulation is being researched for various conditions including gait abnormalities, pain, anxiety, and depression. Affective haptics is a research field that studies and designs devices and systems that can elicit, improve, or influence a human's emotional state through the sensation of touch. The three complementary communication channels used in affective haptics are tactile, thermal, and kinesthetic.
While haptic devices are used for providing touch like stimuli in virtual reality, games, and psychotherapy, there is still a gap in devices capable of real-time monitoring of human affect and providing affective touch when needed based on the identified affective state. Therefore, there is a need for a system capable of continuously monitoring human emotions and delivering affective touch interventions to alleviate negative affect as necessary.
SUMMARY OF THE INVENTION
The present disclosure relates to a system and method for real-time emotion monitoring and emotion regulation based on identified emotion. The system consists of a sensing node with biosensors to read physiological signals associated with a user's emotional state and transfer the signals to the cloud unit/platform. The data is preprocessed, and a machine learning approach is used to identify the user's emotional state. The system includes a central unit with actuators that can provide affective touch as feedback to regulate the user's emotion. The system also includes a graphical user interface for remote emotion monitoring and an alert generation unit to notify caretakers of abnormal emotional conditions. The method involves reading physiological signals, preprocessing data, extracting features, identifying emotional states, regulating emotions through affective touch, detecting speech conversations, generating emotion records, assisting in remote monitoring, and generating alerts.
The present disclosure seeks to provide a system for real time emotion monitoring and emotion regulation based on the identified emotion. The system comprises: a sensing node consisting of an array of biosensors configured to read physiological signals from a user's body associated with the user's emotional state; a data acquisition unit connected to the sensing node to receive and store physiological signals in a cloud unit; a data pre-processing unit coupled with the cloud unit to remove noise that arises in the signal read using biosensors; a real-time emotion monitoring unit connected to the data pre-processing unit to identify the emotional state of the user from the pre-processed signals using a machine learning approach; a central unit connected to the real-time emotion monitoring unit to regulate emotion based on the determined emotion, guided by real-time emotional data, wherein the central unit is equipped with a plurality of actuators for providing a touch like stimulation with vibration and pressure sensations, fostering emotion regulation and well-being; a graphical user interface connected to the real time emotion monitoring unit to assist the caretaker to monitor the emotional state remotely; and an alert generation unit coupled with the graphical user interface to alert the caretakers whenever an abnormal emotional condition arises.
In an embodiment, the system further comprises: a receiving unit configured to receive sound streams; a detection unit connected to the receiving unit to detect a speech conversation between a user and at least one conversation object from the sound streams; an identification unit connected to the detection unit to identify the conversation object at least according to speech of the conversation object in the speech conversation; a control unit connected to the identification unit to determine emotion state of at least one speech segment of the user in the speech conversation; and a generation unit connected to the control unit to generate an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of content of the speech conversation, and the emotion state of the at least one speech segment of the user.
In an embodiment, emotion state of each speech segment in at least one speech segment of the user includes emotion type of the speech segment and/or level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams.
In an embodiment, the emotion state change of the user is determined at least according to the current emotion state of the current speech segment of the user and at least one previous emotion state of at least one previous speech segment of the user, and an emotion attention point is determined by a prediction model at least according to the emotion state change of the user, wherein the prediction model determines the emotion attention point further according to at least one of the current emotion state, at least a portion of content of the speech conversation, duration of the current emotion state, topic in the speech conversation, identity of the conversation object, and history emotion records of the user.
In an embodiment, the emotional states are identified and for negative emotions the touch sensations are dynamically administered using actuators based on the intensity of the identified emotion, wherein the physiological signals used to identify the emotional state comprise data obtained from biosensors.
In an embodiment, nerve activation is targeted particularly in C-Tactile (CT) afferents, which helps in emotion regulation.
The proposed disclosure uses the concept of haptic feedback. Haptic feedback ranges from simple vibrations to more intricate sensations such as pressure, texture, and even temperature changes. Haptic technologies are becoming more advanced and crucial to building more immersive and engaging digital experiences as technology develops. If the system identifies a negative emotion, a touch with vibration and pressure sensations is given with the help of actuators attached to the affective touch device. These vibration and pressure sensations are haptic feedback whose intensity is based on the intensity of the identified emotion. The intensity of the vibration and pressure sensations is comparable to someone lightly tapping or touching the forearm, thereby providing a human-like tapping touch, with the stroking speed measured in cm/s.
In an embodiment, the features are extracted using a feature extraction technique to identify key indicators of emotion wherein the machine learning techniques employ multimodal fusion techniques to combine information from multiple sources, which includes various physiological signals to enhance the accuracy of emotion classification.
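By way of a non-limiting illustration only, the sketch below (in Python) shows one possible realisation of such feature-level fusion followed by a conventional classifier; the feature dimensions, the synthetic labels, and the choice of a random-forest model are assumptions made for exposition and do not form part of the disclosed system.

```python
# Illustrative sketch only: early (feature-level) fusion of per-modality
# feature vectors followed by a generic classifier. Feature sizes, labels,
# and the random-forest choice are assumptions for exposition.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_features(hrv_feats, eda_feats, resp_feats):
    """Concatenate per-modality feature vectors into one fused vector."""
    return np.concatenate([hrv_feats, eda_feats, resp_feats])

# Synthetic stand-in data: 200 windows, each with 4 HRV, 3 EDA and 2
# respiration features, labelled 0 = positive/neutral, 1 = negative.
rng = np.random.default_rng(0)
X = np.array([fuse_features(rng.normal(size=4), rng.normal(size=3),
                            rng.normal(size=2)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def classify_emotion(hrv_feats, eda_feats, resp_feats) -> int:
    """Return the predicted emotion class for one fused window."""
    fused = fuse_features(hrv_feats, eda_feats, resp_feats).reshape(1, -1)
    return int(clf.predict(fused)[0])
```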
The present disclosure also seeks to provide a method for real time emotion monitoring and emotion regulation based on identified emotion. The method comprises: reading physiological signals from a user's body associated with the user's emotional state using an array of biosensors configured within a sensing node, receiving and storing the physiological signals in a cloud through a data acquisition unit connected to the sensing node; removing noise from the signals acquired using the biosensors through a data pre-processing unit coupled with the cloud unit; extracting features using a feature extraction technique to identify key indicators of emotion, wherein multimodal fusion techniques are employed to combine information from multiple sources, including various physiological signals, to enhance the accuracy of emotion classification; identifying the emotional state of the user from the pre-processed signals in real-time using a machine learning approach through a real-time emotion monitoring unit connected to the data pre-processing unit; regulating emotion based on the determined emotional state, guided by real-time emotional data, using a central unit connected with the real-time emotion monitoring unit, wherein the central unit is equipped with a plurality of actuators for providing a touch like stimulation with vibration and pressure sensations, fostering emotion regulation and well-being, wherein touch sensations are dynamically administered using actuators for negative emotions, wherein the physiological signals used to identify the emotional state comprise data obtained from biosensors;
receiving sound streams and detecting a speech conversation between a user and at least one conversation object from the sound streams, thereby identifying the identity of the conversation object at least according to the speech of the conversation object in the speech conversation; determining the emotion state of at least one speech segment of the user in the speech conversation and generating an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of the content of the speech conversation, and the emotion state of the at least one speech segment of the user, wherein the emotion state of each speech segment in at least one speech segment of the user includes the emotion type of the speech segment and/or level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to the speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams; assisting the caretaker to monitor the emotional state remotely through a graphical user interface connected to the real time emotion monitoring unit; and alerting the caretakers whenever an abnormal emotional condition arises through an alert generation unit coupled with the graphical user interface.
An objective of the present disclosure is to provide a system and method for real-time emotion monitoring and emotion regulation based on identified emotions.
Another objective of the present disclosure is to enable remote monitoring of a user's emotional state through a graphical user interface.
Another objective of the present disclosure is to alert caretakers whenever an abnormal emotional condition arises.
Another objective of the present disclosure is to detect speech conversations between a user and conversation objects, identify the conversation objects' identities, determine the emotion state of the user's speech segments, and generate emotion records including the conversation object's identity, conversation content, and the user's emotion state.
Another objective of the present disclosure is to regulate emotions based on the determined emotional state, guided by real-time emotional data, using a central unit equipped with actuators to provide affective touch sensations including vibrations and pressure.
To further clarify advantages and features of the present disclosure, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
BRIEF DESCRIPTION OF FIGURES
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a block diagram of a system for real-time emotion monitoring and emotion regulation based on identified emotions in accordance with an embodiment of the present disclosure;
Figure 2 illustrates a flow chart of a method for real time emotion monitoring and emotion regulation based on identified emotions in accordance with an embodiment of the present disclosure; and
Figure 3 illustrates a block diagram of a proposed method in accordance with an embodiment of the present disclosure.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION:
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to "an aspect", "another aspect" or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components proceeded by "comprises...a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
The functional units described in this specification have been labeled as devices. A device may be implemented in programmable hardware devices such as processors, digital signal processors, central processing units, field programmable gate arrays, programmable array logic, programmable logic devices, cloud processing systems, or the like. The devices may also be implemented in software for execution by various types of processors. An identified device may include executable code and may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, function, or other construct. Nevertheless, the executable of an identified device need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the device and achieve the stated purpose of the device.
Indeed, an executable code of a device or module could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices. Similarly, operational data may be identified and illustrated herein within the device, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, as electronic signals on a system or network.
Reference throughout this specification to "a select embodiment," "one embodiment," or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter. Thus, appearances of the phrases "a select embodiment," "in one embodiment," or "in an embodiment" in various places throughout this specification are not necessarily referring to the same embodiment.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, to provide a thorough understanding of embodiments of the disclosed subject matter. One skilled in the relevant art will recognize, however, that the disclosed subject matter can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosed subject matter.
In accordance with the exemplary embodiments, the disclosed computer programs or modules can be executed in many exemplary ways, such as an application that is resident in the memory of a device or as a hosted application that is being executed on a server and communicating with the device application or browser via a number of standard protocols, such as TCP/IP, HTTP, XML, SOAP, REST, JSON and other suitable protocols. The disclosed computer programs can be written in exemplary programming languages that execute from memory on the device or from a hosted server, such as BASIC, COBOL, C, C++, Java, Pascal, or scripting languages such as JavaScript, Python, Ruby, PHP, Perl or other suitable programming languages.
Some of the disclosed embodiments include or otherwise involve data transfer over a network, such as communicating various inputs or files over the network. The network may include, for example, one or more of the Internet, Wide Area Networks (WANs), Local Area Networks (LANs), analog or digital wired and wireless telephone networks (e.g., a PSTN, Integrated Services Digital Network (ISDN), a cellular network, and Digital Subscriber Line (xDSL)), radio, television, cable, satellite, and/or any other delivery or tunneling mechanism for carrying data. The network may include multiple networks or sub networks, each of which may include, for example, a wired or wireless data pathway. The network may include a circuit-switched voice network, a packet-switched data network, or any other network able to carry electronic communications. For example, the network may include networks based on the Internet protocol (IP) or asynchronous transfer mode (ATM), and may support voice using, for example, VoIP, Voice-over-ATM, or other comparable protocols used for voice data communications. In one implementation, the network includes a cellular telephone network configured to enable exchange of text or SMS messages.
Examples of the network include, but are not limited to, a personal area network (PAN), a storage area network (SAN), a home area network (HAN), a campus area network (CAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a virtual private network (VPN), an enterprise private network (EPN), Internet, a global area network (GAN), and so forth.
Figure 1 illustrates a block diagram of a system for real-time emotion monitoring and emotion regulation based on identified emotions in accordance with an embodiment of the present disclosure.
Referring to Figure 1, the system (100) includes a sensing node (102) consisting of an array of biosensors (102a) configured to read physiological signals from a user's body associated with user's emotional state.
In an embodiment, a data acquisition unit (104) is connected to the sensing node (102) to receive and store physiological signals in a cloud unit (128).
In an embodiment, a data pre-processing unit (106) is coupled with the cloud unit (128) to remove noise or irrelevant data that arises in the signal read using the biosensors.
In an embodiment, a real-time emotion monitoring unit (108) is connected to the data pre-processing unit (106) to identify emotional state of the user from the pre-processed signals using a machine learning approach.
In an embodiment, a central unit (110) is connected to the real-time emotion monitoring unit (108) to regulate emotion based on the determined emotion, guided by real-time emotional data, wherein the central unit (110) is equipped with a plurality of actuators (112) for providing a touch like stimulation with vibration and pressure sensations fostering emotion regulation and well-being, wherein the attached actuators (112) provide a responsive and adaptive solution for emotional support.
In an embodiment, a graphical user interface (114) is connected to the real time emotion monitoring unit (108) to assist the caretaker to monitor the emotional state remotely.
In an embodiment, an alert generation unit (116) is coupled with the graphical user interface (114) to alert the caretakers whenever an abnormal emotional condition arises.
In an embodiment, if negative emotions are detected then the touch sensations with varying intensities are dynamically administered using actuators. The emotion is detected from the physiological signals which comprise data obtained from biosensors.
In an embodiment, nerve activation is targeted particularly in C-Tactile (CT) afferents, which helps in emotion regulation.
In an embodiment, the system further comprises real-time tactile feedback in response to detected emotional states, enabling immediate intervention and support for users experiencing heightened emotions, wherein personalization is also achieved as the tactile feedback is provided based on the emotion detected for a particular user.
In an embodiment, the features are extracted using a feature extraction technique to identify key indicators of emotion wherein the machine learning techniques employ multimodal fusion techniques to combine information from multiple sources, which includes various physiological signals to enhance the accuracy of emotion classification.
In an embodiment, the system (100) further comprises: a speech data receiving unit (118) configured to receive sound streams and store the received data in the cloud unit (128); a speech data detection unit (120) connected to the cloud unit (128) to detect a speech conversation between a user and at least one conversation object from the sound streams; a speech data identification unit (122) connected to the speech data detection unit (120) to identify the conversation object at least according to speech of the conversation object in the speech conversation; a speech data control unit (124) connected to the speech data identification unit (122) to determine the emotion state of at least one speech segment of the user in the speech conversation; and a speech data generation unit (126) connected to the speech data control unit (124) to generate an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of content of the speech conversation, and the emotion state of the at least one speech segment of the user.
In an embodiment, the emotion state of each speech segment in the at least one speech segment of the user includes the emotion type of the speech segment and/or the level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams.
In an embodiment, the emotion state change of the user is determined at least according to the current emotion state of the current speech segment of the user and at least one previous emotion state of at least one previous speech segment of the user, and an emotion attention point is determined by a prediction model at least according to the emotion state change of the user, wherein the prediction model determines the emotion attention point further according to at least one of the current emotion state, at least a portion of content of the speech conversation, duration of the current emotion state, topic in the speech conversation, identity of the conversation object, and history emotion records of the user.
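Purely as an illustrative sketch, the rule-based stand-in below shows one way an emotion attention point could be derived from the sequence of per-segment emotion states; the thresholds, field names, and the simple rule used in place of the prediction model are assumptions for exposition only.

```python
# Illustrative, rule-based stand-in for the prediction model that flags an
# "emotion attention point" from per-segment emotion states. Thresholds and
# record fields are hypothetical.
from dataclasses import dataclass

@dataclass
class SegmentState:
    emotion_type: str   # e.g. "anger", "sadness", "neutral"
    level: float        # intensity in [0, 1]
    duration_s: float   # how long this state has persisted

NEGATIVE = {"anger", "sadness", "fear", "disgust"}

def attention_point(current: SegmentState, previous: list[SegmentState]) -> bool:
    """Flag an attention point when the user shifts into, or remains in,
    a sufficiently intense negative state."""
    if current.emotion_type not in NEGATIVE:
        return False
    shifted = bool(previous) and previous[-1].emotion_type not in NEGATIVE
    sustained = current.duration_s > 60 and current.level > 0.5
    intense = current.level > 0.8
    return (shifted and intense) or sustained

# Example: a sudden, intense shift from neutral to anger is flagged.
print(attention_point(SegmentState("anger", 0.9, 5.0),
                      [SegmentState("neutral", 0.2, 30.0)]))  # True
```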
Figure 2 illustrates a flow chart of a method for real time emotion monitoring and emotion regulation based on identified emotions in accordance with an embodiment of the present disclosure.
Referring to Figure 2, the method (200) includes a plurality of steps as described below.
At step (202), the method (200) includes reading physiological signals from a user's body associated with the user's emotional state using an array of biosensors (102a) configured within a sensing node (102).
At step (204), the method (200) includes receiving and storing the physiological signals in a cloud unit (128) through a data acquisition unit (104) connected to the sensing node (102).
At step (206), the method (200) includes removing noise and irrelevant data from the signals acquired using the biosensors through a data pre-processing unit (106) coupled with the cloud unit (128).
At step (208), the method (200) includes extracting features using a feature extraction technique to identify key indicators of emotion wherein the machine learning techniques employ multimodal fusion techniques to combine information from multiple sources, which includes various physiological signals to enhance the accuracy of emotion classification.
At step (210), the method (200) includes identifying emotional state of the user from the pre-processed signals in real-time using a machine learning approach through a real-time emotion monitoring unit (108) connected to the data pre-processing unit (106).
At step (212), the method (200) includes regulating emotion based on the determined emotional state, guided by real-time emotional data, using a central unit (110) connected to the real-time emotion monitoring unit (108), wherein the central unit (110) is equipped with a plurality of actuators (112) for providing a touch like stimulation with vibration and pressure sensations fostering emotion regulation and well-being, wherein touch sensations are dynamically administered using actuators for negative emotions, and wherein the physiological signals comprise data obtained from biosensors.
At step (214), the method (200) includes receiving sound streams using the speech data receiving unit (118) and detecting a speech conversation between a user and at least one conversation object from the sound streams using the speech data detection unit (120), thereby identifying the identity of the conversation object at least according to the speech of the conversation object in the speech conversation using the speech data identification unit (122).
At step (216), the method (200) includes determining the emotion state of at least one speech segment of the user in the speech conversation using the speech data control unit (124) and generating an emotion record corresponding to the speech conversation using the speech data generation unit (126), the emotion record at least including the identity of the conversation object, at least a portion of the content of the speech conversation, and the emotion state of the at least one speech segment of the user, wherein the emotion state of each speech segment in the at least one speech segment of the user includes the emotion type of the speech segment and/or level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to the speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams.
At step (218), the method (200) includes assisting the caretaker to monitor the user's emotional state remotely through a graphical user interface (114) connected to the real-time emotion monitoring unit (108).
At step (220), the method (200) includes alerting the caretakers whenever an abnormal emotional condition arises through an alert generation unit coupled with the graphical user interface (114).
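For orientation only, the following sketch chains steps (202) to (220) into a single software loop; every object and function name in it is a hypothetical placeholder for the corresponding unit and not an actual interface of the system.

```python
# Hypothetical orchestration of method steps (202)-(220); every call below is
# a placeholder standing in for the corresponding unit, not an actual API.
def monitoring_cycle(sensing_node, cloud, preprocessor, monitor,
                     central_unit, speech_pipeline, gui, alerter):
    raw = sensing_node.read_signals()                   # step 202
    cloud.store(raw)                                    # step 204
    clean = preprocessor.remove_noise(raw)              # step 206
    features = preprocessor.extract_features(clean)     # step 208
    emotion = monitor.classify(features)                # step 210
    if emotion.is_negative():
        central_unit.apply_affective_touch(emotion.intensity)   # step 212
    record = speech_pipeline.process(cloud.sound_streams())     # steps 214-216
    gui.update(emotion, record)                         # step 218
    if monitor.is_abnormal(emotion):
        alerter.notify_caretaker(emotion)               # step 220
```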
The present invention relates to a system and method for real-time emotion monitoring and emotion regulation based on identified emotions. This invention discloses a personalized affective touch device designed to identify an individual's emotional state through sensors that capture physiological data. Utilizing this data, the device is capable of delivering human-like touch experiences tailored to the detected emotion, with the help of a plurality of actuators providing a touch like stimulation with vibration and pressure sensations fostering emotion regulation and well-being. The envisioned device is intended to help individuals with psychological disorders, aiding them in managing their emotional well-being. It also addresses social isolation concerns by providing comforting touch interactions. Furthermore, the device serves a practical purpose for individuals undergoing isolation due to infectious diseases such as COVID-19, offering them a means of emotional support and connection during periods of physical isolation.
The proposed device utilizes biosensors to gather physiological signals and then employs haptic feedback. Haptic feedback encompasses a range of sensations from simple vibrations to nuanced experiences like pressure, texture, and temperature changes. As haptic technologies advance, they become increasingly essential for creating immersive digital experiences.
In this system, when negative emotions are detected, actuators within the affective touch device deliver vibrations and pressure sensations as haptic feedback. The intensity of these sensations corresponds to the intensity of the identified emotions. For instance, the vibrations and pressure mimic the sensation of a gentle tap or touch on the forearm, providing a human-like tactile response. This approach allows the device to simulate comforting human touch interactions based on the user's emotional state, enhancing its effectiveness in providing emotional support and connection.
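A minimal sketch of such an intensity mapping, under assumed numeric ranges, is given below; the duty-cycle values and the slow tap speed of roughly 3 cm/s are illustrative choices intended to mimic a gentle forearm touch rather than disclosed parameters of the device.

```python
# Illustrative mapping from identified emotion intensity to actuator commands.
# Duty-cycle ranges and the ~3 cm/s tap speed are assumptions chosen to mimic
# a gentle forearm touch; they are not specified values of the device.
def touch_command(emotion: str, intensity: float) -> dict:
    """Return hypothetical actuator settings for a negative emotion of a given
    intensity in [0, 1]; non-negative emotions receive no stimulation."""
    negative = {"anger", "sadness", "fear", "disgust"}
    if emotion not in negative:
        return {"vibration_duty": 0.0, "pressure_duty": 0.0, "tap_speed_cm_s": 0.0}
    intensity = max(0.0, min(1.0, intensity))
    return {
        "vibration_duty": 0.2 + 0.4 * intensity,   # gentle 20-60 % PWM
        "pressure_duty": 0.1 + 0.3 * intensity,    # light pressure only
        "tap_speed_cm_s": 3.0,                     # slow, CT-optimised stroking
    }

print(touch_command("sadness", 0.7))
```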
Affective touch plays a crucial role in regulating emotions, and technological interventions can replicate the sensation of touch to aid in emotion regulation. This can be achieved by stimulating nerves through electrical or mechanical energy applied to the skin area innervated by the target nerves.
Researchers have identified C-Tactile (CT) afferents as a key channel for affective touch, primarily found in the hairy skin of the human body. The concept that CTs transmit positive affective touch was initially proposed in prior studies. CT fibers are specialized unmyelinated Group C peripheral nerve fibers that transmit afferent signals slowly from hairy skin to the insula. These mechanoreceptors, known as CTs, exhibit heightened responsiveness to light touch and generate signals that modulate emotional states rather than touch discrimination.
The "affective touch hypothesis" suggests that CTs play a role in social bonding based on their physiological characteristics. Various studies, including fMRI/PET scans, psychophysical assessments, and physiological analyses, support the notion of CT activation triggering affective emotional responses. According to this hypothesis, the primary function of the CT system is to elicit emotional, hormonal, and behavioral reactions during skin-to-skin contact with others.
The system targets C-tactile (CT) fibers as the stimulation target. The sensations provided can act as affective touch for the person; this emotive touch can help individuals cope with social isolation issues and aid in emotion regulation.
Figure 3 illustrates a block diagram of the proposed method in accordance with an embodiment of the present disclosure.
The affective touch device integrates an electrical circuit board equipped with sensors designed to capture physiological signals from the human body, specifically targeting features relevant to emotions. These raw physiological signals are susceptible to various interferences such as electromagnetic interference and measurement instrument noise, necessitating preprocessing to minimize noise and isolate emotion-related components within each signal.
The proposed model focuses on the following key physiological signals associated with human emotions: heart rate (HR), heart rate variability (HRV), electrodermal activity (EDA), skin conductance response (SCR), skin conductance level (SCL), and respiratory rate (RR). These signals are meticulously recorded using biosensors to ensure accuracy and reliability.
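As an illustrative example only, a few of these indicators could be derived from raw samples as sketched below; the sampling rates, the RMSSD statistic for HRV, and the peak-counting thresholds are assumptions, not specified parameters of the device.

```python
# Illustrative computation of a few of the listed indicators from raw samples.
# Sampling rates, thresholds and the specific statistics are assumptions.
import numpy as np
from scipy.signal import find_peaks

def hr_and_hrv(rr_intervals_ms: np.ndarray) -> tuple[float, float]:
    """Mean heart rate (bpm) and HRV as RMSSD (ms) from R-R intervals."""
    hr = 60000.0 / rr_intervals_ms.mean()
    rmssd = float(np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2)))
    return hr, rmssd

def eda_features(eda_us: np.ndarray, fs: float = 4.0) -> tuple[float, int]:
    """Skin conductance level (mean, microsiemens) and SCR count (phasic peaks)."""
    scl = float(eda_us.mean())
    window = int(4 * fs)  # ~4 s moving average approximates the tonic level
    phasic = eda_us - np.convolve(eda_us, np.ones(window) / window, "same")
    peaks, _ = find_peaks(phasic, height=0.05)   # assumed 0.05 uS SCR threshold
    return scl, len(peaks)

def respiratory_rate(resp: np.ndarray, fs: float = 25.0) -> float:
    """Breaths per minute estimated by counting inhalation peaks."""
    peaks, _ = find_peaks(resp, distance=int(fs * 1.5))
    return len(peaks) * 60.0 / (len(resp) / fs)
```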
Through machine learning techniques, the signals captured by the sensors are analyzed to identify the user's emotional state. This analysis is crucial for the device to provide appropriate affective touch interventions. Based on the predicted emotional state, the device activates actuators to deliver tactile sensations such as vibrations and pressure. These sensations are finely tuned to mimic human-like touch experiences and are administered as needed to support the user's emotional well-being and regulation.
Through literature analysis, it was observed that C-Tactile (CT) fibers were prominently present in the forearm, constituting approximately 40% of the units studied, whereas their incidence in thigh skin was only around 10%. Consequently, the placement of actuators was optimized for the forearm region to maximize the effectiveness of affective touch stimulation.
Control over the sensors and actuators within the affective touch device will be managed through a microcontroller, ensuring precise and coordinated operation. Key components such as the Power button and the Circuit board will be strategically positioned on the dorsal side of the device for easy access and usability.
To accommodate a diverse range of users, the design of the affective touch device will be carefully developed to ensure comfort across various hand sizes. The focus will be on providing a secure grip and ease of handling, with thoughtful design elements aimed at delivering a user-friendly and effective experience for individuals seeking emotion regulation and comfort through affective touch interactions.
Referring to Figure 3, the biosensor module is equipped with sensors capable of capturing physiological signals from the human body, specifically targeting features relevant to emotions. These signals are then transmitted to the data acquisition module, where they are securely stored in the cloud for further analysis.
The pre-processing module plays a crucial role by filtering out any noise present in the signals acquired from the sensors. This ensures that only the essential components related to emotions are retained for analysis. The pre-processed signals are then fed into the real-time emotion monitoring module, which utilizes advanced algorithms to determine the user's current emotional state accurately.
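One possible form of this noise-removal stage, assuming zero-phase Butterworth filtering with illustrative cut-off frequencies, is sketched below; the cut-offs and filter orders are not parameters disclosed for the device.

```python
# Illustrative pre-processing sketch: zero-phase Butterworth filtering to
# suppress drift and high-frequency noise. Cut-offs and orders are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_ppg(signal: np.ndarray, fs: float = 100.0) -> np.ndarray:
    """Keep the 0.5-8 Hz band typically containing pulse information."""
    b, a = butter(4, [0.5 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def lowpass_eda(signal: np.ndarray, fs: float = 4.0) -> np.ndarray:
    """Remove high-frequency noise above ~1 Hz from electrodermal activity."""
    b, a = butter(2, 1.0 / (fs / 2), btype="low")
    return filtfilt(b, a, signal)
```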
To facilitate continuous monitoring and interaction, the application user interface module provides a user-friendly interface for caretakers. This interface allows for real-time tracking of the user's emotional state, providing valuable insights and support.
Additionally, the alert generation module, integrated within the application, serves as a proactive measure by sending alerts to caretakers whenever an abnormal emotional condition is detected. This feature ensures prompt intervention and support, enhancing the overall effectiveness of the affective touch device in promoting emotional well-being and support.
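A purely hypothetical alert rule is sketched below to illustrate this behaviour: an alert is raised when a negative state of sufficient intensity persists beyond a dwell time; the thresholds and the notification callback are assumptions for exposition.

```python
# Hypothetical alert-generation rule: notify the caretaker when a negative
# emotion of sufficient intensity persists. Thresholds and the notify()
# callback are illustrative placeholders, not disclosed values.
import time
from typing import Callable

class AlertGenerator:
    def __init__(self, notify: Callable[[str], None],
                 level_threshold: float = 0.7, dwell_s: float = 120.0):
        self.notify = notify
        self.level_threshold = level_threshold
        self.dwell_s = dwell_s
        self._since = None  # when the abnormal state started

    def update(self, emotion: str, level: float, now: float | None = None):
        now = time.time() if now is None else now
        abnormal = emotion in {"anger", "sadness", "fear"} and level >= self.level_threshold
        if not abnormal:
            self._since = None
            return
        if self._since is None:
            self._since = now
        elif now - self._since >= self.dwell_s:
            self.notify(f"Abnormal emotional condition: {emotion} (level {level:.2f})")
            self._since = now  # reset so alerts are not sent continuously

# Example: prints an alert once the condition has persisted for two minutes.
gen = AlertGenerator(print)
gen.update("sadness", 0.9, now=0.0)
gen.update("sadness", 0.9, now=130.0)
```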
The speech data receiving unit receives sound streams and stores the received data in the cloud unit. The speech data detection unit detects a speech conversation between a user and at least one conversation object from the sound streams. The speech data identification unit identifies the conversation object at least according to speech of the conversation object in the speech conversation. The speech data control unit determines the emotion state of at least one speech segment of the user in the speech conversation. The speech data generation unit generates an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of content of the speech conversation, and the emotion state of the at least one speech segment of the user.
The emotion state of each speech segment in the at least one speech segment of the user includes the emotion type of the speech segment and/or the level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams.
The emotion state change of the user is determined at least according to the current emotion state of the current speech segment of the user and at least one previous emotion state of at least one previous speech segment of the user, and an emotion attention point is determined by a prediction model at least according to the emotion state change of the user, wherein the prediction model determines the emotion attention point further according to at least one of the current emotion state, at least a portion of content of the speech conversation, duration of the current emotion state, topic in the speech conversation, identity of the conversation object, and history emotion records of the user.
The determined emotional states are selected from positive emotions and negative emotions wherein for negative emotions the touch sensations are dynamically administered using actuators, wherein the physiological signals comprise data obtained from biosensors measuring parameters such as heart rate (HR), heart rate variability (HRV), electrodermal activity (EDA), skin conductance response (SCR), skin conductance level (SCL), and respiratory rate (RR).
The feedback comprises personalized suggestions for emotion regulation techniques, including mindfulness exercises, cognitive reappraisal strategies, and relaxation techniques, wherein the user has the option to customize the feedback preferences, including the frequency and format of the recommendations, to align with their individual preferences and needs.
The features are extracted using a feature extraction technique to identify key indicators of emotional arousal, valence, and intensity, wherein the machine learning techniques employ multimodal fusion techniques to combine information from multiple sources, including physiological signals and behavioral data, to enhance the accuracy of emotion classification, wherein valence comprises pleasure, and the emotional dimensional label comprises arousal.
The proposed system comprises several interconnected modules designed to provide comprehensive emotion monitoring and regulation support.
1. The sensor module includes biosensors capable of capturing emotion-related information from the human body. These biosensors transmit the collected signals to the data acquisition unit, which then stores the relevant data securely in the cloud unit for further analysis. The data pre-processing unit plays a crucial role in removing any noise or irrelevant data from the signals, ensuring that only essential emotion-related information is retained for analysis.
2. The system features an automated real-time emotion monitoring system that employs machine learning approaches to identify the user's affective state accurately. This monitoring system utilizes the pre-processed signals to determine the user's emotional state in real time, providing valuable insights into their emotional well-being.
3. To facilitate emotion regulation, the system incorporates attached actuators capable of providing pressure and vibration sensations. These actuators simulate affective touch experiences that resemble human like touch, offering a tangible and comforting response based on the detected emotional state.
4. An application interface is provided to enable caretakers to monitor the user's real-time emotion effortlessly. This user interface displays emotion-related clues and data, allowing caretakers to track changes and trends in the user's emotional state over time. Additionally, an alert generation unit is integrated into the application, generating alert signals to notify caretakers promptly when intervention or support is needed based on the detected emotional state.
Overall, this detailed system architecture aims to provide a holistic approach to emotion monitoring and regulation, leveraging advanced technology and machine learning algorithms to enhance emotional well-being and support for users and caretakers alike.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
Claims:
1. A system for real-time emotion monitoring and emotion regulation based on identified emotions, the system comprises:
a) a sensing node consisting of an array of biosensors configured to read physiological signals from a user's body associated with user's emotional state;
b) a data acquisition unit connected to the sensing node to receive and store physiological signals in a cloud;
c) a data pre-processing unit coupled with the cloud unit to remove noise that arises in the signal read using biosensors;
d) a real-time emotion monitoring unit connected to the data pre-processing unit to identify and classify an emotional state of the user from the pre-processed signals;
e) a central unit connected with the real-time emotion monitoring unit to regulate emotion based on the determined emotion, guided by real-time emotional data, wherein the central unit is equipped with a plurality of actuators for providing a touch like stimulation with vibration and pressure sensations fostering emotion regulation and well-being, wherein touch sensations are dynamically administered using actuators for negative emotions.
f) a graphical user interface connected to the real-time emotion monitoring unit to assist the caretaker to monitor the user's emotional state remotely; and
g) an alert generation unit coupled with the graphical user interface to alert the caretakers whenever an abnormal emotional condition arises.
h) a speech data receiving unit for receiving sound streams, a speech data detection unit for detecting a speech conversation between a user and at least one conversation object from the sound streams, and a speech data identification unit for identifying the identity of the conversation object at least according to the speech of the conversation object in the speech conversation; and
i) a speech data control unit for determining the emotion state of at least one speech segment of the user in the speech conversation and a speech data generation unit for generating an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of the content of the speech conversation, and the emotion state of the at least one speech segment of the user, wherein the emotion state of each speech segment in at least one speech segment of the user includes the emotion type of the speech segment and/or level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to the speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams.
2. The system as claimed in claim 1, wherein for negative emotions detected from the physiological signals, the touch sensations are dynamically administered using actuators, wherein the physiological signals comprise data obtained from biosensors.
3. The system as claimed in claim 1, wherein nerve activation is targeted in dermal areas, particularly C-Tactile (CT) afferents, to elicit emotional responses akin to social bonding and promote emotional, hormonal, and behavioral reactions.
4. The system as claimed in claim 1, further comprises real-time tactile feedback in response to detected emotional states, enabling immediate intervention and support for users experiencing heightened emotion, wherein personalization is also achieved as the tactile feedback is provided based on the emotion detected for a particular user, wherein the feedback comprises personalized suggestions for emotion regulation techniques, including mindfulness exercises, cognitive reappraisal strategies, and relaxation techniques, wherein the user has the option to customize the feedback preferences, including the frequency and format of the recommendations, to align with their individual preferences and needs.
5. The system as claimed in claim 1, wherein the features are extracted using a feature extraction technique to identify key indicators of emotion wherein the machine learning techniques employ multimodal fusion techniques to combine information from multiple sources, including various physiological signals to enhance the accuracy of emotion classification.
6. The system as claimed in claim 1, wherein the graphical user interface provides live monitoring of the user's vital measurements and emotional states.
7. The system as claimed in claim 1, wherein the alert generation unit uses methods to check if an abnormal emotional condition has occurred and for the generation of alerts accordingly.
8. The system as claimed in claim 1, wherein the data pre-processing unit is coupled with the cloud unit to remove noise or irrelevant data that arises in the signal read using biosensors with the help of data mining or machine learning algorithms and thereby identifying features/patterns specifically relevant to emotions, wherein the biosensors are selected from a group of thermometers, heart rate sensor IC, Galvanic Skin Response (GSR) Sensor, and integrated pulse oximeter, wherein the plurality of physiological signals selected from heart rate (HR), heart rate variability (HRV), electrodermal activity (EDA), skin conductance response (SCR), skin conductance level (SCL), and respiratory rate (RR).
9. The system as claimed in claim 1, wherein the real time emotion monitoring unit is connected to the data preprocessing unit to identify emotional state of the user from the preprocessed signals using machine learning approaches where multimodal fusion techniques are employed to combine features from multiple sources and give as input to the machine learning model.
10. The system as claimed in claim 1, further comprises:
• a receiving unit configured to receive sound streams;
• a detection unit connected to the receiving unit to detect a speech conversation between a user and at least one conversation object from the sound streams;
• an identification unit connected to the detection unit to identify the conversation object at least according to speech of the conversation object in the speech conversation;
• a control unit connected to the identification unit to determine emotion state of at least one speech segment of the user in the speech conversation; and
• a generation unit connected to the control unit to generate an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of content of the speech conversation, and the emotion state of the at least one speech segment of the user.
11. The system as claimed in claim 10, wherein emotion state of each speech segment in at least one speech segment of the user includes emotion type of the speech segment and/or level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams.
12. The system as claimed in claim 10, wherein the emotion state change of the user is determined at least according to the current emotion state of the current speech segment of the user and at least one previous emotion state of at least one previous speech segment of the user, and an emotion attention point is determined by a prediction model at least according to the emotion state change of the user, wherein the prediction model determines the emotion attention point further according to at least one of the current emotion state, at least a portion of content of the speech conversation, duration of the current emotion state, topic in the speech conversation, identity of the conversation object, and history emotion records of the user.
13. A method for real-time emotion monitoring and emotion regulation based on identified emotions, the method comprises:
a) reading physiological signals from a user's body associated with the user's emotional state using an array of biosensors configured within a sensing node, and receiving and storing the physiological signals in a cloud unit through a data acquisition unit connected to the sensing node;
b) removing noise from the physiological signals acquired using the biosensors through a data pre-processing unit coupled with the cloud unit;
c) extracting features using a feature extraction technique to identify key indicators of emotion, wherein multimodal fusion techniques are employed to combine information from multiple sources, including various physiological signals, to enhance the accuracy of emotion classification;
d) identifying and classifying an emotional state of the user from the pre-processed signals in real-time using a machine learning approach through a real-time emotion monitoring unit connected to the data pre-processing unit;
e) regulating emotion based on the determined emotional state, guided by real-time emotional data, using a central unit connected to the real-time emotion monitoring unit, wherein the central unit is equipped with a plurality of actuators for providing a touch like stimulation with vibration and pressure sensations fostering emotion regulation and well-being, wherein touch sensations are dynamically administered using actuators for negative emotions;
f) assisting the user and caretaker to monitor the user's emotional state remotely through a graphical user interface connected to the real-time emotion monitoring unit; and
g) alerting the caretakers and users whenever an abnormal emotional condition arises through an alert generation unit coupled with the graphical user interface.
h) receiving sound streams and detecting a speech conversation between a user and at least one conversation object from the sound streams thereby identifying the identity of the conversation object at least according to the speech of the conversation object in the speech conversation;
i) determining the emotion state of at least one speech segment of the user in the speech conversation and generating an emotion record corresponding to the speech conversation, the emotion record at least including the identity of the conversation object, at least a portion of the content of the speech conversation, and the emotion state of the at least one speech segment of the user, wherein the emotion state of each speech segment in the at least one speech segment of the user includes the emotion type of the speech segment and/or level of the emotion type, wherein detecting the speech conversation comprises detecting a start point and an end point of the speech conversation at least according to the speech of the user and/or speech of the conversation object in the sound streams, wherein the start point and the end point of the speech conversation are detected using environment information of the speech conversation, and background sound in the sound streams;
Documents
Name | Date |
---|---|
202441090589-FORM 18A [18-12-2024(online)].pdf | 18/12/2024 |
202441090589-COMPLETE SPECIFICATION [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-DECLARATION OF INVENTORSHIP (FORM 5) [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-DRAWINGS [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-FIGURE OF ABSTRACT [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-FORM 1 [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-FORM-9 [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-POWER OF AUTHORITY [21-11-2024(online)].pdf | 21/11/2024 |
202441090589-REQUEST FOR EARLY PUBLICATION(FORM-9) [21-11-2024(online)].pdf | 21/11/2024 |