FACIAL EMOTION DETECTION SYSTEM FOR HUMANS


Application Type: Ordinary Application
Status: Published
Filed on 30 October 2024

Abstract

This invention provides a system for real-time emotion detection through facial expression analysis. The system captures and analyzes facial landmarks, detecting emotions and tracking mood trends over time. Applications include mental health monitoring, customer service enhancement, and interactive gaming, offering real-time insights into human emotions.

Patent Information

Application ID: 202411083386
Invention Field: COMPUTER SCIENCE
Date of Application: 30/10/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
DR. PRASANN KUMAR | LOVELY PROFESSIONAL UNIVERSITY, JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA. | India | India
SHAGUN CHAUDHARY | LOVELY PROFESSIONAL UNIVERSITY, JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA. | India | India
DR. POLU PICHESWARA RAO | LOVELY PROFESSIONAL UNIVERSITY, JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA. | India | India
AMAR KUMAR | LOVELY PROFESSIONAL UNIVERSITY, JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA. | India | India

Applicants

Name | Address | Country | Nationality
LOVELY PROFESSIONAL UNIVERSITY | JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA. | India | India

Specification

Description:

FIELD OF THE INVENTION
This invention relates to artificial intelligence and human-computer interaction, specifically a system for detecting human emotions through real-time facial expression analysis. Utilizing deep learning algorithms, the invention enables mood tracking and emotion detection to provide immediate insights into the emotional state of users, with applications in mental health monitoring, customer service, and user experience research.
BACKGROUND OF THE INVENTION
Traditional methods for assessing human emotions rely on self-reported data, which can be inaccurate due to individual biases or limited awareness of one's own emotions. Real-time emotion detection is challenging, especially when trying to capture subtle expressions such as micro-expressions, which are involuntary and often difficult to detect. Existing systems also struggle to provide accurate results across diverse lighting conditions, facial orientations, and cultural expressions, limiting their reliability and application.
There is a need for a system capable of detecting and interpreting emotions accurately in real-time, without relying on subjective self-reporting. Furthermore, integrating such a system with wearable or commonly used devices can enhance accessibility, providing users with continuous emotional feedback for applications like mental health monitoring, customer service, and immersive gaming. This invention addresses these challenges by providing a comprehensive solution that utilizes convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to analyze facial expressions, including subtle micro-expressions, with high accuracy and adaptability.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the accompanying drawings.
This invention presents a facial emotion detection system that utilizes machine learning algorithms to analyze facial expressions in real-time. The system identifies key facial landmarks and analyzes expressions to infer the user's emotional state. The detected emotions are displayed through an intuitive graphical interface, offering real-time feedback and continuous mood tracking. Applications include mental health monitoring, customer service optimization, and interactive gaming, providing accurate insights into human emotions.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: DEPICTS THE FACIAL LANDMARK DETECTION PROCESS, HIGHLIGHTING KEY POINTS SUCH AS EYES, MOUTH, AND EYEBROWS.
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described in such detail as to clearly communicate the disclosure. However, the level of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", "third", and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The Facial Emotion Detection System comprises a camera, processing unit, and software modules designed to capture, analyze, and interpret human facial expressions in real-time. The system uses a camera (e.g., a smartphone camera or webcam) to capture video or still images of the user's face, ensuring a resolution high enough to capture subtle facial movements. The captured images are processed through deep learning models that identify facial landmarks and classify emotions based on detected expressions.
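As a rough illustration of this capture pipeline, the sketch below polls a webcam with OpenCV and forwards each frame to a downstream analysis callback. The `analyze_frame` hook and the camera index are illustrative assumptions, not part of the specification.

```python
# Minimal capture-loop sketch: read frames from a webcam and hand each
# one to the downstream analysis modules. analyze_frame is a
# hypothetical stand-in for the detection pipeline described above.
import cv2

def run_capture(analyze_frame, camera_index=0):
    cap = cv2.VideoCapture(camera_index)   # smartphone camera or webcam
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    try:
        while True:
            ok, frame = cap.read()          # one BGR video frame
            if not ok:
                break
            analyze_frame(frame)            # face -> landmarks -> emotion
            if cv2.waitKey(1) & 0xFF == ord("q"):   # 'q' stops the loop
                break
    finally:
        cap.release()
```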
The face detection module uses convolutional neural networks (CNNs) to detect the user's face in each frame, accurately isolating the facial region. The system adjusts for lighting variations and head poses to maintain consistency in analysis. Facial landmark detection identifies approximately 68-100 key points on the face, including the corners of the eyes, edges of the mouth, and the eyebrows' position, allowing for detailed tracking of facial movements.
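For a concrete picture of landmark extraction, the sketch below uses dlib's publicly available 68-point shape predictor. Note that dlib's bundled face detector is HOG-based rather than the CNN detector described in the specification, and the model file path is an assumption (the predictor must be downloaded separately from dlib.net).

```python
# Landmark-extraction sketch with dlib's 68-point predictor, standing in
# for the CNN-based detection module described above.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()   # HOG-based, not CNN
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)                 # isolate facial region(s)
    all_points = []
    for rect in faces:
        shape = predictor(gray, rect)         # 68 (x, y) key points
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        all_points.append(points)             # eye corners, mouth edges, brows
    return all_points
```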
A key feature is micro-expression detection, utilizing temporal convolutional networks (TCNs) and recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks. This module analyzes brief, involuntary muscle movements across frames, capturing subtle expressions that reveal genuine emotions. By training on large datasets of facial expressions, the system achieves high accuracy in detecting emotions such as joy, sadness, anger, fear, and surprise.
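One way to realize this temporal analysis is an LSTM over a short window of per-frame landmark vectors, as in the PyTorch sketch below. The layer sizes, window length, and binary micro-expression target are illustrative assumptions rather than the trained model of the invention.

```python
# Illustrative micro-expression model: an LSTM consumes a sequence of
# flattened landmark coordinates and emits logits for "micro-expression
# present" vs. "absent".
import torch
import torch.nn as nn

class MicroExpressionLSTM(nn.Module):
    def __init__(self, n_landmarks=68, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_landmarks * 2,  # (x, y) per point
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, seq):                  # seq: (batch, frames, 136)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])         # logits from last time step

model = MicroExpressionLSTM()
window = torch.randn(1, 30, 68 * 2)          # ~1 s of frames at 30 fps
logits = model(window)
```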
Emotion classification is conducted through a multi-layer neural network that assigns probability scores to each detected emotion, reflecting the system's confidence in its classification. Detected emotions are categorized into predefined states, including happy, sad, angry, and neutral, with complex emotions inferred through a combination of expressions. Mood tracking functionality provides continuous analysis, creating a trend of the user's emotional state over time. This feature is especially useful for identifying patterns related to stress or happiness, supporting long-term mental health monitoring.
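The probability scores and mood trend can be pictured as follows: a softmax over the classifier's output logits yields per-emotion confidences, and an exponential moving average smooths those confidences into a longer-term trend. The emotion labels and smoothing factor below are illustrative assumptions.

```python
# Sketch of emotion scoring and mood tracking: softmax for per-emotion
# probabilities, exponential moving average for the mood trend.
import torch
import torch.nn.functional as F

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def classify(logits):                        # logits: (1, len(EMOTIONS))
    probs = F.softmax(logits, dim=-1)        # confidence per emotion
    return dict(zip(EMOTIONS, probs.squeeze(0).tolist()))

class MoodTracker:
    """Smooths per-frame probabilities into a long-term mood trend."""
    def __init__(self, alpha=0.05):          # alpha: smoothing factor
        self.alpha = alpha
        self.trend = None

    def update(self, probs):
        p = torch.tensor([probs[e] for e in EMOTIONS])
        if self.trend is None:
            self.trend = p
        else:                                # exponential moving average
            self.trend = (1 - self.alpha) * self.trend + self.alpha * p
        return dict(zip(EMOTIONS, self.trend.tolist()))
```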
The user interface presents real-time feedback, allowing users to view their detected emotions through icons and mood graphs. Customization options let users adjust analysis frequency and choose specific feedback types, such as graphical charts or textual insights. The system's data storage is designed to prioritize privacy, allowing users to store data locally or in a secure cloud environment, with encrypted storage and deletion options to maintain data confidentiality.
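The encrypted local storage described above might look like the following sketch, which uses the `cryptography` package's Fernet recipe to encrypt mood records before writing them to disk. The file name, record layout, and in-memory key handling are simplifying assumptions; a real deployment would use a proper key store.

```python
# Privacy sketch: encrypt mood records at rest with symmetric (Fernet)
# encryption before writing them to local storage.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # per-user secret key
fernet = Fernet(key)

def save_record(path, record):
    token = fernet.encrypt(json.dumps(record).encode("utf-8"))
    with open(path, "wb") as f:
        f.write(token)

def load_record(path):
    with open(path, "rb") as f:
        return json.loads(fernet.decrypt(f.read()).decode("utf-8"))

save_record("mood.enc", {"time": "2024-10-30T12:00", "happy": 0.8})
```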
Claims:

1. A system for detecting human emotions in real-time, comprising a camera, a processing unit, and software modules for facial landmark detection, micro-expression analysis, and emotion classification.
2. The system as claimed in Claim 1, wherein the camera captures video frames or still images for emotion analysis.
3. The system as claimed in Claim 1, wherein the processing unit includes deep learning algorithms for identifying facial landmarks and analyzing expressions.
4. The system as claimed in Claim 1, wherein the facial landmark detection module utilizes convolutional neural networks (CNNs) to isolate key facial points.
5. The system as claimed in Claim 1, wherein a temporal convolutional network (TCN) and long short-term memory (LSTM) networks detect micro-expressions across frames.
6. The system as claimed in Claim 1, wherein the emotion classification module categorizes detected expressions into specific emotional states and assigns probability scores.
7. The system as claimed in Claim 1, wherein mood tracking functionality provides continuous analysis and generates mood trends over time.
8. The system as claimed in Claim 1, wherein the user interface displays detected emotions through icons, charts, and graphical representations.
9. The system as claimed in Claim 1, wherein the system provides local or cloud storage options for data, with encryption to maintain data security and privacy.
10. The system as claimed in Claim 1, wherein the system integrates with external devices, such as smartwatches or virtual reality headsets, for real-time emotion feedback in diverse environments.

Documents

Name | Date
202411083386-COMPLETE SPECIFICATION [30-10-2024(online)].pdf | 30/10/2024
202411083386-DECLARATION OF INVENTORSHIP (FORM 5) [30-10-2024(online)].pdf | 30/10/2024
202411083386-DRAWINGS [30-10-2024(online)].pdf | 30/10/2024
202411083386-EDUCATIONAL INSTITUTION(S) [30-10-2024(online)].pdf | 30/10/2024
202411083386-EVIDENCE FOR REGISTRATION UNDER SSI [30-10-2024(online)].pdf | 30/10/2024
202411083386-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [30-10-2024(online)].pdf | 30/10/2024
202411083386-FORM 1 [30-10-2024(online)].pdf | 30/10/2024
202411083386-FORM FOR SMALL ENTITY(FORM-28) [30-10-2024(online)].pdf | 30/10/2024
202411083386-FORM-9 [30-10-2024(online)].pdf | 30/10/2024
202411083386-POWER OF AUTHORITY [30-10-2024(online)].pdf | 30/10/2024
202411083386-PROOF OF RIGHT [30-10-2024(online)].pdf | 30/10/2024
202411083386-REQUEST FOR EARLY PUBLICATION(FORM-9) [30-10-2024(online)].pdf | 30/10/2024
