METHOD FOR SENTIMENT ANALYSIS IN VIDEO CONFERENCING PLATFORMS USING MACHINE LEARNING
ORDINARY APPLICATION
Published
Filed on 8 November 2024
Abstract
The present invention provides a method for sentiment analysis in teleconferencing platforms through an integrated plugin that processes spoken and written communication in real-time. The plugin captures audio and chat data, converts audio to text, and applies NLP and ML techniques to assess sentiments, presenting the results through an intuitive visual dashboard within the teleconferencing interface. It offers features like multilingual support, data encryption for privacy, and an opt-in mechanism for participant consent. This plugin enhances virtual communication experiences by monitoring participant emotions, generating post-meeting sentiment reports, and providing insights for improving engagement and mental health monitoring. (Accompanied Figure No. 1)
Patent Information
Field | Value |
---|---|
Application ID | 202411086287 |
Invention Field | ELECTRONICS |
Date of Application | 08/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Pooja Tomer | Department of CSE, IMS Engineering College, Ghaziabad, Uttar Pradesh, India. | India | India |
Aman Upadhyay | Department of CSE, IMS Engineering College, Ghaziabad, Uttar Pradesh, India. | India | India |
Aditya Pratap Mall | Department of CSE, IMS Engineering College, Ghaziabad, Uttar Pradesh, India. | India | India |
Ananya Gupta | Department of CSE, IMS Engineering College, Ghaziabad, Uttar Pradesh, India. | India | India |
Arpita Singh | Department of CSE, IMS Engineering College, Ghaziabad, Uttar Pradesh, India. | India | India |
Aparna Sharma | Department of CSE, IMS Engineering College, Ghaziabad, Uttar Pradesh, India. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
IMS Engineering College | National Highway 24, Near Dasna, Adhyatmik Nagar, Ghaziabad, Uttar Pradesh- 201015 | India | India |
Specification
Description: The present invention relates to the field of natural language processing (NLP), machine learning (ML), and video conferencing platforms. It specifically focuses on the application of advanced algorithms and techniques for sentiment analysis within online video conferencing applications, such as Zoom, enabling the identification and visualization of the emotional states of participants during virtual meetings and webinars. This invention enhances the user experience and provides valuable real-time emotional feedback, which can be utilized for various purposes, including communication enhancement, mental health monitoring, and meeting engagement analysis.
BACKGROUND OF THE INVENTION
Communication is a fundamental aspect of human interaction, and understanding emotions plays a critical role in effective communication. In traditional face-to-face conversations, emotional cues such as facial expressions, gestures, and tone of voice provide insight into the emotional state of individuals, allowing for responsive and empathetic interactions. However, when communication shifts to telephonic or online platforms, particularly during video conferences, the ability to discern emotions accurately diminishes due to limitations in non-verbal cues or reduced visibility.
The COVID-19 pandemic accelerated the adoption of remote work and online communication platforms like Zoom, leading to an increased reliance on virtual meetings for professional, educational, and personal interactions. In such virtual settings, understanding participants' emotions becomes challenging, yet it is crucial for maintaining productivity, engagement, and emotional well-being. Traditional teleconferencing tools focus primarily on visual and auditory communication, without providing any means to assess or monitor the emotional state of participants.
To address these limitations, the present invention introduces a method for sentiment analysis that is specifically designed for video conferencing platforms. It uses advanced NLP and ML techniques to analyze spoken and written communication in real-time, providing hosts and participants with insights into the emotional atmosphere of meetings. This capability can improve the overall communication experience, enhance engagement, and offer valuable feedback for maintaining a positive virtual environment.
OBJECTIVE OF INVENTION
An object of the present invention is to provide an automated method for sentiment analysis during virtual meetings and webinars.
Another object of the present invention is to interpret emotional states based on spoken and written communication during online meetings using NLP and ML techniques.
Yet another object of the present invention is to offer a real-time, user-friendly plugin for teleconferencing platforms like Zoom that provides insights on participants' sentiments.
Another object of the present invention is to assist hosts and organizations in monitoring the emotional atmosphere of meetings, ensuring productive and engaging communication.
Another object of the present invention is to facilitate mental health monitoring by providing an understanding of participants' well-being during online interactions.
SUMMARY OF THE INVENTION
The present invention is a sentiment analysis plugin designed specifically for integration with video conferencing platforms, such as Zoom, to provide real-time emotional insights based on participants' spoken and written communications. The invention addresses the challenge of understanding emotional cues in virtual environments by leveraging advanced machine learning (ML) and natural language processing (NLP) techniques. The plugin captures audio streams and chat messages, processes them, and analyzes sentiment to categorize it into predefined emotional states like positive, negative, neutral, or mixed.
The plugin architecture comprises several modules:
Data Collection Module: Captures audio streams and chat messages using the platform's API. The audio is transcribed into text using speech-to-text technology enhanced with noise-cancellation and speaker diarization techniques.
Preprocessing Module: Cleans, normalizes, and segments text data from audio and chat inputs, ensuring accurate and consistent information is available for analysis.
Sentiment Analysis Module: Uses deep learning models such as Transformers and LSTM networks to interpret and classify the sentiment of the processed text. The module supports multiple languages, enabling it to analyze communication in diverse settings.
Visualization Module: Provides a real-time dashboard within the platform interface, displaying sentiment graphs, participant-specific emotional states, and overall meeting sentiment summaries. This visual representation allows hosts and participants to gauge the emotional atmosphere dynamically.
Report Generation Module: Compiles detailed post-meeting sentiment reports, offering summaries of the overall meeting atmosphere, participant engagement, and historical sentiment trends. These reports assist organizations, educators, and hosts in improving future meetings and monitoring the mental health of participants.
Security and Privacy Module: Ensures data privacy through encryption and provides an opt-in mechanism for participants, complying with global privacy regulations such as GDPR.
Scalability and Integration Features: Designed to operate efficiently in both small and large-scale meetings, the plugin integrates seamlessly with the platform's API, ensuring compatibility across various devices without compromising performance.
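The module flow listed above can be sketched as a minimal orchestration skeleton. This is an illustration only, not the patented implementation; names such as `SentimentPlugin` and `Utterance` are hypothetical, and the analyzer is passed in as a plain callable standing in for the ML model:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    text: str
    source: str  # "audio" (transcribed) or "chat"

class SentimentPlugin:
    """Hypothetical skeleton tying the described modules together."""
    def __init__(self, analyzer):
        # analyzer: text -> (label, confidence), e.g. a transformer model
        self.analyzer = analyzer
        self.timeline = []  # (speaker, label, confidence) per utterance

    def ingest(self, utterance: Utterance):
        """Analyze one utterance and record it for the dashboard/report."""
        label, score = self.analyzer(utterance.text)
        self.timeline.append((utterance.speaker, label, score))
        return label, score

    def meeting_summary(self):
        """Aggregate recorded sentiments into an overall meeting view."""
        if not self.timeline:
            return {"dominant": "neutral", "utterances": 0}
        labels = [label for _, label, _ in self.timeline]
        dominant = max(set(labels), key=labels.count)
        return {"dominant": dominant, "utterances": len(labels)}
```

In a real deployment the analyzer callable would be backed by the deep learning model described in the Sentiment Analysis Module, and `ingest` would be driven by the platform's event callbacks.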
In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of the set of rules and the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the needs of the industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF DRAWINGS
The advantages and features of the present invention will be understood better with reference to the following detailed description and claims taken in conjunction with the accompanying drawings, wherein like elements are identified with like symbols, and in which:
Figure 1 illustrates the flow diagram for the sentiment analysis plugin designed for video conferencing platforms in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
An embodiment of this invention, illustrating its features, will now be described in detail. The words "comprising," "having," "containing," and "including," and other forms thereof are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items.
The terms "first," "second," and the like, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another, and the terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
The invention is a comprehensive sentiment analysis plugin designed to enhance teleconferencing platforms, such as Zoom, by providing real-time emotional insights based on participants' spoken and written communication. This plugin aims to bridge the gap in emotional understanding in virtual environments, offering hosts and participants a deeper connection and improved interaction through data-driven sentiment analysis. The plugin's architecture comprises several interrelated modules, each playing a critical role in collecting, processing, and visualizing sentiment data:
1. Data Collection Module:
This module is responsible for capturing audio streams and chat messages in real-time from the teleconferencing platform using its API. The data collection process occurs in two parallel streams:
Audio Stream Capture: The plugin accesses audio streams transmitted during the video conference. It extracts the voice data of participants and converts this voice input into text using a speech-to-text conversion system. The conversion system is optimized with noise-cancellation algorithms and speaker diarization techniques, ensuring high transcription accuracy even when multiple speakers are present or when background noise is prevalent. The system supports various languages, making it adaptable for multilingual meetings.
Chat Message Capture: The plugin simultaneously extracts chat messages exchanged during the meeting. This includes both individual messages and group messages. All chat content is logged and processed for analysis, ensuring that all forms of communication within the meeting are considered for sentiment analysis.
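The two parallel capture streams described above can be merged as in the following sketch. It is illustrative only: the `transcribe` callable stands in for the noise-cancelled, diarized speech-to-text system, whose concrete engine the specification does not name, and the event tuples stand in for whatever the platform API actually delivers:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Event:
    speaker: str
    text: str
    source: str  # "audio" or "chat"

def collect(audio_chunks: Iterable[Tuple[str, bytes]],
            chat_messages: Iterable[Tuple[str, str]],
            transcribe: Callable[[bytes], str]) -> List[Event]:
    """Merge the audio and chat capture streams into one event list.

    `transcribe` is a placeholder for the speech-to-text converter;
    chat messages are already text and pass through unchanged.
    """
    events = [Event(spk, transcribe(chunk), "audio")
              for spk, chunk in audio_chunks]
    events += [Event(spk, msg, "chat") for spk, msg in chat_messages]
    return events
```

A production version would consume the platform's streaming callbacks rather than finished iterables, but the shape of the merged output is the same.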
2. Preprocessing Module:
Once the data is collected, it is passed to the preprocessing module, where the text (both from chat messages and transcribed audio) is prepared for sentiment analysis. The module performs several functions to ensure that the text data is clean, structured, and ready for the machine learning model:
Text Normalization: The module removes irrelevant characters, expands contractions, and standardizes abbreviations to make the text consistent. For instance, special symbols and emoticons in chat messages are either removed or converted into interpretable text.
Noise Removal: Any non-relevant audio artifacts or background chatter captured during the meeting are filtered out during the transcription process. This step enhances the clarity and quality of the data used for sentiment analysis.
Segmentation: The text is segmented into sentences or smaller, manageable chunks to improve the precision of sentiment analysis. This segmentation ensures that the analysis captures the context of each statement independently, allowing for more accurate emotional interpretation.
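The three preprocessing steps above (normalization, noise removal at transcription time, segmentation) can be illustrated with a small sketch. The emoticon and contraction tables are tiny illustrative subsets, not the ones used by the invention:

```python
import re

# Illustrative subsets; a real system would use much larger tables.
EMOTICONS = {":)": "smile", ":(": "frown", ":D": "laugh"}
CONTRACTIONS = {"can't": "cannot", "won't": "will not", "it's": "it is"}

def normalize(text: str) -> str:
    """Convert emoticons to words, expand contractions, strip stray symbols."""
    for emoticon, word in EMOTICONS.items():
        text = text.replace(emoticon, f" {word} ")
    for contraction, expansion in CONTRACTIONS.items():
        text = re.sub(re.escape(contraction), expansion, text, flags=re.IGNORECASE)
    text = re.sub(r"[^\w\s.!?']", " ", text)  # drop special characters
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

def segment(text: str) -> list:
    """Split normalized text into sentences for per-segment analysis."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
```

Segmenting before classification, as described above, lets each sentence carry its own sentiment label instead of averaging an entire turn into one score.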
3. Sentiment Analysis Module:
This module is the core of the invention, utilizing advanced machine learning (ML) and natural language processing (NLP) techniques to analyze and categorize sentiment in real-time. It is built on a robust architecture that includes the following components:
Machine Learning Model: The sentiment analysis model is based on deep learning architectures such as Transformers (e.g., BERT, GPT) or LSTM networks. These models are pre-trained on large conversational datasets that cover diverse emotional states and expressions, enabling them to interpret the nuanced language used during meetings.
Text Processing and Sentiment Classification: The system processes each segment of text and applies sentiment classification algorithms to determine the emotional state conveyed. It categorizes each segment into predefined sentiment categories like positive, negative, neutral, or mixed emotions. The system assigns a confidence score to each classification, providing a quantitative measure of sentiment intensity.
Contextual Understanding: The model is also trained to consider contextual cues, such as the use of sarcasm, irony, or rhetorical questions, which are often challenging to interpret correctly. This capability enhances the accuracy and reliability of the sentiment analysis results, ensuring that they reflect the true emotional state of the speaker or writer.
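The classification interface described above (a segment in, a category plus confidence score out) can be shown with a toy lexicon classifier. This is only a stand-in for the transformer/LSTM model the specification actually describes; the word lists are invented for illustration and the confidence heuristic is crude by design:

```python
# Tiny illustrative lexicons; the real module uses a trained deep model.
POSITIVE = {"great", "good", "excellent", "happy", "thanks"}
NEGATIVE = {"bad", "terrible", "unhappy", "confusing", "problem"}

def classify(segment: str):
    """Return (label, confidence) in the shape the described module emits.

    Labels: positive, negative, neutral, or mixed.
    """
    words = [w.strip(".,!?").lower() for w in segment.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return "neutral", 1.0
    if pos and neg:
        return "mixed", max(pos, neg) / total
    label = "positive" if pos else "negative"
    return label, total / max(len(words), 1)  # crude intensity proxy
```

Swapping this function for a fine-tuned transformer (e.g. a BERT-based classifier) would not change any downstream module, since only the `(label, confidence)` contract matters.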
4. Visualization Module:
The visualization module integrates directly into the teleconferencing platform's interface, providing an intuitive real-time dashboard that displays sentiment insights. This module offers several key features designed to provide immediate and actionable feedback to users:
Sentiment Graphs and Trends: The dashboard presents a real-time graph of the sentiment trend throughout the meeting. It shows the fluctuations in sentiment as the meeting progresses, highlighting moments of heightened emotional intensity or positive/negative shifts. This graphical representation helps hosts and participants track the emotional pulse of the meeting dynamically.
Participant-Specific Sentiment Analysis: The system can also provide detailed insights for individual participants. It monitors and displays each participant's emotional state, offering visual cues like color-coded icons or sentiment labels (e.g., green for positive, red for negative). This feature allows hosts to identify participants who may need more attention, engagement, or follow-up.
Overall Meeting Sentiment Summary: The plugin aggregates sentiment data to provide an overall summary of the emotional state of the meeting. This summary includes an average sentiment score, the dominant emotion throughout the session, and any significant emotional shifts, helping hosts understand the general atmosphere and engagement levels.
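The dashboard features above reduce to two small computations: a rolling trend value for the graph and a color badge per participant. A minimal sketch (the color scheme and numeric mapping are assumptions, not taken from the patent):

```python
from collections import deque

# Assumed mappings for illustration; green/red matches the example in the text.
COLORS = {"positive": "green", "negative": "red", "neutral": "gray", "mixed": "amber"}
SCORES = {"positive": 1.0, "negative": -1.0, "neutral": 0.0, "mixed": 0.0}

class SentimentTrend:
    """Rolling average of recent sentiment, as plotted on the live graph."""
    def __init__(self, window: int = 20):
        self.window = deque(maxlen=window)  # keeps only the last N labels

    def update(self, label: str) -> float:
        """Record one classified segment; return the current trend value."""
        self.window.append(SCORES[label])
        return sum(self.window) / len(self.window)

def participant_badge(label: str) -> str:
    """Color-coded indicator for a participant's current emotional state."""
    return COLORS.get(label, "gray")
```

The bounded window keeps the graph responsive to recent shifts, which is what makes "moments of heightened emotional intensity" visible as the meeting progresses.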
5. Report Generation Module:
After the meeting, the plugin compiles all the collected and analyzed data into detailed reports, which can be accessed by the host or the organization. The report generation module performs the following functions:
Meeting Summary Report: This report provides a comprehensive overview of the meeting's emotional atmosphere, including sentiment trends, peaks, and shifts observed during the session. It highlights key moments where sentiments significantly changed, allowing hosts to reflect on those parts of the meeting.
Individual Participant Analysis: The report includes specific insights for each participant, detailing their emotional state and engagement level throughout the meeting. This analysis helps identify individuals who might need follow-up or support, promoting proactive engagement and mental health monitoring.
Historical Data and Trend Analysis: The module also supports the storage of past meeting sentiment data, enabling users to track trends over time. This feature is particularly useful for organizations monitoring team dynamics or educational institutions tracking student engagement levels across multiple sessions.
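The report contents described above can be derived from the meeting timeline with a straightforward aggregation. A sketch under the assumption that the timeline is a list of `(participant, label)` pairs (the report fields shown are a simplified subset):

```python
from collections import Counter, defaultdict

def build_report(timeline):
    """Aggregate a meeting timeline of (participant, label) pairs into
    an overall summary plus per-participant insights."""
    per_participant = defaultdict(Counter)
    overall = Counter()
    for participant, label in timeline:
        per_participant[participant][label] += 1
        overall[label] += 1
    return {
        "dominant_emotion": overall.most_common(1)[0][0] if overall else "neutral",
        "participants": {
            name: {
                "dominant": counts.most_common(1)[0][0],
                "utterances": sum(counts.values()),
            }
            for name, counts in per_participant.items()
        },
    }
```

Persisting these report dictionaries per meeting is what enables the historical trend analysis described above.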
6. Security and Privacy Module:
Given the sensitive nature of emotional data, the invention incorporates a robust security and privacy module to protect users' information:
Data Encryption: All audio and text data collected by the plugin are encrypted during transmission and storage using state-of-the-art encryption techniques. This ensures that participant data remains secure and protected from unauthorized access.
Participant Consent Mechanism: The plugin includes an opt-in feature, allowing participants to choose whether they want to share their emotional data. Before a meeting begins, participants are prompted to consent to sentiment analysis, ensuring compliance with privacy regulations and respecting individuals' preferences.
Compliance with Privacy Regulations: The system is designed to adhere to global privacy standards, such as the General Data Protection Regulation (GDPR) and other relevant privacy laws. It ensures that users have control over their data and can opt out of sentiment analysis at any time.
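Two of the safeguards above, the opt-in consent gate and identity protection in stored reports, can be sketched as follows. The keyed-hash pseudonymization is an illustrative stand-in for the full encryption layer, not the patented mechanism:

```python
import hashlib
import hmac

class ConsentRegistry:
    """Opt-in gate: only participants who consent are ever analyzed."""
    def __init__(self):
        self._consented = set()

    def opt_in(self, participant_id: str) -> None:
        self._consented.add(participant_id)

    def opt_out(self, participant_id: str) -> None:
        self._consented.discard(participant_id)  # revocable at any time

    def allowed(self, participant_id: str) -> bool:
        return participant_id in self._consented

def pseudonymize(participant_id: str, key: bytes) -> str:
    """Keyed hash so stored reports can link a participant across meetings
    without exposing the identity (illustrative; transport and storage
    would additionally be encrypted, as the specification requires)."""
    return hmac.new(key, participant_id.encode(), hashlib.sha256).hexdigest()[:12]
```

Gating analysis on `allowed()` before any capture, and storing only pseudonyms plus encrypted payloads, is one way to satisfy the GDPR-style requirements the module targets.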
7. Multilingual Support:
To accommodate a diverse range of users, the plugin is designed with multilingual capabilities. It supports sentiment analysis for multiple languages, enabling global teams and participants from different linguistic backgrounds to benefit from its features. The speech-to-text and NLP models are trained on multilingual datasets, ensuring accurate transcription and sentiment detection for a variety of languages commonly used in teleconferencing.
8. System Scalability and Integration:
The invention is engineered for scalability, ensuring that it can handle small meetings with a few participants to large webinars with thousands of attendees. The plugin is designed to seamlessly integrate with the API of teleconferencing platforms like Zoom without causing performance issues or requiring significant system modifications. The plugin operates as a lightweight add-on, ensuring compatibility and ease of use across various devices, including desktops, laptops, and mobile platforms.
9. Future Enhancement Capabilities:
The plugin is built with future enhancements in mind, allowing for integration with other analytical tools and systems. For instance, it can be expanded to incorporate non-verbal analysis using computer vision techniques, which would enable the system to detect and analyze facial expressions and body language, further enhancing the accuracy of emotional insights. Additionally, the system can be adapted to interface with mental health monitoring applications, providing comprehensive well-being support for organizations and individuals.
By combining these features, the present invention provides a robust, real-time sentiment analysis system tailored for teleconferencing platforms. It not only enhances communication but also supports engagement, well-being, and productivity in a virtual setting, making it an invaluable tool for professional, educational, and personal use.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present invention and its practical application, to thereby enable others skilled in the art to best utilize the present invention and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such are intended to cover the application or implementation without departing from the spirit or scope of the claims of the present invention.
Claims:
1. A method for conducting sentiment analysis during video conferencing sessions, comprising:
a) capturing, in real-time, audio streams and chat messages from participants using an integrated API;
b) converting audio streams to text using a speech-to-text conversion system with noise cancellation and speaker diarization;
c) preprocessing the text data, including normalization, segmentation, and cleaning, to prepare it for analysis;
d) analyzing the processed text using a sentiment analysis model based on deep learning techniques to classify sentiment into categories including positive, negative, neutral, or mixed;
e) displaying the classified sentiment in real-time on a user interface, including sentiment graphs, participant-specific emotional states, and overall meeting summaries;
f) generating post-meeting reports detailing overall sentiment, participant insights, and historical trends;
g) ensuring the privacy and security of data by encrypting information and obtaining participant consent.
2. The method as claimed in claim 1, wherein the speech-to-text conversion system further employs speaker identification and differentiation to attribute spoken text to individual participants.
3. The method as claimed in claim 1, wherein the preprocessing step includes converting emoticons and special characters from chat messages into interpretable sentiment-related information.
4. The method as claimed in claim 1, wherein the sentiment analysis model uses a combination of transformer-based neural networks and recurrent neural networks to achieve high accuracy in sentiment classification.
5. The method as claimed in claim 1, further comprising providing multilingual support during the analysis step to classify sentiment from audio and chat inputs in multiple languages.
6. The method as claimed in claim 1, wherein the real-time display of sentiment data includes color-coded indicators representing the emotional state of each participant for immediate feedback.
7. The method as claimed in claim 1, wherein the post-meeting report generation further includes individual engagement levels and emotional variations for each participant throughout the meeting.
8. The method as claimed in claim 1, wherein the privacy and security step comprises using end-to-end encryption and an anonymization process to protect participant identities.
9. The method as claimed in claim 1, wherein the method further comprises providing the host with an option to configure sentiment analysis settings, including enabling or disabling real-time visualization and reports.
10. The method as claimed in claim 1, further comprising adapting the method for various meeting sizes, scaling efficiently from small meetings to large webinars with thousands of attendees without affecting performance or user experience.
Documents
Name | Date |
---|---|
202411086287-COMPLETE SPECIFICATION [08-11-2024(online)].pdf | 08/11/2024 |
202411086287-DECLARATION OF INVENTORSHIP (FORM 5) [08-11-2024(online)].pdf | 08/11/2024 |
202411086287-DRAWINGS [08-11-2024(online)].pdf | 08/11/2024 |
202411086287-FORM 1 [08-11-2024(online)].pdf | 08/11/2024 |
202411086287-FORM-9 [08-11-2024(online)].pdf | 08/11/2024 |
202411086287-REQUEST FOR EARLY PUBLICATION(FORM-9) [08-11-2024(online)].pdf | 08/11/2024 |