
SENTIMENTS AND MOOD ANALYSIS FROM MUSIC THROUGH DEEP LEARNING


ORDINARY APPLICATION

Published


Filed on 26 October 2024

Abstract

ABSTRACT "SENTIMENTS AND MOOD ANALYSIS FROM MUSIC THROUGH DEEP LEARNING" The impact a person experiences after going through a piece of text, an opinion, a statement, or a piece of music can be described as sentiment, emotion, or mood. It may be positive (including happiness, joy, love, inspiration, gratitude, pride, excitement, etc.), negative (including sadness, anger, betrayal, disgust, fear, etc.), or neutral, where the person shows no reaction. Analysis of music covers various aspects, such as rhythm, lyrics, pitch, tempo, and the kinds of modulations used in the voice while rendering. Deep learning provides powerful tools for analyzing these aspects, from simple audio features to high-level musical structures, and can be applied to a wide range of music analysis tasks, including genre classification, music generation, recommendation, and more. Figure 1

Patent Information

Application ID: 202431081817
Invention Field: ELECTRONICS
Date of Application: 26/10/2024
Publication Number: 45/2024

Inventors

Name | Address | Country | Nationality
Randeep Palit | School of Computer Applications, Kalinga Institute of Industrial Technology (Deemed to be University), Patia, Bhubaneswar, Odisha, India 751024 | India | India
Jnana Ranjan Mohanty | School of Computer Applications, Kalinga Institute of Industrial Technology (Deemed to be University), Patia, Bhubaneswar, Odisha, India 751024 | India | India
Satya Ranjan Dash | School of Computer Applications, Kalinga Institute of Industrial Technology (Deemed to be University), Patia, Bhubaneswar, Odisha, India 751024 | India | India

Applicants

Name | Address | Country | Nationality
Kalinga Institute of Industrial Technology (Deemed to be University) | Patia, Bhubaneswar, Odisha, India 751024 | India | India

Specification

Description:
TECHNICAL FIELD
[0001] The present invention relates to the field of Deep Learning and automated systems, and more particularly, the present invention relates to the sentiments and mood analysis from music through deep learning.
BACKGROUND ART
[0002] The following discussion of the background of the invention is intended to facilitate an understanding of the present invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known, or part of the common general knowledge in any jurisdiction as of the application's priority date. Any publication details provided in this background are taken only as references for describing the problems, general terminologies, or principles of science and technology in the associated prior art.
[0003] Music can set a mood or create a mental atmosphere in which a person experiences multiple emotions or a change of sentiment. He or she can choose any music track according to mood, as emotional expression has been regarded as one of the most important criteria for the aesthetic value of music (Juslin, 2013). Music has even been described as a "language of the emotions" by some authors (Cooke, 1959).
[0004] Sentiment and mood analysis is a technique used to determine a person's state of mind after he or she goes through a statement, text, or opinion, which manifests in the reviews, suggestions, selections, and choices that person makes when deciding in that situation or environment. As music is a very important and integral part of any culture, it has been used worldwide to showcase or manifest major ideas and occasions.
[0005] Sentiment and mood analysis in music helps in music creation, provides inputs to professional artists, aids in understanding audience needs, supports therapeutic applications, and enables research and innovative findings in the fields of psychology, neuroscience, and computer science.
[0006] Other algorithms and methodologies can be used for sentiment and mood classification, such as Robert Thayer's traditional model of mood, the valence-arousal (V-A) model of emotion, the audio mood classification (AMC) method, and music emotion recognition (MER) models using regression methods. However, all of these require constant monitoring and technical expertise, and their accuracy cannot be guaranteed.
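The valence-arousal (V-A) model mentioned above places emotions on two axes: valence (positive vs. negative) and arousal (energetic vs. calm). A minimal sketch of how a predicted (valence, arousal) pair can be mapped to a Thayer-style mood quadrant; the function name, ranges, and labels are illustrative assumptions, not taken from the specification:

```python
def mood_from_valence_arousal(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair in [-1, 1] x [-1, 1] to one of
    the four Thayer-style mood quadrants."""
    if valence >= 0 and arousal >= 0:
        return "happy/exuberant"   # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/anxious"     # negative valence, high arousal
    if valence < 0:
        return "sad/depressed"     # negative valence, low arousal
    return "calm/content"          # positive valence, low arousal

print(mood_from_valence_arousal(0.7, 0.6))    # a bright, fast track
print(mood_from_valence_arousal(-0.5, -0.4))  # a slow, minor-key track
```

A regression-based MER model would output the (valence, arousal) pair; the quadrant lookup then turns it into a discrete mood label.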
[0007] In light of the foregoing, there is a need for sentiment and mood analysis from music through deep learning that overcomes the problems prevalent in the prior art associated with the traditionally available methods and systems, and that can be used with the presently disclosed technique with or without modification.
[0008] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies, and the definition of that term in the reference does not apply.
OBJECTS OF THE INVENTION
[0009] The principal object of the present invention is to overcome the disadvantages of the prior art by providing Sentiments and mood analysis from music through deep learning.
[0010] Another object of the present invention is to provide Sentiments and mood analysis from music through deep learning that understands audience requirements.
[0011] Another object of the present invention is to provide sentiments and mood analysis from music through deep learning that provides inputs for music artists and industry professionals.
[0012] Another object of the present invention is to provide Sentiments and mood analysis from music through deep learning that is helpful for music creators.
[0013] Another object of the present invention is to provide Sentiments and mood analysis from music through deep learning that provides therapeutic applications and research and innovative findings in the fields of psychology, neuroscience and computer science.
[0014] The foregoing and other objects of the present invention will become readily apparent upon further review of the following detailed description of the embodiments as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0015] The present invention relates to sentiments and mood analysis from music through deep learning. The use of deep learning models and techniques is a key aspect of this work. The proposed approach is highly valuable if it faithfully realizes the idea behind the work. It can be time-saving, effective, and efficient.
[0016] Novel aspects of the invention: it uses weakly supervised or unsupervised learning techniques; it ensures fairness and transparency and addresses biases in these models; and it supports continuous learning and adaptation to evolving music trends, user preferences, and emotional semantics over time, which supports the long-term sustainability and relevance of sentiment analysis systems in dynamic environments.
[0017] Environmental issues: the invention is completely environmentally friendly; rather, it will be a boon to all who are involved with music.
[0018] Innovative features: innovative features of this research include fine-grained emotion detection, which detects more nuanced emotions such as nostalgia, excitement, and tranquility; temporal and cross-modal analysis of music over time and across variations; real-time analysis for music events or interactive installations; and continuous improvement through dynamic adaptation to user feedback.
[0019] Utilities/Applications: the versatility and potential impact of this application make it helpful for content creation and production, music therapy for treating mental health issues and cognitive disorders, event planning and entertainment, educational tools and learning platforms, and keeping up with trends in social media, market research, and advertising.
[0020] While the invention has been described and shown with reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF DRAWINGS
[0021] So that the manner in which the above-recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0022] These and other features, benefits, and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein:
[0023] Figure 1: Recommended playlist architecture of Music through Deep Learning.
DETAILED DESCRIPTION OF THE INVENTION
[0024] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, which are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and the detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims.
[0025] As used throughout this description, the word "may" is used in a permissive sense (i.e. meaning having the potential to), rather than the mandatory sense, (i.e. meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein are solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers, or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like are included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
[0026] In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element, or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element, or group of elements, and vice versa.
[0027] The present invention is described hereinafter by various embodiments with reference to the accompanying drawing, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, several materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
[0028] The present invention relates to Sentiments and mood analysis from music through deep learning.
[0029] The schematic diagram breaks the music analysis process down into distinct deep learning components and provides an overview of each component's role. Depending on the specific task and architecture, additional components or variations may exist.
As the diagram depicts, raw music data, which can be in the form of waveforms, MIDI files, etc., is used as the input for music analysis. Preprocessing of this raw data takes place using audio signal processing techniques, after which relevant features are extracted from the preprocessed music data to represent the essential characteristics of the music. The extracted features are then fed into a deep learning model for analysis.
[0030] After analysis with the various deep learning techniques, predictions for sentiment and mood detection and genre classification are made, and a recommended playlist is generated accordingly, as shown in Figure 1.
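The pipeline described above (raw data, preprocessing and feature extraction, model prediction, playlist generation) can be sketched end to end. This is a toy illustration under stated assumptions: the features (RMS energy, zero-crossing rate), the thresholds, and the rule-based stand-in for the trained deep learning model are all hypothetical, chosen only to make the data flow concrete:

```python
import math

def extract_features(samples):
    """Toy feature extraction: RMS energy and zero-crossing rate stand
    in for the richer features (tempo, pitch, MFCCs) a real pipeline
    would compute from the preprocessed audio."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return {"rms": rms, "zcr": zcr}

def predict_mood(features):
    """Stand-in for the trained model: loud, busy audio maps to an
    energetic mood; quiet audio maps to a calm one."""
    if features["rms"] > 0.5 and features["zcr"] > 0.1:
        return "energetic"
    return "calm"

def recommend(tracks):
    """Group track names by predicted mood to form playlists."""
    playlists = {}
    for name, samples in tracks.items():
        mood = predict_mood(extract_features(samples))
        playlists.setdefault(mood, []).append(name)
    return playlists

# Two synthetic "waveforms": a loud alternating signal and a quiet one.
tracks = {
    "track_a": [0.9 if i % 2 == 0 else -0.9 for i in range(100)],
    "track_b": [0.05] * 100,
}
print(recommend(tracks))  # → {'energetic': ['track_a'], 'calm': ['track_b']}
```

In the actual invention, `predict_mood` would be a trained deep network and the features would come from audio signal processing of waveforms or MIDI files, but the component boundaries mirror those in Figure 1.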
[0031] Novel aspects of the invention: it uses weakly supervised or unsupervised learning techniques; it ensures fairness and transparency and addresses biases in these models; and it supports continuous learning and adaptation to evolving music trends, user preferences, and emotional semantics over time, which supports the long-term sustainability and relevance of sentiment analysis systems in dynamic environments.
[0032] Environmental issues: the invention is completely environmentally friendly; rather, it will be a boon to all who are involved with music.
[0033] Innovative features: innovative features of this research include fine-grained emotion detection, which detects more nuanced emotions such as nostalgia, excitement, and tranquility; temporal and cross-modal analysis of music over time and across variations; real-time analysis for music events or interactive installations; and continuous improvement through dynamic adaptation to user feedback.
[0034] Utilities/Applications: the versatility and potential impact of this application make it helpful for content creation and production, music therapy for treating mental health issues and cognitive disorders, event planning and entertainment, educational tools and learning platforms, and keeping up with trends in social media, market research, and advertising.
[0035] Various modifications to these embodiments are apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is anticipated to embrace all such alternatives, modifications, and variations that fall within the scope of the present invention and the appended claims.

Claims:
CLAIMS
We Claim:
1) A system for sentiment and mood analysis from music using deep learning, comprising:
- a module for receiving raw music data in various formats including waveforms and MIDI files;
- a preprocessing module that applies audio signal processing techniques to the raw music data to extract relevant features representing the essential characteristics of the music;
- a deep learning model trained to analyze the extracted features, wherein the model predicts sentiments, moods, and genres from the music data; and
- a recommendation engine that generates personalized playlists based on the predicted sentiment, mood, and genre classifications.
2) The system as claimed in claim 1, wherein the deep learning model utilizes weakly supervised or unsupervised learning techniques for improved sentiment and mood prediction accuracy.
3) The system as claimed in claim 1, wherein the deep learning model is trained to detect fine-grained emotions such as nostalgia, excitement, and tranquility, providing more nuanced emotion detection from the analyzed music data.
4) The system as claimed in claim 1, wherein temporal analysis of the music is performed to detect variations in sentiment and mood over time, allowing for the dynamic adaptation of mood predictions.
5) A method for real-time sentiment and mood analysis of music, the method comprising:
- receiving raw music data and preprocessing it to extract relevant features;
- feeding the preprocessed data into a deep learning model;
- predicting sentiments, moods, and genres from the music in real-time; and
- generating a recommended playlist based on the analysis for real-time events or interactive installations.
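The real-time method of claim 5 can be sketched as processing the incoming audio stream in fixed-size chunks and re-predicting a mood label per chunk, so the playlist can adapt as a live event evolves. The chunk size, the mean-power feature, and the threshold below are assumptions for illustration only:

```python
def stream_moods(samples, chunk_size=50):
    """Slice an incoming sample stream into fixed-size chunks and
    predict a mood label for each chunk, approximating real-time
    analysis of a live performance or interactive installation."""
    moods = []
    for start in range(0, len(samples) - chunk_size + 1, chunk_size):
        chunk = samples[start:start + chunk_size]
        energy = sum(s * s for s in chunk) / chunk_size  # mean power
        moods.append("energetic" if energy > 0.25 else "calm")
    return moods

# A quiet passage followed by a loud one.
signal = [0.1] * 50 + [0.8, -0.8] * 25
print(stream_moods(signal))  # → ['calm', 'energetic']
```

A production system would replace the energy threshold with the trained deep learning model of claim 1, invoked per chunk with low enough latency for live use.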

Documents

Name | Date
202431081817-COMPLETE SPECIFICATION [26-10-2024(online)].pdf | 26/10/2024
202431081817-DECLARATION OF INVENTORSHIP (FORM 5) [26-10-2024(online)].pdf | 26/10/2024
202431081817-DRAWINGS [26-10-2024(online)].pdf | 26/10/2024
202431081817-EDUCATIONAL INSTITUTION(S) [26-10-2024(online)].pdf | 26/10/2024
202431081817-EVIDENCE FOR REGISTRATION UNDER SSI [26-10-2024(online)].pdf | 26/10/2024
202431081817-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-10-2024(online)].pdf | 26/10/2024
202431081817-FORM 1 [26-10-2024(online)].pdf | 26/10/2024
202431081817-FORM FOR SMALL ENTITY(FORM-28) [26-10-2024(online)].pdf | 26/10/2024
202431081817-FORM-9 [26-10-2024(online)].pdf | 26/10/2024
202431081817-POWER OF AUTHORITY [26-10-2024(online)].pdf | 26/10/2024
202431081817-REQUEST FOR EARLY PUBLICATION(FORM-9) [26-10-2024(online)].pdf | 26/10/2024
