LOWER LIMB MOTION INTENT RECOGNITION SYSTEM BASED ON SENSOR FUSION AND FUZZY MULTITASK LEARNING


Application Type: Ordinary Application
Status: Published
Filed on 28 October 2024

Abstract

This invention describes a novel lower limb motion intent recognition system that utilizes sensor fusion and fuzzy multitask learning to achieve high accuracy, real-time responsiveness, and adaptability. The system integrates multiple sensor modalities, employs advanced sensor fusion techniques, and incorporates a feedback mechanism for continuous learning and adaptation, making it suitable for applications in prosthetics, exoskeletons, rehabilitation, and robotics.

Patent Information

Application ID: 202411081957
Invention Field: BIO-MEDICAL ENGINEERING
Date of Application: 28/10/2024
Publication Number: 45/2024

Inventors

Name: TIYAS SARKAR
Address: LOVELY PROFESSIONAL UNIVERSITY, JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA.
Country: India
Nationality: India

Name: DR. MANIK RAKHRA
Address: LOVELY PROFESSIONAL UNIVERSITY, JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA.
Country: India
Nationality: India

Applicants

Name: LOVELY PROFESSIONAL UNIVERSITY
Address: JALANDHAR-DELHI G.T. ROAD, PHAGWARA, PUNJAB-144 411, INDIA.
Country: India
Nationality: India

Specification

Description:

FIELD OF THE INVENTION
This invention relates to the field of assistive technology, specifically to motion recognition for lower limb movements. More particularly, it concerns systems that combine data from multiple sensor modalities (accelerometers, gyroscopes, magnetometers, and EMG) through sensor fusion techniques with advanced machine learning algorithms, specifically fuzzy multitask learning, to recognize a user's intended movement accurately and reliably. This technology is particularly applicable in prosthetics, exoskeletons, rehabilitation, and robotics.
BACKGROUND OF THE INVENTION
Accurate and reliable recognition of lower limb motion intent is crucial for the successful implementation of assistive devices and systems designed to enhance mobility and improve the quality of life for individuals with limited mobility. Existing systems often rely on single-sensor approaches, such as using only accelerometers or electromyography (EMG). These methods are inherently limited in their ability to provide a comprehensive understanding of complex human movement due to noise and inconsistencies in sensor data. Single-sensor approaches are susceptible to errors caused by variations in user movement patterns, environmental factors (e.g., changes in lighting, surface conditions), and individual physiological differences. The inherent limitations of single-sensor systems significantly reduce their accuracy and reliability.
Furthermore, traditional machine learning methods employed in motion recognition often struggle with the inherent ambiguities and complexities of human movement, particularly in scenarios involving dynamic transitions between different movement patterns (e.g., transitioning from walking to running). These traditional methods frequently require extensive training datasets, which can be costly and time-consuming to obtain. Moreover, single-task learning algorithms, commonly used in existing motion recognition systems, are ill-suited for handling the diverse and dynamically changing nature of human movement. These algorithms typically require retraining for every new task or movement pattern, making them inflexible and difficult to adapt to real-world scenarios. The lack of real-time responsiveness and adaptive learning capabilities further limits the effectiveness of existing technologies in providing seamless and intuitive mobility assistance.
This invention addresses these limitations by employing a more robust and sophisticated approach that integrates multiple sensor modalities, advanced sensor fusion techniques, and a novel fuzzy multitask learning algorithm to accurately and reliably recognize lower limb motion intent in real-time.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with reference to the accompanying drawings.
This invention presents a novel Lower Limb Motion Intent Recognition system that leverages sensor fusion and fuzzy multitask learning to achieve superior accuracy, real-time responsiveness, and adaptability in recognizing lower limb motion intent. The system integrates multiple sensor modalities (accelerometers, gyroscopes, magnetometers, and EMG) to capture a comprehensive view of lower limb movement dynamics. Advanced sensor fusion algorithms combine data from these sensors to create a robust and noise-resistant dataset, mitigating the limitations of individual sensors and improving overall accuracy. A novel fuzzy multitask learning algorithm enables the system to simultaneously learn and adapt to multiple movement patterns, handling ambiguities and variations in user movement effectively. This algorithm continuously refines its parameters based on real-time feedback, allowing the system to dynamically adapt to changes in the user's physical condition or movement patterns. The system provides real-time feedback, enhancing the user experience and ensuring seamless interaction with assistive devices such as prosthetics and exoskeletons.
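As a purely illustrative aid (not part of the specification), the four stages outlined above can be pictured as the minimal Python sketch below; the class names, method names, and data fields are assumptions chosen for readability, not the patented implementation.

```python
# Illustrative sketch only: how the four stages described in the summary
# (acquisition -> fusion -> fuzzy multitask classification -> feedback)
# might be wired together. All names and interfaces are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SensorFrame:
    """One synchronized reading from the lower-limb sensor array."""
    accel: List[float]  # accelerometer, m/s^2
    gyro: List[float]   # gyroscope, rad/s
    mag: List[float]    # magnetometer, microtesla
    emg: List[float]    # EMG channel envelopes, normalized 0..1


class MotionIntentPipeline:
    """Chains pluggable fusion, classification, and feedback components."""

    def __init__(self, fusion, classifier, feedback):
        self.fusion = fusion          # e.g. Kalman/complementary-filter module
        self.classifier = classifier  # e.g. fuzzy multitask classifier
        self.feedback = feedback      # adapts classifier parameters online

    def step(self, frame: SensorFrame) -> Dict[str, float]:
        features = self.fusion.update(frame)      # fused feature vector
        intent = self.classifier.infer(features)  # e.g. {"walk": 0.8, "stairs": 0.1}
        self.feedback.observe(features, intent)   # continuous adaptation hook
        return intent
```

Each component sits behind a small interface so that, for example, the fusion stage could swap a complementary filter for a Kalman filter without touching the classifier, mirroring the modular structure elaborated in the detailed description below.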
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: PRINCIPLE OF THE PMMG SENSOR
FIGURE 2: STRUCTURE OF THE PROPOSED MOTION INTENT RECOGNITION SYSTEM
FIGURE 3: DATA FLOW CHART OF THE PROPOSED MOTION INTENT RECOGNITION SYSTEM
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
Various exemplary embodiments of the disclosure are described herein with reference to the accompanying drawings. It should be noted that the embodiments are described in such detail as to clearly communicate the disclosure. However, the amount of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", "third", and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The Lower Limb Motion Intent Recognition system comprises several key components: a sensor array for data acquisition, a sensor fusion module for data preprocessing and integration, a fuzzy multitask learning algorithm for motion intent classification, and a feedback mechanism for continuous adaptation. The sensor array strategically positions multiple sensor types (accelerometers, gyroscopes, magnetometers, EMG) on the lower limbs to capture a comprehensive dataset of movement data. Data acquisition is performed in real-time, capturing high-frequency sensor readings.
The sensor fusion module performs crucial preprocessing steps on raw sensor data. This includes filtering (e.g., using low-pass and high-pass filters) to eliminate high-frequency noise and enhance data consistency; normalization to scale data from different sensors to a common range, improving data comparability; and feature extraction to identify key features (e.g., peak acceleration, angular position, muscle activation thresholds) indicative of motion intent. These pre-processed data streams from different sensors are then fused using advanced algorithms, such as Kalman filters or complementary filters. This fusion process combines data from multiple sensors, mitigating individual sensor limitations and improving robustness against noise and inconsistencies.
The core of the system's intelligence resides in its novel fuzzy multitask learning algorithm. Unlike traditional machine learning methods that often require extensive retraining for each new task or movement pattern, this algorithm employs fuzzy logic to handle the complexities and ambiguities inherent in human movement. Fuzzy logic allows the system to handle overlapping or uncertain motion states, significantly improving its adaptability and accuracy. Furthermore, the multitask learning capability enables the system to learn and adapt to multiple movement patterns simultaneously, enhancing learning efficiency and reducing the need for extensive retraining. This capability is crucial for handling diverse and dynamically changing user movement patterns.
The system incorporates a continuous feedback loop that refines the fuzzy multitask learning algorithm based on user input and environmental factors. This adaptive mechanism allows the system to dynamically adapt to changes in the user's physical condition, movement patterns, and environmental conditions (e.g., changes in lighting, surface conditions). This feedback mechanism is essential for maintaining real-time responsiveness and robustness in real-world scenarios.
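To make two of the steps above more concrete, the sketch below shows (a) a complementary filter blending a gyroscope-integrated joint angle with an accelerometer-derived angle, and (b) a triangular fuzzy membership function of the kind that can express overlapping motion states such as walking versus running. This is a minimal illustration under assumed parameter values (blend coefficient, cadence breakpoints), not the patented algorithm.

```python
# Illustrative sketch only: complementary-filter fusion and a triangular
# fuzzy membership function. Parameter values are assumptions for the example.

ALPHA = 0.98  # blend factor: trust the gyro short-term, the accelerometer long-term


def complementary_filter(angle_prev: float, gyro_rate: float,
                         accel_angle: float, dt: float) -> float:
    """Fuse gyro integration (responsive but drifting) with accel tilt (noisy but absolute)."""
    gyro_angle = angle_prev + gyro_rate * dt  # integrate angular rate over the time step
    return ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle


def triangular_membership(x: float, a: float, b: float, c: float) -> float:
    """Degree (0..1) to which x belongs to a fuzzy set that peaks at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


if __name__ == "__main__":
    # Example: a cadence of 135 steps/min belongs partly to "walk" and partly to "run",
    # which is exactly the kind of overlap fuzzy logic is used to express.
    cadence = 135.0
    walk = triangular_membership(cadence, 60.0, 105.0, 150.0)
    run = triangular_membership(cadence, 120.0, 170.0, 220.0)
    print(f"walk={walk:.2f}, run={run:.2f}")  # both memberships are nonzero near a transition
```

In a full system, membership degrees for several such features (cadence, peak acceleration, EMG activation) would feed the multitask classifier, and the feedback loop would adjust breakpoints such as the assumed 105 and 170 steps/min peaks as the user's gait changes.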


Claims:

1. A lower limb motion intent recognition system, comprising a sensor array for acquiring real-time data from multiple sensor modalities including accelerometers, gyroscopes, magnetometers, and electromyography (EMG) sensors.
2. The system, as claimed in Claim 1, further comprising a sensor fusion module that preprocesses sensor data, including filtering, normalization, and feature extraction, and integrates the processed data to create a unified representation of lower limb movement.
3. The system, as claimed in Claim 2, further comprising a fuzzy multitask learning algorithm for classifying lower limb motion intent, said algorithm employing fuzzy logic to handle ambiguities and uncertainties in human movement.
4. The system, as claimed in Claim 3, wherein said fuzzy multitask learning algorithm is capable of simultaneously learning and adapting to multiple lower limb motion patterns.
5. The system, as claimed in Claim 4, further comprising a feedback mechanism for continuous adaptation of the fuzzy multitask learning algorithm based on real-time data and user input.
6. The system, as claimed in Claim 5, wherein said system provides real-time feedback and is capable of integrating with assistive devices such as prosthetics and exoskeletons.
7. The system, as claimed in Claim 6, wherein said system is configured to operate in real-time, providing quick and reliable recognition of lower limb motion intent.
8. The system, as claimed in Claim 7, wherein said system utilizes a user-friendly interface for monitoring system performance, adjusting settings, and providing feedback.

Documents

Name | Date
202411081957-COMPLETE SPECIFICATION [28-10-2024(online)].pdf | 28/10/2024
202411081957-DECLARATION OF INVENTORSHIP (FORM 5) [28-10-2024(online)].pdf | 28/10/2024
202411081957-DRAWINGS [28-10-2024(online)].pdf | 28/10/2024
202411081957-EDUCATIONAL INSTITUTION(S) [28-10-2024(online)].pdf | 28/10/2024
202411081957-EVIDENCE FOR REGISTRATION UNDER SSI [28-10-2024(online)].pdf | 28/10/2024
202411081957-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [28-10-2024(online)].pdf | 28/10/2024
202411081957-FORM 1 [28-10-2024(online)].pdf | 28/10/2024
202411081957-FORM FOR SMALL ENTITY(FORM-28) [28-10-2024(online)].pdf | 28/10/2024
202411081957-FORM-9 [28-10-2024(online)].pdf | 28/10/2024
202411081957-POWER OF AUTHORITY [28-10-2024(online)].pdf | 28/10/2024
202411081957-PROOF OF RIGHT [28-10-2024(online)].pdf | 28/10/2024
202411081957-REQUEST FOR EARLY PUBLICATION(FORM-9) [28-10-2024(online)].pdf | 28/10/2024
