A SYSTEM AND A METHOD FOR OPTIMIZING MACHINE LEARNING MODELS
Application Type: Ordinary Application
Status: Published
Filed on: 22 November 2024
Abstract
The present disclosure discloses a system for optimizing machine learning models. The system (100) comprises: a data preprocessing module (102) to ingest and receive raw input data, implement an anomaly detection model (102a) to identify and correct anomalies, generate a filtered dataset, normalize the filtered dataset, and generate a preprocessed dataset; a quantum-inspired optimization module (104) including a model generation unit (104a) to generate multiple candidate machine learning models (104a-1) from the preprocessed dataset, a quantum-inspired parallel evaluation unit (104b) to evaluate the performance of the multiple candidate machine learning models (104a-1), a model selection unit (104c) to select an optimal machine learning model, and a recursive optimization unit (104d) to recursively optimize the candidate machine learning models (104a-1); an adjustment module (106) to dynamically adjust an optimized machine learning model (106a) while continuously monitoring business-related metrics; and a feedback loop module (108) to continuously track the performance of the optimized machine learning models (106a) and automatically initiate their retraining. (Figure 1)
Patent Information
| Field | Value |
|---|---|
| Application ID | 202441091058 |
| Invention Field | COMPUTER SCIENCE |
| Date of Application | 22/11/2024 |
| Publication Number | 48/2024 |
Inventors
| Name | Address | Country | Nationality |
|---|---|---|---|
| VADAKARAI MEENAKSHISUNDARAM PONNIAH | Department of MBA, SRMIST College of Management, Potheri, SRM Nagar, Kattankulathur, Chennai-603203, Tamil Nadu, INDIA | India | India |
| NAZIM SHA SALIM | Department of MBA, SRMIST College of Management, Potheri, SRM Nagar, Kattankulathur, Chennai-603203, Tamil Nadu, INDIA | India | India |
| ADHENKEY KIRUPAKARAN BALAJI | Department of MBA, SRMIST College of Management, Potheri, SRM Nagar, Kattankulathur, Chennai-603203, Tamil Nadu, INDIA | India | India |
| NITHYA SOMASUNDARAM | SRM School of Law, SRM IST, Potheri, SRM Nagar, Kattankulathur, Chennai-603203, Tamil Nadu, INDIA | India | India |
Applicants
| Name | Address | Country | Nationality |
|---|---|---|---|
| SRM Institute of Science and Technology | Kattankulathur, Chennai-603203, Tamil Nadu, India | India | India |
Specification
FIELD
The present disclosure generally relates to the field of machine learning systems. More particularly, the present disclosure relates to a system and a method for optimizing machine learning models.
DEFINITION
As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
The term "outliers" refers to the data points that deviate significantly from the rest of the dataset, appearing as unusually high or low values.
The term "performance error threshold" refers to a predetermined metric that specifies the acceptable limit of deviation in model predictions relative to actual outcomes.
The term "hyperparameter" refers to parameters in machine learning models that govern the learning process and architecture, influencing model behavior and performance.
The above definitions are in addition to those expressed in the art.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
The traditional method for optimizing machine learning models primarily relies on conventional optimization techniques that evaluate one model at a time. These techniques follow a sequential approach, where each model undergoes its own cycle of training and performance assessment before the next is considered. This methodology has been widely adopted due to its straightforward implementation and ease of understanding, allowing the development of machine learning models based on well-established principles. However, traditional artificial intelligence (AI) models are often rigid and require significant manual intervention for adjustments, which can limit their effectiveness in rapidly evolving environments.
Despite their benefits, traditional methods of optimizing machine learning models face several limitations. The foremost issue is that traditional optimization techniques evaluate models one at a time, resulting in prolonged computational time and inefficiencies, particularly in dynamic business environments where multiple metrics are continuously changing. Traditional methods also lack the capability for real-time, dynamic optimization based on business performance metrics, leading to inefficiencies and constraints in the decision-making process. The rigidity of traditional AI models requires significant manual intervention for modifications or retraining, compounding the problem of suboptimal performance. Under traditional methods, it is challenging to achieve optimal model responsiveness and adaptability in rapidly changing conditions.
In short, traditional methods rely on optimization techniques that evaluate only one model at a time, and traditional AI models are often rigid, requiring significant manual intervention for adjustments or retraining.
Therefore, there is felt a need for a system for optimizing machine learning models that alleviates the aforementioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide a system and a method for optimizing machine learning models.
Another object of the present disclosure is to provide a system that reduces noise and enhances model accuracy.
Still another object of the present disclosure is to provide a system that reduces model training time and improves efficiency.
Yet another object of the present disclosure is to provide a system that allows quick adjustments based on changing business needs and improves decision-making.
Still another object of the present disclosure is to provide a system that automates the improvement of AI models.
Yet another object of the present disclosure is to provide a system that ensures high-quality data and more accurate results.
Still another object of the present disclosure is to provide a system that ensures continuous improvement of the AI model through a feedback loop.
Yet another object of the present disclosure is to provide a system that provides businesses a competitive advantage by ensuring decisions based on the latest data.
Still another object of the present disclosure is to provide a system that ensures ongoing effectiveness in environments where information changes frequently.
Yet another object of the present disclosure is to provide a system that reduces the need for constant manual updates.
Still another object of the present disclosure is to provide a system that encourages innovation in AI.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a system for optimizing machine learning models. The system comprises a data preprocessing module, a quantum-inspired optimization module, an adjustment module, and a feedback loop module.
The data preprocessing module is configured to ingest and receive raw input data from multiple sources, and is further configured to implement an anomaly detection model that identifies and corrects outliers or anomalies in the raw input data using predefined rules or interpolation techniques, thereby generating a filtered dataset. The filtered data is then normalized to create a preprocessed dataset for subsequent processing.
The quantum-inspired optimization module includes a model generation unit, a quantum-inspired parallel evaluation unit, a model selection unit, and a recursive optimization unit.
The model generation unit is configured to generate multiple candidate machine learning models with varying hyperparameter configurations based on the preprocessed dataset.
The quantum-inspired parallel evaluation unit is configured to evaluate the performance of the multiple candidate machine learning models simultaneously by mimicking quantum parallelism.
The model selection unit is configured to select an optimal machine learning model from the multiple candidate machine learning models based on performance metrics.
The recursive optimization unit is configured to recursively optimize the candidate machine learning models until a performance error threshold is met.
The adjustment module is configured to dynamically adjust the optimized machine learning models in response to changes in external conditions without disrupting operation, wherein the adjustment module continuously monitors business-related metrics and adjusts internal parameters of the optimized machine learning models, allowing for non-disruptive updates that maintain operational continuity.
The feedback loop module is configured to continuously track the performance of the optimized machine learning models using predefined performance metrics, wherein the feedback loop module automatically initiates retraining of the optimized machine learning models when performance falls below the performance error threshold, collects new data and re-optimizes the optimized machine learning models to maintain long-term optimal performance based on the updated data.
In an embodiment, the data preprocessing module applies min-max scaling or Z-score standardization as part of the data normalization process to ensure uniformity across the dataset.
In an embodiment, the data preprocessing module comprises the anomaly detection model, which is configured to use statistical methods, including Z-score and DBSCAN (Density-Based Spatial Clustering of Applications with Noise), to identify and correct outliers.
In an embodiment, the quantum-inspired optimization module uses the quantum-inspired parallelism to evaluate the candidate machine learning models simultaneously, significantly reducing computational time compared to traditional optimization techniques.
In an embodiment, the quantum-inspired optimization module evaluates the performance of the candidate machine learning models using metrics such as Mean Squared Error (MSE), Cross-Entropy Loss, or F1-score to select the model with the lowest error.
In an embodiment, the adjustment module adjusts model complexity by dynamically modifying the number of decision trees, layers in neural networks, or the learning rate in response to changes in business-related metrics.
In an embodiment, the adjustment module continuously monitors market trends, customer behavior, or external economic factors, and adjusts the optimized machine learning model's internal parameters to maintain high accuracy and responsiveness.
In an embodiment, the feedback loop module is configured to incorporate a self-learning mechanism that allows the optimized machine learning models to learn from previous errors, thereby continuously improving their performance over time.
In an embodiment, the feedback loop module is configured to retrain the optimized machine learning models using the latest available dataset and integrates new features or data patterns identified through continuous monitoring.
The present disclosure further envisages a method for optimizing machine learning models. The method includes the following steps:
• ingesting and receiving, by the data preprocessing module, raw input data from multiple sources and implementing the anomaly detection model to identify and correct outliers or anomalies in the raw input data using predefined rules or interpolation techniques to generate the filtered dataset, then normalizing the filtered data to create a preprocessed dataset;
• generating, by the model generation unit of the quantum-inspired optimization module, the multiple candidate machine learning models with varying hyperparameter configurations based on a preprocessed dataset;
• evaluating, by the quantum-inspired parallel evaluation unit of the quantum-inspired optimization module, the performance of the multiple candidate machine learning models simultaneously by mimicking quantum parallelism;
• selecting, by the model selection unit of the quantum-inspired optimization module, an optimal machine learning model from the multiple candidate machine learning models based on performance metrics;
• recursively optimizing, by the recursive optimization unit of the quantum-inspired optimization module, the candidate machine learning models until a performance error threshold is met;
• dynamically adjusting, by the adjustment module, the optimized machine learning models in response to changes in external conditions without disrupting operation, wherein the adjustment module continuously monitors business-related metrics and adjusts internal parameters of the optimized machine learning models, allowing for non-disruptive updates that maintain operational continuity; and
• continuously tracking, by the feedback loop module, the performance of the optimized machine learning models using predefined performance metrics, wherein the feedback loop module automatically initiates retraining of the optimized machine learning models when performance falls below a specified threshold, collecting new data and re-optimizing the optimized machine learning models to maintain long-term optimal performance based on the updated data.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
A system and a method for optimizing machine learning models of the present disclosure will now be described with the help of the accompanying drawing, in which:
Figure 1 illustrates a block diagram of a system for optimizing machine learning models in accordance with the present disclosure; and
Figure 2A and Figure 2B illustrate a flow chart depicting the steps involved in a method for optimizing machine learning models in accordance with an embodiment of the present disclosure.
LIST OF REFERENCE NUMERALS
100 - System
102 - Data Preprocessing module
102a - Anomaly Detection Model
104 - Quantum-Inspired Optimization Module
104a - Model Generation Unit
104a-1 - Candidate Machine Learning Models
104b - Quantum-Inspired Parallel Evaluation Unit
104c - Model Selection Unit
104d - Recursive Optimization Unit
106 - Adjustment Module
106a - Optimized Machine Learning Models
108 - Feedback Loop Module
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawing.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "including," and "having," are open ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
When an element is referred to as being "engaged to," "connected to," or "coupled to" another element, it may be directly engaged, connected, or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
Traditional methods of optimizing machine learning models face several limitations. The foremost issue is that traditional optimization techniques evaluate models one at a time, resulting in prolonged computational time and inefficiencies, particularly in dynamic business environments where multiple metrics are continuously changing. Traditional methods also lack the capability for real-time, dynamic optimization based on business performance metrics, leading to inefficiencies and constraints in the decision-making process. The rigidity of traditional AI models requires significant manual intervention for modifications or retraining, compounding the problem of suboptimal performance. Under traditional methods, it is challenging to achieve optimal model responsiveness and adaptability in rapidly changing conditions. These issues highlight the need for solutions that are more robust, accurate, and efficient, with real-time monitoring of business metrics.
To address the issues of the existing systems and methods, the present disclosure envisages a system (hereinafter referred to as "system 100") for optimizing machine learning models and a method (hereinafter referred to as "method 200") for optimizing machine learning models. The system 100 will now be described with reference to Figure 1 and method 200 will be described with reference to Figure 2A and Figure 2B.
Referring to Figure 1, the system 100 comprises a data preprocessing module 102, a quantum-inspired optimization module 104, an adjustment module 106, and a feedback loop module 108.
The data preprocessing module 102 is configured to ingest and receive raw input data from multiple sources, and is further configured to implement an anomaly detection model 102a that identifies and corrects outliers or anomalies in the raw input data using predefined rules or interpolation techniques, thereby generating a filtered dataset. The filtered data is then normalized to create a preprocessed dataset for subsequent processing.
In an embodiment, the data preprocessing module 102 applies min-max scaling or Z-score standardization as part of the data normalization process to ensure uniformity across the dataset.
In an embodiment, the data preprocessing module 102 comprises the anomaly detection model 102a, which is configured to use statistical methods, including Z-score and DBSCAN (Density-Based Spatial Clustering of Applications with Noise), to identify and correct outliers.
In an embodiment, a set of preprocessing techniques includes cleaning, transforming, and normalizing the raw input data, wherein the raw input data is riddled with noise, missing values, and anomalies.
In an embodiment, the raw input data is ingested from multiple sources such as databases, APIs, sensors, and user input. The data includes structured formats such as spreadsheets, relational databases, or unstructured formats such as text files, images, or sensor data.
In an embodiment, the anomaly detection model 102a is selected from a group of anomaly detection models consisting of Z-score and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) to identify outliers and anomalies in the dataset. It then removes or corrects these anomalies using predefined rules or interpolation techniques.
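The filing does not provide an implementation of this step; the following is a minimal sketch of how Z-score and DBSCAN outlier flagging, followed by interpolation-based correction, might look for a one-dimensional numeric feature. The function name, thresholds, and eps/min_samples values are our assumptions, not the disclosure's:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def correct_anomalies(values: np.ndarray, z_threshold: float = 3.0) -> np.ndarray:
    """Flag outliers by Z-score and DBSCAN, then repair them by interpolation."""
    z_scores = np.abs((values - values.mean()) / values.std())
    z_outliers = z_scores > z_threshold

    # DBSCAN labels sparse points as noise (-1); eps/min_samples are assumed defaults.
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(values.reshape(-1, 1))
    dbscan_outliers = labels == -1

    outliers = z_outliers | dbscan_outliers
    filtered = values.astype(float)
    filtered[outliers] = np.nan

    # Linear interpolation over the flagged positions, per the "interpolation
    # techniques" mentioned in the disclosure.
    idx = np.arange(len(filtered))
    good = ~np.isnan(filtered)
    filtered[~good] = np.interp(idx[~good], idx[good], filtered[good])
    return filtered
```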
In an embodiment, the data normalization technique includes min-max scaling or z-score standardization to ensure uniformity in the dataset. This transformation enables the AI model to process data more efficiently and accurately.
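For reference, the two normalization techniques named here have the standard definitions below (notation ours, not the filing's), where $\mu$ and $\sigma$ are the feature mean and standard deviation:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \quad \text{(min-max scaling)}, \qquad z = \frac{x - \mu}{\sigma} \quad \text{(Z-score standardization)}$$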
In an embodiment, the preprocessing step ensures that only clean, consistent, and high-quality data is input to the AI model, leading to enhanced accuracy in decision-making and predictions.
The quantum-inspired optimization module 104 is configured to receive the preprocessed data from the data preprocessing module 102 and further includes a model generation unit 104a, a quantum-inspired parallel evaluation unit 104b, a model selection unit 104c, and a recursive optimization unit 104d.
In an embodiment, the quantum-inspired optimization module 104 uses quantum-inspired parallelism to evaluate the candidate machine learning models 104a-1 simultaneously, significantly reducing computational time compared to traditional optimization techniques.
In an embodiment, the quantum-inspired optimization module 104 evaluates the performance of the candidate machine learning models 104a-1 using metrics such as Mean Squared Error (MSE), Cross-Entropy Loss, or F1-score to select the model with the lowest error.
The model generation unit 104a is configured to generate multiple candidate machine learning models 104a-1 with varying hyperparameter configurations based on the preprocessed dataset.
In an embodiment, the hyperparameters include learning rate, number of layers, and tree depth in decision trees.
In an embodiment, the multiple candidate machine learning models 104a-1 are generated based on the input preprocessed data, ensuring that a wide range of configurations is explored.
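The disclosure does not say how the configurations are produced; a plausible minimal sketch, assuming random sampling over the hyperparameters it names (the candidate value ranges are illustrative):

```python
import random

# Hyperparameters named in the disclosure; the candidate values are assumptions.
HYPERPARAMETER_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1],
    "num_layers": [2, 3, 4, 5],
    "tree_depth": [3, 5, 8, 12],
}

def generate_candidates(n_candidates: int = 8) -> list[dict]:
    """Return n candidate hyperparameter configurations drawn at random."""
    return [
        {name: random.choice(values) for name, values in HYPERPARAMETER_SPACE.items()}
        for _ in range(n_candidates)
    ]
```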
The quantum-inspired parallel evaluation unit 104b is configured to evaluate the performance of the multiple candidate machine learning models 104a-1 simultaneously by mimicking quantum parallelism.
The model selection unit 104c is configured to select an optimal machine learning model from the multiple candidate machine learning models 104a-1 based on performance metrics.
In an embodiment, the performance metrics include an error metric, such as Mean Squared Error (MSE) or Cross-Entropy Loss, as well as accuracy, precision, recall, or F1-score.
In an embodiment, the model selection unit 104c is configured to select the model with the lowest error for further optimization.
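For reference, the named error metrics have their standard definitions (notation ours, not the filing's):

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad \mathrm{CE} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c}$$

where $y_i$ is the ground truth, $\hat{y}_i$ the model's prediction, and $C$ the number of classes; the model selection unit picks the candidate that minimizes the chosen metric.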
The recursive optimization unit 104d is configured to recursively optimize the candidate machine learning models 104a-1 until a performance error threshold is met.
In an embodiment, when the error threshold is not met, the model generation unit 104a recursively generates new candidate machine learning models 104a-1 by adjusting hyperparameters, repeating the process until an optimal model is identified.
In an embodiment, the quantum-inspired parallelism utilizes principles derived from quantum computing to enhance computational efficiency in optimization tasks. By enabling the simultaneous evaluation of the multiple candidate machine learning models 104a-1, this approach significantly accelerates the optimization process. It employs probabilistic methods and heuristic techniques to traverse complex solution spaces more effectively, thereby reducing computational time and improving convergence rates in machine learning model training and other optimization scenarios.
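The disclosure does not specify how quantum parallelism is mimicked in practice; the sketch below uses ordinary thread-based concurrency as a classical stand-in, so all candidates are trained and scored at once rather than sequentially. The train_fn and score_fn callables are assumptions supplied by the caller:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_candidates_in_parallel(candidates, train_fn, score_fn, data):
    """Train and score every candidate configuration concurrently.

    train_fn(config, data) -> model and score_fn(model, data) -> float
    are caller-supplied; threads suffice when training releases the GIL.
    """
    def _evaluate(config):
        model = train_fn(config, data)
        return config, score_fn(model, data)

    with ThreadPoolExecutor() as pool:
        # Evaluate all candidates simultaneously instead of one at a time.
        return list(pool.map(_evaluate, candidates))
```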
In an embodiment, the different configurations of hyperparameters include learning rate, number of layers, and tree depth in decision trees.
In an embodiment, the model selection unit 104c measures the performance of each of the candidate machine learning models 104a-1 using error metrics, such as mean squared error or cross-entropy loss, and selects the model with the lowest error for further optimization.
In an embodiment, the recursive optimization unit 104d cooperates with the model generation unit 104a to recursively generate new candidate machine learning models 104a-1 by adjusting hyperparameters, repeating the process until an optimal model is found.
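A minimal sketch of that select-and-recurse loop, assuming `generate` and `evaluate` are callables like those sketched earlier; the bounded number of rounds is our addition to guard against non-convergence, as the filing only states the threshold condition:

```python
def optimize(generate, evaluate, error_threshold: float, max_rounds: int = 20):
    """Regenerate and re-evaluate candidates until the best error meets the threshold."""
    best_config, best_error = None, float("inf")
    for _ in range(max_rounds):
        results = evaluate(generate())                 # [(config, error), ...]
        config, error = min(results, key=lambda r: r[1])
        if error < best_error:
            best_config, best_error = config, error
        if best_error <= error_threshold:              # performance error threshold met
            break
    return best_config, best_error
```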
The adjustment module 106 is configured to dynamically adjust the optimized machine learning models 106a in response to changes in external conditions without disrupting operation, wherein the adjustment module 106 continuously monitors business-related metrics and adjusts internal parameters of the optimized machine learning models 106a, allowing for non-disruptive updates that maintain operational continuity.
In an embodiment, the adjustments are made in real-time without disrupting the AI model's operations. This non-disruptive updating mechanism allows the model to maintain high accuracy and responsiveness without requiring full retraining.
In an embodiment, the external conditions may include but are not limited to market conditions, user behavior, regulatory changes, operational performance, and technological advancement.
In an embodiment, the adjustment module 106 is configured to adjust model complexity by dynamically modifying the number of decision trees, layers in neural networks, or the learning rate in response to changes in business-related metrics.
In an embodiment, the adjustment module 106 continuously monitors market trends, customer behavior, or external economic factors, and adjusts the internal parameters of the optimized machine learning models 106a to maintain high accuracy and responsiveness.
In an embodiment, the business-related metrics include market trends, customer behavior, and sales performance through real-time inputs from sensors, databases, or APIs.
In an embodiment, the internal parameters include learning rates or model complexity, adjusted to better fit evolving conditions.
In an embodiment, the adjustment module 106 adjusts the model's internal parameters as follows:
• if the system detects a downturn in market trends, it reduces model complexity to prevent overfitting; and
• if customer behavior becomes unpredictable, the system increases model complexity to handle more varied data patterns.
In an embodiment, the adjustment module 106 allows the optimized machine learning models 106a to adapt to changing business environments or external factors without retraining from scratch. Based on real-time inputs, the adjustment module 106 dynamically adjusts the model's internal parameters (learning rate, number of decision trees, model complexity): when it detects a downturn in market trends, it reduces model complexity to prevent overfitting, and when customer behavior becomes unpredictable, it increases model complexity to handle more varied data patterns.
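A hedged sketch of these two adjustment rules; the metric names, thresholds, and the particular complexity parameters are illustrative assumptions, not taken from the filing:

```python
def adjust_parameters(params: dict, market_trend: float, behavior_variance: float) -> dict:
    """Nudge model complexity in response to monitored business metrics."""
    adjusted = dict(params)
    if market_trend < 0:
        # Downturn detected: simplify the model to prevent overfitting.
        adjusted["num_trees"] = max(10, params["num_trees"] // 2)
    if behavior_variance > 1.0:
        # Customer behavior unpredictable: add capacity for varied patterns.
        adjusted["num_layers"] = params["num_layers"] + 1
    return adjusted
```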
The feedback loop module 108 is configured to continuously track the performance of the optimized machine learning models 106a using predefined performance metrics, wherein the feedback loop module 108 automatically initiates retraining of the optimized machine learning models 106a when performance falls below the performance error threshold, collects new data and re-optimizes the optimized machine learning models 106a to maintain long-term optimal performance based on the updated data.
In an embodiment, the feedback loop module 108 is configured to incorporate a self-learning mechanism that allows the optimized machine-learning models 106a to learn from previous errors, thereby continuously improving their performance over time.
In an embodiment, the self-learning mechanism may include but is not limited to error analysis, learning from mistakes, model updates, iterative improvements, and performance metrics.
In an embodiment, the feedback loop ensures that retraining is carried out iteratively, with the system 100 continuously improving its performance over time. Each iteration refines the model based on the latest data, preventing degradation and ensuring long-term accuracy.
In an embodiment, the feedback loop module 108 is configured to retrain the optimized machine learning models 106a using the latest available dataset and integrates new features or data patterns identified through continuous monitoring.
In an embodiment, the collection of new data may include real-time user interactions, operational databases, APIs, survey and feedback forms, data warehouses, and logs and monitoring systems.
In an embodiment, the performance of the optimized machine learning model 106a is monitored using predefined performance metrics, including accuracy, precision, recall, or F1-score; when the model's performance drops below a specified threshold, it is flagged for retraining.
In an embodiment, the quantum-inspired optimization module 104 is used to re-optimize the model with the updated dataset.
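A minimal sketch of the monitoring-and-retraining trigger; fetch_metrics, collect_new_data, and reoptimize are assumed hooks (only the thresholding behavior comes from the disclosure), and the polling interval is arbitrary:

```python
import time

def feedback_loop(model, fetch_metrics, collect_new_data, reoptimize,
                  error_threshold: float, poll_seconds: int = 3600):
    """Daemon-style loop: retrain whenever performance falls below threshold."""
    while True:
        metrics = fetch_metrics(model)             # e.g. accuracy, recall, F1-score
        if metrics["f1_score"] < error_threshold:  # performance degraded
            dataset = collect_new_data()           # latest available data
            model = reoptimize(model, dataset)     # re-run module 104 on new data
        time.sleep(poll_seconds)
```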
The disclosed system 100 for optimizing machine learning models further comprises hardware components including one or more processors, memory units, and data storage modules, configured to execute the quantum-inspired parallel evaluation unit 104b and facilitate real-time dynamic adjustments and continuous retraining of machine learning models.
In an embodiment, the processors are configured to support parallel execution of the candidate machine learning models 104a-1 using multi-threaded processing and distributed computing architectures, enabling scalable evaluation of the candidate machine learning models 104a-1.
In an embodiment, the system 100 can include one or more processors and may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processors are configured to fetch and execute computer-readable instructions stored in a memory of the system 100. The memory may store one or more computer-readable instructions or routines, which may be fetched and executed to perform the functions of the system 100. The memory may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like. The functions of the one or more processor(s) may be provided through the use of dedicated hardware as well as hardware capable of executing machine-readable instructions. In other examples, the one or more processors may be implemented by electronic circuitry or a printed circuit board. The one or more processors may be configured to execute functions of various modules of the system 100, such as the data preprocessing module 102 and the quantum-inspired optimization module 104.
In an alternative aspect, the memory may be an external data storage device coupled to the system 100 directly or through one or more offline/online data servers.
In an embodiment, the system 100 further comprises a network interface to receive real-time data inputs from external sources such as databases, APIs, and sensors, which are used by the adjustment module 106 to continuously update model parameters in response to changing external conditions. The network interface may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, transceivers, storage devices, and the like. The network interface may facilitate communication of the system 100 with various devices coupled to the system 100. The network interface may also provide a communication pathway for one or more components of the system 100. Examples of such components include, but are not limited to, processing module(s) and data storage.
The processing modules(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing module(s). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing module(s) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing module(s) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing module(s). In such examples, the system 100 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 100 and the processing resource. In other examples, the processing module(s) may be implemented by electronic circuitry and include the data preprocessing module 102, the quantum-inspired optimization module 104, the adjustment module 106, and the feedback loop module 108.
In an embodiment, the system 100 can be deployed across various industries including finance, healthcare, retail, and manufacturing, and adapt to different business environments by leveraging dynamic adjustment and continuous retraining capabilities.
In another embodiment, the data preprocessing module 102 ensures the quality of input data. This module utilizes an anomaly detection model 102a that is configured to apply advanced statistical techniques such as Z-score analysis and DBSCAN to identify and correct data anomalies. These techniques enable the system to detect outliers and either remove or interpolate them using predefined rules, thus producing a cleaner dataset. After anomaly detection, the data undergoes normalization using min-max scaling or Z-score standardization, which transforms the data into a uniform format, ensuring that all input features are on the same scale and enhancing the performance of machine learning models during training.
In a further embodiment, the system 100 leverages quantum-inspired parallelism for evaluating and selecting optimal machine learning models. The quantum-inspired optimization module 104 consists of a model generation unit 104a, which generates multiple candidate machine learning models 104a-1 with diverse hyperparameter configurations. Instead of evaluating these models sequentially, the system's parallel evaluation unit 104b performs simultaneous evaluations, mimicking quantum parallelism, which drastically reduces the computational time required for this task. Performance metrics such as Mean Squared Error (MSE), Cross-Entropy Loss, or F1-score are used by the model selection unit 104c to choose the model that offers the best performance with the lowest error rate. If none of the models meet the desired performance criteria, the recursive optimization unit 104d re-optimizes the models through multiple iterations until the optimal model is identified.
In yet another embodiment, the feedback loop module 108 ensures the system's long-term performance by continuously monitoring the accuracy and effectiveness of the machine learning models. This module tracks the performance of the optimized models 106a against predefined metrics and automatically triggers retraining when performance degrades below a specified error threshold. The retraining process uses newly available data to update the models and integrate any emerging patterns. Furthermore, the feedback loop incorporates a self-learning mechanism, which enables the system to learn from prior errors and improve over time. This continuous improvement ensures that the machine learning models maintain high accuracy and adapt to evolving datasets and environments.
In another embodiment, the system 100 focuses on optimizing hyperparameters efficiently by employing quantum-inspired techniques. The quantum-inspired optimization module 104 uses quantum parallelism to evaluate multiple candidate machine learning models 104a-1 with different hyperparameter configurations. This parallel evaluation method significantly reduces the computational resources and time required for hyperparameter tuning compared to traditional methods. By evaluating several models simultaneously, the system can quickly identify the model with the optimal hyperparameters that deliver the lowest performance error. The recursive optimization feature allows for continuous improvement until an optimal model is selected.
Figure 2A and Figure 2B illustrate a flow chart depicting the steps involved in a method for optimizing machine learning models in accordance with an embodiment of the present disclosure. The order in which method 200 is described is not intended to be construed as a limitation, and any number of the described method steps may be combined in any order to implement method 200, or an alternative method. Furthermore, method 200 may be implemented by a processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable medium/instructions, or a combination thereof. The method 200 comprises the following steps:
At step 202, the method 200 includes ingesting and receiving, by the data preprocessing module 102, raw input data from multiple sources and implementing the anomaly detection model 102a to identify and correct outliers or anomalies in the raw input data using predefined rules or interpolation techniques to generate the filtered dataset, then normalizing the filtered data to create a preprocessed dataset.
At step 204, the method 200 includes generating, by the model generation unit 104a of the quantum-inspired optimization module 104, the multiple candidate machine learning models 104a-1 with varying hyperparameter configurations based on a preprocessed dataset.
At step 206, the method 200 includes evaluating, by the quantum-inspired parallel evaluation unit 104b of the quantum-inspired optimization module 104, the performance of the multiple candidate machine learning models 104a-1 simultaneously by mimicking quantum parallelism.
At step 208, the method 200 includes selecting, by the model selection unit 104c of the quantum-inspired optimization module 104, an optimal machine learning model from the multiple candidate machine learning models 104a-1 based on performance metrics.
At step 210, the method 200 includes recursively optimizing, by the recursive optimization unit 104d of the quantum-inspired optimization module 104, the candidate machine learning models 104a-1 until a performance error threshold is met.
At step 212, the method 200 includes dynamically adjusting, by the adjustment module 106, the optimized machine learning models 106a in response to changes in external conditions without disrupting operation, wherein the adjustment module 106 continuously monitors business-related metrics and adjusts internal parameters of the optimized machine learning models 106a, allowing for non-disruptive updates that maintain operational continuity.
At step 214, the method 200 includes continuously tracking, by the feedback loop module 108, the performance of the optimized machine learning models 106a using predefined performance metrics, wherein the feedback loop module 108 automatically initiates retraining of the optimized machine learning models 106a when performance falls below a specified threshold, collecting new data and re-optimizing the optimized machine learning models 106a to maintain long-term optimal performance based on the updated data.
In an operative configuration, the system 100 for optimizing machine learning models is designed to enhance model performance through a series of interconnected modules. The system begins with a data preprocessing module 102, which receives raw input data from multiple sources and prepares it for further processing. This module includes an anomaly detection model 102a that identifies and corrects outliers using statistical methods such as Z-score and DBSCAN (Density-Based Spatial Clustering of Applications with Noise). After the anomalies are addressed, the data is normalized using techniques like min-max scaling or Z-score standardization, ensuring uniformity across the dataset. The result is a preprocessed dataset that is optimized for the next steps in the machine-learning process.
Further, the quantum-inspired optimization module 104 leverages quantum-inspired techniques to efficiently generate and evaluate multiple candidate machine learning models. The model generation unit 104a produces various models, each with different hyperparameter configurations. These models are then evaluated simultaneously by the quantum-inspired parallel evaluation unit 104b, which mimics quantum parallelism, significantly reducing the computational time required compared to traditional methods. The model selection unit 104c uses performance metrics such as Mean Squared Error (MSE), Cross-Entropy Loss, or F1-score to choose the most accurate model. If none of the models meet the required performance threshold, the recursive optimization unit 104d re-optimizes the models in an iterative process until an optimal performance level is achieved.
To ensure that the machine learning models remain effective as conditions change, the system incorporates an adjustment module 106. The adjustment module 106 dynamically adjusts the optimized models 106a based on real-world changes in external conditions, such as market trends or customer behavior. By continuously monitoring business-related metrics, the module can modify key internal parameters, such as the number of layers in neural networks or the learning rate, without disrupting the system's operations. This ensures that the models remain highly responsive and accurate in dynamic environments.
Further, the system 100 is equipped with a feedback loop module 108, which continuously monitors the performance of the optimized machine learning models. If the performance drops below a predefined error threshold, the feedback loop automatically initiates the retraining of the models using the latest available dataset. The module also incorporates a self-learning mechanism that allows the models to improve over time by learning from past errors. By integrating new data patterns and retraining models as needed, the feedback loop ensures that the system maintains long-term optimal performance.
Advantageously, the system 100 for optimizing machine learning models represents a significant advancement in accelerated model training, through the quantum-inspired optimization module 104, and in real-time adaptability and responsiveness. The system 100 optimizes machine learning models through quantum-inspired techniques and real-time adjustment based on business metrics. The system 100 reduces computational overhead and shortens the training cycle, enabling quicker deployment and iteration of AI models compared to conventional methods. The real-time dynamic adjustment technique ensures that the model can seamlessly respond to changes in external conditions. The system 100 includes the data preprocessing module 102 to ensure that only high-quality, normalized data is fed into the model, resulting in more accurate predictions and decisions. The system 100 provides a continuous retraining feedback loop that ensures the model evolves with new data, preventing performance degradation over time. These advancements collectively enhance the system's reliability, accuracy, and decision-making, making it a cutting-edge solution for optimizing machine learning models through quantum-inspired techniques.
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or codes on a computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and a method for optimizing machine learning models that:
• allows quick adjustment based on changing business needs and improves decision-making;
• reduces model training time and improves efficiency;
• ensures high-quality data and more accurate results;
• reduces noise and enhances model accuracy;
• automates the improvement of AI models;
• ensures continuous improvement of the AI model through a feedback loop;
• provides businesses a competitive advantage by ensuring decisions based on the latest data;
• reduces the need for constant manual updates; and
• ensures real-time adaptability and responsiveness.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression "at least" or "at least one" suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
Claims:
WE CLAIM
1. A system (100) for optimizing machine learning models, said system (100) comprising:
• a data preprocessing module (102) configured to ingest and receive raw input data from multiple sources, and further configured to implement an anomaly detection model (102a) to identify and correct outliers or anomalies in said raw input data using predefined rules or interpolation techniques so as to generate a filtered dataset, and then normalize the filtered data to create a preprocessed dataset for subsequent processing;
• a quantum-inspired optimization module (104) including:
• a model generation unit (104a) configured to generate multiple candidate machine learning models (104a-1) with varying hyperparameter configurations based on the preprocessed dataset;
• a quantum-inspired parallel evaluation unit (104b) configured to evaluate the performance of said multiple candidate machine learning models (104a-1) simultaneously by mimicking quantum parallelism;
• a model selection unit (104c) configured to select an optimal machine learning model from said multiple candidate machine learning models (104a-1) based on performance metrics; and
• a recursive optimization unit (104d) configured to recursively optimize said candidate machine learning models (104a-1) until a performance error threshold is met;
• an adjustment module (106) configured to dynamically adjust optimized machine learning models (106a) in response to changes in external conditions without disrupting operation, wherein said adjustment module (106) continuously monitors business-related metrics and adjusts internal parameters of said optimized machine learning models (106a), allowing for non-disruptive updates that maintain operational continuity; and
• a feedback loop module (108) configured to continuously track the performance of the optimized machine learning models (106a) using predefined performance metrics, wherein said feedback loop module (108) automatically initiates retraining of the optimized machine learning models (106a) when performance falls below the performance error threshold, collects new data and re-optimizes said optimized machine learning models (106a) to maintain long-term optimal performance based on the updated data.
2. The system (100) as claimed in claim 1, wherein said data preprocessing module (102) applies min-max scaling or Z-score standardization as part of the data normalization process to ensure uniformity across the dataset.
3. The system as claimed in claim 1, wherein said data preprocessing module (102) comprises an anomaly detection model (102a) configured to use statistical methods, including Z-score and DBSCAN (Density-Based Spatial Clustering of Applications with Noise), to identify and correct outliers.
4. The system as claimed in claim 1, wherein said quantum-inspired optimization module (104) uses quantum-inspired parallelism to evaluate candidate machine learning models (104a-1) simultaneously, significantly reducing computational time compared to traditional optimization techniques.
5. The system as claimed in claim 1, wherein said quantum-inspired optimization module (104) evaluates the performance of candidate machine learning models (104a-1) using metrics such as Mean Squared Error (MSE), Cross-Entropy Loss, or F1-score to select the model with the lowest error.
6. The system as claimed in claim 1, wherein said adjustment module (106) adjusts the model complexity by dynamically modifying the number of decision trees, layers in neural networks, or the learning rate in response to changes in business-related metrics.
7. The system as claimed in claim 1, wherein said adjustment module (106) continuously monitors market trends, customer behavior, or external economic factors, and adjusts the internal parameters of the optimized machine learning models (106a) to maintain high accuracy and responsiveness.
8. The system as claimed in claim 1, wherein said feedback loop module (108) incorporates a self-learning mechanism that allows the optimized machine learning models (106a) to learn from previous errors, thereby continuously improving its performance over time.
9. The system as claimed in claim 1, wherein said feedback loop module (108) retrains the optimized machine learning models (106a) using the latest available dataset and integrates new features or data patterns identified through continuous monitoring.
10. A method (200) for optimizing machine learning models, said method (200) comprises the following steps:
• ingesting and receiving, by a data preprocessing module (102), raw input data from multiple sources and implementing an anomaly detection model (102a) to identify and correct outliers or anomalies in said raw input data using predefined rules or interpolation techniques to generate the filtered dataset, then normalizing the filtered data to create a preprocessed dataset;
• generating, by a model generation unit (104a) of a quantum-inspired optimization module (104), multiple candidate machine learning models (104a-1) with varying hyperparameter configurations based on a preprocessed dataset;
• evaluating, by a quantum-inspired parallel evaluation unit (104b) of said quantum-inspired optimization module (104), the performance of said multiple candidate machine learning models (104a-1) simultaneously by mimicking quantum parallelism;
• selecting, by a model selection unit (104c) of said quantum-inspired optimization module (104), an optimal machine learning model from said multiple candidate machine learning models (104a-1) based on performance metrics;
• recursively optimizing, by a recursive optimization unit (104d) of said quantum-inspired optimization module (104), the candidate machine learning models (104a-1) until a performance error threshold is met;
• dynamically adjusting, by an adjustment module (106), optimized machine learning models (106a) in response to changes in external conditions without disrupting operation, wherein said adjustment module (106) continuously monitors business-related metrics and adjusts internal parameters of said optimized machine learning models (106a), allowing for non-disruptive updates that maintain operational continuity; and
• continuously tracking, by a feedback loop module (108), the performance of said optimized machine learning models (106a) using predefined performance metrics, wherein said feedback loop module (108) automatically initiates retraining of the optimized machine learning models (106a) when performance falls below a specified threshold, collecting new data and re-optimizing said optimized machine learning models (106a) to maintain long-term optimal performance based on the updated data.
Dated this 22nd day of November, 2024
_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA - 25
of R.K.DEWAN & CO.
Authorized Agent of Applicant
TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, AT CHENNAI
Documents
| Name | Date |
|---|---|
| 202441091058-FORM-26 [23-11-2024(online)].pdf | 23/11/2024 |
| 202441091058-COMPLETE SPECIFICATION [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-DECLARATION OF INVENTORSHIP (FORM 5) [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-DRAWINGS [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-EDUCATIONAL INSTITUTION(S) [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-EVIDENCE FOR REGISTRATION UNDER SSI [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-FORM 1 [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-FORM 18 [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-FORM FOR SMALL ENTITY(FORM-28) [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-FORM-9 [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-PROOF OF RIGHT [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-REQUEST FOR EARLY PUBLICATION(FORM-9) [22-11-2024(online)].pdf | 22/11/2024 |
| 202441091058-REQUEST FOR EXAMINATION (FORM-18) [22-11-2024(online)].pdf | 22/11/2024 |