AI SERVER SYSTEM FOR MODEL UPDATES VIA MERGING MULTIPLE INFORMATION SOURCES

ORDINARY APPLICATION

Published

Filed on 26 October 2024

Abstract

The present disclosure introduces an AI server system for model updates via merging multiple information sources 100, designed to enhance machine learning model performance. It utilizes data ingestion module 102 for data collection and preprocessing. The system features a data merging engine 104 to merge data, and model update mechanism 106 for incremental learning and automated hyperparameter tuning. An evaluation and feedback loop 108 monitors model performance with real-time metrics. The other components are user interface 110, privacy-aware data handling 112, real-time data validation layer 114, context-aware data integration 116, collaborative model training support 118, adaptive learning rates 120, multi-lingual and multi-modal data processing 122, real-time data acquisition and predictive data acquisition 124, customizable workflow automation 126, integration with edge computing 128, ensemble model support 130, energy-efficient processing 132, historical data analysis and insights 134, user-specified data privacy settings 136, and integration of domain-specific knowledge 138. Reference Fig 1.

Patent Information

Application ID: 202441081739
Invention Field: COMPUTER SCIENCE
Date of Application: 26/10/2024
Publication Number: 44/2024

Inventors

Name: Addulapuri Ashwitha
Address: Anurag University, Venkatapur (V), Ghatkesar (M), Medchal Malkajgiri DT. Hyderabad, Telangana, India
Country: India
Nationality: India

Applicants

Name: Anurag University
Address: Venkatapur (V), Ghatkesar (M), Medchal Malkajgiri DT. Hyderabad, Telangana, India
Country: India
Nationality: India

Specification

Description: AI Server System for Model Updates via Merging Multiple Information Sources
TECHNICAL FIELD
[0001] The present innovation relates to an AI server system for real-time model updates by merging multiple information sources to enhance machine learning model accuracy and performance.

BACKGROUND

[0002] Artificial Intelligence (AI) systems rely heavily on machine learning models that need continuous updates to maintain accuracy and relevance as data changes over time. One of the primary challenges in this process is the integration of multiple, diverse information sources, including structured data, unstructured text, and streaming data from IoT devices. Conventional model update methods often involve retraining models on isolated datasets, which can be resource-intensive, time-consuming, and unable to fully leverage the potential of all available data. Additionally, users face challenges with data silos, varying data formats, and model drift, where a model's performance deteriorates as the underlying data distribution shifts. Available options, such as manual data merging or ad hoc retraining processes, scale poorly, operate inefficiently, and have difficulty handling complex data types.

[0003] The invention of an AI server system designed to merge multiple information sources addresses these issues by introducing a seamless integration framework for real-time model updates. Unlike existing solutions, this system uses advanced algorithms for data fusion, incremental learning, and automated hyperparameter tuning, significantly reducing the need for complete model retraining. It supports continuous ingestion and preprocessing of data in diverse formats, which ensures the AI models remain up-to-date and effective across various applications such as natural language processing and predictive analytics.
[0004] What sets this invention apart is its ability to handle diverse data streams, perform intelligent data merging, and offer a scalable architecture for handling large volumes of data in real time. Its novelty lies in features such as incremental learning, adaptive learning rates, privacy-aware data handling, and real-time data validation. By solving the inefficiencies of traditional systems, the invention empowers users to maintain high-performing AI models in a dynamic data environment while reducing computational costs and improving overall system scalability.

OBJECTS OF THE INVENTION

[0005] The primary object of the invention is to facilitate real-time updates of machine learning models by merging multiple information sources effectively.

[0006] Another object of the invention is to enhance the accuracy and performance of AI models by integrating structured, unstructured, and streaming data.

[0007] Another object of the invention is to reduce computational resources required for model retraining through the use of incremental learning techniques.

[0008] Another object of the invention is to enable seamless data integration from diverse sources, eliminating issues with data silos and incompatible formats.

[0009] Another object of the invention is to improve decision-making in AI applications by offering more comprehensive datasets through advanced data fusion techniques.

[00010] Another object of the invention is to support scalability, allowing the system to handle large data volumes and growing complexities across various industries.
[00011] Another object of the invention is to offer an automated hyperparameter tuning mechanism, optimizing model performance with minimal user intervention.

[00012] Another object of the invention is to ensure privacy-aware data handling, complying with data protection regulations while maintaining model efficacy.

[00013] Another object of the invention is to provide a user-friendly interface for configuring model updates, monitoring performance, and visualizing data analytics.

[00014] Another object of the invention is to minimize latency in model updates, ensuring that AI applications remain responsive and relevant in real-time environments.


SUMMARY OF THE INVENTION

[00015] In accordance with the different aspects of the present invention, an AI server system for model updates via merging multiple information sources is presented. It is designed for real-time updates of machine learning models by merging multiple information sources, including structured, unstructured, and streaming data. It features components like a data ingestion module, data merging engine, and model update mechanism for incremental learning and automated hyperparameter tuning. The system enhances AI model accuracy, reduces computational demands, and ensures scalability for various applications. It also incorporates privacy-aware data handling and real-time evaluation to maintain model relevance. This innovation addresses key challenges in AI model maintenance and data integration, ensuring high performance in dynamic environments.

[00016] Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments constructed in conjunction with the appended claims that follow.

[00017] It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.

BRIEF DESCRIPTION OF DRAWINGS
[00018] The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.

[00019] Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:

[00020] FIG. 1 is a component-wise drawing of the AI server system for model updates via merging multiple information sources.

[00021] FIG. 2 illustrates the working methodology of the AI server system for model updates via merging multiple information sources.

DETAILED DESCRIPTION

[00022] The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognise that other embodiments for carrying out or practising the present disclosure are also possible.

[00023] The description set forth below in connection with the appended drawings is intended as a description of certain embodiments of AI server system for model updates via merging multiple information sources and is not intended to represent the only forms that may be developed or utilised. The description sets forth the various structures and/or functions in connection with the illustrated embodiments; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimised to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.

[00024] While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.

[00025] The terms "comprises", "comprising", "include(s)", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, or system that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or system. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.

[00026] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings and which are shown by way of illustration-specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.

[00027] The present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.

[00028] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is disclosed, in accordance with one embodiment of the present invention. It comprises data ingestion module 102, data merging engine 104, model update mechanism 106, evaluation and feedback loop 108, user interface 110, privacy-aware data handling 112, real-time data validation layer 114, context-aware data integration 116, collaborative model training support 118, adaptive learning rates 120, multi-lingual and multi-modal data processing 122, real-time data acquisition and predictive data acquisition 124, customizable workflow automation 126, integration with edge computing 128, ensemble model support 130, energy-efficient processing 132, historical data analysis and insights 134, user-specified data privacy settings 136, and integration of domain-specific knowledge 138.

[00029] Referring to Fig. 1, the present disclosure provides details of AI server system for model updates via merging multiple information sources 100. It is a system designed to optimize machine learning model updates by integrating data from structured, unstructured, and streaming sources. The system includes key components such as data ingestion module 102, data merging engine 104, and model update mechanism 106 to streamline the data merging and model updating processes. It further incorporates evaluation and feedback loop 108 to continuously monitor model performance, while privacy-aware data handling 112 ensures compliance with data regulations. The system also features real-time data validation layer 114 to assess data quality and adaptive learning rates 120 for optimizing model training. Additional components such as customizable workflow automation 126 and integration with edge computing 128 enhance scalability and performance.

[00030] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with data ingestion module 102, which is responsible for continuously collecting data from diverse sources such as databases, text files, and IoT devices. This module preprocesses data by cleaning, normalizing, and transforming it into a compatible format. The data ingestion module 102 works closely with the data merging engine 104, ensuring that the ingested data is prepared for merging, allowing for seamless integration into the overall system. It plays a crucial role in ensuring that the AI models receive timely and relevant data for accurate updates.
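
As a purely illustrative sketch (not part of the filed specification), the collection, cleaning, and normalization described for data ingestion module 102 could look like the following Python fragment, assuming tabular input handled with pandas; the function and column names are hypothetical.

```python
# Hypothetical sketch of ingestion and preprocessing for module 102:
# collect a batch from one source, clean it, and normalize numeric features.
import pandas as pd
from sklearn.preprocessing import StandardScaler

def ingest_and_preprocess(csv_path: str, numeric_cols: list[str]) -> pd.DataFrame:
    df = pd.read_csv(csv_path)                              # collect from a single source (a CSV here)
    df = df.drop_duplicates().dropna(subset=numeric_cols)   # basic cleaning
    scaler = StandardScaler()
    df[numeric_cols] = scaler.fit_transform(df[numeric_cols])  # normalization to a standard scale
    return df                                               # compatible format for the merging engine
```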

[00031] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with data merging engine 104, which intelligently merges data from multiple sources using advanced algorithms. This engine ensures that the merged data retains its contextual integrity and relevance by applying data fusion, transformation, and feature engineering techniques. The data merging engine 104 interacts with the data ingestion module 102 to ensure the ingested data is appropriately formatted, and it feeds the unified dataset to the model update mechanism 106 for continuous learning. It ensures data consistency across multiple streams, enhancing the accuracy of AI models.
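
A minimal sketch of such merging, assuming two preprocessed tabular sources that share a join key; the key, column names, and derived feature below are invented for illustration only and do not describe the engine's actual algorithms.

```python
# Hypothetical sketch of data fusion for engine 104: join two sources on a shared key
# and derive one engineered feature from fields present in both.
import pandas as pd

def merge_sources(structured: pd.DataFrame, stream_agg: pd.DataFrame,
                  key: str = "entity_id") -> pd.DataFrame:
    merged = structured.merge(stream_agg, on=key, how="inner", suffixes=("_db", "_stream"))
    if {"value_db", "value_stream"} <= set(merged.columns):
        merged["value_delta"] = merged["value_stream"] - merged["value_db"]  # simple feature engineering
    return merged
```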

[00032] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with model update mechanism 106, which facilitates incremental learning and automated hyperparameter tuning to update machine learning models in real time. This mechanism allows the models to adapt to new data without the need for full retraining, significantly reducing computational resources. The model update mechanism 106 receives the unified dataset from the data merging engine 104 and updates the AI models accordingly, ensuring they remain accurate and up-to-date. It also tracks version control to ensure that previous model versions can be retrieved if needed.
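
One common way to realize incremental learning of this kind is scikit-learn's partial_fit interface; the sketch below, including the small alpha sweep standing in for automated hyperparameter tuning, is an assumption for illustration and not the claimed mechanism.

```python
# Hypothetical sketch for mechanism 106: update a linear model on a new batch without
# full retraining, and pick a regularization strength on a held-out split.
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

def incremental_update(model: SGDClassifier, X_new, y_new, classes):
    model.partial_fit(X_new, y_new, classes=classes)   # incremental learning step
    return model

def pick_alpha(X_train, y_train, X_val, y_val, classes, alphas=(1e-4, 1e-3, 1e-2)):
    best_alpha, best_acc = alphas[0], -1.0
    for a in alphas:                                    # stand-in for automated tuning
        candidate = SGDClassifier(alpha=a, random_state=0)
        candidate.partial_fit(X_train, y_train, classes=classes)
        acc = accuracy_score(y_val, candidate.predict(X_val))
        if acc > best_acc:
            best_alpha, best_acc = a, acc
    return best_alpha
```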

[00033] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with evaluation and feedback loop 108, which monitors the performance of the updated models by assessing real-time metrics such as accuracy, precision, and recall. This component allows for continuous improvement of the models by integrating feedback from users and system performance data. The evaluation and feedback loop 108 works in tandem with the model update mechanism 106 to refine models based on real-world applications and feedback, ensuring that models remain robust and effective in dynamic environments.
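
The metrics named above map directly onto standard scikit-learn calls; the helper below is an illustrative sketch, not the system's actual feedback loop.

```python
# Hypothetical sketch of the real-time metrics tracked by loop 108.
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate(model, X_val, y_val) -> dict:
    preds = model.predict(X_val)
    return {
        "accuracy": accuracy_score(y_val, preds),
        "precision": precision_score(y_val, preds, average="macro", zero_division=0),
        "recall": recall_score(y_val, preds, average="macro", zero_division=0),
    }
```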

[00034] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with user interface 110, which allows data scientists, engineers, and other stakeholders to monitor the system, configure parameters, and visualize data insights. This component provides a user-friendly dashboard for real-time tracking of data ingestion, model updates, and evaluation metrics. The user interface 110 interacts with the evaluation and feedback loop 108, giving users the ability to make informed adjustments based on model performance. It plays a crucial role in improving collaboration and decision-making by offering intuitive tools for system interaction.

[00035] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with privacy-aware data handling 112, which ensures that all data processed through the system complies with data protection regulations. This component uses techniques like data anonymization and secure data transfer protocols to protect sensitive information while maintaining data quality for model updates. Privacy-aware data handling 112 works closely with the data ingestion module 102 and data merging engine 104 to ensure that privacy is maintained throughout the data processing lifecycle, making the system reliable for industries with strict privacy requirements like healthcare and finance.
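
As one hedged example of the anonymization techniques mentioned, direct identifiers could be replaced with salted hashes before merging; the column names and salt handling below are assumptions, and a real deployment would add key management and stronger de-identification.

```python
# Hypothetical pseudonymization sketch for component 112: replace identifier columns
# with salted SHA-256 digests so records can still be joined without exposing raw IDs.
import hashlib
import pandas as pd

def pseudonymize(df: pd.DataFrame, id_cols: list[str], salt: str) -> pd.DataFrame:
    out = df.copy()
    for col in id_cols:
        out[col] = out[col].astype(str).map(
            lambda v: hashlib.sha256((salt + v).encode("utf-8")).hexdigest()
        )
    return out
```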

[00036] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with real-time data validation layer 114, which ensures that only high-quality and reliable data is fed into the system. This layer uses anomaly detection algorithms to filter out erroneous or outlier data before it reaches the data merging engine 104. The real-time data validation layer 114 works closely with the data ingestion module 102 to assess data quality at the point of entry, ensuring that the data used for model updates is accurate and relevant. By maintaining data integrity, this component enhances the overall robustness of the AI models.
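
A minimal sketch of anomaly-based filtering at the point of entry, assuming numeric features and a simple z-score rule; the threshold is arbitrary and any anomaly detector could take its place.

```python
# Hypothetical sketch for validation layer 114: drop rows whose numeric features fall
# outside a z-score bound before they reach the merging engine.
import pandas as pd

def filter_outliers(df: pd.DataFrame, numeric_cols: list[str], z_max: float = 4.0) -> pd.DataFrame:
    z = (df[numeric_cols] - df[numeric_cols].mean()) / df[numeric_cols].std(ddof=0)
    keep = (z.abs() <= z_max).all(axis=1)   # keep rows with all features within bounds
    return df[keep]
```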

[00037] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with context-aware data integration 116, which intelligently prioritizes and merges data based on its contextual relevance. This component ensures that the most relevant and useful data is utilized for model updates, enhancing decision-making processes in real-time environments. The context-aware data integration 116 interacts with the data merging engine 104 to assess the importance of each data source, ensuring that the AI models are updated with the most pertinent information, improving accuracy and adaptability.
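
How relevance is scored is not specified in the disclosure; the sketch below simply assumes a per-source relevance score and orders sources by it before merging, purely as an illustration.

```python
# Hypothetical sketch for component 116: rank candidate sources by an externally
# supplied relevance score and keep only those above a cut-off.
import pandas as pd

def prioritize_sources(frames: dict[str, pd.DataFrame],
                       relevance: dict[str, float],
                       min_relevance: float = 0.5) -> list[pd.DataFrame]:
    ordered = sorted(relevance.items(), key=lambda kv: kv[1], reverse=True)
    return [frames[name] for name, score in ordered if score >= min_relevance]
```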

[00038] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with collaborative model training support 118, which enables multiple users or teams to contribute to the training process. This component facilitates the integration of domain-specific knowledge, allowing experts to provide input that improves model performance. Collaborative model training support 118 works alongside the user interface 110 and model update mechanism 106 to enhance the quality of the models through collective input. This collaborative feature fosters a more dynamic and versatile model development environment.

[00039] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with adaptive learning rates 120, which automatically adjust based on the incoming data characteristics and model performance. This component stabilizes the training process, particularly when dealing with volatile or highly variable data sources. Adaptive learning rates 120 interact with the model update mechanism 106 to optimize the learning process, ensuring that the system responds to changes in data distribution effectively, reducing the risk of model drift. This feature ensures continuous model improvement with minimal manual intervention.
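
One plausible (assumed) rule of this kind reduces the learning rate when recent validation loss stops improving, as sketched below; the patience and decay factor are illustrative constants, not values taken from the disclosure.

```python
# Hypothetical sketch for component 120: decay the learning rate when the last
# `patience` losses show no improvement over earlier history.
def adapt_learning_rate(lr: float, losses: list[float],
                        patience: int = 3, factor: float = 0.5, min_lr: float = 1e-6) -> float:
    if len(losses) > patience and min(losses[-patience:]) >= min(losses[:-patience]):
        lr = max(lr * factor, min_lr)
    return lr
```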

[00040] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with multi-lingual and multi-modal data processing 122, which allows the system to process data from various languages and modalities such as text, audio, and video. This component expands the system's ability to operate in diverse environments and across global markets. The multi-lingual and multi-modal data processing 122 works in conjunction with the data ingestion module 102 to ensure that data from different languages and formats is effectively integrated, enhancing the versatility of the AI models in applications such as natural language processing and multimedia analysis.

[00041] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with real-time data acquisition and predictive data acquisition 124, which proactively identifies and collects data sources most relevant for upcoming model updates. This component uses predictive analytics to determine the value of various data streams, optimizing the data ingestion process. Real-time data acquisition 124 interacts with the data ingestion module 102 to ensure that the most pertinent data is collected in real-time, feeding into the data merging engine 104 for more efficient model training and updates.

[00042] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with customizable workflow automation 126, which enables users to define and automate specific tasks within the data merging and model updating processes. This component reduces the need for manual intervention by allowing users to set up automated workflows tailored to their organizational needs. Customizable workflow automation 126 integrates with the user interface 110, providing flexibility for configuring complex data pipelines and model update routines, making the system highly adaptable and efficient.

[00043] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with integration with edge computing 128, which supports data processing and model updates closer to the data sources, reducing latency and bandwidth usage. This component is particularly useful in real-time applications such as IoT, smart cities, and autonomous vehicles. Integration with edge computing 128 works with the data ingestion module 102 and the model update mechanism 106 to enable faster processing and real-time decision-making, improving the system's responsiveness and scalability.

[00044] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with ensemble model support 130, which allows the system to create and manage multiple machine learning models that can be combined to improve overall predictive performance. This component enables the system to leverage the strengths of various models for more robust and accurate predictions. Ensemble model support 130 works in tandem with the model update mechanism 106, ensuring that different models are updated and integrated seamlessly to enhance the AI's decision-making capabilities.
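
A minimal sketch of combining models by soft voting (probability averaging), assuming each model exposes predict_proba over the same classes; this is one of many possible ensembling strategies and is not stated in the disclosure.

```python
# Hypothetical sketch for component 130: average class probabilities across models
# and predict the highest-scoring class.
import numpy as np

def ensemble_predict(models, X) -> np.ndarray:
    probs = np.mean([m.predict_proba(X) for m in models], axis=0)
    return probs.argmax(axis=1)
```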

[00045] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with energy-efficient processing 132, which minimizes computational resource consumption during data merging and model updates. This component implements strategies to reduce the energy footprint of the system, making it ideal for organizations with sustainability goals. Energy-efficient processing 132 works in conjunction with the model update mechanism 106 and data merging engine 104 to optimize resource use without sacrificing performance, ensuring that the system can scale efficiently while maintaining a low environmental impact.

[00046] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with historical data analysis and insights 134, which enables users to extract meaningful insights from past data trends and model performance metrics. This component helps inform future model updates and data acquisition strategies by analyzing historical data patterns. Historical data analysis and insights 134 integrates with the evaluation and feedback loop 108 to offer valuable feedback on model evolution, improving the system's capacity to adapt to long-term changes in data.

[00047] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with user-specified data privacy settings 136, which gives users control over how their data is handled and merged. This component allows for customization of privacy preferences, ensuring that data governance complies with regulatory requirements. User-specified data privacy settings 136 work closely with privacy-aware data handling 112 to provide robust data protection mechanisms, making the system highly reliable for sectors with strict privacy concerns such as healthcare and finance.

[00048] Referring to Fig. 1, AI server system for model updates via merging multiple information sources 100 is provided with integration of domain-specific knowledge 138, which incorporates expert knowledge bases to enhance data merging and model training. This component allows the system to use contextual and domain-specific insights to improve the accuracy and relevance of AI model predictions. Integration of domain-specific knowledge 138 works in collaboration with the collaborative model training support 118 to refine models with industry-specific information, enabling the system to perform more effectively in specialized applications.


[00049] Referring to Fig 2, there is illustrated method 200 for AI server system for model updates via merging multiple information sources 100. The method comprises:

At step 202, method 200 includes the data ingestion module 102 continuously collecting data from multiple sources, including structured, unstructured, and streaming data;

At step 204, method 200 includes the data ingestion module 102 preprocessing the collected data by cleaning, normalizing, and transforming it into a standardized format;

At step 206, method 200 includes the data merging engine 104 receiving the preprocessed data from the data ingestion module 102 and intelligently merging the data from various sources;

At step 208, method 200 includes the model update mechanism 106 using the merged dataset to update machine learning models through incremental learning, minimizing the need for full retraining;

At step 210, method 200 includes the evaluation and feedback loop 108 monitoring the updated models and assessing their performance using metrics such as accuracy and precision;

At step 212, method 200 includes user interaction through the user interface 110 to review data insights, configure parameters, and adjust system settings based on model performance;

At step 214, method 200 includes the privacy-aware data handling 112 ensuring compliance with data protection regulations by applying anonymization techniques and secure data transfer protocols;

At step 216, method 200 includes the real-time data validation layer 114 filtering out erroneous or low-quality data to ensure only high-quality information is fed into the system;

At step 218, method 200 includes the context-aware data integration 116 prioritizing and merging data based on its relevance to improve the accuracy and decision-making of the updated models;

At step 220, method 200 includes the collaborative model training support 118 enabling multiple users to contribute their expertise to improve the model training process.
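
Purely for orientation, the illustrative helpers sketched in the component descriptions above could be chained into one update cycle roughly matching steps 202 through 210 and 216; the column names, the label column, and the assumption that the stream frame shares only the join key with the batch are all hypothetical and not part of the claimed method.

```python
# Hypothetical end-to-end pass over one batch, reusing the earlier illustrative sketches.
def run_update_cycle(csv_path, stream_df, model, classes, numeric_cols):
    batch = ingest_and_preprocess(csv_path, numeric_cols)   # steps 202-204: collect and preprocess
    batch = filter_outliers(batch, numeric_cols)            # step 216: validate incoming data
    merged = merge_sources(batch, stream_df)                # step 206: merge sources
    X = merged[numeric_cols].to_numpy()                     # assumes stream_df shares only the join key
    y = merged["label"].to_numpy()                          # hypothetical label column
    model = incremental_update(model, X, y, classes)        # step 208: incremental learning
    metrics = evaluate(model, X, y)                         # step 210 (same batch, for brevity)
    return model, metrics
```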

[00050] In the description of the present invention, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "fixed", "attached", "disposed", "mounted", and "connected" are to be construed broadly, and may for example be fixedly connected, detachably connected, or integrally connected, either mechanically or electrically. They may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases by those skilled in the art.

[00051] Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural where appropriate.

[00052] Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the present disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
Claims:
WE CLAIM:
1. An AI server system for model updates via merging multiple information sources 100 comprising:
data ingestion module 102 to continuously collect and preprocess data from multiple sources;

data merging engine 104 to intelligently merge preprocessed data from various sources;

model update mechanism 106 to update machine learning models using incremental learning and automated hyperparameter tuning;

evaluation and feedback loop 108 to monitor model performance and provide real-time feedback;

user interface 110 to allow users to configure and visualize data and model performance;

privacy-aware data handling 112 to ensure secure and compliant data processing;

real-time data validation layer 114 to filter and validate incoming data for accuracy;

context-aware data integration 116 to prioritize and merge data based on contextual relevance;

collaborative model training support 118 to enable multiple users to contribute and improve model training;

adaptive learning rates 120 to automatically adjust model learning rates based on data characteristics;

multi-lingual and multi-modal data processing 122 to handle data in various languages and formats;

real-time data acquisition and predictive data acquisition 124 to proactively collect relevant data for model updates;

customizable workflow automation 126 to automate tasks within the data merging and model updating processes;

integration with edge computing 128 to enable data processing and model updates closer to data sources;

ensemble model support 130 to combine multiple models for improved predictive performance;

energy-efficient processing 132 to minimize resource consumption during model updates;

historical data analysis and insights 134 to extract insights from past data trends for future model updates;

user-specified data privacy settings 136 to provide control over data handling and privacy;

integration of domain-specific knowledge 138 to incorporate expert knowledge into data merging and model training.

2. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein data ingestion module 102 is configured to continuously collect and preprocess structured, unstructured, and streaming data from various sources, ensuring a seamless data flow for model updates.

3. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein data merging engine 104 is configured to intelligently merge preprocessed data using advanced algorithms, ensuring compatibility and contextual relevance across diverse data sources.

4. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein model update mechanism 106 is configured to perform incremental learning and automated hyperparameter tuning, enabling continuous updates of machine learning models without the need for full retraining.

5. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein evaluation and feedback loop 108 is configured to monitor the performance of updated models using real-time metrics, providing feedback to improve decision-making and model efficiency.

6. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein user interface 110 is configured to allow users to interact with the system, monitor data processing, configure merging parameters, and visualize model performance analytics in real-time.

7. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein privacy-aware data handling 112 is configured to ensure compliance with data protection regulations by applying anonymization techniques and secure data transfer protocols throughout the data processing cycle.

8. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein real-time data validation layer 114 is configured to validate and filter incoming data by detecting anomalies and ensuring only high-quality, reliable data is used for model updates.

9. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein context-aware data integration 116 is configured to prioritize and merge data based on its contextual relevance, ensuring the most pertinent data is utilized for enhancing model accuracy and decision-making

10. The AI server system for model updates via merging multiple information sources 100 as claimed in claim 1, wherein the method comprises
data ingestion module 102 preprocessing the collected data by cleaning, normalizing, and transforming it into a standardized format;

data merging engine 104 receiving the preprocessed data from the data ingestion module 102 and intelligently merging the data from various sources;

model update mechanism 106 using the merged dataset to update machine learning models through incremental learning, minimizing the need for full retraining;

evaluation and feedback loop 108 monitoring the updated models and assessing their performance using metrics such as accuracy and precision;

user interaction through the user interface 110 to review data insights, configure parameters, and adjust system settings based on model performance;

privacy-aware data handling 112 ensuring compliance with data protection regulations by applying anonymization techniques and secure data transfer protocols;

real-time data validation layer 114 filtering out erroneous or low-quality data to ensure only high-quality information is fed into the system;

context-aware data integration 116 prioritizing and merging data based on its relevance to improve the accuracy and decision-making of the updated models;

collaborative model training support 118 enabling multiple users to contribute their expertise to improve the model training process.

Documents

Name | Date
202441081739-COMPLETE SPECIFICATION [26-10-2024(online)].pdf | 26/10/2024
202441081739-DECLARATION OF INVENTORSHIP (FORM 5) [26-10-2024(online)].pdf | 26/10/2024
202441081739-DRAWINGS [26-10-2024(online)].pdf | 26/10/2024
202441081739-EDUCATIONAL INSTITUTION(S) [26-10-2024(online)].pdf | 26/10/2024
202441081739-EVIDENCE FOR REGISTRATION UNDER SSI [26-10-2024(online)].pdf | 26/10/2024
202441081739-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-10-2024(online)].pdf | 26/10/2024
202441081739-FIGURE OF ABSTRACT [26-10-2024(online)].pdf | 26/10/2024
202441081739-FORM 1 [26-10-2024(online)].pdf | 26/10/2024
202441081739-FORM FOR SMALL ENTITY(FORM-28) [26-10-2024(online)].pdf | 26/10/2024
202441081739-FORM-9 [26-10-2024(online)].pdf | 26/10/2024
202441081739-POWER OF AUTHORITY [26-10-2024(online)].pdf | 26/10/2024
202441081739-REQUEST FOR EARLY PUBLICATION(FORM-9) [26-10-2024(online)].pdf | 26/10/2024
