FEDERATED LEARNING SYSTEM FOR AI MODELS WITH NON-IID DATA

ORDINARY APPLICATION

Published

Filed on 11 November 2024

Abstract

The present disclosure introduces a federated learning system for decentralized AI model training 100 optimized for non-identically distributed (Non-IID) data across clients. Key components are adaptive learning rate adjustment mechanism 102, which adjusts learning rates per client for enhanced model stability, and personalized model update protocol 104, which tailors updates to specific client data. Weighted aggregation strategy 106 assigns update weights based on data quality, while communication efficiency optimizer 108 reduces bandwidth usage. Data distribution analysis tool 110 evaluates client data diversity, guiding parameter adaptation, and privacy-preserving mechanisms 112 secure data exchanges with differential privacy. Scalability framework 116 supports efficient large-scale operation across diverse clients, and hierarchical federated learning architecture 118 organizes clients into clusters. Some additional components are federated transfer learning integration 124, offline and online learning integration 136, feedback loop 138 for client insights, and multi-objective optimization framework 140. Reference Fig 1

Patent Information

Application ID: 202441086927
Invention Field: COMPUTER SCIENCE
Date of Application: 11/11/2024
Publication Number: 46/2024

Inventors

Name: Ade Mahesh Babu
Address: Anurag University, Venkatapur (V), Ghatkesar (M), Medchal Malkajgiri DT. Hyderabad, Telangana, India
Country: India
Nationality: India

Applicants

Name: Anurag University
Address: Venkatapur (V), Ghatkesar (M), Medchal Malkajgiri DT. Hyderabad, Telangana, India
Country: India
Nationality: India

Specification

Description: DETAILED DESCRIPTION

[00021] The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognise that other embodiments for carrying out or practising the present disclosure are also possible.

[00022] The description set forth below in connection with the appended drawings is intended as a description of certain embodiments of federated learning system for AI models with non-IID data and is not intended to represent the only forms that may be developed or utilised. The description sets forth the various structures and/or functions in connection with the illustrated embodiments; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimised to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.

[00023] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.

[00024] The terms "comprises", "comprising", "include(s)", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, or system that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or system. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.

[00025] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings and which are shown by way of illustration-specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.

[00026] The present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.

[00027] Referring to Fig. 1, federated learning system for AI models with non-IID data 100 is disclosed, in accordance with one embodiment of the present invention. It comprises adaptive learning rate adjustment mechanism 102, personalized model update protocol 104, weighted aggregation strategy 106, communication efficiency optimizer 108, data distribution analysis tool 110, privacy-preserving mechanisms 112, client collaboration interface 114, scalability framework 116, hierarchical federated learning architecture 118, dynamic client participation management system 120, non-IID data handling algorithms 122, federated transfer learning integration 124, end-to-end encryption for model updates 126, automated hyperparameter tuning system 128, client diversity metrics dashboard 130, collaborative knowledge transfer mechanism 132, cross-domain federated learning support 134, offline and online learning integration 136, federated learning with feedback loop 138, multi-objective optimization framework 140, and AI model evaluation metrics standardization 142.

[00028] Referring to Fig. 1, the present disclosure provides details of federated learning system for AI models with Non-IID data 100. It is a framework designed to improve collaborative AI learning across diverse client data sources while preserving data privacy and enhancing model robustness. The federated learning system for decentralized AI model training 100 may be provided with key components such as adaptive learning rate adjustment mechanism 102, personalized model update protocol 104, and weighted aggregation strategy 106 to address non-uniform data challenges. The system incorporates communication efficiency optimizer 108 and privacy-preserving mechanisms 112 to reduce communication overhead and maintain data security. It also features scalability framework 116 and hierarchical federated learning architecture 118 to support large-scale deployments and structured collaboration. Additional components such as non-IID data handling algorithms 122 and federated transfer learning integration 124 further enhance learning accuracy and adaptability across clients.

[00029] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with adaptive learning rate adjustment mechanism 102, which dynamically tunes the learning rate of each client based on the characteristics of its local data distribution. This component enables stable convergence across varied data environments by lowering the learning rate in highly diverse datasets and increasing it in more uniform datasets. The adaptive learning rate adjustment mechanism 102 works in close association with personalized model update protocol 104 to ensure that updates are tailored to individual client requirements. Together, they enhance the overall model's performance and ensure more accurate representations across diverse datasets.
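The disclosure does not fix a formula for this per-client adjustment. The sketch below is one illustrative realization, assuming Shannon entropy of the local label histogram as the diversity measure; the `base_lr` and `min_scale` parameters are hypothetical, and the direction of adjustment (a lower, more cautious rate for highly diverse clients) follows the rationale stated above.

```python
import math

def label_entropy(label_counts):
    """Shannon entropy (in bits) of a client's label histogram."""
    total = sum(label_counts.values())
    probs = [c / total for c in label_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def adaptive_learning_rate(label_counts, num_classes, base_lr=0.1, min_scale=0.25):
    """Scale a base learning rate down for clients whose local labels are
    highly diverse, and up toward base_lr for more uniform, skewed clients."""
    diversity = label_entropy(label_counts) / math.log2(num_classes)  # 1.0 = maximally diverse
    return base_lr * (min_scale + (1.0 - min_scale) * (1.0 - diversity))

# A client holding mostly one class trains with a larger rate;
# a client with evenly mixed classes trains more cautiously.
lr_mixed = adaptive_learning_rate({0: 50, 1: 50}, num_classes=2)
lr_skewed = adaptive_learning_rate({0: 95, 1: 5}, num_classes=2)
```

Other diversity measures (feature-space statistics, gradient variance) would slot into the same scaling rule.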

[00030] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with personalized model update protocol 104, which customizes the model updates based on each client's data distribution, preserving critical features relevant to local data. This component plays a pivotal role in enhancing model generalization and reducing biases by retaining specific patterns unique to each client. The personalized model update protocol 104 collaborates with weighted aggregation strategy 106 to contribute unique updates that are meaningfully integrated into the global model, enhancing adaptability across diverse populations.

[00031] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with weighted aggregation strategy 106, which assigns weights to client updates based on their data quality and relevance, ensuring that high-quality updates have a stronger influence on the global model. This component enhances model robustness by reducing the impact of noisy or irrelevant updates. Weighted aggregation strategy 106 works synergistically with data distribution analysis tool 110 to evaluate client data characteristics and fine-tune update weights accordingly, promoting balanced learning across clients.
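A minimal sketch of quality-weighted aggregation, assuming client updates arrive as plain parameter lists and that `quality_weights` are supplied externally (for example, by data distribution analysis tool 110); this is an illustration of the idea, not the claimed strategy itself.

```python
def weighted_aggregate(client_updates, quality_weights):
    """Combine per-client parameter vectors into one global update,
    weighting each client by its data-quality score (weights normalized)."""
    total = sum(quality_weights)
    n_params = len(client_updates[0])
    global_update = [0.0] * n_params
    for update, w in zip(client_updates, quality_weights):
        for i, p in enumerate(update):
            global_update[i] += (w / total) * p
    return global_update

# The higher-quality client (weight 3) pulls the global update toward it.
agg = weighted_aggregate([[1.0, 2.0], [3.0, 6.0]], quality_weights=[1.0, 3.0])
```

With equal weights this reduces to plain federated averaging; the quality weighting is what damps noisy or irrelevant contributions.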

[00032] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with communication efficiency optimizer 108, which minimizes the frequency and size of updates by employing model compression and asynchronous update mechanisms. This component significantly reduces bandwidth requirements and improves training efficiency, especially in resource-limited settings. The communication efficiency optimizer 108 interacts with privacy-preserving mechanisms 112 to ensure that model updates remain both efficient and secure, balancing speed and data protection in the federated network.
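Model compression here could take many forms; one common choice, shown below purely as a hedged example, is top-k sparsification, in which only the k largest-magnitude entries of an update cross the network as (index, value) pairs.

```python
def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of an update vector."""
    ranked = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)
    kept = sorted(ranked[:k])
    return [(i, update[i]) for i in kept]

def densify(sparse_update, length):
    """Rebuild the dense vector on the server, zero-filling dropped entries."""
    dense = [0.0] * length
    for i, v in sparse_update:
        dense[i] = v
    return dense

# Four parameters shrink to two (index, value) pairs on the wire.
compressed = top_k_sparsify([0.01, -0.9, 0.05, 0.7], k=2)
restored = densify(compressed, length=4)
```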

[00033] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with data distribution analysis tool 110, which analyzes the data distribution across clients to identify Non-IID patterns and assist in tailoring training strategies. This tool is essential for informing the adaptive learning rate adjustment mechanism 102 and weighted aggregation strategy 106, enabling the system to adapt effectively to distributional disparities. The data distribution analysis tool 110 works closely with the scalability framework 116 to ensure consistent model performance as client numbers grow, supporting large-scale deployments.
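One simple way such a tool might quantify Non-IID-ness, offered purely as an illustration, is the total-variation distance between a client's label distribution and the pooled global distribution.

```python
def label_distribution(counts, num_classes):
    """Normalize a label-count dict into a probability vector over classes."""
    total = sum(counts.get(c, 0) for c in range(num_classes))
    return [counts.get(c, 0) / total for c in range(num_classes)]

def non_iid_score(client_counts, global_counts, num_classes):
    """Total-variation distance between client and global label distributions:
    0.0 means IID-like, 1.0 means completely disjoint label support."""
    p = label_distribution(client_counts, num_classes)
    q = label_distribution(global_counts, num_classes)
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# A client holding 90% of class 0 against a balanced global pool.
score = non_iid_score({0: 90, 1: 10}, {0: 50, 1: 50}, num_classes=2)
```

Scores like this could feed both the learning-rate adjustment and the aggregation weights described above.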
[00034] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with privacy-preserving mechanisms 112, which implement techniques like differential privacy and secure multi-party computation to safeguard client data throughout the learning process. This component ensures compliance with stringent data protection standards while allowing collaborative model training. Privacy-preserving mechanisms 112 work seamlessly with communication efficiency optimizer 108 to ensure secure and efficient model update exchanges, preserving client confidentiality without sacrificing performance.
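Clipping an update and adding calibrated Gaussian noise is the standard building block of differentially private federated averaging; the sketch below assumes hypothetical `clip_norm` and `noise_multiplier` settings and is not the disclosed mechanism itself.

```python
import math
import random

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.1, seed=None):
    """Clip an update to L2 norm <= clip_norm, then add Gaussian noise with
    std = noise_multiplier * clip_norm (the Gaussian mechanism)."""
    rng = random.Random(seed)
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]
    sigma = noise_multiplier * clip_norm
    return [v + rng.gauss(0.0, sigma) for v in clipped]

# An update of norm 5 is clipped to norm 1, then noised before leaving the client.
noisy = dp_sanitize([3.0, 4.0], clip_norm=1.0, noise_multiplier=0.5, seed=42)
```

The privacy guarantee depends on the noise multiplier, sampling rate, and number of rounds, which a full accountant would track.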

[00035] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with client collaboration interface 114, a user-friendly platform that enables clients to exchange insights, feedback, and best practices within the federated learning network. This component plays a vital role in fostering a collaborative environment, allowing clients to enhance their individual models by learning from others' experiences. The client collaboration interface 114 supports the dynamic client participation management system 120, enabling real-time engagement and continuous improvement of the federated learning process.

[00036] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with scalability framework 116, designed to support the efficient scaling of the federated learning network as the number of clients increases. This framework accommodates clients with varying data sizes and computational resources, ensuring balanced participation across the network. The scalability framework 116 is tightly integrated with hierarchical federated learning architecture 118 to organize clients into structured clusters, promoting efficient local model training and aggregation for large-scale applications.

[00037] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with hierarchical federated learning architecture 118, which clusters clients based on data similarity or geographical location, enhancing the efficiency of local model training. This structure reduces the computational burden on central servers by performing partial aggregations within clusters. Hierarchical federated learning architecture 118 works in concert with weighted aggregation strategy 106 to effectively integrate updates across different levels of the hierarchy, improving training efficiency and model quality.
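The two-level scheme described here can be sketched as cluster-local averaging followed by size-weighted averaging of the cluster summaries; cluster membership is assumed to be given (by data similarity or geography), and the code is an illustration rather than the claimed architecture.

```python
def average(vectors):
    """Element-wise mean of equal-length parameter vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def hierarchical_aggregate(clusters):
    """Each cluster first averages its own clients locally; the server then
    averages the cluster summaries weighted by cluster size."""
    cluster_means = [average(c) for c in clusters]
    sizes = [len(c) for c in clusters]
    total = sum(sizes)
    dim = len(cluster_means[0])
    return [sum(m[i] * s for m, s in zip(cluster_means, sizes)) / total
            for i in range(dim)]

# The result matches a flat average over all clients, but only one summary
# per cluster crosses the network to the central server.
global_model = hierarchical_aggregate([
    [[1.0], [3.0]],   # cluster A: two clients
    [[5.0]],          # cluster B: one client
])
```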

[00038] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with dynamic client participation management system 120, which selectively manages client involvement based on their data characteristics and performance metrics. This component optimizes resource utilization by prioritizing high-quality contributions, ensuring that the global model benefits from the most relevant data. The dynamic client participation management system 120 collaborates with data distribution analysis tool 110 to maintain consistent model quality across the federated network, adapting to client capabilities and data quality.

[00039] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with non-IID data handling algorithms 122, which are specialized algorithms tailored to manage Non-IID data distributions effectively. These algorithms include techniques for data augmentation, synthetic data generation, and balancing strategies to enhance model robustness in diverse data environments. Non-IID data handling algorithms 122 work closely with adaptive learning rate adjustment mechanism 102 to adjust learning strategies as per data distribution, ensuring model stability and performance.
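As an illustration of the balancing strategies mentioned, the sketch below oversamples minority classes with replacement until every class matches the majority-class count; it is a naive stand-in for the disclosed algorithms, not a description of them.

```python
import random

def oversample_balance(samples, labels, seed=0):
    """Balance a local dataset by resampling minority classes (with
    replacement) up to the majority-class count."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(v) for v in by_label.values())
    out_x, out_y = [], []
    for y, xs in sorted(by_label.items()):
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

# A 3-vs-1 skewed client dataset becomes 3-vs-3 after balancing.
bal_x, bal_y = oversample_balance(["a", "b", "c", "d"], [0, 0, 0, 1])
```

Synthetic-data generation or augmentation would replace the `rng.choice` duplication step with a generator.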

[00040] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with federated transfer learning integration 124, enabling clients to leverage pre-trained models and adapt them to their unique data distributions. This component accelerates the learning process, especially for clients with limited data, enhancing overall model accuracy. Federated transfer learning integration 124 works in conjunction with personalized model update protocol 104 to ensure models remain relevant to client-specific data, improving generalization across diverse clients.

[00041] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with end-to-end encryption for model updates 126, a security feature ensuring that all model updates exchanged between clients and the server are encrypted. This component is vital for protecting sensitive information and aligns with privacy-preserving mechanisms 112 to maintain high data security throughout the federated learning process. End-to-end encryption for model updates 126 ensures that only authorized clients can access model data, enhancing overall system trust.

[00042] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with automated hyperparameter tuning system 128, which optimizes model performance by adjusting parameters based on real-time feedback and model performance metrics. This system reduces the need for manual intervention, enabling more efficient and effective model tuning. Automated hyperparameter tuning system 128 integrates with adaptive learning rate adjustment mechanism 102 to dynamically fine-tune training processes, enhancing the accuracy and stability of the global model.
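A minimal plateau-based tuning rule, assuming a per-round validation loss is available as the real-time feedback signal; `patience`, `decay`, and `floor` are illustrative hyperparameters, not values from the disclosure.

```python
def tune_learning_rate(lr, loss_history, patience=2, decay=0.5, floor=1e-4):
    """Halve the learning rate once the loss has failed to improve for
    `patience` consecutive rounds; never drop below `floor`."""
    if len(loss_history) <= patience:
        return lr
    best_before = min(loss_history[:-patience])
    if all(l >= best_before for l in loss_history[-patience:]):
        return max(lr * decay, floor)
    return lr

# Loss has stalled for two rounds -> rate is halved; still improving -> unchanged.
lr_after_plateau = tune_learning_rate(0.1, [1.0, 0.8, 0.8, 0.81])
lr_improving = tune_learning_rate(0.1, [1.0, 0.8, 0.6, 0.5])
```

A fuller tuner would search other hyperparameters (batch size, aggregation weights) with the same feedback loop.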

[00043] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with client diversity metrics dashboard 130, a visual analytics tool that provides insights into client data diversity, performance metrics, and contribution quality. This dashboard allows stakeholders to assess federated learning strategies and identify areas for improvement. Client diversity metrics dashboard 130 works alongside data distribution analysis tool 110 to monitor client engagement and guide strategy adjustments, ensuring optimal federated learning outcomes.

[00044] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with collaborative knowledge transfer mechanism 132, which facilitates knowledge sharing among clients, enabling them to benefit from the experiences and insights of others. This component supports enhanced learning while maintaining data privacy, as clients can learn without directly sharing data. Collaborative knowledge transfer mechanism 132 is integrated with client collaboration interface 114 to promote a cooperative learning environment across the federated network.

[00045] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with cross-domain federated learning support 134, which enables collaborative model training across different industries or domains, adhering to industry-specific privacy standards. This component supports the generalization of AI models, enabling clients from diverse fields to enhance model robustness and applicability. Cross-domain federated learning support 134 is designed to interact with privacy-preserving mechanisms 112 to meet regulatory requirements across industries.

[00046] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with offline and online learning integration 136, allowing both batch (offline) and real-time (online) learning processes, providing flexibility to clients based on application needs. This dual-mode capability ensures models remain current and adaptable, especially in time-sensitive applications. Offline and online learning integration 136 works in tandem with communication efficiency optimizer 108 to manage update frequency effectively, balancing real-time and batch training.

[00047] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with federated learning with feedback loop 138, an integrated feedback mechanism allowing clients to provide feedback on global model updates. This feedback loop enables continuous model refinement, as clients can suggest adjustments based on unique data insights. Federated learning with feedback loop 138 operates alongside client diversity metrics dashboard 130 to track and adjust model quality as per client needs, fostering an adaptable and responsive learning environment.

[00048] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with multi-objective optimization framework 140, allowing simultaneous optimization of multiple training objectives such as accuracy, training time, and communication costs. This framework balances stakeholder priorities, ensuring model training meets diverse requirements.
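One standard way to balance such competing objectives, shown here as an assumption rather than the claimed framework, is weighted scalarization: reward accuracy and penalize normalized cost terms, with weights and budgets chosen by stakeholders.

```python
def multi_objective_score(accuracy, train_time_s, comm_mb,
                          weights=(0.6, 0.2, 0.2),
                          time_budget_s=600.0, comm_budget_mb=100.0):
    """Scalarize accuracy, training time, and communication cost into a
    single score to maximize; costs are normalized against budgets."""
    w_acc, w_time, w_comm = weights
    return (w_acc * accuracy
            - w_time * min(train_time_s / time_budget_s, 1.0)
            - w_comm * min(comm_mb / comm_budget_mb, 1.0))

# A faster, cheaper configuration can beat a slightly more accurate one.
score_a = multi_objective_score(accuracy=0.90, train_time_s=300, comm_mb=20)
score_b = multi_objective_score(accuracy=0.92, train_time_s=600, comm_mb=100)
```

Pareto-front methods are an alternative when a single weighting cannot capture stakeholder priorities.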

[00049] Referring to Fig.1, federated learning system for decentralized AI model training 100 is provided with AI model evaluation metrics standardization 142, which establishes a consistent framework for evaluating AI models trained on Non-IID data across diverse clients. This component ensures that model performance metrics, such as accuracy, precision, and convergence rates, are uniformly defined and applied, allowing for reliable assessment across clients. The AI model evaluation metrics standardization 142 plays a critical role in promoting transparency and comparability by creating benchmarks that account for data diversity and varying computational resources among clients. It interworks with data distribution analysis tool 110 to align performance measures with the unique data characteristics observed in each client. Additionally, AI model evaluation metrics standardization 142 supports multi-objective optimization framework 140 by providing standardized feedback, enabling the system to maintain consistent quality and fairness in federated learning outcomes.

[00050] Referring to Fig. 4, there is illustrated method 200 for federated learning system for decentralized AI model training 100. The method comprises:
At step 202, method 200 includes initiating federated learning by gathering initial client data distributions using data distribution analysis tool 110;
At step 204, method 200 includes analysing the data diversity identified by data distribution analysis tool 110 to inform adaptive learning rate adjustment mechanism 102 and weighted aggregation strategy 106;
At step 206, method 200 includes setting personalized model update protocol 104 for each client, based on data analysed in step 204, to tailor model updates for client-specific data distributions;
At step 208, method 200 includes beginning local training on each client with learning rates dynamically adjusted by adaptive learning rate adjustment mechanism 102 according to the client's data characteristics;
At step 210, method 200 includes clients sharing model updates (instead of raw data) with the server, using communication efficiency optimizer 108 to compress updates and manage bandwidth usage;
At step 212, method 200 includes aggregating these model updates at the server using weighted aggregation strategy 106, which assigns weights based on data quality as evaluated by data distribution analysis tool 110;
At step 214, method 200 includes privacy-preserving mechanisms 112 ensuring the security and confidentiality of model updates transmitted between clients and the server;
At step 216, method 200 includes deploying the updated global model back to clients for continued training, enabling real-time adjustments via offline and online learning integration 136;
At step 218, method 200 includes allowing clients to provide feedback on model performance through federated learning with feedback loop 138, helping further refine model accuracy and responsiveness;
At step 220, method 200 includes repeating the federated learning cycle with multi-objective optimization framework 140 balancing model accuracy, training efficiency, and communication costs for optimal system performance.
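The cycle of steps 202 to 220 can be condensed into a toy round function; the quadratic local objective, the fixed `base_lr`, and the size-based weights below are simplifying assumptions for illustration only.

```python
def federated_round(global_w, client_data, base_lr=0.5):
    """One simplified round: each client takes a gradient step toward its
    local mean (a toy objective), shares only the resulting update, and
    the server combines updates weighted by client data size."""
    updates, sizes = [], []
    for data in client_data:
        local_mean = sum(data) / len(data)
        grad = global_w - local_mean        # d/dw of 0.5 * (w - mean)^2
        updates.append(-base_lr * grad)     # update, not raw data, is shared
        sizes.append(len(data))
    total = sum(sizes)
    return global_w + sum(u * s for u, s in zip(updates, sizes)) / total

# Repeating the cycle drives the global scalar model toward the
# size-weighted mean of all client data, without pooling any raw data.
w = 0.0
for _ in range(50):
    w = federated_round(w, [[1.0, 1.0], [3.0], [2.0, 2.0, 2.0]])
```

In a real deployment, compression, privacy noise, and feedback collection would wrap each step of this loop, as the paragraphs above describe.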
[00051] In the description of the present invention, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "fixed", "attached", "disposed", "mounted", and "connected" are to be construed broadly, and may for example be fixedly connected, detachably connected, or integrally connected, either mechanically or electrically. They may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases by those skilled in the art.

[00052] Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural where appropriate.

[00053] Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the present disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
Claims: WE CLAIM:
1. A federated learning system for decentralized AI model training 100 comprising:
adaptive learning rate adjustment mechanism 102 to dynamically adjust learning rates based on client data distribution characteristics;
personalized model update protocol 104 to tailor model updates according to each client's unique data;
weighted aggregation strategy 106 to prioritize high-quality client updates in the global model;
communication efficiency optimizer 108 to reduce bandwidth usage by compressing model updates;
data distribution analysis tool 110 to analyze data diversity across clients for training adjustments;
privacy-preserving mechanisms 112 to ensure data security during model update exchanges;
client collaboration interface 114 to facilitate client communication and feedback within the network;
scalability framework 116 to support large-scale federated learning deployments across diverse clients;
hierarchical federated learning architecture 118 to organize clients into clusters based on data similarity;
dynamic client participation management system 120 to select relevant clients based on data quality and performance;
non-IID data handling algorithms 122 to address data heterogeneity with augmentation and balancing strategies;
federated transfer learning integration 124 to leverage pre-trained models for faster adaptation to client data;
end-to-end encryption for model updates 126 to protect model update exchanges from unauthorized access;
automated hyperparameter tuning system 128 to optimize model parameters for better performance;
client diversity metrics dashboard 130 to display analytics on client data diversity and model performance;
collaborative knowledge transfer mechanism 132 to enable knowledge sharing while maintaining data privacy;
cross-domain federated learning support 134 to accommodate model training across different industries;
offline and online learning integration 136 to provide flexible batch and real-time training modes;
federated learning with feedback loop 138 to allow clients to give feedback on model updates;
multi-objective optimization framework 140 to balance accuracy, training efficiency, and communication cost across the system; and
AI model evaluation metrics standardization 142 to establish consistent performance metrics for reliable assessment across diverse client data distributions.
2. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein adaptive learning rate adjustment mechanism 102 is configured to dynamically adjust learning rates per client based on local data distribution characteristics, enhancing model stability and accuracy across non-uniform data environments.

3. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein personalized model update protocol 104 is configured to tailor model updates to individual client data distributions, preserving critical features and reducing bias in the global model.

4. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein weighted aggregation strategy 106 is configured to assign varying weights to client updates based on data quality, enabling robust and representative global model updates by prioritizing high-quality data contributions.

5. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein communication efficiency optimizer 108 is configured to compress model updates and implement asynchronous updates, reducing bandwidth usage and enhancing training efficiency within the federated network.

6. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein data distribution analysis tool 110 is configured to analyze data diversity across clients, providing insights to adapt learning parameters and update weighting for improved model performance.

7. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein privacy-preserving mechanisms 112 are configured to secure data exchanges using differential privacy and secure multi-party computation, maintaining client confidentiality throughout the collaborative model training process.

8. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein scalability framework 116 is configured to support the efficient operation of federated learning across a large number of clients with diverse computational resources, enabling seamless scaling without compromising model performance.

9. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein federated learning with feedback loop 138 is configured to collect client feedback on model updates, enabling iterative improvement of model accuracy and responsiveness based on real-time client insights.

10. The federated learning system for decentralized AI model training 100 as claimed in claim 1, wherein the method comprises:
adaptive learning rate adjustment mechanism 102 initiates federated learning by gathering initial client data distributions using data distribution analysis tool 110;
data distribution analysis tool 110 analyzes the data diversity identified to inform adaptive learning rate adjustment mechanism 102 and weighted aggregation strategy 106;
personalized model update protocol 104 sets tailored model updates for each client based on data analyzed in the previous step, customizing updates for client-specific data distributions;
adaptive learning rate adjustment mechanism 102 begins local training on each client, dynamically adjusting learning rates according to the client's data characteristics;
communication efficiency optimizer 108 enables clients to share model updates (instead of raw data) with the server by compressing updates and managing bandwidth usage;
weighted aggregation strategy 106 aggregates these model updates at the server, assigning weights based on data quality as evaluated by data distribution analysis tool 110;
privacy-preserving mechanisms 112 ensure the security and confidentiality of model updates transmitted between clients and the server;
offline and online learning integration 136 deploys the updated global model back to clients for continued training, enabling real-time adjustments;
federated learning with feedback loop 138 allows clients to provide feedback on model performance, helping further refine model accuracy and responsiveness;
multi-objective optimization framework 140 repeats the federated learning cycle, balancing model accuracy, training efficiency, and communication costs for optimal system performance.

Documents

All filed 11/11/2024:

202441086927-COMPLETE SPECIFICATION [11-11-2024(online)].pdf
202441086927-DECLARATION OF INVENTORSHIP (FORM 5) [11-11-2024(online)].pdf
202441086927-DRAWINGS [11-11-2024(online)].pdf
202441086927-EDUCATIONAL INSTITUTION(S) [11-11-2024(online)].pdf
202441086927-EVIDENCE FOR REGISTRATION UNDER SSI [11-11-2024(online)].pdf
202441086927-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [11-11-2024(online)].pdf
202441086927-FIGURE OF ABSTRACT [11-11-2024(online)].pdf
202441086927-FORM 1 [11-11-2024(online)].pdf
202441086927-FORM FOR SMALL ENTITY(FORM-28) [11-11-2024(online)].pdf
202441086927-FORM-9 [11-11-2024(online)].pdf
202441086927-POWER OF AUTHORITY [11-11-2024(online)].pdf
202441086927-REQUEST FOR EARLY PUBLICATION(FORM-9) [11-11-2024(online)].pdf
