EDGE COMPUTING ARCHITECTURE FOR LOW-LATENCY IOT APPLICATIONS

ORDINARY APPLICATION

Published

Filed on 14 November 2024

Abstract

The present invention provides an edge computing architecture for low-latency Internet of Things (IoT) applications, designed to optimize real-time data processing by deploying edge nodes in close proximity to IoT devices. This architecture reduces communication delays and enhances decision-making by processing data locally at the edge, rather than relying on centralized cloud systems. By utilizing advanced techniques such as machine learning, intelligent caching, and multi-tier edge networks, the system ensures high performance, scalability, and fault tolerance. It is particularly beneficial for time-sensitive IoT applications, such as autonomous vehicles, industrial automation, and smart healthcare, where minimizing latency is critical for system responsiveness and efficiency.

Patent Information

Application ID: 202441088018
Invention Field: COMMUNICATION
Date of Application: 14/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
M. Kotamma | Assistant Professor, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
K. Harsha Vardhan | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
K. Sandeep Kumar | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
K. Hemashalin | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
K. Ashok Kumar | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
K.V.S. Manikanta | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
M. Praneetha | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
M. Ashok Reddy | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
M. Vaishnavi | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India
M. Sravya | Final Year B.Tech Student, Audisankara College of Engineering & Technology (AUTONOMOUS), NH-16, By-Pass Road, Gudur, Tirupati Dist., Andhra Pradesh, India-524101, India. | India | India

Applicants

Name | Address | Country | Nationality
Audisankara College of Engineering & Technology | Audisankara College of Engineering & Technology, NH-16, By-Pass Road, Gudur, Tirupati Dist, Andhra Pradesh, India-524101, India. | India | India

Specification

Description: In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.

The ensuing description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.

Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.

Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.

The word "exemplary" and/or "demonstrative" is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as "exemplary" and/or "demonstrative" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms "includes," "has," "contains," and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising" as an open transition word without precluding any additional or other elements.

Reference throughout this specification to "one embodiment" or "an embodiment" or "an instance" or "one instance" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

The present invention provides an innovative edge computing architecture designed to support low-latency processing of data in Internet of Things (IoT) applications. The architecture enables real-time data processing at the edge of the network, close to IoT devices, rather than relying on centralized cloud computing systems. This architecture is particularly suitable for time-sensitive IoT applications, where minimizing latency is crucial for effective decision-making and system responsiveness.

The edge computing architecture consists of several interconnected components, including IoT devices, edge nodes, communication interfaces, and centralized cloud servers. IoT devices generate a continuous stream of data, which is transmitted to the nearest edge node for processing. These edge nodes are strategically located close to the IoT devices, which allows them to perform tasks such as data filtering, aggregation, analysis, and real-time decision-making with minimal communication delays. By processing data at the edge, the system reduces the need for back-and-forth communication with cloud-based servers, which typically introduces higher latency due to long-distance data transmission.
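
The specification does not prescribe a particular implementation of this local-first data path; the following is a minimal Python sketch of the idea, in which the `EdgeNode`, `Reading`, and `CloudStub` names, the filtering range, and the summary format are illustrative assumptions rather than details taken from the disclosure.

```python
# Minimal sketch of the local-first data path: readings from nearby IoT
# devices are filtered and aggregated at an edge node, and only compact
# summaries are forwarded upstream. All names here are illustrative.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Reading:
    device_id: str
    metric: str      # e.g. "temperature"
    value: float


class EdgeNode:
    def __init__(self, valid_range=(-40.0, 125.0)):
        self.valid_range = valid_range
        self.buffer = []                 # readings awaiting aggregation

    def ingest(self, reading: Reading):
        """Drop obviously bad samples locally instead of shipping them upstream."""
        lo, hi = self.valid_range
        if lo <= reading.value <= hi:
            self.buffer.append(reading)

    def aggregate_and_forward(self, cloud):
        """Send one summary per metric to the cloud rather than every raw sample."""
        by_metric = {}
        for r in self.buffer:
            by_metric.setdefault(r.metric, []).append(r.value)
        summary = {m: {"count": len(v), "mean": mean(v)} for m, v in by_metric.items()}
        cloud.upload(summary)            # single, small upstream message
        self.buffer.clear()


class CloudStub:
    def upload(self, summary):
        print("cloud received:", summary)


edge = EdgeNode()
edge.ingest(Reading("sensor-1", "temperature", 21.4))
edge.ingest(Reading("sensor-2", "temperature", 999.0))   # filtered out locally
edge.aggregate_and_forward(CloudStub())
```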

To further optimize performance, the edge computing system includes intelligent algorithms, such as machine learning (ML) models or artificial intelligence (AI) systems, which are deployed on the edge nodes to analyze and interpret data locally. This enables advanced applications such as predictive analytics, anomaly detection, and real-time response generation. In addition, the edge nodes are capable of caching frequently accessed data, reducing the need to fetch data from remote servers and improving response times for repetitive tasks or queries. This caching mechanism is particularly useful for systems with recurring patterns or requests, allowing for quicker data retrieval and lower overall latency.
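
As one way to picture the caching mechanism described above, the sketch below keeps recently fetched results at the edge for a short time-to-live so that repeated queries avoid a cloud round trip; the `EdgeCache` class, the TTL value, and the stand-in fetch function are assumptions for illustration only.

```python
# Illustrative edge-side cache: recent query results are held locally for a
# short time-to-live so repeated requests are answered without a cloud
# round trip.
import time


class EdgeCache:
    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.store = {}                      # key -> (expiry_time, value)

    def get(self, key, fetch_from_cloud):
        now = time.monotonic()
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]                  # served locally, no upstream latency
        value = fetch_from_cloud(key)        # slow path: go to the cloud once
        self.store[key] = (now + self.ttl, value)
        return value


def fetch_from_cloud(key):
    time.sleep(0.2)                          # stand-in for WAN latency
    return {"key": key, "payload": "cached-result"}


cache = EdgeCache(ttl_seconds=30.0)
cache.get("device-7/latest", fetch_from_cloud)   # first call hits the cloud
cache.get("device-7/latest", fetch_from_cloud)   # repeat call is served locally
```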

The architecture also incorporates multi-tier edge networks, where multiple edge nodes are interconnected, allowing for load balancing and fault tolerance. When one edge node becomes overwhelmed or fails, tasks can be offloaded to neighboring nodes, ensuring the continued operation of the system and preventing data loss. Furthermore, content delivery networks (CDNs) can be integrated into the edge nodes to optimize data delivery, ensuring that high-priority information reaches the right destination with minimal delay.
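
A hedged sketch of this load-balancing and failover behaviour follows: a task is dispatched to the least-loaded healthy edge node, and work shifts to neighbouring nodes when one is overloaded or down. The node names, capacity limits, and dispatch policy are illustrative choices, not details from the specification.

```python
# Sketch of multi-tier failover: dispatch each task to the least-loaded
# healthy edge node; neighbours absorb work when a node fails or is full.


class PeerNode:
    def __init__(self, name, capacity=10):
        self.name = name
        self.capacity = capacity
        self.active_tasks = 0
        self.healthy = True

    def can_accept(self):
        return self.healthy and self.active_tasks < self.capacity

    def run(self, task):
        self.active_tasks += 1
        return f"{self.name} handled {task}"


def dispatch(task, nodes):
    """Prefer the least-loaded node that is healthy and under capacity."""
    candidates = [n for n in nodes if n.can_accept()]
    if not candidates:
        raise RuntimeError("no edge capacity; escalate task to the cloud tier")
    target = min(candidates, key=lambda n: n.active_tasks)
    return target.run(task)


nodes = [PeerNode("edge-a"), PeerNode("edge-b"), PeerNode("edge-c")]
nodes[0].healthy = False                      # simulate a failed node
print(dispatch("analyse-frame-42", nodes))    # offloaded to a healthy neighbour
```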

The communication interface between edge nodes and the central cloud infrastructure employs low-latency protocols, such as 5G, Wi-Fi 6, or Bluetooth Low Energy (BLE), depending on the specific IoT application. These high-speed communication methods further reduce end-to-end latency and improve the overall performance of the system.

In an industrial IoT application, the edge computing architecture is deployed in a factory environment with a large number of IoT sensors and machines. Each IoT device (e.g., a temperature sensor, pressure gauge, or vibration detector) generates real-time data related to machine performance and environmental conditions. The data is transmitted to the nearest edge node, where it is processed to identify potential anomalies or malfunctions in the machinery.

The edge node processes the data using a machine learning algorithm trained to detect deviations from normal operational conditions, such as excessive vibrations or temperature fluctuations. If an anomaly is detected, the edge node triggers an alert and performs corrective actions, such as adjusting machine settings or sending an alert to maintenance personnel. By processing the data locally at the edge, this embodiment ensures real-time decision-making, improving overall factory efficiency and minimizing downtime. Additionally, historical data is cached at the edge to enable faster responses to recurring issues and minimize the load on the cloud infrastructure.
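
The specification does not name the machine learning algorithm used for this anomaly detection, so the following sketch substitutes a simple rolling z-score check on vibration readings purely to illustrate edge-side anomaly detection; the window size, threshold, and sample values are assumptions.

```python
# Hedged illustration only: flag vibration readings that deviate from
# recent normal behaviour using a rolling mean and standard deviation.
from collections import deque
from statistics import mean, pstdev


class VibrationMonitor:
    def __init__(self, window=50, z_threshold=3.0):
        self.history = deque(maxlen=window)   # recent "normal" readings
        self.z_threshold = z_threshold

    def check(self, value):
        """Return True if the new reading looks anomalous, then record it."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), pstdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous


monitor = VibrationMonitor()
for reading in [0.9, 1.1, 1.0, 1.05, 0.95] * 5 + [4.8]:   # last value is a spike
    if monitor.check(reading):
        print(f"anomaly detected: {reading} -> alert maintenance / adjust machine")
```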

In an autonomous vehicle system, the edge computing architecture is deployed to process real-time data from a variety of IoT devices, such as cameras, LIDAR sensors, GPS modules, and motion detectors. The edge nodes in this embodiment are responsible for processing the sensor data and making critical driving decisions in real time, such as collision avoidance, route planning, and vehicle speed control.

The edge node on the vehicle is equipped with a high-performance processor capable of executing AI-driven models to analyze the sensor data and identify obstacles, road conditions, and traffic signals. By performing this processing at the edge, the vehicle can make instantaneous decisions, ensuring safety and efficiency in dynamic driving conditions. Additionally, data related to the vehicle's performance, such as battery status, engine diagnostics, and location, is cached locally, allowing for rapid access to this information without relying on cloud systems. This embodiment leverages low-latency communication protocols like 5G for fast interaction between the vehicle's edge node and other external systems, such as traffic control centers or nearby vehicles, for coordinated decision-making.

While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.

Claims:

1. An edge computing architecture for low-latency IoT applications, comprising:
a plurality of IoT devices, each configured to generate data;
at least one edge node located geographically proximate to the IoT devices, configured to receive the generated data;
a processing unit at the edge node, configured to perform data filtering, aggregation, and analysis to provide real-time insights;
a communication interface for transmitting processed data to a central server or cloud-based system for further analysis or decision-making;
wherein the architecture minimizes data transmission latency by processing data at or near the source of generation.

2. The edge computing architecture of claim 1, wherein the processing unit at the edge node includes machine learning models for predictive analytics or decision-making.

3. The edge computing architecture of claim 1, further comprising a cache memory at the edge node configured to store frequently accessed data and reduce redundant data transmission.

4. The edge computing architecture of claim 1, wherein the communication interface utilizes a low-latency communication protocol, including but not limited to 5G, Wi-Fi 6, or Bluetooth Low Energy (BLE).

5. The edge computing architecture of claim 1, wherein the edge node is further configured to implement a content delivery network (CDN) for optimizing data transmission to the central server.

Documents

Name | Date
202441088018-COMPLETE SPECIFICATION [14-11-2024(online)].pdf | 14/11/2024
202441088018-DECLARATION OF INVENTORSHIP (FORM 5) [14-11-2024(online)].pdf | 14/11/2024
202441088018-DRAWINGS [14-11-2024(online)].pdf | 14/11/2024
202441088018-FORM 1 [14-11-2024(online)].pdf | 14/11/2024
202441088018-FORM-9 [14-11-2024(online)].pdf | 14/11/2024
202441088018-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-11-2024(online)].pdf | 14/11/2024
