AN EDGE ARTIFICIAL INTELLIGENCE (AI) BASED SYSTEM AND METHOD FOR DETECTION OF TRAFFIC VIOLATION/CRIMINAL ACTIVITY

ORDINARY APPLICATION

Published

Filed on 8 November 2024

Abstract

ABSTRACT: AN EDGE ARTIFICIAL INTELLIGENCE (AI) BASED SYSTEM AND METHOD FOR DETECTION OF TRAFFIC VIOLATION/CRIMINAL ACTIVITY. The present invention relates to a method and system for an Edge AI electronic device (108) designed to detect multiple activities within an environment. The device receives sensing inputs from various sensors and generates spatial data related to detected objects using a spatial data model (214), assigning spatial positioning confidence scores. The processor (202) generates classification data using an Object Classification model (216), assigning object classification confidence scores. These scores are synchronized and combined through a data fusion model (218), resulting in an object detection confidence score. Additionally, the processor analyses temporal sequences of objects to model motion trajectories, generating a temporal consistency confidence score, or applies a Human Activity Recognition model (222) to assign a behavioural confidence score. These scores are processed through a Symbolic Integration Model (224) to compute an overall activity confidence score. The device compares this score against a predefined threshold to detect activities within the environment. (To be published with Fig. 1)

Patent Information

Application ID: 202421086218
Invention Field: COMPUTER SCIENCE
Date of Application: 08/11/2024
Publication Number: 49/2024

Inventors

Name | Address | Country | Nationality
Krupa A Patel | 64-Jogani Nagar, Part-2, Honey Park Road, Near Prime Arcade, Adajan, Surat, Gujarat – 395009 | India | India
Akshay G Patel | 64-Jogani Nagar, Part-2, Honey Park Road, Near Prime Arcade, Adajan, Surat, Gujarat – 395009 | India | India

Applicants

Name | Address | Country | Nationality
Krupa A Patel | 64-Jogani Nagar, Part-2, Honey Park Road, Near Prime Arcade, Adajan, Surat, Gujarat – 395009 | India | India
Akshay G Patel | 64-Jogani Nagar, Part-2, Honey Park Road, Near Prime Arcade, Adajan, Surat, Gujarat – 395009 | India | India

Specification

Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of Invention:
AN EDGE ARTIFICIAL INTELLIGENCE (AI) BASED SYSTEM AND METHOD FOR DETECTION OF TRAFFIC VIOLATION/CRIMINAL ACTIVITY
APPLICANT:
Krupa A Patel and Akshay G Patel
Indian Nationals having address as:
64-Jogani Nagar, Part-2, Honey Park Road,
Near Prime Arcade, Adajan, Surat, Gujarat - 395009


The following specification particularly describes the invention and the manner in which it is to be performed.

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[0001] The present application does not claim priority from any other application.
TECHNICAL FIELD
[0002] The present invention relates to a method and system for real-time detection of criminal and traffic violation activity using Edge AI electronic devices based on Artificial Intelligence (AI) and Machine Learning techniques.
BACKGROUND
[0003] This section is intended to introduce the reader to various aspects of art (the relevant technical field or area of knowledge to which the invention pertains), which may be related to various aspects of the present disclosure that are described or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements in this background section are to be read in this light, and not as admissions of prior art. Similarly, a problem mentioned in the background section should not be assumed to have been previously recognized in the prior art.
[0004] Traffic violations and criminal activities pose a significant challenge to safety, and detecting these infractions in real-time is essential for prevention as well as detection. Conventional systems for monitoring such activities predominantly rely on fixed cameras and sensors. While these methods serve their purpose, they come with inherent limitations. Fixed cameras, positioned at specific junctions or locations, offer only limited coverage and can be affected by various environmental conditions and technical issues, resulting in potential inaccuracies.
[0005] The enforcement capacity of police and traffic officials is significantly hampered by their limited numbers. When officers are occupied with issuing citations, their ability to monitor other activities is temporarily diminished, leading to unaddressed infractions and a potential decline in overall safety. This situation creates gaps in enforcement that can contribute to an increase in dangerous behaviours.
[0006] Currently, enforcement systems primarily target a narrow range of activities, such as assault and running red lights. While these technologies have made some progress in improving safety, they do not address the full spectrum of dangerous behaviours, leaving many activities unchecked. This limited scope can exacerbate the risk of traffic accidents and undermine the effectiveness of law enforcement.
[0007] Existing solutions also face significant challenges in managing and storing the large volumes of user-generated content that accompany the rise of smartphone usage. Although residents can report traffic violations and criminal activities through images and videos, conventional database systems struggle to effectively process this influx of data. This results in high storage costs and difficulties in distinguishing valuable evidence from irrelevant submissions, further hindering the effectiveness of traffic enforcement.
[0008] Conventional systems do not support real-time application, primarily due to their reliance on predefined traffic laws and static thresholds. This rigidity can lead to inefficiencies when managing complex traffic scenarios, particularly when multiple detection subsystems, such as vehicle detection and behaviour analysis, provide conflicting signals. Such conflicts can result in the system struggling to make accurate decisions, causing delays and an increased likelihood of false positives in activity detection. Additionally, these systems exhibit a lack of adaptability to new behaviours or unexpected driving patterns, as they are unable to dynamically adjust their rules. Consequently, this limitation can lead to the over-reporting of activity (false positives) or the failure to identify significant incidents, as conventional systems do not learn from past detections or adapt to evolving traffic conditions.
[0009] The invention presented here seeks to overcome these limitations by proposing a flexible, real-time activity detection and monitoring system that leverages mobile phone cameras. By utilizing everyday mobile devices, this system expands coverage beyond fixed cameras and sensors, making it particularly useful in areas with limited infrastructure. Furthermore, the present invention aims to address the challenges of managing user-generated content, thereby enhancing overall road safety and enforcement efficiency.
SUMMARY
[0010] This summary is provided to introduce concepts related to a method and a system for detecting one or more activities using one or more Edge AI electronic devices and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[0011] In one implementation, an edge AI based system and method for the detection of one or more criminal/traffic violation activity is disclosed herein. The detection of the one or more criminal/traffic violation activity is performed using Edge AI electronic devices by incorporating multiple models including Lidar integration with Dynamic Adaptive LIDAR Resolution (DALR), YOLOv11 integration with a Real-Time Occlusion-Aware Classifier (ROAC), Fusion of LIDAR and YOLOv11 Data with Advanced Temporal-Spatial Synchronization System (ATSSS), Liquid Neural Networks (LNN) for Temporal Processing with Event-Based Temporal Adaptation Layer (ETAL) and Human Activity Recognition (HAR) for Driver Behaviour Analysis with Contextual Behavioural Enhancement Network (CBEN). Each of these models has its own scoring mechanism to detect one or more criminal/traffic activity.
[0012] In one implementation, a method for detecting one or more activities using one or more Edge AI electronic devices is disclosed. The method may be implemented by the one or more Edge AI electronic devices including a processor and a memory communicatively coupled to the processor. The memory is configured to store programmed instructions executable by the processor. The method may comprise a step of receiving one or more sensing inputs from one or more sensors electronically coupled to the one or more Edge AI electronic devices. Further, the method may comprise a step of generating a spatial data related to one or more objects within the environment using a spatial data model based on the one or more sensing inputs received and thereby assigning a spatial positioning confidence score to the one or more objects. Further, the method may comprise a step of generating classification data related to the one or more objects by using an object classification model based on the one or more sensing inputs and the spatial data related to the one or more objects and thereby assigning an object classification confidence score to the one or more objects. Further, the method may comprise a step of synchronizing and combining the spatial positioning confidence score and the object classification confidence score using a data fusion model to assign an object detection confidence score to the one or more objects. Further, the method may comprise a step of processing at least one of: the temporal sequences of the one or more objects using a temporal analysis model to model the motion trajectory of the one or more objects over time and to identify temporal data associated with the one or more objects and thereby generating a temporal consistency confidence score for the one or more objects, and one or more actions of the one or more objects , in real-time using a Human Activity Recognition (HAR) model to identify the behavioural data of the one or more objects and thereby assigning a behavioural confidence score to the one or more objects. Further, the method may comprise a step of processing the object detection confidence score, the temporal consistency confidence score and the behavioural confidence score using a symbolic integration model to compute an overall activity confidence score. Furthermore, the method may comprise a step of comparing the overall activity confidence score with the predefined threshold to detect the one or more activities associated with one or more objects within the environment.
[0013] In another implementation, a system for detecting one or more activities using one or more Edge AI electronic devices is disclosed. Further, the system may comprise a memory and a processor. Further, the processor may be configured to execute programmed instructions stored in the memory. Further, the processor may be configured for receiving one or more sensing inputs from one or more sensors electronically coupled to the one or more Edge AI electronic devices. Further, the processor may be configured for generating a spatial data related to one or more objects within the environment using a spatial data model based on the one or more sensing inputs received and thereby assigning a spatial positioning confidence score to the one or more objects. Further, the processor may be configured for generating classification data related to the one or more objects by using an object classification model based on the one or more sensing inputs and the spatial data related to the one or more objects and thereby assigning an object classification confidence score to the one or more objects. Further, the processor may be configured for synchronizing and combining the spatial positioning confidence score and the object classification confidence score using a data fusion model to assign an object detection confidence score to the one or more objects. Further, the processor may be configured for processing at least one of the temporal sequences of the one or more objects using a temporal analysis model to model the motion trajectory of the one or more objects over time and to identify temporal data associated with the one or more objects and thereby generating a temporal consistency confidence score for the one or more objects, and one or more actions of the one or more objects in real-time, using a Human Activity Recognition (HAR) model to identify the behavioural data of the one or more objects and thereby assigning a behavioural confidence score to the one or more objects. Further, the processor may be configured for processing the object detection confidence score, the temporal consistency confidence score and the behavioural confidence score using a symbolic integration model to compute an overall activity confidence score. Furthermore, the processor may be configured for comparing the overall activity confidence score with the predefined threshold to detect the one or more activities associated with one or more objects within the environment.
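For illustration only, the following sketch shows how the individual confidence scores described above could be combined into an overall activity confidence score and compared against a predefined threshold. The weighted-average rule, the weights and the threshold value are assumptions made for the example; the symbolic integration model of the present disclosure is not limited to this formula.

```python
from dataclasses import dataclass

@dataclass
class ActivityScores:
    object_detection: float      # fused spatial positioning + object classification confidence
    temporal_consistency: float  # from the temporal analysis model
    behavioural: float           # from the Human Activity Recognition (HAR) model

def overall_activity_confidence(scores: ActivityScores,
                                weights=(0.4, 0.3, 0.3)) -> float:
    """Hypothetical symbolic-integration step: a weighted combination of the
    three confidence scores (the claimed model may use symbolic rules rather
    than a fixed weighted average)."""
    w_det, w_temp, w_beh = weights
    return (w_det * scores.object_detection
            + w_temp * scores.temporal_consistency
            + w_beh * scores.behavioural)

def detect_activity(scores: ActivityScores, threshold: float = 0.85) -> bool:
    """Compare the overall activity confidence score with a predefined threshold."""
    return overall_activity_confidence(scores) >= threshold

# Example: strong detection and temporal evidence, moderate behavioural evidence
print(detect_activity(ActivityScores(0.95, 0.90, 0.80)))  # True for this illustrative threshold
```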
[0014] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.

BRIEF DESCRIPTION OF DRAWINGS
[0015] The accompanying drawings illustrate the various embodiments of systems, methods, and other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. In some examples, one element may be designed as multiple elements, or multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Further, the elements may not be drawn to scale.
[0016] Various embodiments will hereinafter be described in accordance with the appended drawings, which are provided to illustrate and not to limit the scope in any manner, wherein similar designations denote similar elements, and in which:
[0017] FIG. 1 is a block diagram that illustrates a system environment (100) for detecting one or more activities using one or more Edge AI electronic devices (108), in accordance with an embodiment of present subject matter.
[0018] FIG. 2 is a block diagram (200) that illustrates various components of an application server (110) configured for performing steps for detecting one or more activities using one or more Edge AI electronic devices (108), in accordance with an embodiment of the present subject matter.
[0019] FIG. 3 is a diagram illustrating the application of confidence score computation for detecting one or more activities within the environment, in accordance with an embodiment of the present subject matter.
[0020] FIG. 4A and FIG. 4B collectively depict a flowchart that illustrates a method (300) for detecting one or more activities using one or more Edge AI electronic devices (108), in accordance with an embodiment of the present subject matter.
[0021] FIG. 5 illustrates a block diagram (400) of an exemplary computer system for implementing embodiments consistent with the present subject matter.

DETAILED DESCRIPTION
[0022] The present disclosure may be best understood with reference to the detailed figures and description set forth herein. Various embodiments are discussed below with reference to the figures. However, those skilled in the art will readily appreciate that the detailed descriptions given herein with respect to the figures are simply for explanatory purposes as the methods and systems may extend beyond the described embodiments. For example, the teachings presented, and the needs of a particular application may yield multiple alternative and suitable approaches to implement the functionality of any detail described herein. Therefore, any approach may extend beyond the particular implementation choices in the following embodiments described and shown.
[0023] References to "one embodiment," "at least one embodiment," "an embodiment," "one example," "an example," "for example," and so on indicate that the embodiment(s) or example(s) may include a particular feature, structure, characteristic, property, element, or limitation but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element, or limitation. Further, repeated use of the phrase "in an embodiment" does not necessarily refer to the same embodiment. The terms "comprise", "comprising", "include(s)", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, system or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or system or method. In other words, one or more elements in a system or apparatus preceded by "comprises… a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
[0024] The objective of the present disclosure is to streamline the process of detecting the one or more criminal/traffic activities using Artificial Intelligence (AI) and Machine learning techniques.
[0025] Another objective of the present invention is to increase the efficiency in detection of the one or more criminal/traffic activities using Artificial Intelligence (AI) and Machine learning techniques.
[0026] Another objective of the present disclosure is to enhance the user experience by providing a seamless and intuitive system and method that enables the users to effortlessly navigate and interact with the one or more Edge AI electronic devices (108).
[0027] Another objective of the present invention is to detect the one or more activities in real-time using mobile cameras.
[0028] Yet another objective of the present disclosure is to detect traffic related illegal activities using one or more Edge AI electronic devices (108).
[0029] Yet another objective of the present disclosure is to detect crime related illegal activities using one or more Edge AI electronic devices (108).
[0030] Yet another objective of the present disclosure is to decrease the administrative burden on authorities and government instruments.
[0031] Yet another objective of the present invention is to detect vehicle number plates and recognize alphanumeric codes using the ALPR (Automatic License Plate Recognition) technique, without relying on advanced AI or Edge AI technologies.
[0032] Yet another objective of the present disclosure is to provide a cost-effective and adaptable solution for comprehensive traffic monitoring and enforcement.
[0033] Yet another objective of the present disclosure is to utilize Artificial Intelligence (AI) technology and computer vision techniques to detect and track objects in the environment, analyze their behaviour, and detect specific activity.
[0034] Yet another objective of the present disclosure is to overcome drawbacks by providing a flexible, real-time activity monitoring system using mobile devices, thereby enhancing overall coverage area of monitoring activities.
[0035] Yet another objective of the present invention is to provide the ability to manage and store large volumes of user-generated content effectively.
[0036] Yet another objective of the present invention is to provide a method for stabilizing video frames by utilizing optical flow technology to mitigate motion blur resulting from rapid movement and camera shake, wherein the method comprises analyzing inter-frame motion and applying compensatory adjustments to reduce undesired image displacement, thereby enhancing video clarity and stability.
[0037] Yet another objective of the present invention is to enable real-time video annotation with minimal latency and high privacy on mobile devices by utilizing YOLOv11 technology, wherein YOLOv11 provides rapid and accurate object detection capabilities, allowing for efficient annotation directly on-device, thereby minimizing data transmission and enhancing privacy during video processing.
[0038] Yet another objective of the present invention is to trim videos to a specific length (e.g., 20 seconds) and improve their quality.
[0039] Yet another objective of the present invention is to detect traffic and criminal activity and blur sensitive objects, such as faces, in real-time to maintain privacy.
[0040] Yet another objective of the present invention is to detect and classify vehicles in real-time using YOLOv11 in combination with computer vision techniques and rule-based classification.
[0041] FIG. 1 is a block diagram that illustrates a system environment (100) for detecting one or more activities, in accordance with an embodiment of the present subject matter. The system environment (100) typically includes a database server (102), a central server (104), a communication network (106), one or more Edge AI electronic devices (108) and an application server (110). The database server (102), the central server (104) and the one or more Edge AI electronic devices (108) are typically communicatively coupled with each other via the communication network (106). Further, the application server (110) is electronically coupled with the one or more Edge AI electronic devices (108). In an embodiment, the central server (104) may communicate with the database server (102), and the one or more Edge AI electronic devices (108) connected with the application server (110), using one or more protocols such as, but not limited to, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), RF mesh, Bluetooth Low Energy (BLE), and the like.
[0042] In an embodiment, the database server (102) may refer to a computing device that may be configured to store data used for detecting the one or more activities, such as the one or more sensing inputs, one or more thresholds, one or more enforcement parameters, reports, and other intermediate processing data.
[0043] In an embodiment, the database server (102) may include a special purpose operating system specifically configured to perform one or more database operations on the stored content. Examples of database operations may include, but are not limited to, Select, Insert, Update, and Delete. In an embodiment, the database server (102) may include hardware that may be configured to perform one or more predetermined operations. In an embodiment, the database server (102) may be realized through various technologies such as, but not limited to, Microsoft® SQL Server, Oracle®, IBM DB2®, Microsoft Access®, PostgreSQL®, MySQL®, SQLite®, distributed database technology and the like. In an embodiment, the database server (102) may be configured to utilize the central server (104) for storage and retrieval of data used for detecting one or more activity.
[0044] A person with ordinary skill in the art will understand that the scope of the disclosure is not limited to the database server (102) as a separate entity. In an embodiment, the functionalities of the database server (102) can be integrated into the central server (104) or into the one or more Edge AI electronic devices (108).
[0045] In an embodiment, the application server (110) may refer to a computing device or a software framework hosting an application or a software service. In an embodiment, the application server (110) may be implemented to execute procedures such as, but not limited to, programs, routines, or scripts stored in one or more memories for supporting the hosted application or the software service. In an embodiment, the hosted application or the software service may be configured to perform one or more predetermined operations. The application server (110) may be realized through various types of application servers such as, but not limited to, a Java application server, a .NET framework application server, a Base4 application server, a PHP framework application server, or any other application server framework.
[0046] In an embodiment, the central server (104) may be configured to utilize the database server (102) and the one or more Edge AI electronic devices (108) coupled with the application server (110), in conjunction, for detecting one or more activities. In one embodiment, the one or more Edge AI electronic devices (108) are mounted on a vehicle. In an implementation, the application server (110) is adapted to detect one or more activities based on the one or more sensing inputs.
[0047] In an embodiment, the communication network (106) may correspond to a communication medium through which the central server (104), the application server (110), the database server (102), and the one or more Edge AI electronic devices (108) may communicate with each other. Such a communication may be performed in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), Wireless Application Protocol (WAP), File Transfer Protocol (FTP), ZigBee, Edge, infrared (IR), IEEE 802.11, 802.16, 2G, 3G, 4G, 5G, 6G, 7G cellular communication protocols, and/or Bluetooth (BT) communication protocols. The communication network (106) may either be a dedicated network or a shared network. Further, the communication network (106) may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like. The communication network (106) may include, but is not limited to, the Internet, an intranet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cable network, a wireless network, a telephone network (e.g., Analog, Digital, POTS, PSTN, ISDN, xDSL), a telephone line (POTS), a Metropolitan Area Network (MAN), an electronic positioning network, an X.25 network, an optical network (e.g., PON), a satellite network (e.g., VSAT), a packet-switched network, a circuit-switched network, a public network, a private network, and/or other wired or wireless communications network configured to carry data.
[0048] In an embodiment, the one or more Edge AI electronic devices (108) may refer to a computing device used by a user. The one or more Edge AI electronic devices (108) may comprise one or more processors and one or more memories. The one or more memories may include computer readable code that may be executable by the one or more processors to perform predetermined operations. In an embodiment, the one or more Edge AI electronic devices (108) may present an Edge computing system to process the input and detect the one or more activities using the application server (110). The Edge computing system on the one or more Edge AI electronic devices (108) processes the one or more inputs received by the sensors of the Edge AI electronic devices (108) on the device itself to detect the one or more activities. Examples of the one or more Edge AI electronic devices (108) may include, but are not limited to, a smart phone, a handheld device, AR glasses, a dashcam, a static IP enabled CCTV camera, or a combination thereof.
[0049] The system (100) can be implemented using hardware, software, or a combination of both, which includes using where suitable, one or more computer programs, mobile applications, or "apps" by deploying either on-premises over the corresponding computing terminals or virtually over cloud infrastructure. The system (100) may include various micro-services or groups of independent computer programs which can act independently in collaboration with other micro-services. Internally, the system (100) may be the central processor of all requests for transactions by the various actors or users of the system. A critical attribute of the system (100) is that it can process the input on the one or more Edge AI electronic devices (108) using the application server (110) in real-time. In a specific embodiment, the system (100) is implemented for detecting one or more activities in real-time.
[0050] In one embodiment, the system (100) is configured to analyze the real-time environment surrounding the Edge AI electronic devices (108). Further, the system (100) integrates advanced data processing and real-time monitoring technologies to provide accurate and current detection of one or more activities and one or more objects. Further, the system (100) is equipped with multiple models to compute the received sensing input in real time and to compare the overall activity with the predefined threshold to detect one or more activities associated with one or more objects within the environment.
[0051] FIG. 2 illustrates a block diagram (200) of the components of the application server (110) configured for detecting one or more activities within the environment, in accordance with an embodiment of the present subject matter. Further, FIG. 2 is explained in conjunction with elements from FIG. 1. Here, the application server (110) preferably includes a processor (202), a memory (204), a transceiver (206), an Edge unit (208), an input/output unit (210), a pre-processing unit (212), a spatial data model (214), an object classification model (216), a data fusion model (218), a temporal analysis model (220), a human activity recognition (HAR) model (222), and a symbolic integration model (224). The processor (202) is further preferably communicatively coupled to the memory (204), the transceiver (206), the Edge unit (208), the input/output unit (210), the pre-processing unit (212), the spatial data model (214), the object classification model (216), the data fusion model (218), the temporal analysis model (220), the human activity recognition (HAR) model (222), and the symbolic integration model (224), while the transceiver (206) is preferably communicatively coupled to the communication network (106).
[0052] The processor (202) comprises suitable logic, circuitry, interfaces, and/or code that may be configured to execute a set of instructions stored in the memory (204), and may be implemented based on several processor technologies known in the art. The processor (202) works in coordination with the transceiver (206), the Edge unit (208), the input/output unit (210), the pre-processing unit (212), the spatial data model (214), the object classification model (216), the data fusion model (218), the temporal analysis model (220), the human activity recognition (HAR) model (222), and the symbolic integration model (224) for detecting one or more activities. Examples of the processor (202) include, but are not limited to, a standard microprocessor, a microcontroller, a central processing unit (CPU), an X86-based processor, a Reduced Instruction Set Computing (RISC) processor, an Application-Specific Integrated Circuit (ASIC) processor, a Complex Instruction Set Computing (CISC) processor, a distributed or cloud processing unit, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions and/or other processing logic that accommodates the requirements of the present invention.
[0053] The memory (204) comprises suitable logic, circuitry, interfaces, and/or code that may be configured to store the set of instructions, which are executed by the processor (202). Preferably, the memory (204) is configured to store one or more programs, routines, or scripts that are executed in coordination with the processor (202). Additionally, the memory (204) may include any computer-readable medium or computer program product known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, a Hard Disk Drive (HDD), flash memories, Secure Digital (SD) card, Solid State Disks (SSD), optical disks, magnetic tapes, memory cards, virtual memory and distributed cloud storage. The memory (204) may be removable, non-removable, or a combination thereof. Further, the memory (204) may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The memory (204) may include programs or coded instructions that supplement applications and functions of the system (100). In one embodiment, the memory (204), amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the programs or the coded instructions. In yet another embodiment, the memory (204) may be managed under a federated structure that enables adaptability and responsiveness of the application server (110).
[0054] The transceiver (206) comprises suitable logic, circuitry, interfaces, and/or code that may be configured to receive, process or transmit information, data or signals, which are stored by the memory (204) and executed by the processor (202). The transceiver (206) is preferably configured to receive, process or transmit, one or more programs, routines, or scripts that are executed in coordination with the processor (202). The transceiver (206) is preferably communicatively coupled to the communication network (106) of the system (100) for communicating all the information, data, signal, programs, routines or scripts through the network.
[0055] The transceiver (206) may implement one or more known technologies to support wired or wireless communication with the communication network (106). In an embodiment, the transceiver (206) may include, but is not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a Universal Serial Bus (USB) devices, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, and/or a local buffer. Also, the transceiver (206) may communicate via wireless communication with networks, such as the Internet, an Intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN). Accordingly, the wireless communication may use any of a plurality of communication standards, protocols and technologies, such as: Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for email, instant messaging, and/or Short Message Service (SMS).
[0056] In one embodiment of the present disclosure, the Edge unit (208) provides an Edge AI function within the electronic devices (108) by processing data directly on the device, rather than relying on the central server (104). Further, on-device operation utilizes the device's computational resources to analyze data locally in real time. Further, by running AI algorithms directly on the device, Edge AI enables immediate response times, reduces latency, and ensures continuous operation even without internet connectivity. Additionally, this integration enhances data privacy by keeping sensitive information on the device, minimizing the need to transmit data externally. Furthermore, the benefits of Edge AI within the electronic devices (108) include faster decision-making, reduced bandwidth usage, and greater reliability, making it ideal for applications requiring real-time analytics, such as mobile applications, IoT devices, and smart cameras.
[0057] In another embodiment of the present disclosure, the Edge unit (208) leverages advanced neuromorphic processing capabilities, simulating biological neural networks to enable energy-efficient information processing directly on the device, eliminating reliance on cloud-based resources. It must be noted herein that conventional systems often struggle with adaptability and efficiency for real-time decision-making in constrained environments while exhibiting high energy consumption, which is unsuitable for continuous monitoring in remote or low-power settings. Further, to address these challenges, the proposed model incorporates a specialized event-driven processing mechanism that selectively prioritizes critical data points, reducing computational load by up to 30%. Further, the Edge unit (208) features temporal learning elements, allowing the system to dynamically adapt to changing conditions in real-time and to optimize energy use. As a result, this integration achieves 50% faster processing times and reduces energy consumption by approximately 40% compared to conventional devices. Furthermore, the modified Edge AI has a neuromorphic-inspired architecture and enables reliable operation in environments with limited connectivity through effective local real-time data analysis.
[0058] In one embodiment of the present disclosure, the input/output (I/O) unit (210) comprises suitable logic, circuitry, interfaces, and/or code that may be configured to receive or present information. The input/output unit (210) comprises various input and output sensors that are configured to communicate with the processor (202). Examples of the input sensors include, but are not limited to, a microphone, a front camera sensor, a rear camera sensor, a recorder, a LIDAR, an accelerometer, a motion sensor, and a GPS sensor. Examples of the output sensors include, but are not limited to, a display screen and/or a speaker. Examples of the sensing input include, but are not limited to, images, videos, audio recordings, linear acceleration inputs, sensor inputs, geolocation data, environmental data and/or texts. The I/O unit (210) may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O unit (210) may allow the system (100) to interact with the user directly or through the Edge AI electronic devices (108). Further, the I/O unit (210) may enable the system (100) to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O unit (210) can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O unit (210) may include one or more ports for connecting a number of devices to one another or to another server. In one embodiment, the I/O unit (210) allows the application server (110) to be logically coupled to other Edge AI electronic devices (108), some of which may be built in. Illustrative components include tablets, mobile phones, wireless devices, a static IP enabled CCTV camera, AR glasses, etc.
[0059] In another embodiment of the present disclosure, the input/output unit (210) may be configured for receiving one or more sensing inputs from one or more sensors electronically coupled to the one or more Edge AI electronic devices (108). Further, the one or more camera sensors capture high definition video data and image data with multiple views of the environment. Further, the one or more LIDAR sensors are equipped to capture three-dimensional spatial data of the environment. Furthermore, the one or more environment sensors are equipped to capture real-time environmental conditions. Furthermore, the sensing inputs received from the one or more sensors electronically coupled with the one or more Edge AI electronic devices (108) include, but are not limited to, images, videos, audio recordings, linear acceleration inputs, sensor inputs, geolocation data, environmental data and/or texts.
[0060] In one embodiment of the present disclosure, the pre-processing unit (212) of the application server (110) is disclosed. The pre-processing unit (212) may be configured for enhancing the quality of the one or more sensing inputs. Further, the pre-processing unit includes an automatic blurring model, an environmental condition classifier model, a stabilization model and a video editing and trimming model. Furthermore, each model is configured to adjust parameters of the sensors, stabilize received sensing inputs, trim the video data, or combinations thereof.
[0061] In another embodiment of the present disclosure, the environment condition classifier model of the pre-processing unit (212) is disclosed. Further, the environment condition classifier model monitors real-time environmental conditions to automatically adjust the operational capturing of one or more activities and building climate control, enhancing safety and efficiency. Further, the environment condition classifier model utilizes various data inputs, including environmental sensors that gather temperature, humidity, air quality, and atmospheric pressure data, or combinations thereof, as well as traffic cameras that track vehicle flow and congestion, and weather sensors monitoring precipitation, wind speed, and light intensity. Further, the processor (202) analyzes this real-time data to adapt to changing conditions, resulting in system adjustments like climate control settings, while also sending alerts for situations requiring manual intervention, such as low visibility or heavy rain. Furthermore, data acquisition includes real-time data from sensors and historical data for pattern analysis, with preprocessing steps that combine sensor data into a cohesive overview and apply noise reduction and normalization to ensure consistency. For example, during nighttime, the environment condition classifier model might lower the threshold for object detection to account for lower visibility and still detect activity reliably, such as a vehicle speeding through a dimly lit intersection or physical abuse of a person in a dimly lit area.
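As an illustration of the threshold adjustment mentioned in the example above, the following sketch lowers a detection threshold under low light or poor visibility. The sensor names, cut-off values and adjustment amounts are assumptions made for the example, not values prescribed by the disclosure.

```python
def adjust_detection_threshold(base_threshold: float,
                               light_level_lux: float,
                               visibility_m: float) -> float:
    """Illustrative rule-based adjustment by an environment condition classifier:
    relax the object detection threshold at night or in fog/heavy rain so that
    activities are still flagged, while keeping it within sensible bounds."""
    threshold = base_threshold
    if light_level_lux < 50:      # night-time or a dimly lit scene
        threshold -= 0.10
    if visibility_m < 100:        # fog or heavy rain reported by weather sensors
        threshold -= 0.05
    return max(0.50, min(threshold, 0.99))

# Night-time example: the base threshold of 0.85 is relaxed to 0.75
print(adjust_detection_threshold(0.85, light_level_lux=10.0, visibility_m=300.0))
```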
[0062] In yet another embodiment of the present disclosure, the video editing and trimming model of the pre-processing unit (212), is disclosed. In an embodiment, the video editing and trimming model of the pre-processing unit (212) utilizes YOLOv11 technology for real-time object detection and scene identification. Further, YOLOv11 object detection model analyses movement and detects the presence of objects within video frames, providing valuable data for subsequent video editing processes. Further based on the detected object presence and movement patterns, the model automatically identifies relevant video segments for editing, generating corresponding timestamps for trimming. Further these identified video segments are then passed to external video processing modules, including but not limited to OpenCV or FFmpeg, for performing the actual video trimming and optimization tasks. Further by leveraging the object-specific detection capabilities of YOLOv11, the model facilitates efficient and targeted video editing, ensuring that only the relevant frames or scenes are retained and that the video content is optimized for downstream applications, including but not limited to traffic violation detection, criminal behaviour analysis, and data storage, making the process both fast and efficient. Further, the primary objective is to shorten videos to a specific length, such as but not limited to 20 seconds, while improving their quality. Further, the process begins with video capture from the one or more Edge AI electronic devices (108), where videos are recorded and stored locally in a standard format like MP4. Further, during preprocessing the frames are extracted at a steady rate, and normalization ensures consistent resizing and color adjustments. Further, the video editing and trimming model detects key moments and scenes, allowing the system to select the most important parts of the video for the desired length. Furthermore, in the editing process, video editing and trimming model trims the selected segments and creates smooth transitions while enhancing the video's visual quality by adjusting brightness and color.
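A minimal sketch of the detect-then-trim flow described above is given below, assuming the YOLOv11 detector is available through the ultralytics Python package and that FFmpeg is installed on the device; the checkpoint name, the frame-sampling interval and the "any detection marks the frame as relevant" rule are illustrative assumptions.

```python
import subprocess
import cv2
from ultralytics import YOLO  # assumed packaging of the YOLOv11 detector

def find_relevant_segment(video_path: str, model_path: str = "yolo11n.pt"):
    """Scan the video with the object detector and return (start_s, end_s) of the
    span in which objects of interest appear, as timestamps in seconds."""
    model = YOLO(model_path)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    first, last, idx = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % 5 == 0:                       # sample every 5th frame to stay real-time
            result = model(frame, verbose=False)[0]
            if len(result.boxes) > 0:          # any detected object marks the frame as relevant
                if first is None:
                    first = idx / fps
                last = idx / fps
        idx += 1
    cap.release()
    return first, last

def trim_with_ffmpeg(video_path: str, out_path: str, start_s: float, max_len_s: float = 20.0):
    """Hand the identified segment to FFmpeg, which performs the actual trimming."""
    subprocess.run(["ffmpeg", "-y", "-ss", f"{start_s:.2f}", "-i", video_path,
                    "-t", f"{max_len_s:.2f}", "-c", "copy", out_path], check=True)
```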
[0063] In yet another embodiment of the present disclosure, the stabilization model of the pre-processing unit (212), is disclosed. Further, the stabilization model aims to stabilize video frames by reducing motion blur caused by camera shake and fast-moving objects, leveraging an optical flow technology for real-time processing directly on Edge AI electronic devices (108), which ensures faster response times and lower power consumption. Further, the objective is to stabilize video frames captured at high frame rates, allowing for better motion detection. Further, the process of stabilization is initiated in real-time with video capture, where stabilization model immediately analyses the footage for motion blur. Further, the pre-processing involves extracting individual frames and detecting blur in real-time by analyzing pixel movement. Further, motion tracking utilizes optical flow algorithms to estimate motion vectors and feature tracking to maintain frame alignment. Further, in the image stabilization phase, Edge AI compensates for shaking or rapid movements by adjusting frame position and orientation using various transformation techniques. Performance metrics like PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index) evaluate stabilization quality and visual similarity. Furthermore, stabilization model provides real-time stabilization with reduced motion blur, enhancing output quality without relying on cloud infrastructure, making it ideal for mobile applications with minimal latency and efficient power usage.
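For illustration, a per-frame-pair stabilization sketch using OpenCV's sparse optical flow is shown below; the feature-tracking parameters and the rigid-transform choice are assumptions, and the PSNR/SSIM evaluation step mentioned above is omitted.

```python
import cv2
import numpy as np

def stabilize_frame(prev_gray: np.ndarray, curr_gray: np.ndarray,
                    curr_frame: np.ndarray) -> np.ndarray:
    """Estimate inter-frame motion with sparse optical flow and warp the current
    frame to compensate for camera shake (a production pipeline would also
    smooth the estimated trajectory over a window of frames)."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    if prev_pts is None:
        return curr_frame
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_curr = curr_pts[status.flatten() == 1]
    if len(good_prev) < 4:
        return curr_frame
    # Partial affine transform (rotation + translation + scale) describing the unwanted motion
    m, _ = cv2.estimateAffinePartial2D(good_curr, good_prev)
    if m is None:
        return curr_frame
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, m, (w, h))
```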
[0064] In yet another embodiment of the present disclosure, the automatic blurring model of the pre-processing unit (212) is disclosed. Further, the automatic blurring model automatically blurs sensitive objects, such as faces and bystanders, in real-time video feeds to ensure privacy while still capturing important traffic data. Further, the objective is to detect traffic and criminal activity and obscure sensitive elements using Edge AI, which facilitates fast on-device processing. Further, real-time processing involves extracting video frames and detecting sensitive objects while keeping vehicles and license plates visible. Further, blurring is applied using techniques like Gaussian blur through OpenCV, with the system calculating bounding boxes around detected objects and dynamically adjusting the blur level based on sensitivity. Furthermore, the output is displayed in real-time on the mobile device, and the processed video is saved locally, preserving the blurred content while maintaining the visibility of relevant activity information. For example, if a mobile device captures a video of a criminal assault and a bystander who is neither involved in nor related to the incident is visible in the frame, then the bystander's face will be blurred to protect their privacy and avoid any infringement on their rights. Furthermore, the encrypted blurred videos are saved in the mobile application gallery to prevent unauthorized sharing or sending to others.
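As a sketch of the blurring step described above, the snippet below applies a Gaussian blur to each sensitive bounding box; the box format and the kernel-size rule are illustrative assumptions, and the boxes are expected to come from the on-device face/bystander detector.

```python
import cv2

def blur_sensitive_regions(frame, boxes, base_kernel: int = 31):
    """Blur each detected sensitive region (e.g. a bystander's face) in place,
    leaving the rest of the frame, such as vehicles and number plates, untouched.
    `boxes` is a list of (x1, y1, x2, y2) pixel bounding boxes."""
    for (x1, y1, x2, y2) in boxes:
        roi = frame[y1:y2, x1:x2]
        if roi.size == 0:
            continue
        # Scale the (odd) kernel with the region width so larger faces receive a stronger blur
        k = max(base_kernel, ((x2 - x1) // 4) | 1)
        frame[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (k, k), 0)
    return frame
```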
[0065] In one embodiment of the present disclosure, the spatial data model (214) may be configured to generate a highly detailed 3D point cloud map of the environment, capturing road lanes, vehicle positions, and surrounding objects. Spatial data means a 3D point cloud representing the environment's spatial structure, precise alignment of one or more objects with lane boundaries, crime scene visualization, and other activities. Further, the spatial data model (214) corresponds to the fusion of LIDAR systems with Dynamic Adaptive LIDAR Resolution (DALR). It must be noted herein that conventional LIDAR systems may face significant limitations due to static point cloud density and resolution, which present challenges in high-speed environments and complex, crowded settings. To address these limitations, the present system proposes Dynamic Adaptive LIDAR Resolution (DALR), allowing the LIDAR to automatically adjust point cloud density in real-time based on traffic speed, vehicle proximity, and environmental complexity. For example, in a busy highway scenario, LIDAR can focus high-resolution scanning around vehicles near lane boundaries, enabling precise detection of wrong-side driving. Additionally, spatial data from LIDAR provides a spatial structure representation, while the DALR system ensures accurate alignment with lane boundaries, even in challenging conditions like fog or rain. Further, a spatial positioning confidence score, such as 0.98, is computed, indicating high precision in detecting and localizing objects or lane boundaries, which is crucial for identifying wrong-side violations. High spatial accuracy is vital to correctly map detected vehicles to their real-world positions, as poor accuracy could result in false positives, such as misidentifying a vehicle's lane position. The spatial positioning confidence score may vary as per the positioning in the 3D environment.
[0066] In another embodiment of the present disclosure, the spatial positioning confidence score is calculated using 3D spatial data captured by the spatial data model (214), which provides depth and positional information for detected objects. Further, the spatial positioning confidence score reflects the alignment of an object's 3D spatial characteristics such as distance, size, and shape with known object profiles stored in a database, indicating the certainty of the LIDAR system in accurately identifying the object's type and position. The quality of the data significantly impacts the confidence score, wherein high-resolution LIDAR captures precise spatial details, resulting in higher confidence scores (e.g., above 0.90) for object identification, while scenarios characterized by sensor noise or lower-resolution data yield lower confidence scores (e.g., around 0.70) due to increased uncertainties in object shape or distance measurements. Furthermore, this scoring system enhances the overall reliability of object positioning within the traffic and criminal activity detection framework.
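The disclosure does not give a closed-form rule for this score; the following is a heavily simplified, hypothetical illustration in which measured LIDAR attributes are compared against a stored object profile and the normalised deviation is mapped to a 0 to 1 confidence. All attribute names, tolerances and the averaging rule are assumptions.

```python
import numpy as np

def spatial_positioning_confidence(measured: dict, profile: dict, tolerances: dict) -> float:
    """Hypothetical scoring rule: compare measured 3D attributes (e.g. length,
    width, height in metres) with a stored object profile and return 1 minus
    the mean normalised deviation, clipped to the 0..1 range."""
    deviations = []
    for key, expected in profile.items():
        tol = tolerances.get(key, 1.0)
        deviations.append(min(abs(measured[key] - expected) / tol, 1.0))
    return float(np.clip(1.0 - np.mean(deviations), 0.0, 1.0))

# Example: a car-sized object measured close to a stored "car" profile scores highly
measured = {"length": 4.4, "width": 1.8, "height": 1.5}
profile = {"length": 4.5, "width": 1.8, "height": 1.5}
print(round(spatial_positioning_confidence(measured, profile,
                                           {"length": 1.0, "width": 0.5, "height": 0.5}), 2))
```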
[0067] In one embodiment of the present disclosure, the object classification model (216) may comprise YOLOv11 with a real-time occlusion-aware classifier (ROAC). The object classification model (216) processes one or more sensing inputs from the Edge AI electronic devices (108) to detect and classify vehicles, pedestrians, and road signs in real-time, providing bounding boxes with confidence scores for each detected object's classification. It must be noted herein that conventional YOLOv11 faces challenges when objects such as vehicles or pedestrians are partially hidden or occluded, leading to misclassification or missed detections in dense traffic scenarios. Additionally, while YOLOv11 is optimized for speed, it may sacrifice accuracy in complex scenarios where objects overlap, resulting in degraded performance in fast-moving scenes.
[0068] To address the above issues, the YOLOv11 model has been enhanced with a real-time occlusion-aware classifier (ROAC). Further, with this enhancement the model incorporates probabilistic edge detection and optical flow analysis, enabling the system to identify and track partially hidden vehicles over time, subsequently reassigning higher confidence scores once visibility is restored. For instance, in a congested street scenario, ROAC can detect a vehicle driving the wrong way behind a large truck, accurately tracking its movement and assigning a high object classification confidence score of 0.93 when it becomes fully visible. The object classification confidence score may vary depending upon different scenarios and results.
[0069] In another embodiment of the present disclosure, the modifications to YOLOv11 include the incorporation of several key features to improve its performance in occlusion-heavy environments. Further, the real-time occlusion-aware classifier (ROAC) predicts hidden parts of objects based on learned patterns, using a custom-designed attention mechanism that focuses on visible portions while extrapolating the occluded areas. Additionally, contextual object reconstruction allows the model to infer missing portions of partially visible objects using context from the surrounding scene. Further, the occlusion-aware bounding box prediction algorithm has been modified to generate more accurate bounding boxes that encompass entire objects, even if parts are missing from view. Further, the dynamic object prioritization focuses on fully visible objects first while handling occluded ones with specialized techniques, ensuring real-time efficiency without sacrificing accuracy. Furthermore, the result is improved detection accuracy in complex environments, enabling reliable object detection even when only small parts are visible, while maintaining the real-time processing speed that YOLOv11 is known for.
[0070] In yet another embodiment of the present disclosure, the Object Classification Confidence Score is derived from visual data captured by the Edge AI electronic devices (108) and processed by the object classification model (216), which identifies and classifies objects using a dedicated classification model. Further, the YOLOv11 assigns a confidence score based on the similarity of the detected object to known classifications, reflecting the model's certainty in its identification. Further, the quality of the visual data significantly influences this score. For example, the high-quality images captured under favourable lighting conditions enable YOLOv11 to classify objects with confidence, resulting in scores exceeding 0.90. Conversely, the low-quality visual data affected by factors such as low light, motion blur, or occlusions can diminish the model's accuracy, leading to reduced confidence scores around 0.75, as the system struggles to reliably match objects to their classifications. Furthermore, confidence scoring enhances the reliability of object detection and classification in the traffic and criminal activity detection framework.
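For illustration, the snippet below reads per-object classification confidence scores and bounding boxes from a YOLO-family detector, again assuming the ultralytics package and an illustrative checkpoint name; it is a sketch of how such scores can be obtained, not the claimed classifier.

```python
import cv2
from ultralytics import YOLO  # assumed packaging of the YOLOv11 detector

model = YOLO("yolo11n.pt")          # illustrative checkpoint name
frame = cv2.imread("scene.jpg")     # a frame captured by the Edge AI electronic device

result = model(frame, verbose=False)[0]
for box in result.boxes:
    cls_name = model.names[int(box.cls)]           # e.g. "car", "person"
    conf = float(box.conf)                         # object classification confidence score
    x1, y1, x2, y2 = map(int, box.xyxy[0])
    print(f"{cls_name}: confidence={conf:.2f}, bbox=({x1},{y1},{x2},{y2})")
```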
[0071] In one embodiment of the present disclosure, the data fusion model (218) is configured for synchronizing and combining LIDAR's 3D point cloud data generated by the spatial data model (214) with YOLOv11's 2D object detection generated by Object classification model (216) to establish a comprehensive spatial positioning and classification understanding of the scene. It must be noted herein that there exists a challenge due to temporal misalignment between 2D object detection and 3D point cloud data. More particularly, the system may frequently struggle to synchronize the data streams from different sensors, particularly in dynamic environments. This may result in mismatched object positions in 2D and 3D data, impairing spatial accuracy. Furthermore, detection delays become critical when monitoring high-speed objects, as standard fusion techniques may not adequately account for rapid movement, complicating object mapping between the 2D and 3D domains. Such delays can adversely affect applications like traffic violation and criminal activities detection, where both speed and accuracy are essential.
[0072] To address the aforementioned limitations, the invention incorporates an Advanced Temporal-Spatial Synchronization System (ATSSS), which facilitates real-time alignment of 2D and 3D data. Further, the ATSSS employs predictive modelling to compensate for time lag, leveraging velocity vectors obtained from both LIDAR and YOLOv11 to predict the future positions of detected objects. For example, in a scenario where a vehicle detected by YOLOv11 is traveling at 80 mph in the wrong direction, the ATSSS predicts its future position to maintain synchronization with the LIDAR's 3D map, effectively preventing misalignment due to speed.
[0073] In another embodiment of the present disclosure, the modifications to enhance the fusion of LIDAR and YOLOv11 data comprise three primary features. First, the ATSSS aligns both data streams in real-time by synchronizing frame rates and timestamps, ensuring accurate correspondence to the same temporal reference. Second, predictive modelling is integrated to anticipate rapid object movements; this approach analyses the velocity vectors of objects from both sensors, allowing for future position forecasting to maintain alignment in fast-moving scenarios. Lastly, the system employs velocity-based adjustment, dynamically refining the fusion process according to the detected velocity of objects, thereby ensuring precise spatial placement in both 2D and 3D domains. Furthermore, the system achieves real-time synchronization and accurate spatial understanding, making it particularly effective for applications such as traffic and criminal activity, where precision and speed are critical.
[0074] In yet another embodiment of the present disclosure, the ATSSS employs a synchronization algorithm that dynamically aligns data streams in real-time, reducing temporal misalignment by up to 50%, and incorporates an adaptive buffer to recalibrate synchronization based on real-time feedback, ensuring high spatial accuracy even under fluctuating conditions. Further, this integration results in a 40% increase in spatial accuracy and a 25% reduction in latency, which is vital for precise object detection in critical applications like traffic activity detection and crime monitoring. Further, the object detection confidence score corresponds to the fusion of the Spatial data model's (214) spatial positioning confidence score and the Object classification model's (216) object classification confidence score. The confidence score is derived as follows: the YOLOv11 model processes video feeds to detect objects and assign initial scores, which ROAC adjusts by predicting occluded parts, and the final scores are integrated with LIDAR data to improve localization. Different scores result from varying conditions, with high confidence scores (e.g., 0.95) indicating clear visual data and minimal occlusion, moderate scores (e.g., 0.85) reflecting partial occlusion or visual disruptions, and low scores (e.g., 0.70) occurring under significant occlusion or poor lighting, illustrating how visual data quality and LIDAR enhance object detection accuracy through the data fusion model (218).
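A minimal sketch of how the two confidence scores may be combined is shown below; the equal weighting and clamping are illustrative assumptions, since the specification does not fix a particular fusion formula.

```python
def object_detection_confidence(spatial_score, classification_score,
                                w_spatial=0.5, w_classification=0.5):
    """Combine the spatial positioning and object classification confidence
    scores into a single object detection confidence score.
    The 50/50 weighting is an illustrative assumption, not a fixed rule."""
    score = w_spatial * spatial_score + w_classification * classification_score
    return max(0.0, min(1.0, score))  # keep the fused score in [0, 1]

# Clear visual data and accurate LIDAR positioning -> high fused confidence
print(object_detection_confidence(0.96, 0.94))  # 0.95
# Partial occlusion lowers the classification score and the fused result
print(object_detection_confidence(0.90, 0.80))  # 0.85
```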
[0075] In one embodiment of the present disclosure, the processing by the processor (202) involves analysing at least one of the following:
a. temporal sequences of one or more objects using a temporal analysis model (220) to model the motion trajectory of the detected objects over time, which helps identify temporal data associated with them and generates a temporal consistency confidence score, and
b. the real-time recognition of one and/or more actions of the objects using a Human Activity Recognition (HAR) model (222) to identify their behavioural data, thereby assigning a behavioural confidence score to the objects.
[0076] In one embodiment of the present disclosure, the temporal analysis model (220) may be a Liquid Neural Network (LNN) with the Event-Based Temporal Adaptation Layer (ETAL). Further, the temporal analysis model (220) is adept at processing sequences of temporal data, tracking vehicle trajectories and identifying sustained activity like wrong-side driving. Further, the temporal data comprises sudden and significant changes in the vehicle's trajectory, criminal intimidation involving the use of weapons, and other activities. It must be noted herein that conventional LNNs face challenges with erratic or non-linear motion, such as abrupt changes in direction or frequent lane switches, which can lead to difficulties in accurately identifying sustained activity. Further, to address this, the Event-Based Temporal Adaptation Layer (ETAL) has been proposed, enhancing the LNN with an event-triggered feedback loop that prioritizes sudden and significant deviations in a vehicle's trajectory. Furthermore, this modification enables the system to quickly adapt to critical movements while ignoring irrelevant historical data, thus ensuring a more responsive and precise detection mechanism.
[0077] In another embodiment of the present disclosure, the ETAL continuously monitors vehicle trajectories in real time, focusing on significant deviations or activity and triggering immediate processing for these events. Further, by effectively ignoring minor historical movements, the modified LNN improves its capacity to detect sustained or repeated activity, such as wrong-side driving. Further, this includes an activity confidence scoring mechanism that reflects the severity and duration of detected activity, allowing the system to flag cases of repeated infractions. For example, if a vehicle briefly switches to the wrong side of the road and then re-enters shortly after, the ETAL ensures that the second instance is prioritized, resulting in a high activity confidence score of 0.97. This mechanism leads to faster and more accurate detection of complex driving behaviours, enhancing overall efficiency in activity detection and behaviour analysis in chaotic traffic scenarios.
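The event-triggered behaviour of the ETAL may be pictured with the following simplified sketch, which flags only sharp heading changes in a trajectory for full processing; the threshold value and helper names are assumptions made for illustration.

```python
import math

def heading(p_prev, p_curr):
    """Heading angle (radians) of the motion between two trajectory points."""
    return math.atan2(p_curr[1] - p_prev[1], p_curr[0] - p_prev[0])

def significant_events(trajectory, heading_threshold_rad=0.5):
    """Flag trajectory points where the heading changes sharply between
    consecutive segments; only these events are forwarded for full temporal
    processing, while minor historical movements are ignored.
    (Angle wrap-around is ignored for brevity.)"""
    events = []
    for i in range(2, len(trajectory)):
        change = abs(heading(trajectory[i - 1], trajectory[i]) -
                     heading(trajectory[i - 2], trajectory[i - 1]))
        if change > heading_threshold_rad:
            events.append((i, change))
    return events

# Hypothetical trajectory: steady travel, then an abrupt swerve across the road
track = [(0, 0), (5, 0), (10, 0), (15, 0), (17, 4), (18, 9)]
print(significant_events(track))  # only the abrupt swerve at index 4 is flagged
```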
[0078] In yet another embodiment of the present disclosure, the temporal analysis model (220) incorporates an Event-Based Temporal Adaptation Layer (ETAL) within the Liquid Neural Networks (LNN), which selectively amplifies responses to significant changes in the temporal sequence, thereby minimizing response time by up to 40% and enhancing pattern recognition accuracy by 20% in variable environments, particularly for subtle behavioural changes. Further, this modification results in a 35% improvement in adaptive accuracy and faster convergence rates, enabling the system to rapidly learn and adjust to new temporal patterns with minimal training data. Further, the temporal consistency confidence score is determined through a comprehensive process involving the collection of time-series data from both the mobile phone camera and LIDAR, ensuring temporal alignment via the Advanced Temporal-Spatial Synchronization System (ATSSS). Further, the Kalman Filter is employed to perform a Temporal Smoothness Check, estimating current and predicting future object states while filtering out noise to maintain smooth tracking. Further, the confidence score is calculated by comparing the Kalman Filter's predictions to the actual detected states, with higher consistency resulting in elevated scores. In one embodiment, the factors influencing this score include object movement patterns, sensor synchronization, and environmental stability, where gradual movements and synchronized data yield high scores (e.g., above 0.90), while abrupt movements, desynchronized inputs, or fluctuating conditions may lower the score (e.g., around 0.70 or 0.75). Furthermore, the temporal consistency confidence score effectively reflects the reliability of object tracking over time, ensuring optimal detection and response to real-time behaviours.
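An illustrative Temporal Smoothness Check of this kind is sketched below using a basic constant-velocity Kalman filter; the mapping from prediction residuals to a confidence score is an assumption made for demonstration, not the claimed formula.

```python
import numpy as np

def temporal_consistency_score(positions, dt=1.0, meas_noise=0.5):
    """Track a sequence of detected 1-D positions with a constant-velocity
    Kalman filter and map the average prediction residual to a temporal
    consistency confidence score in [0, 1]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                     # only position is observed
    Q = np.eye(2) * 1e-3                           # process noise
    R = np.array([[meas_noise ** 2]])              # measurement noise
    x = np.array([[positions[1]], [(positions[1] - positions[0]) / dt]])
    P = np.eye(2)
    residuals = []
    for z in positions[2:]:
        x, P = F @ x, F @ P @ F.T + Q              # predict the next state
        y = np.array([[z]]) - H @ x                # innovation: detection vs prediction
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x, P = x + K @ y, (np.eye(2) - K @ H) @ P  # correct with the measurement
        residuals.append(abs(float(y[0, 0])))
    return float(max(0.0, 1.0 - np.mean(residuals) / (3 * meas_noise)))

# Smooth, gradual motion keeps predictions close to detections -> high score
print(temporal_consistency_score([0.0, 1.0, 2.0, 3.0, 4.1, 5.0]))   # roughly 0.95
# Abrupt, erratic jumps break the smoothness check -> score drops sharply
print(temporal_consistency_score([0.0, 1.0, 5.0, 2.0, 8.0, 3.0]))
```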
[0079] In yet another embodiment of the present disclosure, the Human Activity Recognition (HAR) model (222) for driver behaviour analysis is enhanced through the integration of a Contextual Behavioural Enhancement Network (CBEN) to address the challenges of detecting subtle behaviours in dynamic environments using the behaviour data. Further, the behavioural data comprises erratic or unusual human behaviour, reckless swerving of a vehicle, use of a phone, illegal substance abuse, road-rage assault, and other activities. It must be noted herein that conventional HAR models often struggle to accurately identify actions such as phone use or signs of distraction, primarily due to background noise and rapid vehicle movements. Moreover, conventional HAR models face limitations in analysing driver behaviour. Additionally, conventional HAR models lack context awareness, focusing primarily on gestures without considering surrounding conditions, which can lead to missed correlations between driver actions and driving patterns.
[0080] Thus, to alleviate the above challenges, the present invention integrates the CBEN with HAR model, wherein the CBEN improves the detection capability by not only recognizing driver gestures but also analyzing the surrounding context, including vehicle speed, movement patterns, and traffic conditions. Furthermore, by cross-referencing these contextual factors, the CBEN enables the system to identify unusual or dangerous behaviours, such as a driver using a phone while exhibiting erratic driving, leading to a more nuanced understanding of driver actions.
[0081] In yet another embodiment of the present disclosure, the CBEN further incorporates advanced noise reduction techniques to filter out irrelevant background movements, allowing for real-time detection of meaningful driver behaviours. For example, if a driver is detected using a mobile phone while swerving into the wrong lane, the CBEN can correlate this distraction with the vehicle's abrupt movements. Further, this results in a behavioural confidence score of 0.94, effectively confirming distracted driving as the cause of the wrong-side activity. Furthermore, the enhanced capabilities of the CBEN not only improve the accuracy and reliability of detecting complex driver behaviours but also provide critical behavioural context for assessing traffic activity, ensuring a more robust analysis of driver actions in real-time.
[0082] In yet another embodiment of the present disclosure, the Human Activity Recognition (HAR) model (222) utilizes advanced algorithms to classify specific behaviours, such as turning, accelerating, and decelerating, providing critical context for identifying whether movements align with typical behaviour patterns or indicate unusual actions. Further, the system stores and references historical behaviour patterns, enabling the detection of deviations by comparing new behaviours with established norms. Further, the Behavioural Confidence Score computed is influenced by several factors: predictable and smooth behaviours, such as steady speed and expected lane changes, yield high scores (typically above 0.90), reflecting strong confidence in behaviour identification, while repetitive patterns maintain similar high scores. Conversely, sudden or erratic movements like abrupt lane changes or rapid acceleration are flagged as anomalies, resulting in lower scores (around 0.75 or lower), as the Liquid Neural Network (LNN) assigns lower confidence due to deviations from established norms. Additionally, unseen or unfamiliar behaviours further decrease scores due to the LNN's uncertainty in recognizing unobserved patterns. Further, Environmental factors, such as changing traffic conditions and adverse weather, also impact behaviour recognition; sudden increases in traffic density may prompt rapid behavioural changes, temporarily reducing confidence scores as the system recalibrates. The LNN's adaptability, combined with detailed temporal analysis, robustly generates the Behavioural Confidence Score, ensuring accuracy and clarity in reflecting evolving behaviour patterns.
[0083] In one embodiment of the present disclosure, the proposed enhancement to Symbolic Integration model (224) involves the integration of Symbolic AI and a Real-Time Rule Evolution Engine (RTREE) to improve the evaluation of activity based on predefined traffic laws. It must be noted herein that, the conventional rule-based systems often struggle with efficiency in real-time applications, particularly when multiple detection subsystems provide conflicting signals. Further, to address these challenges, RTREE dynamically adjusts traffic or criminal activity rules by learning from past detections and recalibrating thresholds for different types of activity, such as wrong-side driving. Further, this adaptability enables the system to respond more effectively to new or unexpected behaviours on the road, ultimately reducing false positives. Further, the computed overall activity score is checked against the threshold determined by Symbolic Integration Model (224). For instance, when the system detects a vehicle that momentarily enters the wrong lane but quickly corrects its trajectory, RTREE increases the time threshold required to classify the action as sustained wrong-side driving, resulting in an overall activity confidence score of 0.96.
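The threshold recalibration performed by RTREE may be pictured with the following simplified sketch; the class name, step sizes, and update rule are illustrative assumptions rather than the claimed engine.

```python
class AdaptiveThreshold:
    """Illustrative sketch of a rule threshold that recalibrates itself from
    feedback on past detections (names and update rule are assumptions)."""

    def __init__(self, initial_seconds=2.0, step=0.5,
                 min_seconds=1.0, max_seconds=6.0):
        self.seconds = initial_seconds   # minimum duration counted as sustained wrong-side driving
        self.step = step
        self.min_seconds = min_seconds
        self.max_seconds = max_seconds

    def record_outcome(self, was_false_positive):
        """Raise the duration threshold after false positives (e.g. brief lane
        corrections), lower it slightly after confirmed violations."""
        if was_false_positive:
            self.seconds = min(self.max_seconds, self.seconds + self.step)
        else:
            self.seconds = max(self.min_seconds, self.seconds - self.step / 2)

    def is_sustained(self, observed_seconds):
        return observed_seconds >= self.seconds


rule = AdaptiveThreshold()
rule.record_outcome(was_false_positive=True)     # a momentary wrong-lane entry was flagged wrongly
print(rule.seconds, rule.is_sustained(1.8))      # 2.5 False -> brief corrections no longer flagged
```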
[0084] In another embodiment of the present disclosure, the RTREE enhances the context-aware confidence scoring mechanism of Symbolic AI by continuously recalculating confidence scores based on the reliability of signals from various subsystems, including vehicle detection and driver behaviour analysis. This ensures that only significant activities are flagged with high confidence while accounting for any conflicting data. Further, integration of RTREE results in improved adaptability to changing traffic patterns and driver behaviours, enabling the system to handle complex scenarios without reliance on static, predefined rules. Further, by recalibrating thresholds based on real-time data and past detections, RTREE effectively reduces the occurrence of false positives, providing a robust and reliable evaluation of traffic and criminal activity. Further, if the detected overall activity confidence score is above a certain threshold, the Symbolic Integration model will mark that instance as a detected activity. Furthermore, this innovative approach significantly enhances the overall accuracy and efficiency of activity detection analysis in real-time monitoring systems.
[0085] In another embodiment of the present disclosure, the Symbolic Integration Model (224) proposes a Real-Time Rule Evolution Engine (RTREE), which continuously updates the rule set based on newly acquired data, allowing for real-time adaptation. This model features a sophisticated scoring mechanism that assigns confidence levels to decisions, significantly enhancing decision-making granularity. As a result, the model achieves a 25% increase in decision accuracy while ensuring that confidence scores reflect the most current contextual information. The enhanced Symbolic AI framework produces a more resilient system that demonstrates a 30% improvement in the accuracy of rule application, while the real-time confidence scoring mechanism contributes to a 40% increase in decision reliability. This robust approach makes the system highly effective for nuanced decision-making tasks, including behavioural analysis and activity detection.
[0086] In another embodiment of the present disclosure, the Symbolic Integration model (224) dynamically integrates and balances confidence scores from multiple subsystems including Spatial data model (214), Object classification model (216), data fusion model(218), temporal analysis model (220) and Human Activity Recognition model (222) to produce a final, reliable confidence score for activity detection. This score reflects the robustness and certainty of the overall detection process, ensuring that all relevant subsystems are considered before confirming an activity. By leveraging multiple sources of data and confidence scoring, the system minimizes false positives, reporting only verified activity with high confidence and accuracy. Furthermore, the system generates comprehensive activity reports that detail the violation type, confidence scores, driver behaviour, and spatial positioning, thereby ensuring accountability in activity detection. The final activity detection score is then compared against a threshold generated by the Symbolic Integration Model (224) to validate detection outcomes.
[0087] In one embodiment of the present disclosure, real-time alerts and detailed activity reports are provided to enforcement personnel using a reporting interface, wherein the reports include the computed overall activity confidence score, detected objects and analysis of driver behaviour. Further, the system allows users to submit traffic or criminal activity reports through an app, providing video evidence, GPS location, timestamp, and vehicle details. Further, these reports are first reviewed by AI for clarity, duplication, and basic validation (such as the accuracy of the location and time). Further, if a report is found invalid, the user is notified with an explanation and tips for improving future submissions. Further, valid reports undergo further verification, including manual review by traffic authorities and cross-checking of GPS data. Once validated, users are notified of their reward, which is a percentage of the collected fine. Further, the reward is then distributed to the user's linked payment account after fine collection. Further, the system ensures data security by encrypting all sensitive information, including personal data, video evidence, and payment details. Users can track their report history, including feedback on rejections and rewards, while traffic authorities access a dashboard to monitor reports and analyze activity patterns. Furthermore, the system offers advantages such as automation of processing, transparent communication, enhanced security, and user engagement through rewards and feedback.
[0088] FIG. 3 is a diagram illustrating the application of confidence score computing for detecting one or more activities within the environment, in accordance with an embodiment of the present subject matter. Further, FIG. 3 is explained in conjunction with elements from FIG. 2. Here, in one embodiment of the present disclosure, the Overall Activity Confidence Score is a comprehensive measure that integrates three critical components: the Object Detection Confidence Score, the Temporal Consistency Confidence Score, and the Behavioural Confidence Score. Further, each component may reflect essential aspects of activity analysis such as the accuracy of detecting objects in visual data, the consistency of tracking these objects over time, and the alignment of observed behaviours with expected patterns. Further, the computational logic hinges on the establishment of appropriate weights for each component, which dictate their influence on the final score. Furthermore, these weights (w₁, w₂, w₃) are normalized such that their sum equals one, ensuring a balanced contribution of each score based on its relevance to the specific application context.
[0089] In another embodiment of the present disclosure, to calculate the Overall Activity Confidence Score, a weighted average formula is employed: Overall Activity Confidence Score = (w₁ × Object Detection Confidence) + (w₂ × Temporal Consistency Confidence) + (w₃ × Behavioural Confidence). Before applying this formula, individual component scores are normalized to a common range, typically between 0 and 1. Further, normalization prevents any single score from disproportionately skewing the overall result. Following the calculation of the weighted sum, the final score may be further scaled to align with predetermined thresholds for decision-making, where higher values denote greater confidence in the reliability of the detected activity.
[0090] In another embodiment of the present disclosure, the determination of weighting factors (w₁, w₂, w₃) is highly context dependent. For example, in applications such as surveillance, where accurate object detection is critical, a higher weight may be assigned to the Object Detection Confidence. Conversely, in scenarios requiring continuous monitoring, such as real-time tracking, Temporal Consistency may receive more emphasis. Further, in a dynamic system, these weights can be adjusted in real time based on current conditions, ensuring that the confidence score remains reflective of situational demands. High Overall Activity Confidence Scores indicate reliable activity detection, while lower scores highlight potential issues that may necessitate further scrutiny.
[0091] Working Example: Consider an application with the following weights: w₁ = 0.25 (Object Detection), w₂ = 0.50 (Temporal Consistency), and w₃ = 0.25 (Behavioural Confidence). The component scores are Object Detection Confidence = 0.91, Temporal Consistency Confidence = 0.90, and Behavioural Confidence = 0.95. Applying the formula, Overall Activity Confidence Score = (w₁ × Object Detection Confidence) + (w₂ × Temporal Consistency Confidence) + (w₃ × Behavioural Confidence) = (0.25 × 0.91) + (0.50 × 0.90) + (0.25 × 0.95) = 0.2275 + 0.4500 + 0.2375 = 0.915. Thus, the Overall Activity Confidence Score in this scenario would be 0.915, indicating a high level of confidence in the detected activity.
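The same computation can be expressed as a short Python function, reproducing the working example above; the 0.90 decision threshold mentioned in the comment is illustrative.

```python
def overall_activity_confidence(object_detection, temporal_consistency, behavioural,
                                w1=0.25, w2=0.50, w3=0.25):
    """Weighted average of the three component confidence scores; the weights
    must be normalized so that w1 + w2 + w3 = 1."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights must sum to one"
    return w1 * object_detection + w2 * temporal_consistency + w3 * behavioural

# Values from the working example above
score = overall_activity_confidence(0.91, 0.90, 0.95)
print(round(score, 3))  # 0.915 -> above an illustrative 0.90 threshold, so the activity is registered
```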
[0092] The overall activity confidence score is evaluated on a scale reflecting the reliability of detected activities, with high scores (0.90+) indicating robust individual component performance across object detection, tracking consistency, and behaviour alignment, thereby confirming the activity's registration in the system. Medium scores (0.75-0.89) arise from slight reductions in one or more component scores, such as minor tracking inconsistencies or behavioural deviations, leading to a 50/50 likelihood of acceptance pending further review. Conversely, low scores (below 0.75) signify significant issues, including erratic behaviour, poor object detection, or disrupted temporal consistency, resulting in low confidence and the activity being rejected by the system.
[0093] In one embodiment of the present disclosure, blockchain technology provides a robust framework for activity storage through immutable record-keeping, decentralized networks, and the use of smart contracts. Further, each recorded activity is stored as a transaction on the blockchain, ensuring a permanent and unalterable record that guarantees data integrity and traceability. Further, the decentralized nature of blockchain removes reliance on a central authority, distributing activity data across multiple nodes and enhancing resistance to tampering and unauthorized access. Additionally, smart contracts automate processes such as verification, timestamping, and logging of activities, executing actions automatically when specific conditions are met. Furthermore, blockchain technology ensures consistency and accuracy in data handling while maintaining a transparent view of activities for authorized users, which is especially beneficial for auditing purposes.
[0094] In another embodiment of the present disclosure, key features of blockchain storage for activities include enhanced data integrity, improved security, scalability, and auditability. Further, the immutable nature of blockchain ensures that once activities are recorded, they cannot be altered, making it crucial for applications like activity detection and compliance tracking. Further, data encryption and the decentralized ledger minimize risks associated with data breaches, while consensus mechanisms validate and confirm new entries, enhancing trustworthiness. Further, the clear, traceable history created by timestamped records allows for effective regulatory compliance and investigative purposes. Furthermore, use cases include logging traffic activities, documenting criminal activities, and facilitating real-time monitoring and reporting, thereby ensuring that activity data is secure, accurate, and tamper-proof.
[0095] In one embodiment of the present disclosure, the aforementioned traffic violations and/or criminal activities can be tracked based upon the footage/videos/media captured from one or more static IP enabled CCTV cameras installed within a premise. In this embodiment, the system provides a secure mobile AI application that leverages a dual-layered security model incorporating API key and OAuth 2.0 authentication protocols to ensure secure access and data integrity across all interactions with the one or more static IP enabled CCTV cameras. Additionally, session tokens are generated by the camera, implementing Role-Based Access Control (RBAC) to limit permissions and allow only authorized functions, such as data capture and event handling. Further, the application employs the ONVIF protocol for device discovery and capability enumeration, enabling it to locate the one or more static IP enabled CCTV cameras and retrieve their specifications, including video streaming options and motion detection capabilities. Further, the advanced configuration API access allows the application to adjust camera settings, optimizing parameters like motion sensitivity and video resolution to enhance processing accuracy. Furthermore, real-time command and control is facilitated through REST API calls, enabling dynamic command execution for key camera functions. This includes initiating streaming, capturing on-demand video clips in formats like MP4, and controlling pan-tilt-zoom (PTZ) movements with precise parameters.
[0096] In another embodiment of the present disclosure, the mobile AI application subscribes to event notifications using ONVIF or MQTT protocols, implementing advanced event subscription and custom filtering to identify significant motion types with metadata. Further, upon receiving event notifications, the application retrieves video stream URLs using RTSP, capturing high-resolution clips for verification via HTTP requests. To ensure secure data transmission, all exchanges are encrypted using TLS (Transport Layer Security), while tokenized session management prevents unauthorized reuse of session tokens. Further, the application also processes events in the background, generating and storing validated clips, which are synchronized with a secure server and accessed through a role-based access control-enabled dashboard. In summary, the operational workflow encompasses establishing a secure connection through authentication, enumerating device capabilities, executing commands, subscribing to and filtering events, retrieving data, and ensuring secure transmission while maintaining background processing for validated event clips. Furthermore, this holistic approach guarantees robust security, efficient data handling, and authorized access for further review.
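A minimal sketch of MQTT-based event subscription with custom filtering is given below, assuming the paho-mqtt client (1.x style constructor); the broker address, topic layout, and JSON payload fields are hypothetical.

```python
import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style constructor assumed

BROKER_HOST = "192.168.1.50"      # hypothetical camera/NVR broker address
EVENT_TOPIC = "cameras/+/events/motion"

def on_connect(client, userdata, flags, rc):
    client.subscribe(EVENT_TOPIC)  # subscribe to motion events from all cameras

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)          # payload format is an assumption
    # Custom filtering: only significant motion types trigger clip retrieval
    if event.get("motion_type") in {"vehicle", "person"} and event.get("confidence", 0) > 0.8:
        print("significant event:", msg.topic, event)
        # here the application would fetch the RTSP clip URL for verification

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_forever()
```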
[0097] In one embodiment of the present disclosure, the processor (202) is trained using a federated learning technique, wherein the processor improves over time by learning from each device's local data and weight updates without sharing sensitive information with a central server (104) communicatively coupled with the processor (202). Further, federated learning enables distributed model training across multiple edge AI devices (108), such as traffic cameras, weather sensors, and vehicles, without transferring raw data, thereby preserving privacy and reducing latency. Further, each edge AI device (108) performs local computations, including running Optical Character Recognition (OCR) and Liquid Neural Networks (LNNs), and stores model updates, such as learned weight adjustments, locally. These updates are periodically shared with a central server (104), where they are aggregated without accessing raw data, ensuring privacy. Further, the local error is calculated on each device by comparing its predictions to ground-truth labels, with model weights adjusted through local optimization algorithms like stochastic gradient descent. Further, the central server (104) aggregates these updates to improve a global model, which is then redistributed back to the edge AI devices (108), enhancing performance across varying local conditions, such as lighting, motion, and plate formats. Furthermore, this approach allows real-time detection and reporting of traffic activity with minimal latency, while ensuring scalability, privacy, and secure sharing of insights across devices.
[0098] In order to implement the federated learning technique as described above, the processor (202) employs a federated learning model to train AI algorithms across a network of the edge AI devices. This approach enables the processor to improve over time by learning from data generated locally on each device, thereby enhancing model performance without transferring sensitive data to a central server. To achieve this, secure synchronization protocols may be employed that facilitate the exchange of model updates between devices. These updates are aggregated and processed in a manner that preserves privacy and data security. In some embodiments, various distributed network protocols, including secure aggregation or peer-to-peer connectivity, may be implemented depending on application requirements, ensuring that the federated learning process adapts to different operational environments. This framework allows for continuous model enhancement across devices, optimizing AI accuracy in real-time and aligning with privacy regulations by preventing the transmission of raw data.
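The aggregation step of such federated training may be pictured with the following FedAvg-style sketch; the layer structure and sample counts are hypothetical, and only weight updates (never raw data) are exchanged.

```python
import numpy as np

def federated_average(client_updates, client_sample_counts):
    """Aggregate locally computed weight updates from edge devices into a
    global model update, weighted by how much data each device saw
    (a plain FedAvg-style sketch; no raw data leaves the devices)."""
    total = sum(client_sample_counts)
    layers = len(client_updates[0])
    aggregated = []
    for layer in range(layers):
        weighted = sum(update[layer] * (n / total)
                       for update, n in zip(client_updates, client_sample_counts))
        aggregated.append(weighted)
    return aggregated

# Hypothetical weight updates from three edge devices (one layer each, for brevity)
updates = [[np.array([0.10, -0.20])],
           [np.array([0.05, -0.10])],
           [np.array([0.20, -0.40])]]
samples = [100, 300, 600]   # devices with more local data contribute more
print(federated_average(updates, samples))  # [array([ 0.145, -0.29 ])]
```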
[0099] In one embodiment of the present disclosure, the proposed Offline Mode for traffic activity detection ensures that activities are detected and stored locally on mobile devices when internet connectivity is unavailable, with data automatically syncing to the cloud upon restoration of connectivity. Further, the offline model utilizes Edge AI to process video and sensor data locally, enabling real-time detection of various traffic activities, including mobile phone usage, seat belt activity, speeding, and running red lights. Further, the mobile phone, mounted in a fixed position within the vehicle, activates the traffic activity detection application, which utilizes GPS and GLONASS for continuous real-time location tracking and timestamping, thereby accurately tagging each activity. Further, detected activities are saved locally with metadata, including activity type, geolocation coordinates, and timestamps, and stored in a local database for syncing when internet access is restored. Further, the system operates uninterrupted due to local Edge AI processing, reducing latency and ensuring immediate detection. Furthermore, it periodically checks for internet connectivity, and once available, securely transmits the encrypted activity data to the cloud for centralized storage and processing by authorities, effectively managing intermittent connectivity and ensuring no data is lost.
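A simplified sketch of the offline store-and-sync behaviour is shown below; the database schema, endpoint URL, and payload fields are assumptions, and encryption and authentication are omitted for brevity.

```python
import json
import sqlite3
import urllib.request

DB_PATH = "violations.db"                              # local store on the mobile device
SYNC_URL = "https://example.invalid/api/activities"    # hypothetical cloud endpoint

def init_db():
    con = sqlite3.connect(DB_PATH)
    con.execute("""CREATE TABLE IF NOT EXISTS activities
                   (id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)""")
    return con

def store_locally(con, activity_type, lat, lon, timestamp):
    """Save a detected activity with its metadata while offline."""
    payload = json.dumps({"type": activity_type, "lat": lat, "lon": lon, "ts": timestamp})
    con.execute("INSERT INTO activities (payload) VALUES (?)", (payload,))
    con.commit()

def sync_when_online(con):
    """When connectivity is restored, push unsynced records to the cloud."""
    rows = con.execute("SELECT id, payload FROM activities WHERE synced = 0").fetchall()
    for row_id, payload in rows:
        req = urllib.request.Request(SYNC_URL, data=payload.encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)     # encryption/auth omitted for brevity
            con.execute("UPDATE activities SET synced = 1 WHERE id = ?", (row_id,))
            con.commit()
        except OSError:
            break   # still offline; keep the record and retry on the next check

con = init_db()
store_locally(con, "wrong_side_driving", 21.1702, 72.8311, "2024-11-08T10:15:00Z")
sync_when_online(con)
```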
[00100] Further, the system securely transmits encrypted activity data to a centralized cloud storage for processing by authorized entities, effectively managing intermittent connectivity to ensure no data is lost. This process includes a comprehensive user login mechanism utilizing One-Time Passwords (OTPs) and the submission of Know Your Customer (KYC) details, which are verified through live facial recognition using the mobile device's camera. The integration of live facial verification enhances security measures, addressing potential vulnerabilities associated with traditional methods. Additionally, the submission of bank details is encrypted to further protect sensitive information, thereby establishing a robust framework for user authentication and data integrity in compliance with regulatory standards. It must be noted herein that the sensitive data such as facial data (for example facial images and facial embeddings), and bank details may be encrypted with Encryption model using AES (Advanced Encryption Standard) with a 256-bit key before storing them in the database.
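By way of illustration, AES-256 encryption of such sensitive records may be performed as sketched below using the cryptography library's AES-GCM primitive; key management details are omitted and left as an assumption.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt sensitive data (e.g. facial embeddings or bank details) with
    AES-256 in GCM mode; the random nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)        # 256-bit key, to be held in a secure key store
blob = encrypt_record(key, b'{"account": "XXXX-1234"}')
print(decrypt_record(key, blob))
```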
[00101] In one embodiment of the present invention, an alternate model in the form of an AR Glasses Integration Model for Traffic activity Detection is proposed herein that offers users the flexibility to select either AR glasses or a mobile phone camera for video capture in a traffic activity detection system. Users can connect AR glasses to a mobile app via Wi-Fi or Bluetooth, enabling real-time video streaming, or they can opt to utilize the mobile phone's built-in camera, with the app directly handling video capture. Further, the system employs a unified object detection mechanism, utilizing Liquid Neural Networks (LNN) and the YOLOv11 framework, ensuring consistent detection performance irrespective of the chosen device. Further, the video feed from either source is processed through the detection pipeline, where detected objects and potential activities are analyzed using rule-based algorithms. Further, a Symbolic AI component assigns confidence scores to each detection, and if an activity is identified, it is validated and logged with the corresponding score, maintaining a seamless workflow regardless of the video source. Furthermore, this model enhances the adaptability and effectiveness of traffic and criminal activity detection through streamlined device integration and robust processing capabilities.
[00102] A person skilled in the art will understand that the scope of the disclosure should not be limited to activity detection domain and using the aforementioned techniques. Further, the examples provided in supra are for illustrative purposes and should not be construed to limit the scope of the disclosure.
[00103] Referring to FIG. 4A and FIG. 4B, a flowchart illustrates a method (300) for detecting one or more activities, in accordance with at least one embodiment of the present subject matter. The method (300) may be implemented by an Edge AI electronic device (108) including one or more processors (202) and a memory (204) communicatively coupled to the processor (202), wherein the memory (204) is configured to store processor-executable programmed instructions that, when executed, cause the processor to perform the following steps.
[00104] At step (301), the processor (202) of the one or more Edge AI electronic devices (108) is configured to receive one or more sensing inputs from one or more sensors electronically coupled to the one or more Edge AI electronic devices (108).
[00105] At step (302), the processor (202) is configured to generate spatial data related to one or more objects within the environment using a spatial data model (214) based on the one or more sensing inputs and thereby assigning a spatial positioning confidence score to the one or more objects.
[00106] At step (303), the processor (202) is configured to generate classification data related to the one or more objects by using an Object classification model (216) based on the one or more sensing inputs and the spatial data related to the one or more objects and thereby assigning an object classification confidence score to the one or more objects.
[00107] At step (304), the processor (202) is configured to synchronize and combine the spatial positioning confidence score and the object classification confidence score using a data fusion model (218) and to assign an object detection confidence score to the one or more objects.
[00108] At step (305), the processor (202) is configured to process data using at least one or both of the temporal analysis model (220) and the Human Activity Recognition (HAR) model (222).
[00109] At step (306), the processor (202) is configured to analyze temporal sequences of the one or more objects using temporal analysis model (220) to model the motion trajectory of the one or more objects over time, to identify temporal data associated with the one or more objects and thereby generating a temporal consistency confidence score to the one or more objects.
[00110] At step (307), the processor (202) is configured to recognize one or more actions of the one or more objects in real-time using a Human Activity Recognition (HAR) model (222) to identify the behavioural data of the one or more objects and thereby assigning a behavioural confidence score to the one or more objects.
[00111] At step (308), the processor (202) is configured to process the object detection confidence score, the temporal consistency confidence score and the behavioural confidence score using a Symbolic integration model (224) and to compute an overall activity confidence score.
[00112] At step (309), the processor (202) is configured to compare the overall activity confidence score with the predefined threshold to detect the one or more activities associated with the one or more objects within the environment.
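Taken together, steps (301) to (309) may be pictured with the following end-to-end sketch; the model callables, weights, and threshold are placeholders standing in for the subsystem models (214) to (224) described above.

```python
from types import SimpleNamespace

def detect_activities(sensing_inputs, models, threshold=0.90, w=(0.25, 0.50, 0.25)):
    """End-to-end sketch of steps (301)-(309); the attribute names on the
    models object are hypothetical stand-ins for models (214)-(224)."""
    spatial, s_spatial = models.spatial_data(sensing_inputs)                   # step 302
    _, s_class = models.object_classification(sensing_inputs, spatial)        # step 303
    s_detection = models.data_fusion(s_spatial, s_class)                      # step 304
    s_temporal = models.temporal_analysis(spatial)                            # steps 305-306
    s_behaviour = models.human_activity_recognition(sensing_inputs)           # step 307
    overall = w[0] * s_detection + w[1] * s_temporal + w[2] * s_behaviour     # step 308
    return overall, overall >= threshold                                      # step 309

# Stub models returning fixed confidence scores, purely to show the data flow
models = SimpleNamespace(
    spatial_data=lambda inputs: ({"objects": ["vehicle"]}, 0.93),
    object_classification=lambda inputs, spatial: (["car"], 0.91),
    data_fusion=lambda s1, s2: 0.5 * s1 + 0.5 * s2,
    temporal_analysis=lambda spatial: 0.90,
    human_activity_recognition=lambda inputs: 0.95,
)
print(detect_activities({"camera": "frame", "lidar": "points"}, models))  # (0.9175, True)
```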
[00113] Let us delve into a detailed working example of the present disclosure.
[00114] Example 01:
[00115] Consider the scenario where X, a sports car enthusiast, is driving his vehicle the wrong way on a highway at a speed of 80 mph. The system receives one or more sensing inputs from sensors electronically coupled to the Edge AI electronic devices (108), enabling the generation of spatial data related to the vehicle using the spatial data model (214). This data is crucial as it assigns a spatial positioning confidence score, allowing the system to track the vehicle accurately.
[00116] Simultaneously, the object classification model (216), such as YOLOv11, processes the video feed to identify the vehicle and generates classification data, assigning an object classification confidence score. The Advanced Temporal-Spatial Synchronization System (ATSSS) plays a critical role by utilizing the vehicle's velocity and direction data to predict its future position in real-time, ensuring that detection in the 2D domain from YOLOv11 is synchronized with the 3D LIDAR point cloud data, thereby mitigating potential misalignments due to the vehicle's high speed.
[00117] The processor (202) then synchronizes and combines the spatial positioning confidence score and the object classification confidence score using a data fusion model (218), resulting in an object detection confidence score that reflects the vehicle's accurate tracking. Additionally, the processor can model the vehicle's motion trajectory over time using the temporal analysis model (220), generating a temporal consistency confidence score that enhances detection accuracy by predicting the vehicle's trajectory and accounting for its high velocity.
[00118] This comprehensive approach allows the system to maintain real-time synchronization between LIDAR's 3D point cloud data and YOLOv11's 2D detection, effectively managing the challenges posed by fast-moving objects. Furthermore, by integrating LIDAR's depth information with YOLOv11's classifications through ATSSS, the system achieves a sophisticated spatial understanding of the environment. Ultimately, the processor computes an overall activity confidence score by processing the object detection, temporal consistency, and any behavioural confidence scores using the symbolic integration model (224). This score is compared to a predefined threshold to detect activities associated with the vehicle, enhancing the system's ability to monitor traffic conditions and detect activity with high accuracy, regardless of vehicle speed.
[00119] Example 02:
[00120] In the context of the method (300) for detecting activities using Edge-AI electronic devices (108), consider the scenarios of assault detection and weapon detection. For assault detection, the system receives one or more sensing inputs from sensors, such as cameras or motion detectors, electronically coupled to the Edge AI devices (108). The processor (202) generates spatial data related to individuals within the environment using the spatial data model (214), assigning spatial positioning confidence scores to the detected individuals. Concurrently, the Human Activity Recognition (HAR) model (222) analyses movements to identify aggressive gestures, such as repeated punching or forceful shoving.
[00121] When such patterns are sustained and align with known assault behaviours, the HAR model (222) assigns a high behavioural confidence score to the actions detected. This data is then synchronized and combined with the spatial positioning confidence score using a data fusion model (218), resulting in a comprehensive object detection confidence score for the individual exhibiting aggressive behaviour. The processor processes the object detection confidence score along with any temporal consistency scores generated from analyzing the movement trajectories over time, culminating in an overall activity confidence score.
[00122] In a separate but related scenario, the system uses object detection algorithms through the object classification model (216) to assess the scene for potential weapons. The processor analyses the sensing inputs to identify objects that match the shapes and characteristics of known weapons, like guns or knives. If an object closely resembles a weapon, the system assigns a high object classification confidence score. This score is integrated into the overall detection framework, and the processor computes the overall activity confidence score, which is compared against a predefined threshold to determine the potential presence of a weapon.
[00123] By leveraging the integration of HAR model (222) for aggressive behaviour and object detection for weapon identification, the system effectively monitors environments for threats. This dual approach enhances situational awareness and contributes to timely alerts for potential assaults or weapon presence, ultimately improving safety and security.
[00124] A person skilled in the art will understand that the scope of the disclosure is not limited to scenarios based on the aforementioned factors and using the aforementioned techniques, and that the examples provided do not limit the scope of the disclosure.
[00125] FIG. 5 illustrates a block diagram of an exemplary computer system (401) for implementing embodiments consistent with the present disclosure. Variations of computer system (401) may be used for detecting one or more activity. The computer system (401) may comprise a central processing unit ("CPU" or "processor") (402). The processor (402) may comprise at least one data processor for executing program components for executing user or system generated requests. A user may include a person, a person using a device such as those included in this disclosure, or such a device itself. Additionally, the processor (402) may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, or the like. In various implementations the processor (402) may include a microprocessor, such as AMD Athlon, Duron or Opteron, ARM's application, embedded or secure processors, IBM PowerPC, Intel's Core, Itanium, Xeon, Celeron or other line of processors, for example. Accordingly, the processor (402) may be implemented using mainframe, distributed processor, multi-core, parallel, grid, or other architectures. Some embodiments may utilize embedded technologies like application-specific integrated circuits (ASICs), digital signal processors (DSPs), or Field Programmable Gate Arrays (FPGAs), for example.
[00126] Processor (402) may be disposed in communication with one or more input/output (I/O) devices via I/O interface (403). Accordingly, the I/O interface (403) may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX, or the like), for example.
[00127] Using the I/O interface (403), the computer system (401) may communicate with one or more I/O devices. For example, the input devices (404) may be a camera, microphone, touch screen, sensor (e.g., light sensor, GPS), video devices/source, for example. Likewise, an output device (405) may be a user's smart phone, a static IP enabled CCTV camera, AR glasses, handset, handheld devices, dashcam, for example. In some embodiments, a transceiver (406) may be disposed in connection with the processor (402). The transceiver (406) may facilitate various types of wireless transmission or reception. For example, the transceiver (406) may include an antenna operatively connected to a transceiver chip (example devices include the Texas Instruments® WiLink WL1283, Broadcom® BCM4750IUB8, Infineon Technologies® X-Gold 618-PMB9800, or the like), providing IEEE 802.11a/b/g/n, Bluetooth, FM, global positioning system (GPS), and/or 2G/3G/5G/6G HSDPA/HSUPA communications, for example.
[00128] In some embodiments, the processor (402) may be disposed in communication with a communication network (408) via a network interface (407). The network interface (407) is adapted to communicate with the communication network (408). The network interface (407) may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, or IEEE 802.11a/b/g/n/x, for example. The communication network (408) may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), or the Internet, for example. Using the network interface (407) and the communication network (408), the computer system (401) may communicate with devices such as a mobile/cellular phone (410). Other exemplary devices may include, without limitation, various mobile devices such as smartphones (e.g., Apple iPhone, Blackberry, Android-based phones, etc.), static IP enabled CCTV cameras, and AR glasses, or the like. In some embodiments, the computer system (401) may itself embody one or more of these devices.
[00129] In some embodiments, the processor (402) may be disposed in communication with one or more memory devices (e.g., RAM 413, ROM 414, etc.) via a storage interface (412). The storage interface (412) may connect to memory devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent discs (RAID), solid-state memory devices, or solid-state drives, for example.
[00130] The memory devices may store a collection of program or database components, including, without limitation, an operating system (416), user interface application (417), web browser (418), mail client/server (419), user/application data (420) (e.g., any data variables or data records discussed in this disclosure) for example. The operating system (416) may facilitate resource management and operation of the computer system (401). Examples of operating systems include, without limitation, Apple Macintosh OS X, UNIX, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like.
[00131] The user interface (417) is for facilitating the display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces (417) may provide computer interaction interface elements on a display system operatively connected to the computer system (401), such as cursors, icons, check boxes, menus, scrollers, windows, or widgets, for example. Graphical user interfaces (GUIs) may be employed, including, without limitation, Apple Macintosh operating systems' Aqua, IBM OS/2, Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, or web interface libraries (e.g., ActiveX, Java, JavaScript, AJAX, HTML, Adobe Flash, etc.), for example.
[00132] In some embodiments, the computer system (401) may implement a web browser (418) stored program component. The web browser (418) may be a hypertext viewing application, such as Microsoft Internet Explorer, Google Chrome, Mozilla Firefox, Apple Safari, or Microsoft Edge, for example. Secure web browsing may be provided using HTTPS (secure hypertext transport protocol), secure sockets layer (SSL), Transport Layer Security (TLS), or the like. Web browsers may utilize facilities such as AJAX, DHTML, Adobe Flash, JavaScript, Java, or application programming interfaces (APIs), for example. In some embodiments the computer system (401) may implement a mail client/server (419) stored program component. The mail server (419) may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as ASP, ActiveX, ANSI C++/C#, Microsoft .NET, CGI scripts, Java, JavaScript, PERL, PHP, Python, or WebObjects, for example. The mail server (419) may utilize communication protocols such as internet message access protocol (IMAP), messaging application programming interface (MAPI), Microsoft Exchange, post office protocol (POP), simple mail transfer protocol (SMTP), or the like. In some embodiments, the computer system (401) may implement a mail client (420) stored program component. The mail client (420) may be a mail viewing application, such as Apple Mail, Microsoft Entourage, Microsoft Outlook, or Mozilla Thunderbird.
[00133] In some embodiments, the computer system (401) may store user/application data (421), such as the data, variables, records, or the like as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase, for example. Alternatively, such databases may be implemented using standardized data structures, such as an array, hash, linked list, struct, structured text file (e.g., XML), table, or as object-oriented databases (e.g., using ObjectStore, Poet, Zope, etc.). Such databases may be consolidated or distributed, sometimes among the various computer systems discussed above in this disclosure. It is to be understood that the structure and operation of any computer or database component may be combined, consolidated, or distributed in any working combination.
[00134] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present invention. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
[00135] Various embodiments of the disclosure encompass numerous advantages, including methods and systems for detecting one or more activities. The disclosed method and system have several technical advantages, including but not limited to the following:
• Larger Coverage Area: The use of mobile phone cameras provides access to a broader network area compared to static CCTV cameras. This flexibility allows for monitoring in various environments, including rural or underserved areas, thus expanding the reach of traffic and criminal enforcement initiatives and ensuring that all areas receive adequate attention.
• Real-Time Detection: Facilitates immediate identification of traffic violations and dangerous criminal behaviours, enabling prompt intervention and increasing overall safety.
• Dynamic Adaptability: Incorporates machine learning algorithms that can adapt to evolving traffic patterns and behaviours, reducing false positives and improving accuracy in traffic and criminal activity detection.
• User-Generated Content Management: Effectively processes and stores large volumes of user-generated data, distinguishing relevant evidence from irrelevant submissions to enhance enforcement efficiency.
• Improved Occlusion Handling: Employs advanced object detection techniques that minimize misclassification and missed detections of partially hidden objects, addressing a significant limitation of conventional systems.
• Synchronized Data Streams: Ensures accurate mapping of 2D and 3D data through improved synchronization, allowing for precise spatial accuracy even in dynamic environments.
• Context-Aware Analysis: Integrates contextual awareness into driver behaviour analysis, enabling the detection of subtle actions such as phone usage or signs of distraction, which conventional systems often overlook.
• Reduced Processing Delays: Optimizes historical data processing to ensure timely detection of critical events, such as lane changes, which is vital in high-speed scenarios.
• Scalability and Flexibility: Designed to easily scale and adapt to various traffic scenarios, accommodating a wide range of vehicle types and driving behaviours without the constraints of predefined rules.
• Cost-Effectiveness: Leverages existing mobile technology, reducing the need for expensive infrastructure investments in fixed monitoring systems.
• Comprehensive Behavioural Insights: Gathers detailed insights into driving patterns and behaviours over time, aiding in the development of targeted interventions and public safety campaigns.
• Predictive Capabilities: Anticipates traffic and criminal activity before they occur, providing authorities with a better opportunity to foresee accidents and traffic and criminal activities. This proactive approach not only enhances safety but also helps in resource allocation, allowing law enforcement to focus on high-risk areas and reduce the likelihood of incidents.
• Automated Processing: The system module automates the processing of video files, significantly reducing the need for manual intervention. This not only speeds up the analysis but also minimizes human error, ensuring that traffic and criminal activities are detected and reported consistently and accurately.
• Reduced Administrative Overhead: By automating key aspects of the detection process, the system decreases the administrative burden on traffic authorities and government entities. This leads to a more efficient workflow, enabling staff to concentrate on strategic planning and community outreach while enhancing overall operational effectiveness.
• Transparency with Authorities: Blockchain technology creates a transparent system where both violators and authorities can access and verify activity data. The hashes stored on the blockchain ensure the integrity of evidence and activity records, fostering trust between the public and enforcement agencies. Additionally, this transparency can deter fraudulent claims and provide a clear audit trail for accountability.
• Real-time Data Sharing: The system facilitates real-time data sharing between various stakeholders, including law enforcement, city planners, and emergency services. This interconnectedness enhances coordination during traffic incidents and allows for a unified response strategy, improving overall safety and efficiency.
• Adaptive Learning: By integrating machine learning algorithms, the system can continuously improve its predictive capabilities based on historical data and emerging traffic patterns. This adaptive learning process ensures that the system evolves with changing traffic dynamics, enhancing its long-term effectiveness.
• Public Engagement: The technology can be used to engage the public through mobile apps that allow citizens to report activity or receive alerts about traffic conditions. This fosters a sense of community involvement and encourages responsible driving behaviour, ultimately contributing to safer roads.
• High Confidence and Accuracy: The use of multiple subsystems (LIDAR, YOLOv11, HAR, LNN) with confidence scoring ensures that only verified activities are reported, minimizing false positives.
• Real-Time Decision Making: Neuromorphic technology ensures low-latency processing, while Symbolic AI provides transparent, rule-based activity detection.
• Comprehensive Activity Reporting: The system provides detailed reports, including the activity type, confidence scores, driver behaviour, and spatial positioning, ensuring accountability.
[00136] In summary, these technical advantages solve the technical problem of providing a more convenient, less invasive, and continuous method for detecting one or more activities, thereby addressing the challenges associated with conventional detection methods such as false positives, limited coverage, and data privacy concerns during processing. Additionally, these advantages contribute to improved user compliance, improved accuracy in detecting one or more activities, and the potential for cost savings, ultimately enhancing the overall management of activities, compliance, and related conditions.
[00137] The claimed invention of a system and a method for detecting one or more activities involves tangible components, processes, and functionalities that interact to achieve specific technical outcomes. The system integrates various elements such as processors, memory, databases, encryption, and authorization and authentication techniques to effectively detect one or more activities.
[00138] Furthermore, the invention involves a non-trivial combination of technologies and methodologies that provides a technical solution to a technical problem. While individual components like processors, databases, encryption, and authorization and authentication are well known in the field of computer science, their integration into a comprehensive system for detecting one or more activities brings about an improvement and technical advancement in the field.
[00139] In light of the above-mentioned advantages and the technical advancements provided by the disclosed method and system, the claimed steps as discussed above are not routine, conventional, or well understood in the art, as the claimed steps provide solutions to the existing problems in conventional technologies. Further, the claimed steps clearly bring an improvement in the functioning of the devices themselves, as the claimed steps provide a technical solution to a technical problem.
[00140] The present disclosure may be realized in hardware, or a combination of hardware and software. The present disclosure may be realized in a centralized fashion, in at least one computer system, or in a distributed fashion, where different elements may be spread across several interconnected computer systems. A computer system or other apparatus adapted for carrying out the methods described herein may be suitable. A combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, may control the computer system such that it carries out the methods described herein. The present disclosure may be realized in hardware that comprises a portion of an integrated circuit that also performs other functions.
[00141] A person with ordinary skills in the art will appreciate that the systems, modules, and sub-modules have been illustrated and explained to serve as examples and should not be considered limiting in any manner. It will be further appreciated that the variants of the above disclosed system elements, modules, and other features and functions, or alternatives thereof, may be combined to create other different systems or applications. Those skilled in the art will appreciate that any of the aforementioned steps and/or system modules may be suitably replaced, reordered, or removed, and additional steps and/or system modules may be inserted, depending on the needs of a particular application. In addition, the systems of the aforementioned embodiments may be implemented using a wide variety of suitable processes and system modules, and are not limited to any particular computer hardware, software, middleware, firmware, microcode, and the like. The claims can encompass embodiments for hardware and software, or a combination thereof.
[00142] While the present disclosure has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted without departing from the scope of the present disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from its scope. Therefore, it is intended that the present disclosure is not limited to the particular embodiment disclosed, but that the present disclosure will include all embodiments falling within the scope of the appended claims.
Claims:
I/WE CLAIM:
1. A method (300) for detecting one or more activities on one or more Edge AI electronic devices (108) using Edge AI and machine learning techniques, the method (300) comprising:
receiving, via a processor (202) of the one or more Edge AI electronic devices (108), one or more sensing inputs from one or more sensors electronically coupled to the one or more Edge AI electronic devices (108);
generating, via the processor (202), spatial data related to one or more objects within the environment using a spatial data model (214) based on the one or more sensing inputs, and thereby assigning a spatial positioning confidence score to the one or more objects;
generating, via the processor (202), classification data related to the one or more objects by using an Object classification model (216) based on the one or more sensing inputs and the spatial data related to the one or more objects, and thereby assigning an object classification confidence score to the one or more objects;
synchronizing and combining, via the processor (202), the spatial positioning confidence score and the object classification confidence score using a data fusion model (218), to assign an object detection confidence score to the one or more objects;
processing, via the processor (202), at least one of:
temporal sequences of the one or more objects using a temporal analysis model (220) to model the motion trajectory of the one or more objects over time, to identify temporal data associated with the one or more objects and thereby assigning a temporal consistency confidence score to the one or more objects, and/or
one or more actions of the one or more objects, in real-time, using a Human Activity Recognition (HAR) model (222) to identify the behavioural data of the one or more objects and thereby assigning a behavioural confidence score to the one or more objects;
processing, via the processor (202), the object detection confidence score, the temporal consistency confidence score and the behavioural confidence score using a Symbolic integration model (224), to compute an overall activity confidence score;
comparing, via the processor (202), the overall activity confidence score with a predefined threshold to detect the one or more activities associated with the one or more objects within the environment.
2. The method (300) as claimed in claim 1, wherein the one or more sensors comprise a front camera sensor, a rear camera sensor, a recorder, a lidar, a motion sensor, a GPS sensor, a microphone, or a combination thereof.
3. The method (300) as claimed in claim 1, wherein the one or more sensing inputs comprise images, videos, audio recordings, linear acceleration inputs, sensor inputs, geolocation data, environmental data, texts, or a combination thereof.
4. The method (300) as claimed in claim 1, wherein the Edge Unit (208) comprises Edge AI with Neuromorphic Processing.
5. The method (300) as claimed in claim 1, wherein the Edge AI electronic devices (108) include a smartphone, AR glasses, a static IP enabled CCTV camera, a handheld device, a dashcam, or a combination thereof.
6. The method (300) as claimed in claim 3, wherein the method (300) comprises pre-processing (208), via the processor (202), the one or more sensing inputs to enhance the one or more sensing inputs.
7. The method (300) as claimed in claim 6, wherein the pre-processing (208) of the one or more sensing inputs comprises automatic blurring of one or more sensitive objects in the videos or images, or a combination thereof, using an automatic blurring model.
8. The method (300) as claimed in claim 6, wherein the pre-processing (208) of the one or more sensing inputs comprises identifying adverse environmental conditions using an environmental condition classifier model and dynamically adjusting the capturing of the one or more sensing inputs.
9. The method (300) as claimed in claim 6, wherein the pre-processing (208) of the one or more sensing inputs comprises enhancing the clarity of images or video frames by reducing motion blur caused by rapid movements of objects or the camera itself, using a stabilization model.
10. The method (300) as claimed in claim 6, wherein the pre-processing (208) of the one or more sensing inputs comprises trimming videos to a specified length (e.g., 20 seconds) while enhancing video quality and ensuring efficient processing, using a video editing and trimming model.
11. The method (300) as claimed in claim 1, wherein the spatial data comprises a 3D point cloud representing the environment's spatial structure, precise alignment of one or more objects with lane boundaries, crime scene visualization, and other activities.
12. The method (300) as claimed in claim 1, wherein the temporal data comprises sudden and significant changes in a vehicle's trajectory, criminal intimidation with the use of weapons, and other activities.
13. The method (300) as claimed in claim 1, wherein the behavioural data comprises erratic or unusual human behaviour, reckless swerving of a vehicle, use of a phone, illegal substance abuse, road rage assault, and other activities.
14. The method (300) as claimed in claim 1, wherein the one or more activities detected comprise criminal activity, wrong-side driving, wrong parking, driving without using a signal indicator, drivers with children in the driver's seat, starting without waiting for the traffic signal, driving in the wrong lane, parking in a non-designated area, driving with more than three or four passengers, using vehicles with black-tinted windows, driving with expired PUC certification, stopping a vehicle abruptly without indication, emitting smoke and fines for the PUC issuer/vendor, installing unauthorized LED lights on vehicles, driving non-insured vehicles, wrong parking on the road, unsafe overtaking causing an accident, trucks driving in the wrong lane on highways, highway accidents, riding without a helmet, talking on the phone while driving, spitting on the road while driving, driving that causes nuisance to others, city and school buses driving at high speed, fancy or incorrect number plates, overloading a rickshaw, using high-beam lights within the city, fine for loud horns exceeding decibel limits, driving vehicles with significant body damage, rickshaw stopping incorrectly to pick up passengers, driving outside designated lanes, speed limit and lane-driving activity, rash driving, driving vehicles older than 15 years, underage driving without a license, driving after disqualification, driving in an unfit mental or physical state, driving an oversized vehicle, not yielding to emergency vehicles, or a combination thereof.
15. The method (300) as claimed in claim 1, wherein the one or more Edge AI electronic devices (108) are mounted on a vehicle to capture the one or more sensing inputs using the one or more sensors, and wherein the sensing inputs are processed by a processing unit coupled with the processor (202) to detect the one or more activities.
16. The method (300) as claimed in claim 1, wherein an activity dashboard is provided for real-time analytics and reporting for authorities.
17. The method (300) as claimed in claim 1, wherein an AI model or application running on the processor (202) is trained using a federated learning technique, wherein the model improves over time by learning from device local data and adjusting model weights without sharing sensitive information with a central server (104) communicatively coupled with the processor (202).
18. The method (300) as claimed in claim 1, wherein the detected activity is reported and stored in a blockchain-based decentralized ledger, wherein multiple mobile devices act as validators to verify the activity, and wherein the stored data is synchronized with a central server (104) when operating in low-connectivity areas.
19. The method (300) as claimed in claim 1, further comprising generating, via the processor (202), real-time alerts and detailed activity reports for enforcement personnel using a reporting interface, wherein the reports include the computed overall activity confidence score, the detected objects, and an analysis of driver behaviour.
20. An Edge AI electronic device (108) for detecting one or more activities in an environment based on artificial intelligence (AI) and machine learning techniques, the Edge AI electronic device (108) comprising:
a processor (202); and
a memory (204), wherein the processor (202) is configured to execute programmed instructions stored in the memory (204), for:
receiving, one or more sensing inputs from one or more sensors electronically coupled to the one or more Edge AI electronic devices (108);
generating, spatial data related to one or more objects within the environment using a spatial data model (214) based on one or more sensing inputs, and thereby assigning a spatial positioning confidence score to the one or more objects;
generating, classification data related to the one or more objects by using an Object classification model (216) based on the one or more sensing inputs and the spatial data related to the one or more objects, and thereby assigning an object classification confidence score to the one or more objects;
synchronizing and combining, the spatial positioning confidence score and the object classification confidence score using a data fusion model (218), to assign an object detection confidence score to the one or more objects detected;
processing, at least one of:
temporal sequences of the one or more objects detected using a temporal analysis model (220) to model the motion trajectory of the one or more objects over time, to identify temporal data associated with the one or more objects and thereby assigning a temporal consistency confidence score to the one or more objects, and/or
one or more actions of the one or more objects, in real-time, using a Human Activity Recognition (HAR) model (222) to identify the behavioural data of the one or more objects and thereby assigning a behavioural confidence score to the one or more objects;
processing, the object detection confidence score, the temporal consistency confidence score and the behavioural confidence score using a Symbolic integration model (224), to compute an overall activity confidence score;
comparing, the overall activity confidence score with a predefined threshold to detect the one or more activities associated with the one or more objects within the environment.
21. The Edge AI electronic devices (108) as claimed in claim 20, wherein the Edge Unit (208) comprises Edge AI with Neuromorphic Processing.
22. The Edge AI electronic devices (108) as claimed in claim 20, wherein the Edge AI electronic devices (108) include a smartphone, AR glasses, a static IP enabled CCTV camera, a handheld device, a dashcam, or a combination thereof.
23. The Edge AI electronic devices (108) as claimed in claim 20, wherein the one or more sensors comprise a front camera sensor, a rear camera sensor, a recorder, a lidar, a motion sensor, a GPS sensor, a microphone array, or a combination thereof.
24. The Edge AI electronic devices (108) as claimed in claim 20, wherein the one or more sensing inputs comprise images, videos, audio recordings, sensor inputs, geolocation data, environmental data, texts, or a combination thereof.
25. The Edge AI electronic devices (108) as claimed in claim 20, wherein the one or more activities detected comprise criminal activity, wrong-side driving, wrong parking, driving without using a signal indicator, drivers with children in the driver's seat, starting without waiting for the traffic signal, driving in the wrong lane, parking in a non-designated area, driving with more than three or four passengers, using vehicles with black-tinted windows, driving with expired PUC certification, stopping a vehicle abruptly without indication, emitting smoke and fines for the PUC issuer/vendor, installing unauthorized LED lights on vehicles, driving non-insured vehicles, wrong parking on the road, unsafe overtaking causing an accident, trucks driving in the wrong lane on highways, highway accidents, riding without a helmet, talking on the phone while driving, spitting on the road while driving, driving that causes nuisance to others, city and school buses driving at high speed, fancy or incorrect number plates, overloading a rickshaw, using high-beam lights within the city, fine for loud horns exceeding decibel limits, driving vehicles with significant body damage, rickshaw stopping incorrectly to pick up passengers, driving outside designated lanes, speed limit and lane-driving activity, rash driving, driving vehicles older than 15 years, underage driving without a license, driving after disqualification, driving in an unfit mental or physical state, driving an oversized vehicle, not yielding to emergency vehicles, or a combination thereof.
26. The Edge AI electronic devices (108) as claimed in claim 20, wherein an AI model or application running on the processor (202) is trained using a federated learning technique, wherein the model improves over time by learning from device local data and adjusting model weights without sharing sensitive information with a central server (104) communicatively coupled with the processor (202).

Dated this 08th Day of November 2024

ABHIJEET GIDDE
IN/PA- 4407
AGENT FOR THE APPLICANT

Documents

Name | Date
Abstract.jpg | 28/11/2024
202421086218-FORM 18A [11-11-2024(online)].pdf | 11/11/2024
202421086218-COMPLETE SPECIFICATION [08-11-2024(online)].pdf | 08/11/2024
202421086218-DECLARATION OF INVENTORSHIP (FORM 5) [08-11-2024(online)].pdf | 08/11/2024
202421086218-DRAWINGS [08-11-2024(online)].pdf | 08/11/2024
202421086218-FIGURE OF ABSTRACT [08-11-2024(online)].pdf | 08/11/2024
202421086218-FORM 1 [08-11-2024(online)].pdf | 08/11/2024
202421086218-FORM-9 [08-11-2024(online)].pdf | 08/11/2024
202421086218-POWER OF AUTHORITY [08-11-2024(online)].pdf | 08/11/2024
202421086218-REQUEST FOR EARLY PUBLICATION(FORM-9) [08-11-2024(online)].pdf | 08/11/2024
