SYSTEM AND METHOD FOR ENHANCING NETWORK COVERAGE AND THROUGHPUT IN UNMANNED AERIAL VEHICLES-BASED DISASTER MANAGEMENT
ORDINARY APPLICATION
Published
Filed on 12 November 2024
Abstract
A system and method for enhancing network coverage and throughput in unmanned aerial vehicles (UAV)-based disaster management is provided. The method includes deploying UAVs 102A-N in a disaster zone; obtaining position data, environmental data and user demand data of each UAV 102A; determining an optimal position of each UAV 102A using a reinforcement learning approach 108 to control the flight path of the UAV 102A based on the position data and environmental data; and determining an optimal transmission power level for each UAV 102A by a ground control unit 104 based on the user demand data and environmental data using a deep neural network model 110 to allocate transmission power to each UAV. The ground control unit 104 continuously monitors the network performance of each UAV 102A and dynamically adjusts the flight path, transmission power level, and signal strength through a feedback loop mechanism, thereby enhancing 5G wireless network coverage and throughput and enabling adaptation to changing conditions in disaster management within the disaster zone. FIG. 4A
Patent Information
Field | Value |
---|---|
Application ID | 202441087226 |
Invention Field | COMMUNICATION |
Date of Application | 12/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. Sudhanshu Arya | Department of Communication Engineering, PRP BLOCK 315 AB 26, SENSE, School of Electronics Engineering, Technology Tower, Vellore Institute of Technology, Tiruvalam Rd, KATPADI, Tamil Nadu, India, 632014. | India | India |
Dr. Saranya K C | Department of Communication Engineering, TT435A, SENSE, School of Electronics Engineering, Technology Tower, Vellore Institute of Technology, Tiruvalam Rd, KATPADI, Tamil Nadu, India, 632014. | India | India |
Dr. Yogesh Kumar Choukiker | Department of Communication Engineering, CBMR-207B, SENSE, School of Electronics Engineering, Technology Tower, Vellore Institute of Technology, Tiruvalam Rd, KATPADI, Tamil Nadu, India, 632014 | India | India |
Dr. Abhijit Bhowmick | Department of Communication Engineering, CBMR-102C, SENSE, School of Electronics Engineering, Technology Tower, Vellore Institute of Technology, Tiruvalam Rd, KATPADI, Tamil Nadu, India, 632014. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
VELLORE INSTITUTE OF TECHNOLOGY | VELLORE INSTITUTE OF TECHNOLOGY, KATPADI, VELLORE - 632014, TAMIL NADU, INDIA | India | India |
Specification
Description: The embodiments herein generally relate to communication systems and more
particularly to a system and method for enhancing network coverage and throughput in
unmanned aerial vehicles (UAVs)-based disaster management using a reinforcement learning
(RL) approach and a deep neural network model.
Description of the Related Art
[0002] In disaster management, effective communication is critical for coordinating relief efforts, sharing situational updates, and providing aid to those affected. However, traditional communication infrastructure, such as cellular networks and internet services, is often compromised or entirely unavailable during severe events like earthquakes, hurricanes, and floods. These disruptions severely limit the ability of responders to communicate with each other and the affected population, hampering effective disaster response and recovery. Many existing systems are not designed to function under such compromised conditions, leading to delays and inefficiencies in response coordination.
[0003] Current approaches to communication in disaster management typically rely on fixed infrastructure or basic algorithms that operate on pre-set assumptions. While these systems perform adequately in stable environments, they lack the flexibility to adapt to the unpredictable and dynamic nature of disaster scenarios. For instance, static algorithms do not account for sudden shifts in network availability or rapidly changing situational demands, which are common during disasters. Consequently, these limitations often result in communication blackouts or poor information flow, which can endanger lives and reduce the effectiveness of relief operations.
[0004] There is a need for developing a communication system capable of adapting to changing disaster conditions with enhanced network coverage and throughput.
SUMMARY
[0005] In view of the foregoing, an embodiment herein provides a system for enhancing 5G wireless network coverage and throughput in disaster management within a disaster zone. The system includes one or more unmanned aerial vehicles (UAVs) that are deployed in the disaster zone to establish a 5G wireless network for one or more receiver nodes. Each UAV includes (i) a sensor unit that is configured to obtain at least one of position data, environmental data and user demand data, where the position data includes geographical coordinates, altitude, speed, and heading information of the one or more UAVs, the environmental data includes signal strength, user density, and channel propagation conditions, and the user demand data includes the number of active users and data requirements of the users; and (ii) a communication unit including a 5G transceiver and an antenna array for transmitting and receiving data across the 5G wireless network.
[0006] The system further includes a ground control unit that is connected with the one or more UAVs and is configured to: (i) receive the position data, the environmental data and the user demand data that are collected at the one or more UAVs from each UAV through the communication unit; (ii) determine, using a reinforcement learning (RL) approach, an optimal position of each UAV based on the position data and the environmental data, the RL approach includes defining a current state of each UAV based on the position data and environmental data, enabling the ground control unit to generate a control signal for a first position of UAV based on the current state, receiving a reward for the first position from each UAV based on network efficiency after each UAV is moved to the first position using the control signal, enabling the ground control unit to iteratively generate control signals for different positions and receiving rewards for each different position from each UAV, and learning an optimal positioning strategy by maximizing the cumulative reward; (iii) generate a flight path control signal based on the optimal position that is determined using the RL approach to adjust the flight path of each UAV, wherein the flight path of each UAV is adjusted to the optimal position using a flight controller of each UAV based on the flight path control signal that is received from the ground control unit through the communication unit; (iv) determine, using a deep neural network model, an optimal transmission power level for each UAV based on the user demand data and the environmental data, the optimal transmission power level is determined by extracting one or more spatial features and one or more temporal features from the user demand data and the environmental data; generating a concatenated feature vector by merging the one or more spatial features and the one or more temporal features; and inputting the concatenated feature vector into the deep neural network model that predicts the optimal transmission power level, the deep neural network model is trained by mapping spatial and temporal data associated with historical user demand data and environmental data to corresponding transmission power levels; and (v) generate a power allocation control signal based on the transmission power level that is determined using the deep neural network model to allocate the transmission power level to each UAV, the optimal transmission power level is allocated to each UAV using a power distribution board of each UAV based on the power allocation control signal that is received from the ground control unit through the communication unit, thereby enhancing the 5G wireless network coverage and throughput in disaster management within the disaster zone.
[0007] In some embodiments, the ground control unit is configured to: (a) receive one or more performance metrics at regular intervals from the one or more UAVs and feedback of users from the one or more receiver nodes to monitor a network performance of each UAV, where the one or more performance metrics include signal strength, power consumption and environmental conditions; (b) compare the one or more performance metrics against pre-set optimal values; and (c) dynamically adjust the flight path, transmission power level, and signal strength of the one or more UAVs using the RL approach and the deep neural network model through a feedback loop mechanism if the one or more performance metrics deviate from the pre-set optimal values, thereby enabling adaptation to changing environmental conditions.
[0008] In some embodiments, the RL approach employs a policy gradients technique and a Q-learning technique to determine the optimal position of each UAV and enable each UAV to navigate in complex environments.
[0009] In some embodiments, the one or more spatial features and the one or more temporal features include at least one of a distance from users, a current network load, past power levels that are used under similar conditions, environmental factors including weather and terrain, signal quality, or the number of active connections.
[0010] In some embodiments, the deep neural network model employs dynamic routing and an attention mechanism to determine the optimal transmission power level for each UAV, enabling power management and increasing data throughput.
[0011] In some embodiments, the ground control unit is configured to deploy additional UAVs to extend network coverage or replace depleted UAVs based on changes in user density or communication demands through a deployment and scaling mechanism.
[0012] In some embodiments, the feedback loop mechanism employs online learning and adaptive filtering techniques to dynamically adjust the flight path, transmission power level, and signal strength of the one or more UAVs.
[0013] In one aspect, a method for enhancing 5G wireless network coverage and throughput in disaster management within a disaster zone is provided. The method includes (a) deploying one or more unmanned aerial vehicles (UAVs) in the disaster zone to establish a 5G wireless network for one or more receiver nodes, where each UAV includes a sensor unit, a communication unit, a flight controller, and a power distribution board; (b) obtaining, using the sensor unit, at least one of position data, environmental data and user demand data, where the position data includes geographical coordinates, altitude, speed, and heading information of the UAV, the environmental data includes signal strength, user density, and channel propagation conditions, and the user demand data includes the number of active users and data requirements of the users; (c) receiving, by a ground control unit, the position data, the environmental data and the user demand data that are collected at the one or more UAVs from each UAV through the communication unit; (d) determining, using a reinforcement learning (RL) approach, an optimal position of each UAV based on the position data and the environmental data by the ground control unit, where the RL approach includes (i) defining a current state of each UAV based on the position data and environmental data, (ii) enabling the ground control unit to generate a control signal for a first position of UAV based on the current state, (iii) receiving a reward for the first position from each UAV based on network efficiency after each UAV is moved to the first position using the control signal, (iv) enabling the ground control unit to iteratively generate control signals for different positions and receiving rewards for each different position from each UAV, and (v) learning an optimal positioning strategy by maximizing the cumulative reward; (e) generating, by the ground control unit, a flight path control signal based on the optimal position that is determined using the RL approach to adjust the flight path of each UAV, where the flight path of each UAV (102A) is adjusted to the optimal position using the flight controller of each UAV based on the flight path control signal that is received from the ground control unit through the communication unit; (f) determining, using a deep neural network model, an optimal transmission power level for each UAV based on the user demand data and the environmental data by the ground control unit, where the optimal transmission power level is determined by (i) extracting one or more spatial features and one or more temporal features from the user demand data and the environmental data, (ii) generating a concatenated feature vector by merging the one or more spatial features and the one or more temporal features, and (iii) inputting the concatenated feature vector into the deep neural network model that predicts the optimal transmission power level, the deep neural network model being trained by mapping spatial and temporal data associated with historical user demand data and environmental data to corresponding transmission power levels; and (g) generating, by the ground control unit, a power allocation control signal based on the transmission power level that is determined using the deep neural network model to allocate the transmission power level to each UAV, where the optimal transmission power level is allocated to each UAV using the power distribution board of each UAV based on the power allocation control signal that is received from the ground control unit through the communication unit, thereby enhancing the 5G wireless network coverage and throughput in disaster management within the disaster zone.
[0014] In some embodiments, the method includes (i) receiving, by the ground control unit, one or more performance metrics at regular intervals from the one or more UAVs and feedback of users from the one or more receiver nodes to monitor a network performance of each UAV; (ii) comparing, by the ground control unit, the one or more performance metrics against pre-set optimal values; and (iii) dynamically adjusting, by the ground control unit, the flight path, transmission power level, and signal strength of the one or more UAVs using the RL approach and the deep neural network model through a feedback loop mechanism if the one or more performance metrics deviate from the pre-set optimal values.
[0015] In some embodiments, the method includes deploying, by the ground control unit, additional UAVs to extend network coverage or replace depleted UAVs based on changes in user density or communication demands through a deployment and scaling mechanism.
[0016] The system of the present disclosure provides a resilient and adaptive solution for communication in disaster-stricken areas or environments where traditional infrastructure is compromised or unavailable. Using UAVs equipped with high-efficiency power management systems and 5G NR-compliant communication modules, the system ensures quick deployment and integration with existing networks, significantly enhancing emergency response capabilities. By leveraging a reinforcement learning (RL) algorithm for real-time UAV positioning, the system can dynamically adapt to complex disaster conditions, providing robust network coverage with minimal signal interference. Unlike static methods, this adaptive approach meets the demands of rapidly changing environments, ensuring uninterrupted communication and optimized network performance.
[0017] The system's power management and throughput are further enhanced by a deep neural network (DNN) model that processes large-scale, real-time data for precise power distribution. Through dynamic routing and attention mechanisms, the DNN adapts to fluctuating network loads and power constraints, maximizing throughput and resource utilization under challenging conditions. This combination of RL-driven UAV positioning and DNN-based power management allows for efficient communication with high speed and low latency, addressing the critical need for reliable communication infrastructure in disaster scenarios. Therefore, the system provides enhanced adaptability, efficiency, and network stability in even the most demanding situations.
[0018] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0020] FIG. 1 illustrates a system for enhancing network coverage and throughput in disaster management within a disaster zone according to some embodiments herein;
[0021] FIG. 2 is a block diagram that illustrates at least one unmanned aerial vehicle (UAV) of FIG. 1 according to some embodiments herein;
[0022] FIG. 3 is a block diagram that illustrates a ground control unit of FIG. 1 according to some embodiments herein;
[0023] FIGS. 4A-4B are flow diagrams that illustrate a method for enhancing network coverage and throughput in disaster management within a disaster zone according to some embodiments herein;
[0024] FIG. 5 is a graphical representation that illustrates a throughput performance comparison of the system of FIG. 1 with conventional methods according to some embodiments herein; and
[0025] FIG. 6 is a schematic diagram of a computer architecture in accordance with the embodiments herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0026] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0027] As mentioned, there remains a need for developing a communication system capable of adapting to changing disaster conditions with enhanced network coverage and throughput. The embodiments herein achieve this by proposing a system and method for enhancing network coverage and throughput in unmanned aerial vehicles (UAVs)-based disaster management using a reinforcement learning (RL) approach and a deep neural network model. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0008] FIG. 1 illustrates a system 100 for enhancing network coverage and throughput in disaster management within a disaster zone according to some embodiments herein. The system 100 includes one or more unmanned aerial vehicles (UAVs) 102A-N, a ground control unit 104, and one or more receiver nodes 106A-N. The ground control unit 104 includes a reinforcement learning (RL) approach 108 and a deep neural network model 110.
[0009] The one or more UAVs 102A-N are deployed in the disaster zone as 5G base stations to establish a high-speed communication network (5G wireless network) for the one or more receiver nodes 106A-N. Each receiver node 106A is equipped with an RF transceiver to connect to the 5G wireless network. The one or more receiver nodes 106A-N may support flexible and agile radio frequency communication, enabling efficient data transfer between the users associated with the receiver nodes 106A-N and the one or more UAVs 102A-N. The one or more receiver nodes 106A-N may include IoT devices, edge computing systems, or any device requiring network access. The communication from the one or more receiver nodes 106A-N to the UAVs may happen via WiGig (wireless gigabit) technology, offering high-speed data transfer over short distances. The RF transceivers may operate in this standard to maintain low latency and high throughput.
[0010] Each UAV 102A includes a sensor unit, a communication unit, and one or more controllers including a flight controller and a power distribution board for autonomous operation and data processing. The sensor unit includes one or more sensors that are configured to obtain position data, environmental data and user demand data. The position data includes geographical coordinates, altitude, speed, and heading information of each UAV 102A. The environmental data includes signal strength, user density, and channel propagation conditions. The signal strength may measure the strength of communication signals in the disaster zone. The user density may determine how many users (e.g., people or devices) are in the disaster zone, which can affect network load and communication needs. The channel propagation conditions may assess how well signals can travel in the environment, influenced by factors like obstacles, weather, and terrain. The user demand data includes the number of active users and data requirements of the users. The one or more sensors may include an inertial measurement unit (IMU), a GPS module, an RF signal strength sensor, a Lidar (light detection and ranging) sensor, cameras (optical and thermal imaging), weather sensors, an obstruction detection sensor (ultrasonic or infrared sensors), and a network monitoring tool.
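By way of illustration only, the kind of telemetry record that a UAV 102A might report to the ground control unit 104 can be sketched as a simple data structure; the field names and units below are assumptions for readability, not elements of the specification.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class UAVTelemetry:
    """Illustrative telemetry report combining position, environmental and user demand data."""
    position: Tuple[float, float, float]   # latitude, longitude, altitude (m)
    speed_mps: float
    heading_deg: float
    signal_strength_dbm: float             # measured RF signal strength
    user_density: int                      # users (people or devices) detected in the covered area
    channel_quality: float                 # e.g. an estimated path-loss or propagation factor
    active_users: int
    demand_mbps: List[float]               # per-user data requirements
```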
[0011] The communication unit includes a 5G transceiver and an antenna array, and allows for high-bandwidth data transmission between the one or more UAVs 102A-N, the ground control unit 104, and the one or more receiver nodes 106A-N. The communication unit ensures that the data collected by the one or more sensors is transmitted instantaneously to the ground control unit 104 for analysis and decision-making. The one or more controllers within each UAV 102A enable autonomous flight operations, allowing the one or more UAVs 102A-N to navigate through complex environments and avoid obstacles by responding to control signals from the ground control unit 104.
[0012] The ground control unit 104 is a central command system and includes a processor and a non-transitory computer-readable storage medium (or a memory) storing one or more sequences of instructions. When executed by the processor, these instructions enhance network coverage and throughput by coordinating the one or more UAVs 102A-N and enable adaptation to changing conditions in the disaster zone. The ground control unit 104 may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, a laptop, an electronic notebook or a smartphone.
[0013] The ground control unit 104 is communicatively connected with each UAV 102A through the 5G wireless network and is configured to receive the position data, environmental data and user demand data from each UAV 102A through the communication unit of each UAV 102A.
[0014] The ground control unit 104 is further configured to determine an optimal position of each UAV 102A to control a flight path of each UAV 102A based on the position data and the environmental data. The ground control unit 104 uses the RL approach 108 that determines the optimal position based on a high-dimensional state-space framework. The optimal position includes a location or coordinates for each UAV 102A where the UAV 102A maximizes the network coverage. In some embodiments, the RL approach 108 employs a policy gradients technique and a Q-learning technique to determine the optimal position of each UAV 102A and enable each UAV 102A to navigate in complex environments. The policy gradients technique may allow the RL approach 108 to directly optimize the policy, which is the strategy each UAV 102A uses to decide the actions based on the current situation. The Q-learning technique may help the RL approach 108 to learn the value of different actions in various states, even when the environment is continuously changing. In some embodiments, the RL approach 108 involves (i) defining a current state of each UAV 102A based on the position data and environmental data, (ii) enabling the ground control unit 104 to generate a control signal for a first position of UAV 102A based on the current state, (iii) receiving a reward for the first position from each UAV 102A based on network efficiency after each UAV 102A is moved to the first position using the control signal, (iv) enabling the ground control unit 104 to iteratively generate control signals for different positions and receiving rewards for each different position from each UAV 102A, and (v) learning an optimal positioning strategy by maximizing the cumulative reward.
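The positioning loop described above can be pictured with a small, non-limiting sketch. The example below is a minimal tabular Q-learning agent over a discretized action space; the class name, action set, and hyperparameters are illustrative assumptions, not the claimed implementation (which may also combine policy gradients and a high-dimensional state space).

```python
import random
from collections import defaultdict

# Illustrative discretized action space: move the UAV one grid step or hover.
ACTIONS = ["north", "south", "east", "west", "up", "down", "hover"]

class PositioningAgent:
    """Minimal Q-learning sketch for choosing UAV positions (states must be hashable)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)            # Q[(state, action)] -> expected cumulative reward
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_action(self, state):
        # Epsilon-greedy exploration over candidate moves.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In such a sketch, the reward passed to update would reflect the network efficiency reported by the UAV after it moves to the commanded position.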
[0015] The ground control unit 104 is further configured to generate a flight path control signal based on the optimal position that is determined using the RL approach 108 to adjust the flight path of each UAV 102A. The ground control unit 104 may transmit the flight path control signal to each UAV 102A. Each UAV 102A receives the flight path control signal from the ground control unit 104 through the communication unit and adjusts its flight path to the optimal position using the flight controller, thereby enhancing the network coverage, minimizing signal interference, and ensuring robust communication links.
[0016] The ground control unit 104 is further configured to determine an optimal transmission power level for each UAV 102A based on the user demand data and the environmental data. The ground control unit 104 may use the deep neural network model 110 to determine the optimal transmission power level. The deep neural network model 110 may include a convolutional neural network (CNN) and a recurrent neural network (RNN). The optimal transmission power level is determined by (i) extracting one or more spatial features and one or more temporal features from the user demand data and the environmental data, (ii) generating a concatenated feature vector by merging the one or more spatial features and the one or more temporal features, and (iii) inputting the concatenated feature vector into the deep neural network model 110 that predicts the optimal transmission power level. The deep neural network model 110 is trained by mapping spatial and temporal data associated with historical user demand data and environmental data to corresponding transmission power levels.
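A minimal sketch of such a CNN + RNN model is given below, assuming PyTorch; the input shapes, layer sizes, and class name are illustrative assumptions rather than details taken from the specification.

```python
import torch
import torch.nn as nn

class PowerLevelNet(nn.Module):
    """Sketch: spatial features via a CNN, temporal features via a GRU, fused to predict power."""

    def __init__(self, spatial_channels=4, temporal_features=6, hidden=32):
        super().__init__()
        # CNN branch over a coarse spatial map (e.g. user density / signal strength grid).
        self.cnn = nn.Sequential(
            nn.Conv2d(spatial_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                  # -> (batch, 16)
        )
        # RNN branch over a short history of demand and channel measurements.
        self.rnn = nn.GRU(input_size=temporal_features, hidden_size=hidden, batch_first=True)
        # Head maps the concatenated feature vector to a single power level (e.g. in dBm).
        self.head = nn.Sequential(nn.Linear(16 + hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, spatial_map, temporal_seq):
        spatial_vec = self.cnn(spatial_map)                # (batch, 16)
        _, h = self.rnn(temporal_seq)                      # h: (1, batch, hidden)
        fused = torch.cat([spatial_vec, h.squeeze(0)], dim=1)   # concatenated feature vector
        return self.head(fused)
```

For example, PowerLevelNet()(torch.randn(1, 4, 8, 8), torch.randn(1, 10, 6)) would return one predicted power level for a single UAV sample.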
[0017] In some embodiments, the deep neural network model 110 employs dynamic routing and an attention mechanism to manage power and increase data throughput. The dynamic routing may help the network to efficiently allocate resources based on current demands, allowing each UAV 102A to adapt to changing network conditions. The attention mechanism focuses on the most relevant information, ensuring that the network processes data effectively without wasting energy.
[0018] The ground control unit 104 is further configured to generate a power allocation control signal based on the optimal transmission power level that is determined using the deep neural network model 110 to allocate the transmission power level to each UAV 102A. The ground control unit 104 may transmit the power allocation control signal to each UAV 102A. Each UAV 102A receives the power allocation control signal from the ground control unit 104 through the communication unit and allocates the optimal transmission power level using a power distribution board of each UAV 102A, thereby enhancing data throughput maximization and power efficiency.
[0019] Thus, the system 100 enhances the 5G wireless network coverage and throughput in disaster management within the disaster zone.
[0020] Further, the ground control unit 104 is configured to continuously monitor a network performance of each UAV 102A. The ground control unit 104 (i) receives one or more performance metrics at regular intervals from the one or more UAVs 102A-N and feedback of users from the one or more receiver nodes 106A-N to monitor the network performance of each UAV, (ii) compares the one or more performance metrics against pre-set optimal values, and (iii) dynamically adjusts at least one of the flight path, transmission power level, and signal strength of the one or more UAVs using the RL approach 108 and the deep neural network model 110 through a feedback loop mechanism, if the one or more performance metrics deviate from the pre-set optimal values. The one or more performance metrics include signal strength, power consumption and environmental conditions. Thereby, the feedback loop mechanism enables the one or more UAVs 102A-N to adapt to changing environmental conditions.
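A simplified, non-limiting view of this monitoring loop is sketched below; the helper methods on the ground_control object (poll_metrics, replan_position, replan_power, send), the metric names, and the threshold values are assumptions for illustration.

```python
import time

# Illustrative pre-set optimal values and allowed deviations.
PRESET = {"signal_strength_dbm": -85.0, "power_consumption_w": 40.0}
TOLERANCE = {"signal_strength_dbm": 5.0, "power_consumption_w": 8.0}

def feedback_loop(uavs, ground_control, interval_s=10):
    """Poll performance metrics at regular intervals and re-plan when they drift."""
    while True:
        for uav in uavs:
            metrics = ground_control.poll_metrics(uav)   # latest reported metrics for this UAV
            deviated = any(abs(metrics[k] - PRESET[k]) > TOLERANCE[k] for k in PRESET)
            if deviated:
                # Re-run the RL planner and the DNN power model, then push new control signals.
                ground_control.send(uav, ground_control.replan_position(uav, metrics))
                ground_control.send(uav, ground_control.replan_power(uav, metrics))
        time.sleep(interval_s)
```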
[0021] In some embodiments, the ground control unit 104 is configured to deploy additional UAVs to extend network coverage or replace depleted UAVs based on changes in user density or communication demands through a deployment and scaling mechanism.
[0022] FIG. 2 is a block diagram that illustrates at least one unmanned aerial vehicle (UAV) 102A of FIG. 1 according to some embodiments herein. The UAV 102A includes a sensor unit 202, a communication unit 204, a flight controller 206, an obstacle detection and navigation unit 208, a power distribution board 210, and a data processing unit 212.
[0023] The sensor unit 202 includes one or more sensors that are configured to obtain position data, environmental data and user demand data. The position data includes geographical coordinates, altitude, speed, and heading information of the UAV 102A that is measured using an inertial measurement unit (IMU) and a GPS module. The environmental data includes signal strength, user density, and channel propagation conditions. The signal strength may be measured using an RF signal strength sensor that measures the strength of communication signals in the disaster zone. The user density may be measured using cameras, including optical and thermal imaging, that determine how many users (e.g., people or devices) are in the disaster zone, which can affect network load and communication needs. The channel propagation conditions may be detected using at least one of a Lidar (light detection and ranging) sensor, weather sensors, and an obstruction detection sensor. These sensors may provide detailed 3D mapping and assess how well signals can travel in the environment, influenced by factors such as obstacles, weather, and terrain. The user demand data may be monitored using a network monitoring tool that tracks the number of active users and data requirements of the users.
[0024] The communication unit 204 includes a 5G transceiver 204A and an antenna array 204B, and allows for high-bandwidth data transmission between the one or more UAVs 102A-N, the ground control unit 104, and the one or more receiver nodes 106A-N. The 5G transceiver 204A may support both sub-6 GHz and mmWave bands for robust data transfer. The antenna array 204B may transmit control signals and communication data (2.4 GHz or 5 GHz) between the one or more UAVs 102A-N, the ground control unit 104, and the one or more receiver nodes 106A-N. Beamforming may be optimized through the antenna array 204B. The antenna array 204B may be dynamically adjusted based on real-time feedback to maintain the signal strength.
[0025] The flight controller 206 is responsible for navigation and stabilization of the UAV 102A. The flight controller 206 receives a flight path control signal from the ground control unit 104 through the communication unit 204 and adjusts a flight path of the UAV 102A according to the flight path control signal. The obstacle detection and navigation unit 208 is configured to receive environmental data associated with obstacles from the sensor unit 202 and control the UAV 102A to avoid obstacles during flight, ensuring safe navigation in dynamic environments.
[0026] The power distribution board 210 manages the power supply to all components onboard the UAV 102A, ensuring stable energy distribution. The power distribution board 210 may receive a power allocation control signal for the UAV 102A from the ground control unit 104 through the communication unit 204 and allocate an optimal transmission power level to the UAV 102A. The power distribution board 210 may include a battery and a solar module to provide extended operational duration and efficient power management. The data processing unit 212 is configured to process the environmental data that is collected from the sensor unit 202 and feed it to the relevant controller in the UAV 102A.
[0027] FIG. 3 is a block diagram that illustrates the ground control unit 104 of FIG. 1 according to some embodiments herein. The ground control unit 104 includes a database 300, a processor 302, a receiving module 304, an optimal position determining module 306 that includes the reinforcement learning approach 108, a control signal generating module 308, a training module 310, an optimal transmission power level determination module 312 that includes the deep neural network model 110, a transmission module 314, a feedback loop mechanism 316, and a deployment and scaling mechanism 318.
[0028] The database 300 stores one or more modules of the ground control unit 104 and one or more sequences of instructions. The one or more sequences of instructions are executed by the processor 302 to enhance network coverage and throughput by coordinating one or more UAVs 102A-N and enable adaptation to changing conditions in a disaster zone. Historical user demand data and environmental data may be collected and labelled with optimal transmission power levels. The labelled dataset of historical user demand data and environmental data may be stored in the database 300.
[0029] The training module 310 may obtain the historical user demand data and environmental data from the database 300. The training module 310 further extracts relevant features from the historical user demand data and environmental data to train the deep neural network model 110. The relevant features may include spatial and temporal features including user density patterns, time-based usage variations, environmental conditions such as weather and obstacles, and historical transmission power levels. The training module 310 further trains the deep neural network model 110 based on the labelled dataset of the historical user demand data and environmental data. The training module 310 trains the deep neural network model 110 to determine the optimal transmission power levels by mapping spatial and temporal data associated with historical user demand data and environmental data to corresponding transmission power levels. The trained deep neural network model 110 may be stored in the database 300.
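A training loop for such a model can be sketched in a few lines, reusing the illustrative PowerLevelNet architecture shown earlier and assuming a data loader that yields (spatial map, temporal sequence, labelled power level) batches; this is a generic supervised-regression sketch, not the training procedure of the specification.

```python
import torch
import torch.nn as nn

def train_power_model(model, loader, epochs=20, lr=1e-3):
    """Fit the model to historical features labelled with optimal transmission power levels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for spatial_map, temporal_seq, power_label in loader:
            opt.zero_grad()
            pred = model(spatial_map, temporal_seq).squeeze(-1)   # predicted power level
            loss = loss_fn(pred, power_label)                     # regression to the label
            loss.backward()
            opt.step()
    return model
```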
[0030] The receiving module 304 receives real-time position data, environmental data and user demand data from each UAV 102A and stores them in the database 300. The optimal position determining module 306 determines an optimal position of each UAV 102A to control a flight path of each UAV 102A based on the position data and the environmental data.
[0031] The optimal position determining module 306 utilizes the reinforcement learning (RL) approach 108 within a high-dimensional state-space framework to identify the most advantageous position for each UAV 102A to maximize network coverage. The process begins with the formation of a high-dimensional state space from the position data and the environmental data. The high-dimensional state space represents all possible states or scenarios in which a particular UAV 102A might operate. Each state includes the current UAV position, environmental conditions, and user density. The RL approach 108 further assigns a reward based on how well each potential UAV position maximizes network coverage and meets communication needs. For example, higher rewards may be given to positions with stronger signal strength, better channel propagation conditions, or higher coverage of user-dense areas. The RL approach 108 iteratively tests different positions and learns the optimal flight path by maximizing the cumulative reward. The optimal position determining module 306 selects the position with the highest reward, where the particular UAV 102A can maximize network coverage, enhance signal strength, and effectively meet user demand within the disaster zone. In some embodiments, the RL approach 108 employs a policy gradients technique and a Q-learning technique to determine the optimal position of each UAV 102A and enable each UAV 102A to navigate in complex environments. The policy gradients technique may allow the RL approach 108 to directly optimize the policy, which is the strategy each UAV 102A uses to decide the actions based on the current situation. The Q-learning technique may help the RL approach 108 to learn the value of different actions in various states, even when the environment is continuously changing. In some embodiments, the RL approach 108 involves (i) defining a current state of each UAV 102A based on the position data and environmental data, (ii) enabling the control signal generating module 308 to generate a control signal for a first position of UAV 102A based on the current state, (iii) receiving a reward for the first position from each UAV 102A based on network efficiency after each UAV 102A is moved to the first position using the control signal, (iv) enabling the control signal generating module 308 to iteratively generate control signals for different positions and receiving rewards for each different position from each UAV 102A, and (v) learning an optimal positioning strategy by maximizing the cumulative reward. The RL approach 108 enables adaptive adjustments to maintain an ideal flight path for each UAV 102A that maximizes the network coverage.
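One plausible shape for such a reward is sketched below; the terms, weights, and signal-strength scaling are illustrative assumptions chosen to reflect the factors named above (coverage, signal strength, user-dense areas), not values from the specification.

```python
def position_reward(coverage_ratio, mean_signal_dbm, users_served, interference_penalty):
    """Illustrative reward for a candidate UAV position."""
    # Map a typical received-signal range (about -110 to -70 dBm) onto 0..1.
    signal_term = min(1.0, max(0.0, (mean_signal_dbm + 110.0) / 40.0))
    return (0.4 * coverage_ratio                      # fraction of the zone covered
            + 0.3 * signal_term                       # stronger signal -> higher reward
            + 0.3 * min(1.0, users_served / 100.0)    # favour user-dense areas
            - 0.2 * interference_penalty)             # discourage interference with other UAVs
```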
[0032] The control signal generating module 308 generates a flight path control signal based on the optimal position that is determined using the RL approach 108 to adjust the flight path of each UAV 102A. The control signal generating module 308 may calculate the required adjustments in each UAV's current path using the optimal position. This involves determining any necessary changes in direction, altitude, speed, and trajectory to move each UAV 102A towards the optimal position. The flight path control signal may include precise movement instructions to guide the flight controller of each UAV 102A in navigating the UAV 102A. The transmission module 314 transmits the flight path control signal to each UAV 102A. By continuously generating and updating the flight path control signals based on real-time data and RL-determined optimal positions, the control signal generating module 308 ensures that each UAV 102A dynamically adjusts its path to maintain optimal network coverage and respond to environmental or demand changes.
[0033] The optimal transmission power level determination module 312 determines an optimal transmission power level for each UAV 102A based on the user demand data and the environmental data using the deep neural network model 110. The deep neural network model 110 includes a convolutional neural network (CNN) and a recurrent neural network (RNN). The optimal transmission power level determination module 312 extracts one or more spatial features and one or more temporal features from the user demand data and the environmental data using the deep neural network model 110. The deep neural network model 110 extracts the spatial features from the user demand data and the environmental data using the convolutional neural network, and extracts the temporal features from the user demand data and the environmental data using the recurrent neural network. The optimal transmission power level determination module 312 further generates a concatenated feature vector by merging the one or more spatial features and the one or more temporal features using the deep neural network model 110, and inputs the concatenated feature vector into the deep neural network model 110 that predicts the optimal transmission power level.
[0034] The optimal transmission power level determination module 312 may apply an attention mechanism to the one or more spatial and temporal features of the user demand data and the environmental data using the deep neural network model 110 to focus on the most relevant information. This ensures that the network processes data effectively without wasting energy. The optimal transmission power level determination module 312 may employ dynamic routing to manage power and increase data throughput. The dynamic routing may help the network to efficiently allocate resources based on current demands, allowing each UAV 102A to adapt to changing network conditions.
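A minimal sketch of such an attention step over the concatenated feature vector is shown below, again assuming PyTorch; it is one simple way to weight features by relevance and is not the specific mechanism of the specification.

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Re-weights a concatenated spatial/temporal feature vector by learned relevance scores."""

    def __init__(self, feature_dim):
        super().__init__()
        self.score = nn.Linear(feature_dim, feature_dim)

    def forward(self, features):                              # features: (batch, feature_dim)
        weights = torch.softmax(self.score(features), dim=-1) # relevance weight per feature
        return features * weights                             # emphasize the most relevant features
```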
[0035] The control signal generating module 308 generates a power allocation control signal based on the optimal transmission power level that is determined using the deep neural network model 110 to allocate the transmission power level to each UAV 102A. The power allocation control signal may specify the exact power allocation needed. The transmission module 314 transmits the power allocation control signal to each UAV 102A. The power distribution board of each UAV 102A may receive the power allocation control signal and allocate the optimal transmission power level to the UAV 102A, thereby enhancing data throughput maximization and power efficiency.
[0036] The processor 302 receives one or more performance metrics at regular intervals from each UAV 102A and feedback of users from the one or more receiver nodes 106A-N to monitor a network performance of each UAV 102A. The one or more performance metrics include a signal strength, a power consumption and environmental conditions. The processor 302 further compares the one or more performance metrics against pre-set optimal values and dynamically adjusts the flight path, the transmission power level, and the signal strength of the one or more UAVs 102A-N using the RL approach 108 and the deep neural network model 110 through the feedback loop mechanism 316, if the one or more performance metrics deviate from the pre-set optimal values. For example, if the signal strength drops below a certain threshold, the feedback loop mechanism 316 initiates a recalibration of the antenna array 204B of the UAV 102A to enhance communication links. Thus, the feedback loop mechanism 316 enables each UAV 102A to adapt to changing environmental conditions.
[0037] The processor 302 is further configured to deploy additional UAVs to extend network coverage or replace depleted UAVs based on changes in user density or communication demands through the deployment and scaling mechanism 318. The processor 302 monitors user density and communication demands in the disaster zone. If an increase in users or higher data demand is detected, the processor 302 may deploy additional UAVs to expand the 5G wireless network, ensuring sufficient coverage and maintaining high performance. If any UAVs run low on power or experience technical issues, the processor 302 may deploy replacements to maintain consistent network availability. Thus, the deployment and scaling mechanism 318 allows the 5G wireless network to adapt flexibly to changes, enhancing both reliability and coverage as needed.
[0038] FIGS. 4A-4B are flow diagrams that illustrate a method for enhancing network coverage and throughput in disaster management within a disaster zone according to some embodiments herein. At step 402, one or more UAVs 102A-N are deployed in the disaster zone as 5G base stations to establish a 5G wireless network for one or more receiver nodes 106A-N. At step 404, position data, environmental data and user demand data are obtained from each UAV 102A in real-time using a sensor unit of the one or more UAVs 102A-N. At step 406, the position data, the environmental data and the user demand data are received by a ground control unit 104 from each UAV 102A through a communication unit of each UAV 102A.
[0039] At step 408, an optimal position of each UAV 102A is determined by the ground control unit 104 using a reinforcement learning (RL) approach 108 to control a flight path of each UAV 102A based on the position data and the environmental data. In some embodiments, the RL approach 108 includes (i) defining a current state of each UAV 102A based on the position data and environmental data, (ii) enabling the ground control unit 104 to generate a control signal for a first position of UAV 102A based on the current state, (iii) receiving a reward for the first position from each UAV 102A based on network efficiency after each UAV 102A is moved to the first position using the control signal, (iv) enabling the ground control unit 104 to iteratively generate control signals for different positions and receiving rewards for each different position from each UAV 102A, and (v) learning an optimal positioning strategy by maximizing the cumulative reward.
[0040] At step 410, a flight path control signal is generated by the ground control unit 104 based on the optimal position that is determined using the RL approach 108 to adjust the flight path of each UAV 102A. At step 412, the flight path control signal is transmitted by the ground control unit 104 to each UAV 102A. At step 414, the flight path of each UAV 102A is adjusted to the optimal position using a flight controller of each UAV 102A based on the flight path control signal that is received from the ground control unit 104 through the communication unit.
[0041] At step 416, an optimal transmission power level for each UAV 102A is determined by the ground control unit 104 based on the user demand data and the environmental data using a deep neural network model 110. In some embodiments, the optimal transmission power level is determined by (i) extracting one or more spatial features and one or more temporal features from the user demand data and the environmental data, (ii) generating a concatenated feature vector by merging the one or more spatial features and the one or more temporal features, and (iii) inputting the concatenated feature vector into the deep neural network model 110 that predicts the optimal transmission power level. The deep neural network model 110 may be trained by mapping spatial and temporal data associated with historical user demand data and environmental data to corresponding transmission power levels.
[0042] At step 418, a power allocation control signal is generated by the ground control unit 104 based on the optimal transmission power level that is determined using the deep neural network model 110 to allocate the transmission power level to each UAV 102A. At step 420, the power allocation control signal is transmitted to each UAV 102A by the ground control unit 104. At step 422, the optimal transmission power level is allocated to each UAV 102A using a power distribution board of each UAV 102A based on the power allocation control signal that is received from the ground control unit 104 through the communication unit.
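Read together, steps 404 through 422 amount to one control cycle per UAV. The sketch below strings the earlier illustrative pieces into such a cycle; every helper on the ground_control object is an assumed placeholder, not an element of the claimed method.

```python
def control_cycle(uavs, ground_control, positioning_agent, power_model):
    """One illustrative control cycle: telemetry in, flight path and power allocation out."""
    for uav in uavs:
        telemetry = ground_control.receive_telemetry(uav)              # steps 404-406
        state = ground_control.encode_state(telemetry)                 # current state for the RL agent
        action = positioning_agent.choose_action(state)                # step 408: pick a position
        ground_control.send(uav, ground_control.flight_path_signal(action))         # steps 410-414
        spatial_map, temporal_seq = ground_control.extract_features(telemetry)      # step 416
        power_level = power_model(spatial_map, temporal_seq).item()    # DNN-predicted power level
        ground_control.send(uav, ground_control.power_allocation_signal(power_level))  # steps 418-422
```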
[0043] The method further includes (i) receiving, by the ground control unit 104, one or more performance metrics at regular intervals from the one or more UAVs 102A-N and feedback of users from one or more receiver nodes 106A-N to monitor a network performance of each UAV, (ii) comparing the one or more performance metrics against pre-set optimal values, and (iii) dynamically adjusting the flight path, transmission power level, and signal strength of the one or more UAVs 102A-N using the RL approach 108 and the deep neural network model 110 through a feedback loop mechanism, if the one or more performance metrics deviate from the pre-set optimal values.
[0044] The method further includes deploying additional UAVs to extend network coverage or replace depleted UAVs by the ground control unit 104 based on changes in user density or communication demands through a deployment and scaling mechanism.
[0045] FIG. 5 is a graphical representation that illustrates a throughput performance comparison of the system 100 of FIG. 1 with conventional methods according to some embodiments herein. The experimental comparison of the system 100 of the present disclosure, where UAVs serve as both base stations and receivers, is conducted against three established methods: the greedy algorithm, gradient-based optimization, and the genetic algorithm, as shown in FIG. 5. In the graphical representation, the methods are plotted along the X-axis, and the average throughput (in Mbps) is plotted along the Y-axis.
[0046] The greedy algorithm adjusts transmitter positions in a simple, straightforward manner. However, it lacks the adaptability required for complex scenarios, resulting in limited performance gains. By using derivative information, gradient-based optimization refines the transmitter positions and provides an improvement over the greedy algorithm; however, its performance is constrained by the inherent limitations of gradient-based techniques. The genetic algorithm applies evolutionary techniques to explore multiple solution paths. Although more robust than the previous methods, its efficiency in complex environments remains limited. In contrast, the system 100 of the present disclosure demonstrates a clear advantage by effectively managing complex scenarios and achieving higher throughput. These results underscore the superior capability of the system 100 in optimizing performance across challenging environments, establishing its effectiveness over traditional optimization methods.
[0047] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 6, with reference to FIGS. 1 through 5. This schematic drawing illustrates a hardware configuration of the ground control unit 104/computer system/computing device in accordance with the embodiments herein. The ground control unit 104 includes at least one processing device CPU 10 that may be interconnected via a system bus 14 to various devices such as a random-access memory (RAM) 12, a read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 38 and program storage devices 40 that are readable by the system. The ground control unit 104 can read the inventive instructions on the program storage devices 40 and follow these instructions to execute the methodology of the embodiments herein. The ground control unit 104 further includes a user interface adapter 22 that connects a keyboard 28, mouse 30, speaker 32, microphone 34, and/or other user interface devices such as a touch screen device (not shown) to the bus 14 to gather user input. Additionally, a communication adapter 20 connects the bus 14 to a network 42, and a display adapter 24 connects the bus 14 to a display device 26, which provides a graphical user interface (GUI) 36 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0048] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.

Claims: I/We Claim:
1. A system (100) for enhancing a 5G wireless network coverage and throughput in disaster
management within a disaster zone, wherein the system (100) comprising:
10 a plurality of unmanned aerial vehicles (UAVs) (102A-N) that are deployed in the
disaster zone to establish a 5G wireless network for a plurality of receiver nodes (106A-N),
wherein each UAV (102A) comprises
a sensor unit (202) that is configured to obtain at least one of position data,
environmental data and user demand data, wherein the position data comprises
15 geographical coordinates, altitude, speed, and heading information of the plurality of
UAVs (102A-N), the environmental data comprises signal strength, user density, and
channel propagation conditions, and the user demand data comprises number of active
users and data requirements of the users; and
a communication unit comprising a 5G transceiver (204A), and an antenna array
20 (204B) for transmitting and receiving data across the 5G wireless network; and
a ground control unit (104) that is connected with the plurality of UAVs (102A-N) and
is configured to:
receive the position data, the environmental data and the user demand data that
are collected at the plurality of UAVs (102A-N) from each UAV (102A) through the
25 communication unit (204A-B);
characterized in that,
determine, using a reinforcement learning (RL) approach (108), an optimal
position of each UAV (102A) based on the position data, and the environmental data,
wherein the RL approach (108) comprises defining a current state of each UAV (102A)
29
based on the position data and environmental data, enabling the ground 5 control unit
(104) to generate a control signal for a first position of UAV (102A) based on the
current state, receiving a reward for the first position from each UAV (102A) based on
network efficiency after each UAV (102A) is moved to the first position using the
control signal, enabling the ground control unit (104) to iteratively generate control
10 signals for different positions and receiving rewards for each different position from
each UAV (102A), and learning an optimal positioning strategy by maximizing the
cumulative reward;
generate a flight path control signal based on the optimal position that is
determined using the RL approach (108) to adjust the flight path of each UAV (102A),
15 wherein the fight path of each UAV (102A) is adjusted to the optimal position using a
flight controller of each UAV (102A) based on the flight path control signal that is
received from the ground control unit (104) through the communication unit (204A-B);
determine, using a deep neural network model (110), an optimal transmission
power level for each UAV (102A) based on the user demand data and the environmental
20 data, wherein the optimal transmission power level is determined by extracting a
plurality of spatial features and a plurality of temporal features from the user demand
data and the environmental data; generating a concatenated feature vector by merging
the plurality of spatial features and the plurality of temporal features; and inputting the
concatenated feature vector into the deep neural network model (110) that predicts the
25 optimal transmission power level, wherein the deep neural network model (110) is
trained by mapping spatial and temporal data associated with historical user demand
data and environmental data to corresponding transmission power levels; and
generate a power allocation control signal based on the transmission power level
that is determined using the deep neural network model (110) to allocate the
transmission power level to each UAV (102A), wherein the optimal transmission power
level is allocated to each UAV (102A) using a power distribution board of each UAV
(102A) based on the power allocation control signal that is received from the ground
control unit (104) through the communication unit (204A-B), thereby enhancing the 5G
wireless network coverage and throughput in disaster management within the disaster
zone.
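As an illustration of the reinforcement learning approach (108) recited in claim 1, the minimal Python sketch below shows how a tabular Q-learning loop could map each UAV's reported state to a positioning control signal and learn from the returned reward. The action set, state discretisation, and network-efficiency reward are assumptions made for illustration only and are not taken from the specification.

```python
# Illustrative sketch only: tabular Q-learning for UAV positioning.
# The action set, state discretisation and reward below are assumptions,
# not taken from the specification.
import random
from collections import defaultdict

ACTIONS = ["north", "south", "east", "west", "up", "down", "hold"]  # candidate control signals

def discretise_state(position, environment):
    """Coarse state key from position data (x, y, altitude) and environmental data."""
    x, y, alt = position
    signal_dbm, user_density = environment
    return (round(x, 3), round(y, 3), int(alt // 10),
            int(signal_dbm // 5), int(user_density // 10))

def network_efficiency_reward(environment):
    """Hypothetical reward: stronger signal and higher served user density score higher."""
    signal_dbm, user_density = environment
    return 0.7 * (signal_dbm + 100.0) + 0.3 * user_density

q_table = defaultdict(float)            # Q(state, action) estimates
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate

def select_action(state):
    """Epsilon-greedy choice of the next positioning control signal."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update_q(state, action, reward, next_state):
    """One Q-learning update after the UAV reports the reward for the commanded move."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
```

In this sketch, the ground control unit would call select_action for a UAV's current state, issue the corresponding flight path control signal, and pass the reported reward to update_q, so that the cumulative reward is maximised over successive moves, as recited in claim 1.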
2. The system (100) as claimed in claim 1, wherein the ground control unit (104) is configured
to:
receive a plurality of performance metrics at regular intervals from the plurality of
UAVs (102A-N) and a feedback of users from the plurality of receiver nodes (106A-N) to
monitor a network performance of each UAV (102A), wherein the plurality of performance
metrics comprise signal strength, power consumption and environmental conditions;
compare the plurality of performance metrics against pre-set optimal values; and
dynamically adjust the flight path, transmission power level, and signal strength of the
plurality of UAVs (102A-N) using the RL approach (108) and the deep neural network model
(110) through a feedback loop mechanism, if the plurality of performance metrics deviate from
the pre-set optimal values, thereby enabling adaptation to changing environmental conditions.
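The monitoring step of claim 2 can be pictured with the short sketch below; the metric names, pre-set optimal values, and tolerances are illustrative assumptions only.

```python
# Illustrative sketch only: comparing reported performance metrics against
# pre-set optimal values; the metric names, optima and tolerances are assumptions.
OPTIMAL = {"signal_strength_dbm": -80.0, "power_consumption_w": 45.0}
TOLERANCE = {"signal_strength_dbm": 5.0, "power_consumption_w": 10.0}

def deviations(metrics):
    """Return the metrics that drift beyond tolerance from their pre-set optimal values."""
    return {name: value for name, value in metrics.items()
            if name in OPTIMAL and abs(value - OPTIMAL[name]) > TOLERANCE[name]}

def monitor(uav_id, metrics):
    """Flag a UAV for re-optimisation when any monitored metric deviates."""
    drifted = deviations(metrics)
    if drifted:
        # In the claimed system this would re-invoke the RL approach (108) and the
        # deep neural network model (110) through the feedback loop mechanism.
        print(f"UAV {uav_id}: adjust flight path / transmission power for {sorted(drifted)}")

# Example: monitor("102A", {"signal_strength_dbm": -92.0, "power_consumption_w": 47.0})
```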
3. The system (100) as claimed in claim 1, wherein the RL approach (108) employs a policy
gradients technique and a Q-learning technique to determine the optimal position of each UAV
(102A) and enable each UAV (102A) to navigate in complex environments.
4. The system (100) as claimed in claim 1, wherein the plurality of spatial features and the
plurality of temporal features comprise at least one of a distance from users, a current network
load, past power levels that are used under similar conditions, environmental factors
comprising weather and terrain, signal quality, or the number of active connections.
5. The system (100) as claimed in claim 1, wherein the deep neural network model (110)
employs a dynamic routing and an attention mechanism to determine the optimal transmission
power level for each UAV (102A), enabling power management and increasing data
throughput.
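A possible realisation of the power-level prediction of claims 1, 4, and 5 is sketched below in PyTorch: spatial and temporal feature vectors are embedded, combined, passed through a single attention layer, and regressed to a transmission power level. The architecture, layer sizes, and feature dimensions are assumptions, and the dynamic routing component of claim 5 is omitted for brevity.

```python
# Illustrative sketch only: an assumed PyTorch architecture that combines
# spatial and temporal features and applies one attention layer before regressing
# a transmission power level. The real model (110) is not disclosed at this level.
import torch
import torch.nn as nn

class PowerAllocator(nn.Module):
    def __init__(self, spatial_dim=4, temporal_dim=4, hidden=32):
        super().__init__()
        self.spatial = nn.Linear(spatial_dim, hidden)    # e.g. distance from users, network load
        self.temporal = nn.Linear(temporal_dim, hidden)  # e.g. past power levels, signal history
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, spatial_x, temporal_x):
        # Merge the two feature streams into a two-token sequence (the "concatenated"
        # representation), attend over it, then predict one power level per UAV.
        tokens = torch.stack([self.spatial(spatial_x), self.temporal(temporal_x)], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended.mean(dim=1))

# Example: PowerAllocator()(torch.randn(8, 4), torch.randn(8, 4)) -> shape (8, 1)
```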
6. The system (100) as claimed in claim 1, wherein the ground control unit (104) is configured
to deploy additional UAVs to extend network coverage or replace depleted UAVs based on
changes in user density or communication demands through a deployment and scaling
mechanism.
7. The system (100) as claimed in claim 2, wherein the feedback loop mechanism employs
online learning and adaptive filtering techniques to dynamically adjust the flight path,
transmission power level, and signal strength of the plurality of UAVs (102A-N).
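As an example of the adaptive filtering mentioned in claim 7, the sketch below shows a least-mean-squares (LMS) filter that could smooth noisy signal-strength reports inside the feedback loop; the filter order and step size are illustrative assumptions.

```python
# Illustrative sketch only: a least-mean-squares (LMS) adaptive filter of the kind
# that could smooth noisy signal-strength reports inside the feedback loop.
# Filter order and step size are illustrative assumptions.
import numpy as np

class LMSFilter:
    def __init__(self, order=4, mu=0.05):
        self.w = np.zeros(order)   # filter weights, adapted online
        self.mu = mu               # adaptation step size

    def step(self, x_window, desired):
        """x_window: last `order` raw samples; desired: the reference measurement."""
        x = np.asarray(x_window, dtype=float)
        estimate = float(np.dot(self.w, x))
        error = desired - estimate
        self.w += self.mu * error * x   # online weight update toward the reference
        return estimate, error
```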
8. A method for enhancing 5G wireless network coverage and throughput in disaster
management within a disaster zone, the method comprising:
deploying a plurality of unmanned aerial vehicles (UAVs) (102A-N) in the disaster
zone to establish a 5G wireless network for a plurality of receiver nodes (106A-N), wherein
each UAV (102A) comprises a sensor unit (202), a communication unit (204A-B), a flight
controller (206), and a power distribution board (210);
obtaining, using the sensor unit (202), at least one of position data, environmental data
and user demand data, wherein the position data comprises geographical coordinates, altitude,
speed, and heading information of the UAV, the environmental data comprises signal strength,
user density, and channel propagation conditions, and the user demand data comprises number
of active users and data requirements of the users;
receiving, by a ground control unit (104), the position data, the environmental data and
the user demand data that are collected at the plurality of UAVs (102A-N) from each UAV
(102A) through the communication unit (204A-B);
characterized in that,
determining, using a reinforcement learning (RL) approach (108), an optimal position
of each UAV (102A) based on the position data, and the environmental data by the ground
control unit (104), wherein the RL approach (108) comprises defining a current state of each
UAV (102A) based on the position data and environmental data, enabling the ground control
unit (104) to generate a control signal for a first position of UAV (102A) based on the current
state, receiving a reward for the first position from each UAV (102A) based on network
efficiency after each UAV (102A) is moved to the first position using the control signal,
enabling the ground control unit (104) to iteratively generate control signals for different
positions and receiving rewards for each different position from each UAV (102A), and
learning an optimal positioning strategy by maximizing the cumulative reward;
generating, by the ground control unit (104), a flight path control signal based on the
optimal position that is determined using the RL approach (108) to adjust the flight path of
each UAV (102A), wherein the flight path of each UAV (102A) is adjusted to the optimal
position using the flight controller (206) of each UAV (102A) based on the flight path control
signal that is received from the ground control unit (104) through the communication unit
(204A-B);
determining, using a deep neural network model (110), an optimal transmission power
level for each UAV (102A) based on the user demand data and the environmental data by the
ground control unit (104), wherein the optimal transmission power level is determined by
extracting a plurality of spatial features and a plurality of temporal features from the user
demand data and the environmental data; generating a concatenated feature vector by merging
the plurality of spatial features and the plurality of temporal features; and inputting the
concatenated feature vector into the deep neural network model (110) that predicts the optimal
transmission power level, wherein the deep neural network model (110) is trained by mapping
spatial and temporal data associated with historical user demand data and environmental data
to corresponding transmission power levels; and
generating, by the ground control unit (104), a power allocation control signal based on
the transmission power level that is determined using the deep neural network model (110) to
allocate the transmission power level to each UAV (102A), wherein the optimal transmission
power level is allocated to each UAV (102A) using the power distribution board (210) of each
UAV (102A) based on the power allocation control signal that is received from the ground
control unit (104) through the communication unit (204A-B), thereby enhancing the 5G
wireless network coverage and throughput in disaster management within the disaster zone.
9. The method as claimed in claim 8, wherein the method comprising,
(i) receiving, by the ground control unit (104), a plurality of performance metrics at
regular intervals from the plurality of UAVs (102A-N) and a feedback of users from the
plurality of receiver nodes (106A-N) to monitor a network performance of each UAV (102A);
(ii) comparing, by the ground control unit (104), the plurality of performance metrics
against pre-set optimal values; and
(iii) dynamically adjusting, by the ground control unit (104), the flight path,
transmission power level, and signal strength of the plurality of UAVs (102A-N) using the RL
approach (108) and the deep neural network model (110) through a feedback loop mechanism,
if the plurality of performance metrics deviate from the pre-set optimal values.
10. The method as claimed in claim 8, wherein the method comprises deploying, by the
ground control unit (104), additional UAVs to extend network coverage or replace depleted
UAVs based on changes in user density or communication demands through a deployment and
scaling mechanism.
Dated this November 11, 2024
Arjun Karthik Bala
(IN/PA 1021)
Agent for Applicant
Documents
Name | Date |
---|---|
202441087226-COMPLETE SPECIFICATION [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-DECLARATION OF INVENTORSHIP (FORM 5) [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-DRAWINGS [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-EDUCATIONAL INSTITUTION(S) [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-EVIDENCE FOR REGISTRATION UNDER SSI [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-FORM 1 [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-FORM 18 [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-FORM FOR SMALL ENTITY(FORM-28) [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-FORM-9 [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-POWER OF AUTHORITY [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-REQUEST FOR EARLY PUBLICATION(FORM-9) [12-11-2024(online)].pdf | 12/11/2024 |
202441087226-REQUEST FOR EXAMINATION (FORM-18) [12-11-2024(online)].pdf | 12/11/2024 |