A DEVICE FOR INTERPRETING GESTURAL LANGUAGE

ORDINARY APPLICATION
Published
Filed on 29 October 2024

Abstract

A device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), is disclosed. Said device (10) broadly comprises: an at least a wearable member (11); an at least an object capturing member (13); an at least a controlling member (14); an at least a display member (15); and an at least an acoustic signal generating member (16). Said device (10) is capable of being operated in two modes, namely a bare-hand mode and a wearable member mode. The disclosed device (10) offers at least the following advantages: it is efficient; it is cost-effective; it is simple in construction; it can be configured to support any language; and it can be used to control any external device using gestures.

Patent Information

Application ID: 202441082728
Invention Field: COMPUTER SCIENCE
Date of Application: 29/10/2024
Publication Number: 45/2024

Inventors

Name | Address | Country | Nationality
Padmapriya Pravinkumar | Associate Professor, School of EEE, SASTRA Deemed University, Tirumalaisamudram, Thanjavur - 613401, Tamil Nadu | India | India
Mahalakshmi K | School of EEE, SASTRA Deemed University, Tirumalaisamudram, Thanjavur - 613401, Tamil Nadu | India | India
Jeyaraju A | School of EEE, SASTRA Deemed University, Tirumalaisamudram, Thanjavur - 613401, Tamil Nadu | India | India

Applicants

Name | Address | Country | Nationality
SASTRA DEEMED UNIVERSITY | TIRUMALAISAMUDRAM, THANJAVUR - 613401, TAMIL NADU | India | India

Specification

Description: TITLE OF THE INVENTION: A DEVICE FOR INTERPRETING GESTURAL LANGUAGE
FIELD OF THE INVENTION
The present disclosure is generally related to assisting devices for individuals with hearing and speech impairment. Particularly, the present disclosure is related to: a device, for interpreting gestural language, which enables individuals with hearing and speech impairment to effectively communicate with their peers, and to control external devices.
BACKGROUND OF THE INVENTION
Approximately 5% of people worldwide require rehabilitation to address their disability. As per the World Health Organization (WHO), around 70 crore individuals are expected to experience hearing and speech impairments by the year 2050. Communication, no matter who is present or where they are, makes our surroundings vibrant.
Hearing and speech impairment exerts psychological and social impacts on the affected individuals due to the lack of proper communication. The lives and social relationships of individuals with hearing and speech impairment are negatively affected by this communication barrier.
Gestural language is a boon to individuals with hearing and speech impairment for communicating in daily life. The primary issue is that most individuals without such impairments comprehend little to no sign language. Gestural language is not easy to learn and even harder to teach. Therefore, effective communication in daily life is a challenging task for individuals with hearing and speech impairment.
Gesture language interpretation is an emerging and challenging research area that underscores the importance of gestural language in making individuals with hearing and speech impairment comfortable while communicating with their peers. Many researchers are concentrating on various gesture recognition techniques, using different technologies, to meet these challenging requirements.
However, the existing solutions lack: customization of gestures; voice translation; language selection; accurate interpretation; and assistance in controlling external devices.
There is, therefore, a need in the art for a device, for interpreting gestural language, which overcomes the aforementioned drawbacks and shortcomings.
SUMMARY OF THE INVENTION
A device, for interpreting gestural language, which enables a user to communicate with their peers, and to control an at least one external device, is disclosed.
Said device broadly comprises: an at least a wearable member (for example, a glove); an at least an object capturing member; an at least a controlling member; an at least a display member; and an at least an acoustic signal generating member.
Said device is capable of being operated in two modes, namely a bare-hand mode and a wearable member mode.
When the device is in use, in the wearable member mode, the user wears the at least one wearable member on his/her hand. Said at least one wearable member broadly comprises: a plurality of force sensing members; and an at least an orientation detecting member.
Each force sensing member among the plurality of force sensing members is disposed on a respective finger of the at least one wearable member, along the respective finger's length, and senses the movement of the user's fingers through a series of hand gestures continuously, in real-time, with the sensed data being transmitted to the at least one controlling member.
The at least one orientation detecting member is disposed on a dorsum side of the at least one wearable member. Said at least one orientation detecting member senses the orientation of the hand continuously, in real-time, with the sensed data being transmitted to the at least one controlling member.
In an embodiment, the plurality of force sensing members are flex sensors, and the at least one orientation detecting member is an accelerometer sensor.
The at least one object capturing member captures an articulation of the user's hand continuously, in real-time, with: the captured data being transmitted to the at least one controlling member.
In an embodiment, the at least one object capturing member is a camera.
The at least one controlling member facilitates monitoring, managing, and controlling the operations of the device. Said at least one controlling member is embedded with a capturing and storing engine, and a gesture detecting engine.
The capturing and storing engine is configured to collect and store the data received from the plurality of force sensing members, the at least one orientation detecting member, and the at least one object capturing member.
The gesture detecting engine is configured to identify the gesture based on the data received from the plurality of force sensing members and the at least one orientation detecting member, or from the at least one object capturing member, depending on the mode of operation, and to output a respective word/phrase associated with the gesture.
The word/phrase received from the gesture detecting engine is displayed (or presented) through the at least one display member and is also presented through the at least one acoustic signal generating member for communicating with the peers.
The word/phrase presented through the at least one acoustic signal generating member can be used as a command to actuate (or control) an at least one external device, apart from enabling to communicate with the peers.
In an embodiment, the at least one acoustic signal generating member is a speaker.
In an embodiment, the at least one external device may include, but is not limited to, the IoT devices that can be controlled through voice assistance.
The device further comprises an at least a switching member that switches the mode of operation of the device between the bare-hand mode and the wearable member mode.
The device can be powered by a battery, and said battery may be a rechargeable battery.
The method of working of the device is also disclosed.
The disclosed device offers at least the following advantages: it is efficient; it is cost-effective; it is simple in construction; it can be configured to support any language; and it can be used to control any external device using gestures.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a device, for interpreting gestural language, in accordance with an embodiment of the present disclosure;
Figure 2a and Figure 2b illustrate a circuit diagram of a device, for interpreting gestural language, in accordance with an embodiment of the present disclosure; and
Figure 3 illustrates a MediaPipe representation of a bare hand in a device, for interpreting gestural language, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
Throughout this specification, the use of the words "comprise" and "include", and variations, such as "comprises", "comprising", "includes", and "including", may imply the inclusion of an element (or elements) not specifically recited. Further, the disclosed embodiments may be embodied, in various other forms, as well.
Throughout this specification, the use of the word "device" is to be construed as: "a set of technical components (also referred to as "members") that are communicatively and/or operably associated with each other, and function together, as part of a mechanism, to achieve a desired technical result".
Throughout this specification, the use of the words "communication", "couple", and their variations (such as communicatively), is to be construed as being inclusive of: one-way communication (or coupling); and two-way communication (or coupling), as the case may be, irrespective of the direction of arrows, in the drawings.
Throughout this specification, all technical and scientific terminologies are to be construed, as they would be construed, by a person with ordinary competence in the field (or a person skilled in the art), to which, this disclosure relates, unless otherwise specified.
Throughout this specification, the use of the phrase "gestural language", "gesture language", and their variations, is to be construed as: "a visual-manual modality rather than spoken words to communicate meaning". For example, sign language used by individuals with hearing and speech impairment.
Throughout this specification, the use of the word "user", and its variations is to be construed as being inclusive of an individual with hearing and speech impairment.
Throughout this specification, the use of the word "plurality" is to be construed as being inclusive of: "at least one".
Throughout this specification, where applicable, the use of the phrase "at least" is to be construed in association with the suffix "one" i.e. it is to be read along with the suffix "one", as "at least one", which is used in the meaning of "one or more". A person skilled in the art will appreciate the fact that the phrase "at least one" is a standard term that is used, in Patent Specifications, to denote any component of a disclosure, which may be present (or disposed) in a single quantity, or more than a single quantity.
Throughout this specification, where applicable, the use of the phrase "at least one" is to be construed in association with a succeeding component name.
Throughout this specification, the disclosure of a range is to be construed as being inclusive of: the lower limit of the range; and the upper limit of the range.
Throughout this specification, the phrases "at least a", "at least an", and "at least one" are used interchangeably.
Throughout this specification, the words "the" and "said" are used interchangeably.
Throughout this specification, the word "sensor" and the phrase "sensing member" are used interchangeably. The disclosed sensing members may be of any suitable type known in the art.
Also, it is to be noted that embodiments may be described as a method. Although the operations, in a method, are described as a sequential process, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. A method may be terminated, when its operations are completed, but may also have additional steps.
A device, for interpreting gestural language (10; also referred to as "device"), which enables a user to effectively communicate with their peers, and to control an at least one external device, is disclosed.
In an embodiment of the present disclosure, as illustrated in Figure 1, Figure 2a, and Figure 2b, said device (10) broadly comprises: an at least a wearable member (11, for example, a glove); an at least an object capturing member (13); an at least a controlling member (14); an at least a display member (15); and an at least an acoustic signal generating member (16).
In another embodiment of the present disclosure, said device (10) is capable of being operated in two modes, namely a bare-hand mode and a wearable member mode.
In yet another embodiment of the present disclosure, when the device (10) is in use, in the wearable member mode, the user wears the at least one wearable member (11) on his/her hand. Said at least one wearable member (11) broadly comprises: a plurality of force sensing members (111); and an at least an orientation detecting member (112).
A person skilled in the art will appreciate the fact that the size and the material of the at least one wearable member (11) can be chosen based on the hand and bending action of the user who is going to use it. The size of the hand varies from user to user, and hence the size must be chosen accordingly. Based on a hand measurement of between about 7 inches and about 10 inches, the size varies from small to XXL, respectively. Similarly, the material of construction of the at least one wearable member (11) is based on the user's liking and convenience (i.e., cotton, woollen, leather, etc.).
Each force sensing member among the plurality of force sensing members (111) is disposed on a respective finger of the at least one wearable member (11), throughout the respective finger's length, and the at least one orientation detecting member (112) is disposed on a dorsum side of the at least one wearable member (11), above the wrist and below the knuckles.
In yet another embodiment of the present disclosure, the plurality of force sensing members (111) are flex sensors, and the at least one orientation detecting member (112) is an accelerometer sensor.
The plurality of force sensing members (111) facilitate determining (or sensing) how the user's fingers move through a series of hand gestures continuously, in real-time, with the sensed data being transmitted to the at least one controlling member (14).
The at least one orientation detecting member (112) facilitates detecting (or sensing) the orientation of the hand continuously, in real-time, with the sensed data being transmitted to the at least one controlling member (14). Said at least one orientation detecting member (112) is used to fetch information regarding the triplet of x, y, z axes as per the orientation of the hand in different positions.
Hence, for each gesture, a combination of at least five [5] input values from the plurality of force sensing members (111), and at least three [3] input values from the at least one orientation detecting member (112), is generated (or determined, or detected).
The plurality of force sensing members (111) are variable resistors whose resistance varies according to the amount of bending or flexing. This change in resistance results in a change in voltage, which is read by the at least one controlling member (14). A 10 kΩ pull-down resistor is connected in series with each force sensing member among the plurality of force sensing members (111) to create the voltage divider.
The resistance of a force sensing member is determined using the following formula:
R_flex = R_fixed × (V_in / V_out − 1)
And the output voltage of the divider, measured across the fixed pull-down resistor, is determined using the following formula:
V_out = V_in × R_fixed / (R_flex + R_fixed)
where V_in is the input voltage, R_flex is the resistance of the force sensing member, and R_fixed is the fixed pull-down resistor connected in series with the force sensing member.
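For illustration only, and not as part of the specification, the divider formulae above can be applied to a raw analog reading as follows; the 5 V supply, the 10-bit ADC range, and the example count are assumptions based on the Arduino UNO named later in the description.

```python
# Illustrative sketch only (not part of the specification): converting a raw
# analog reading into the flex-sensor resistance using the divider formulae above.
# The 5 V supply, the 10-bit ADC range, and the example count are assumptions.

V_IN = 5.0          # supply voltage feeding the divider (volts)
R_FIXED = 10_000.0  # 10 kOhm pull-down resistor (ohms)
ADC_MAX = 1023      # full-scale count of a 10-bit ADC

def adc_to_voltage(adc_count: int) -> float:
    """Convert a raw ADC count into the voltage at the divider output (V_out)."""
    return V_IN * adc_count / ADC_MAX

def flex_resistance(v_out: float) -> float:
    """R_flex = R_fixed * (V_in / V_out - 1), per the divider formulae above."""
    return R_FIXED * (V_IN / v_out - 1.0)

if __name__ == "__main__":
    v_out = adc_to_voltage(512)                      # example mid-scale reading
    print(f"V_out = {v_out:.2f} V, R_flex = {flex_resistance(v_out):.0f} ohms")
```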
To determine the bending angle, the resistance of the plurality of force sensing members (111) at several known angles, such as 0°, 45°, and 90°, needs to be captured.
Let the measured resistances for two known angles be (R_1, θ_1) and (R_2, θ_2). The slope (m), intercept (b), and bending angle (θ) are determined using the following formulae:
m = (θ_2 − θ_1) / (R_2 − R_1)
b = θ_1 − m × R_1
θ = m × R_flex + b
The above formulae use the slope and intercept to calculate the angle for any given resistance R_flex.
To calibrate the plurality of force sensing members (111), their resistance is measured both in the relaxed condition and at the maximum bending angle, for a few iterations, and the resistance range of each force sensing member is determined. Said resistance ranges are mapped to one or more gestures.
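A minimal sketch of the two-point calibration described above is given below; the calibration resistances and angles used are made-up example values, not measurements from the specification.

```python
# Illustrative sketch only (not part of the specification): two-point calibration
# of a force sensing member, per the formulae above. The calibration resistances
# and angles below are made-up example values, not measured data.

def fit_calibration(r1: float, theta1: float, r2: float, theta2: float):
    """Return the slope m and intercept b of the linear resistance-to-angle model."""
    m = (theta2 - theta1) / (r2 - r1)
    b = theta1 - m * r1
    return m, b

def bending_angle(r_flex: float, m: float, b: float) -> float:
    """theta = m * R_flex + b."""
    return m * r_flex + b

if __name__ == "__main__":
    # Assumed calibration points: ~25 kOhm when relaxed (0 deg), ~60 kOhm at 90 deg.
    m, b = fit_calibration(25_000.0, 0.0, 60_000.0, 90.0)
    print(f"Estimated bend: {bending_angle(40_000.0, m, b):.1f} degrees")
```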
The at least one orientation detecting member (112) measures x, y and z positions (equivalent voltages) as per the orientation of the hand at different positions.
When the at least one wearable member (11) is in a rest position, x = y = z = 0.
When the at least one wearable member (11) is moved forward from the rest position, x = −3.9, y = 0.31, and z = 10.2.
When the at least one wearable member (11) is moved backward from the rest position, x = 3.81, y = 0.296, and z = 8.77.
When the at least one wearable member (11) is oriented towards the left from the rest position, x = 0.802, y = −3.40, and z = 9.22.
When the at least one wearable member (11) is oriented towards the right from the rest position, x = −0.8, y = −3.5, and z = 9.84.
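The specification lists only these representative readings; as a minimal sketch, assuming a nearest-reference matching rule, the orientation of the hand could be classified from a new (x, y, z) reading as follows.

```python
# Illustrative sketch only (not part of the specification): classifying the hand
# orientation by nearest-reference matching against the representative readings
# listed above. The matching rule itself is an assumption.
import math

REFERENCE_READINGS = {
    "rest":     (0.0, 0.0, 0.0),
    "forward":  (-3.9, 0.31, 10.2),
    "backward": (3.81, 0.296, 8.77),
    "left":     (0.802, -3.40, 9.22),
    "right":    (-0.8, -3.5, 9.84),
}

def classify_orientation(x: float, y: float, z: float) -> str:
    """Return the label of the reference reading closest to (x, y, z)."""
    def distance(ref):
        rx, ry, rz = ref
        return math.sqrt((x - rx) ** 2 + (y - ry) ** 2 + (z - rz) ** 2)
    return min(REFERENCE_READINGS, key=lambda name: distance(REFERENCE_READINGS[name]))

if __name__ == "__main__":
    print(classify_orientation(-3.5, 0.2, 9.9))   # expected: "forward"
```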
The at least one controlling member (14) facilitates monitoring, managing, and controlling the operations of the device (10).
In yet another embodiment of the present disclosure, the at least one controlling member (14) is a microcontroller. The microcontroller may be an Arduino UNO.
In yet another embodiment of the present disclosure, the at least one controlling member (14) is a microcontroller integrated with the at least one communication member. The microcontroller integrated with the at least one communication member may be an Arduino UNO WiFi.
In yet another embodiment of the present disclosure, the device (10) further comprises: an at least a switching member (not shown); a capturing and storing engine (141); and a gesture detecting engine (142).
The switching member facilitates switching the mode of operation of the device (10) between the bare-hand mode and the wearable member mode. In the bare-hand mode of operation, an articulation of the user's hand (12; without wearing the at least one wearable member (11)) is captured continuously by the at least one object capturing member (13), in real-time, and the captured data is transmitted to the at least one controlling member (14).
In yet another embodiment of the present disclosure, the at least one object capturing member (13) is a camera.
The capturing and storing engine (141) and the gesture detecting engine (142) are embedded within the at least one controlling member (14).
The capturing and storing engine (141) is configured to collect and store data received from the plurality of force sensing members (111), the at least one orientation detecting member (112), and the at least one object capturing member (13).
The gesture detecting engine (142) is configured to identify (or determine) the gesture based on the data received from the plurality of force sensing members (111) and the at least one orientation detecting member (112), or from the at least one object capturing member (13), depending on the mode of operation, and to output a respective word/phrase associated with the gesture.
The word/phrase received from the gesture detecting engine (142) is displayed (or presented) through the at least one display member (15) and is also presented through the at least one acoustic signal generating member (16) for communicating with the peers.
The word/phrase presented through the at least one acoustic signal generating member (16) can also be used as a command to actuate (or control) an at least one external device (17), apart from enabling communication with the peers.
In yet another embodiment of the present disclosure, the at least one acoustic signal generating member (16) is a speaker.
In yet another embodiment of the present disclosure, the at least one external device (17) may include, but is not limited to, the IoT devices that can be controlled through voice assistance.
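The specification does not name a speech synthesis method; as a minimal sketch, assuming an off-the-shelf text-to-speech library such as pyttsx3, the recognised word/phrase could be presented through the speaker as follows.

```python
# Illustrative sketch only (not part of the specification): presenting the
# recognised word/phrase as an acoustic signal. pyttsx3 is an assumed
# off-the-shelf text-to-speech library; the specification does not name one.
import pyttsx3

def speak(phrase: str) -> None:
    """Convert the recognised word/phrase into speech through the speaker."""
    engine = pyttsx3.init()
    engine.say(phrase)
    engine.runAndWait()

if __name__ == "__main__":
    speak("Need water")   # example output phrase from the gesture table below
```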
The table below illustrates the gestures and their respective identified words/phrases, along with the relevant values of the plurality of force sensing members (111) and the at least one orientation detecting member (112).
Orientation Detecting Member (X, Y, Z) | Force Sensing Member 1 | Force Sensing Member 2 | Force Sensing Member 3 | Force Sensing Member 4 | Output
(0, -1, -1) | 0-10 degrees | >90 degrees | >90 degrees | >90 degrees | Welcome
(-1, 0, 0) | 0-10 degrees | 0-10 degrees | >90 degrees | >90 degrees | How are you
(0, -1, -1) | 0-10 degrees | >90 degrees | >90 degrees | 0-10 degrees | Have a seat
(0, 1, 1) | >90 degrees | >90 degrees | >90 degrees | >90 degrees | Need water
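For illustration only, the mapping in the table above can be expressed as a simple rule-based lookup; the two-state discretisation of the flex readings and the tuple-based orientation coding are assumptions, not part of the specification.

```python
# Illustrative sketch only (not part of the specification): a rule-based lookup
# that maps a coarse orientation triple plus four flex-sensor states to an output
# phrase, mirroring the table above. The two-state discretisation of the flex
# readings ("BENT" for >90 degrees, "STRAIGHT" for 0-10 degrees) is an assumption.
from typing import Tuple

BENT, STRAIGHT = "BENT", "STRAIGHT"

GESTURE_TABLE = {
    ((0, -1, -1), (STRAIGHT, BENT, BENT, BENT)):     "Welcome",
    ((-1, 0, 0),  (STRAIGHT, STRAIGHT, BENT, BENT)): "How are you",
    ((0, -1, -1), (STRAIGHT, BENT, BENT, STRAIGHT)): "Have a seat",
    ((0, 1, 1),   (BENT, BENT, BENT, BENT)):         "Need water",
}

def discretise_flex(angle_deg: float) -> str:
    """Map a bending angle to the coarse state used in the table."""
    return BENT if angle_deg > 90 else STRAIGHT

def lookup_gesture(orientation: Tuple[int, int, int], angles: Tuple[float, ...]) -> str:
    """Return the word/phrase for the given orientation and four bending angles."""
    flex_states = tuple(discretise_flex(a) for a in angles)
    return GESTURE_TABLE.get((orientation, flex_states), "Unknown gesture")

if __name__ == "__main__":
    print(lookup_gesture((0, 1, 1), (120, 110, 95, 100)))   # "Need water"
```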

The articulation of the user's hand (12; i.e., the gesture) captured is recognised (or interpreted) by the gesture detecting engine (142) using MediaPipe, which provides 21 hand key points, as illustrated in Figure 3. Initially, the 21 hand key points are marked, and landmarks are drawn on the gesture using these key points with the help of MediaPipe. Subsequently, the key point values are extracted for each gesture (and/or for each hand) and stored as a reference template.
The captured hand images (and/or videos) are flipped from BGR format into RGB format (i.e., the channel order is reversed). MediaPipe is used to map the hand key points, and a dataset is created. The created dataset is split in a 4:1 ratio; the larger portion is used to train the model and the remainder is used to test it.
In yet another embodiment of the present disclosure, TensorFlow is used to train and test the model.
In yet another embodiment of the present disclosure, 30 videos for each gesture, with a sequence length of 30 frames each, have been collected with the help of the at least one object capturing member (13).
Each frame among the plurality of frames in each captured video contains the key point values in terms of the x, y, and z axes, where x and y define the normal distance of the hand from the respective axis, and z defines the distance between the hand and the at least one object capturing member (13).
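A minimal sketch of this key-point extraction and dataset preparation is given below; the mirror flip, the padding of short videos, and the scikit-learn split helper are assumptions, since the specification only states that MediaPipe, 30-frame sequences, and a 4:1 split are used.

```python
# Illustrative sketch only (not part of the specification): extracting the 21
# MediaPipe hand key points from each frame and building 30-frame sequences per
# gesture. The mirror flip, the padding of short videos, and the scikit-learn
# split helper are assumptions.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.model_selection import train_test_split

mp_hands = mp.solutions.hands

def frame_keypoints(frame_bgr, hands) -> np.ndarray:
    """Return the 21 (x, y, z) hand key points of one frame as a flat 63-vector."""
    frame = cv2.flip(frame_bgr, 1)                  # mirror view (assumed convention)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # reverse BGR channel order for MediaPipe
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return np.zeros(63, dtype=np.float32)       # no hand detected in this frame
    landmarks = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in landmarks], dtype=np.float32).flatten()

def video_sequence(path: str, seq_len: int = 30) -> np.ndarray:
    """Extract a fixed-length sequence of key-point vectors from one gesture video."""
    cap, frames = cv2.VideoCapture(path), []
    with mp_hands.Hands(static_image_mode=False, max_num_hands=1) as hands:
        while len(frames) < seq_len:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame_keypoints(frame, hands))
    cap.release()
    while len(frames) < seq_len:                    # pad short videos with empty frames
        frames.append(np.zeros(63, dtype=np.float32))
    return np.stack(frames)                         # shape: (30, 63)

# X: (num_videos, 30, 63) key-point sequences, y: integer gesture labels;
# a 4:1 train/test split corresponds to test_size=0.2.
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```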
In yet another embodiment of the present disclosure, the gesture detecting engine (142) is based on a recurrent neural network, such as a Long Short-Term Memory (LSTM) neural network.
The gesture detecting engine (142) introduces a memory cell that can hold information for a longer period of time and is controlled by three gates, namely an input gate, a forget gate, and an output gate. Said gates decide what information is added to, removed from, and output from the memory cell.
The method of recognising (or interpreting) the articulation of the user's hand (12; i.e., the gesture) by the gesture detecting engine (142) shall now be explained.
The process begins by capturing the gesture as a sequence of images or frames from a video captured using the at least one object capturing member (13). Each frame represents a point in time during the gesture. For example, in a hand wave gesture, the sequence of frames shows the hand starting at rest, moving across the body, and then returning to rest.
Before sending the sequence of frames to the gesture detecting engine (142), preprocessing is done to make the data suitable. This stage normally includes the following steps:
Image Resizing: Resize each frame to a fixed size;
Normalization: Normalize pixel values to improve training; and
Feature Extraction: Instead of raw pixel values, features (like hand landmarks, movement directions) may be extracted for spatial feature extraction.
A person skilled in the art will appreciate the fact that any suitable Convolutional Neural Network (CNN) technique can be employed for spatial feature extraction from the sequence of frames.
After the completion of spatial feature extraction, each frame in the video is converted into a feature vector.
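A minimal sketch of the resize-and-normalise preprocessing is given below; the 224×224 target size is an assumption, as the specification only states that each frame is resized to a fixed size and pixel values are normalised.

```python
# Illustrative sketch only (not part of the specification): the resize-and-
# normalise preprocessing described above, applied to one captured frame. The
# 224x224 target size is an assumption.
import cv2
import numpy as np

def preprocess_frame(frame_bgr: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize a frame and scale its pixel values to [0, 1] for training."""
    resized = cv2.resize(frame_bgr, size)
    return resized.astype(np.float32) / 255.0
```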
The sequence of feature vectors is then fed into the gesture detecting engine (142). Said gesture detecting engine (142) is well suited for processing sequential data because it has memory cells that "remember" information from previous time steps, allowing it to model the temporal aspect of gestures.
At each time step (frame), the gesture detecting engine (142) updates its hidden state based on the current input (feature vector) and the previous hidden state. This allows the gesture detecting engine (142) to keep track of the progression of the gesture over time.
Once the gesture detecting engine (142) has processed the entire sequence, the final hidden state contains the gesture detecting engine (142)'s understanding of the entire gesture. This information is passed to a classifier (for example, a fully connected layer) that assigns a gesture label based on the learned sequence.
During training, the gesture detecting engine (142) learns to associate different sequences of movements with specific gesture labels. A loss function (e.g., cross-entropy) is used to measure the difference between the predicted and true gesture labels. The weights of the gesture detecting engine (142) are adjusted through backpropagation through time (BPTT) to minimise the loss.
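A minimal sketch of such an LSTM classifier, assuming the TensorFlow/Keras API, 30-frame sequences of 63 key-point features, and arbitrary layer sizes, is given below; it illustrates the approach described above rather than the inventors' implementation.

```python
# Illustrative sketch only (not part of the specification): an LSTM classifier
# over 30-frame sequences of 63 key-point features, trained with a cross-entropy
# loss as described above. Layer sizes, the optimiser, and NUM_GESTURES are
# assumptions.
import tensorflow as tf

NUM_GESTURES = 4                     # e.g. Welcome, How are you, Have a seat, Need water
SEQ_LEN, NUM_FEATURES = 30, 63

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),            # per-frame hidden states
    tf.keras.layers.LSTM(64),                                    # final hidden state summarises the gesture
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),   # classifier head
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",   # cross-entropy over integer gesture labels
    metrics=["accuracy"],
)

# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=50)
```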
In yet another embodiment of the present disclosure, said device (10) can be powered by a battery, and said battery may be a rechargeable battery.
In yet another embodiment of the present disclosure, the device (10) can be configured to identify the gesture and output respective word/phrase in any language. Hence, the disclosed device (10) is language independent. Alternatively, in the device (10), the gestures can be matched (or associated) with the words/phrases of any language, by the user.
In yet another embodiment of the present disclosure, the device (10) is communicatively associated with an at least an external computing device. The at least one communication member facilitates establishing communication between the device (10) and the at least one external computing device.
The disclosed device (10) offers at least the following advantages: it is efficient; it is cost-effective; it is simple in construction; it can be configured to support any language; and it can be used to control any external device using gestures.
Implementation of the disclosure can involve performing or completing selected tasks manually, automatically, or a combination thereof. Further, according to actual instrumentation of the disclosure, several selected tasks could be implemented, by hardware, by software, by firmware, or by a combination thereof, using an operating system. For example, as software, selected tasks, according to the disclosure, could be implemented, as a plurality of software instructions being executed, by a computer, using any suitable operating system.
A person skilled in the art will appreciate the fact that the device, and its various components, may be made of any suitable materials known in the art. Likewise, a person skilled in the art will also appreciate the fact that the configurations of the device, and its various components, may be varied, based on requirements.
It will be apparent to a person skilled in the art that the above description is for illustrative purposes only and should not be considered as limiting. Various modifications, additions, alterations, and improvements, without deviating from the spirit and the scope of the disclosure, may be made, by a person skilled in the art. Such modifications, additions, alterations, and improvements, should be construed as being within the scope of this disclosure.


LIST OF REFERENCE NUMERALS
10 - A Device for Interpreting Gestural Language
11 - At Least One Wearable Member
111 - Plurality of Force Sensing Members
112 - At Least One Orientation Detecting Member
12 - Bare Hand Gestures
13 - At Least One Object Capturing Member
14 - At Least One Controlling Member
15 - At Least One Display Member
16 - At Least One Acoustic Signal Generating Member
17 - At Least One External Device

Claims:
1. A device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), comprising:
an at least a wearable member (11) that is worn by the user on his/her hand, said at least one wearable member (11) comprising:
a plurality of force sensing members (111), with:
each force sensing member among the plurality of force sensing members (111) being disposed on a respective finger of the at least one wearable member (11) throughout the respective finger's length, and sensing the movement of the user's fingers through a series of hand gestures continuously, in real-time; and
the sensed data being transmitted to an at least a controlling member (14); and
an at least an orientation detecting member (112) that is disposed on a dorsum side of the at least one wearable member (11), said at least one orientation detecting member (112) sensing the orientation of the hand continuously, in real-time, with: the sensed data being transmitted to the at least one controlling member (14);
an at least an object capturing member (13) that captures an articulation of the user's hand (12) continuously, in real-time, with: the captured data being transmitted to the at least one controlling member (14);
the at least one controlling member (14) that facilitates monitoring, managing, and controlling the operations of the device (10), said at least one controlling member (14) being embedded with a capturing and storing engine (141), and a gesture detecting engine (142), with:
said capturing and storing engine (141) being configured to collect and store the data received from the plurality of force sensing members (111), the at least one orientation detecting member (112), and the at least one object capturing member (13); and
the gesture detecting engine (142) being configured to identify the gesture based on the data received from the plurality of force sensing members (111) and the at least one orientation detecting member (112), or from the at least one object capturing member (13), and to output a respective word/phrase associated with the gesture;
an at least a display member (15) that displays the word/phrase received from the gesture detecting engine (142); and
an at least an acoustic signal generating member (16) that presents the word/phrase received from the gesture detecting engine (142) for: communicating with the peers, and controlling the at least one external device (17),
with: said device (10) being operated in a bare-hand mode and a wearable member mode.
2. The device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), as claimed in Claim 1, wherein: the device is powered by a battery.
3. The device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), as claimed in Claim 1, wherein:
the device (10) comprises an at least a switching member that switches the mode of operation of the device (10) between the bare-hand mode and the wearable member mode.
4. The device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), as claimed in Claim 1, wherein:
the plurality of force sensing members (111) are flex sensors, and the at least one orientation detecting member (112) is an accelerometer sensor.
5. The device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), as claimed in Claim 1, wherein: the at least one object capturing member (13) is a camera.
6. The device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), as claimed in Claim 1, wherein: the at least one acoustic signal generating member (16) is a speaker.
7. The device, for interpreting gestural language (10), which enables a user to communicate with their peers, and to control an at least one external device (17), as claimed in Claim 1, wherein:
the at least one external device (17) includes the IoT devices that are controlled through voice assistance.

Documents

Name | Date
202441082728-COMPLETE SPECIFICATION [29-10-2024(online)].pdf | 29/10/2024
202441082728-DECLARATION OF INVENTORSHIP (FORM 5) [29-10-2024(online)].pdf | 29/10/2024
202441082728-DRAWINGS [29-10-2024(online)].pdf | 29/10/2024
202441082728-EDUCATIONAL INSTITUTION(S) [29-10-2024(online)].pdf | 29/10/2024
202441082728-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [29-10-2024(online)].pdf | 29/10/2024
202441082728-FIGURE OF ABSTRACT [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM 1 [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM 18 [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM 3 [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM FOR SMALL ENTITY(FORM-28) [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM-5 [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM-8 [29-10-2024(online)].pdf | 29/10/2024
202441082728-FORM-9 [29-10-2024(online)].pdf | 29/10/2024
202441082728-OTHERS [29-10-2024(online)].pdf | 29/10/2024
202441082728-POWER OF AUTHORITY [29-10-2024(online)].pdf | 29/10/2024
202441082728-REQUEST FOR EARLY PUBLICATION(FORM-9) [29-10-2024(online)].pdf | 29/10/2024
