LANGUAGE SUPPORTING SYSTEM
ORDINARY APPLICATION
Published
Filed on 26 November 2024
Abstract
Disclosed herein is a natural language processing system (100) comprising a user device (102) and a microprocessor (108) operatively connected to the user device (102). The microprocessor (108) includes a data input module (110) configured to receive user queries in a specified language, a pre-processing module (112) configured to clean, normalize, and tokenize the input text, and a language processing module (114) integrated with a domain-specific language model for analysing syntax, semantics, and context. The microprocessor (108) further comprises an intent recognition module (116) to classify user queries into predefined healthcare-related intents and an entity extraction module (118) to identify and extract entities such as symptoms, medications, and dates. A dialogue management module (120) generates responses based on intent and extracted entities, while a response generation module (122) formulates text responses. The output module (124) transmits responses back to the user device (102), ensuring seamless interaction in the specified language.
Patent Information
Field | Value |
---|---|
Application ID | 202441091948 |
Invention Field | ELECTRONICS |
Date of Application | 26/11/2024 |
Publication Number | 49/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
SARITHA SHETTY | DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS, NMAM INSTITUTE OF TECHNOLOGY, NITTE (DEEMED TO BE UNIVERSITY), NITTE - 574110, KARNATAKA, INDIA | India | India |
SAVITHA SHETTY | DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NMAM INSTITUTE OF TECHNOLOGY, NITTE (DEEMED TO BE UNIVERSITY), NITTE - 574110, KARNATAKA, INDIA | India | India |
MANJUNATH M | DEPARTMENT OF CIVIL ENGINEERING, NMAM INSTITUTE OF TECHNOLOGY, NITTE (DEEMED TO BE UNIVERSITY), NITTE - 574110, KARNATAKA, INDIA | India | India |
SAVITHA G | DEPARTMENT OF DATA SCIENCE AND COMPUTER APPLICATIONS, MANIPAL INSTITUTE OF TECHNOLOGY, MANIPAL ACADEMY OF HIGHER EDUCATION, MANIPAL | India | India |
UMA R | DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY, BANGALORE | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
NITTE (DEEMED TO BE UNIVERSITY) | 6TH FLOOR, UNIVERSITY ENCLAVE, MEDICAL SCIENCES COMPLEX, DERALAKATTE, MANGALURU, KARNATAKA 575018 | India | India |
Specification
Description:
FIELD OF DISCLOSURE
[0001] The present disclosure generally relates to language conversational systems and, more specifically, to an NLP-based language supporting system for healthcare support and communication.
BACKGROUND OF THE DISCLOSURE
[0002] Communication barriers in healthcare can significantly impact patient outcomes, particularly in regions where linguistic diversity is high. Language disparities often prevent patients from effectively describing their symptoms or understanding medical advice, leading to misdiagnoses, non-compliance with treatment plans, and inadequate care.
[0003] Current healthcare communication systems, such as multilingual chatbots, primarily focus on widely spoken languages, leaving speakers of low-resource languages underserved. This limitation is particularly evident in regions where Tulu is a predominant language, as existing solutions often exclude Tulu due to a lack of linguistic resources and technological advancements tailored to the language.
[0004] Traditional natural language processing (NLP) systems struggle to address the needs of low-resource languages due to the limited availability of annotated corpora and pre-trained language models. Furthermore, existing healthcare dialogue systems often fail to incorporate cultural and linguistic nuances, which are crucial for effective communication and building trust with users.
[0005] The present invention addresses these challenges by introducing a language-supporting system tailored to Tulu and other low-resource languages in the healthcare domain. By leveraging domain-specific training data and innovative machine learning models, the proposed solution bridges the communication gap for Tulu-speaking patients, enabling better healthcare access, understanding, and outcomes.
[0006] Thus, in light of the above-stated discussion, there exists a need for a language supporting system.
SUMMARY OF THE DISCLOSURE
[0007] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0008] According to illustrative embodiments, the present disclosure focuses on a language supporting system which overcomes the above-mentioned disadvantages or provides users with a useful or commercial choice.
[0009] An objective of the present disclosure is to facilitate effective communication between patients and healthcare providers by developing a language-supporting system tailored for low-resource languages such as Tulu, ensuring equitable access to healthcare services.
[0010] Another objective of the present disclosure is to enable accurate and context-aware interactions by leveraging advanced natural language processing (NLP) techniques, including language modelling, intent recognition, and entity extraction, trained specifically on healthcare-related Tulu data.
[0011] Another objective of the present disclosure is to enhance user engagement and trust by incorporating cultural and linguistic nuances in the system's responses, ensuring that communication is empathetic and relevant to the patient's context.
[0012] Another objective of the present disclosure is to address the challenges of low-resource language adaptation by developing innovative methods for training NLP models with limited linguistic data, including the use of transfer learning and domain-specific data augmentation.
[0013] Another objective of the present disclosure is to improve healthcare outcomes by providing timely and accurate responses to patient queries, ranging from symptom analysis to medication details, through an intuitive and user-friendly conversational interface.
[0014] Yet another objective of the present disclosure is to ensure scalability and adaptability by integrating a secure cloud database that stores medical knowledge, conversation logs, and user preferences, enabling real-time access and system updates.
[0015] Yet another objective of the present disclosure is to employ modular design principles, enabling seamless integration of new languages and healthcare domains, making the system versatile for diverse linguistic and medical contexts.
[0016] Yet another objective of the present disclosure is to promote inclusive healthcare solutions by bridging the communication gap for Tulu-speaking patients, thereby reducing disparities in healthcare access and improving overall patient satisfaction and outcomes.
[0017] In light of the above, in one aspect of the present disclosure, a language supporting system is disclosed herein. The system comprises a user device configured to enable a user to input a query in a specified language and receive a response, and a microprocessor operatively connected to the user device and configured to process and execute language-supporting tasks. The microprocessor further comprises a data input module configured to receive user queries; a pre-processing module configured to clean, normalize, and tokenize the input text for further processing; a language processing module configured to analyse the syntax, semantics, and context of the pre-processed text using a language model trained on domain-specific data; an intent recognition module configured to classify the user query into predefined intents related to healthcare; an entity extraction module configured to identify and extract healthcare-related entities, including symptoms, medications, and dates; a dialogue management module configured to generate an appropriate response based on the identified intent and extracted entities, ensuring contextual continuity in the conversation; a response generation module configured to generate an appropriate text response based on the identified intent and entities; and an output module configured to transmit the generated response back to the user device in the specified language.
[0018] In one embodiment, the user device supports multimodal input, including text, speech, or image-based queries, enabling users with diverse communication preferences to interact with the system.
[0019] In one embodiment, the microprocessor incorporates parallel processing capabilities to enhance the efficiency of language and intent processing tasks.
[0020] In one embodiment, the user input module is configured to detect and dynamically switch between multiple languages based on the linguistic features of the input query.
[0021] In one embodiment, the pre-processing module includes submodules for stop-word removal, lemmatization, and stemming, specifically optimized for the grammar and syntax of the specified language.
[0022] In one embodiment, the language processing module employs transformer-based models, such as BERT or GPT, fine-tuned on healthcare-specific datasets to ensure contextual understanding and semantic accuracy.
[0023] In one embodiment, the response generation module employs a neural sequence-to-sequence model to dynamically generate responses that incorporate user context, extracted entities, and cultural nuances specific to the specified language.
[0024] In one embodiment, the system further comprises a cloud database configured to securely store healthcare data, including conversation logs, medical guidelines, and user preferences, enabling real-time access and scalability for improved system performance.
[0025] In one embodiment, the system further comprises a communication network configured to facilitate seamless data exchange between the microprocessor and the user device, ensuring real-time processing and response generation while maintaining data security and integrity.
[0026] These and other advantages will be apparent from the present application of the embodiments described herein.
[0027] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated other embodiments of the present invention are possible utilizing alone or in combination one or more of the features set forth above or described in detail below.
[0028] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0030] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0031] FIG. 1 illustrates a block diagram of a language supporting system, in accordance with an exemplary embodiment of the present disclosure.
[0032] FIG. 2 illustrates a flowchart of a method outlining the sequential steps performed by the language supporting system, in accordance with an exemplary embodiment of the present disclosure.
[0033] Like reference numerals refer to like parts throughout the description of several views of the drawing.
[0034] The language supporting system is illustrated in the accompanying drawings, in which like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figure is intended to present illustrations of exemplary embodiments of the present disclosure. This figure is not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figure is not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0035] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0036] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0037] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0038] The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0039] The terms "having", "comprising", "including", and variations thereof signify the presence of a component.
[0040] Reference is now made to FIG. 1 and FIG. 2 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a block diagram of a language supporting system 100, in accordance with an exemplary embodiment of the present disclosure.
[0041] The system 100 may include a user device 102, a communication network 104, a cloud database 106, a microprocessor 108, a data input module 110, a pre-processing module 112, a language processing module 114, an intent recognition module 116, an entity extraction module 118, a dialogue management module 120, a response generation module 122, and an output module 124.
[0042] The user device 102 serves as the primary interface for user interaction with the system. It is designed to enable patients or healthcare seekers to input queries in Tulu and receive responses in the same language. This device can be a smartphone, tablet, computer, or any other internet-enabled device with a compatible application or browser interface. The user device ensures accessibility and user-friendliness, making it easy for individuals with minimal technical skills to access healthcare services. It communicates with the microprocessor via the communication network, transmitting user inputs and displaying responses in real time.
[0043] The communication network 104 facilitates the seamless exchange of data between the user device, microprocessor, and cloud database. It ensures that user queries are quickly transmitted to the system and that responses are delivered back to the user without delays. The network can be based on Wi-Fi, cellular data, or other internet protocols. It also ensures secure communication by employing encryption and other cybersecurity measures, safeguarding sensitive medical information shared between the user and the system.
[0044] The cloud database 106 acts as the central repository for storing healthcare-related data such as medical guidelines, symptom descriptions, medication details, and conversation logs. It is scalable and enables real-time access to healthcare information for the microprocessor during response generation. The cloud also facilitates updating the system's knowledge base with new medical data, ensuring the system remains current and accurate. By storing data in the cloud, the system can handle high volumes of requests without compromising speed or reliability.
[0045] The microprocessor 108 is the heart of the system, responsible for orchestrating all operations and processing tasks. It connects to all modules and oversees the workflow from receiving user inputs to delivering responses. Acting as the computational brain, the microprocessor runs the algorithms for language processing, intent recognition, and response generation. Its efficient handling of tasks ensures a smooth and accurate conversational experience for users.
[0046] The data input module 110 is responsible for receiving raw user queries in Tulu from the user device. It acts as the gateway for inputting data into the system, ensuring that queries are accurately captured. This module directly connects to the pre-processing module, forwarding the input text for cleaning and normalization. Its role is critical in ensuring that no user input is missed or misinterpreted by the system.
[0047] The pre-processing module 112 cleans, normalizes, and tokenizes the raw text received from the data input module. It removes irrelevant characters, corrects formatting inconsistencies, and breaks the text into smaller, manageable units such as words or phrases. This standardization is vital for ensuring accurate processing by downstream modules, especially when working with Tulu, a low-resource language with limited linguistic tools.
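The cleaning, normalization, and tokenization flow described above can be sketched in a few lines. The following is an illustrative Python sketch for exposition only, not part of the filed specification; a simple regular-expression tokenizer stands in for the Tulu-specific tooling the module would actually use.

```python
import re

def preprocess(text):
    """Clean, normalize, and tokenize a raw user query."""
    text = text.lower()                        # normalize case
    text = re.sub(r"[^\w\s]", " ", text)       # strip irrelevant characters
    text = re.sub(r"\s+", " ", text).strip()   # collapse stray whitespace
    return text.split()                        # word-level tokenization

print(preprocess("What should I take for   fever?"))
# ['what', 'should', 'i', 'take', 'for', 'fever']
```

The output tokens are what the downstream language processing and intent recognition modules would consume.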
[0048] The language processing module 114 is responsible for understanding the syntax, semantics, and context of the user query. Using a language model trained on healthcare-related Tulu data, the language processing module analyzes the structure and meaning of the text. It ensures that the system can interpret the user's input in a way that aligns with cultural and linguistic nuances, enhancing the accuracy and relevance of responses.
[0049] The intent recognition module 116 classifies the user's query into predefined categories or intents, such as seeking information about symptoms, medications, or scheduling appointments. This module ensures that the system understands the purpose behind the query, allowing it to respond appropriately.
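Intent classification as described above can be illustrated with a minimal keyword-overlap scorer. This is a hypothetical sketch: the intent names and keyword sets below are invented for illustration, and a deployed system would instead use a classifier trained on healthcare-related Tulu data.

```python
# Hypothetical intent lexicon; a real system would use a trained classifier.
INTENT_KEYWORDS = {
    "symptom_inquiry": {"fever", "cough", "pain", "headache", "symptom"},
    "medication_info": {"medication", "medicine", "dosage", "tablet"},
    "appointment_scheduling": {"appointment", "schedule", "visit", "book"},
}

def recognize_intent(tokens):
    # Score each predefined intent by keyword overlap with the query tokens.
    scores = {intent: len(kw & set(tokens))
              for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a default intent when nothing matches.
    return best if scores[best] > 0 else "general_query"

print(recognize_intent(["book", "an", "appointment", "for", "tomorrow"]))
# appointment_scheduling
```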
[0050] The entity extraction module 118 identifies specific details within the user query, such as symptoms, medications, dates, or other relevant healthcare entities. For instance, in a query like "What should I take for fever?", this module extracts "fever" as the symptom. This detailed extraction is crucial for generating accurate and actionable responses, as it provides the dialogue manager with all the necessary context.
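A gazetteer lookup is the simplest way to picture the extraction step above. The entity lists here are illustrative assumptions, not the specification's vocabulary; a deployed system would use a trained named-entity recognizer for Tulu.

```python
# Illustrative gazetteers; a deployed system would use a trained NER model.
SYMPTOMS = {"fever", "cough", "headache"}
MEDICATIONS = {"paracetamol", "ibuprofen"}

def extract_entities(tokens):
    """Pick out healthcare-related entities from a tokenized query."""
    return {
        "symptoms": [t for t in tokens if t in SYMPTOMS],
        "medications": [t for t in tokens if t in MEDICATIONS],
    }

print(extract_entities(["what", "should", "i", "take", "for", "fever"]))
# {'symptoms': ['fever'], 'medications': []}
```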
[0051] The dialogue management module 120 is responsible for crafting the conversational flow and determining the system's response. It integrates the extracted intent and entities to generate a meaningful and contextually relevant response. This module also maintains the context of multi-turn conversations, ensuring that responses remain consistent and logical even if the user asks follow-up questions.
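The multi-turn context maintenance described above can be sketched as a small state holder. This is an illustrative Python sketch under invented intent and entity names; the filed specification does not prescribe this data structure.

```python
class DialogueManager:
    """Carries intent and entities across turns so follow-ups stay coherent."""

    def __init__(self):
        self.context = {"intent": None, "entities": {}}

    def update(self, intent, entities):
        # Keep the previous intent when a follow-up query is too vague
        # to classify, preserving contextual continuity.
        if intent != "general_query":
            self.context["intent"] = intent
        for kind, values in entities.items():
            self.context["entities"].setdefault(kind, []).extend(values)
        return self.context

dm = DialogueManager()
dm.update("symptom_inquiry", {"symptoms": ["fever"]})
state = dm.update("general_query", {"symptoms": []})
print(state["intent"])  # symptom_inquiry (carried over from the first turn)
```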
[0052] The response generation module 122 generates the actual text response based on the information provided by the dialogue management module. It uses healthcare-specific templates or dynamic natural language generation techniques to produce responses in Tulu. The response is structured to be clear, accurate, and empathetic, ensuring that users can easily understand and act on the provided information.
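The template-based path mentioned above can be illustrated as slot filling. The templates and fallback strings below are invented for exposition (and in English rather than Tulu); they are not the system's actual response bank.

```python
# Illustrative healthcare templates; actual responses would be in Tulu.
TEMPLATES = {
    "symptom_inquiry": "For {symptom}, please consult your doctor and rest well.",
    "medication_info": "Take {medication} only as prescribed by your doctor.",
    "general_query": "Could you describe your concern in more detail?",
}

def generate_response(intent, entities):
    # Select a template by intent, then fill its slots from extracted entities.
    template = TEMPLATES.get(intent, TEMPLATES["general_query"])
    slots = {
        "symptom": (entities.get("symptoms") or ["your symptom"])[0],
        "medication": (entities.get("medications") or ["your medicine"])[0],
    }
    return template.format(**slots)

print(generate_response("symptom_inquiry", {"symptoms": ["fever"]}))
# For fever, please consult your doctor and rest well.
```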
[0053] The output module 124 delivers the generated response back to the user device. It ensures that the response maintains the same linguistic and cultural tone as the input query. This module is the final step in the process, closing the communication loop by ensuring the user receives the information they need in a clear and accessible format.
[0054] FIG. 2 illustrates a flowchart of a method outlining the sequential steps performed by the language supporting system, in accordance with an exemplary embodiment of the present disclosure.
[0055] The method 200 may include: at step 202, enabling a user to input a query in a specified language and receive a response via a user device; at step 204, processing and executing language-supporting tasks via a microprocessor comprising several modules; at step 206, receiving user queries via a user input module; at step 208, cleaning, normalizing, and tokenizing the input text for further processing via a pre-processing module; at step 210, analysing the syntax, semantics, and context of the pre-processed text using a language model trained on domain-specific data via a language processing module; at step 212, classifying the user query into predefined intents related to healthcare via an intent recognition module; at step 214, identifying and extracting healthcare-related entities, including symptoms, medications, and dates, via an entity extraction module; at step 216, generating an appropriate response based on the identified intent and extracted entities, ensuring contextual continuity in the conversation, via a dialogue management module; at step 218, generating an appropriate text response based on the identified intent and entities via a response generation module; and at step 220, transmitting the generated response back to the user device in the specified language via an output module.
[0056] At step 202, the user device acts as the primary interface for individuals seeking healthcare assistance. It enables users to type or speak queries in their preferred language, including Tulu, ensuring accessibility for native speakers. The device also displays responses in the same language, providing clarity and ease of understanding. The interface is designed to be user-friendly, accommodating various levels of technical proficiency, and supports input through text, voice, or both, ensuring inclusivity and ease of use.
[0057] At step 204, the microprocessor is the central processing unit that drives the system. It coordinates all computational tasks, ensuring efficient execution of language-related operations. It seamlessly integrates with various modules to perform complex functions such as text processing, language analysis, intent recognition, and response generation. By leveraging its high processing power, the microprocessor ensures real-time query handling and a smooth communication flow between the user device and other system components.
[0058] At step 206, the user input module acts as a gateway for capturing queries submitted by users through the device. It ensures the accurate transmission of raw input data, whether typed or spoken. This module is critical for initiating the system's workflow, as it channels the input data directly into the pre-processing module. It also validates the format of the input, ensuring compatibility with downstream processing.
[0059] At step 208, the pre-processing module prepares the raw input text for analysis by cleaning and structuring it. It normalizes the text by removing irrelevant characters, correcting spelling errors, and converting the text into a standardized format. Tokenization splits the input into smaller units, such as words or phrases, which are easier for the language processing module to analyse.
[0060] At step 210, the language processing module interprets the user's query by examining its grammatical structure (syntax), meaning (semantics), and contextual relevance. Leveraging a language model trained specifically on healthcare-related Tulu data, this module ensures an accurate understanding of queries. It adapts to the unique nuances of the Tulu language, enabling the system to grasp both explicit meanings and implied context, thereby improving response accuracy.
[0061] At step 212, the intent recognition module categorizes the user's query into specific intents, such as symptom inquiry, medication information, appointment scheduling, or general health advice. By recognizing the purpose of the query, the system can tailor its response accordingly.
[0062] At step 214, the entity extraction module identifies key details within the query, such as symptoms, drug names, dates, or other healthcare-related entities. For example, in the query "What medication should I take for fever?", the module extracts "medication" and "fever" as relevant entities.
[0063] At step 216, the dialogue management module determines the structure and content of the response by integrating the identified intent and extracted entities. It ensures that the response aligns with the user's query while maintaining contextual continuity, especially in multi-turn conversations.
[0064] At step 218, the response generation module creates the actual text response that will be sent to the user. It uses predefined templates or natural language generation techniques to ensure clarity and accuracy. For instance, if the intent is to provide medication advice, the response might be: "For a fever, you can take paracetamol as prescribed by your doctor."
[0065] At step 220, the output module delivers the final response to the user device in the same language as the query. It ensures the text or voice output is properly formatted and accessible. This module also handles any required adjustments, such as converting text responses into speech for audio playback.
[0066] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0067] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0068] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0069] Disjunctive language such as the phrase "at least one of X, Y, Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0070] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:
I/We Claim:
1. A language supporting system (100) for facilitating natural language communication in healthcare, the system (100) comprising:
a user device (102) configured to enable a user to input a query in a specified language and receive a response;
a microprocessor (108) connected to the user device (102) and configured to process and execute language supporting tasks, wherein the microprocessor (108) further comprises:
a data input module (110) configured to receive user queries;
a pre-processing module (112) configured to clean, normalize and tokenize the input text for further processing;
a language processing module (114) configured to analyse the syntax, semantics, and context of the pre-processed text using a language model trained on domain-specific data;
an intent recognition module (116) configured to classify the user query into predefined intents related to healthcare;
an entity extraction module (118) configured to identify and extract healthcare-related entities, including symptoms, medications, and dates;
a dialogue management module (120) configured to generate an appropriate response based on the identified intent and extracted entities, ensuring contextual continuity in the conversation;
a response generation module (122) configured to generate an appropriate text response based on the identified intent and entities; and
an output module (124) configured to transmit the generated response back to the user device (102) in the specified language.
2. The system (100) as claimed in claim 1, wherein the user device (102) supports multimodal input including text, speech, or image-based queries enabling users with diverse communication preferences to interact with the system.
3. The system (100) as claimed in claim 1, wherein the microprocessor (108) incorporates parallel processing capabilities to enhance the efficiency of language and intent processing tasks.
4. The system (100) as claimed in claim 1, wherein the user input module (110) is configured to detect and dynamically switch between multiple languages based on the linguistic features of the input query.
5. The system (100) as claimed in claim 1, wherein the pre-processing module (112) includes submodules for stop-word removal, lemmatization, and stemming, specifically optimized for the grammar and syntax of the specified language.
6. The system (100) as claimed in claim 1, wherein the language processing module (114) employs transformer-based models, such as BERT or GPT, fine-tuned on healthcare-specific datasets to ensure contextual understanding and semantic accuracy.
7. The system (100) as claimed in claim 1, wherein the response generation module (122) employs a neural sequence-to-sequence model to dynamically generate responses that incorporate user context, extracted entities, and cultural nuances specific to the specified language.
8. The system (100) as claimed in claim 1, wherein the system (100) further comprises a cloud database (106) configured to securely store healthcare data, including conversation logs, medical guidelines, and user preferences, enabling real-time access and scalability for improved system performance.
9. The system (100) as claimed in claim 1, wherein the system (100) further comprises a communication network configured to facilitate seamless data exchange between the microprocessor (108) and the user device (102), ensuring real-time processing and response generation while maintaining data security and integrity.
10. A method (200) for facilitating natural language communication in healthcare, the method (200) comprising:
enabling a user to input a query in a specified language and receive a response via a user device (202);
processing and executing language-supporting tasks via a microprocessor (204) comprising several modules;
receiving user queries via a user input module (206);
cleaning, normalizing, and tokenizing the input text for further processing via a pre-processing module (208);
analysing the syntax, semantics, and context of the pre-processed text using a language model trained on domain-specific data via a language processing module (210);
classifying the user query into predefined intents related to healthcare via an intent recognition module (212);
identifying and extracting healthcare-related entities, including symptoms, medications, and dates, via an entity extraction module (214);
generating an appropriate response based on the identified intent and extracted entities, ensuring contextual continuity in the conversation, via a dialogue management module (216);
generating an appropriate text response based on the identified intent and entities via a response generation module (218); and
transmitting the generated response back to the user device in the specified language via an output module (220).
Documents
Name | Date |
---|---|
202441091948-COMPLETE SPECIFICATION [26-11-2024(online)].pdf | 26/11/2024 |
202441091948-DECLARATION OF INVENTORSHIP (FORM 5) [26-11-2024(online)].pdf | 26/11/2024 |
202441091948-DRAWINGS [26-11-2024(online)].pdf | 26/11/2024 |
202441091948-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-11-2024(online)].pdf | 26/11/2024 |
202441091948-FORM 1 [26-11-2024(online)].pdf | 26/11/2024 |
202441091948-FORM FOR SMALL ENTITY(FORM-28) [26-11-2024(online)].pdf | 26/11/2024 |
202441091948-REQUEST FOR EARLY PUBLICATION(FORM-9) [26-11-2024(online)].pdf | 26/11/2024 |