SYSTEM AND METHOD FOR GENERATING RESPONSE BASED ON QUERIES
Ordinary Application | Published | Filed on 8 November 2024
Abstract
SYSTEM AND METHOD FOR GENERATING RESPONSE BASED ON QUERIES. A method (500) and a system (100) for generating a response based on queries (302) are disclosed. The method (500) includes receiving at least one user query (302). The method may include generating one or more sub-queries from the at least one user query via a sub-query generator tool (310). The one or more sub-queries include at least one of a structured sub-query and a semantic sub-query. The method (500) may further include executing the structured sub-query on a structured database (308) to generate a deterministic response. The method (500) may further include executing the semantic sub-query on an unstructured database (306) to generate a semantic response. Further, the method (500) includes aggregating the deterministic response and the semantic response to create a context (318) of the user query (302). The method (500) includes generating the response (322) based on the context (318) via a Large Language Model (LLM) (320).
Patent Information
| Field | Detail |
|---|---|
| Application ID | 202421086195 |
| Invention Field | COMPUTER SCIENCE |
| Date of Application | 08/11/2024 |
| Publication Number | 49/2024 |
Inventors
| Name | Address | Country | Nationality |
|---|---|---|---|
| Anurag Tripathi | Eldeco Aamantran, S5 801, Sector 119, Noida, U.P. 208011 | India | India |
| Sudhir Bisane | Plot no. 26, Keshav Nagar, Khat Road, Bhandara, Maharashtra, India, Pincode - 441904 | India | India |
Applicants
| Name | Address | Country | Nationality |
|---|---|---|---|
| INFO ORIGIN TECHNOLOGIES PVT. LTD | Kudwa Ring Road, Gondia-441614, Maharashtra, India | India | India |
Specification
Description:
FIELD OF THE INVENTION
[0001] The present disclosure relates to the field of Artificial Intelligence (AI), and more specifically to a method and system for generating a response based on queries using AI.
BACKGROUND OF THE INVENTION
[0002] Retrieval Augmented Generation (RAG) frameworks have emerged as a powerful solution for improving the accuracy and relevance of responses generated by large language models (LLMs). The core mechanism of RAG involves a retriever that searches for relevant documents based on a user query and a generator that uses the retrieved data to craft a response. In typical RAG implementations, documents are stored in a vector database, and queries are matched with document chunks based on vector similarity. Metadata associated with these documents helps filter the search space, optimizing performance by narrowing down the corpus used for final context generation. However, the conventional RAG frameworks exhibit limitations when faced with complex queries, particularly those involving structured data, which affects their ability to handle both structured and unstructured information simultaneously.
[0003] One major shortcoming of existing RAG frameworks is their inability to handle structured queries effectively. For example, when a user asks a question requiring an operation like "What is the average amount of client XYZ?", the framework lacks support for SQL-based queries necessary for performing such an average calculation. While subqueries can be generated using LLMs, these are typically treated as sequential tasks, mainly for optimizing search spaces. In many real-world applications, subqueries may require a combination of structured (e.g., SQL) and unstructured (e.g., semantic similarity search) data. For instance, a question like "What's the contract amount and termination clause?" demands both deterministic, structured SQL-based retrieval for the contract amount and a semantic similarity-based search for the termination clause. Current RAG frameworks do not effectively combine these types of subqueries into a holistic response, often treating structural queries as semantic, which leads to non-deterministic and less accurate results.
[0004] Furthermore, the conventional RAG frameworks provide no support for hierarchical document structures within vector databases. Although metadata can be used to filter documents and optimize search space, the lack of document hierarchy means that the system cannot reduce the search space based on document organization. This is a significant drawback when handling large documents with deeply nested structures, where queries might require context-sensitive navigation through these hierarchies.
[0005] Therefore, there is a need for a method and system that can execute structured and semantic sub-queries against their respective data sources and combine the results into a single, holistic response.
SUMMARY
[0006] The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed invention. This summary is not an extensive overview, and it is not intended to identify key or critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0007] Some example embodiments disclosed herein provide a method for generating a response based on queries. The method may include receiving at least one user query. The method may further include generating one or more sub-queries from the at least one user query via a sub-query generator tool. The one or more sub-queries include at least one of a structured sub-query and a semantic sub-query. The method may further include executing the structured sub-query on a structured database to generate a deterministic response. The method may further include executing the semantic sub-query on an unstructured database to generate a semantic response. Further, the method includes aggregating the deterministic response and the semantic response to create a context of the user query. The method further includes generating the response based on the context via a Large Language Model (LLM).
[0008] According to some example embodiments, the sub-query generator tool is a few-shot learning model.
[0009] According to some example embodiments, the structured database comprises metadata of a plurality of documents, and the unstructured database comprises vector representations of the plurality of documents.
[0010] According to some example embodiments, the method further includes performing similarity search on the unstructured database based on the semantic sub-query to generate the semantic response.
[0011] According to some example embodiments, the method includes optimizing a search space before executing subqueries using document metadata and hierarchy.
[0012] According to some example embodiments, the method includes computing an inter-query similarity for search boundary detection via a linear text segmentation technique to optimize a search space.
[0013] Some example embodiments disclosed herein provide a system for generating a response based on queries. The system includes a processor, and a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which when executed by the processor, cause the processor to receive at least one user query. The processor executable instructions cause the processor to generate one or more sub-queries from the at least one user query via a sub-query generator tool. The one or more sub-queries include at least one of a structured sub-query and a semantic sub-query. Further, the processor executable instructions cause the processor to execute the structured sub-query on a structured database to generate a deterministic response. The processor executable instructions cause the processor to execute the semantic sub-query on an unstructured database to generate a semantic response. Further, the processor executable instructions cause the processor to aggregate the deterministic response and the semantic response to create a context of the user query. The processor executable instructions cause the processor to generate the response based on the context via a Large Language Model (LLM).
[0014] According to some example embodiments, the sub-query generator tool is a few-shot learning model.
[0015] According to some example embodiments, the structured database includes metadata of a plurality of documents, and the unstructured database comprises vector representations of the plurality of documents.
[0016] According to some example embodiments, the processor executable instructions cause the processor to perform similarity search on the unstructured database based on the semantic sub-query to generate the semantic response.
[0017] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF DRAWINGS
[0018] The above and still further example embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0019] FIG. 1 is a block diagram of an environment of a system for generating response based on queries, in accordance with an example embodiment;
[0020] FIG. 2 is a block diagram illustrating various modules within a memory of a computing device configured for generating response based on queries, in accordance with an example embodiment;
[0021] FIG. 3 illustrates a block diagram of a system architecture for generating response based on queries, in accordance with an example embodiment;
[0022] FIG. 4 illustrates a flow diagram of a method for generating response based on query, in accordance with an example embodiment;
[0023] FIG. 5 illustrates an exemplary flow chart for generating response based on queries, in accordance with an example embodiment;
[0024] FIG. 6 illustrates an exemplary flow chart for executing sub-queries to generate a response, in accordance with an example embodiment; and
[0025] FIG. 7 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
[0026] The figures illustrate embodiments of the invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
DETAILED DESCRIPTION
[0027] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention can be practiced without these specific details. In other instances, systems, apparatuses, and methods are shown in block diagram form only in order to avoid obscuring the present invention.
[0028] Reference in this specification to "one embodiment" or "an embodiment" or "example embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
[0029] Some embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout.
[0030] The terms "comprise", "comprising", "includes", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises… a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0031] The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient but are intended to cover the application or implementation without departing from the spirit or the scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
DEFINITIONS
[0032] The term "Large Language Model (LLM)" may refer to a type of artificial intelligence model designed to understand and generate human-like text based on the input.
[0033] The term "Artificial Intelligence (AI)" may refer to the simulation of human intelligence by machines, especially computer systems. AI performs tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding.
[0034] The term "Retrieval Augmented Generation (RAG)" may be used to refer to a technique that combines traditional machine learning models for text generation with information retrieval systems. The RAG's main goal is to enhance the quality and accuracy of generated text by pulling relevant data from external sources.
[0035] The term "Vector form" may refer to a representation of data as a sequence or array of numbers (scalars) in a specific order. The vector is typically an ordered list of elements, often used to represent multi-dimensional data points.
[0036] The term "Query" used herein may refer to a request for information or a specific task that an AI system processes in order to provide a relevant result, answer, or action. The query may be a semantic query, a structured query, an aggregated query, etc.
[0037] The term "Semantic Query" used herein may refer to a type of search or retrieval query that aims to understand the meaning and context behind the user's input rather than relying solely on keyword matching. The semantic query leverages semantic technologies, such as natural language processing (NLP) and knowledge graphs, to interpret the intent and meaning of the query in a way that is closer to human understanding.
[0038] The term "Structured query" used herein refers to a formalized query that is designed to retrieve specific information from structured data sources, such as relational databases or knowledge graphs, using predefined query languages or formats. Structured queries are characterized by their well-defined format, syntax, and rules, and they are typically used to interact with data that is organized into schemas (e.g., tables, entities, or relationships).
[0039] The term "Aggregated query" used herein may refer to a query that performs calculations or summaries over multiple data records to return a single or summarized result.
[0040] The term "Linear text segmentation method" used herein may refer to a technique used in Natural Language Processing (NLP) to divide a text into smaller, meaningful segments such as sentences, paragraphs, or sections. The goal is to break the text in a way that maintains the structure and flow of information. The term "linear" refers to the sequence of words and letters, where the segmentation refers to dividing and classifying the sequence of words and letters into topics.
[0041] The term "metadata" used herein may refer to information that describes the characteristics, structure, and context of other data. Rather than being the actual data itself, metadata helps understand, manage, and use the data efficiently.
[0042] The term "Search space" used herein may refer to a set of all possible solutions or configurations that a problem can have, from which an algorithm or process searches for an optimal solution. It represents the range or domain over which a search is conducted.
[0043] The term "Structured Query Language (SQL)" used herein may refer to a standardized programming language used for managing and manipulating relational databases. SQL enables users to perform various operations on the data, such as querying, updating, inserting, and deleting records.
END OF DEFINITIONS
[0044] As described earlier, traditional methods of generating responses based on queries provide no support for hierarchical document structures within vector databases and do not effectively combine semantic sub-queries and structured sub-queries into a holistic response. The present disclosure addresses these challenges by introducing a method and system for generating a response based on queries. The proposed method and system use pre-trained and dynamically adaptable Large Language Models (LLMs) coupled with a search-space optimizer for effective, knowledge- and data-driven response generation. Further, the proposed method and system use different query execution tools to execute the semantic sub-queries and the structured sub-queries, ensuring accurate and precise results with optimal resource utilization and minimal bottlenecks.
[0045] Embodiments of the present disclosure may provide a method and a system for generating responses based on user queries. The method and the system for generating responses in such an improved manner are described with reference to FIG. 1 to FIG. 7 as detailed below.
[0046] FIG. 1 illustrates a block diagram of an environment of a system 100 for generating response based on queries, in accordance with an example embodiment. The system 100 is designed to facilitate efficient and accurate generation of responses corresponding to the user queries by utilizing Artificial Intelligence (AI) and Large Language models (LLMs). The system 100 includes a computing device 102 and an external device 108. The computing device 102 may be communicatively coupled with the external device 108 via a communication network 110. Examples of the computing device 102 may include, but are not limited to, a server, a desktop, a laptop, a notebook, a tablet, a smartphone, a mobile phone, an application server, or the like.
[0047] The communication network 110 may be wired, wireless, or any combination of wired and wireless communication networks, such as cellular, Wi-Fi, internet, local area networks, or the like. In one embodiment, the communication network 110 may include one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fibre-optic network, and the like, or any combination thereof. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), and the like, or any combination thereof.
[0048] The computing device 102 may include a memory 106, and a processor 104. The term "memory" used herein may refer to any computer-readable storage medium, for example, volatile memory, random access memory (RAM), non-volatile memory, read only memory (ROM), or flash memory. The memory 106 may include a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Complementary Metal Oxide Semiconductor Memory (CMOS), a magnetic surface memory, a Hard Disk Drive (HDD), a floppy disk, a magnetic tape, a disc (CD-ROM, DVD-ROM, etc.), a USB Flash Drive (UFD), or the like, or any combination thereof.
[0049] The term "processor" used herein may refer to a hardware processor including a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
[0050] The processor 104 may retrieve computer program code instructions that may be stored in the memory 106 for execution of the computer program code instructions. The processor 104 may be embodied in a number of different ways. For example, the processor 104 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor 104 may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally, or alternatively, the processor 104 may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining, and/or multithreading.
[0051] Additionally, or alternatively, the processor 104 may include one or more processors capable of processing large volumes of workloads and operations to provide support for big data analysis. In an example embodiment, the processor 104 may be in communication with a memory 106 via a bus for passing information among components of the system 100.
[0052] The memory 106 may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory 106 may be an electronic storage device (for example, a computer readable storage medium) comprising gates configured to store data (for example, bits) that may be retrievable by a machine (for example, a computing device like the processor 104). The memory 106 may be configured to store information, data, contents, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present disclosure. For example, the memory 106 may be configured to buffer input data for processing by the processor 104.
[0053] The computing device 102 may be capable of generating responses based on the queries. The memory 106 may store instructions that, when executed by the processor 104, cause the computing device 102 to perform one or more operations of the present disclosure, which will be described in greater detail in conjunction with FIG. 2. The computing device 102 is responsible for performing tasks such as receiving a query, generating sub-queries, executing sub-queries, aggregating responses, and generating the final response.
[0054] In an embodiment, the computing device 102 receives at least one user query 302. Further, the computing device 102 generates one or more sub-queries from the at least one user query 302 via a sub-query generator tool 310. The one or more sub-queries include at least one of a structured sub-query and a semantic sub-query. The computing device 102 further executes the structured sub-query on a structured database 308 to generate a deterministic response. Further, the computing device 102 executes the semantic sub-query on an unstructured database 306 to generate a semantic response. The computing device 102 aggregates the deterministic response and the semantic response to create a context 318 of the user query 302. The computing device 102 further generates the response 322 based on the context 318 via a Large Language Model (LLM) 320.
[0055] The external device 108 may be any of various hardware and software tools that may be integrated with the system 100 to enhance its functionality. These devices may include a database of documents; this data is essential for generating responses based on the queries. The external device 108 may also be used for receiving the queries from the users. The complete process followed by the system 100 is explained in detail in conjunction with FIG. 2 to FIG. 7.
[0056] FIG. 2 is a block diagram 200 illustrating various modules within the memory 106 of the computing device 102 configured for generating a response based on queries 302, in accordance with an example embodiment. The memory 106 may include a receiving module 202, a sub-query generator module 204, a structured sub-query execution module 206, a semantic sub-query execution module 208, a response aggregation module 210, and a response generation module 212.
[0057] The receiving module 202 is responsible for receiving at least one user query. The user query may be a natural language query formatted as a question. The user query may be formatted in English, French, Hindi, Spanish, Mandarin, etc., by a user. The receiving module 202 may be communicatively coupled with the external device 108. The user query is transmitted to the receiving module 202 by the user via the external device 108. In some exemplary embodiments, the user query may include a combination of a plurality of sub-queries. The plurality of sub-queries may include a structured sub-query, a semantic sub-query, an aggregation sub-query, and a dependent sub-query. Further, the receiving module 202 may store the user query in the memory 106 for response generation and training of the LLM.
[0058] The sub-query generator module 204 is configured to generate one or more sub-queries from the at least one user query via a sub-query generator tool. The one or more sub-queries include at least one of a structured sub-query and a semantic sub-query. The sub-query generator module 204 utilizes the user query received from the receiving module 202 to generate the sub-queries. The sub-query generator tool is a few-shot learning model. A few-shot learning model is a type of machine learning model designed to learn and generalize from only a small number of training examples. It focuses on solving tasks with very limited data by leveraging prior knowledge or a pre-trained model, making it useful in scenarios where data collection is expensive, time-consuming, or rare. In an embodiment, the sub-query generator module 204 classifies the one or more sub-queries into structured sub-queries and semantic sub-queries. The aggregation sub-queries and the dependent sub-queries are classified as semantic sub-queries. The structured sub-queries are queries based on relational or non-relational Structured Query Language (SQL) that answer the deterministic part of the user query. The semantic sub-queries are queries that require semantic search or similarity search to answer the contextual part of the user query.
[0059] In an exemplary embodiment, the receiving module 202 receives the user query in English, "What's the contract amount and termination clause?", via the external device 108. Further, the sub-query generator module 204 generates the sub-queries by processing the user query via the LLM. The sub-query generator module 204 generates a semantic sub-query, "what is the termination clause", and a structured sub-query, "what is the contract amount".
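The patent does not disclose the exact prompt used by the sub-query generator tool, but its few-shot behaviour can be sketched as follows. This is a minimal illustration in Python: the few-shot examples, the JSON output schema, and the `call_llm` stub are assumptions for demonstration, not the claimed implementation.

```python
import json

# Illustrative few-shot prompt for splitting a user query into structured
# (SQL-answerable) and semantic sub-queries. The prompt wording and output
# schema are assumptions; the patent only states that a few-shot model is used.
FEW_SHOT_PROMPT = """\
Split the user query into sub-queries and label each one.

Query: "What's the contract amount and termination clause?"
Sub-queries: [
  {"text": "what is the contract amount", "type": "structured"},
  {"text": "what is the termination clause", "type": "semantic"}
]

Query: "What is the average amount of client XYZ?"
Sub-queries: [
  {"text": "average amount of client XYZ", "type": "structured"}
]

Query: "%s"
Sub-queries:"""

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned answer so the
    # sketch runs end to end without any external service.
    return ('[{"text": "what is the contract amount", "type": "structured"},'
            ' {"text": "what is the termination clause", "type": "semantic"}]')

def generate_sub_queries(user_query: str) -> list[dict]:
    raw = call_llm(FEW_SHOT_PROMPT % user_query)
    return json.loads(raw)

if __name__ == "__main__":
    for sq in generate_sub_queries("What's the contract amount and termination clause?"):
        print(sq["type"], "->", sq["text"])
```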
[0060] Upon receiving the user query, the computing device 102 may optimize the search space of a structured database and an unstructured database before processing user queries, using document metadata and hierarchy. The optimization of the search space may include computing an inter-query similarity for search boundary detection via a linear text segmentation technique. The linear text segmentation is used to detect topic boundaries while ingesting the documents into the structured database and the unstructured database.
[0061] Upon receiving the structured sub-queries, the structured sub-query execution module 206 is responsible for executing the structured sub-query on a structured database to generate a deterministic response. The structured database consists of structural information such as metadata of a plurality of documents. The structured sub-query execution module 206 executes the structured sub-query by running it against the structured database. In an embodiment, the structured sub-query execution module 206 may identify the relevant response from the metadata of the plurality of documents stored in the structured database. The metadata may provide information about the document itself, rather than its content. The metadata describes various attributes or properties of the document, helping with organization, discovery, and management of the document in systems such as databases or file storage systems.
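A minimal sketch of structured sub-query execution, using an in-memory SQLite table as a stand-in for the structured database 308; the schema and sample values are illustrative assumptions. The point is determinism: the same SQL query over the same data always yields the same value.

```python
import sqlite3

# Stand-in for the structured database 308: document metadata held in a
# relational table. The schema is an assumption for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE contracts
                (doc_id TEXT, client TEXT, amount REAL)""")
conn.executemany("INSERT INTO contracts VALUES (?, ?, ?)",
                 [("D1", "XYZ", 120000.0), ("D2", "XYZ", 80000.0)])

# Executing a structured sub-query yields a deterministic response.
row = conn.execute(
    "SELECT AVG(amount) FROM contracts WHERE client = ?", ("XYZ",)
).fetchone()
print("Deterministic response:", row[0])  # 100000.0
```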
[0062] Upon receiving the semantic sub-queries, the semantic sub-query execution module 208 is responsible for executing the semantic sub-query on an unstructured database to generate a semantic response. The unstructured database consists of vector representations of the plurality of documents. The semantic sub-query execution module 208 generates the context of the semantic sub-query, which may contain information related to the query rather than a single determined response. In an embodiment, the semantic sub-query execution module 208 performs similarity search on the unstructured database based on the semantic sub-query to generate the semantic response. In some embodiments, the semantic sub-query execution module 208 may execute the aggregation sub-queries and the dependent sub-queries in parallel with the semantic sub-queries. It should be noted that the structured sub-query execution module 206 and the semantic sub-query execution module 208 may execute the structured sub-query and the semantic sub-query in parallel.
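The similarity search can be sketched as follows. A toy bag-of-words vectorizer stands in for a learned embedding model, and cosine similarity selects the most relevant chunk; a real system would store dense learned vectors in the unstructured database 306, so every detail below is an assumption for illustration.

```python
import math
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

# Toy "unstructured database": two document chunks.
chunks = [
    "Termination clause: either party may end the agreement with 30 days notice.",
    "The contract amount is payable in quarterly installments.",
]
vocab = {w: i for i, w in enumerate(sorted({w for c in chunks for w in tokenize(c)}))}

def embed(text: str) -> list[float]:
    # Word counts over a fixed vocabulary; a stand-in for real embeddings.
    vec = [0.0] * len(vocab)
    for w in tokenize(text):
        if w in vocab:
            vec[vocab[w]] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

index = [(c, embed(c)) for c in chunks]
query_vec = embed("what is the termination clause")
best_chunk, _ = max(index, key=lambda item: cosine(query_vec, item[1]))
print("Semantic response:", best_chunk)
```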
[0063] The response aggregation module 210 is responsible for aggregating the deterministic response and the semantic response to create a context of the user query. The response aggregation module 210 may implement a summary tool that combines the responses from the structured sub-query execution module 206 and the semantic sub-query execution module 208 to generate the context of the user query.
[0064] Further, the response generation module 212 is configured to generate the response based on the context via a Large Language Model (LLM). The LLM is a type of machine learning model designed to understand and generate human language. LLMs are trained on massive amounts of text data and have a large number of parameters, making them capable of performing tasks like text generation, translation, summarization, and answering questions. The LLM may be one of a plurality of LLMs such as OpenAI models, Bidirectional Encoder Representations from Transformers (BERT), Text-to-Text Transfer Transformer (T5), BigScience models, Large Language Model Meta AI (LLaMA), etc.
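Tying the last two modules together, a minimal sketch of response aggregation (module 210) and final generation (module 212) might look like the following; the `call_llm` stub and the prompt layout are assumptions, as the patent does not specify a prompt format.

```python
# Sketch of the response aggregation module 210 and response generation
# module 212. A real system would call a hosted or local LLM.
def aggregate_context(deterministic: str, semantic: str) -> str:
    # Merge both sub-query results into a single grounding context.
    return f"Structured result: {deterministic}\nRelevant passage: {semantic}"

def call_llm(prompt: str) -> str:
    # Placeholder for LLM 320 (e.g., an OpenAI, T5, or LLaMA endpoint).
    return ("The contract amount is INR 100,000, and either party may "
            "terminate the agreement with 30 days notice.")

def generate_response(user_query: str, deterministic: str, semantic: str) -> str:
    context = aggregate_context(deterministic, semantic)
    prompt = f"Context:\n{context}\n\nQuestion: {user_query}\nAnswer:"
    return call_llm(prompt)

print(generate_response(
    "What's the contract amount and termination clause?",
    "INR 100,000",
    "Either party may terminate the agreement with 30 days notice."))
```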
[0065] FIG. 3 illustrates a block diagram of a system architecture 300 for generating a response 322 based on queries 302, in accordance with an example embodiment. The system architecture 300 is analogous to the computing device 102 of the system 100. The system architecture 300 may include an unstructured database 306, a structured database 308, a sub-query generation engine 310, a semantic query tool 312, a structured query tool 314, a summarizer tool 316, and a Large Language Model (LLM) 320.
[0066] The unstructured database 306 is configured to store a plurality of information of a plurality of documents 304. The plurality of documents are records of information that are typically written, printed, or stored in digital formats. The documents may be used to capture, share, or store information for a wide variety of purposes. Documents may exist in many forms, such as text files, reports, contracts, images, or multimedia, and may be physical (paper-based) or electronic. The unstructured database 306 may store the vector representations of the plurality of documents. Vector representation of the plurality of documents 304 is the process of converting textual data from documents into numerical formats that may be understood and processed by a machine learning algorithm. The vector representation typically captures the semantic meaning of the text, allowing computers to analyse, compare, and classify documents based on the information.
[0067] The structured database 308 is configured to store a plurality of information of a plurality of documents 304. The structured database 308 may be a relational Structured Query Language (SQL) database or a non-relational SQL database. In an embodiment, the structured database 308 may store the metadata of the plurality of documents. The metadata of the plurality of documents 304 includes structured data that provides information about the document itself, rather than its content. The descriptive information helps categorize, identify, and manage documents in digital or physical systems, aiding in tasks such as search, retrieval, and organization.
[0068] Further, a query 302 is received by the sub-query generation engine 310. The query 302 may be received in any human-readable language such as English, French, Spanish, Mandarin, Hindi, etc. The sub-query generation engine 310 may generate the plurality of sub-queries from the query 302. Each of the plurality of sub-queries may be one of a structured sub-query, a semantic sub-query, an aggregation sub-query, and a dependent sub-query. Further, the sub-query generation engine 310 may classify the plurality of sub-queries into structured sub-queries and semantic sub-queries. For execution purposes, the semantic sub-queries may include the aggregation sub-queries and the dependent sub-queries.
[0069] Further, the semantic query tool 312 receives the semantic sub-queries to generate a semantic response. The semantic query tool 312 is communicatively coupled to the unstructured database 306 to access the vector representations of the plurality of documents. The semantic query tool 312 may execute each of the semantic sub-queries on the unstructured database 306 to generate a semantic response. The semantic response consists of the relevant information associated with the semantic sub-query. Further, the semantic response is transmitted to the summarizer tool 316.
[0070] In parallel, the structured query tool 314 receives the structured sub-queries to generate a structured response. The structured query tool 314 is communicatively coupled to the structured database 308 to access the metadata of the plurality of documents. The structured query tool 314 may execute each of the structured sub-queries on the structured database 308 to generate a deterministic response corresponding to the structured sub-query. The structured response consists of the deterministic result of the structured sub-query. Further, the deterministic response is transmitted to the summarizer tool 316.
[0071] Upon receiving the semantic response and the deterministic response, the summarizer tool 316 is configured to create a context 318. The summarizer tool 316 may implement a Machine Learning (ML) model to create the context 318. The context 318 may include the relevant information associated with the query 302 based on the plurality of documents 304. In an exemplary embodiment, the summarizer tool 316 is implemented as a call to the LLM, where the responses from multiple sub-queries are provided as context. Finally, the LLM generates a response to the given query based on the context assembled from the sub-query responses.
[0072] Further, the context 318 may be transmitted to the LLM 320 to generate a response 322 corresponding to the query 302. The LLM 320 may be part of a Retrieval Augmented Generation (RAG) framework that combines information retrieval with language generation to improve the accuracy and relevance of generated text.
[0073] FIG. 4 illustrates a flow diagram 400 of a method for generating the response 322 based on the query 302, in accordance with an example embodiment. The method 400 is implemented by the system architecture 300. The method 400 includes providing the query 302 to the sub-query generation engine 310. The query 302 may be made up of a plurality of sub-queries such as the semantic sub-query, the structured sub-query, the aggregation sub-query, and the dependent sub-query. Upon receiving the query 302, the sub-query generation engine 310 may extract the plurality of sub-queries from the query 302. Further, the sub-query generation engine 310 classifies the plurality of sub-queries into structured sub-queries and semantic sub-queries. The structured sub-queries are then fed to the structured query tool 314, and the semantic sub-queries are fed to the semantic query tool 312.
[0074] In an embodiment, the semantic query tool 312 is communicatively coupled with the unstructured database 306. The unstructured database 306 may include vector representations of the plurality of documents. The plurality of documents are ranked by a keyword tree index technique to find the relevant documents efficiently. The keyword tree index technique 402 is a method used for efficiently finding relevant documents in a collection based on keywords. The technique organizes keywords in a hierarchical tree structure, where each node in the tree represents a keyword or part of a keyword, allowing for quick and structured search and retrieval of documents associated with specific terms or queries. Further, the semantic query tool 312 executes the semantic sub-queries on the unstructured database 306 to generate a semantic response. The semantic query tool 312 may use a similarity measure to extract relevant information from the unstructured database 306 corresponding to the semantic sub-queries. Further, the semantic query tool 312 generates the semantic response by extracting the relevant information from the unstructured database 306. The semantic query tool 312 then feeds the semantic response to a summarizer tool 408.
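The patent does not disclose the exact layout of the keyword tree index 402, but one plausible reading of "each node represents a keyword or part of a keyword" is a character-level trie whose terminal nodes carry the document IDs for a keyword, as in this minimal sketch.

```python
# Minimal sketch of a keyword tree index: a character trie where terminal
# nodes carry the IDs of documents associated with the keyword.
class KeywordTrie:
    def __init__(self):
        self.children: dict = {}
        self.doc_ids: set = set()

    def insert(self, keyword: str, doc_id: str) -> None:
        node = self
        for ch in keyword:
            node = node.children.setdefault(ch, KeywordTrie())
        node.doc_ids.add(doc_id)

    def lookup(self, keyword: str) -> set:
        node = self
        for ch in keyword:
            if ch not in node.children:
                return set()
            node = node.children[ch]
        return node.doc_ids

trie = KeywordTrie()
trie.insert("termination", "D1")
trie.insert("amount", "D2")
print(trie.lookup("termination"))  # {'D1'}
```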
[0075] The structured query tool 314 is communicatively coupled with the structured database 308. The structured database 308 may include metadata information of the plurality of documents. The metadata information refers to structured data that provides information about the document itself, rather than its content. This descriptive information helps categorize, identify, and manage documents in digital or physical systems, aiding in tasks such as search, retrieval, and organization. Further, the structured query tool 314 executes the structured sub-queries on the structured database 308 to generate a deterministic response. The structured query tool 314 may use an LLM 406 to parse the structured database 308 to identify the deterministic information corresponding to the structured sub-queries. In an embodiment, the structured query tool 314 may implement relational SQL or non-relational SQL to identify the deterministic information. Further, the structured query tool 314 generates the deterministic response by extracting the deterministic information from the structured database 308. The structured query tool 314 then feeds the deterministic response to the summarizer tool 408.
[0076] Upon receiving the deterministic response and the semantic response, the summarizer tool 408 summarizes the deterministic response and the semantic response to generate a context of the query 302. The context may include the properties of both the semantic response and the deterministic response, the response corresponding to each sub-query, the location of the database, the path of the file, etc. In an embodiment, the context may be the surrounding text or information that is used to understand and generate appropriate responses. Further, the context is fed to the LLM 320 to generate a holistic response 322 corresponding to the query 302. The LLM 320 is designed to process language by learning from patterns in the data, and the context helps guide the LLM's predictions, responses, or completions by providing relevant clues about the meaning, grammar, and semantics of the query 302.
[0077] FIG. 5 illustrates an exemplary flow chart 500 for generating response 322 based on queries 302, in accordance with an example embodiment. It will be understood that each block of the flow diagram of the method 500 may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other communication devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory 106 of the computing device 102, employing an embodiment of the present disclosure and executed by a processor 104. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flow diagram blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flow diagram blocks.
[0078] Accordingly, blocks of the flow diagram support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flow diagram, and combinations of blocks in the flow diagram, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.
[0079] FIG. 5 is explained in conjunction with elements from FIGS. 1, 2, 3, and 4. At step 502, the method 500 is initiated. The method 500, at step 504, may include receiving at least one user query 302. The user query 302 may be a natural language query in English, French, Mandarin, Hindi, etc. The user query 302 is input by the user through the external device 108.
[0080] The method, at step 506, may include generating one or more sub-queries from the at least one user query 302 via a sub-query generator tool 310. The one or more sub-queries include at least one of a structured sub-query and a semantic sub-query. The sub-queries may be generated using a first LLM 404. The LLM 404 may classify the query 302 into a combination of structured sub-queries and semantic sub-queries using a few-shot prompting model.
[0081] The method, at step 508, may include executing the structured sub-query on a structured database 308 to generate a deterministic response. The method 500 includes parsing the structured database 308 to identify deterministic information corresponding to the structured sub-query. Further, the method 500 may include generating the deterministic response by extracting the identified deterministic information.
[0082] Further, the method, at step 510, may include executing the semantic sub-query on an unstructured database 306 to generate a semantic response. The method 500 includes parsing the unstructured database 306 to identify relevant information corresponding to the semantic sub-query. Further, the method 500 may include generating the semantic response by extracting the identified relevant information.
[0083] The method, at step 512, may include aggregating the deterministic response and the semantic response to create a context 318 of the user query 302. The context may include both the properties of the semantic response and the deterministic response.
[0084] The method, at step 514, may further include generating the response 322 based on the context 318 via a Large Language Model (LLM) 320. The LLM 320 may process the query 302 and the context to generate a holistic response 322. The response 322 may include a semantic response corresponding to the semantic sub-query and the deterministic response corresponding to the structured sub-query. The method 500 ends at step 516.
[0085] FIG. 6 illustrates an exemplary flow chart 600 for executing sub-queries to generate response 322, in accordance with an example embodiment. It will be understood that each block of the flow diagram of the method 600 may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other communication devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory 106 of the computing device 102, employing an embodiment of the present disclosure and executed by a processor 104. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flow diagram blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flow diagram blocks.
[0086] Accordingly, blocks of the flow diagram support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flow diagram, and combinations of blocks in the flow diagram, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.
[0087] FIG. 6 is explained in conjunction with elements from FIGS. 1, 2, 3, and 5. At step 602, the method 600 is initiated. The method 600, at step 604, may include computing an inter-query similarity for search boundary detection via a linear text segmentation technique to optimize a search space. The linear text segmentation is used to detect topic boundaries while ingesting the documents into the structured database and the unstructured database. Linear text segmentation is a natural language processing (NLP) technique used to divide a long text into coherent, meaningful segments while maintaining the original order of the content. The segments may be paragraphs, sections, or topics that are logically related to each other, allowing the text to be organized in a structured way. Further, the linear text segmentation technique identifies boundaries between different topics or sections within a document, enabling easier analysis and processing of the text.
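A minimal sketch of topic-boundary detection in the spirit of linear text segmentation follows: adjacent sentences are compared by word overlap and a boundary is placed where similarity dips below a threshold. The similarity measure (Jaccard overlap) and the threshold value are assumptions for illustration, not the disclosed algorithm.

```python
# Sketch of topic-boundary detection via inter-sentence similarity.
def words(sentence: str) -> set:
    return set(sentence.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def find_boundaries(sentences: list[str], threshold: float = 0.1) -> list[int]:
    boundaries = []
    for i in range(1, len(sentences)):
        if jaccard(words(sentences[i - 1]), words(sentences[i])) < threshold:
            boundaries.append(i)  # topic change between sentence i-1 and i
    return boundaries

doc = [
    "The contract amount is one lakh rupees.",
    "The amount is payable quarterly.",
    "Termination requires thirty days written notice.",
]
print(find_boundaries(doc))  # [2]: a boundary before the termination topic
```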
[0088] The method 600, at step 606, may include optimizing a search space before executing sub-queries, using document 304 metadata and hierarchy. In an embodiment, the set of documents 304 to search (i.e., the search space) is narrowed down by leveraging metadata and the structural relationships between documents, before running more detailed queries. The optimization helps reduce the number of documents 304 that need to be processed and makes the search more efficient.
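This pruning step can be sketched as a simple filter over document records. The record layout here (a metadata dictionary plus a slash-separated hierarchy path) is an assumption for illustration; the patent does not specify how the hierarchy is encoded.

```python
# Sketch of search-space optimization: prune the corpus by metadata and by
# position in the document hierarchy before any sub-query runs.
documents = [
    {"id": "D1", "meta": {"type": "contract", "client": "XYZ"},
     "path": "contracts/2024/XYZ/master"},
    {"id": "D2", "meta": {"type": "invoice", "client": "XYZ"},
     "path": "finance/2024/XYZ/invoices"},
    {"id": "D3", "meta": {"type": "contract", "client": "ABC"},
     "path": "contracts/2023/ABC/master"},
]

def prune(docs: list[dict], meta_filter: dict, path_prefix: str) -> list[dict]:
    return [d for d in docs
            if all(d["meta"].get(k) == v for k, v in meta_filter.items())
            and d["path"].startswith(path_prefix)]

# Only contract documents for client XYZ under the contracts branch survive,
# so later sub-queries touch one document instead of three.
kept = prune(documents, {"type": "contract", "client": "XYZ"}, "contracts/")
print([d["id"] for d in kept])  # ['D1']
```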
[0089] The method 600, at step 608, further includes performing similarity search on the unstructured database 306 based on the semantic sub-query to generate the semantic response. In an embodiment, the similarity search may include searching for documents or pieces of information that are semantically similar to the semantic sub-query, based on meaning rather than exact words. Further, the semantic response is generated that aligns with the meaning or context of the semantic sub-query.
[0090] The method 600, at step 610, may include performing a Structured Query Language (SQL) search on the structured database 308 based on the structured sub-query to generate the deterministic response. In an embodiment, the SQL search may include querying specific data using a SQL command. The structured sub-query is used to retrieve data from the structured database 308 that may influence the main query, ensuring precise results. The structured database 308 may store information in the form of tables with rows and columns. The response is deterministic because the same structured sub-query, when run on the same structured database 308, always returns the same result. The method 600 ends at step 612.
[0091] The disclosed methods and systems may be executed on a conventional or general-purpose computing system, such as a personal computer (PC) or server. Referring to FIG. 7, an exemplary computing system 700 is illustrated, which may implement processing functionality for various embodiments (e.g., as a SIMD device, client device, server device, or one or more processors). Those skilled in the art will recognize that other computing systems or architectures may also be used to implement the invention. The computing system 700 may represent a user device, such as a desktop, laptop, mobile phone, personal entertainment device, DVR, or any other special or general-purpose computing device appropriate for a given application or environment. The computing system 700 may include one or more processors, such as processor 702, implemented using a general-purpose or specialized processing engine, such as a microprocessor, microcontroller, or other control logic. In some embodiments, processor 702 may be an AI processor, implemented as a Tensor Processing Unit (TPU), graphical processing unit (GPU), or custom-programmable solution, such as a Field-Programmable Gate Array (FPGA).
[0092] The computing system 700 may further include memory 706 (e.g., Random Access Memory (RAM) or other dynamic memory) for storing instructions and information to be executed by processor 702. Memory 706 may also store temporary variables or intermediate information during execution. Additionally, the computing system 700 may include a read-only memory (ROM) or other static storage device connected to bus 704 for storing static information and instructions for processor 702.
[0093] Storage devices 708 may also be included in computing system 700, consisting of, for example, a media drive 710 and a removable storage interface. Media drive 710 may support fixed or removable storage media, such as hard disk drives, floppy drives, magnetic tape drives, SD card ports, USB ports, optical disk drives (e.g., CD or DVD drives), or other media. Storage media 712 may include hard disks, magnetic tapes, flash drives, or other media that can be read and written to by media drive 710. Storage media 712 may store computer-readable software or data.
[0094] Alternatively, storage devices 708 may include other means for loading computer programs or data into computing system 700, such as removable storage unit 714 and interface 716, program cartridges, removable memory (e.g., flash memory), memory slots, and similar storage units and interfaces.
[0095] Computing system 700 may also include a communications interface 718 to transfer software and data between external devices and system 700. Examples include network interfaces (e.g., Ethernet), communication ports (e.g., USB, micro-USB), Near Field Communication (NFC), and other protocols. The signals transferred via communications interface 718 may include electronic, electromagnetic, optical, or other forms of transmission through channel 720, which may utilize wireless mediums, fibre optics, wires, or cables.
[0096] Computing system 700 may also include Input/Output (I/O) devices 722, such as a display, keypad, microphone, speakers, vibration motors, LED indicators, etc., allowing user interaction and feedback. The term "computer-readable medium" may refer to any storage medium used, such as memory 706, storage devices 708, removable storage unit 714, or signal(s) on channel 720. Such media may store sequences of instructions, or "computer program code," which, when executed, enable computing system 700 to perform the methods and functions described in embodiments of the invention.
[0097] In embodiments where elements are implemented in software, the software may be stored on a computer-readable medium and loaded into computing system 700 via removable storage unit 714, media drive 710, or communications interface 718. When executed by processor 702, this control logic (e.g., software instructions or computer program code) causes processor 702 to perform the invention's functions as described.
[0098] As will be appreciated by those skilled in the art, the techniques described in the various embodiments discussed above are not routine, or conventional, or well understood in the art. The techniques discussed above provide for innovative solutions to address the challenges associated with generating holistic responses based on structural and semantic queries. The disclosed techniques offer several advantages over the existing methods:
[0099] Parallel Processing of Subqueries: The present disclosure allows for parallel execution of independent sub-queries, improving time efficiency and reducing response latency. The parallel processing is advantageous when dealing with complex queries involving both structural and semantic sub-queries (see the sketch following this list of advantages).
[0100] Deterministic Structural Query Results: Unlike existing RAG frameworks that treat all queries as semantic, the present disclosure generates deterministic results for structural queries, ensuring precise and reliable responses for queries that involve structured data, such as SQL-based queries.
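Purely as an illustration of such deterministic retrieval, the sketch below runs an SQL aggregate against a hypothetical in-memory stand-in for the structured database (308); the schema and figures are invented for the example:

```python
import sqlite3

# Hypothetical in-memory metadata store; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contracts (client TEXT, amount REAL)")
conn.executemany("INSERT INTO contracts VALUES (?, ?)",
                 [("XYZ", 120000.0), ("XYZ", 80000.0), ("ABC", 50000.0)])

# An SQL aggregate always returns the same answer for the same data,
# unlike a similarity search, which ranks approximate matches.
row = conn.execute(
    "SELECT AVG(amount) FROM contracts WHERE client = ?", ("XYZ",)).fetchone()
print(row[0])  # -> 100000.0
```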
[0101] Hybrid Search Approach: The present disclosure uses a combination of keyword indexing and document tree structures to search for relevant documents. The approach enhances the precision of document retrieval by considering both semantic context and document hierarchy, leading to more accurate search results.
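The following hypothetical sketch illustrates the general idea of combining a keyword index with a document hierarchy; the toy document tree and tokenisation are assumptions made for the example, not the disclosed implementation:

```python
from collections import defaultdict

# Hypothetical document tree: (branch, section) -> text of that section.
doc_tree = {
    ("contract", "payment"): "The contract amount is payable quarterly.",
    ("contract", "termination"): "Either party may terminate with 30 days notice.",
    ("policy", "returns"): "Returns are accepted within 14 days of delivery.",
}

# Build a simple keyword (inverted) index over the tree's leaf sections.
index = defaultdict(set)
for path, text in doc_tree.items():
    for token in text.lower().split():
        index[token.strip(".,")].add(path)

def hybrid_search(query: str, branch: str) -> list[str]:
    # Keyword hits are restricted to the requested branch of the hierarchy,
    # narrowing the candidate set before any semantic scoring is applied.
    hits = set()
    for token in query.lower().split():
        hits |= index.get(token, set())
    return [doc_tree[p] for p in hits if p[0] == branch]

print(hybrid_search("terminate notice", branch="contract"))
# -> ['Either party may terminate with 30 days notice.']
```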
[0102] Holistic Response Generation: By integrating both structural and semantic subquery responses into the final output, the present disclosure generates comprehensive answers. The method ensures that LLMs provide a holistic response that includes both factual (deterministic) and contextual (semantic) information.
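As a minimal illustration, the sketch below folds a deterministic result and a semantic passage into a single context block of the kind that could be passed to the LLM (320); the field names and wording are hypothetical:

```python
def build_context(deterministic: dict[str, str], semantic: dict[str, str]) -> str:
    # Both kinds of sub-query result are folded into one context block,
    # so the LLM can ground factual and contextual parts of its answer.
    lines = ["Facts (deterministic):"]
    lines += [f"- {k}: {v}" for k, v in deterministic.items()]
    lines.append("Passages (semantic):")
    lines += [f"- {k}: {v}" for k, v in semantic.items()]
    return "\n".join(lines)

context = build_context(
    {"contract amount": "USD 100,000"},
    {"termination clause": "Either party may terminate with 30 days notice."},
)
prompt = (
    "Answer using only the context below.\n\n"
    f"{context}\n\n"
    "Question: What is the contract amount and termination clause?"
)
```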
[0103] Improved Vector Database Efficiency: The present disclosure supports both SQL-based structural data and vector-based semantic data within the same system. The disclosure enhances the versatility and capability of the retrieval mechanism, optimizing search operations across different types of queries.
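For illustration, the following sketch keeps SQL metadata and toy embedding vectors keyed by the same document identifiers, so a single store can serve both query types; the vectors and similarity function are simplified assumptions, not the disclosed implementation:

```python
import math
import sqlite3

# Hypothetical single store: document metadata lives in SQL tables, and toy
# embedding vectors are kept alongside, keyed by the same document ids.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, client TEXT, amount REAL)")
conn.executemany("INSERT INTO docs VALUES (?, ?, ?)",
                 [(1, "XYZ", 100000.0), (2, "ABC", 50000.0)])
embeddings = {1: [0.1, 0.9, 0.2], 2: [0.8, 0.1, 0.1]}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def semantic_lookup(query_vec: list[float]) -> int:
    # Vector similarity search over the same ids the SQL side uses.
    return max(embeddings, key=lambda i: cosine(embeddings[i], query_vec))

amount = conn.execute(
    "SELECT amount FROM docs WHERE client = ?", ("XYZ",)).fetchone()[0]
best_doc_id = semantic_lookup([0.0, 1.0, 0.0])  # -> 1
```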
[0104] Enhanced Search Space Reduction: The present disclosure introduces document hierarchy support, allowing for more efficient search space reduction based on the organization of the documents. The disclosure reduces unnecessary data processing and improves the overall performance of the retrieval process.
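A minimal sketch of such hierarchy-based pruning follows; the branch names are hypothetical, and a real implementation would select branches from document metadata rather than a hard-coded argument:

```python
# Hypothetical hierarchy: a top-level branch maps to the documents it contains.
doc_hierarchy = {
    "legal": {"contracts/xyz.pdf": "...", "contracts/abc.pdf": "..."},
    "finance": {"reports/q1.pdf": "...", "reports/q2.pdf": "..."},
}

def reduce_search_space(hierarchy: dict, branch: str) -> dict:
    # Pruning whole branches up front means downstream similarity search
    # scores far fewer chunks than a flat scan over the entire corpus.
    return hierarchy.get(branch, {})

candidates = reduce_search_space(doc_hierarchy, "legal")
# -> only 'contracts/xyz.pdf' and 'contracts/abc.pdf' remain to be searched
```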
[0105] The disclosed techniques offer several applications, including:
[0106] Drug Discovery and Development: The framework may support drug discovery and development by combining structured experimental data (e.g., assay results, compound properties) with unstructured research literature, enabling queries that require both precise measurements and scientific context.
[0107] Question-Answering Systems: The framework may be used in advanced question-answering systems where both deterministic (structural) and contextual (semantic) information are required. The framework is useful for queries such as "What is the contract amount and termination clause?" where responses need to combine precise data with legal or regulatory context.
[0108] Legal Document Analysis: The framework may aid in the analysis of legal documents, extracting structured data (e.g., dates, contract amounts) and interpreting unstructured text (e.g., clauses, terms) to provide comprehensive legal insights.
[0109] Financial Reporting and Analysis: The framework may be employed in financial systems to process queries that require both computational operations (e.g., averages, totals) and contextual understanding of financial statements or reports, such as "What is the average revenue and main business focus of client XYZ?".
[0110] Customer Support Systems: The framework may enhance customer support by retrieving structured data (e.g., order details, account information) alongside relevant policy or procedural information (e.g., return policies) to generate holistic responses to customer queries.
[0111] Enterprise Knowledge Management: Organizations may use the framework to search structured and unstructured internal documents, combining numerical or structured database queries with semantic insights for decision-making, reports, and corporate knowledge management.
[0112] Healthcare Data Retrieval: In healthcare, the framework may be applied to retrieve structured data such as patient records and combine it with medical literature, enabling comprehensive responses to queries involving patient history and treatment guidelines.
[0113] Scientific Research and Publications: Researchers may use the framework to analyse both structured datasets (e.g., experimental results) and unstructured information (e.g., research papers), enabling the generation of insightful, data-driven summaries.
[0114] Business Intelligence Tools: The framework may support business intelligence by enabling queries that require both structured data (e.g., sales figures) and unstructured context (e.g., customer reviews), offering a complete view of business performance.
[0115] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
[0116] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[0117] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0118] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
[0119] While the present invention has been described with reference to particular embodiments, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions, and improvements to the embodiments described above are possible. It is contemplated that these variations, modifications, additions, and improvements fall within the scope of the invention.
Claims:
We claim:
1. A method (500) for generating response based on queries (302), the method (500) comprising:
receiving, by a computing device (102), at least one user query (302);
generating, by the computing device (102), one or more sub-queries from the at least one user query (302) via a sub-query generator tool (310), wherein the one or more sub-queries comprises at least one of a structured sub-query and a semantic sub-query;
executing, by the computing device (102), the structured sub-query on a structured database (308) to generate a deterministic response;
executing, by the computing device (102), the semantic sub-query on an unstructured database (306) to generate a semantic response;
aggregating, by the computing device (102), the deterministic response and the semantic response to create a context (318) of the user query (302); and
generating, by the computing device (102), the response (322) based on the context (318) via a Large Language Model (LLM) (320).
2. The method (500) as claimed in claim 1, wherein the sub-query generator tool (310) is a few-shot learning model.
3. The method (500) as claimed in claim 1, wherein the structured database (308) comprises metadata of a plurality of documents (304), and the unstructured database (306) comprises vector representation of the plurality of documents (304).
4. The method (500) as claimed in claim 1, further comprising performing similarity search on the unstructured database (306) based on the semantic sub-query to generate the semantic response.
5. The method (500) as claimed in claim 1, further comprising optimizing a search space before executing subqueries using document metadata and hierarchy.
6. The method (500) as claimed in claim 5, further comprising computing an inter-query similarity for search boundary detection via a linear text segmentation technique to optimize a search space.
7. A system (100) for generating response based on queries (302), the system (100) comprising:
a processor (104); and
a memory (106) communicatively coupled to the processor (104), wherein the memory (106) stores processor instructions, which when executed by the processor (104), cause the processor (104) to:
receive at least one user query (302);
generate one or more sub-queries from the at least one user query (302) via a sub-query generator tool (310), wherein the one or more sub-queries comprises at least one of a structured sub-query and a semantic sub-query;
execute the structured sub-query on a structured database (308) to generate a deterministic response;
execute the semantic sub-query on an unstructured database (306) to generate a semantic response;
aggregate the deterministic response and the semantic response to create a context (318) of the user query (302); and
generate the response (322) based on the context (318) via a Large Language Model (LLM) (320).
8. The system (100) as claimed in claim 7, wherein the sub-query generator tool (310) is a few-shot learning model.
9. The system (100) as claimed in claim 7, wherein the structured database (308) comprises metadata of a plurality of documents, and the unstructured database (306) comprises vector representation of the plurality of documents (304).
10. The system (100) as claimed in claim 7, wherein the processor instructions, on execution, cause the processor (104) to perform similarity search on the unstructured database (306) based on the semantic sub-query to generate the semantic response.
Documents
Name | Date |
---|---|
202421086195-FORM-26 [13-11-2024(online)].pdf | 13/11/2024 |
202421086195-Proof of Right [13-11-2024(online)].pdf | 13/11/2024 |
202421086195-FORM 18A [09-11-2024(online)].pdf | 09/11/2024 |
202421086195-FORM 3 [09-11-2024(online)].pdf | 09/11/2024 |
202421086195-FORM-5 [09-11-2024(online)].pdf | 09/11/2024 |
202421086195-FORM-9 [09-11-2024(online)].pdf | 09/11/2024 |
202421086195-FORM28 [09-11-2024(online)].pdf | 09/11/2024 |
202421086195-STARTUP [09-11-2024(online)].pdf | 09/11/2024 |
202421086195-COMPLETE SPECIFICATION [08-11-2024(online)].pdf | 08/11/2024 |
202421086195-DRAWINGS [08-11-2024(online)].pdf | 08/11/2024 |
202421086195-EVIDENCE FOR REGISTRATION UNDER SSI [08-11-2024(online)].pdf | 08/11/2024 |
202421086195-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [08-11-2024(online)].pdf | 08/11/2024 |
202421086195-FORM 1 [08-11-2024(online)].pdf | 08/11/2024 |
202421086195-FORM FOR SMALL ENTITY(FORM-28) [08-11-2024(online)].pdf | 08/11/2024 |