A SYSTEM FOR GENERATING PROMPTS FOR GENERATIVE ARTIFICIAL INTELLIGENCE (AI) APPLICATIONS
ORDINARY APPLICATION
Published
Filed on 25 November 2024
Abstract
ABSTRACT A SYSTEM FOR GENERATING PROMPTS FOR GENERATIVE ARTIFICIAL INTELLIGENCE (AI) APPLICATIONS The present disclosure discloses a system for generating prompts for generative artificial intelligence (AI) applications. The system (100) comprises: a user interface module (102) to receive a user query from a user; a query refinement module (104) to process the user query using a Four-Way Corpus Directory (FWCD), comprising a persona corpus (104a) to personalize the user query, a stopword corpus (104b) to filter out non-essential terms from the personalized user query, an annotator corpus (104c) to add critical features and generate a refined user query, and a localized application-specific content corpus (104d) to retrieve context-specific content from a localized content repository (112); a semantic processing module (106) including a semantic analysis engine (106a) to perform Latent Semantic Analysis on the retrieved content for semantic categorization and a ranking engine (106b) to rank the categorized content for generating a list of prompts; a prompt selection module (108) to select a highest-ranked prompt; and a generative AI model interface module (110) to deliver the selected prompt for application in a large language model. Figure 1
Patent Information
Field | Value
---|---
Application ID | 202441091788
Invention Field | COMPUTER SCIENCE
Date of Application | 25/11/2024
Publication Number | 48/2024
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
GAVASKAR SARANGATHARAN | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur- 522502, Andhra Pradesh, India | India | India |
ARLA GOPALA KRISHNA | SRM University-AP, Neerukonda, Mangalagiri Mandal, Guntur- 522502, Andhra Pradesh, India | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
SRM UNIVERSITY | Amaravati, Mangalagiri, Andhra Pradesh-522502, India | India | India |
Specification
Description:
FIELD
The present disclosure generally relates to the field of artificial intelligence (AI). More particularly, the present disclosure relates to a system for generating prompts for generative artificial intelligence (AI) applications.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Traditional generative AI systems rely on static prompt creation methods in which user inputs are processed with minimal contextual adaptation or semantic understanding. These systems often utilize predefined templates or generalized frameworks to generate outputs, aiming to provide generic responses. They primarily focus on structured inputs and static databases, crafting prompts from general patterns and standard structures via static templates or rule-based approaches.
Despite their benefits, traditional generative AI systems for prompt creation face several limitations. The foremost issue is that traditional technologies face significant limitations in meeting the increasing demand for personalized, contextually rich, and semantically aligned responses. The traditional systems struggle with perceiving detailed user intent and refining queries in real time, which restricts their ability to deliver high-quality, meaningful prompts for AI models. Furthermore, the inability to optimize prompt fluidity and align outputs with user-specific preferences hinders the overall performance and usability of these systems.
Therefore, there is felt a need for a system for generating prompts for generative artificial intelligence (AI) applications that alleviates the aforementioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide a system for generating prompts for generative artificial intelligence (AI) applications.
Another object of the present disclosure is to provide a system that enhances the accuracy and contextual relevance of prompts for Generative AI.
Still another object of the present disclosure is to provide a system that facilitates the creation of personalized and user-specific prompts for diverse applications.
Yet another object of the present disclosure is to provide a system that ensures semantic categorization and ranking of content to improve prompt quality.
Still another object of the present disclosure is to provide a system that streamlines language flow and improves grammatical precision in generated prompts.
Yet another object of the present disclosure is to provide a system that enables efficient and context-specific prompt creation for varied user roles and localized content requirements.
Still another object of the present disclosure is to provide a system that improves the usability and relevance of AI-generated outputs in diverse domains like education, enterprises, and marketing.
Still another object of the present disclosure is to provide a method for generating prompts for generative artificial intelligence (AI) applications.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a system for generating prompts for generative artificial intelligence (AI) applications. The system comprises a user interface module, a query refinement module, a semantic processing module, a prompt selection module, and a generative AI model interface module.
The user interface module is configured to receive a user query from a user as input for generating personalized and contextually aligned prompts.
The query refinement module is configured to cooperate with the user interface module to process the user query using a multi-layered corpus structure referred to as a Four-Way Corpus Directory (FWCD).
The query refinement module includes a persona corpus, a stopword corpus, an annotator corpus, and a localized application-specific content corpus.
The persona corpus is configured to generate data about user preferences, traits, or attributes to personalize the user query.
The stopword corpus is configured to filter out non-essential terms from the personalized user query to maintain linguistic coherence and ensure essential meanings are preserved.
The annotator corpus is configured to add critical features or annotations to refine the filtered user query for relevance so as to generate a refined user query.
The localized application-specific content corpus is configured to retrieve context-specific content from a localized content repository for the refined user query.
The semantic processing module is configured to cooperate with the query refinement module and further includes a semantic analysis engine and a ranking engine.
The semantic analysis engine is configured to perform a Latent Semantic Analysis (LSA) on the content retrieved from the localized application-specific content corpus for semantic categorization.
The ranking engine implements a relevance-based ranking algorithm to rank the categorized content for generating a list of prompts.
The prompt selection module is configured to cooperate with the semantic processing module to select the highest-ranked prompt based on semantic relevance or display the ranked prompts to the user for selection.
The generative AI model interface module is configured to deliver the selected prompt for application in a large language model (LLM) to generate contextually aligned AI outputs and deliver the generated outputs to the user interface module for display.
In an embodiment, the semantic processing module further implements deep learning models, including an artificial neural network (ANN) model and a recurrent neural network (RNN) model, to enhance the precision of content retrieval within the localized application-specific content corpus.
In an embodiment, the ranking engine utilizes a Bidirectional Encoder Representations from Transformers (BERT) based ranking algorithm to evaluate the contextual and semantic relevance of retrieved content.
In an embodiment, the persona corpus is dynamically updated based on user interactions to improve the personalization of future prompts.
In an embodiment, the stopword corpus incorporates adaptive filtering techniques to identify and exclude unnecessary words based on the context of the user query.
In an embodiment, the annotator corpus includes predefined categories and keywords to refine queries for specific applications, including education, marketing, and enterprise workflows.
In an embodiment, the localized content repository is updated periodically with new content from multiple sources, including internal repositories containing proprietary data, third-party APIs for accessing external content, and user-generated content including feedback and queries.
In an embodiment, the user interface module allows the user to modify the refined query or ranked prompts before final submission to the generative AI model interface module.
In an embodiment, the semantic processing module integrates natural language processing (NLP) techniques to enhance contextual understanding of queries.
The present disclosure further envisages a method for generating prompts for generative artificial intelligence (AI) applications. The method includes the following steps:
• receiving, by a user interface module, a user query from a user as input for generating a personalized and contextually aligned prompt;
• processing, by a query refinement module, the user query using a multi-layered corpus structure referred to as a Four-Way Corpus Directory (FWCD);
• generating, by a persona corpus, data about user preferences, traits, or attributes to personalize the user query;
• filtering out, by a stopword corpus, non-essential terms from the personalized user query to maintain linguistic coherence and ensure essential meanings are preserved;
• adding, by an annotator corpus, critical features or annotations to refine the filtered user query for relevance so as to generate a refined user query;
• retrieving, by a localized application-specific content corpus, context-specific content from a localized content repository for the refined user query;
• performing, by a semantic analysis engine, a Latent Semantic Analysis (LSA) on the content retrieved from the localized application-specific content corpus for semantic categorization;
• ranking, by a ranking engine, the categorized content using a relevance-based ranking algorithm to generate a list of prompts;
• selecting, by a prompt selection module, a highest-ranked prompt based on semantic relevance or the user selection from the ranked prompts;
• delivering, by a generative AI model interface module, the selected prompt for application in a large language model (LLM) to generate contextually aligned AI outputs; and
• displaying the generated AI outputs to the user via the user interface module.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
A system for generating prompts for generative artificial intelligence (AI) applications of the present disclosure will now be described with the help of the accompanying drawing, in which:
Figure 1 illustrates a block diagram of a system for generating prompts for generative artificial intelligence (AI) applications in accordance with an embodiment of the present disclosure;
Figures 2A-2C illustrate a flow chart depicting the steps involved in a method for generating prompts for generative artificial intelligence (AI) applications in accordance with an embodiment of the present disclosure; and
Figure 3 illustrates a workflow of a query generation and selection process for AI models in accordance with an embodiment of the present disclosure.
LIST OF REFERENCE NUMERALS
100 - System
102 - User Interface Module
104 - Query Refinement Module
104a - Persona Corpus
104b - Stopword Corpus
104c - Annotator Corpus
104d - Localized Application-Specific Content Corpus
106 - Semantic Processing Module
106a - Semantic Analysis Engine
106b - Ranking Engine
108 - Prompt Selection Module
110 - Generative AI Model Interface Module
112 - Localized Content Repository
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawing.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used in the present disclosure is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "including" and "having" are open-ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
When an element is referred to as being "engaged to," "connected to," or "coupled to" another element, it may be directly engaged, connected, or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
Traditional generative AI systems rely on static prompt creation methods in which user inputs are processed with minimal contextual adaptation or semantic understanding. These systems often utilize predefined templates or generalized frameworks to generate outputs, aiming to provide generic responses. They primarily focus on structured inputs and static databases, crafting prompts from general patterns and standard structures via static templates or rule-based approaches.
Despite their benefits, traditional generative AI systems for prompt creation face several limitations. The foremost issue is that traditional technologies face significant limitations in meeting the increasing demand for personalized, contextually rich, and semantically aligned responses. The traditional systems struggle with perceiving detailed user intent and refining queries in real time, which restricts their ability to deliver high-quality, meaningful prompts for AI models. Furthermore, the inability to optimize prompt fluidity and align outputs with user-specific preferences hinders the overall performance and usability of these systems. To address the issues of the existing systems and methods, the present disclosure envisages a system (hereinafter referred to as "system 100") for generating prompts for generative artificial intelligence (AI) applications and a method (hereinafter referred to as "method 200") for generating prompts for generative artificial intelligence (AI) applications. The system 100 will now be described with reference to Figure 1, and the method 200 will be described with reference to Figures 2A-2C and Figure 3.
Referring to Figure 1, the system 100 comprises a user interface module 102, a query refinement module 104, a semantic processing module 106, a prompt selection module 108, and a generative AI model interface module 110.
The user interface module 102 is configured to receive a user query from a user as input for generating personalized and contextually aligned prompts.
In an embodiment, the user interface module 102 supports multi-language input and provides real-time suggestions to guide users in formulating their queries.
The query refinement module 104 is configured to cooperate with the user interface module 102 to process the user query using a multi-layered corpus structure referred to as a Four-Way Corpus Directory (FWCD).
The query refinement module 104 includes a persona corpus 104a, a stopword corpus 104b, an annotator corpus 104c, and a localized application-specific content corpus 104d.
The persona corpus 104a is configured to generate data about user preferences, traits, or attributes to personalize the user query.
The stopword corpus 104b is configured to filter out non-essential terms from the personalized user query to maintain linguistic coherence and ensure essential meanings are preserved.
The annotator corpus 104c is configured to add critical features or annotations to refine the filtered user query for relevance so as to generate a refined user query.
The localized application-specific content corpus 104d is configured to retrieve context-specific content from a localized content repository 112 for the refined user query.
In an embodiment, the persona corpus 104a is dynamically updated based on user interactions to improve the personalization of future prompts.
In an embodiment, the localized content repository 112 is updated periodically with new content from multiple sources, including internal repositories containing proprietary data, third-party APIs for accessing external content, and user-generated content including feedback and queries.
In an embodiment, the annotator corpus 104c includes predefined categories and keywords to refine queries for specific applications, including education, marketing, and enterprise workflows.
In an embodiment, the stopword corpus 104b is configured to eliminate unimportant or filter words from the query to ensure smooth linguistic transitions.
In an embodiment, the stopword corpus 104b incorporates adaptive filtering techniques to identify and exclude unnecessary words based on the context of the user query.
In an embodiment, the localized application-specific content corpus 104d provides context-specific content. Content is searched and selected from multiple localized corpora stored in the localized content repository 112. The content is filtered based on the criteria from the annotator corpus 104c to ensure that the content aligns with the critical features present in the refined user query.
In an embodiment, the localized content repository 112 contains a variety of content gathered from a variety of sources based on the user base, system purpose, and application need. The variety of sources includes internal databases of the company, third-party content from external API (application programming interface), and user-generated data from feedback queries or interaction from the user base.
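For illustration only, the following sketch shows one way the FWCD corpora and the localized content repository 112 might be represented in code; the disclosure does not prescribe a data model, so every class, field, and method name here is a hypothetical assumption (Python is used for all sketches in this description).

```python
from dataclasses import dataclass, field

@dataclass
class PersonaCorpus:
    """Hypothetical stand-in for the persona corpus 104a."""
    traits: dict = field(default_factory=dict)  # e.g., {"role": "teacher"}

    def personalize(self, query: str) -> str:
        # Attach user attributes so downstream stages see user context.
        persona = ", ".join(f"{k}={v}" for k, v in self.traits.items())
        return f"{query} [persona: {persona}]" if persona else query

@dataclass
class LocalizedContentRepository:
    """Hypothetical stand-in for the localized content repository 112:
    multiple localized corpora keyed by application-specific domain."""
    corpora: dict = field(default_factory=dict)  # {"marketing": [...], ...}

    def retrieve(self, domain: str) -> list:
        # Return the context-specific content for the given domain.
        return self.corpora.get(domain, [])
```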
The semantic processing module 106 is configured to cooperate with the query refinement module 104 and further includes a semantic analysis engine 106a and a ranking engine 106b.
The semantic analysis engine 106a is configured to perform a Latent Semantic Analysis (LSA) on the content retrieved from the localized application-specific content corpus 104d for semantic categorization.
The ranking engine 106b implements a relevance-based ranking algorithm to rank the categorized content for generating a list of prompts.
In an embodiment, the semantic processing module 106 further implements deep learning models, including an artificial neural network (ANN) model and a recurrent neural network (RNN) model, to enhance the precision of content retrieval within the localized application-specific content corpus 104d.
In an embodiment, the ranking engine 106b implements a Bidirectional Encoder Representations from Transformers (BERT) based ranking algorithm to evaluate the contextual and semantic relevance of retrieved content.
In an embodiment, the semantic processing module 106 integrates natural language processing (NLP) techniques to enhance contextual understanding of queries.
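As a rough, non-authoritative illustration of the LSA step performed by the semantic analysis engine 106a, the sketch below projects retrieved content into a latent semantic space via truncated SVD over a TF-IDF matrix and groups it into categories. scikit-learn is an assumed library choice, and k-means stands in for whatever categorization scheme an implementation actually uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def categorize_content(documents, n_topics=2, n_categories=2):
    """Semantic analysis engine 106a, sketched: LSA-based categorization."""
    # TF-IDF term-document matrix over the retrieved content.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
    # LSA: low-rank SVD projection exposing latent semantic structure.
    lsa_vectors = TruncatedSVD(n_components=n_topics).fit_transform(tfidf)
    # Assign each document a semantic category label (hypothetical k-means).
    return KMeans(n_clusters=n_categories, n_init=10).fit_predict(lsa_vectors)
```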
The prompt selection module 108 is configured to cooperate with the semantic processing module 106 to select the highest-ranked prompt based on semantic relevance or display the ranked prompts to the user for selection.
The generative AI model interface module 110 is configured to deliver the selected prompt for application in a large language model (LLM) to generate contextually aligned AI outputs and deliver the generated outputs to the user interface module 102 for display.
In an embodiment, the user interface module 102 allows the user to modify the refined query or ranked prompts before final submission to the generative AI model interface module 110.
In an embodiment, the user interface module 102 includes an interactive dashboard for viewing, modifying, and selecting ranked prompts.
In an embodiment, the system 100 further comprises a central processing unit and a memory module, wherein the central processing unit and/or a graphics processing unit (GPU) executes complex natural language processing algorithms and the memory module stores intermediate data during semantic categorization and ranking operations.
In an embodiment, the highest-ranked prompt is selected by the system or by the user for the large language model (LLM). The selected prompt is further processed by the LLM to generate a contextually aligned artificial intelligence (AI) response.
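A minimal sketch of this selection and delivery step, assuming a hypothetical `call_llm` callable that wraps whichever LLM endpoint a deployment uses; the disclosure names no specific API, so nothing below should be read as a fixed interface.

```python
def select_and_deliver(ranked_prompts, call_llm, user_choice=None):
    """Prompt selection module 108 and generative AI model interface
    module 110, sketched."""
    # Take the highest-ranked prompt by default (prompt rank #1), or honor
    # an explicit user selection from the displayed ranked list.
    prompt = ranked_prompts[user_choice if user_choice is not None else 0]
    # call_llm is a hypothetical wrapper (e.g., an HTTP client for a hosted
    # model); its response is returned to the UI module 102 for display.
    return call_llm(prompt)
```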
The disclosed system 100 for generating prompts for generative artificial intelligence (AI) applications further comprises hardware components, including one or more processors or central processing units (CPU) or a graphics processing unit (GPU), memory units or memory modules, and data storage modules, configured to execute complex natural language processing algorithms and to store intermediate data during semantic categorization and ranking operations.
In an embodiment, the processors or central processing unit (CPU) are configured to execute complex natural language processing techniques.
In an embodiment, the system 100 can include one or more processors or a central processing unit (CPU) and/or a graphics processing unit (GPU) and may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the central processing unit (CPU) and/or a graphics processing unit (GPU) are configured to fetch and execute computer-readable instructions stored in a memory module of the system 100. The memory module may store one or more computer-readable instructions or routines, which may be fetched and executed for executing the instructions. The memory module may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like. The functions of a central processing unit (CPU) and/or a graphics processing unit (GPU) may be provided through the use of dedicated hardware as well as hardware capable of executing machine-readable instructions. In other examples, a central processing unit (CPU) and/or a graphics processing unit (GPU) may be implemented by electronic circuitry or printed circuit board. Central processing unit (CPU) and/or a graphics processing unit (GPU) may be configured to execute functions of various modules of the system 100 such as the user interface module 102, the query refinement module 104, the semantic processing module 106, the prompt selection module 108 and the generative AI model interface module 110.
In an alternative aspect, the memory module may be an external data storage device coupled to the system 100 directly or through one or more offline/online data servers.
In an embodiment, the system 100 further comprises the user interface module 102 to receive real-time data inputs from external sources such as databases, APIs, and sensors, which are used by the query refinement module 104 and the generative AI model interface module 110 to continuously receive the user query and deliver the generated outputs. The user interface module 102 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, transceivers, storage devices, and the like. The user interface module 102 may facilitate communication of the system 100 with various devices coupled to the system 100. The user interface module 102 may also provide a communication pathway for one or more components of the system 100. Examples of such components include, but are not limited to, processing module(s) and data storage.
The central processing unit (CPU) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing module(s). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the central processing unit (CPU) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing module(s) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the central processing unit (CPU). In such examples, the system 100 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 100 and the processing resource. In other examples, the central processing unit (CPU) may be implemented by electronic circuitry and include the user interface module 102, the query refinement module 104, the semantic processing module 106, the prompt selection module 108, and the generative AI model interface module 110.
In another embodiment, the user interface module 102, through which the user query is received as input, supports multi-language input and provides real-time suggestions to guide the user in formulating the query. The user interface module 102 also allows the user to modify the refined query before the final submission to the generative AI model interface module 110.
In an embodiment, the query refinement module 104 processes the user query using a multilayer corpus structure. The multilayer corpus structure includes the persona corpus 104a to customize the prompts based on user attributes, allowing for more personalized user queries. The personalized user query is then processed by the stopword corpus 104b, which filters out the non-essential terms from the personalized query to ensure smooth linguistic transitions, and the annotator corpus 104c emphasizes the critical features or annotations to refine the filtered user query and generate a refined query that improves the focus of the prompt. The query refinement module 104 further includes the localized application-specific content corpus 104d to retrieve the context-specific content from the localized content repository 112 for the refined query, ensuring the AI output is more accurate and contextually aligned with the user intent.
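To make the refinement sequence concrete, here is a deliberately small sketch of the persona, stopword, and annotator stages in order; the stopword set and annotation keywords are invented for illustration, since the disclosure does not enumerate the contents of the corpora 104a-104c.

```python
# Hypothetical corpora contents; the disclosure does not enumerate them.
STOPWORDS = {"a", "an", "the", "please", "me", "some"}          # corpus 104b
ANNOTATIONS = {"marketing": ["target audience", "tone"],        # corpus 104c
               "education": ["grade level"]}

def refine_query(query, persona, domain):
    """Query refinement module 104, sketched: FWCD processing."""
    # Persona corpus 104a: personalize the raw query with user attributes.
    personalized = f"{query} (for a {persona.get('role', 'general')} user)"
    # Stopword corpus 104b: filter out non-essential terms while
    # preserving the essential meaning of the query.
    filtered = " ".join(w for w in personalized.split()
                        if w.lower() not in STOPWORDS)
    # Annotator corpus 104c: add critical features for the target domain.
    notes = ", ".join(ANNOTATIONS.get(domain, []))
    return f"{filtered} [annotations: {notes}]" if notes else filtered

# Example: refine_query("please write me an ad", {"role": "marketer"}, "marketing")
```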
In a further embodiment, the system 100 leverages the semantic processing module 106 to categorize and rank the context-specific content present in the localized application-specific content corpus 104d. The ranking engine 106b of the semantic processing module 106 implements a relevance-based ranking algorithm to rank the categorized content for generating a list of prompts. The semantic processing module 106 is further configured to implement deep learning models, including an artificial neural network (ANN) model and a recurrent neural network (RNN) model, to enhance the precision of the content retrieved within the localized application-specific content corpus 104d.
In yet another embodiment, the prompt selection module 108 selects the highest-ranked prompt based on semantic relevance or displays the ranked prompts to the user for selection. The selected prompts are delivered to a large language model (LLM) to generate AI output.
Figures 2A-2C illustrate a flow chart depicting the steps involved in a method for generating prompts for generative artificial intelligence (AI) applications in accordance with an embodiment of the present disclosure. The order in which method 200 is described is not intended to be construed as a limitation, and any number of the described method steps may be combined in any order to implement method 200, or an alternative method. Furthermore, method 200 may be implemented by a processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable medium/instructions, or a combination thereof. The method 200 comprises the following steps:
At step 202, the method 200 includes receiving, by a user interface module 102, a user query from a user as input for generating a personalized and contextually aligned prompt.
At step 204, the method 200 includes processing, by a query refinement module 104, the user query using a multi-layered corpus structure referred to as a Four-Way Corpus Directory.
At step 206, the method 200 includes generating, by a persona corpus 104a, data about user preferences, traits, or attributes to personalize the user query.
At step 208, the method 200 includes filtering out, by a stopword corpus 104b, non-essential terms from the personalized user query to maintain linguistic coherence and ensure essential meanings are preserved.
At step 210, the method 200 includes adding, by an annotator corpus 104c, critical features or annotations to refine the filtered user query for relevance so as to generate a refined user query.
At step 212, the method 200 includes retrieving, by a localized application-specific content corpus 104d, context-specific content from a localized content repository 112 for the refined user query.
At step 214, the method 200 includes performing, by a semantic analysis engine 106a, a Latent Semantic Analysis (LSA) on the content retrieved from the localized application-specific content corpus 104d for semantic categorization.
At step 216, the method 200 includes ranking, by a ranking engine 106b, the categorized content using a relevance-based ranking algorithm to generate a list of prompts.
At step 218, the method 200 includes selecting, by a prompt selection module 108, the highest-ranked prompt based on semantic relevance or a user selection from the ranked prompts.
At step 220, the method 200 includes delivering, by a generative AI model interface module 110, the selected prompt for application in a large language model (LLM) to generate contextually aligned AI outputs.
At step 222, the method 200 includes displaying the generated AI outputs to the user via the user interface module 102.
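Tying steps 202-222 together, here is a minimal end-to-end sketch under the same assumptions as the fragments above; it reuses `refine_query`, `categorize_content`, and `select_and_deliver` from earlier, plus the `rank_prompts` sketch shown after the Figure 3 discussion below. All helper names are hypothetical, not part of the disclosure.

```python
def method_200(user_query, persona, domain, repository, call_llm):
    """End-to-end sketch of method 200; not a definitive implementation."""
    # Steps 204-210: FWCD refinement (persona -> stopword filter -> annotate).
    refined = refine_query(user_query, persona, domain)
    # Step 212: retrieve context-specific content from repository 112.
    content = repository.retrieve(domain)
    # Step 214: LSA-based semantic categorization of the retrieved content
    # (labels could gate which content enters ranking; kept simple here).
    labels = categorize_content(content)
    # Step 216: rank the content into a list of candidate prompts.
    ranked = rank_prompts(refined, content)
    # Steps 218-222: select the top prompt, call the LLM, return its output
    # for display via the user interface module 102.
    return select_and_deliver(ranked, call_llm)
```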
Figure 3 illustrates a workflow of a query generation and selection process for AI models in accordance with an embodiment of the present disclosure. The user provides an input query for generating personalized and contextually aligned responses. The query is received by the query refinement module 104, which refines the query by integrating four distinct corpora: the persona corpus 104a, the stopword corpus 104b, the annotator corpus 104c, and the localized application-specific content corpus 104d. The persona corpus 104a personalizes the query based on the user's preferences, traits, or attributes, ensuring the query reflects individual requirements. The stopword corpus 104b removes non-essential terms, maintaining linguistic coherence and retaining only the critical components of the query. The annotator corpus 104c cooperates with the stopword corpus 104b to enhance the query by adding key features or annotations that improve its contextual relevance and precision. Together, the persona corpus 104a, the stopword corpus 104b, and the annotator corpus 104c generate a refined query.
The refined query is received by the localized application-specific content corpus 104d, which contains content organized across multiple corpora (e.g., corpus-1, corpus-2, corpus-3, …, corpus-n) based on the application-specific domains or contexts. The localized application-specific content corpus 104d retrieves the context-specific content from the multiple corpora stored in the localized content repository 112. The localized application-specific content corpus 104d uses advanced techniques, including an artificial neural network (ANN) and a recurrent neural network (RNN), to search through the multiple corpora and retrieve the most relevant content related to the annotations within the query.
The semantic processing module 106 receives the retrieved context-specific content to semantically categorize and rank it. The semantic processing module 106 includes the semantic analysis engine 106a and the ranking engine 106b. The semantic analysis engine 106a semantically groups the context-specific content into categories (semantic categorization-1, semantic categorization-2, …, semantic categorization-n), as shown in Figure 3, via a Latent Semantic Analysis (LSA) to identify patterns and assign meaning to the content, making it easier to align with the user intent. After categorization, the content is ranked by the ranking engine 106b, which uses the BERT ranking technique to evaluate the semantic relevance of the categorized content and generate a list of prompts ranked by their appropriateness (prompt rank #1, prompt rank #2, …, prompt rank #n), where prompt rank #1 is the most relevant and prompt rank #n the least. The prompt selection module 108 selects the most suitable prompt based on the context and intent of the user; the selection of the prompt is made by either the user or the system. The selected prompt is delivered to the generative AI model interface, which processes the selected prompt using a large language model (LLM), such as ChatGPT or Copilot, to generate a contextually aligned response.
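A sketch of what a BERT-based relevance ranking for the ranking engine 106b could look like, assuming the sentence-transformers library and an arbitrary pretrained sentence encoder; the disclosure says only "BERT-based" and names neither a model nor a library, so both are assumptions here.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical model choice; any BERT-style sentence encoder would do.
_model = SentenceTransformer("all-MiniLM-L6-v2")

def rank_prompts(refined_query, candidates):
    """Ranking engine 106b, sketched: order candidate prompts by semantic
    similarity to the refined query (prompt rank #1 first)."""
    query_vec = _model.encode(refined_query, convert_to_tensor=True)
    cand_vecs = _model.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, cand_vecs)[0]   # cosine relevance scores
    order = scores.argsort(descending=True)          # most relevant first
    return [candidates[int(i)] for i in order]
```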
In an operative configuration, the system 100 for generating prompts for generative artificial intelligence (AI) applications is configured to create a prompt for generative AI applications through a series of interconnected modules. The system 100 begins with the user interface module 102, which receives the user query as input from the user to generate personalized and contextually aligned prompts. This module includes an interactive dashboard for viewing, modifying, and selecting ranked prompts that support multi-language input, and provide real-time suggestions to guide users in formulating their query.
Further, the query refinement module 104 cooperates with the user interface module 102 to process the user query using a multilayered corpus structure referred to as a four-way corpus directory (FWCD). The four-way corpus directories are the persona corpus 104a, the stopword corpus 104b, the annotator corpus 104c, and the localized application-specific content corpus 104d. The persona corpus 104a is configured to generate data about user preferences, traits, or attributes to personalize the user query, and the stopword corpus 104b filters out the non-essential terms from the personalized user query to maintain linguistic coherence and ensure essential meanings are preserved. The annotator corpus 104c of the query refinement module 104 adds the critical feature or annotation to refine the filtered user query for relevance so as to generate the refined user query. The localized application-specific content corpus 104d uses the refined user query to retrieve the context-specific content from the localized content repository 112.
To ensure that the refined user query remains effective, the system 100 incorporates the semantic processing module 106. The semantic processing module 106 categorizes and ranks the refined user query to select the highest-ranked prompt for the generative AI. For categorizing and ranking the refined user query the semantic processing module 106 includes the semantic analysis engine 106a and the ranking engine 106b. The semantic analysis engine 106a performs a latent semantic analysis (LSA) on the content retrieved from the localized application-specific content corpus 104d for semantic categorization and the ranking engine 106b implements a relevance-based ranking algorithm to rank the categorized content for generating a list of prompts. The ranking engine 106b utilizes a Bidirectional Encoder Representations from Transformers (BERT) based ranking algorithm to evaluate the contextual and semantic relevance of retrieved content.
To ensure that the high-ranked prompt gets selected, the system 100 incorporates the prompt selection module 108 to select the highest-ranked prompt based on semantic relevance or display the ranked prompts to the user for selection. The system 100 further incorporates the generative AI model interface module 110 to deliver the selected prompt for application in a large language model (LLM) to generate contextually aligned AI output and deliver the generated outputs to the user interface module 102 for display.
Advantageously, the system 100 for generating prompts for generative artificial intelligence (AI) applications represents a significant advancement in enhancing user interaction and improving the response quality of generative AI by using the query refinement module 104, the user interface module 102, and the semantic processing module 106. The system 100 ensures user-specific personalization via the persona corpus 104a, enhances query relevance with the annotator corpus 104c, and retrieves domain-specific content using the localized application-specific content corpus 104d. The semantic processing module 106 employs the Latent Semantic Analysis (LSA) and relevance-based ranking to ensure precise categorization and ranking of content, enabling the prompt selection module 108 to identify and select the most suitable prompt for processing by a large language model (LLM). By integrating these advanced functionalities, the system 100 significantly enhances the efficiency of response generation, ensures precision and contextual relevance of AI outputs, and provides real-time adaptability to user input, making it a cutting-edge solution for generating prompts for generative artificial intelligence (AI) applications.
The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or codes on a computer-readable medium. Other examples and implementations are within the scope and spirit of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system for generating prompts for generative artificial intelligence (AI) applications that:
• provides a dynamic framework for creating accurate and contextually relevant prompts for Generative AI;
• provides semantic categorization and ranking of content to refine and prioritize prompt inputs;
• provides a user-specific Persona Corpus for tailoring prompts based on individual attributes;
• provides an annotator corpus to highlight critical features and improve prompt focus;
• provides a stopword corpus to ensure smooth language flow and grammatical precision in prompts;
• provides improved context comprehension and adaptability compared to traditional prompt-generation methods; and
• provides enhanced personalization and usability for diverse applications like education, enterprises, and marketing.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression "at least" or "at least one" suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
Claims:
WE CLAIM
1. A system for generating prompts for generative artificial intelligence (AI) applications, said system (100) comprising:
• a user interface module (102) configured to receive a user query from a user as input for generating personalized and contextually aligned prompts;
• a query refinement module (104) configured to cooperate with said user interface module (102) to process said user query using a multi-layered corpus structure referred to as a Four-Way Corpus Directory (FWCD), which includes:
o a persona corpus (104a) configured to generate data about user preferences, traits, or attributes to personalize said user query;
o a stopword corpus (104b) configured to filter out non-essential terms from said personalized user query to maintain linguistic coherence and ensure essential meanings are preserved;
o an annotator corpus (104c) configured to add critical features or annotations to refine a filtered user query for relevance so as to generate a refined user query; and
o a localized application-specific content corpus (104d) configured to retrieve context-specific content from a localized content repository (112) for said refined user query;
• a semantic processing module (106) configured to cooperate with said query refinement module (104), said semantic processing module (106) including:
o a semantic analysis engine (106a) configured to perform a Latent Semantic Analysis (LSA) on said content retrieved from said localized application-specific content corpus (104d) for semantic categorization; and
o a ranking engine (106b) implementing a relevance-based ranking algorithm to rank said categorized content for generating a list of prompts;
• a prompt selection module (108) configured to cooperate with said semantic processing module (106) to select a highest-ranked prompt based on semantic relevance or display the ranked prompts to said user for selection; and
• a generative AI model interface module (110) configured to deliver said selected prompt for application in a large language model (LLM) to generate contextually aligned AI outputs, and deliver the generated outputs to said user interface module (102) for display.
2. The system (100) as claimed in claim 1, wherein said semantic processing module (106) further implements deep learning models, including an artificial neural network (ANN) model and a recurrent neural network (RNN) model, to enhance the precision of content retrieval within said localized application-specific content corpus (104d).
3. The system (100) as claimed in claim 1, wherein said ranking engine (106b) utilizes a Bidirectional Encoder Representations from Transformers (BERT) based ranking algorithm to evaluate the contextual and semantic relevance of retrieved content.
4. The system (100) as claimed in claim 1, wherein said persona corpus (104a) is dynamically updated based on user interactions to improve the personalization of future prompts.
5. The system (100) as claimed in claim 1, wherein said stopword corpus (104b) incorporates adaptive filtering techniques to identify and exclude unnecessary words based on the context of said user query.
6. The system (100) as claimed in claim 1, wherein said localized content repository (112) is updated periodically with new content from multiple sources, including internal repositories containing proprietary data, third-party APIs for accessing external content, and user-generated content including feedback and queries.
7. The system (100) as claimed in claim 1, wherein said annotator corpus (104c) includes predefined categories and keywords to refine queries for specific applications, including education, marketing, and enterprise workflows.
8. The system (100) as claimed in claim 1, wherein said user interface module (102) allows said user to modify said refined query or ranked prompts before final submission to said generative AI model interface module (110).
9. The system (100) as claimed in claim 1, wherein said semantic processing module (106) integrates natural language processing (NLP) techniques to enhance contextual understanding of queries.
10. A method (200) for generating prompts for generative artificial intelligence (AI) applications, said method (200) comprising:
• receiving, by a user interface module (102), a user query from a user as input for generating a personalized and contextually aligned prompt;
• processing, by a query refinement module (104), said user query using a multi-layered corpus structure referred to as a Four-Way Corpus Directory (FWCD);
• generating, by a persona corpus (104a), data about user preferences, traits, or attributes to personalize said user query;
• filtering out, by a stopword corpus (104b), non-essential terms from said personalized user query to maintain linguistic coherence and ensure essential meanings are preserved;
• adding, by an annotator corpus (104c), critical features or annotations to refine the filtered user query for relevance so as to generate a refined user query;
• retrieving, by a localized application-specific content corpus (104d), context-specific content from a localized content repository for said refined user query;
• performing, by a semantic analysis engine (106a), a Latent Semantic Analysis (LSA) on said content retrieved from said localized application-specific content corpus (104d) for semantic categorization;
• ranking, by a ranking engine (106b), said categorized content using a relevance-based ranking algorithm to generate a list of prompts;
• selecting, by a prompt selection module (108), a highest-ranked prompt based on semantic relevance or said user selection from said ranked prompts;
• delivering, by a generative AI model interface module (110), said selected prompt for application in a large language model (LLM) to generate contextually aligned AI outputs; and
• displaying said generated AI outputs to said user via said user interface module (102).
Dated this 25th day of November, 2024
_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA - 25
of R.K.DEWAN & CO.
Authorized Agent of Applicant
TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, AT CHENNAI
Documents
Name | Date |
---|---|
202441091788-FORM-26 [26-11-2024(online)].pdf | 26/11/2024 |
202441091788-COMPLETE SPECIFICATION [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-DECLARATION OF INVENTORSHIP (FORM 5) [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-DRAWINGS [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-EDUCATIONAL INSTITUTION(S) [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-EVIDENCE FOR REGISTRATION UNDER SSI [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-FORM 1 [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-FORM 18 [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-FORM FOR SMALL ENTITY(FORM-28) [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-FORM-9 [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-PROOF OF RIGHT [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-REQUEST FOR EARLY PUBLICATION(FORM-9) [25-11-2024(online)].pdf | 25/11/2024 |
202441091788-REQUEST FOR EXAMINATION (FORM-18) [25-11-2024(online)].pdf | 25/11/2024 |