“An automated system for the evaluation of the textual and diagrammatic answers”
ORDINARY APPLICATION
Published
Filed on 12 November 2024
Abstract
Disclosed is an automated system for evaluating the textual and diagrammatic answers from answer sheets, the system comprising: a scanning device, wherein the physical answer sheets are collected and then scanned by conventional high-quality scanners to produce high-quality digital images; an Optical Character Recognition (OCR) unit, wherein, after processing, the text content from the scanned document is extracted; a textual answer processing unit, wherein the unit utilizes advanced Natural Language Processing (NLP) models to parse the extracted text; a diagram processing unit to assess the diagrammatic answers; a Text-Diagram Relationship Analyser; a complex diagram classifier; and a grading and feedback unit; wherein the system analyses the answer sheets based on the NLP model and also processes the diagrams to provide improved grading efficiency with accurate and consistent results.
Patent Information
Application ID | 202421087294 |
Invention Field | ELECTRONICS |
Date of Application | 12/11/2024 |
Publication Number | 49/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
RINA DAMDOO | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
KANAK KALYANI | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
ANKIT PANDE | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
MAYANK JAISWAL | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
RAMDEOBABA UNIVERSITY, NAGPUR | Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
SHRI RAMDEOBABA COLLEGE OF ENGINEERING AND MANAGEMENT, NAGPUR | Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
RINA DAMDOO | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
KANAK KALYANI | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
ANKIT PANDE | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
MAYANK JAISWAL | Ramdeobaba University, Katol road, Nagpur-440013, Maharashtra, INDIA | India | India |
Specification
Description: Field of invention:
This invention relates to an automated assessment system, specifically for grading theoretical and diagrammatic answers using advanced AI, OCR, and NLP technologies. The system is designed to improve grading efficiency, accuracy, and consistency, providing a scalable solution suitable for large-scale educational assessments.
Background of the invention:
Online tests are very popular these days; they came to the forefront and gained popularity during the COVID-19 pandemic. Online tests assess students' skills and knowledge, and the instructor or examiner grades students through the online mode. Various tests, such as written, analytical, and diagram-labelling tests, may be given online.
There are assessments in which students write comments or material on paper and submit it to a website for assessment. In such an exam, the questions are displayed on a device or computer, and students must write their answers on paper, take photos or scan them, and upload them to the website for checking/evaluation. When written papers are collected for a conventional written exam, the instructor or examiner must verify each paper by hand, which takes a great deal of time, effort, and energy. Owing to the human element, checking may be inaccurate, and a genuine document may escape scrutiny.
Such tests are difficult to evaluate in real time. Usually, student answers are reviewed manually, and manual checking is time-consuming and error-prone. There are programs that automatically grade diagrams, written tests, and the like, but auto-grading (automatic correction) of texts or other multimodal solutions is constrained by the abstract and metaphorical nature of student inputs.
Evaluators should be able to evaluate all students' written online multimodal responses in real time, regardless of language or subject, and provide feedback based on their performance. Because objective questions cannot assess all topics, descriptive questions are often used to evaluate students; however, evaluating descriptive questions at the scale of mass education is difficult.
Thus, there is a need for an expanded system for the automatic evaluation of descriptive multimodal answers that can be written in any form and represented by text, drawings/figures, or charts, as well as a mechanism to operate the system and address the above concerns.
Manual correction of answer sheets involves teachers or examiners individually reviewing each student's responses to assess and score their performance. This process includes reading through written answers, comparing them to an answer key or grading rubric, and assigning points based on accuracy, completeness, and relevance. It requires significant attention and consistency to ensure fair and unbiased assessment. With the rise of Large Language Models (LLMs) and Artificial Intelligence (AI), there have been attempts to automate this process. Some of the approaches presently used to evaluate answer sheets are the manual approach, Optical Mark Recognition (OMR), and Eklavvyaa (an AI-assisted assessment tool).
The manual approach refers to the manual correction of answer sheets. Manual correction is a daunting task, as it involves immense complexity and a huge volume of answer sheets. The process is not only time-consuming but also prone to evaluator bias and subjectivity, human error, and a lack of personalized feedback. The pressure on evaluators to meet tight deadlines while ensuring accurate assessment often results in stress and burnout.
Another approach is Optical Mark Recognition (OMR). OMR technology is designed to detect and interpret marks made on paper, such as filled bubbles or checkboxes, commonly used in multiple-choice question (MCQ) exams. However, OMR is limited to fixed-format responses and cannot handle open-ended questions or handwritten answers.
Another popular approach is the use of AI assessment tools designed to evaluate descriptive answers. These tools leverage Natural Language Processing (NLP) combined with generative AI models to analyse student responses, offering personalized feedback to both students and teachers. However, these tools are typically limited to typed responses, which often need to be entered directly on their portal. This constraint restricts their adoption by institutions in India that primarily rely on traditional, paper-based assessments, even where such institutions are willing to adopt digital technologies.
The application IN202221060608 describes a system that scans answer sheets, separates questions using a segmentation model, and evaluates text with NLP and diagrams with optical recognition (Fig. 1 and Fig. 2). However, it lacks sophisticated diagram assessment capability and provides only basic feedback.
Similarly, KR102610681B1 describes an invention that generates metadata from PDFs to create structured reference material, focusing on organizing content rather than on grading accuracy or diagram analysis.
Another application, US11967253B2, describes a method, computer system, and computer program product for semi-automated exam grading (Fig. 3).
In WO2023249388A1, the invention focuses on review learning: the system generates review problems based on learning content but does not assess original answers or provide detailed, annotated feedback on graded content.
Yet another application, KR20240017495A, describes a chatbot-based grading system that uses OCR and NLP to score answers and provides feedback via conversation, but it is limited in diagram assessment and lacks the capability to handle large volumes of traditional paper-based answer sheets.
Manual grading of answer sheets is labour-intensive, prone to subjectivity, and challenging to scale for large examinations. Several existing systems attempt to automate grading, each with limitations. Therefore, there is a need for a robust system that can evaluate both text and diagrams with accuracy and depth, using specialized components for diagram assessment and feedback generation.
Limitations of the existing system:
The following are the limitations of the existing systems that are available:
I. Time-Consuming Manual Grading: Traditional manual grading of answer sheets is labour-intensive and time-consuming, especially for large-scale examinations.
II. Human Error and Subjectivity: Human graders can make mistakes and their assessments can be influenced by subjective biases. Different graders might have varying standards, leading to inconsistent scoring.
III. Inadequate Feedback for Students: Students often receive limited feedback on their performance, hindering their ability to learn from mistakes.
IV. Difficulty in Grading Diagrams: Evaluating hand-drawn diagrams accurately is challenging and time-consuming for both human graders and currently available automated solutions.
V. Delayed Result Declaration: The manual grading process can cause significant delays in result declaration, impacting students' academic timelines.
VI. Lack of Transparency in Grading: Students often lack insight into how their answers were graded and may perceive the process as opaque or unfair.
VII. Challenges in Integrating Textual and Diagrammatic Assessment: Existing systems may struggle to effectively integrate and evaluate both textual and diagrammatic components of an answer.
Summary of the invention:
The present invention therefore differentiates itself from existing inventions by integrating the multiple advanced technologies listed below into a cohesive system that comprehensively evaluates both textual and diagrammatic answers. Unlike prior inventions that focus on either text or diagram assessment separately, or that offer only basic automated grading, the instant invention uniquely combines these capabilities with a robust feedback mechanism and inter-agent communication for context-aware grading, ensuring an accurate, fair, and transparent assessment process across diverse answer formats. The present invention automates the assessment process to significantly reduce the time and effort required for grading theoretical and diagrammatic answers. It also ensures uniform and unbiased grading by eliminating human error and subjective judgment, and provides a system capable of handling a large volume of answer sheets efficiently, suitable for educational institutions and standardized testing. Through this invention, detailed analysis and grading of both textual and diagrammatic answers can be performed using advanced AI techniques. Further, the system can be integrated with existing educational platforms and databases, ensuring seamless operation and data management.
Objectives of the present invention:
The aim of the present invention is to provide the following:
I. Efficiency in Grading: Automate the Assessment process to significantly reduce the time and effort required for grading theoretical and diagrammatic answers.
II. Consistency and Fairness: Ensure uniform and unbiased grading by eliminating human errors and subjective judgment.
III. Scalability: Provide a system capable of handling a large volume of answer sheets efficiently, suitable for educational institutions and standardized testing.
IV. Comprehensive Assessment: Enable detailed analysis and grading of both textual and diagrammatic answers using advanced AI techniques.
V. Transparency and Feedback: Offer detailed feedback to students, explaining the grading process and areas of improvement.
VI. Integration: The system can be integrated with existing educational platforms and databases, ensuring seamless operation and data management.
VII. Student Engagement: Involving students in the review process by allowing them to challenge grades fosters transparency and trust in the assessment system.
Brief description of the drawing:
The features and advantages of the instant invention will become clearer from the following detailed description of the drawings and illustrations, which is to be read in conjunction with the accompanying drawings. The various features of the drawings are not to scale. However, the same will be clear to one skilled in the art in understanding the invention in conjunction with the detailed description.
Fig. 1 : Describes a system and method for automated evaluation of multimodal contents, as available in the prior art.
Fig. 2 : Describes a flowchart representing the various processes for the answer sheet evaluation system.
Fig. 3 : Describes a flowchart depicting a semi-automated long answer exam evaluation process.
Fig. 4 : Describes components depicting the process of automated assessment system of the present invention.
Fig. 5 : Represents the sample answer sheet and its assessment by the system of the present invention.
Fig. 6 : Represents the process diagram depicting the process of automated assessment system of the present invention.
Detailed description of the invention with respect to the drawings:
The following detailed description is made with reference to the technology disclosed. The preferred implementations are presented to exemplify the disclosed technology, not to constrain its scope, which is delineated by the claims. Individuals possessing ordinary skill in the subject matter will acknowledge numerous equivalent modifications of the description.
This section delineates examples of systems, apparatuses, computer-readable storage media, and techniques as per the disclosed implementations. These examples are included exclusively to offer context and facilitate comprehension of the disclosed implementations. It will be evident to those proficient in the field that the disclosed implementations may be executed without certain or all of the specific features supplied. In some cases, specific processes or operational methods, referred to as "blocks," have not been elaborated upon to prevent unnecessary obfuscation of the revealed implementations. Additional implementations and uses are feasible; therefore, the subsequent examples should not be regarded as conclusive or restrictive in either scope or context. This extensive explanation references the accompanying illustrations, which are integral to the description and illustrate specific implementations. While the disclosed implementations are detailed enough for a skilled practitioner to replicate them, it should be noted that these examples are not exhaustive; alternative implementations may be employed, and modifications may be made to the disclosed implementations without deviating from their essence and scope. The sequence of the methods illustrated and detailed herein may vary in different implementations. Furthermore, in some implementations, the disclosed methods may encompass a greater or lesser number of blocks than those stated.
A. Components and the system architecture of the invention:
The present invention integrates various advanced components to automate the grading process for theoretical and diagrammatic answers effectively. By leveraging AI, OCR, NLP, and image recognition technologies, it ensures accuracy, consistency, and fairness in educational assessments. Each component plays a crucial role in the overall functioning of the system, enabling it to deliver comprehensive evaluations and detailed feedback that enhance the educational experience for students and educators alike.
The details of the components used in the present invention for the automatic assessment of the answer sheets (also shown in Fig. 4) are as follows:
1. Physical Answer Sheet Collection and Scanning Unit:
This unit is responsible for capturing physical answer sheets and converting them into digital images. The system employs high-resolution scanners capable of capturing fine details of handwritten and printed responses. After scanning, the images are processed to enhance quality, such as adjusting brightness and contrast to improve OCR accuracy.
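The patent does not name a specific imaging library or preprocessing parameters. As an illustration only, the kind of contrast enhancement and noise suppression described here could be sketched with OpenCV (`cv2`); the file names and parameter values below are placeholders.

```python
import cv2

def preprocess_scan(in_path: str, out_path: str) -> None:
    """Illustrative clean-up of a scanned answer sheet before OCR."""
    image = cv2.imread(in_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)          # drop colour, keep ink strokes
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(gray)                             # local brightness/contrast boost
    denoised = cv2.fastNlMeansDenoising(contrast, h=10)      # suppress scanner noise
    cv2.imwrite(out_path, denoised)

preprocess_scan("sheet_001.png", "sheet_001_clean.png")      # placeholder file names
```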
2. OCR (Optical Character Recognition) and Handwriting Recognition Unit:
This unit converts printed and handwritten text into machine-readable formats for further processing. The OCR unit recognizes printed text by analyzing the shapes of characters, while handwriting recognition uses machine learning algorithms to interpret varying handwriting styles. This unit outputs a structured text format that is essential for the subsequent text assessment processes.
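As a sketch only: printed-text extraction of this kind could use the open-source Tesseract engine via `pytesseract`, shown below. The patent does not name an OCR engine, and handwriting recognition would require a separately trained model, which is not covered by this sketch.

```python
import pytesseract
from PIL import Image

def extract_text(image_path: str) -> str:
    """Extract machine-readable text from a preprocessed scan (printed text only).
    Handwriting would need a dedicated recognition model trained on handwritten samples."""
    return pytesseract.image_to_string(Image.open(image_path))

print(extract_text("sheet_001_clean.png"))   # placeholder file name
```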
3. Textual Answer Processing Unit:
This unit evaluates and scores textual answers against predefined answer keys. It utilizes advanced Natural Language Processing (NLP) models to parse the extracted text and employs techniques such as keyword extraction, semantic analysis, and context understanding to determine the relevance and accuracy of the responses. Scoring algorithms compare student responses to model answers, allowing for scoring based on accuracy, completeness, and relevance.
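A minimal sketch of such keyword-plus-semantics scoring is shown below, assuming the `sentence-transformers` library with an `all-MiniLM-L6-v2` embedding model. The 0.7/0.3 weighting and the helper name are illustrative assumptions, not values from the patent.

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed general-purpose embedding model

def score_text_answer(student_answer: str, model_answer: str,
                      keywords: list[str], max_marks: float) -> float:
    """Blend semantic similarity with keyword coverage (weights are illustrative)."""
    similarity = util.cos_sim(embedder.encode(student_answer),
                              embedder.encode(model_answer)).item()
    coverage = sum(kw.lower() in student_answer.lower()
                   for kw in keywords) / max(len(keywords), 1)
    return round(max_marks * (0.7 * similarity + 0.3 * coverage), 2)
```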
4. Diagram Processing Unit:
This unit specifically assesses diagrammatic answers, ensuring accuracy and adherence to expected standards.
a. Diagram Segmentation : This process involves isolating diagrammatic elements from the textual content using image segmentation techniques.
b. Feature Extraction : The system identifies key features in diagrams, such as shapes, labels, and spatial relationships, essential for evaluation.
c. Pattern Recognition: Utilizing machine learning algorithms (e.g., CNNs - Convolutional Neural Networks), this component compares the extracted diagrams against correct templates, assessing their accuracy based on predefined criteria.
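A minimal sketch of the template-comparison step in item (c) is shown below, assuming PyTorch/torchvision with a pretrained ResNet-18 used as a generic feature extractor; the patent specifies only "CNNs" and does not name an architecture or similarity measure.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone used as a generic feature extractor (an assumption; requires
# a recent torchvision for the weights enum).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # keep the 512-d embedding, drop the classifier
backbone.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def embed_diagram(path: str) -> torch.Tensor:
    with torch.no_grad():
        return backbone(prep(Image.open(path).convert("RGB")).unsqueeze(0))

def diagram_similarity(student_diagram: str, reference_template: str) -> float:
    """Cosine similarity between a student's diagram and the reference template."""
    return F.cosine_similarity(embed_diagram(student_diagram),
                               embed_diagram(reference_template)).item()
```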
5. Text-Diagram Relationship Analyzer:
This unit ensures coherence and relevance between textual and diagrammatic responses. The function of this unit is to cross-verify the information presented in the text with the diagrams. For example, if a student describes a diagram, this unit checks if the description aligns with the features identified in the diagram. It utilizes NLP techniques to analyze linguistic structures and semantic content, providing a comprehensive assessment of the relationship between text and diagram.
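The patent does not specify how this cross-verification is computed. A very simple label-coverage check, with hypothetical inputs, might look like the following; a fuller implementation could use embedding similarity instead of exact matching.

```python
def text_diagram_consistency(description: str, diagram_labels: list[str]) -> float:
    """Fraction of labels found in the diagram that are also mentioned in the
    student's written answer; a low value flags a possible text/diagram mismatch."""
    text = description.lower()
    mentioned = [label for label in diagram_labels if label.lower() in text]
    return len(mentioned) / max(len(diagram_labels), 1)
```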
6. Complex Diagram Classifier:
This unit identifies and classifies components of complex diagrams, ensuring accurate evaluation. This is accomplished by deploying deep learning techniques to analyze intricate diagrams, classifying components into predefined categories (e.g., flowcharts, scientific illustrations). This helps in grading based on the presence and correctness of essential elements within the diagram.
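As a sketch only: one way to realize such a classifier is a pretrained CNN backbone with a small classification head, trained on labelled diagram images. The class names below are placeholders, since the patent gives only flowcharts and scientific illustrations as examples of predefined categories.

```python
import torch.nn as nn
from torchvision import models

DIAGRAM_CLASSES = ["flowchart", "scientific_illustration", "chart", "other"]  # placeholder labels

class ComplexDiagramClassifier(nn.Module):
    """Pretrained CNN backbone with a small classification head (illustrative only)."""
    def __init__(self, num_classes: int = len(DIAGRAM_CLASSES)):
        super().__init__()
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x):
        return self.backbone(x)
```

Such a model would still need to be fine-tuned on a labelled corpus of diagrams before it could classify components reliably.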
7. Grading and Feedback Unit:
This unit compiles the scores and generates detailed feedback for each answer. It synthesizes the assessments from both textual and diagrammatic evaluations to calculate overall scores. Feedback is generated based on the analysis, highlighting strengths and areas for improvement, and is annotated directly on the digital answer sheets.
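A minimal sketch of how textual and diagrammatic scores could be combined into marks and a feedback note is given below; the weighting, thresholds, and messages are illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class AnswerResult:
    question_id: str
    text_score: float       # normalised 0..1
    diagram_score: float    # normalised 0..1
    max_marks: float

def grade_with_feedback(result: AnswerResult, text_weight: float = 0.6) -> dict:
    """Combine textual and diagrammatic scores (illustrative weights) and attach a
    short feedback note for annotation on the digital answer sheet."""
    total = result.max_marks * (text_weight * result.text_score
                                + (1 - text_weight) * result.diagram_score)
    notes = []
    if result.text_score < 0.5:
        notes.append("written explanation misses key points of the model answer")
    if result.diagram_score < 0.5:
        notes.append("diagram deviates from the expected template")
    return {"question": result.question_id,
            "marks": round(total, 2),
            "feedback": "; ".join(notes) or "meets the expected answer"}
```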
8. Result Management System:
This unit manages overall grading results, allowing students to access their scores and feedback. The system compiles all graded responses and feedback into a cohesive format. It also allows students to view annotated answer sheets, providing transparency in grading and enabling them to review their performance.
B. Process Flow for the Automated assessment of the answer sheets as per the invention:
According to the present invention, and in accordance with Fig. 6 and Fig. 5, the physical answer sheets are collected and then scanned by conventional high-quality scanners to produce high-quality digital images.
After scanning is completed, the high-resolution scanned images are sent for Optical Character Recognition (OCR), wherein the text content is extracted from the scanned document. The OCR unit recognizes printed text by analyzing the shapes of characters, while handwriting recognition uses machine learning algorithms to interpret varying handwriting styles. This unit outputs a structured text format that is essential for the subsequent text assessment processes. The process also extracts and converts handwritten or printed answers into machine-readable formats.
After the OCR process, the extracted text is analysed using the NLP models. To assess the correctness and applicability of the answers, the unit makes use of methods including keyword extraction, semantic analysis, and context comprehension. Scoring algorithms assess relevance, completeness, and correctness by comparing student answers to model answers.
Once the text and the diagrams are segregated and the text portion is analysed, the assessment of the diagrams is carried out. The diagrams are segmented and analysed for features, patterns, and accuracy; this involves isolating the diagrammatic content and applying advanced recognition techniques. Diagrammatic answers are specifically evaluated at this stage, with the goal of ensuring accuracy and conformity to the required criteria. In the diagram segmentation step, the diagrammatic elements are separated from the textual material using image segmentation techniques. Through feature extraction, the system identifies crucial aspects of the diagrams, such as shapes, labels, and spatial relationships, which are essential for evaluation. In the next step, pattern recognition, machine learning methods (such as Convolutional Neural Networks, CNNs) are used to compare the extracted diagrams to the correct templates, evaluating their accuracy against the predefined criteria.
After the diagrams have been extracted and compared with the predefined templates, the data is sent to the Text-Diagram Relationship Analyser, which ensures the coherence and relevance of diagrammatic and textual responses. This unit verifies the information presented in the text by comparing it to the diagrams. For instance, when a student provides a description of a diagram, this unit determines whether the description corresponds with the features identified in the diagram. It employs natural language processing (NLP) techniques to evaluate semantic content and linguistic structures, thereby offering a thorough evaluation of the correlation between text and diagram. Complex diagrams are sent to the complex diagram classifier, which is responsible for accurately evaluating complicated diagrams by identifying and classifying their components. To achieve this, deep learning methods are used to examine complex diagrams and sort their parts into known categories (such as scientific illustrations or flowcharts). The essential parts of the diagram must be present and accurate for the grading process to work.
Once the text and the drawings are analysed, the data is sent to the grading and feedback unit. This unit compiles the scores and generates detailed feedback for each answer, synthesizing the assessments from both the textual and diagrammatic evaluations to calculate overall scores. Feedback is generated based on the analysis, highlighting strengths and areas for improvement, and is annotated directly on the digital answer sheets.
Finally, the data is sent to the Result Management Unit. The aggregate grading results are managed by this unit, which enables students to access their scores and feedback. The system organizes all feedback and graded responses into a unified format. It also enables students to evaluate their performance and provides transparency in grading by allowing them to view annotated answer sheets.
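To tie the stages of this process flow together, the sketch below chains the illustrative helper functions introduced earlier for a single, already-segmented answer. Segmentation of the scanned sheet into per-question text and diagram crops, and the result-management step, are omitted; the `key` record is a hypothetical answer-key structure, and the multiplication by the consistency score is an illustrative way of penalising text/diagram mismatch.

```python
def assess_single_answer(answer_text: str, diagram_crop: str, key: dict) -> dict:
    """Chain the illustrative helpers sketched above for one segmented answer."""
    text_score = score_text_answer(answer_text, key["model_answer"],
                                   key["keywords"], max_marks=1.0)          # 0..1
    diag_score = diagram_similarity(diagram_crop, key["template"])          # 0..1
    consistency = text_diagram_consistency(answer_text, key["labels"])      # 0..1
    result = AnswerResult(question_id=key["id"],
                          text_score=text_score,
                          diagram_score=diag_score * consistency,           # penalise mismatch
                          max_marks=key["max_marks"])
    return grade_with_feedback(result)
```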
While some embodiments of the present disclosure have been illustrated and described, these are purely exemplary in nature, and the disclosure is not restricted to the examples given here. Those versed in the field will appreciate that many more variations are possible without departing from the original ideas presented. All such changes, modifications, variations, replacements, and the like are covered by this disclosure. The inventive subject matter is limited only by the protective scope of the appended claims.
Claims:
1. An automated system for evaluating the textual and diagrammatic answers from answer sheets, the system comprising:
a. a scanning device, wherein the physical answer sheets are collected and then scanned by conventional high-quality scanners to produce high-quality digital images;
b. an Optical Character Recognition (OCR) unit, wherein, after processing, the text content from the scanned document is extracted;
c. a textual answer processing unit, wherein the unit utilizes advanced Natural Language Processing (NLP) models to parse the extracted text;
d. a diagram processing unit to assess the diagrammatic answers;
e. a Text-Diagram Relationship Analyser;
f. a complex diagram classifier; and
g. a grading and feedback unit;
wherein the system analyses the answer sheets based on the NLP model and also processes the diagrams to provide improved grading efficiency with accurate and consistent results.
2. An automated system as claimed in claim 1, wherein the textual answer processing unit uses an advanced Natural Language Processing (NLP) model to parse the text extracted from the document.
3. An automated system as claimed in claim 1, wherein the diagram processing unit has:
a. a Diagram segregation unit wherein the diagrams are isolated from the textual contents;
b. a feature extraction unit which identifies the key features in diagram for evaluation;
c. a pattern recognition unit which compares the extracted diagrams against the correct templates using machine learning algorithms such as Convolutional Neural Networks (CNNs);
wherein the extracted image or diagram is compared with a reference template to assess the accuracy based on predefined criteria.
4. An automated system as claimed in claim 1, wherein the relationship between the extracted text and the extracted image/diagram is analysed by the Text-Diagram Relationship Analyser, using an NLP model, to ensure coherence and relevance between textual and diagrammatic responses.
5. An automated system as claimed in claim 1, wherein the complex diagram classifier identifies and classifies components of complex diagrams, ensuring accurate evaluation using 'Deep Learning' models.
6. An automated system as claimed in claim 1, wherein the grading and feedback unit compiles the overall scores for the answer sheets and provides detailed feedback for each answer.
7. An automated system as claimed in claim 6, wherein the overall scores are evaluated using the assessments from both textual and diagrammatic evaluations.
8. An automated system as claimed in claim 6 or 7, wherein the feedback is generated based on the analysis highlighting the strengths and areas for improvement.
9. An automated system as claimed in claim 1, wherein the result management system compiles all the graded responses and feedback into a cohesive format.
Documents
Name | Date |
---|---|
202421087294-COMPLETE SPECIFICATION [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-DRAWINGS [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-EDUCATIONAL INSTITUTION(S) [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-EVIDENCE FOR REGISTRATION UNDER SSI [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-FORM 1 [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-FORM 18 [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-FORM FOR SMALL ENTITY(FORM-28) [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-FORM-9 [12-11-2024(online)].pdf | 12/11/2024 |
202421087294-REQUEST FOR EARLY PUBLICATION(FORM-9) [12-11-2024(online)].pdf | 12/11/2024 |