
A SYSTEM FOR SKETCHING CULPRIT PORTRAIT

ORDINARY APPLICATION (Published)

Filed on 18 November 2024

Abstract

ABSTRACT
A SYSTEM FOR SKETCHING CULPRIT PORTRAIT
The present invention captures a spoken description and develops it into a highly detailed forensic facial sketch through a multi-step, technology-driven process. The speech input is first transcribed into high-quality text using state-of-the-art speech recognition systems. The transcription is then analyzed with NLP techniques to extract key facial descriptors, such as "curly hair" or "long nose," which are cross-checked against historical records for accuracy. Using these descriptors, the system generates a photorealistic facial image that is artistically rendered to look like a sketch. The system permits the user to make real-time changes through voice prompts, allowing unlimited iterations over the sketch. Once the user is satisfied with the sketch, it can be subjected to forensic scrutiny, captured, or shared. This process translates verbal descriptions into forensic sketches that are immediately and greatly helpful to an effective investigation. In addition, the system provides a feature to cross-check the sketch with CCTV footage if required and to find a time log of the person's entries and exits. Fig 1

Patent Information

Application ID: 202441088995
Invention Field: ELECTRONICS
Date of Application: 18/11/2024
Publication Number: 47/2024

Inventors

Name | Address | Country | Nationality
M.VAISHNAVIA | Student, Department of IT, K.Ramakrishnan College of Engineering, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India
V.VIVEHA | Student, Department of IT, K.Ramakrishnan College of Engineering, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India
K.M.MOHAMED FAARIS | Student, Department of CSE, K.Ramakrishnan College of Engineering, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India
M.MOHAMED ABU BAKKAR | Student, Department of CSE, K.Ramakrishnan College of Engineering, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India
A.PAVITHRA | Assistant Professor, Department of IT, K.Ramakrishnan College of Engineering, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India
M.RUBA | Assistant Professor, Department of CSE, K.Ramakrishnan College of Engineering, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India

Applicants

Name | Address | Country | Nationality
K.RAMAKRISHNAN COLLEGE OF ENGINEERING | The Principal, K.Ramakrishnan College of Engineering, NH-45, Samayapuram, Trichy, Tamil Nadu, India-621112 | India | India

Specification

Description: FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003


COMPLETE SPECIFICATION
(See Section 10; rule 13)

TITLE OF THE INVENTION
A SYSTEM FOR SKETCHING CULPRIT PORTRAIT

APPLICANT
K.RAMAKRISHNAN COLLEGE OF ENGINEERING
NH-45, Samayapuram,
Trichy, Tamilnadu, India- 621112


The following specification particularly describes the invention and the manner in which it is to be performed.
A SYSTEM FOR SKETCHING CULPRIT PORTRAIT
TECHNICAL FIELD
The present invention relates to the field of forensic technology. More specifically, it pertains to an advanced system for generating detailed forensic facial sketches based on verbal descriptions. The system is designed to efficiently translate spoken inputs into realistic facial sketches through a multi-step, technology-driven process.
BACKGROUND
Traditionally, suspect identification relies on an artist working from a witness description, which can lead to inconsistencies and failed sketches. Skilled forensic artists are scarce, and searching for one consumes valuable time; very few artists can draw a sketch accurately from a witness description alone.
In manual sketching, the artist takes considerable time to complete a portrait and must repeatedly ask for corrections to match the witness's memory, which can delay the investigation. Traditional sketches are also difficult to compare against modern digital forensic systems without additional AI tools.
When a witness gives an incomplete description, a traditional artist struggles to fill the gaps, producing less accurate results. Conventional methods do not compare the sketch with past records and offer no way to refine it afterwards, leading to an inability to generate accurate sketches.
Traditional sketching cannot be adjusted based on witness feedback: once the artist completes the portrait, the process cannot be rerun. Because accurate forensic sketching requires specialized artistic skill, the process is inaccessible to smaller law enforcement agencies without professional sketch artists.
Hiring and retaining skilled forensic sketch artists is costly, especially for smaller agencies with limited budgets, making high-quality forensic sketches difficult to produce. Finally, manually matching a face against CCTV footage and counting the number of times a person entered and exited is difficult.
These existing problems underline the need for an innovative, automated solution that can efficiently generate and refine forensic sketches, supporting modern investigation processes and improving the speed and accuracy of suspect identification.
OBJECT OF THE INVENTION
This invention aims to transform suspect identification in law enforcement using AI-driven technology that creates facial sketches automatically from witness statements, improving efficiency and accuracy while enhancing privacy for the witness. Manual sketching is cumbersome, time-consuming, and inconsistent between artists; this AI system takes less time and provides more consistent results. It minimizes the subjectivity of hand drawings by refining them with witness feedback and historical data. Coupling facial recognition with a criminal database means the generated sketches can be matched against past records almost instantly, yielding actionable leads. Through a user-friendly interface, the AI enables witnesses and victims to contribute directly and securely to the investigation, ensuring their privacy is not breached and thereby encouraging participation.
The system is also designed to reduce human bias and error by training its AI models on varied datasets, identifying suspects more fairly while improving the speed and effectiveness of the process. As a self-improving system, it updates with feedback and case results, so it remains effective over time as technologies develop in parallel with changes in criminal behavior.
This approach increases public security and builds community confidence in working with the police to prevent crime, while maintaining the anonymity of information providers through encryption and other safety measures.
SUMMARY
Our invention is an advanced forensic technology system that transforms verbal descriptions into highly detailed facial sketches. It starts by capturing speech inputs and transcribing them using speech recognition technology. Natural Language Processing (NLP) techniques then analyze the transcription to extract key facial features such as "curly hair" or "long nose." These features are verified against historical records for accuracy.
Using the identified descriptors, the system generates a photorealistic, sketch-like image. Users can refine the sketch iteratively through voice commands until the desired result is achieved. The final forensic-quality sketch can be shared or stored, enhancing the efficiency of criminal investigations by quickly converting verbal descriptions into detailed visual representations. An additional option matches the sketch against CCTV footage if needed, generating a time log of the number of times a person entered and exited.
BRIEF DESCRIPTION OF THE DRAWINGS
The first step in our solution is to analyze the speech input from the witness and convert it into text with the help of speech recognition models such as Mozilla's DeepSpeech.
Next, we apply Natural Language Processing (NLP) techniques to extract relevant facial descriptors from the text. Once the structured facial data is obtained from the speech, the face generation phase begins.
The generated face is then passed through a sketch transformation model. This model applies a filter to convert the realistic image into a pencil-sketch-like representation, helping to create the most accurate image of the culprit.
At each stage of sketching, the generated sketch is cross-checked against past criminal records from the police database. If the sketch matches a record, that record is displayed; if not, the sketching process continues.
To create an accurate sketch, the system allows for iterative, voice-driven adjustments. Users can provide further voice commands such as "make the chin sharper" or "add a beard" to modify the sketch in real time. These additional commands refine the sketch so that it better matches the witness's description of the culprit.
The final output is an artistic, hand-drawn-style sketch of the culprit described by the witness, displayed on the user interface. The user is then given the opportunity to match the finalized sketch against particular CCTV footage.
If a match against CCTV footage is needed, the sketch is taken as input and matched against the footage using face recognition AI. From this, the AI generates a time log specifying the number of times a person entered and exited.
DETAILED DESCRIPTION
Our invention serves the field of forensics by accurately sketching the face of a suspect based on a witness's vocal description. It is a systematic approach that transforms spoken descriptions into detailed human face sketches and assists the investigation process, using advanced technologies such as speech recognition, natural language processing (NLP), and Generative Adversarial Networks (GANs).
The functioning of the invention can be divided into the following steps:
The process starts with capturing speech input, followed by transforming the speech into text and extracting facial features; face generation then begins from the witness's description, and an iterative refinement service is available to make the sketch more accurate.
The invention also matches the generated face against historical criminal records from the police database and provides an additional service of matching the sketch with CCTV footage if needed. By matching the sketch with CCTV footage, the system can find the number of times a person entered and exited in the footage and create a time log.
The system begins by recording the spoken input from witnesses or informants using a high-quality microphone. The goal at this stage is to capture the spoken words clearly for later processing. To ensure that the recorded audio is as clean as possible, background noise is minimized during the recording process using a noise-canceling microphone or by utilizing noise reduction techniques in the environment. Once the audio is recorded, it is saved in a clean and noise-free format for transcription in the next steps. This recorded speech input will eventually serve as the basis for generating the sketch, but at this stage, the focus is purely on accurately capturing the spoken description of the suspect or relevant details for future processing.
Speech recognition systems such as Mozilla's DeepSpeech convert spoken words into text. DeepSpeech is an open-source system that uses deep learning; it relies on a type of artificial neural network called a bi-directional Long Short-Term Memory (bi-LSTM). This architecture excels at using both past and future speech context, which helps it interpret and transcribe speech even in complex or unclear situations. It is trained on a wide range of spoken language, so it can handle different accents, speech styles, and nuances. Once the system has transcribed the speech into text, that text is used to generate a detailed sketch based on the description.
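The transcription step can be sketched as follows. The WAV handling below uses only the Python standard library and runs as-is; the DeepSpeech call itself (shown commented out) assumes the `deepspeech` package and a downloaded acoustic model file, so the model filename there is illustrative, not prescribed by the specification.

```python
# Audio preparation for the transcription step: read/write mono 16-bit
# PCM WAV files, the input format DeepSpeech expects (16 kHz sample rate).
import struct
import wave


def load_pcm16(path):
    """Read a mono 16-bit WAV file into a list of int samples."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2 and wf.getnchannels() == 1
        frames = wf.readframes(wf.getnframes())
    return list(struct.unpack("<%dh" % (len(frames) // 2), frames))


def write_pcm16(path, samples, rate=16000):
    """Write int samples back out as a mono 16-bit WAV (16 kHz default)."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(rate)
        wf.writeframes(struct.pack("<%dh" % len(samples), *samples))


# Hypothetical transcription call -- requires the deepspeech package and a
# downloaded model file (the filename below is an assumption), hence commented:
#
#   import numpy as np
#   from deepspeech import Model
#   ds = Model("deepspeech-0.9.3-models.pbmm")
#   text = ds.stt(np.array(load_pcm16("witness.wav"), dtype=np.int16))
```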
After obtaining the transcribed text, the system uses Natural Language Processing (NLP) to analyze and extract the facial descriptors crucial for generating the sketch. In Python, it uses two main packages: spaCy and NLTK (the Natural Language Toolkit), which together provide powerful text processing and extraction capabilities. First, spaCy supplies pre-trained NLP models that excel at named entity recognition, dependency parsing, and part-of-speech tagging. Through a custom spaCy pipeline, the system isolates and tags specific descriptors such as "round face," "long nose," "curly hair," and "thick eyebrows" based on pre-defined entities that represent physical features.
To further refine and structure the descriptors, the system incorporates NLTK for advanced text analysis. Using NLTK, it tokenizes the transcribed text and applies semantic analysis to identify patterns that describe facial features. For example, the system uses word-similarity metrics to ensure terms like "large nose" and "big nose" are treated as equivalent, making descriptor extraction more robust. These NLP techniques automatically categorize the transcribed words into structured, relevant facial attributes.
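As a rough, dependency-free illustration of descriptor extraction with synonym normalization, the sketch below uses a hand-built phrase vocabulary in place of the spaCy/NLTK pipelines described above; the feature names and synonym map are illustrative assumptions, not the real trained models.

```python
# Toy descriptor extractor: normalize synonymous phrases, then map known
# surface phrases onto canonical (feature, value) facial attributes.

SYNONYMS = {
    "big nose": "large nose",
    "huge nose": "large nose",
    "wavy hair": "curly hair",
}

FEATURES = {
    "large nose": ("nose", "large"),
    "long nose": ("nose", "long"),
    "curly hair": ("hair", "curly"),
    "round face": ("face", "round"),
    "thick eyebrows": ("eyebrows", "thick"),
}


def extract_descriptors(transcript):
    """Return a {feature: value} dict for every known phrase in the text."""
    text = transcript.lower()
    # Normalize synonymous phrases first ("big nose" -> "large nose").
    for phrase, canonical in SYNONYMS.items():
        text = text.replace(phrase, canonical)
    found = {}
    for phrase, (feature, value) in FEATURES.items():
        if phrase in text:
            found[feature] = value
    return found


print(extract_descriptors("He had a big nose, curly hair and a round face"))
# -> {'nose': 'large', 'hair': 'curly', 'face': 'round'}
```

A production pipeline would replace the phrase tables with spaCy entity rules and NLTK similarity metrics, but the output shape (a structured descriptor dictionary) is the same.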
With both spaCy and NLTK, the system then compiles the extracted descriptors into a structured dataset. This dataset functions as a blueprint for sketch generation, with each descriptor translating into a command for creating the corresponding facial feature. This structured approach ensures that the relevant descriptors are clearly defined, organized, and ready for the face generation phase.
After generating the sketch, the system implements a cross-referencing process to compare the new sketch with a database of historical criminal records. For this, it uses Python's `face_recognition` package, a powerful facial recognition library. The package encodes the features of a face image into a numerical format, enabling precise comparisons between the generated sketch and the images in the historical records. The system first converts each criminal's stored photograph into a facial encoding vector using `face_recognition`, then creates an encoding for the newly generated sketch and compares it with the stored vectors, calculating similarity to identify potential matches.
To manage and query the database of historical records, the system uses `SQLite` in Python, which provides a lightweight and efficient SQL database for storing past criminal records. The data is organized to include details such as names, crime history, and the associated facial encoding for each individual. By querying the database for similar encodings, the system can efficiently retrieve any matching records. If a match is found, the system displays the criminal's history, giving investigators valuable information on any prior offenses, patterns, or known behaviors of the suspect. This database connection and comparison are key to helping law enforcement quickly identify suspects with a known history of similar crimes.
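A minimal sketch of the record store and similarity query, using the standard-library `sqlite3` module. Real `face_recognition` encodings are 128-dimensional float vectors; the 4-dimensional vectors, names, and threshold below are stand-ins so the example stays self-contained.

```python
# Store (name, history, face encoding) rows in SQLite and find the
# stored face closest to a sketch encoding, if any is within threshold.
import json
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE criminals (name TEXT, history TEXT, encoding TEXT)")
records = [
    ("A. Example", "burglary x2", [0.1, 0.9, 0.3, 0.5]),  # mock encodings
    ("B. Example", "fraud",       [0.8, 0.1, 0.7, 0.2]),
]
for name, history, enc in records:
    conn.execute(
        "INSERT INTO criminals VALUES (?, ?, ?)",
        (name, history, json.dumps(enc)),
    )


def best_match(sketch_encoding, threshold=0.6):
    """Return (name, history) of the closest stored face, or None."""
    best = None
    for name, history, enc_json in conn.execute(
        "SELECT name, history, encoding FROM criminals"
    ):
        enc = json.loads(enc_json)
        dist = math.dist(sketch_encoding, enc)  # Euclidean distance
        if dist < threshold and (best is None or dist < best[0]):
            best = (dist, name, history)
    return best and (best[1], best[2])


print(best_match([0.12, 0.88, 0.31, 0.52]))  # close to the first record
```

`face_recognition.compare_faces` uses the same idea internally: a Euclidean distance on encoding vectors against a tolerance (0.6 by default).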
If no match is found in the database, the sketch generation process continues until more details or refinement yield a potential match. This iterative process, combined with an established database, ensures a streamlined approach to identifying suspects and provides critical insights to aid the investigation, drawing on past crime records. The records also allow investigators to analyze if the suspect has a specific method of committing crimes, as patterns from previous offenses can provide important leads.
After extracting the facial descriptors from the text, the system uses Generative Adversarial Networks (GANs) to generate a realistic facial image based on those descriptors. In Python, it leverages `TensorFlow` and `Keras` to design and train the GAN. A GAN consists of two main components: the generator and the discriminator. The generator creates facial images from the provided feature descriptors, while the discriminator evaluates the generated images against real images to ensure accuracy. By iteratively training these networks, the generator becomes skilled at creating realistic images that closely match the provided descriptors. Descriptors such as "round face" or "curly hair" serve as input features that guide the GAN, ensuring the generated facial image accurately represents the characteristics specified in the verbal description.
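Training the GAN itself requires TensorFlow/Keras and trained weights, so it is not reproduced here; the runnable fragment below shows only how the extracted descriptors might be encoded into the fixed-length condition vector a conditional generator consumes. The attribute list is an illustrative assumption.

```python
# One-hot encode a {feature: value} descriptor dict into a condition
# vector; a conditional GAN generator would take this alongside noise.

ATTRIBUTES = [
    ("face", "round"), ("face", "oval"),
    ("nose", "large"), ("nose", "long"),
    ("hair", "curly"), ("hair", "straight"),
    ("eyebrows", "thick"),
]


def to_condition_vector(descriptors):
    """One-hot encode {feature: value} descriptors over ATTRIBUTES."""
    return [
        1.0 if descriptors.get(feature) == value else 0.0
        for feature, value in ATTRIBUTES
    ]


vec = to_condition_vector({"face": "round", "hair": "curly"})
print(vec)  # -> [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```

In the Keras setup the specification describes, this vector would be concatenated with the generator's noise input so each descriptor steers the generated face.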
To allow for iterative refinement, the system integrates a real-time feedback loop using Python's `speech_recognition` package to capture additional voice commands from the user. With each command, such as "make the chin sharper" or "add a thicker beard," the system re-processes the command and updates the descriptors accordingly. For instance, if a command modifies the chin shape, the facial descriptors are adjusted to reflect a sharper chin, and the GAN regenerates the sketch with these updates. This feedback loop ensures the sketch remains aligned with the user's evolving input.
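A toy version of the refinement loop, assuming commands have already been transcribed to text by `speech_recognition`; the command grammar below is a simplified stand-in for the real NLP parsing.

```python
# Map known voice commands (as text) onto descriptor updates; the updated
# descriptors would then be re-encoded and fed back to the generator.

COMMAND_RULES = {
    "make the chin sharper": ("chin", "sharp"),
    "add a beard": ("beard", "present"),
    "add a thicker beard": ("beard", "thick"),
    "make the hair straight": ("hair", "straight"),
}


def apply_command(descriptors, command):
    """Return a new descriptor dict with the command applied, if known."""
    updated = dict(descriptors)
    rule = COMMAND_RULES.get(command.strip().lower())
    if rule:
        feature, value = rule
        updated[feature] = value
    return updated


state = {"hair": "curly"}
state = apply_command(state, "make the chin sharper")
state = apply_command(state, "add a thicker beard")
print(state)  # -> {'hair': 'curly', 'chin': 'sharp', 'beard': 'thick'}
```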
After generating a realistic facial image, the system applies a sketch transformation model to convert it into a pencil-sketch-like representation. In Python, it uses `OpenCV` to achieve this transformation by applying edge-detection filters, which highlight the contours and details of the face while reducing color information, creating a sketch effect. Additionally, OpenCV's `pencilSketch` filter from `cv2` is used to refine this transformation, allowing control over shading intensity and stroke thickness for a detailed, lifelike pencil-sketch appearance. This transformation preserves key facial features while ensuring the output has the look and feel of a hand-drawn sketch, a format commonly preferred for forensic studies.
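The classic pencil-sketch recipe (invert, blur, colour-dodge) that OpenCV applies can be demonstrated on plain nested lists, so the sketch below runs without `cv2`; it is a pedagogical stand-in under that assumption, not the production filter.

```python
# Pencil-sketch effect on a 2D grayscale image (values 0-255): invert,
# box-blur the inverse, then colour-dodge the original by the blur.

def box_blur(img, radius=1):
    """Simple box blur; clamps at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out


def pencil_sketch(gray):
    """Colour-dodge the image by a blurred copy of its inverse."""
    inverted = [[255 - p for p in row] for row in gray]
    blurred = box_blur(inverted)
    return [
        [min(255, p * 255 // (256 - b)) for p, b in zip(prow, brow)]
        for prow, brow in zip(gray, blurred)
    ]


flat = [[128] * 4 for _ in range(4)]
print(pencil_sketch(flat)[0])  # -> [253, 253, 253, 253]
```

Flat regions dodge to near-white while edges (where the blurred inverse diverges from the pixel) stay dark, which is what gives the pencil-stroke look.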
To display the final output to forensic experts, the system provides a user interface built with `Tkinter`, enabling quick interaction and visualization of the sketch. The interface showcases each stage of refinement, allowing experts to review the progression from the original GAN-generated image to the final sketch. This provides transparency into how the image was refined and lets experts see whether additional adjustments are needed. Forensics teams can then analyze the sketch directly in the interface, with options to zoom, annotate, or save the sketch for the record.
By presenting the final output in this way, the system significantly reduces time in preparing sketches for investigations, as experts receive a ready-to-use, highly detailed sketch that aligns with their input and real-time refinements. This streamlined process accelerates the identification and investigative phases, enabling forensic experts to use the sketch as a reliable tool in their analysis and decision-making. The sketch, having been iteratively refined for accuracy, becomes a valuable asset in moving the investigation forward based on the best possible visual representation of the suspect.
In addition to generating the sketch, the system offers an optional service to match the generated sketch against CCTV footage, helping investigators track the suspect's movements. If the police request this service, they can upload the specific CCTV footage for analysis. The system uses the generated sketch as a reference to identify and monitor the suspect in the footage, applying Python's `face_recognition` package to perform facial recognition on each person appearing in the video and comparing each detected face with the generated sketch.
To process the footage efficiently, the system utilizes OpenCV to break down the video into individual frames and isolate each face within the frames. Using face_recognition, the system encodes each face detected in the footage and compares it to the encoding of the generated sketch. For each match found, the system logs the entry and exit times, counting each appearance to generate a time log of the suspect's movements. This log includes the precise times the individual entered and exited the camera's view, providing a comprehensive record of their activity in the specified footage.
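The time-log step can be sketched independently of the face matching: given a per-frame matched/not-matched sequence (which `face_recognition` would produce in the real system), the entry/exit intervals follow mechanically. The frame rate and match flags below are illustrative.

```python
# Derive (entry, exit) times in seconds from a per-frame match sequence.

def time_log(matches, fps=1.0):
    """Turn a per-frame match sequence into (entry, exit) times in seconds."""
    intervals = []
    entry = None
    for i, matched in enumerate(matches):
        if matched and entry is None:
            entry = i / fps                      # suspect appears
        elif not matched and entry is not None:
            intervals.append((entry, i / fps))   # suspect leaves
            entry = None
    if entry is not None:                        # still visible at the end
        intervals.append((entry, len(matches) / fps))
    return intervals


# Frames 0-9 at 1 fps: the suspect is present in frames 2-4 and 7-8.
matches = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
log = time_log(matches, fps=1.0)
print(log)       # -> [(2.0, 5.0), (7.0, 9.0)]
print(len(log))  # number of appearances: 2
```

The length of the returned list gives the "number of times a person entered and exited" that the specification reports.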
This additional capability not only enhances the identification process but also offers insights into the suspect's behavior and potential movement patterns. The time logs allow investigators to track the suspect's frequency of visits, timing, and possible locations, offering valuable information to support the investigation and refine further search efforts.
The main aim of our innovation is to assist the police department in accelerating their investigation process. By automating key tasks such as sketch generation, cross-referencing criminal records, and analyzing CCTV footage, the system significantly reduces the time and effort required for suspect identification and tracking. It minimizes dependency on manual sketch artistry and provides highly accurate, adaptable sketches that evolve based on real-time feedback. This streamlined approach allows investigators to move quickly from suspect identification to active investigation, enabling a faster response that can be critical in time-sensitive cases. The ability to track suspects across CCTV footage further enhances the police's capabilities, offering clear time logs and entry/exit patterns that can reveal important insights. Ultimately, this innovation aims to make police work more efficient and effective, allowing resources to be focused where they are needed most.
Claims: CLAIMS
We Claim,
1. A system for sketching culprit portrait based on verbal descriptions, comprising:
a speech recognition module configured to capture and transcribe speech inputs accurately, recording spoken input from witnesses or informants using a high-quality microphone and removing background noise from the audio to ease transcription;
a Natural Language Processing (NLP) module for analyzing transcriptions and extracting key facial descriptors, organizing the extracted descriptors into a structured dataset that defines specific features for the face generation phase;
an image generation module that utilizes the extracted descriptors to create photorealistic, sketch-like facial images based on the witness description;
a historical database of criminals for cross-referencing and verification, against which the generated sketch is continuously matched; and
a user interface enabling real-time iterative refinement through voice commands for adjusting, editing, or modifying the facial features of the sketch.
2. The system for sketching culprit portrait as claimed in claim 1, wherein the speech recognition module adapts to different accents and speech patterns to improve transcription accuracy.
3. The system for sketching culprit portrait as claimed in claim 2, wherein the image generation module uses Generative Adversarial Networks (GANs) to generate photorealistic or sketch-like facial sketches based on the extracted descriptors.
4. The system for sketching culprit portrait as claimed in claim 3, wherein the historical database is dynamically updated with new data to enhance the accuracy and relevance of generated facial features.
5. The system for sketching culprit portrait as claimed in claim 4, wherein the interface provides options for exporting the final forensic sketch in multiple formats for sharing or storing in investigative databases.
6. The system for sketching culprit portrait as claimed in claim 5, wherein a microcontroller coordinates the operation of all modules to ensure an optimized and seamless user experience throughout the sketch generation process.

Documents

Name | Date
202441088995-COMPLETE SPECIFICATION [18-11-2024(online)].pdf | 18/11/2024
202441088995-DRAWINGS [18-11-2024(online)].pdf | 18/11/2024
202441088995-EDUCATIONAL INSTITUTION(S) [18-11-2024(online)].pdf | 18/11/2024
202441088995-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [18-11-2024(online)].pdf | 18/11/2024
202441088995-FORM 1 [18-11-2024(online)].pdf | 18/11/2024
202441088995-FORM FOR SMALL ENTITY(FORM-28) [18-11-2024(online)].pdf | 18/11/2024
202441088995-FORM-9 [18-11-2024(online)].pdf | 18/11/2024
202441088995-POWER OF AUTHORITY [18-11-2024(online)].pdf | 18/11/2024
202441088995-REQUEST FOR EARLY PUBLICATION(FORM-9) [18-11-2024(online)].pdf | 18/11/2024
