METHOD AND SYSTEM FOR DETECTION OF SYNTHETIC MEDIA USING VECTOR SIMILARITY SEARCH

ORDINARY APPLICATION

Status: Published

Filed on 5 November 2024

Abstract

Aspects of the present disclosure relate to a method and system for detection of synthetic media using vector similarity search. The computer-implemented system for detecting synthetic media comprises a database for storing embeddings of verified media; a processor configured to execute a media detection method; and a user interface for submitting media for verification and displaying detection results. The method converts verified media content into vector embeddings, which are stored in a database for rapid retrieval and comparison against newly submitted media. By comparing the similarity of embeddings, the solution accurately identifies whether media content is real or manipulated without relying on visual cues, making it highly resistant to pixel-perfect synthetic media. The proposed method is scalable, ensuring efficient detection across a large number of images, videos and other content while maintaining low computational overhead. This method provides a reliable, long-term solution to the growing threat posed by increasingly sophisticated synthetic media technologies. (Figure 1 is the reference figure.)

Patent Information

Application ID: 202421084607
Invention Field: COMPUTER SCIENCE
Date of Application: 05/11/2024
Publication Number: 48/2024

Inventors

Name | Address | Country | Nationality
Mr. Vansh Kharidia | 701, Sunil Apartments, Gajanan Colony, Jawahar Nagar, Goregaon West, Mumbai, Maharashtra - 400104 | India | India
Ms. Dhruvi Goradia | 1B/82, Kalpataru Gardens, Ashok Nagar, Near East-West Flyover, Kandivali East, Mumbai, Maharashtra - 400101 | India | India
Mr. Hemant Purswani | 502, 3B, Green Park CHS. LTD., Behind Jangid And Poonam Towers, Near Gokul Village, Mira Road East, Thane, Maharashtra - 401107 | India | India
Dr. Manoj Sankhe | Electronics & Telecommunication Engineering Department, SVKM’S NMIMS, Mukesh Patel School Of Technology Management & Engineering, Bhakti Vedant Swami Marg, Near Cooper Hospital, JVPD Scheme, Vile Parle (West), Mumbai, Maharashtra - 400 056, India | India | India
Ms. Sumita Nainan | E-41, Rustomjee Central Park, Next to Solitaire Park, Andheri Kurla Road, Andheri East, Mumbai, Maharashtra - 400093 | India | India
Mr. Vishram Bapat | 1, Surakshita CHS, Dadabhai Cross Road No 2, 48 Linking Road Extn, Santacruz West, Mumbai, Maharashtra - 400054 | India | India

Applicants

Name | Address | Country | Nationality
Mr. Vansh Kharidia | 701, Sunil Apartments, Gajanan Colony, Jawahar Nagar, Goregaon West, Mumbai, Maharashtra - 400104 | India | India
Ms. Dhruvi Goradia | 1B/82, Kalpataru Gardens, Ashok Nagar, Near East-West Flyover, Kandivali East, Mumbai, Maharashtra - 400101 | India | India
Mr. Hemant Purswani | 502, 3B, Green Park CHS. LTD., Behind Jangid And Poonam Towers, Near Gokul Village, Mira Road East, Thane, Maharashtra - 401107 | India | India
Dr. Manoj Sankhe | Electronics & Telecommunication Engineering Department, SVKM’S NMIMS, Mukesh Patel School Of Technology Management & Engineering, Bhakti Vedant Swami Marg, Near Cooper Hospital, JVPD Scheme, Vile Parle (West), Mumbai, Maharashtra - 400 056, India | India | India
Ms. Sumita Nainan | E-41, Rustomjee Central Park, Next to Solitaire Park, Andheri Kurla Road, Andheri East, Mumbai, Maharashtra - 400093 | India | India
Mr. Vishram Bapat | 1, Surakshita CHS, Dadabhai Cross Road No 2, 48 Linking Road Extn, Santacruz West, Mumbai, Maharashtra - 400054 | India | India

Specification

Description: METHOD AND SYSTEM FOR DETECTION OF SYNTHETIC MEDIA USING VECTOR SIMILARITY SEARCH
FIELD OF INVENTION
[0001] The present disclosure relates to media authenticity verification, and particularly relates to a method and system for detection of synthetic media using vector similarity search.
BACKGROUND
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] The proliferation of artificial intelligence (AI) and machine learning (ML) technologies has led to significant advancements in generating synthetic media, encompassing images, videos, and audio. Initially, Generative Adversarial Networks (GANs) were predominantly used to create such content. However, the emergence of advanced AI models, including generative AI, has enhanced the realism and accessibility of synthetic media production.
[0004] The current state of technology involves:
1. AI-Generated Images and Videos:
● Generative AI: These models iteratively refine random noise into coherent images or videos by learning the data distribution of a training dataset. Notable examples include DALL•E by OpenAI and Stable Diffusion, which can generate highly realistic and artistic images from textual descriptions.
2. Deepfakes:
● Definition: Deepfakes are synthetic media where an individual's likeness or voice is replicated using AI techniques. Initially popularized through GANs, deepfakes have evolved with the integration of generative AI models, enhancing their realism.
● Applications: While deepfakes have legitimate uses in entertainment and education, they pose risks such as misinformation, identity theft, and unauthorized use of personal likenesses.
[0005] Limitations of current technologies:
1. Detection Challenges:
● Enhanced Realism: Advanced AI models produce synthetic media that is increasingly difficult to distinguish from authentic content. Traditional detection methods, which relied on identifying visual or audio artifacts, are becoming less effective as the quality of synthetic media improves.
2. Scalability Issues:
● Computational Resources: Detecting synthetic media, especially in high volumes, requires substantial computational power. This demand poses challenges for real-time detection systems and limits scalability.
3. Adaptability:
● Evolving Techniques: As AI models continue to advance, detection methods must also evolve. A detection technique effective today may become obsolete as new generation methods emerge, necessitating continuous updates and research.
4. Ethical and Legal Considerations:
● Misuse: The accessibility of AI tools for generating synthetic media raises concerns about potential misuse, including the creation of non-consensual explicit materials, political propaganda, and financial scams.
● Regulatory Gaps: Current laws may not adequately address the challenges posed by synthetic media, leading to difficulties in prosecuting misuse and protecting individuals' rights.
[0006] While AI advancements have significantly enhanced the capability to produce realistic synthetic media, they have also introduced complex challenges in detection, scalability, adaptability, and ethical governance. Addressing these issues requires ongoing interdisciplinary research and collaboration.
[0007] Efforts have been made previously to provide media authenticity verification. Patent US10929677B1 discloses methods and systems for detecting deepfakes. The patent describes a system for detecting synthetic videos that may include a server, a plurality of weak classifiers, and a strong classifier. The server may be configured to receive a prediction result from each of the plurality of weak classifiers and send the prediction results to the strong classifier.
[0008] A scientific paper by Bonettini et al. discloses "Video Face Manipulation Detection Through Ensemble of CNNs". The paper addresses the problem of face manipulation detection in video sequences, targeting modern facial manipulation techniques (Bonettini, N., Cannas, E., Mandelli, S., Bondi, L., Bestagini, P., & Tubaro, S. (2021). pp. 5012-5019. doi:10.1109/ICPR48806.2021.9412711).
[0009] A webpage by Daisy-Zhang discloses Awesome-Deepfakes-Detection. It provides a collection list of Deepfakes Detection related datasets, tools, papers, and code (https://github.com/Daisy-Zhang/Awesome-Deepfakes-Detection).
[0010] However, these disclosures face limitations as existing alternatives. The problems faced by the existing solutions are given below.
[0011] Problems and limitations of existing solutions:
1. Inability to Detect AI-Generated Deepfakes: As AI-generated deepfakes, particularly those created using generative AI, become more sophisticated, existing detection systems struggle to identify manipulated content. These advanced models produce synthetic media that is often indistinguishable from authentic media, making traditional artifact-based detection methods increasingly ineffective. Current technologies are particularly vulnerable to high-quality, pixel-perfect forgeries, limiting their applicability in real-world scenarios.
2. Scalability Issues with Large-Scale Detection: Many existing systems face challenges when scaling to handle vast amounts of data, such as videos or images uploaded to social media or other platforms. The high computational cost associated with detecting deepfakes on a frame-by-frame basis, combined with the need for real-time results, makes these systems impractical for large-scale applications.
3. Narrow Focus on Specific Types of Manipulation: Existing solutions are often narrowly focused on detecting particular types of synthetic media, such as face-swap deepfakes, leaving other forms of synthetic manipulation unaddressed. This limits the applicability of current systems to specific use cases and restricts their ability to detect a broader range of synthetic content, including audio and non-facial manipulations.
4. Decreasing Lifespan of Detection Models: Traditional deepfake detection models have a short lifecycle, becoming obsolete as new media generation techniques are developed. The need to constantly update models and retrain detection systems leads to increasing costs and lower reliability over time.
5. Reliance on Detecting Visual Artifacts: Most existing detection technologies rely on the presence of visual artifacts or inconsistencies within the manipulated media. As AI deepfake generation improves, these artifacts are becoming less noticeable, reducing the accuracy of current systems. This dependency on visual errors leaves current methods ill-equipped to handle advanced, near-flawless synthetic media.
[0012] Hence, a system and method are needed that provide a robust solution to the growing challenge of distinguishing authentic content from manipulated media, which has become increasingly sophisticated due to advancements in artificial intelligence (AI) and machine learning (ML). Keeping this in mind, the present disclosure provides a method and system for detection of synthetic media using vector similarity search. The present invention discloses a state-of-the-art system for detecting synthetic media, particularly deepfakes, utilizing advanced vector similarity techniques.
[0013] In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
[0014] As used in the description herein and throughout the claims that follow, the meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0015] The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g. "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.

OBJECTS OF THE INVENTION
[0016] It is an object of the present disclosure to provide a method and system for detection of synthetic media using vector similarity search.
[0017] It is another object of the present disclosure to provide detection of synthetic media that identifies whether a video is real or manipulated without relying on visual cues, making it highly resistant to pixel-perfect synthetic media.
[0018] It is yet another object of the present disclosure to provide a scalable system ensuring efficient detection across millions of images or videos while maintaining low computational overhead.

SUMMARY
[0019] The present disclosure is directed towards a method for detection of synthetic media using vector similarity search executed by a processing unit. The method comprises: converting verified media, including a plurality of images, a plurality of videos and a plurality of audio, into vector embeddings; storing the vector embeddings in a database in a processing unit; receiving and converting a new media into an embedding by the processing unit; comparing the embedding of the new media with the stored vector embeddings; and computing a similarity score for determining the authenticity of the new media.
[0020] In an aspect of the present disclosure, the converting the verified media into vector embeddings is by utilizing pre-trained deep learning models.
[0021] In an aspect of the present disclosure, the computing the similarity score includes computing similarity between the new media embedding and the embeddings of the verified media for determining the authenticity of the new media, and wherein the similarity score exceeding a predefined threshold confirms authenticity of the media.
[0022] In an aspect of the present disclosure, the threshold is dynamically adjusted by employing a dynamic thresholding mechanism by adapting based on contextual factors associated with the analysed media.
[0023] In an aspect of the present disclosure, the similarity score is computed by employing distance metrics including but not limited to cosine similarity or Euclidean distance.
[0024] The present disclosure is also directed towards a system for detection of synthetic media using vector similarity search. The system comprises: a processing unit configured to execute conversion of verified media, including a plurality of images, a plurality of videos and a plurality of audio, into vector embeddings, wherein the processing unit is configured to convert the verified media into vector embeddings utilizing pre-trained deep learning models, and computation of similarity between the new media embedding and the embeddings of the verified media; a database in the processing unit for storing the embeddings of the verified media; and a user interface with a display for uploading and displaying the media and results, respectively.
[0025] In an aspect of the present disclosure, the processing unit is configured to employ distance metrics including but not limited to cosine similarity or Euclidean distance for computing similarity between new media embedding and the embeddings of the verified media.
[0026] In an aspect of the present disclosure, the processing unit is configured to determine a similarity score for computing similarity, and wherein the similarity score exceeding a predefined threshold confirms authenticity of the media.
[0027] In an aspect of the present disclosure, the processing unit is configured to employ a dynamic thresholding mechanism for adjusting the similarity threshold by adapting based on contextual factors associated with the analysed media.
[0028] In an aspect of the present disclosure, the processing unit is configured to integrate additional content like textual analysis from associated metadata, captions, or audio transcriptions for enhancing detection accuracy of the synthetic media.
[0029] One should appreciate that although the present disclosure has been explained with respect to a defined set of functional modules, any other module or set of modules can be added, deleted, modified or combined, and any such changes in the architecture or construction of the proposed system are completely within the scope of the present disclosure. Each module can also be fragmented into one or more functional sub-modules, all of which are also completely within the scope of the present disclosure.
[0030] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments in which like numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0032] FIG. 1 illustrates a flowchart of User Flow: Content verification process in accordance with embodiments of the present disclosure.
[0033] FIG. 2 illustrates a flowchart of Client Flow: User registration process in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION
[0034] Aspects of the present disclosure relate to a method and system for detection of synthetic media using vector similarity search.
[0035] The following is a detailed description of embodiments of the disclosure. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0036] Each of the appended claims defines a separate invention, which for infringement purposes is recognized as including equivalents to the various elements or limitations specified in the claims. Depending on the context, all references below to the "invention" may in some cases refer to certain specific embodiments only. In other cases it will be recognized that references to the "invention" will refer to subject matter recited in one or more, but not necessarily all, of the claims.
[0037] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic. Various terms as used herein are shown below. To the extent a term used in a claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0038] In an embodiment of the present disclosure, the present invention discloses a method and system for detection of synthetic media using vector similarity search. The method comprises: converting verified media, including a plurality of images, a plurality of videos and a plurality of audio, into vector embeddings; storing the vector embeddings in a database in a processing unit; receiving and converting a new media into an embedding by the processing unit; comparing the embedding of the new media with the stored vector embeddings; and computing a similarity score for determining the authenticity of the new media. The disclosure further provides a non-transitory computer-readable medium storing instructions that, when executed by a processing unit, enable performance of the method for detecting synthetic media.
[0039] The system comprises a processing unit configured to execute conversion of verified media, including a plurality of images, a plurality of videos and a plurality of audio, into vector embeddings, and computation of similarity between the new media embedding and the embeddings of the verified media. The system also includes a database in the processing unit for storing the embeddings of the verified media, and a user interface with a display for uploading and displaying the media and results, respectively.
KEY COMPONENTS OF THE INVENTION
[0040] Vector Embeddings:
The core of the invention revolves around the conversion of verified media into vector embeddings. These embeddings represent the content in a high-dimensional space, capturing the essential features of the media while discarding irrelevant details. The embeddings are typically generated using deep learning models that are pre-trained on a large dataset to ensure they accurately reflect the characteristics of real media.
For example, an image of a celebrity may be converted into a 2048-dimensional vector that encodes its visual attributes, such as color, texture, and shape. Similarly, video frames can be processed to create embeddings that capture the temporal dynamics of the content.
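By way of illustration only, the embedding step described above can be sketched as follows. The pooling "model" below is a self-contained stand-in for the pre-trained deep learning model the disclosure contemplates; the function name, the dimensionality, and the pooling scheme are assumptions for exposition, not part of the claimed invention:

```python
import math

def embed(pixel_values, dim=8):
    """Toy stand-in for a pre-trained deep model: pools raw pixel
    intensities into a fixed-length feature vector, then L2-normalizes
    it. A real system would use a network such as a CNN or a vision
    transformer producing, e.g., a 2048-dimensional vector."""
    # Pool pixels into `dim` buckets by simple averaging.
    buckets = [0.0] * dim
    counts = [0] * dim
    for i, v in enumerate(pixel_values):
        buckets[i % dim] += v
        counts[i % dim] += 1
    vec = [b / c if c else 0.0 for b, c in zip(buckets, counts)]
    # L2-normalize so cosine similarity later reduces to a dot product.
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

emb = embed([0.1, 0.5, 0.9, 0.2, 0.4, 0.8, 0.3, 0.7, 0.6, 0.0])
```

Normalizing at embedding time is a common design choice: it makes downstream similarity comparisons cheaper and scale-invariant.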
[0041] Database:
The system utilizes a database to store the embeddings of verified media. This database is optimized for efficient retrieval and similarity searches, allowing the system to quickly compare new inputs against a vast collection of known authentic media.
When a new piece of media is encountered, it is converted into an embedding and compared with the stored embeddings to determine its similarity. The database can handle millions of embeddings, ensuring scalability for applications dealing with large volumes of synthetic media.
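A minimal in-memory sketch of such a database is shown below; a deployment at the scale described would instead use a vector database with an approximate-nearest-neighbour index. All class and identifier names here are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class EmbeddingDB:
    """Minimal in-memory store of verified-media embeddings with a
    brute-force nearest-neighbour search. A production system would use
    an ANN index to scale to millions of entries."""
    def __init__(self):
        self._items = []  # list of (media_id, embedding)

    def add(self, media_id, embedding):
        self._items.append((media_id, embedding))

    def most_similar(self, query):
        """Return (media_id, score) of the closest stored embedding."""
        if not self._items:
            return None, 0.0
        best_id, best_emb = max(self._items, key=lambda it: cosine(it[1], query))
        return best_id, cosine(best_emb, query)

db = EmbeddingDB()
db.add("verified_photo_1", [1.0, 0.0, 0.0])
db.add("verified_photo_2", [0.0, 1.0, 0.0])
match, score = db.most_similar([0.9, 0.1, 0.0])
```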
[0042] Similarity Comparison:
The core detection mechanism involves computing the similarity between the new media embedding and those in the database. The system employs distance metrics, such as cosine similarity or Euclidean distance, to quantify how closely related the embeddings are.
If the similarity score exceeds a predefined threshold, the system concludes that the media is likely authentic. Conversely, if the score falls below the threshold, the content may be identified as manipulated or unverifiable.
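The two distance metrics named above, and the threshold decision, can be sketched as follows; the 0.85 threshold and the sample vectors are illustrative assumptions, not values prescribed by the disclosure:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embeddings (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def euclidean_distance(a, b):
    """Straight-line distance between two embeddings (0.0 = identical)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(score, threshold=0.85):
    """Above the threshold: likely authentic; below: manipulated or unverifiable."""
    return "likely authentic" if score >= threshold else "manipulated or unverifiable"

verified = [0.6, 0.8, 0.0]
submitted = [0.58, 0.81, 0.05]
verdict = classify(cosine_similarity(verified, submitted))
```

Note the two metrics point in opposite directions: a higher cosine similarity means a closer match, while a higher Euclidean distance means a weaker one, so the thresholding logic must be written for whichever metric is chosen.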
[0043] Thresholding Mechanism:
The invention incorporates a dynamic thresholding mechanism that adapts based on the content and context of the media being analyzed. This ensures that the detection process remains accurate.
The threshold can be adjusted based on various factors, such as the type of media (image vs. video), the source of the input, or the level of confidence required for a particular application. This flexibility enhances the system's overall reliability.
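A sketch of such a dynamic threshold is shown below; the specific offsets, factor names, and default values are illustrative assumptions only, chosen to mirror the factors listed above (media type, input source, required confidence):

```python
def dynamic_threshold(base=0.85, media_type="image", source_trust=0.5,
                      required_confidence="normal"):
    """Adjust the similarity threshold from contextual factors.
    The numeric offsets here are illustrative assumptions."""
    t = base
    if media_type == "video":
        t -= 0.05  # frame-level noise: tolerate slightly lower similarity
    if source_trust < 0.3:
        t += 0.05  # untrusted sources must clear a higher bar
    if required_confidence == "high":
        t += 0.05  # e.g. news verification demands stricter matching
    return min(max(t, 0.0), 1.0)  # keep the threshold in [0, 1]

t_image = dynamic_threshold()
t_strict = dynamic_threshold(media_type="video", source_trust=0.1,
                             required_confidence="high")
```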
[0044] Integration of additional features like Textual Analysis:
To further enhance detection accuracy, the invention can also incorporate additional content like textual analysis of associated metadata, captions, or audio transcriptions. For example, by analyzing the words spoken in a video or written in a caption, the system can identify discrepancies between the content and its context.
For example, if a video claims to depict a specific event but the audio content contradicts this claim, the system can flag it as potentially manipulated.
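One crude way to realize such a textual cross-check is a word-overlap comparison, sketched below. The tokenization, stop-word list, and the 0.2 overlap floor are illustrative assumptions; a practical system would more likely compare semantic text embeddings:

```python
def text_consistency_flag(claimed_caption, transcript):
    """Flag a caption/transcript pair when they share too few content
    words, suggesting a mismatch between the media and its context.
    Returns True when the pair looks potentially inconsistent."""
    stop = {"the", "a", "an", "of", "in", "on", "at", "and", "is", "to"}
    cap = {w for w in claimed_caption.lower().split() if w not in stop}
    tra = {w for w in transcript.lower().split() if w not in stop}
    if not cap or not tra:
        return False  # nothing meaningful to compare
    overlap = len(cap & tra) / min(len(cap), len(tra))
    return overlap < 0.2  # assumed floor: flag low content-word overlap

flagged = text_consistency_flag(
    "President speech at climate summit",
    "today we unveil our new smartphone lineup and pricing",
)
```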
[0045] User Interface and Accessibility:
The invention includes a user-friendly interface that allows users to upload images, videos or audio for verification. The interface provides instant feedback on the authenticity of the submitted media, ensuring that users can easily access the detection results. Users can also view other similar content from the ones that they uploaded based on similarity.
Public figures, organizations and general public can benefit from this system by creating verified profiles where their authentic media is stored. This enables users to check the legitimacy of content they encounter online, promoting trust and integrity in digital media.
[0046] Scalability and Efficiency:
The design of the invention ensures high scalability and efficiency, capable of processing millions of media files quickly. The use of lightweight vector embeddings minimizes the computational burden during detection, allowing for real-time analysis.
As the media landscape continues to grow, this scalable architecture will accommodate the increasing demand for reliable detection solutions, making it suitable for deployment in various applications, including social media platforms, news organizations, and security systems.
EXAMPLE SCENARIOS
[0047] The invention can be better understood by the following exemplary scenarios.
Scenario 1
Celebrity Verification: A celebrity uploads verified images to their profile in the database. When a new image of the celebrity is uploaded online, users can submit it to the system for verification. The system quickly converts the new image into an embedding and compares it with the verified embeddings. If the similarity score is above the threshold, the image is confirmed as authentic; otherwise, it is flagged as potentially manipulated or unverified.

Scenario 2
Social Media Monitoring: A social media platform implements the detection system to monitor uploaded videos for deepfake content. As users post videos, the system automatically processes each video in real time. By comparing the embeddings of the uploaded videos with the verified database, the platform can promptly identify and alert users to any suspected deepfakes, thus reducing the spread of misinformation.
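The verification flow common to both scenarios can be sketched end to end as follows; the embeddings, profile contents, and threshold are illustrative assumptions standing in for real model outputs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def verify(submitted_emb, verified_embs, threshold=0.85):
    """Compare a submitted media embedding against a profile's verified
    embeddings and return (verdict, best_score), mirroring Scenario 1."""
    best = max((cosine(submitted_emb, v) for v in verified_embs), default=0.0)
    return ("authentic" if best >= threshold else "flagged"), best

# Hypothetical verified profile and two submissions.
celebrity_profile = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]]
status, score = verify([0.95, 0.05, 0.0], celebrity_profile)
```

For Scenario 2 the same `verify` call would simply be invoked per uploaded video (or per sampled frame embedding) inside the platform's ingestion pipeline.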
[0048] In conclusion, the invention is a state-of-the-art system for detecting synthetic media, particularly deepfakes, utilizing advanced vector similarity techniques. This method provides a robust solution to the growing challenge of distinguishing authentic content from manipulated media, which has become increasingly sophisticated due to advancements in artificial intelligence (AI) and machine learning (ML).
[0049] Advantages of method and system for detection of synthetic media using vector similarity search over existing technologies:
1. Vector Similarity-Based Detection of Synthetic Media: Unlike existing methods that depend on identifying visual or audio artifacts, this invention leverages vector similarity techniques to detect synthetic media. Verified media (images, videos and audio) is stored in a database as embeddings. New synthetic media is converted into embeddings and compared against the stored authentic media. This approach is highly resistant to pixel-perfect synthetic media, as it focuses on comparing the underlying content, not superficial visual cues. It ensures that even advanced AI-generated media can be detected, as it does not rely on artifacts or errors present in the synthetic content.
2. Scalable and Efficient Processing: The system is designed to handle large-scale applications by leveraging databases optimized for rapid search and retrieval. Since embeddings are lightweight, the system can perform similarity searches across millions of entries with minimal overhead. This makes it highly suitable for platforms handling massive volumes of synthetic media, such as social media sites, ensuring real-time synthetic media detection even in high-traffic environments.
3. Broad Applicability: The invention is not limited to detecting only facial manipulation. By focusing on vector embeddings that represent entire media objects, the system is capable of detecting various forms of synthetic media, including audio and non-facial manipulations. This broadens the scope of detection and allows for a more comprehensive approach to identifying manipulated content, encompassing AI-generated images, videos, and audio.
4. Long-Term Viability: The reliance on vector-based comparisons ensures that the system remains robust even as synthetic media generation techniques evolve. Unlike artifact-based methods that degrade as new AI models improve, this invention is resistant to advancements in media generation, making it future-proof and reducing the need for frequent updates or retraining of the detection models.
5. Artifact-Independent Detection: By not relying on the presence of visible artifacts or errors, the system can detect synthetic media even when no detectable visual cues are present. This allows it to identify high-quality synthetic content like AI-generated deepfakes that would evade detection by traditional methods. The vector similarity approach enables the system to maintain high accuracy, regardless of the perfection of the forgery.
[0050] This invention introduces a method and system for detecting synthetic media through vector similarity comparison. The method converts verified media content into vector embeddings, which are stored in a database for rapid retrieval and comparison against newly submitted media. By comparing the similarity of embeddings, the solution accurately identifies whether a video is real or manipulated without relying on visual cues, making it highly resistant to pixel-perfect synthetic media. The proposed method is scalable, ensuring efficient detection across millions of images, videos or audio while maintaining a low computational overhead. This method provides a reliable, long-term solution to the growing threat posed by increasingly sophisticated deepfake and generative AI technologies.
[0051] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
[0052] Thus, the scope of the present disclosure is defined by the appended claims and includes both combinations and sub-combinations of the various features described hereinabove as well as variations and modifications thereof, which would occur to persons skilled in the art upon reading the foregoing description.
Claims:
I/We Claim:
1. A method for detection of synthetic media using vector similarity search executed by a processing unit, the method comprising:
converting verified media including a plurality of images, a plurality of videos and a plurality of audio into vector embeddings;
storing the vector embeddings in a database in a processing unit;
receiving and converting a new media into an embedding by the processing unit;
comparing the embedding of the new media with the stored vector embeddings; and
computing a similarity score for determining the authenticity of the new media.
2. The method for detection of synthetic media using vector similarity search executed by a processing unit as claimed in claim 1, wherein the converting the verified media into vector embeddings is by utilizing pre-trained deep learning models.
3. The method for detection of synthetic media using vector similarity search executed by a processing unit as claimed in claim 1, wherein the computing the similarity score includes computing similarity between the new media embedding and the embeddings of the verified media for determining the authenticity of the new media, and wherein the similarity score exceeding a predefined threshold confirms authenticity of the media.
4. The method for detection of synthetic media using vector similarity search executed by a processing unit as claimed in claim 3, wherein the threshold is dynamically adjusted by a dynamic thresholding mechanism that adapts based on contextual factors associated with the analysed media.
5. The method for detection of synthetic media using vector similarity search executed by a processing unit as claimed in claim 1, wherein the computing the similarity score comprises employing distance metrics such as cosine similarity or Euclidean distance.
6. A system for detection of synthetic media using vector similarity search, the system comprising:
a processing unit configured to execute:
conversion of verified media including a plurality of images, a plurality of videos and a plurality of audio files into vector embeddings, wherein the processing unit is configured to convert the verified media into vector embeddings utilizing pre-trained deep learning models, and
computation of similarity between new media embedding and the embeddings of the verified media;
a database in the processing unit for storing the embeddings of the verified media; and
a user interface with a display for uploading and displaying the media and results respectively.
7. The system for detection of synthetic media using vector similarity search as claimed in claim 6, wherein the processing unit is configured to employ distance metrics including but not limited to cosine similarity or Euclidean distance for computing similarity between new media embedding and the embeddings of the verified media.
8. The system for detection of synthetic media using vector similarity search as claimed in claim 6, wherein the processing unit is configured to determine a similarity score for computing similarity, and wherein the similarity score exceeding a predefined threshold confirms authenticity of the media.
9. The system for detection of synthetic media using vector similarity search as claimed in claim 8, wherein the processing unit is configured to employ a dynamic thresholding mechanism that adjusts the similarity threshold by adapting based on contextual factors associated with the analysed media.
10. The system for detection of synthetic media using vector similarity search as claimed in claim 6, wherein the processing unit is configured to integrate textual analysis from additional cues like associated metadata, captions, or audio transcriptions for enhancing detection accuracy of the synthetic media.
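The method of claims 1, 3 and 5 can be illustrated with a minimal Python sketch: convert media to embeddings, compare the new embedding against stored verified embeddings with cosine similarity, and confirm authenticity when the best score exceeds a threshold. This is an illustrative toy, not the patented implementation; the 4-dimensional vectors, the 0.9 threshold, and the function names are assumptions standing in for real pre-trained model outputs and a tuned (or dynamically adapted) threshold.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors (one of the
    # distance metrics named in claim 5).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_authentic(new_embedding, verified_embeddings, threshold=0.9):
    # Compare the new media embedding with every stored verified
    # embedding and keep the best similarity score (claims 1 and 3).
    best = max(cosine_similarity(new_embedding, v) for v in verified_embeddings)
    # A score exceeding the predefined threshold confirms authenticity.
    return best > threshold, best

# Toy 4-dimensional embeddings; a real system would store embeddings
# produced by a pre-trained deep learning model (claim 2).
verified = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
genuine = [0.99, 0.05, 0.0, 0.0]   # close to a verified embedding
fake = [0.0, 0.0, 1.0, 1.0]        # far from all verified embeddings

print(is_authentic(genuine, verified))  # high score -> authentic
print(is_authentic(fake, verified))     # score 0.0 -> flagged as synthetic
```

A production system would replace the linear scan in `is_authentic` with an approximate nearest-neighbour index so the search stays fast as the verified-media database grows.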

Documents

Name | Date
202421084607-FER.pdf | 18/12/2024
Abstract.jpg | 26/11/2024
202421084607-FORM-26 [22-11-2024(online)].pdf | 22/11/2024
202421084607-COMPLETE SPECIFICATION [05-11-2024(online)].pdf | 05/11/2024
202421084607-DECLARATION OF INVENTORSHIP (FORM 5) [05-11-2024(online)].pdf | 05/11/2024
202421084607-DRAWINGS [05-11-2024(online)].pdf | 05/11/2024
202421084607-FORM 1 [05-11-2024(online)].pdf | 05/11/2024
202421084607-FORM 18A [05-11-2024(online)].pdf | 05/11/2024
202421084607-FORM-9 [05-11-2024(online)].pdf | 05/11/2024
202421084607-REQUEST FOR EARLY PUBLICATION(FORM-9) [05-11-2024(online)].pdf | 05/11/2024
202421084607-STATEMENT OF UNDERTAKING (FORM 3) [05-11-2024(online)].pdf | 05/11/2024
