END-TO-END ZERO-WATERMARKING SYSTEM USING ARTIFICIAL NEURAL NETWORKS FOR IMAGE COPYRIGHT PROTECTION AND METHOD THEREOF
ORDINARY APPLICATION
Published
Filed on 7 November 2024
Abstract
The present invention discloses an end-to-end zero-watermarking system for image copyright protection, powered by artificial neural networks (ANNs). It utilizes a Contextual Neural Network Encoder (CoNE) and Local Keypoint Patch Aggregation Network (LK-PAgNet) to extract local and global image features without embedding watermarks into the image. The system generates unique watermark identifiers based on these features, ensuring copyright protection while preserving image quality. Trained on diverse datasets, the system remains robust under various distortions such as scaling, noise, and compression, maintaining high discriminability. The watermark identifiers are stored externally, ensuring image integrity and facilitating real-time watermark generation and extraction. The system is scalable, adaptable to other media types like video and audio, and offers an efficient solution to the challenges of unauthorized duplication and distribution. Accompanying Drawings [Figures 1-3]
Patent Information
Field | Value |
---|---|
Application ID | 202411085338 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 07/11/2024 |
Publication Number | 47/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
Dr. Ashish Dixit | Associate Professor, Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad | India | India |
Kritika Gupta | Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad | India | India |
Shalini | Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad | India | India |
Parth Rajawat | Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad | India | India |
Anmol Srivastava | Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
Ajay Kumar Garg Engineering College | 27th KM Milestone, Delhi - Meerut Expy, Ghaziabad, Uttar Pradesh 201015 | India | India |
Specification
Description:
[001] The present invention relates to the field of digital image copyright protection, specifically an end-to-end zero-watermarking system utilizing artificial neural networks (ANNs). The invention introduces a novel approach to safeguarding image copyrights through zero-watermarking technology, bypassing the need for manual feature extraction.
BACKGROUND OF THE INVENTION
[002] In the digital era, the protection of intellectual property, particularly in the form of digital images, has become increasingly important due to the ease of duplication and unauthorized distribution. Copyright infringement of digital content is a growing concern for creators and industries, leading to the development of various watermarking techniques aimed at securing digital images from unauthorized use. Traditional watermarking methods involve embedding visible or invisible marks within the image to signify ownership, but these techniques often come with trade-offs related to image quality and data integrity.
[003] Watermarking techniques can be broadly categorized into two types: visible watermarking and invisible watermarking. Visible watermarking integrates easily identifiable marks into the image, which may be obtrusive, while invisible watermarking aims to hide the ownership information within the image data. However, most of these methods require manual feature extraction and expert knowledge to design the watermark. This dependency on manual intervention introduces inefficiencies and potential compromises in the accuracy and robustness of the watermarking process.
[004] Several prior art techniques have attempted to address these challenges, including watermarking based on manual feature descriptors and zero-watermarking. Traditional watermarking methods rely heavily on manually designed feature descriptors to identify unique elements within an image, which can then be used to embed or detect a watermark. For example, techniques involving discrete cosine transforms (DCT), wavelet transforms, and keypoint-based methods (e.g., SIFT or SURF) have been extensively used in watermarking. Although effective to a degree, these methods struggle with robustness and computational efficiency, especially when images are subjected to various attacks like compression, noise addition, or geometric transformations.
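For illustration only, the following minimal sketch shows the kind of manually designed descriptors referenced above, computing low-frequency DCT coefficients and SIFT keypoint descriptors with OpenCV; the image path, coefficient block size, and printout are placeholder assumptions and not part of this specification.

```python
# Sketch of classical, manually designed descriptors (prior art): DCT + SIFT.
import cv2
import numpy as np

# Placeholder path; any grayscale-readable image works.
img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)
img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]  # cv2.dct needs even sizes

# Global frequency-domain signature: low-frequency DCT coefficients.
dct = cv2.dct(np.float32(img) / 255.0)
global_signature = dct[:8, :8].flatten()

# Local descriptors: SIFT detects keypoints and describes the patches around them.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints), "keypoints,", global_signature.size, "DCT coefficients")
```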
[005] Similarly, zero-watermarking has emerged as an alternative method where the watermark is not physically embedded within the image but is instead generated using specific characteristics of the image. This eliminates concerns regarding quality degradation caused by embedding, but these systems still rely on manually extracted features and are often less robust against transformations or attacks. Prior works such as image zero-watermarking using discrete wavelet transform (DWT) or principal component analysis (PCA) show limited performance in handling image distortions or modifications due to their simplistic feature extraction methods.
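A minimal sketch of the DWT-based zero-watermarking approach characterized above as prior art is given below, assuming PyWavelets and NumPy; the block size, median thresholding rule, and XOR binding of an owner logo are illustrative assumptions drawn from that literature, not details disclosed in this specification.

```python
# Prior-art-style DWT zero-watermarking: no data is embedded in the image;
# a binary signature from the low-frequency band is XORed with the owner's
# logo to form a "master share" that is kept outside the image.
import numpy as np
import pywt

def dwt_zero_watermark(image: np.ndarray, owner_bits: np.ndarray) -> np.ndarray:
    """image: 2-D grayscale array; owner_bits: binary array of shape (32, 32)."""
    ll, _ = pywt.dwt2(image.astype(np.float64), "haar")   # low-frequency band
    block = ll[:32, :32]                                   # coarse signature region
    feature_bits = (block > np.median(block)).astype(np.uint8)
    return np.bitwise_xor(feature_bits, owner_bits)        # master share

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256))
logo = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
master_share = dwt_zero_watermark(image, logo)
# Verification recomputes feature_bits from a test image and XORs with
# master_share to recover (an approximation of) the logo.
```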
[006] The key drawbacks of the existing methods include the dependence on manual feature design, which introduces subjectivity and inconsistency in watermarking performance. Furthermore, traditional techniques often result in compromised data integrity or image quality, as embedding watermarks within the image can alter its original structure. Zero-watermarking techniques, while avoiding data embedding, suffer from limitations in terms of robustness and discriminability, especially when dealing with diverse image sets and transformations. These methods also tend to be computationally inefficient, making them impractical for real-time or large-scale applications.
[007] The present invention overcomes these shortcomings by introducing an end-to-end zero-watermarking system exclusively using artificial neural networks (ANNs), eliminating the need for manual feature extraction. By leveraging neural networks such as CoNE and LK-PAgNet, the present invention enhances both local feature extraction and contextual understanding, leading to increased robustness and discriminability.
[008] The system efficiently generates unique watermark identifiers without embedding data into the image, preserving data integrity and quality. With systematic training, the invention provides robust performance against various image transformations and attacks, achieving a normalized coefficient above 0.968, a Hamming distance exceeding 88, and an optimized average processing time of 94.8 ms. This marks a significant improvement in both efficiency and accuracy over the current state of the art.
SUMMARY OF THE PRESENT INVENTION
[009] The present invention introduces an advanced end-to-end zero-watermarking system for image copyright protection utilizing artificial neural networks (ANNs), eliminating the need for manual feature descriptors. The system, referred to as "ZWat," incorporates CoNE and LK-PAgNet, which significantly enhance local feature extraction and contextual understanding of images. The watermark block within the system generates unique identifiers, integrates copyright information, and optimizes the watermark characteristics for robust identification. Systematic training of the neural networks ensures high levels of discriminability and robustness, allowing distinct identifiers for different images and identical identifiers for targeted ones. The system is designed to handle various image transformations and attacks effectively.
[010] The invention demonstrates exceptional resilience and efficiency, achieving a normalized coefficient above 0.968 and a Hamming distance greater than 88 during experimental validation, surpassing conventional techniques in terms of accuracy and robustness. Additionally, the system offers fast processing with an average time of 94.8 milliseconds per image, ensuring that it can be seamlessly integrated into existing image processing pipelines. ZWat's approach ensures traceability, ownership rights, and protection against unauthorized use without compromising the data integrity or quality of the images. Variants of this system may be adapted for different media types or optimized for specific use cases, making it a versatile solution for copyright protection across various platforms.
[011] In this respect, before explaining at least one object of the invention in detail, it is to be understood that the invention is not limited in its application to the details of set of rules and to the arrangements of the various models set forth in the following description or illustrated in the drawings. The invention is capable of other objects and of being practiced and carried out in various ways, according to the need of that industry. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[012] These together with other objects of the invention, along with the various features of novelty which characterize the invention, are pointed out with particularity in the disclosure. For a better understanding of the invention, its operating advantages and the specific objects attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated preferred embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[013] The present invention will be more readily understood from the following detailed description, from which objects other than those mentioned above will also become evident. The description refers to the annexed drawings, wherein:
Figure 1 illustrates a block diagram associated with the proposed system;
Figure 2 illustrates a working flowchart associated with the Zero Watermarking process along with its verification; and
Figure 3 illustrates the primary structure of PAgNet associated with the proposed system, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[014] The following sections of this specification describe various embodiments of the present invention with reference to the accompanying drawings, in which the reference numbers used in the figures correspond to like elements throughout the description. However, this invention is not limited to the embodiments described here and may be embodied in several other ways. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the invention to persons of ordinary skill in the art.
[015] Numerical values and ranges are given for many aspects of the implementations discussed in the following detailed description. These values and ranges are provided merely as examples and are not meant to restrict the scope of the claims. Likewise, a variety of materials are identified as suitable for certain aspects of the implementations; these materials are also examples only and are not meant to restrict the application of the invention.
[016] Referring to Figures 1-3, the invention relates to an end-to-end zero-watermarking system utilizing artificial neural networks (ANNs) for image copyright protection, offering a robust and efficient solution to the growing challenge of unauthorized image duplication and distribution. The system, named ZWat, is designed to circumvent the limitations of traditional watermarking techniques that rely heavily on manual feature extraction and suffer from quality compromise, slow processing, and vulnerability to various attacks.
[017] At the heart of ZWat is the zero-watermarking technology, which differs from traditional watermarking methods where visible or invisible watermarks are physically embedded into the image. Instead, ZWat generates a watermark based on inherent image features, preserving the integrity and quality of the original image. Unlike previous systems, ZWat is powered entirely by ANNs, removing the need for expert-driven feature descriptors. The invention incorporates advanced neural networks, specifically CoNE (Contextual Neural Network Encoder) and LK-PAgNet (Local Keypoint Patch Aggregation Network), which improve both local feature extraction and global contextual understanding.
[018] The architecture of ZWat consists of multiple modules, including the feature extraction module, the watermark generation block, and the training block. Each module is designed to handle specific tasks to ensure efficient, accurate, and resilient watermark generation. The feature extraction module employs CoNE and LK-PAgNet to extract unique features from the image, focusing on both fine-grained local details and broader contextual patterns. These features are passed to the watermark generation block, which computes unique identifiers for each image based on the extracted features. The watermark block also incorporates copyright metadata, optimizing the watermark to ensure traceability and protection.
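Since the specification does not publish ZWat's programming interfaces, the following Python sketch only illustrates the module layout described above; the class and method names (ZWatPipeline, WatermarkRecord, protect) are hypothetical.

```python
# High-level sketch of the described module layout: feature extraction
# feeds the watermark generation block, and the result is kept external
# to the image together with the copyright metadata.
from dataclasses import dataclass
import numpy as np

@dataclass
class WatermarkRecord:
    identifier: np.ndarray      # binary watermark identifier
    copyright_info: str         # owner metadata carried alongside, not embedded

class ZWatPipeline:
    def __init__(self, feature_extractor, watermark_block):
        self.feature_extractor = feature_extractor    # e.g. CoNE + LK-PAgNet wrapper
        self.watermark_block = watermark_block        # identifier generator

    def protect(self, image: np.ndarray, copyright_info: str) -> WatermarkRecord:
        features = self.feature_extractor(image)              # local + global features
        identifier = self.watermark_block(features)           # unique binary identifier
        return WatermarkRecord(identifier, copyright_info)    # stored externally
```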
[019] The training block is critical to ZWat's performance. ANNs are systematically trained on diverse datasets using a combination of real-world images, synthetic distortions, and various attack scenarios. The training ensures that the system remains robust under common image manipulations, such as scaling, rotation, cropping, and compression, while maintaining high discriminability. For different images, distinct watermark identifiers are generated, whereas for identical images or similar content, the system consistently produces the same identifiers, ensuring effective copyright management.
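The distortion ("attack") simulation used during training might be sketched as below; the specification lists the attack types but not their magnitudes, so the parameter ranges for scaling, rotation, cropping, JPEG quality, and noise are assumptions. Assumes Pillow and NumPy and an RGB input image.

```python
# Randomly apply one of the distortions named above so that an image and its
# attacked copy can be paired during training.
import io
import random
import numpy as np
from PIL import Image

def random_attack(img: Image.Image) -> Image.Image:
    choice = random.choice(["scale", "rotate", "crop", "jpeg", "noise"])
    if choice == "scale":
        w, h = img.size
        img = img.resize((w // 2, h // 2)).resize((w, h))        # down- then up-scale
    elif choice == "rotate":
        img = img.rotate(random.uniform(-15, 15), expand=False)
    elif choice == "crop":
        w, h = img.size
        img = img.crop((w // 10, h // 10, w, h)).resize((w, h))  # crop then restore size
    elif choice == "jpeg":
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(30, 70))
        img = Image.open(io.BytesIO(buf.getvalue()))
    else:                                                        # additive Gaussian noise
        arr = np.asarray(img).astype(np.float32)
        arr += np.random.normal(0, 10, arr.shape)
        img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    return img
```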
[020] The integration of CoNE and LK-PAgNet enhances ZWat's ability to extract local features with precision while simultaneously understanding the broader contextual relationships within the image. CoNE is responsible for encoding the high-level contextual information, such as object positioning and scene structure, which is critical in ensuring robustness to distortions. On the other hand, LK-PAgNet operates at the keypoint level, identifying distinctive patches in the image and aggregating them to provide a fine-grained representation. This dual network setup results in a highly discriminative system that can detect subtle changes while preserving the overall structure.
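The internal architectures of CoNE and LK-PAgNet are not disclosed in this specification, so the PyTorch sketch below only mirrors the roles described above: a context branch producing a global vector and a patch branch aggregating local features, concatenated into one representation. All layer choices, dimensions, and the patch size are assumptions.

```python
# Hypothetical dual-branch extractor: global context + aggregated local patches.
import torch
import torch.nn as nn

class ContextBranch(nn.Module):          # stands in for CoNE
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):                # x: (B, 3, H, W)
        return self.net(x)               # (B, dim) global context vector

class PatchBranch(nn.Module):            # stands in for LK-PAgNet
    def __init__(self, dim=128, patch=16):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=patch, stride=patch)
        self.encode = nn.Linear(3 * patch * patch, dim)

    def forward(self, x):
        patches = self.unfold(x).transpose(1, 2)   # (B, n_patches, patch_dim)
        return self.encode(patches).mean(dim=1)    # aggregated local features

class DualFeatureExtractor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.context, self.local = ContextBranch(dim), PatchBranch(dim)

    def forward(self, x):
        return torch.cat([self.context(x), self.local(x)], dim=1)  # (B, 2*dim)

features = DualFeatureExtractor()(torch.rand(2, 3, 256, 256))  # -> shape (2, 256)
```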
[021] The watermark generation block in ZWat is a novel component designed to produce unique and robust identifiers for each image. This block takes the features extracted by CoNE and LK-PAgNet and processes them to generate a watermark that can be directly linked to the image's copyright owner. The identifier generated is optimized for resilience, ensuring that even under attacks such as compression, noise addition, and geometric transformations, the watermark remains identifiable.
[022] The watermark does not alter the original image; rather, it creates an external reference that links the image to a copyright registry. This is key to maintaining the original image's data integrity, unlike traditional watermarking techniques that embed the watermark into the image and risk degrading its quality. By separating the watermark from the image data, ZWat ensures that the integrity and authenticity of the image are preserved.
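One way such an external identifier could be produced and registered is sketched below; the random-projection binarization and SHA-256 binding of the copyright string are illustrative assumptions, since the specification does not define the exact computation of the watermark generation block.

```python
# Hypothetical identifier generation plus an external registry stand-in.
import hashlib
import numpy as np

RNG = np.random.default_rng(42)
PROJECTION = RNG.normal(size=(256, 256))     # fixed random projection (assumed)

def generate_identifier(features: np.ndarray, copyright_info: str) -> np.ndarray:
    """features: (256,) vector from the extractor; returns a 256-bit identifier."""
    feature_bits = (PROJECTION @ features > 0).astype(np.uint8)
    digest = hashlib.sha256(copyright_info.encode()).digest()
    owner_bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))  # 256 bits
    return np.bitwise_xor(feature_bits, owner_bits)

registry = {}                                # external copyright registry stand-in
features = RNG.normal(size=256)
identifier = generate_identifier(features, "Example Copyright Owner, 2024")
registry[identifier.tobytes()] = "Example Copyright Owner"   # image itself untouched
```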
[023] ZWat's training process involves the use of a large dataset, covering diverse types of images, from natural scenes to synthetic patterns, to ensure the system's ability to generalize across different domains. During training, adversarial attacks such as noise, geometric distortions, and compression artifacts are applied to test the system's resilience. The training loss function incorporates both robustness and discriminability metrics, ensuring that the system can differentiate between similar images while maintaining the same identifier for identical content.
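A contrastive-style loss is one plausible reading of the combined robustness-and-discriminability objective described above; the margin, weighting, and distance choices in the sketch below are assumptions, not the disclosed training loss.

```python
# Pull an image and its attacked copy together; push different images apart.
import torch
import torch.nn.functional as F

def zero_watermark_loss(z_clean, z_attacked, z_other, margin=1.0, alpha=0.5):
    """z_*: (B, D) embeddings for clean, attacked, and unrelated images."""
    robustness = F.mse_loss(z_clean, z_attacked)                        # same identifier
    separation = F.relu(margin - F.pairwise_distance(z_clean, z_other)).mean()
    return alpha * robustness + (1 - alpha) * separation

loss = zero_watermark_loss(torch.rand(8, 256), torch.rand(8, 256), torch.rand(8, 256))
```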
[024] The discriminability of ZWat is measured through two key performance metrics: the normalized coefficient and the Hamming distance. In experimental trials, ZWat consistently achieved a normalized coefficient exceeding 0.968, indicating strong robustness against distortions, and a Hamming distance greater than 88, highlighting its ability to generate distinct watermark identifiers for different images.
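The two reported metrics can be computed as below; interpreting the normalized coefficient as the normalized correlation between binary watermarks is an assumption consistent with common watermarking practice, as the specification does not give the formula.

```python
# Normalized coefficient (NC) and Hamming distance between binary identifiers.
import numpy as np

def normalized_coefficient(w1: np.ndarray, w2: np.ndarray) -> float:
    w1, w2 = w1.astype(np.float64).ravel(), w2.astype(np.float64).ravel()
    return float(np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2) + 1e-12))

def hamming_distance(id1: np.ndarray, id2: np.ndarray) -> int:
    return int(np.count_nonzero(id1.ravel() != id2.ravel()))

a = np.random.default_rng(0).integers(0, 2, 256)
b = a.copy(); b[:5] ^= 1                      # slightly perturbed copy of a
print(normalized_coefficient(a, b), hamming_distance(a, b))
```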
[025] Extensive experimental validation of ZWat was conducted to evaluate its efficiency and performance. The system was tested against existing zero-watermarking and traditional watermarking techniques. Performance metrics such as watermark extraction time, resilience to image attacks, and watermark uniqueness were measured. ZWat demonstrated superior efficiency with an average processing time of 94.8 milliseconds per image, significantly outperforming traditional methods that often require multiple seconds to process complex images.
[026] In terms of resilience, ZWat was subjected to a variety of attacks, including image compression, resizing, rotation, and noise addition. The system consistently maintained a Hamming distance above 88 across all attacks, ensuring that the watermark remained intact and distinguishable. Additionally, simulations showed that ZWat outperformed other systems in maintaining watermark accuracy, even under extreme compression ratios or large-scale geometric transformations.
[027] ZWat is designed for seamless integration into existing image processing pipelines, making it a versatile tool for industries ranging from digital content creation to medical imaging and satellite imagery. The system's architecture allows for easy deployment as part of larger copyright protection frameworks, providing real-time image watermarking capabilities without the need for complex preprocessing or post-processing.
[028] The invention can also be adapted for different media types, such as video, audio, and text, through minor modifications in the feature extraction and watermark generation blocks. For instance, the CoNE and LK-PAgNet modules could be fine-tuned to handle time-series data for video content or text analysis for document watermarking. These adaptations ensure that ZWat is not only limited to images but can serve a broader range of copyright protection applications.
[029] A comparative study between ZWat and other state-of-the-art watermarking systems reveals several key advantages. Unlike traditional systems that rely on manually designed feature descriptors, ZWat's exclusive use of ANNs eliminates the need for expert intervention, making the system more scalable and adaptive. Additionally, traditional methods often struggle with data integrity and image quality compromise, especially when embedding watermarks directly into the image. ZWat overcomes this limitation by generating external identifiers that do not alter the original content.
[030] Zero-watermarking systems, while promising, have historically faced challenges with robustness and discriminability. Prior systems often relied on simple feature extraction techniques, leading to poor performance under image attacks. ZWat addresses this by leveraging deep learning models to extract highly discriminative features, ensuring that even subtle differences between images are captured while maintaining robustness to transformations.
[031] ZWat's design ensures that the system can be scaled to handle large datasets without a significant increase in processing time. The system's parallel architecture allows multiple images to be processed simultaneously, making it ideal for applications that require real-time or high-throughput watermarking. This scalability is particularly beneficial for industries like digital photography and content creation, where thousands of images are generated daily.
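A deployment along the lines of the parallel processing described above might look like the following sketch, where extract_identifier is a placeholder for the full trained pipeline and the worker count and batch of random images are assumptions.

```python
# Process many images concurrently to approximate high-throughput watermarking.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def extract_identifier(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real deployment would call the trained ZWat networks here.
    return (image.mean(axis=(0, 1)) > 127).astype(np.uint8)

images = [np.random.default_rng(i).integers(0, 256, (256, 256, 3)) for i in range(64)]
with ThreadPoolExecutor(max_workers=8) as pool:
    identifiers = list(pool.map(extract_identifier, images))
print(len(identifiers), "identifiers generated")
```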
[032] Moreover, ZWat's processing efficiency is a standout feature. With an average processing time of 94.8 milliseconds, the system is capable of embedding and extracting watermarks in real-time applications without introducing noticeable delays. This is a significant improvement over traditional systems, which often require several seconds to process complex images due to their reliance on manual feature descriptors and complex mathematical transformations.
[033] In accordance with an embodiment of the present invention, video watermarking can be achieved by extending the feature extraction blocks to capture temporal patterns across frames, while audio watermarking may involve modifications to handle frequency-domain features. These variants retain ZWat's core advantages of robustness and efficiency while adapting to the specific requirements of each media type.
[034] In conclusion, ZWat represents a significant advancement in the field of digital copyright protection, offering a novel approach through zero-watermarking technology and artificial neural networks. With its ability to maintain data integrity, provide robust watermarking, and process images efficiently, ZWat stands as a powerful tool for addressing the growing challenges of unauthorized image duplication and distribution in the digital age.
[035] It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-discussed embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description.
[036] The benefits and advantages which may be provided by the present invention have been described above with regard to specific embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or to become more pronounced are not to be construed as critical, required, or essential features of any or all of the embodiments.
Claims:
1. An end-to-end zero-watermarking system for image copyright protection, comprising:
a) a feature extraction module utilizing artificial neural networks (ANNs), wherein the feature extraction module employs a Contextual Neural Network Encoder (CoNE) and Local Keypoint Patch Aggregation Network (LK-PAgNet) for extracting local and global features from an input image;
b) a watermark generation block configured to generate a unique watermark identifier based on features extracted by the feature extraction module, wherein the watermark generation block integrates copyright information into the generated watermark without embedding it into the image;
c) a training module that systematically trains the ANNs using real-world and synthetic image datasets, ensuring robustness against distortions, image manipulations, and attacks including compression, scaling, rotation, and noise addition;
d) a testing module that validates the watermark's resilience, ensuring high discriminability by maintaining a normalized coefficient above 0.968 and a Hamming distance greater than 88 for different images;
wherein the system is configured to generate identical watermark identifiers for the same image content and distinct watermark identifiers for different images, ensuring copyright protection without compromising image integrity.
2. The system as claimed in Claim 1, wherein the watermark generation block further optimizes the generated watermark identifier for resilience under adversarial attacks, including but not limited to geometric distortions, noise addition, compression, and scaling.
3. The system as claimed in Claim 1, wherein the CoNE (Contextual Neural Network Encoder) is configured to capture high-level contextual information of the image, such as object positioning and scene structure, to enhance robustness against various image manipulations.
4. The system as claimed in Claim 1, wherein the LK-PAgNet (Local Keypoint Patch Aggregation Network) is configured to extract fine-grained keypoint-based features from the image and aggregate them to form a distinctive and detailed feature representation for watermark generation.
5. The system as claimed in Claim 1, further comprising a scalability feature that enables parallel processing of multiple images, wherein the system can handle large-scale datasets in real time without significant processing delays, ensuring efficient watermark generation and extraction.
6. A method for copyright protection using an end-to-end zero-watermarking system, comprising the steps of:
i. extracting both local and global image features using artificial neural networks (ANNs) with CoNE and LK-PAgNet;
ii. generating a unique watermark identifier based on the extracted features without embedding the watermark into the image;
iii. integrating copyright information into the watermark identifier;
iv. training the system using a diverse dataset of images to ensure robustness against various image manipulations; and
v. validating the watermark's resilience through systematic testing, ensuring high discriminability with a normalized coefficient above 0.968 and a Hamming distance greater than 88.
7. The method as claimed in Claim 6, wherein the training process includes subjecting the system to adversarial attacks such as noise, compression, rotation, and scaling, to ensure the watermark identifier remains robust under all tested conditions.
8. The method as claimed in Claim 6, wherein the generated watermark identifier can be stored in an external copyright registry, linking the image to its respective owner without altering the original image content.
9. The method as claimed in Claim 6, wherein the watermark generation process ensures real-time processing efficiency, reducing the average time for generating and extracting watermarks to 94.8 milliseconds per image.
10. The method as claimed in Claim 6, further comprising adapting the feature extraction module to handle different types of media, including video, audio, and text, by modifying CoNE and LK-PAgNet to capture features specific to each media type.
Documents
Name | Date |
---|---|
202411085338-COMPLETE SPECIFICATION [07-11-2024(online)].pdf | 07/11/2024 |
202411085338-DECLARATION OF INVENTORSHIP (FORM 5) [07-11-2024(online)].pdf | 07/11/2024 |
202411085338-DRAWINGS [07-11-2024(online)].pdf | 07/11/2024 |
202411085338-FORM 1 [07-11-2024(online)].pdf | 07/11/2024 |
202411085338-FORM 18 [07-11-2024(online)].pdf | 07/11/2024 |
202411085338-FORM-9 [07-11-2024(online)].pdf | 07/11/2024 |
202411085338-REQUEST FOR EARLY PUBLICATION(FORM-9) [07-11-2024(online)].pdf | 07/11/2024 |