Optimized DenseU-Net with Attention Mechanism for High-Accuracy Hyperspectral Image Classification


ORDINARY APPLICATION

Published

Filed on 6 November 2024

Abstract

The invention provides a method for hyperspectral image classification using an optimized DenseU-Net architecture with integrated attention mechanisms (DesU-NetAM). The method involves preprocessing hyperspectral images, extracting spectral-spatial features using DenseNet-based encoding and U-Net-based decoding layers, and refining features with attention mechanisms. By optimizing with batch normalization and data augmentation, the model achieves high classification accuracy and reduced computational complexity, making it suitable for applications in remote sensing, agriculture, and environmental monitoring.

Patent Information

Application ID: 202441084923
Invention Field: BIO-MEDICAL ENGINEERING
Date of Application: 06/11/2024
Publication Number: 46/2024

Inventors

Name | Address | Country | Nationality
PRATHYUSHA GAJAWADA | Department of Computer Science and Engineering, B V Raju Institute of Technology, Vishnupur, Narsapur, Medak, Telangana 502313 | India | India
NIROSHA VEERAMACHANENI | Department of Computer Science and Engineering, B V Raju Institute of Technology, Vishnupur, Narsapur, Medak, Telangana 502313 | India | India
BALAJI KARNAM | Department of Computer Science and Engineering, B V Raju Institute of Technology, Vishnupur, Narsapur, Medak, Telangana 502313 | India | India

Applicants

Name | Address | Country | Nationality
B V RAJU INSTITUTE OF TECHNOLOGY | Department of Computer Science and Engineering, B V Raju Institute of Technology, Vishnupur, Narsapur, Medak, Telangana 502313 | India | India

Specification

Description:

FIELD OF THE INVENTION:
The present invention relates to the field of artificial intelligence (AI) and machine learning, specifically to deep learning architectures for image processing. More particularly, it involves an optimized DenseU-Net architecture with integrated attention mechanisms designed for accurate classification of hyperspectral images, useful in applications such as remote sensing, agricultural analysis, medical imaging, and environmental monitoring.
BACKGROUND OF THE INVENTION:
Hyperspectral imaging (HSI) is a technique used to obtain the spectral information across multiple bands for each pixel in an image. This provides a detailed representation of the material characteristics within the image, which is beneficial for applications that require precise image classification. However, processing hyperspectral images is challenging due to high-dimensional data and complex patterns, necessitating advanced algorithms and architectures.

Traditional Convolutional Neural Networks (CNNs) have demonstrated limitations when applied to hyperspectral images due to their lack of spectral-spatial information integration. Although recent advancements in Dense Convolutional Networks (DenseNets) and U-Net architectures have shown promising results in enhancing feature propagation, they are still inadequate for hyperspectral image classification due to their limited ability to focus on salient regions. Attention mechanisms have been introduced as a solution to this limitation, allowing the network to emphasize important features while reducing unnecessary complexity.

The present invention provides an optimized DenseU-Net architecture combined with an attention mechanism (DesU-NetAM) to achieve high accuracy in hyperspectral image classification. By incorporating attention layers, the architecture enhances the model's ability to focus on relevant spectral and spatial features, thereby improving classification performance.
SUMMARY OF THE INVENTION:
The primary objective of this invention is to introduce a novel deep learning model called DesU-NetAM (Optimized DenseU-Net with Attention Mechanism) for accurate classification of hyperspectral images. This method leverages the following innovative steps:
1. DenseU-Net Architecture: Utilizes DenseNet's feature reuse capabilities in conjunction with U-Net's encoder-decoder structure to improve spectral-spatial feature learning.
2. Attention Mechanism Integration: Integrates attention modules within the DenseU-Net layers to highlight relevant features and suppress irrelevant ones.
3. Optimization Techniques: Implements model optimization techniques, such as batch normalization, parameter tuning, and early stopping, to improve model efficiency and classification accuracy.
4. Training and Inference: Utilizes transfer learning and fine-tuning strategies for effective training of the model on hyperspectral image datasets, thereby enhancing its adaptability to different datasets.
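The early-stopping criterion mentioned among the optimization techniques above can be sketched as follows. This is a generic illustrative implementation, not code from the patent; the epoch-wise validation losses are hypothetical inputs:

```python
def train_with_early_stopping(losses, patience=2):
    """Early-stopping sketch: stop when the validation loss fails to
    improve for `patience` consecutive epochs; return the epoch of
    the best (lowest) loss seen so far."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs
                break
    return best_epoch

print(train_with_early_stopping([1.0, 0.8, 0.7, 0.75, 0.74, 0.9]))  # 2
```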

The proposed architecture provides high accuracy, reduced computational complexity, and better generalization ability in hyperspectral image classification.

DETAILED DESCRIPTION OF THE INVENTION:
Step 1: Overview of DesU-NetAM Architecture
The DesU-NetAM architecture combines elements of DenseNet and U-Net architectures while incorporating attention mechanisms to focus on significant features. The architecture consists of an encoder-decoder structure with skip connections for retaining spatial information across different layers. Each Dense block in the encoder is followed by an attention mechanism to refine the features.

Step 2: Components of DesU-NetAM Architecture

DenseNet Encoder: The encoder portion of the architecture is built using DenseNet blocks. DenseNet blocks enable feature reuse, reducing the number of parameters and allowing better learning of hyperspectral data patterns. Dense connections in the encoder facilitate information flow and prevent gradient vanishing issues, ensuring effective feature extraction.
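The dense connectivity described above, where each layer receives the concatenation of all earlier feature maps, can be sketched in NumPy. The random 1x1 projections below are illustrative stand-ins for trained convolutional layers:

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    """Toy dense block: each layer sees the concatenation of all
    previous feature maps (DenseNet-style feature reuse).
    x: (H, W, C) feature map; each layer adds `growth` channels."""
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)   # reuse all earlier maps
        w = rng.standard_normal((inp.shape[-1], growth))
        out = np.maximum(inp @ w, 0.0)            # 1x1 conv + ReLU stand-in
        features.append(out)
    return np.concatenate(features, axis=-1)

x = np.ones((8, 8, 2))
y = dense_block(x)
print(y.shape)  # channels grow to 2 + 3*4 = 14
```

Note how the channel count grows additively rather than multiplicatively, which is what keeps the parameter count low relative to a plain stack of convolutions.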

Attention Mechanism: Attention layers are integrated after each Dense block in the encoder to enhance the model's ability to focus on important regions of the hyperspectral data. The attention mechanism improves spectral-spatial resolution by allowing the network to highlight relevant features. This is implemented using self-attention, where attention weights are dynamically computed based on the feature maps.
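A much-simplified stand-in for the attention weighting described above can be sketched as follows; here the attention scores are derived from the feature map itself (its channel mean), softmaxed over all spatial positions, and used to reweight the map. This is illustrative only, not the patent's exact self-attention formulation:

```python
import numpy as np

def spatial_attention(feat):
    """Minimal spatial attention sketch: one score per pixel from the
    channel mean, softmax over all H*W positions, then reweighting.
    Rescaled by H*W so a uniform map passes through unchanged.
    feat: (H, W, C)."""
    h, w, c = feat.shape
    scores = feat.mean(axis=-1).reshape(-1)          # one score per pixel
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over H*W
    return feat * weights.reshape(h, w, 1) * (h * w)

f = np.zeros((4, 4, 3)); f[1, 2] = 5.0               # one salient pixel
out = spatial_attention(f)                           # salient pixel amplified
```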

U-Net Decoder with Skip Connections: The decoder portion of the architecture follows the U-Net framework, where skip connections link encoder and decoder layers to ensure spatial information is preserved throughout the network. These connections allow the model to maintain high-resolution features during the upsampling process, leading to better classification accuracy.
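A single decoder step with its skip connection can be sketched as below; nearest-neighbour upsampling stands in for a learned transposed convolution, and the shapes are hypothetical:

```python
import numpy as np

def upsample_and_skip(decoder_feat, encoder_feat):
    """U-Net-style decoder step sketch: 2x nearest-neighbour upsampling
    of the decoder map, then concatenation with the matching encoder
    map (the skip connection) along the channel axis."""
    up = decoder_feat.repeat(2, axis=0).repeat(2, axis=1)  # 2x upsample
    return np.concatenate([up, encoder_feat], axis=-1)     # skip connection

dec = np.ones((4, 4, 8))   # low-resolution decoder features
enc = np.ones((8, 8, 4))   # matching encoder features
merged = upsample_and_skip(dec, enc)
print(merged.shape)        # (8, 8, 12)
```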

Optimization Layer: The architecture also includes batch normalization layers and dropout layers to prevent overfitting and improve generalization.
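The batch normalization and dropout pairing can be sketched in NumPy as follows. This is an illustrative forward pass only (no learned scale/shift parameters, no running statistics):

```python
import numpy as np

def batchnorm_dropout(x, p_drop=0.5, eps=1e-5, training=True, rng=None):
    """Sketch of the optimization layer: per-feature batch
    normalization followed by inverted dropout. x: (N, C)."""
    mu, var = x.mean(axis=0), x.var(axis=0)
    x = (x - mu) / np.sqrt(var + eps)        # batch normalization
    if training:
        rng = rng or np.random.default_rng(0)
        mask = rng.random(x.shape) >= p_drop
        x = x * mask / (1.0 - p_drop)        # inverted dropout
    return x

out = batchnorm_dropout(np.arange(12, dtype=float).reshape(6, 2))
```

At inference time (`training=False`) dropout is disabled, matching standard practice.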

Step 3: Process of Hyperspectral Image Classification

Data Preprocessing: Hyperspectral image data is preprocessed to remove noise and standardize pixel values across spectral bands. Data augmentation techniques, such as rotation, flipping, and cropping, are applied to increase dataset variability and improve model robustness.
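The per-band standardization and spatial augmentations described above can be sketched as follows; the cube dimensions are hypothetical:

```python
import numpy as np

def preprocess(cube):
    """Standardize each spectral band of a hyperspectral cube
    (H, W, bands) to zero mean and unit variance."""
    mu = cube.mean(axis=(0, 1), keepdims=True)
    sd = cube.std(axis=(0, 1), keepdims=True) + 1e-8
    return (cube - mu) / sd

def augment(cube):
    """Yield simple spatial augmentations (rotations and flips),
    applied identically across all spectral bands."""
    for k in range(4):
        yield np.rot90(cube, k, axes=(0, 1))
    yield np.flip(cube, axis=0)
    yield np.flip(cube, axis=1)

cube = np.random.default_rng(0).random((16, 16, 30))
views = list(augment(preprocess(cube)))
print(len(views))  # 6 augmented views per input cube
```

Augmentations are applied only along the spatial axes so the spectral signature of each pixel is preserved.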

Model Training: The DesU-NetAM model is trained using supervised learning on labeled hyperspectral image datasets. The model undergoes backpropagation, with loss calculated based on cross-entropy for classification tasks. Transfer learning is used to initialize weights from pre-trained models on similar datasets, enhancing convergence speed.
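The cross-entropy objective used during training can be sketched as below, with logits flattened to one row per pixel; the example values are hypothetical:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Pixel-wise cross-entropy sketch for the classification head.
    logits: (N, num_classes) raw scores; labels: (N,) class indices."""
    z = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.1, -1.0], [0.0, 3.0, 0.5]])
loss = cross_entropy(logits, np.array([0, 1]))
```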

Inference and Post-Processing: After training, the model is used for hyperspectral image classification. The output is a classified image with each pixel labeled based on material composition. Post-processing is applied to refine classification boundaries and enhance visual quality.
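One common boundary-refinement step of the kind described above is a majority (mode) filter over the classified label map; this is a generic illustrative choice, not necessarily the patent's exact post-processing:

```python
import numpy as np

def mode_filter(label_map):
    """Post-processing sketch: replace each interior pixel's class with
    the majority class in its 3x3 neighbourhood to smooth boundaries
    (edge pixels are left unchanged)."""
    out = label_map.copy()
    h, w = label_map.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = label_map[i - 1:i + 2, j - 1:j + 2].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[counts.argmax()]
    return out

noisy = np.zeros((5, 5), dtype=int); noisy[2, 2] = 1  # isolated misclassified pixel
print(mode_filter(noisy)[2, 2])  # 0 after smoothing
```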
Step 4: Benefits and Advantages

Improved Classification Accuracy: The attention mechanism enhances the model's ability to focus on relevant spectral-spatial features, resulting in higher classification accuracy.

Efficient Feature Propagation: The DenseNet structure facilitates feature reuse, reducing the number of parameters and computational requirements.

Enhanced Generalization: Optimization techniques, such as dropout and batch normalization, improve the model's ability to generalize across different hyperspectral datasets.

Reduced Computational Complexity: The architecture is designed to be computationally efficient, making it feasible for large-scale hyperspectral image processing.
Claims:
Claim 1: I/We claim a method for hyperspectral image classification, comprising the steps of: a) Preprocessing hyperspectral images to standardize pixel values and remove noise; b) Utilizing a DenseU-Net architecture with integrated attention mechanisms to extract spectral-spatial features from the hyperspectral images; c) Applying optimization techniques including batch normalization and dropout to improve model generalization and prevent overfitting; d) Training the model using supervised learning on labeled hyperspectral datasets to produce classified output images; e) Utilizing post-processing techniques to refine classification boundaries in the output image.
Claim 2: I/We claim the method of claim 1, wherein the DenseU-Net architecture comprises DenseNet-based encoding layers followed by U-Net-based decoding layers, with skip connections for preserving spatial information across layers.
Claim 3: I/We claim the method of claim 1, wherein attention mechanisms are integrated after each Dense block in the encoder to emphasize relevant spectral-spatial features for enhanced classification accuracy.
Claim 4: I/We claim the method of claim 1, wherein the model is optimized using data augmentation techniques including rotation, flipping, and cropping to increase dataset variability and improve model robustness.
Claim 5: I/We claim the method of claim 1, wherein transfer learning is applied to initialize model weights, allowing faster convergence during training.
Claim 6: I/We claim a hyperspectral image classification system utilizing a DenseU-Net with integrated attention mechanisms and configured to perform the method of claim 1.

Documents

Name | Date
202441084923-COMPLETE SPECIFICATION [06-11-2024(online)].pdf | 06/11/2024
202441084923-DECLARATION OF INVENTORSHIP (FORM 5) [06-11-2024(online)].pdf | 06/11/2024
202441084923-FORM 1 [06-11-2024(online)].pdf | 06/11/2024
202441084923-REQUEST FOR EARLY PUBLICATION(FORM-9) [06-11-2024(online)].pdf | 06/11/2024
