AI-DRIVEN CYBERBULLYING DETECTION AND INTERVENTION SYSTEM UTILIZING HYBRID LSTM-BERT ARCHITECTURE
ORDINARY APPLICATION
Published
Filed on 18 November 2024
Abstract
This invention combines machine learning (ML) and artificial intelligence (AI) to detect and mitigate online misbehaviour, especially cyberbullying. The approach uses a hybrid architecture that employs Bidirectional Encoder Representations from Transformers (BERT) to improve contextual awareness of social networking posts and Long Short-Term Memory (LSTM) networks for efficient sequence analysis. By combining these techniques, the system accurately identifies instances of cyberbullying in social networking posts, addressing a growing problem in the modern digital landscape. The framework also includes a robust assessment stage that rates the severity of each detected cyberbullying incident, enabling precise classification of misconduct. Because these ratings are accumulated continuously, the system can carry out corrective measures efficiently: when a user's cumulative score exceeds an established threshold, the platform promptly blocks and removes the user's account from the relevant social networking application, creating a more secure digital space. The innovation is significant because it enables social networking companies to stop harmful conduct before it escalates, ultimately resulting in a safer virtual community. Potential enhancements include real-time monitoring and extending the system to handle other forms of online misbehaviour beyond cyberbullying.
Patent Information
Field | Value |
---|---|
Application ID | 202441089126 |
Invention Field | COMPUTER SCIENCE |
Date of Application | 18/11/2024 |
Publication Number | 48/2024 |
Inventors
Name | Address | Country | Nationality |
---|---|---|---|
VANDANA REDDY | Department of Computer Science and Engineering, CHRIST (Deemed to be University), Kanmanike, Bengaluru-560074. | India | India |
PREETHAM NOEL P | Department of Computer Science and Engineering, CHRIST (Deemed to be University), Kanmanike, Bengaluru-560074. | India | India |
BOPPURU RUDRA PRATHAP | Department of Computer Science and Engineering, CHRIST (Deemed to be University), Kanmanike, Bengaluru-560074. | India | India |
Applicants
Name | Address | Country | Nationality |
---|---|---|---|
CHRIST UNIVERSITY | CHRIST (Deemed to be University), Bangalore - 560074. | India | India |
Specification
FIELD OF INVENTION
[0001] Cyber bullying has become a pervasive issue with the rise of social media and online communication platforms. Various laws and regulations have been enacted to combat this problem. For instance, the Information Technology Act, 2000, and specific sections of the Indian Penal Code (IPC) provide legal frameworks to address cyber bullying and online harassment. However, the enforcement of these regulations is challenging due to the sheer volume of online content and the difficulty in monitoring and identifying instances of cyber bullying promptly.
[0002] Traditional systems rely heavily on manual reporting and moderation, which are not sufficient to handle the scale of online interactions. These systems often result in delayed responses, allowing harmful content to proliferate and cause damage before any action is taken. Moreover, they do not effectively integrate with legal mechanisms to ensure that offenders are held accountable under existing laws.
BACKGROUND OF THE INVENTION:
[0003] Khan and Rizvi employed an LSTM network for detecting sequential patterns in text indicative of bullying, achieving notable improvements in recall.
[0004] Nguyen and Le highlighted the strengths of LSTM in handling long conversations, which may be particularly useful in detecting bullying within comment threads and social media interactions.
[0005] Park and Lee proposed integrating LSTM with sentiment analysis to better explain the emotional context of potential bullying, thereby enhancing detection accuracy.
[0006] Zhang et al. took this a step further by integrating attention mechanisms into LSTM networks, which improved the model's ability to focus on the most relevant text segments for more accurate cyber bullying detection.
[0007] Kim and Kwon introduced a hierarchical LSTM model designed to process text at multiple levels, from sentences to entire documents, to detect bullying patterns more effectively.
[0008] The combination of BERT and LSTM has shown significant promise. For instance, Sharma et al. integrated BERT for contextual understanding with LSTM for sequential data processing, achieving superior results in detecting cyber bullying against celebrities.
[0009] Verma and Singh developed a real-time cyber bullying detection system using this hybrid BERT-LSTM model, which demonstrated high accuracy and efficiency.
[00010] Further research has supported the scalability and adaptability of these hybrid models. Lee et al. evaluated the scalability of BERT-LSTM integrations and found that they can effectively manage large datasets.
[00014] Kumar et al. further enhanced the hybrid model by incorporating emotion recognition techniques, improving the detection of emotionally expressive bullying instances.
[00015] Ahmed and Raj successfully applied these models across various social media platforms, proving their feasibility in diverse contexts.
[00016] Comparative studies have also emphasized the complementary strengths of BERT and LSTM. Joshi and Desai observed that while BERT excels in contextual understanding, LSTM is more efficient for sequence processing.
[00017] Saha and Roy found that hybrid BERT-LSTM models outperformed standalone models in terms of overall performance.
[00018] Malhotra et al. demonstrated the adaptability of pre-trained BERT and LSTM models to cyber bullying detection through transfer learning.
[00019] The explainability of these models has also been explored. Reddy et al. investigated how BERT and LSTM models make decisions in the detection process, contributing to a better understanding of their inner workings.
[00020] Singh and Gupta further strengthened these models against adversarial attacks, enhancing their reliability.
[00021] Lastly, Thakur et al. introduced continuous learning frameworks for BERT and LSTM models, allowing them to evolve in response to real-time bullying patterns.
[00022] Bose and Rao expanded the application of these models by integrating text, image, and video analysis into a comprehensive solution for cyber bullying detection on multimedia platforms.
[00023] Social media presents a significant cybersecurity risk for every business, as individuals often share extensive personal information online, including details about friendships, demographics, family, activities, and work-related data. If an organization's policies, training, and technology do not adequately address these issues, this shared information can pose potential risks, particularly due to employee behaviour that may compromise critical company information. Social media has evolved into a reconnaissance tool for malicious actors, turning user accounts into treasure troves for cybercriminals. This research project aims to collect and analyze open-source data from platforms like LinkedIn to uncover data leakage and assess personality types using software as a service (SaaS), ultimately determining whether behavioral factors can predict individuals' attitudes toward disclosing sensitive data.
[00024] This study is informed by the work of Mazzarolo et al. which addresses the risks of employee cyber misconduct on social media and highlights the need for organizations to protect against unintentional insider threats.
[00025] Mazzarolo, G., Casas, J. C. F., Jurcut, A. D., & Le-Khac, N. A. Protect against unintentional insider threats: The risk of an employee's cyber misconduct on a social media site. In Cybercrime in context: The human factor in victimization, offending, and policing (pp. 79-101).
[00026] Data safety within an organization was frequently the responsibility of a single individual or a small team in the information technology department. However, as information becomes more valuable to fraudulent organizations, cybersecurity now plays an essential part in corporate strategy, with all stakeholders bearing some accountability. Comparably, even with a greater understanding of the consequences, scholarly misconduct is still frequently detected by a single person using text-matching techniques. Johnson, Reddy, and Davies propose that lessons from cyber security can enhance academic integrity, suggesting that techniques used to protect data in higher education institutions could also be adapted for detecting academic misconduct. This is particularly relevant in light of the findings by Mazzarolo et al., which address the risks associated with cyber misconduct on social media.
BRIEF SUMMARY OF THE INVENTION
[00027] The proposed invention leverages an LSTM (Long Short-Term Memory) network to analyze and rank online misconduct detected within social media posts, forums, and online communities. The system includes a process that begins with data collection, feature extraction, and evaluation metrics. It incorporates ethical considerations, legal compliance, user feedback, and adversarial example handling.
[00028] The core functionality involves a Machine Learning (ML) algorithm that rates and grades comments based on pre-trained models. Depending on the severity of the detected misconduct, the system can automatically decide whether to freeze a user's account or delete it. The final decision is made based on a cumulative score evaluation and an escalation protocol, ensuring appropriate actions are taken to uphold platform integrity and user safety.
BRIEF DESCRIPTION OF THE DRAWINGS
[00042] The complete description of the current research work is proposed in this section. The current research work is an AI-Driven Cyber bullying Detection and Intervention System Utilizing a Hybrid LSTM-BERT Architecture. In this work, an AI-driven system is designed to detect, evaluate, and respond to online misconduct such as cyber bullying. The process begins with data collection from various sources like social media posts and forums. Features are extracted, and evaluation metrics are applied, considering ethical and legal aspects. The core of the system is an LSTM network that processes the detected misconduct. This network analyzes the severity of the content, determining whether the user should be retained or removed. The analysis results are fed into a machine learning algorithm, which rates and grades the comments' context based on pre-trained models. The final decision-making phase involves either freezing the user's account or automatically deleting it. This decision is based on a cumulative score evaluation and an escalation protocol, ensuring that only severe cases of misconduct lead to account termination. The entire process is designed to ensure platform safety, compliance with regulations, and user retention in less severe cases. The invention will be described and explained with the accompanying drawing in which:
DETAILED DESCRIPTION
METHODOLOGY
[00043] The Long Short-Term Memory (LSTM) network is the key element of the hybrid framework used to detect and mitigate online misconduct. The LSTM analyzes sequences of text data taken from social networking posts. Its primary function is to identify occurrences of online abuse accurately by capturing and modeling the sequential relationships and trends in language. Because it understands how communication progresses over the course of an episode, the LSTM network is especially effective at identifying subtle and intricate forms of online misconduct. Alongside detection, the technique incorporates a ranking process within the LSTM network: each occurrence of online misconduct identified by the LSTM receives an intensity rating as the text sequences are analyzed. This rating, based on the LSTM's learned language patterns and classifications, ensures a precise determination of severity. By capturing long-range dependencies and background information within the text, the LSTM provides a nuanced assessment of every scenario, enhancing the system's overall effectiveness.
[00044] A process for monitoring the aggregate severity assigned by the LSTM over time also forms part of the technique. This mechanism automatically removes or suspends an individual's social networking account once their cumulative score reaches a predetermined threshold. Thanks to the LSTM's robust sequence modeling, the overall rating appropriately reflects severe or frequent instances of cyber bullying, allowing the platform to react swiftly and restrict improper conduct on the web.
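A minimal sketch of this cumulative tracking is given below. The SeverityTracker class, the threshold of 10.0, and the example severity value are illustrative assumptions; the specification does not disclose concrete numbers.

```python
# Hedged sketch of the cumulative-severity tracking described in [00044].
# Threshold and severity values are assumed for illustration only.
from collections import defaultdict

CUMULATIVE_THRESHOLD = 10.0   # assumed account-removal threshold

class SeverityTracker:
    """Accumulates per-user severity ratings assigned by the LSTM."""

    def __init__(self):
        self.scores = defaultdict(float)

    def record(self, user_id: str, severity: float) -> str:
        """Add one incident's severity and return the resulting action."""
        self.scores[user_id] += severity
        if self.scores[user_id] >= CUMULATIVE_THRESHOLD:
            return "remove_account"       # block and eliminate the account
        return "continue_monitoring"

# Usage: each detected incident feeds its LSTM-assigned rating in here.
tracker = SeverityTracker()
action = tracker.record("user_42", severity=4.5)
```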
[00045] To model sequential relationships, the embedded representations are passed to the Long Short-Term Memory (LSTM) layer. LSTMs are well suited to capturing background information within a sequence because they can manage long-range dependencies and variations in order. An attention mechanism then prioritizes the most relevant portions of the sequence. This weighting improves effectiveness by allowing the framework to evaluate different segments of the input sequence separately and identify the essential data. The combined representations from the attention mechanism and the LSTM are sent to fully connected layers for classification. These final layers further process the sequence representation to predict the category label.
[00046] The output layer generates probabilities for the various categories of cyberbullying. A softmax function converts each output of the final layer into a category probability. This approach is used in a continuous tracking system that responds to and assists individuals in need on its own. The platform watches for online misconduct by constantly monitoring online behavior and responding to individuals immediately, helping to minimize harm.
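The architecture of paragraphs [00045] and [00046] can be sketched as follows. This is a minimal illustrative implementation, not the disclosed one: the bert-base-uncased checkpoint, the hidden size of 128, the simple additive attention, and the two-class output are all assumptions.

```python
# Hedged sketch of the hybrid pipeline: BERT embeddings -> bidirectional
# LSTM -> attention-weighted pooling -> fully connected softmax classifier.
import torch
import torch.nn as nn
from transformers import BertModel

class HybridBertLstm(nn.Module):
    def __init__(self, hidden=128, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # additive attention scores
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT (kept frozen in this sketch).
        with torch.no_grad():
            emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.lstm(emb)                         # sequence modelling
        weights = torch.softmax(self.attn(seq), dim=1)  # focus on relevant tokens
        pooled = (weights * seq).sum(dim=1)             # attention-weighted summary
        # Softmax turns the final layer's outputs into category probabilities.
        return torch.softmax(self.classifier(pooled), dim=-1)
```

During training, the softmax would normally be folded into the loss function; it is kept explicit here to mirror the description in [00046].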
[00047] Nonetheless, handling distant dependencies often proves difficult for a standalone LSTM, and such a model cannot manage the intricacy of multi-turn discussions where circumstances may change significantly. This constraint reduces its effectiveness in identifying subtle types of misconduct, such as disguised remarks or indirect bullying. Although BERT's contextual representations are strong, they are not precisely tailored to the unique problems of online misconduct detection. BERT's bidirectional nature is an improvement over the LSTM alone, but models such as BELM systematically optimize the strategy, making standalone BERT less suitable for real-world scenarios where vocabulary frequently becomes context-specific and unreliable.
INPUT: Social media Tweet
OUTPUT: The probability of Cyber bullying content and user cumulative score on Cyber bullying.
STEP 1 Data Collection:
• Source: Social media posts, forums, and online communities are monitored for user interactions such as likes, dislikes, and comments.
• Purpose: Collects raw text data from various user interactions across these platforms to serve as input for the LSTM network.
STEP 2 Feature Extraction:
• Processing: The collected text data undergoes feature extraction to highlight relevant characteristics that may indicate online misconduct.
• Significance: These features include elements like sentiment, keyword usage, and context, which help in analyzing the content.
STEP 3 LSTM Network Analysis
• LSTM Network Role: The LSTM (Long Short-Term Memory) network processes the extracted features, analyzing the sequence of user interactions to detect potential online misconduct.
• Contextual Awareness: The LSTM considers the context of the interactions, enabling it to recognize patterns of misconduct over time.
STEP 4 Evaluation Metrics
• Assessment: The LSTM network ranks detected instances of online misconduct based on predefined evaluation metrics.
• Severity Determination: This ranking helps in determining the severity of the misconduct and the necessary course of action.
STEP 5 Machine Learning Algorithm
• Algorithm Role: This algorithm further processes the ranked instances by rating and grading the comments' context using pre-trained models.
• Context Grading: It assesses the context and assigns a score to the misconduct, which influences subsequent actions.
STEP 6 Legal and Regulatory Compliance
• Compliance Checks: The platform ensures that actions taken align with legal and regulatory frameworks, safeguarding against wrongful account actions.
• Ethical Considerations: In addition to legal compliance, the platform considers ethical aspects, ensuring fair treatment of users.
STEP 7 User Feedback and Iteration
• Feedback Loop: User feedback is collected and integrated into the system, allowing for continuous improvement of the LSTM network's detection capabilities.
• Adversarial Example Handling: The system is also equipped to handle adversarial examples (deliberate attempts to bypass detection) by iterating and refining its algorithms.
STEP 8 Decision Making
• Severity Score Threshold: Based on the severity score derived from the evaluation and grading process, the system decides whether to freeze the user account or initiate automated deletion.
• Cumulative Score Evaluation: It checks whether the cumulative score of detected misconduct exceeds the threshold for serious action.
STEP 9 Automated Actions
• User Account Management: If the severity is high, the system can automatically freeze or delete the user account.
• Escalation Protocol: For less severe cases, an escalation protocol may be followed, which involves monitoring the user more closely rather than immediate action.
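The control flow of Steps 1 through 9 can be summarized in the hedged sketch below. Every helper, type, and threshold is a hypothetical stand-in for components the specification does not name.

```python
# Hedged control-flow sketch of Steps 1-9. All names and thresholds are
# illustrative placeholders, not components disclosed in the specification.
from dataclasses import dataclass

FREEZE_THRESHOLD, DELETE_THRESHOLD = 5.0, 10.0   # assumed values

@dataclass
class Post:
    user_id: str
    text: str

def rate_severity(text: str) -> float:
    """Placeholder for Steps 2-5: feature extraction, LSTM analysis,
    and ML grading would produce a real severity score here."""
    return 0.0

def moderate(posts: list[Post], cumulative: dict[str, float]) -> None:
    for post in posts:                               # Step 1: collected data
        severity = rate_severity(post.text)          # Steps 2-5
        if severity <= 0:
            continue                                 # no misconduct detected
        cumulative[post.user_id] = cumulative.get(post.user_id, 0.0) + severity
        total = cumulative[post.user_id]             # Step 8: cumulative score
        if total >= DELETE_THRESHOLD:
            print(f"delete account {post.user_id}")  # Step 9: severe case
        elif total >= FREEZE_THRESHOLD:
            print(f"freeze account {post.user_id}")
        else:
            print(f"escalate monitoring {post.user_id}")  # escalation protocol
```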
Word-Encoded Value Dataset:
This dataset includes an inventory of terms along with their associated numerical representations. These terms are typically key phrases or words that are crucial for natural language processing in most linguistic scenarios. For example, "Read" has an encoded value of 11009, whereas "people" is assigned 15698. The encoding maps each word in a piece of text consistently, making it useful for text analysis, search, and comparison within the system. Such encoding can also improve the efficiency of various text-manipulation computations, especially by reducing their computational requirements when dealing with large text messages.
Stop Words Dataset:
The dataset includes a predetermined collection of stop words such as "i," "me," "our," "he," "it," "they," and "this." Since these words appear frequently and carry little significance in the text, they are typically removed. By eliminating them from the text data, the algorithm can more easily find relevant information, improving the effectiveness and precision of text evaluation.
The present research develops and implements a data collection system that improves the performance of various NLP procedures. This is achieved by encoding words and filtering out stop words, so that text can be examined in greater detail during evaluation, comparison, and retrieval. This can prove very beneficial in software applications such as content classification and sentiment evaluation, among others.
The algorithm's overall analysis efficiency and precision are improved by combining the stop-word and encoded-value datasets, which together produce an effective instrument for organizing content and substantive text evaluation.
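A minimal sketch of how the two datasets might operate together is shown below. The two encoded values come from the description above; skipping words that are not in the encoding inventory is an assumption.

```python
# Hedged sketch: remove stop words, then map surviving terms to their
# encoded values. Unlisted words are skipped (an assumed policy).
STOP_WORDS = {"i", "me", "our", "he", "it", "they", "this"}
WORD_ENCODINGS = {"read": 11009, "people": 15698}   # values from the text

def preprocess(text: str) -> list[int]:
    """Drop stop words and encode the remaining known terms."""
    tokens = [t.lower() for t in text.split()]
    kept = [t for t in tokens if t not in STOP_WORDS]
    return [WORD_ENCODINGS[t] for t in kept if t in WORD_ENCODINGS]

print(preprocess("They read about people"))   # -> [11009, 15698]
```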
CLAIMS
[0028] We claim:
1. The integrated AI/ML system, utilizing LSTM networks with contextual awareness, significantly improves cyber bullying detection by analyzing sequences and understanding nuanced online interactions.
2. The platform's automated blocking feature actively monitors real-time behavior, identifying and preventing persistent or repeated instances of cyber bullying without requiring manual intervention.
3. Instantaneous enforcement mechanisms ensure swift action by automatically applying corrective measures, such as content removal or account suspension, to enhance platform security and enforce compliance with anti-cyber bullying regulations.