BIDIRECTIONAL LONG SHORT-TERM MEMORY IN HATE SPEECH DETECTION PROBLEM ON NETWORKS
Published September 2024
Abstract
The pervasive problem of hate speech on social media has received considerable attention in computational linguistics and artificial intelligence. This paper presents a study of bidirectional long short-term memory (Bi-LSTM) models for detecting and analyzing hate speech, motivated by the need for modern machine learning methods that can process the large and constantly changing volume of content on social media and accurately identify and reduce instances of hate speech. The paper reviews existing hate speech detection approaches and traces their evolution over time, highlighting the shortcomings of traditional models and the need for more sophisticated, context-aware methods. Bi-LSTM architectures, known for their effectiveness in capturing long-range dependencies in sequential data, represent a methodological advance in this setting. The experimental results show that Bi-LSTM models capture the complexities of social media language more fully and therefore offer a more effective and efficient method for detecting hate speech. Through rigorous experimentation and analysis, the study contributes to the ongoing discussion of online civility and proposes a robust framework that can be adapted and extended across social media platforms to help create safer online communities.
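To make the Bi-LSTM idea concrete, the sketch below shows a minimal bidirectional LSTM text classifier in Python with Keras. It is an illustrative example only: the vocabulary size, sequence length, layer widths, and training setup are assumptions for the sketch and are not the configuration reported in the paper.

# Minimal Bi-LSTM hate speech classifier sketch (illustrative only).
# Vocabulary size, sequence length, and layer widths are assumed values,
# not the settings used by the authors.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 100         # assumed maximum post length in tokens
EMBED_DIM = 128       # assumed embedding dimension

def build_bilstm_classifier() -> tf.keras.Model:
    """Binary classifier: 1 = hate speech, 0 = not hate speech."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(inputs)
    # The Bidirectional wrapper runs one LSTM left-to-right and another
    # right-to-left, so each position is encoded with both past and
    # future context of the post.
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_bilstm_classifier()
    model.summary()

A model like this would be trained on tokenized, integer-encoded posts padded to MAX_LEN, with binary labels; the bidirectional encoding is what lets the classifier use context on both sides of a potentially offensive token.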
Keywords
BiLSTM, LSTM, AI, NLP, social media
Language
English
How to Cite
[1] Azhibekova, Z., Aliyeva, A., Sarsenbiyeva, N., Kaldarova, B. and Toktarova, A. 2024. BIDIRECTIONAL LONG SHORT-TERM MEMORY IN HATE SPEECH DETECTION PROBLEM ON NETWORKS. Bulletin of Abai KazNPU. Series of Physical and Mathematical Sciences. 87, 3 (Sep. 2024), 123–131. DOI: https://doi.org/10.51889/2959-5894.2024.87.3.010.