Machine Learning Techniques for Identifying Child Abusive Texts in Online Platforms
2024 (English). In: 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), IEEE, 2024. Conference paper, published paper (refereed)
Abstract [en]
This study addresses the critical issue of child abuse in digital communications by developing machine learning and deep learning models for detecting child-abusive texts in the Bengali language on online platforms. The primary goal of this research is to contribute to child abuse prevention by creating an effective tool for accurately identifying abusive content. Using a combination of natural language processing (NLP) techniques and deep learning algorithms, the model distinguishes between abusive and non-abusive text. The dataset, comprising manually collected and labeled comments and posts from social media sites, is categorized into Positive, Neutral, and Negative classes. For model development, a variety of machine learning algorithms, including Random Forest (RF), Decision Tree (DT), and Support Vector Machine (SVM), were tested alongside deep learning approaches such as Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) models. The performance of the proposed models was assessed using precision, recall, and F1-score metrics. The findings show that the model classifies abusive Bengali text with high accuracy, offering significant potential as a tool for effective child abuse identification and prevention.
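As a minimal sketch of the classical-ML branch described in the abstract (not the authors' released code), the following Python example builds a TF-IDF + linear SVM pipeline and reports per-class precision, recall, and F1. The inline texts and labels are hypothetical placeholders standing in for the manually labeled Bengali corpus, and the character n-gram setting is an assumption, chosen because character features often work without a language-specific tokenizer.

```python
# A minimal sketch, not the authors' implementation: one classical-ML branch
# from the abstract (TF-IDF features + linear SVM), evaluated with per-class
# precision, recall, and F1. All texts and labels below are hypothetical
# placeholders standing in for the manually labeled Bengali dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Placeholder data; a real run would load the labeled Bengali comments/posts.
texts = [
    "placeholder comment one", "placeholder comment two",
    "placeholder comment three", "placeholder comment four",
    "placeholder comment five", "placeholder comment six",
]
labels = ["Negative", "Positive", "Neutral",
          "Negative", "Positive", "Neutral"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=2, random_state=42)

# Character n-grams (an assumption here) tolerate the absence of a
# Bengali-specific tokenizer; word n-grams are a common alternative.
model = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("svm", LinearSVC()),
])
model.fit(X_train, y_train)

# Report precision, recall, and F1 per class, as in the paper's evaluation.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

Swapping LinearSVC for RandomForestClassifier or DecisionTreeClassifier would reproduce the other classical baselines; the LSTM/CNN branch would instead require an embedding layer over tokenized Bengali text.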
Place, publisher, year, edition, pages
IEEE, 2024.
Keywords [en]
Sentiment Analysis, Natural Language Processing, Supervised Model, Machine Learning, Deep Learning
National Category
Computer Sciences
Research subject
Cyber Security
Identifiers
URN: urn:nbn:se:ltu:diva-111134
DOI: 10.1109/ICCCNT61001.2024.10724830
Scopus ID: 2-s2.0-85211187836
OAI: oai:DiVA.org:ltu-111134
DiVA id: diva2:1924246
Conference
The 15th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Himachal Pradesh, India, June 24-28, 2024
Note
ISBN for host publication: 979-8-3503-7024-9
Available from: 2025-01-03. Created: 2025-01-03. Last updated: 2025-10-21. Bibliographically approved.