Explainable Text Classification Model for COVID-19 Fake News Detection
Department of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0002-3090-7645
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0003-0244-3561
2022 (English). In: Journal of Internet Services and Information Security (JISIS), ISSN 2182-2069, E-ISSN 2182-2077, Vol. 12, no. 2, p. 51-69. Article in journal (Refereed). Published.
Abstract [en]

Artificial intelligence has achieved notable advances across many applications, and the field has recently turned to developing novel methods for explaining machine learning models. Deep neural networks deliver the best accuracy in domains such as text categorization, image classification, and speech recognition, but because they are black-box models they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection became a challenging research problem, since misinformation endangers the lives of many online users. Transparency and explainability in COVID-19 fake news classification are therefore necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model in which BiLSTM assures classification accuracy and LIME ensures transparency and explainability. Because LIME behaves similarly to the original model in the neighbourhood of a prediction and explains that prediction, the integrated model becomes comprehensible. We measure the explainability of this model using Kendall's tau correlation coefficient. We also employ several machine learning models and compare their performance. Finally, since the proposed model takes an integrated strategy, we analyze and compare its computational overhead with that of the other methods.
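
As a concrete illustration of the approach the abstract describes (a sketch under stated assumptions, not the authors' implementation), the snippet below wraps a hypothetical Keras BiLSTM classifier in the string-to-probabilities function that LIME's text explainer requires, and scores agreement between two feature-importance rankings with Kendall's tau. The model architecture, tokenizer, example input, and both rankings are illustrative assumptions.

from scipy.stats import kendalltau
from lime.lime_text import LimeTextExplainer
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

MAX_LEN, VOCAB = 100, 20000

# Hypothetical BiLSTM binary classifier (real vs. fake news); training omitted.
model = models.Sequential([
    layers.Embedding(VOCAB, 128),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(2, activation="softmax"),
])
tokenizer = Tokenizer(num_words=VOCAB)
# tokenizer.fit_on_texts(train_texts)  # fit on a real corpus before use

def predict_proba(texts):
    # LIME perturbs raw strings, so this wrapper must map a list of
    # strings to class probabilities of shape (n_samples, n_classes).
    seqs = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_LEN)
    return model.predict(seqs, verbose=0)

explainer = LimeTextExplainer(class_names=["real", "fake"])
exp = explainer.explain_instance(
    "garlic water cures covid overnight",  # illustrative input
    predict_proba, num_features=5)
print(exp.as_list())  # (token, weight) pairs behind the prediction

# Explainability score: Kendall's tau between two feature-importance
# rankings, e.g. LIME's ranking vs. a reference ranking of the same tokens.
lime_rank, reference_rank = [1, 2, 3, 4, 5], [2, 1, 3, 4, 5]
tau, _ = kendalltau(lime_rank, reference_rank)
print(f"Kendall's tau = {tau:.2f}")

Which rankings the paper actually correlates with Kendall's tau is specified in the article itself; the pairing above is only a plausible stand-in.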

Place, publisher, year, edition, pages
Innovative Information Science & Technology Research Group, 2022. Vol. 12, no. 2, p. 51-69
Keywords [en]
fake news, COVID-19, Explainable AI, LIME, BiLSTM
National Category
Business Administration; Computer Sciences
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-92146
DOI: 10.22667/JISIS.2022.05.31.051
Scopus ID: 2-s2.0-85132973856
OAI: oai:DiVA.org:ltu-92146
DiVA id: diva2:1682947
Note

Validated; 2022; Level 1; 2022-07-13 (joosat)

Available from: 2022-07-13. Created: 2022-07-13. Last updated: 2023-09-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Islam, Raihan Ul; Andersson, Karl
