A Novel Explainable Artificial Intelligence Framework with Improved Learning Mechanisms
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
ORCID iD: 0000-0001-5283-6641
2025 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Artificial Intelligence (AI) models are increasingly being deployed in critical domains, such as healthcare, finance, law, and autonomous systems, resulting in remarkable benefits for society. Due to their black-box (sub-symbolic) nature, these AI models do not explain their predictive output. This lack of explanation leads to a lack of transparency between human and machine. Such opacity of black-box AI models, particularly deep learning architectures, is a significant concern. To address this concern, Explainable Artificial Intelligence (XAI) has emerged as a vital research area. The aim of XAI is to enhance the transparency, trust, and accountability of AI models. Various post-hoc XAI tools explain the outputs of black-box AI models based on training datasets. However, the use of training datasets, rather than domain knowledge, makes such explanations a proxy. A biased training dataset may lead to a misleading post-hoc explanation as well. In contrast, an explanation based on domain knowledge is more trustworthy to an end user. Motivated by this, we propose a novel XAI framework, built around a Belief Rule Based Expert System (BRBES), to predict an output and explain it with reference to domain knowledge. The BRBES represents domain knowledge in its rule base, and it outperforms other knowledge-driven AI models in handling uncertainty due to ignorance. To improve the accuracy of the BRBES, we fine-tune its parameters and structure through evolutionary learning. Moreover, to overcome the scarcity of labeled data for this learning mechanism, we integrate semi-supervised and self-supervised learning with the BRBES. We explain the output of the BRBES with respect to the most influential rule of its rule base. Thus, the output of our proposed XAI framework becomes not only accurate but also explainable.
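
To make the rule-level explanation concrete, the following minimal Python sketch shows how a belief-rule-based system can report its most influential rule: each rule carries a weight and belief degrees, activation weights are computed from how strongly the input matches each rule, and the rule with the highest activation weight is offered as the explanation. The rules, matching degrees, and numbers are illustrative placeholders rather than the thesis's actual rule base, and the final aggregation is simplified to a weighted sum where a full BRBES would apply the evidential reasoning (ER) algorithm.

rules = [
    # (description, rule weight, belief degrees over {Low, Medium, High} consumption)
    ("IF floor_area is Large AND occupancy is High THEN consumption is High",
     1.0, {"Low": 0.0, "Medium": 0.2, "High": 0.8}),
    ("IF floor_area is Small AND occupancy is Low THEN consumption is Low",
     1.0, {"Low": 0.9, "Medium": 0.1, "High": 0.0}),
]

# Individual matching degrees of the current input against each rule's
# antecedents (in a full BRBES these come from input transformation).
matching = [0.7, 0.2]

# Activation weight of each rule: rule weight times matching degree,
# normalised over all rules.
total = sum(w * m for (_, w, _), m in zip(rules, matching))
activation = [w * m / total for (_, w, _), m in zip(rules, matching)]

# Aggregate belief degrees (a weighted sum here; a full BRBES applies
# the evidential reasoning algorithm instead).
combined = {c: sum(a * b[c] for (_, _, b), a in zip(rules, activation))
            for c in ("Low", "Medium", "High")}

# Explanation: report the rule that contributed most to the output.
top = max(range(len(rules)), key=lambda k: activation[k])
print("Belief distribution over consumption:", combined)
print("Most influential rule:", rules[top][0])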

      This doctoral thesis delves into the challenges and opportunities surrounding XAI in order to provide a comprehensive understanding of AI output. It presents a novel XAI framework that provides domain-knowledge-based energy consumption prediction with improved accuracy and explains this predictive output in non-technical human language. It then deals with multi-modal air quality data by integrating a deep learning model with the BRBES. Moreover, to reduce the dependence on labeled data for evolutionary learning, this thesis integrates semi-supervised and self-supervised learning with the BRBES.

       This thesis presents six significant contributions. First, we conduct a Systematic Literature Review (SLR) on XAI using the PRISMA guidelines, delving into the numerous challenges and opportunities of XAI. Extensive research is conducted to explore the definition, terminologies, taxonomy, and application domains of XAI. The review highlights several challenges of XAI, such as the absence of a universal definition, the trade-off between accuracy and explainability, and the lack of standardized evaluation metrics. To address the lack of standardized evaluation metrics, we also propose a unified framework to evaluate XAI. Second, we present an innovative explainable BRBES (eBRBES) framework, which offers accurate prediction of building energy consumption while providing insightful explanations and counterfactuals based on domain knowledge. As part of the eBRBES framework, we also present a novel Belief Rule Based adaptive Balance Determination (BRBaBD) algorithm to assess the optimal balance of the proposed framework between explainability and accuracy. Third, we propose a mathematical model to integrate the BRBES with a Convolutional Neural Network (CNN). With this integrated approach, we leverage the domain knowledge of the BRBES and the image data patterns discovered by the CNN, and we predict air quality using outdoor ground images and numerical sensor data. Fourth, we integrate a two-layer BRBES with a CNN to monitor air quality using satellite images and relevant environmental parameters, such as cloud cover, relative humidity, temperature, and wind speed. The two-layer BRBES showcases the strength of the BRBES in conducting reasoning across multiple layers. Fifth, we enhance the learning mechanism of the BRBES by utilizing extensive unlabeled energy data along with limited labeled data. For this purpose, we synthetically generate unlabeled data through weak and strong augmentation. We then pseudo-label these unlabeled data by integrating a semi-supervised self-training model with the BRBES. By learning from both labeled and pseudo-labeled data, the initial BRBES transitions to a semi-supervised BRBES. Sixth, to conduct the learning mechanism of the BRBES without relying on any labeled data, we integrate self-supervised learning with it. To this effect, we pseudo-label the synthetically generated unlabeled energy data with the BRBES in the pretext tasks of self-supervised learning, as sketched below. The initial BRBES then learns a deeper representation from these pseudo-labeled data and transitions to a self-supervised BRBES. We then transfer this BRBES, learned in a self-supervised approach, to the downstream task of performing regression of energy consumption.
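
As a concrete illustration of the sixth contribution, the sketch below reproduces the self-supervised pattern in miniature, with a generic scikit-learn regressor standing in for the BRBES: an initial knowledge-driven model pseudo-labels synthetically generated unlabeled data in the pretext task, a model is trained on those pseudo-labels, and the trained model is transferred to the downstream energy-consumption regression. The data, the heuristic initial model, and the choice of regressor are all hypothetical.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic unlabeled energy features (e.g., floor area, occupancy, daylight).
X_unlabeled = rng.uniform(0, 1, size=(500, 3))

# Pretext task: the initial knowledge-driven model pseudo-labels the
# unlabeled data; this hand-written heuristic stands in for the initial BRBES.
def initial_model(X):
    return 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]

pseudo_y = initial_model(X_unlabeled)

# Learn a representation of the pseudo-labeled data (self-supervised step).
pretext_model = GradientBoostingRegressor().fit(X_unlabeled, pseudo_y)

# Downstream task: transfer the pretext model to energy-consumption
# regression and evaluate it on a (here synthetic) labeled set.
X_down = rng.uniform(0, 1, size=(50, 3))
y_down = 0.6 * X_down[:, 0] + 0.3 * X_down[:, 1] + 0.1 * X_down[:, 2]
print("Downstream R^2:", pretext_model.score(X_down, y_down))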

      Based on the research findings of this thesis, applied to two different phenomena, it can be argued that the proposed XAI framework provides predictions with greater precision and explanations with higher interpretability.

Place, publisher, year, edition, pages
Luleå tekniska universitet, 2025.
Series
Doctoral thesis / Luleå University of Technology, ISSN 1402-1544
Keywords [en]
accuracy, augmentation, domain knowledge, evolutionary learning, explainability, explainable artificial intelligence (XAI), pseudo-labeled data, self-training, self-supervised learning, semi-supervised learning, trust, uncertainties, unlabeled data
National Category
Computer Sciences
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-115137
ISBN: 978-91-8048-935-5 (print)
ISBN: 978-91-8048-936-2 (electronic)
OAI: oai:DiVA.org:ltu-115137
DiVA, id: diva2:2007851
Public defence
2025-12-19, A193, Luleå University of Technology, Skellefteå, 08:30 (English)
Opponent
Supervisors
Funder
Vinnova, 2022-01188
Note

Funder: Stiftelsen Rönnbäret

Available from: 2025-10-21. Created: 2025-10-21. Last updated: 2025-11-14. Bibliographically approved.
List of papers
1. A Review of Explainable Artificial Intelligence from the Perspectives of Challenges and Opportunities
2025 (English). In: Algorithms, E-ISSN 1999-4893, Vol. 18, no 9, article id 556. Article, review/survey (Refereed). Published.
Abstract [en]

The widespread adoption of Artificial Intelligence (AI) in critical domains, such as healthcare, finance, law, and autonomous systems, has brought unprecedented societal benefits. However, the black-box (sub-symbolic) nature of AI allows it to compute predictions without explaining the rationale to the end user, resulting in a lack of transparency between human and machine. Concerns are growing over the opacity of such complex AI models, particularly deep learning architectures. To address this concern, explainability is of paramount importance, which has triggered the emergence of Explainable Artificial Intelligence (XAI) as a vital research area. XAI aims to enhance the transparency, trust, and accountability of AI models. This survey presents a comprehensive overview of XAI from the dual perspectives of challenges and opportunities. We analyze the foundational concepts, definitions, terminologies, and taxonomy of XAI methods. We then review several application domains of XAI. Special attention is given to various challenges of XAI, such as the absence of a universal definition, the trade-off between accuracy and interpretability, and the lack of standardized evaluation metrics. We conclude by outlining future research directions in human-centric design, interactive explanation, and standardized evaluation frameworks. This survey serves as a resource for researchers, practitioners, and policymakers navigating the evolving landscape of interpretable and responsible AI.

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
accuracy, evaluation metrics, explainable artificial intelligence (XAI), human-centered design, interpretability, post hoc explanation, transparency, trust
National Category
Computer Sciences
Research subject
Pervasive Mobile Computing; Cyber Security
Identifiers
urn:nbn:se:ltu:diva-114542 (URN)
10.3390/a18090556 (DOI)
Funder
Vinnova, 2022-01188
Note

Validated;2025;Level 1;2025-09-04 (u2);

Full text: CC BY license;

Available from: 2025-09-04. Created: 2025-09-04. Last updated: 2025-10-21. Bibliographically approved.
2. An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings
2024 (English). In: Energies, E-ISSN 1996-1073, Vol. 17, no 8, article id 1797. Article in journal (Refereed). Published.
Abstract [en]

The prediction of building energy consumption helps utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain-knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System (eBRBES) with domain-knowledge-based explanations for the accurate prediction of energy consumption. We optimize the BRBES's parameters and structure to improve prediction accuracy while dealing with data uncertainties using its inference engine. To predict energy consumption, we take into account floor area, daylight, indoor occupancy, and building heating method. We also describe how a counterfactual output on energy consumption could have been achieved. Furthermore, we propose a novel Belief Rule-Based adaptive Balance Determination (BRBaBD) algorithm for determining the optimal balance between explainability and accuracy. To validate the proposed eBRBES framework, a case study based in Skellefteå, Sweden, is used. The BRBaBD results show that our proposed eBRBES framework outperforms state-of-the-art machine learning algorithms in terms of the optimal balance between explainability and accuracy by 85.08%.
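
The BRBaBD algorithm itself is not reproduced in this record. As a purely illustrative stand-in for the idea of scoring a balance between explainability and accuracy, the sketch below combines two normalised scores with a harmonic mean, so that a model weak on either axis is penalised; the model names and scores are hypothetical.

def balance_score(accuracy: float, explainability: float) -> float:
    """Harmonic mean of two scores in [0, 1]; 0 if either is 0."""
    if accuracy <= 0 or explainability <= 0:
        return 0.0
    return 2 * accuracy * explainability / (accuracy + explainability)

models = {
    "black-box NN":  {"accuracy": 0.92, "explainability": 0.20},
    "linear model":  {"accuracy": 0.70, "explainability": 0.90},
    "rule-based ES": {"accuracy": 0.88, "explainability": 0.85},
}
for name, s in models.items():
    print(f"{name}: balance = {balance_score(**s):.3f}")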

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
accuracy, building energy, explainability, trust, uncertainties
National Category
Computer Sciences
Research subject
Cyber Security; Pervasive Mobile Computing
Identifiers
urn:nbn:se:ltu:diva-105059 (URN)
10.3390/en17081797 (DOI)
001210843300001 ()
2-s2.0-85191378367 (Scopus ID)
Funder
Vinnova, 2022-01188
Note

Validated;2024;Level 2;2024-04-11 (signyg);

Full text license: CC BY

Available from: 2024-04-11. Created: 2024-04-11. Last updated: 2025-10-21. Bibliographically approved.
3. An Integrated Approach of Belief Rule Base and Deep Learning to Predict Air Pollution
2020 (English). In: Sensors, E-ISSN 1424-8220, Vol. 20, no 7, p. 1-25, article id 1956. Article in journal (Refereed). Published.
Abstract [en]

Sensor data are gaining increasing global attention due to the advent of the Internet of Things (IoT). Reasoning is applied to such sensor data in order to compute predictions. Generating a health warning based on a prediction of atmospheric pollution, or planning the timely evacuation of people from vulnerable areas based on a prediction of natural disasters, are use cases of sensor data streams where prediction is vital to protect people and assets. Thus, prediction accuracy is of paramount importance for taking preventive steps and averting any untoward situation. Uncertainty in sensor data is a severe factor that hampers prediction accuracy. The Belief Rule Based Expert System (BRBES), a knowledge-driven approach, is a widely employed prediction algorithm that deals with such uncertainties based on a knowledge base and an inference engine. In handling uncertainties, it offers higher accuracy than other knowledge-driven techniques, e.g., fuzzy logic and Bayesian probability theory. In contrast, Deep Learning is a data-driven technique within Artificial Intelligence (AI). By applying analytics to a huge amount of data, Deep Learning learns the hidden representation of the data. Thus, Deep Learning can infer predictions by reasoning over available data, such as historical data and sensor data streams. The combined application of BRBES and Deep Learning can compute predictions with improved accuracy by addressing sensor data uncertainties while utilizing the discovered data patterns. Hence, this paper proposes a novel predictive model based on the integrated approach of BRBES and Deep Learning. The uniqueness of this model lies in the development of a mathematical model to combine Deep Learning with BRBES and capture the nonlinear dependencies among the relevant variables. We further optimized the BRBES by applying parameter and structure optimization to it. Air pollution prediction has been taken as the use case of our proposed combined approach. The model has been evaluated against two different datasets: one contains synthetic images with corresponding labels of PM2.5 concentrations; the other contains real images, PM2.5 concentrations, and numerical weather data of Shanghai, China. We also used our proposed model to determine whether a hazy image was caused by polluted air or by fog. Our approach has outperformed standalone BRBES and standalone Deep Learning in terms of prediction accuracy.
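
The paper's mathematical model for combining the two techniques is not reproduced here. Under stated assumptions, the sketch below illustrates one plausible integration point: the deep model's image-based PM2.5 estimate is transformed into matching degrees over referential values, so that it can act as one more antecedent attribute of the rule base alongside the numerical sensor data. The referential values and readings are hypothetical.

def transform(x: float, refs: list[float]) -> list[float]:
    """Rule/utility-based input transformation: distribute a crisp value x
    over adjacent referential values as matching degrees summing to 1."""
    degrees = [0.0] * len(refs)
    for i in range(len(refs) - 1):
        lo, hi = refs[i], refs[i + 1]
        if lo <= x <= hi:
            degrees[i] = (hi - x) / (hi - lo)
            degrees[i + 1] = 1 - degrees[i]
            return degrees
    # Clamp values outside the reference range to the nearest endpoint.
    degrees[0 if x < refs[0] else -1] = 1.0
    return degrees

# Referential values for PM2.5 (Low, Medium, High); purely illustrative.
pm_refs = [25.0, 75.0, 150.0]

cnn_pm25 = 88.0  # hypothetical image-based PM2.5 estimate from the deep model
print("Image-based estimate as matching degrees:", transform(cnn_pm25, pm_refs))
# These degrees would then drive rule activation and evidential-reasoning
# aggregation in the BRBES inference engine, exactly as sensor inputs do.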

Place, publisher, year, edition, pages
Basel, Switzerland: MDPI, 2020
Keywords
BRBES, Deep Learning, integration, sensor data, prediction
National Category
Computer Sciences; Computer and Information Sciences
Research subject
Pervasive Mobile Computing
Identifiers
urn:nbn:se:ltu:diva-78258 (URN)
10.3390/s20071956 (DOI)
000537110500152 ()
32244380 (PubMedID)
2-s2.0-85083042302 (Scopus ID)
Projects
A belief-rule-based DSS to assess flood risks by using wireless sensor networks
PERvasive Computing and COMmunications for sustainable development
Funder
Swedish Research Council, 2014-4251
Note

Validated;2020;Level 2;2020-04-01 (cisjan)

Available from: 2020-03-31. Created: 2020-03-31. Last updated: 2025-10-22. Bibliographically approved.
4. An Integrated Approach of Belief Rule Base and Convolutional Neural Network to Monitor Air Quality in Shanghai
2022 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 206, article id 117905. Article in journal (Refereed). Published.
Abstract [en]

Accurate monitoring of air quality can reduce its adverse impact on earth. Ground-level sensors can provide fine particulate matter (PM2.5) concentrations and ground images, but such sensors have limited spatial coverage and incur deployment costs. PM2.5 can also be estimated from satellite-retrieved Aerosol Optical Depth (AOD). However, AOD is subject to uncertainties associated with its retrieval algorithms, which constrain the spatial resolution of the estimated PM2.5, and AOD is not retrievable under cloudy weather. In contrast, satellite images provide continuous spatial coverage with no separate deployment cost. The accuracy of monitoring from such satellite images is hindered by uncertainties in the sensor data of relevant environmental parameters, such as relative humidity, temperature, wind speed, and wind direction. The Belief Rule Based Expert System (BRBES) is an efficient algorithm for addressing these uncertainties, and the Convolutional Neural Network (CNN) is suitable for image analytics. Hence, we propose a novel model that integrates a CNN with a BRBES to monitor air quality from satellite images with improved accuracy. We customized the CNN and optimized the BRBES to increase monitoring accuracy further. Our model also differentiates whether an obscure image is caused by polluted air or by cloud. Valid environmental data (temperature, wind speed, and wind direction) have been adopted to further strengthen the monitoring performance of our proposed model. Three years of observation data (satellite images and environmental parameters) of Shanghai, from 2014 to 2016, have been employed to analyze and design our proposed model. The results conclude that the accuracy of our model in monitoring the PM2.5 of Shanghai is higher than that of a standalone CNN and other conventional Machine Learning methods. Real-time validation of our model on near-real-time satellite images of Shanghai from April 2021 shows an average difference between our calculated PM2.5 concentrations and the actual ones within ±5.51.
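
To illustrate the two-layer idea in miniature, the toy sketch below (a hypothetical simplification, not the paper's model) lets a first layer aggregate environmental readings into an intermediate degree, which a second layer then combines with the satellite-image-derived estimate; a real BRBES layer would run rule activation and evidential reasoning rather than a weighted average.

def layer(inputs: dict[str, float], weights: dict[str, float]) -> float:
    """Toy stand-in for one BRBES layer: a normalised weighted aggregation
    of input degrees (a real layer runs rule activation plus evidential
    reasoning over a sub-rule-base)."""
    total_w = sum(weights.values())
    return sum(inputs[k] * w for k, w in weights.items()) / total_w

# Layer 1: aggregate environmental parameters into one intermediate degree.
env = {"humidity": 0.7, "temperature": 0.4, "wind_speed": 0.3, "cloud": 0.6}
meteo_influence = layer(env, {"humidity": 2, "temperature": 1,
                              "wind_speed": 1, "cloud": 2})

# Layer 2: combine the intermediate degree with the satellite-image estimate.
final = layer({"meteo": meteo_influence, "image_pm25": 0.8},
              {"meteo": 1, "image_pm25": 3})
print(f"Layer-1 meteorological influence: {meteo_influence:.2f}")
print(f"Final air-quality degree: {final:.2f}")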

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Air quality monitoring, Belief Rule Based Expert System (BRBES), Convolutional Neural Network (CNN), Uncertainty
National Category
Environmental Sciences; Other Computer and Information Science; Computer Sciences
Research subject
Pervasive Mobile Computing
Identifiers
urn:nbn:se:ltu:diva-91874 (URN)
10.1016/j.eswa.2022.117905 (DOI)
000832953800008 ()
2-s2.0-85132745326 (Scopus ID)
Note

Validated;2022;Level 2;2022-07-05 (joosat);

Available from: 2022-06-23. Created: 2022-06-23. Last updated: 2025-10-21. Bibliographically approved.
5. A Semi-Supervised-Learning-Aided Explainable Belief Rule-Based Approach to Predict the Energy Consumption of Buildings
2025 (English). In: Algorithms, E-ISSN 1999-4893, Vol. 18, no 6, article id 305. Article in journal (Refereed). Published.
Abstract [en]

Predicting the energy consumption of buildings plays a critical role in supporting utility providers, users, and facility managers in minimizing energy waste and optimizing operational efficiency. However, this prediction becomes difficult because of the limited availability of labeled data to train Artificial Intelligence (AI) models; obtaining such data is either expensive or difficult due to privacy protection. To overcome the scarcity of balanced labeled data, semi-supervised learning utilizes extensive unlabeled data. Motivated by this, we propose semi-supervised learning to train the AI model. As the AI model, we employ the Belief Rule-Based Expert System (BRBES) because of its domain-knowledge-based prediction and uncertainty-handling mechanism. To improve the accuracy of the BRBES, we utilize the initial labeled data to optimize the BRBES's parameters and structure through evolutionary learning until its accuracy reaches the confidence threshold. As semi-supervised learning, we employ a self-training model to assign pseudo-labels, predicted by the BRBES, to unlabeled data generated through weak and strong augmentation. We reoptimize the BRBES with the labeled and pseudo-labeled data, resulting in a semi-supervised BRBES. Finally, this semi-supervised BRBES explains its prediction to the end user in nontechnical human language, resulting in a trust relationship. To validate our proposed semi-supervised explainable BRBES framework, a case study based in Skellefteå, Sweden, is used to predict and explain the energy consumption of buildings. Experimental results show 20 ± 0.71% higher accuracy for the semi-supervised BRBES than for state-of-the-art semi-supervised machine learning models. Moreover, the semi-supervised BRBES framework turns out to be 29 ± 0.67% more explainable than these semi-supervised machine learning models.
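
As a minimal sketch of the self-training loop described above, the code below uses a generic ridge regressor in place of the BRBES and Gaussian noise of two magnitudes in place of weak and strong augmentation; the confidence-threshold check and the evolutionary (re)optimization of the BRBES are omitted for brevity, and all data and noise scales are hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Small labeled set (features: e.g., floor area, occupancy, daylight).
X_lab = rng.uniform(0, 1, size=(30, 3))
y_lab = 0.5 * X_lab[:, 0] + 0.4 * X_lab[:, 1] + 0.1 * X_lab[:, 2]

# Step 1: train the initial model on labeled data only (the paper instead
# optimizes the BRBES's parameters and structure via evolutionary learning).
model = Ridge().fit(X_lab, y_lab)

# Step 2: synthesize unlabeled data by weak and strong augmentation of the
# labeled inputs (small vs. large perturbations).
X_weak = X_lab + rng.normal(0, 0.02, X_lab.shape)    # weak augmentation
X_strong = X_lab + rng.normal(0, 0.15, X_lab.shape)  # strong augmentation
X_unlab = np.vstack([X_weak, X_strong])

# Step 3: pseudo-label the unlabeled data with the current model.
pseudo_y = model.predict(X_unlab)

# Step 4: retrain on labeled plus pseudo-labeled data, yielding the
# "semi-supervised" model.
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo_y])
semi_model = Ridge().fit(X_all, y_all)
print("Semi-supervised fit R^2 on labeled data:", semi_model.score(X_lab, y_lab))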

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
accuracy, augmentation, building energy, explainability, pseudo-labeled data, self-training, semi-supervised learning, uncertainties, unlabeled data
National Category
Computer Sciences
Research subject
Pervasive Mobile Computing; Cyber Security
Identifiers
urn:nbn:se:ltu:diva-112869 (URN)
10.3390/a18060305 (DOI)
Funder
Vinnova, 2022-01188
Note

Validated;2025;Level 1;2025-06-02 (u2);

Full text: CC BY license;

Available from: 2025-06-02. Created: 2025-06-02. Last updated: 2025-10-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Kabir, Sami
