Artificial Intelligence (AI) models are increasingly being deployed in critical domains such as healthcare, finance, law, and autonomous systems, yielding remarkable benefits for society. Due to their black-box (sub-symbolic) nature, however, these models do not explain their predictive outputs. This lack of explanation leads to a lack of transparency between human and machine, and the opacity of black-box AI models, particularly deep learning architectures, is a significant concern. To address this concern, Explainable Artificial Intelligence (XAI) has emerged as a vital research area, aiming to enhance the transparency, trust, and accountability of AI models. Various post-hoc XAI tools explain the outputs of black-box AI models on the basis of their training datasets. However, relying on training datasets rather than domain knowledge makes such explanations proxies, and a biased training dataset may also lead to a misleading post-hoc explanation. In contrast, an explanation grounded in domain knowledge is more trustworthy to an end user. Motivated by this, we propose a novel XAI framework built around a Belief Rule Based Expert System (BRBES), which predicts an output and explains it with reference to domain knowledge. The BRBES represents domain knowledge in its rule base and outperforms other knowledge-driven AI models in handling uncertainty due to ignorance. To improve its accuracy, we fine-tune its parameters and structure through evolutionary learning. Moreover, to overcome the scarcity of labeled data in this learning mechanism, we integrate semi-supervised and self-supervised learning with the BRBES. We explain the output of the BRBES with respect to the most influential rule in its rule base. Thus, the output of the proposed XAI framework is not only accurate but also explainable.
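To make the rule-based reasoning concrete, the following minimal Python sketch illustrates how a belief rule base can represent domain knowledge and how rule activation weights single out the most influential rule that anchors the explanation. The data structures, the linear input transformation, and the simplified weighted-belief aggregation (standing in for the full evidential reasoning algorithm of a BRBES) are illustrative assumptions, not the thesis implementation.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BeliefRule:
    antecedents: Dict[str, str]           # attribute -> assumed referential value, e.g. {"temperature": "Low"}
    consequent_beliefs: Dict[str, float]  # belief degrees over consequent grades
    rule_weight: float = 1.0

def matching_degree(x: float, ref_points: Dict[str, float], value: str) -> float:
    """Rule-based input transformation: map a crisp input to a matching degree
    for one referential value by linear interpolation between adjacent points."""
    points = sorted(ref_points.items(), key=lambda kv: kv[1])
    if x <= points[0][1]:
        return 1.0 if value == points[0][0] else 0.0
    if x >= points[-1][1]:
        return 1.0 if value == points[-1][0] else 0.0
    for (n0, p0), (n1, p1) in zip(points, points[1:]):
        if p0 <= x <= p1:
            frac = (x - p0) / (p1 - p0)
            if value == n0:
                return 1.0 - frac
            if value == n1:
                return frac
    return 0.0

def activation_weights(rules: List[BeliefRule], inputs, ref_points, attr_weights):
    """Activation weight of each rule: rule weight times the product of its
    antecedent matching degrees (raised to relative attribute weights), normalised."""
    max_w = max(attr_weights.values())
    raw = []
    for r in rules:
        prod = 1.0
        for attr, ref_val in r.antecedents.items():
            prod *= matching_degree(inputs[attr], ref_points[attr], ref_val) ** (attr_weights[attr] / max_w)
        raw.append(r.rule_weight * prod)
    total = sum(raw) or 1.0
    return [w / total for w in raw]

def infer(rules, inputs, ref_points, attr_weights):
    """Aggregate consequent beliefs of activated rules (simplified weighted sum)
    and return the most influential rule, i.e. the one with the highest activation weight."""
    w = activation_weights(rules, inputs, ref_points, attr_weights)
    beliefs: Dict[str, float] = {}
    for wk, r in zip(w, rules):
        for grade, b in r.consequent_beliefs.items():
            beliefs[grade] = beliefs.get(grade, 0.0) + wk * b
    return beliefs, rules[max(range(len(rules)), key=lambda k: w[k])]

# Hypothetical two-rule base for building energy consumption, driven by outdoor temperature.
ref = {"temperature": {"Low": -10.0, "Medium": 10.0, "High": 30.0}}
rules = [
    BeliefRule({"temperature": "Low"},  {"High consumption": 0.8, "Medium consumption": 0.2}),
    BeliefRule({"temperature": "High"}, {"High consumption": 0.6, "Medium consumption": 0.4}),
]
beliefs, top_rule = infer(rules, {"temperature": -2.0}, ref, {"temperature": 1.0})
# 'beliefs' is the predicted belief distribution; 'top_rule' anchors the explanation.

Parameters of this kind, typically the rule weights, attribute weights, and consequent belief degrees, are the quantities that the evolutionary learning described above would fine-tune.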
This doctoral thesis delves into the challenges and opportunities surrounding XAI in order to provide a comprehensive understanding of AI outputs. It presents a novel XAI framework that provides domain-knowledge-based energy consumption prediction with improved accuracy and explains this predictive output in non-technical human language. It then deals with multi-modal air quality data by integrating a deep learning model with the BRBES. Moreover, to reduce the dependence on labeled data in evolutionary learning, the thesis integrates semi-supervised and self-supervised learning with the BRBES.
This thesis presents six significant contributions. First, we conduct a Systematic Literature Review (SLR) on XAI following the PRISMA guidelines, delving into the numerous challenges and opportunities of XAI. Extensive research is conducted to explore the definition, terminologies, taxonomy, and application domains of XAI. The review highlights several challenges of XAI, such as the absence of a universal definition, the trade-off between accuracy and explainability, and the lack of standardized evaluation metrics; to address the latter, we also propose a unified framework for evaluating XAI. Second, we present an innovative explainable BRBES (eBRBES) framework that offers accurate prediction of building energy consumption while providing insightful explanations and counterfactuals based on domain knowledge. As part of the eBRBES framework, we also present a novel Belief Rule Based adaptive Balance Determination (BRBaBD) algorithm to assess the framework's optimal balance between explainability and accuracy. Third, we propose a mathematical model that integrates the BRBES with a Convolutional Neural Network (CNN); this integrated approach leverages the domain knowledge of the BRBES and the image patterns discovered by the CNN to predict air quality from outdoor ground images and numerical sensor data. Fourth, we integrate a two-layer BRBES with a CNN to monitor air quality using satellite images and relevant environmental parameters, such as cloud, relative humidity, temperature, and wind speed; the two-layer BRBES showcases the strength of the BRBES in conducting reasoning across multiple layers. Fifth, we enhance the learning mechanism of the BRBES by utilizing extensive unlabeled energy data along with limited labeled data. For this purpose, we synthetically generate unlabeled data through weak and strong augmentation and pseudo-label these data by integrating a semi-supervised self-training model with the BRBES; by learning from both labeled and pseudo-labeled data, the initial BRBES transitions to a semi-supervised BRBES. Sixth, to conduct the learning mechanism of the BRBES without relying on any labeled data, we integrate self-supervised learning with it. To this end, we pseudo-label the synthetically generated unlabeled energy data with the BRBES in the pretext tasks of self-supervised learning; the initial BRBES then learns a deeper representation from these pseudo-labeled data and transitions to a self-supervised BRBES, which is transferred to the downstream task of energy consumption regression.
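To illustrate the self-training idea behind the fifth contribution, the sketch below shows one possible pseudo-labelling loop: a model fitted on a small labelled set labels weakly augmented copies of the unlabelled energy data and is then retrained on the labelled data together with strongly augmented, pseudo-labelled copies. The Gaussian-jitter and feature-masking augmentations and the generic fit/predict interface are illustrative assumptions; they stand in for, rather than reproduce, the BRBES learning mechanism of the thesis.

import numpy as np

def weak_augment(x, rng, noise=0.01):
    """Weak augmentation: small Gaussian jitter on numeric energy features (assumed scheme)."""
    return x + rng.normal(0.0, noise, size=x.shape)

def strong_augment(x, rng, noise=0.05, drop_prob=0.2):
    """Strong augmentation: larger jitter plus random feature masking (assumed scheme)."""
    x = x + rng.normal(0.0, noise, size=x.shape)
    return x * (rng.random(x.shape) > drop_prob)

def self_train(model, x_lab, y_lab, x_unlab, rounds=3, seed=0):
    """Semi-supervised self-training for regression:
    1) fit on labelled data,
    2) pseudo-label weakly augmented unlabelled data with the current model,
    3) retrain on labelled data plus strongly augmented, pseudo-labelled data."""
    rng = np.random.default_rng(seed)
    model.fit(x_lab, y_lab)
    for _ in range(rounds):
        pseudo_y = model.predict(weak_augment(x_unlab, rng))
        x_all = np.vstack([x_lab, strong_augment(x_unlab, rng)])
        y_all = np.concatenate([y_lab, pseudo_y])
        model.fit(x_all, y_all)
    return model

Any regressor exposing a scikit-learn-style fit/predict interface can stand in for the initial model in this sketch; in the thesis that role is played by the BRBES, which transitions to the semi-supervised BRBES after learning from the pseudo-labelled data.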
Based on the research findings of this thesis, applied to two different phenomena, it can be argued that the proposed XAI framework provides predictions with greater precision and explanations with higher interpretability.
Luleå tekniska universitet, 2025.
accuracy, augmentation, domain knowledge, evolutionary learning, explainability, explainable artificial intelligence (XAI), pseudo-labeled data, self-training, self-supervised learning, semi-supervised learning, trust, uncertainties, unlabeled data
2025-12-19, A193, Luleå University of Technology, Skellefteå, 08:30 (English)