Publications (10 of 50)
Al-Hashimi, Z., Khamis, T., Al Kouzbary, M., Arifin, N., Mokayed, H. & Abu Osman, N. A. (2025). A decade of machine learning in lithium-ion battery state estimation: a systematic review. Ionics (Kiel), 31, 2351-2377
2025 (English) In: Ionics (Kiel), ISSN 0947-7047, E-ISSN 1862-0760, Vol. 31, p. 2351-2377. Article, review/survey (Refereed). Published
Abstract [en]

Lithium-ion batteries are central to contemporary energy storage systems, yet the precise estimation of critical states—state of charge (SOC), state of health (SOH), and remaining useful life (RUL)—remains a complex challenge under dynamic and varied conditions. Conventional methodologies often fail to meet the required adaptability and precision, leading to a growing emphasis on the application of machine learning (ML) techniques to enhance battery management systems (BMS). This review examines a decade of progress (2013–2024) in ML-based state estimation, meticulously analysing 58 pivotal publications selected from an initial corpus of 2414 studies. Unlike existing reviews, this work uniquely emphasizes the integration of novel frameworks such as Tiny Machine Learning (TinyML) and Scientific Machine Learning (SciML), which address critical limitations by offering resource-efficient and interpretable solutions. Through detailed comparative analyses, the review explores the strengths, weaknesses, and practical considerations of various ML methodologies, focusing on trade-offs in computational complexity, real-time implementation, and generalization across diverse datasets. Persistent barriers, including the absence of standardized datasets, stagnation in innovation, and scalability constraints, are identified alongside targeted recommendations. By synthesizing past advancements and proposing forward-thinking approaches, this review provides valuable insights and actionable strategies to drive the development of robust, scalable, and efficient energy storage technologies.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Lithium-ion batteries, Machine learning, Battery management systems, State of charge, State of health, Remaining useful life
National Category
Computer Systems
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111836 (URN), 10.1007/s11581-024-06049-4 (DOI), 001399464800001 (), 2-s2.0-85217199319 (Scopus ID)
Note

Validated;2025;Level 2;2025-03-24 (u5);

Funder: University of Malaya (BKS003-2023);

Available from: 2025-03-04 Created: 2025-03-04 Last updated: 2025-10-21. Bibliographically approved
Al Kouzbary, H., Al Kouzbary, M., Liu, J., Khamis, T., Arifin, N., Mokayed, H. & Abu Osman, N. A. (2025). ANOVA and linear regression feature selection for GRU-based foot position prediction in powered prostheses. Computer Methods in Biomechanics and Biomedical Engineering
2025 (English) In: Computer Methods in Biomechanics and Biomedical Engineering, ISSN 1025-5842, E-ISSN 1476-8259. Article in journal (Refereed). Epub ahead of print
Abstract [en]

This study evaluates feature selection using ANOVA and Linear Regression to optimize GRU-based models for predicting foot position in powered prostheses across varied terrains. Kinematic data from ten healthy participants during walking, stair ascent/descent, and standing were processed in MATLAB. Features selected by each method, benchmarked against Recursive Feature Elimination, were used to train GRU networks on mixed datasets that were then tested on independent subjects. Results showed that ANOVA and regression selected features efficiently, with reduced computation and comparable performance. The GRU achieved an RMSE as low as 0.066 radians, demonstrating robust generalization. While promising, clinical validation on amputee subjects remains necessary.
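At its core, the ANOVA-based selection the abstract describes scores each kinematic channel with a one-way F-statistic across locomotion classes and keeps the top-ranked channels as model inputs. A minimal stdlib sketch under that reading (the feature names and values below are invented for illustration, not taken from the study's dataset):

```python
def anova_f(groups):
    """One-way ANOVA F-statistic for one feature, given per-class samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(v for g in groups for v in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical kinematic channels, grouped by locomotion mode: one channel
# separates the classes cleanly, the other is uninformative.
feats = {
    "ankle_angle": {"walk": [10.1, 10.3, 9.9], "stairs": [14.8, 15.2, 15.0]},
    "noise_chan":  {"walk": [0.5, 1.5, 1.0],   "stairs": [0.9, 1.1, 1.0]},
}
scores = {name: anova_f(list(groups.values())) for name, groups in feats.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here `ranked` orders the channels by discriminative power; a GRU would then be trained on the top-k entries.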

Place, publisher, year, edition, pages
Taylor & Francis, 2025
Keywords
Powered ankle-foot, artificial neural network, pattern generator, Gated Recurrent Unit, analysis of variance, linear regression analysis
National Category
Control Engineering
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-115042 (URN), 10.1080/10255842.2025.2558026 (DOI), 001574785800001 (), 40970703 (PubMedID), 2-s2.0-105016735241 (Scopus ID)
Note

Funder: Ministry of Higher Education Malaysia (FP104-2022)

Available from: 2025-10-08 Created: 2025-10-08 Last updated: 2025-10-21
Ke, Q., Hum, Y. C., Yap, W.-S., Tan, T. S., Nisar, H., Mokayed, H., . . . Gan, Y. (2025). Histopathological classification of colorectal cancer based on domain-specific transfer learning and multi-model feature fusion. Scientific Reports, 15, Article ID 35155.
2025 (English) In: Scientific Reports, E-ISSN 2045-2322, Vol. 15, article id 35155. Article in journal (Refereed). Published
Abstract [en]

Colorectal cancer (CRC) poses a significant global health burden, where early and accurate diagnosis is vital to improving patient outcomes. However, the structural complexity of CRC histopathological images renders manual analysis time-consuming and error-prone. This study aims to develop an automated deep learning framework that enhances classification accuracy and efficiency in CRC diagnosis. The proposed model integrates domain-specific transfer learning and multi-model feature fusion to address challenges such as multi-scale structures, noisy labels, class imbalance, and fine-grained subtype classification. The model first applies domain-specific transfer learning to extract highly relevant features from histopathological images. A multi-head self-attention mechanism then fuses features from multiple pre-trained models, followed by a multilayer perceptron (MLP) classifier for final prediction. The framework was evaluated on three publicly available CRC datasets: EBHI, Chaoyang, and COAD. The model achieved a classification accuracy of 99.68% on the EBHI dataset (200 × subset), 86.72% on the Chaoyang dataset, and 99.44% on the COAD dataset. These results demonstrate strong generalization across diverse and complex histopathological image conditions. This study highlights the effectiveness of combining domain-specific transfer learning with multi-model feature fusion and attention mechanisms for CRC classification. The proposed model offers a reliable and efficient tool to support pathologists in diagnostic workflows, with the potential to reduce manual workload and improve diagnostic consistency.

Place, publisher, year, edition, pages
Nature Research, 2025
Keywords
Histopathological classification, Colorectal cancer, Transfer learning, Domain-specific models, Feature fusion
National Category
Medical Imaging; Artificial Intelligence
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-115411 (URN), 10.1038/s41598-025-19134-z (DOI), 001590983000015 (), 41062589 (PubMedID), 2-s2.0-105018289333 (Scopus ID)
Note

Validated;2025;Level 2;2025-11-20 (u5);

Full text license: CC BY-NC-ND 4.0;

Funder: Guangxi First-Class Discipline Statistics Construction Project Fund; Guangxi Higher Education Institutions Young and Middle-aged Teachers’ Basic Research Capacity Enhancement Project (2024KY0669)

Available from: 2025-11-20 Created: 2025-11-20 Last updated: 2025-12-04. Bibliographically approved
Marashli, M. A., Ho Lai, H. L., Mokayed, H., Sandin, F., Liwicki, M., Tang, H.-K. & Yu, W. C. (2025). Identifying quantum phase transitions with minimal prior knowledge by unsupervised learning. SciPost Physics Core, 8, Article ID 029.
2025 (English) In: SciPost Physics Core, E-ISSN 2666-9366, Vol. 8, article id 029. Article in journal (Refereed). Published
Abstract [en]

In this work, we propose a novel approach for identifying quantum phase transitions in one-dimensional quantum many-body systems using an AutoEncoder (AE), an unsupervised machine learning technique, with minimal prior knowledge. The AEs are trained on reduced density matrix (RDM) data obtained by Exact Diagonalization (ED) across the entire range of the driving parameter, so no prior knowledge of the phase diagram is required. With this method, we successfully detect the phase transitions in a wide range of models with multiple phase transitions of different types, including topological and Berezinskii-Kosterlitz-Thouless transitions, by tracking the changes in the reconstruction loss of the AE. The learned representation of the AE is used to characterize the physical phenomena underlying the different quantum phases. Our methodology demonstrates a new approach to studying quantum phase transitions that requires minimal knowledge and little data, and it produces compressed representations of the quantum states.
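The mechanism (tracking reconstruction loss across the driving-parameter sweep) can be illustrated with a deliberately tiny stand-in: below, a rank-1 linear autoencoder (the top principal component, found by power iteration) replaces the neural AE, and the transition appears as the largest jump in per-sample reconstruction loss. All data and model choices are toy assumptions, not the paper's setup:

```python
import math

def top_component(xs):
    """Top eigenvector of the 2x2 second-moment matrix, by power iteration."""
    c00 = sum(x[0] * x[0] for x in xs)
    c01 = sum(x[0] * x[1] for x in xs)
    c11 = sum(x[1] * x[1] for x in xs)
    v = (1.0, 1.0)
    for _ in range(100):
        w0 = c00 * v[0] + c01 * v[1]
        w1 = c01 * v[0] + c11 * v[1]
        norm = math.hypot(w0, w1)
        v = (w0 / norm, w1 / norm)
    return v

def reconstruction_losses(xs):
    """Per-sample loss of a rank-1 linear autoencoder fit on all of xs."""
    w = top_component(xs)                      # shared encoder/decoder weight
    losses = []
    for x in xs:
        code = x[0] * w[0] + x[1] * w[1]       # 1-d latent code
        losses.append((x[0] - code * w[0]) ** 2 + (x[1] - code * w[1]) ** 2)
    return losses

# Synthetic sweep over driving parameter g: for g < 0.5 the samples have one
# degree of freedom; for g >= 0.5 a second, sign-alternating component
# appears that a rank-1 model cannot reconstruct.
n = 40
data = []
for i in range(n):
    g = i / (n - 1)
    s = 1.0 + 0.1 * ((i * 7) % 5)              # deterministic "noise"
    t = 0.0 if g < 0.5 else 0.4 * (1 if i % 2 == 0 else -1)
    data.append((s, t))

loss = reconstruction_losses(data)
jump = max(range(1, n), key=lambda i: loss[i] - loss[i - 1])
# `jump` marks the sample where the loss jumps: the detected transition.
```

The model is trained on the whole sweep, mirroring the paper's "no prior knowledge of the phase diagram" setting; the loss curve alone locates the transition.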

Place, publisher, year, edition, pages
SciPost Foundation, 2025
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-112019 (URN), 10.21468/scipostphyscore.8.1.029 (DOI), 001439922700002 (), 2-s2.0-86000573356 (Scopus ID)
Note

Validated;2025;Level 2;2025-03-17 (u5);

Full text license: CC BY 4.0;

Funder: Research Grants Council of Hong Kong (CityU 11318722); National Natural Science Foundation of China (12204130); Shenzhen Start-Up Research Funds (HA11409065); City University of Hong Kong (9610438, 7005610, 9680320); HITSZ Start-Up Funds (X2022000);

Available from: 2025-03-17 Created: 2025-03-17 Last updated: 2025-10-21. Bibliographically approved
Mokayed, H., Palaiahnakote, S., Alkhaled, L. & AL-Masri, A. N. (2025). License Plate Number Detection in Drone Images. Artificial Intelligence and Applications, 3(1), 1-9
2025 (English) In: Artificial Intelligence and Applications, E-ISSN 2811-0854, Vol. 3, no 1, p. 1-9. Article in journal (Refereed). Published
Abstract [en]

Identifying license plate numbers in drone images is difficult for an intelligent transportation system, yet it underpins practical applications such as parking management, traffic management, and the automatic organization of parking spots. The primary goal of the presented work is to extract robust, invariant features from the phase congruency model (PCM) that can withstand the difficulties posed by drone images. The work then exploits a fully connected neural network to fix precise bounding boxes regardless of orientation, shape, and text size. The proposed method detects text in both license plate images and natural scene images, which leads to a better recognition stage. Both our drone dataset (Mimos) and the benchmark license plate dataset (Medialab) are used to assess the effectiveness of the study. To show that the suggested system can detect text in natural scenes in a wide variety of situations, four benchmark datasets, namely SVT, MSRA-TD-500, ICDAR 2017 MLT, and Total Text, are used for the experimental results. We also describe trials that demonstrate robustness to varying heights and angles. The code and data for this work will be made publicly available on GitHub.

Place, publisher, year, edition, pages
Bon View Publishing Pte Ltd, 2025
Keywords
phase congruency, text detection, natural scene images
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108380 (URN), 10.47852/bonviewaia2202421 (DOI), 2-s2.0-105005967103 (Scopus ID)
Projects
processIT
Note

Approved;2025;Level 0;2025-04-15 (u8);

Fulltext license: CC BY

Available from: 2024-07-23 Created: 2024-07-23 Last updated: 2025-10-21. Bibliographically approved
Günther, C., Simán, F., Mokayed, H., Liwicki, M., Jansson, N., McDonnell, P., . . . Liwicki, F. S. (2025). Machine learning for drill core image analysis: A review. Ore Geology Reviews, 187, Article ID 106974.
2025 (English) In: Ore Geology Reviews, ISSN 0169-1368, E-ISSN 1872-7360, Vol. 187, article id 106974. Article, review/survey (Refereed). Published
Abstract [en]

With the growing demand for raw materials, there is also an increased need for faster and more efficient mineral exploration, especially for locating possible materials to mine. Machine learning (ML) is becoming increasingly relevant for supporting the labor-intensive and interpretive process of drill core logging, since drill cores are the most direct and physically preserved record of subsurface geology available during mineral exploration. This paper reviews the current state of the art in ML-based drill core analysis, including its capabilities, limitations, and specific challenges related to generalization and practical deployment within geological workflows. The review focuses specifically on photographic images of drill core, which have been used routinely in mineral exploration for decades. The paper makes several major contributions: it offers a structured overview of current methods, organized around three key geological tasks, namely lithology prediction, geotechnical analysis, and mineralogical prediction. It also identifies research gaps and proposes directions for future work, concluding with an emphasis on advancing context-aware machine learning in drill core analysis through a human-in-the-loop approach.

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Drill core images, Machine learning, Lithology prediction, Geotechnical analysis, Mineralogical prediction, Human-in-the-loop
National Category
Geology; Computer Sciences
Research subject
Machine Learning; Ore Geology
Identifiers
urn:nbn:se:ltu:diva-115306 (URN), 10.1016/j.oregeorev.2025.106974 (DOI), 001611235100001 (), 2-s2.0-105020672423 (Scopus ID)
Note

Validated;2025;Level 2;2025-11-04 (u8);

Funder: Boliden AB;

Full text license: CC BY

Available from: 2025-11-04 Created: 2025-11-04 Last updated: 2025-12-04. Bibliographically approved
Voon, W., Hum, Y. C., Tee, Y. K., Yap, W.-S., Lai, K. W., Nisar, H. & Mokayed, H. (2025). Trapezoidal Step Scheduler for Model-Agnostic Meta-Learning in Medical Imaging. Pattern Recognition, 161, Article ID 111316.
2025 (English) In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 161, article id 111316. Article in journal (Refereed). Published
Abstract [en]

Model-Agnostic Meta-learning (MAML) is a widely adopted few-shot learning (FSL) method designed to mitigate the dependency on large, labeled datasets of deep learning-based methods in medical imaging analysis. However, MAML's reliance on a fixed number of gradient descent (GD) steps for task adaptation results in computational inefficiency and task-level overfitting. To address this issue, we introduce Tra-MAML, which optimizes the balance between model adaptation capacity and computational efficiency through a trapezoidal step scheduler (TRA). The TRA scheduler dynamically adjusts the number of GD steps in the inner optimization loop: initially increasing the steps uniformly to reduce variance, maintaining the maximum number of steps to enhance adaptation capacity, and finally decreasing the steps uniformly to mitigate overfitting. Our evaluation of Tra-MAML against selected FSL methods across four medical imaging datasets demonstrates its superior performance. Notably, Tra-MAML outperforms MAML by 13.36% on the BreaKHis40X dataset in the 3-way 10-shot scenario.
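The trapezoidal schedule itself is simple to state: the inner-loop gradient-step count ramps up linearly, holds at its maximum, then ramps back down. A minimal sketch of such a scheduler (the 30% ramp fraction and the step bounds are illustrative assumptions, not the paper's published hyperparameters):

```python
def trapezoidal_steps(epoch, total_epochs, min_steps=1, max_steps=5,
                      ramp_frac=0.3):
    """Trapezoid-shaped count of inner-loop GD steps for one meta-epoch.

    Ramps up linearly (to reduce variance), plateaus (full adaptation
    capacity), then ramps down (to curb task-level overfitting).
    """
    ramp = max(1, int(total_epochs * ramp_frac))
    if epoch < ramp:                            # linear warm-up
        frac = epoch / ramp
    elif epoch >= total_epochs - ramp:          # linear cool-down
        frac = (total_epochs - 1 - epoch) / ramp
    else:                                       # plateau
        frac = 1.0
    return min_steps + round(frac * (max_steps - min_steps))

schedule = [trapezoidal_steps(e, 100) for e in range(100)]
```

Mapped over 100 meta-epochs, this yields one inner step at the start and end and the full five steps across the plateau.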

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Few-shot learning, Medical image classification, Trapezoidal step scheduler, Model-agnostic meta-learning
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111276 (URN), 10.1016/j.patcog.2024.111316 (DOI), 001394295500001 (), 2-s2.0-85214252576 (Scopus ID)
Note

Validated;2025;Level 2;2025-01-13 (signyg);

Funder: Universiti Tunku Abdul Rahman Research Fund (IPSR/RMC/UTARRF/2022-C1/H01)

Available from: 2025-01-13 Created: 2025-01-13 Last updated: 2025-10-21. Bibliographically approved
Mokayed, H., Saini, R., Adewumi, O., Alkhaled, L., Backe, B., Shivakumara, P., . . . Hum, Y. C. (2025). Vehicle Detection Performance in Nordic Region. In: Apostolos Antonacopoulos, Subhasis Chaudhuri, Rama Chellappa, Cheng-Lin Liu, Saumik Bhattacharya, Umapada Pal (Ed.), Pattern Recognition: 27th International Conference, ICPR 2024, Kolkata, India, December 1–5, 2024, Proceedings, Part XXII. Paper presented at 27th International Conference on Pattern Recognition (ICPR 2024), Kolkata, India, December 1-5, 2024 (pp. 62-77). Springer Science and Business Media Deutschland GmbH
2025 (English) In: Pattern Recognition: 27th International Conference, ICPR 2024, Kolkata, India, December 1–5, 2024, Proceedings, Part XXII / [ed] Apostolos Antonacopoulos, Subhasis Chaudhuri, Rama Chellappa, Cheng-Lin Liu, Saumik Bhattacharya, Umapada Pal, Springer Science and Business Media Deutschland GmbH, 2025, p. 62-77. Conference paper, Published paper (Refereed)
Abstract [en]

This paper addresses the critical challenge of vehicle detection in the harsh winter conditions of the Nordic regions, characterized by heavy snowfall, reduced visibility, and low lighting. Traditional vehicle detection methods have struggled in these adverse conditions due to their susceptibility to environmental distortions and occlusions. Advanced deep learning architectures have brought promise, yet the unique difficulties of detecting vehicles in Nordic winters remain inadequately addressed. This study uses the Nordic Vehicle Dataset (NVD), which contains UAV (unmanned aerial vehicle) images from northern Sweden, to evaluate the performance of state-of-the-art vehicle detection algorithms under challenging weather conditions. Our methodology includes a comprehensive evaluation of single-stage, two-stage, segmentation-based, and transformer-based detectors on the NVD. We propose a series of enhancements tailored to each detection framework, including data augmentation, hyperparameter tuning, and transfer learning, and we specifically implement and enhance the Detection Transformer (DETR). A novel architecture is proposed that leverages self-attention mechanisms, with the help of MSER (maximally stable extremal regions) and RST (Rough Set Theory), to identify and filter the regions that model long-range dependencies and complex scene contexts. Our findings not only highlight the limitations of current detection systems in the Nordic environment but also offer promising directions for enhancing these algorithms for improved robustness and accuracy in vehicle detection amidst the complexities of winter landscapes. The code and the dataset are available at https://nvd.ltu-ai.dev.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15322
Keywords
Vehicle detection, Nordic region, DETR, MSER, Roughset, YOLO (You only look once), Faster-RCNN (regions with convolutional neural networks), SSD (Single Shot MultiBox), U-Net
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111232 (URN), 10.1007/978-3-031-78312-8_5 (DOI), 2-s2.0-85212264328 (Scopus ID)
Conference
27th International Conference on Pattern Recognition (ICPR 2024), Kolkata, India, December 1-5, 2024
Note

ISBN for host publication: 978-3-031-78311-1, 978-3-031-78312-8

Available from: 2025-01-08 Created: 2025-01-08 Last updated: 2025-10-21. Bibliographically approved
Kim, W. Y., Hum, Y. C., Tee, Y. K., Yap, W.-S., Mokayed, H. & Lai, K. W. (2024). A modified single image dehazing method for autonomous driving vision system. Multimedia tools and applications, 83(9), 25867-25899
2024 (English) In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 83, no 9, p. 25867-25899. Article in journal (Refereed). Published
Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Computer graphics and computer vision
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-101210 (URN), 10.1007/s11042-023-16547-8 (DOI), 001060201500002 (), 2-s2.0-85168983978 (Scopus ID)
Note

Validated;2024;Level 2;2024-04-08 (signyg);

Funder: Universiti Tunku Abdul Rahman (IPSR/RMC/UTARRF/2022-C1/H02)

Available from: 2023-09-05 Created: 2023-09-05 Last updated: 2025-10-21. Bibliographically approved
Wang, J., Adelani, D. I., Agrawal, S., Masiak, M., Rei, R., Briakou, E., . . . Stenetorp, P. (2024). AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages. In: Duh K.; Gomez H.; Bethard S. (Ed.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024: . Paper presented at 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico, June 16-21, 2024 (pp. 5997-6023). Association for Computational Linguistics (ACL), Article ID 200463.
2024 (English) In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 / [ed] Duh K.; Gomez H.; Bethard S., Association for Computational Linguistics (ACL), 2024, p. 5997-6023, article id 200463. Conference paper, Published paper (Refereed)
Abstract [en]

Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AFRICOMET: COMET evaluation metrics for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
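The headline number here is a Spearman-rank correlation with human judgments (0.441). As a reminder of what is being computed, a stdlib sketch of Spearman's rho as Pearson correlation on tie-averaged ranks (the example scores are invented, not AfriCOMET outputs):

```python
import math

def _ranks(values):
    """Ranks starting at 1, with ties given their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of the tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    ra, rb = _ranks(a), _ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / math.sqrt(var_a * var_b)

# Invented example: a metric that ranks four translations exactly as the
# human raters do gets rho = 1.0.
metric_scores = [0.7, 0.4, 0.9, 0.2]
human_scores = [80, 55, 90, 30]
rho = spearman(metric_scores, human_scores)
```

Because only the ranks matter, any monotone rescaling of the metric leaves rho unchanged, which is why it is the preferred summary for learned MT metrics.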

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2024
National Category
Natural Language Processing
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108639 (URN), 10.18653/v1/2024.naacl-long.334 (DOI), 2-s2.0-85199581086 (Scopus ID)
Conference
2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico, June 16-21, 2024
Note

Funder: UTTER (101070631); Portuguese Recovery and Resilience Plan (C645008882-00000055); Landmark Development Initiative Africa; European Commission; Fundação para a Ciência e a Tecnologia;

ISBN for host publication: 979-889176114-8; 

Fulltext license: CC BY. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.

Available from: 2024-08-20 Created: 2024-08-20 Last updated: 2025-10-21. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-6158-3543
