Publications (10 of 38)
Wang, J., Adelani, D. I., Agrawal, S., Masiak, M., Rei, R., Briakou, E., . . . Stenetorp, P. (2024). AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages. In: Duh K.; Gomez H.; Bethard S. (Ed.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024: . Paper presented at 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico, June 16-21, 2024 (pp. 5997-6023). Association for Computational Linguistics (ACL), Article ID 200463.
AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages
2024 (English). In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 / [ed] Duh K.; Gomez H.; Bethard S., Association for Computational Linguistics (ACL), 2024, p. 5997-6023, article id 200463. Conference paper, Published paper (Refereed)
Abstract [en]

Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AFRICOMET: COMET evaluation metrics for African languages by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create the state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
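
For context, the headline result above is a Spearman-rank correlation with human judgments (0.441). The sketch below is not the authors' code; it only illustrates, with invented scores, how such a segment-level correlation between human direct-assessment (DA) ratings and learned-metric outputs is typically computed with SciPy.

from scipy.stats import spearmanr

# Hypothetical segment-level scores for one language pair (invented values).
human_da_scores = [78.0, 42.5, 91.0, 55.0, 67.5]   # human direct-assessment ratings
metric_scores   = [0.81, 0.46, 0.88, 0.52, 0.71]   # learned-metric outputs for the same segments

rho, p_value = spearmanr(human_da_scores, metric_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")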

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2024
National Category
Language Technology (Computational Linguistics)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108639 (URN); 10.18653/v1/2024.naacl-long.334 (DOI); 2-s2.0-85199581086 (Scopus ID)
Conference
2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico, June 16-21, 2024
Note

Funder: UTTER (101070631); Portuguese Recovery and Resilience Plan (C645008882-00000055); Landmark Development Initiative Africa; European Commission; Fundação para a Ciência e a Tecnologia;

ISBN for host publication: 979-889176114-8; 

Fulltext license: CC BY. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.

Available from: 2024-08-20. Created: 2024-08-20. Last updated: 2024-11-27. Bibliographically approved.
Vacalopoulou, A., Gardelli, V., Karafyllidis, T., Liwicki, F., Mokayed, H., Papaevripidou, M., . . . Katsouros, V. (2024). AI4EDU: An Innovative Conversational AI Assistant for Teaching and Learning. In: Luis Gómez Chova; Chelo González Martínez; Joanna Lees (Ed.), INTED2024 Conference Proceedings: . Paper presented at 18th annual International Technology, Education and Development Conference (INTED 2024), Valencia, Spain, March 4-6, 2024 (pp. 7119-7127). IATED Academy
AI4EDU: An Innovative Conversational AI Assistant for Teaching and Learning
2024 (English). In: INTED2024 Conference Proceedings / [ed] Luis Gómez Chova; Chelo González Martínez; Joanna Lees, IATED Academy, 2024, p. 7119-7127. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IATED Academy, 2024
Series
INTED Proceedings, ISSN 2340-1079
National Category
Pedagogy Computer Sciences
Research subject
Education; Machine Learning
Identifiers
urn:nbn:se:ltu:diva-104617 (URN); 10.21125/inted.2024.1877 (DOI)
Conference
18th annual International Technology, Education and Development Conference (INTED 2024), Valencia, Spain, March 4-6, 2024
Note

Funder: European Commission (Project 101087451 – AI4EDU – ERASMUS-EDU-2022-PI-FORWARD);

ISBN for host publication: 978-84-09-59215-9;

Available from: 2024-03-18. Created: 2024-03-18. Last updated: 2024-03-18. Bibliographically approved.
Khamis, T., Khamis, A. A., Al Kouzbary, M., Al Kouzbary, H., Mokayed, H., AbdRazak, N. A. & AbuOsman, N. A. (2024). Automated transtibial prosthesis alignment: A systematic review. Artificial Intelligence in Medicine, 156, Article ID 102966.
Automated transtibial prosthesis alignment: A systematic review
2024 (English). In: Artificial Intelligence in Medicine, ISSN 0933-3657, E-ISSN 1873-2860, Vol. 156, article id 102966. Article, review/survey (Refereed), Published
Abstract [en]

This comprehensive systematic review critically analyzes the current progress and challenges in automating transtibial prosthesis alignment. The manual identification of alignment changes in prostheses has been found to lack reliability, necessitating the development of automated processes. Through a rigorous systematic search across major electronic databases, this review includes the highly relevant studies out of an initial pool of 2111 records. The findings highlight the urgent need for automated alignment systems in individuals with transtibial amputation. The selected studies represent cutting-edge research, employing diverse approaches such as advanced machine learning algorithms and innovative alignment tools, to automate the detection and adjustment of prosthesis alignment. Collectively, this review emphasizes the immense potential of automated transtibial prosthesis alignment systems to enhance alignment accuracy and significantly reduce human error. Furthermore, it identifies important limitations in the reviewed studies, serving as a catalyst for future research to address these gaps and explore alternative machine learning algorithms. The insights derived from this systematic review provide valuable guidance for researchers, clinicians, and developers aiming to propel the field of automated transtibial prosthesis alignment forward.

Place, publisher, year, edition, pages
Elsevier B.V., 2024
Keywords
Transtibial prosthesis, Automated alignment, Alignment, Below knee prosthesis, Prosthetic alignment
National Category
Orthopaedics Robotics
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-109655 (URN); 10.1016/j.artmed.2024.102966 (DOI); 001302797800001 (); 39197376 (PubMedID); 2-s2.0-85202159952 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-09-04 (hanlid);

Funder: Ministry of Science, Technology, and Innovation, Malaysia (NTIS 098773)

Available from: 2024-09-04. Created: 2024-09-04. Last updated: 2024-11-20. Bibliographically approved.
Mokayed, H., Nayebiastaneh, A., Alkhaled, L., Sozos, S., Hagner, O. & Backe, B. (2024). Challenging YOLO and Faster RCNN in Snowy Conditions: UAV Nordic Vehicle Dataset (NVD) as an Example. In: Aliya Al-Hashim; Tasneem Pervez; Lazhar Khriji; Muhammad Bilal Waris (Ed.), 2nd International Conference on Unmanned Vehicle Systems: . Paper presented at 2nd International Conference on Unmanned Vehicle Systems (UVS-Oman 2024), Muscat, Oman, February 12-14, 2024. IEEE
Challenging YOLO and Faster RCNN in Snowy Conditions: UAV Nordic Vehicle Dataset (NVD) as an Example
2024 (English). In: 2nd International Conference on Unmanned Vehicle Systems / [ed] Aliya Al-Hashim; Tasneem Pervez; Lazhar Khriji; Muhammad Bilal Waris, IEEE, 2024. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer Vision and Robotics (Autonomous Systems) Robotics
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-105095 (URN); 10.1109/UVS59630.2024.10467166 (DOI); 001192218700019 (); 2-s2.0-85189633947 (Scopus ID)
Conference
2nd International Conference on Unmanned Vehicle Systems (UVS-Oman 2024), Muscat, Oman, February 12-14, 2024
Note

ISBN for host publication: 979-8-3503-7255-7;

Available from: 2024-04-15. Created: 2024-04-15. Last updated: 2024-04-15. Bibliographically approved.
Zarris, D., Sozos, S., Simistira Liwicki, F., Gardelli, V., Karafyllidis, T., Stamouli, S., . . . Mokayed, H. (2024). Enhancing Educational Paradigms with Large Language Models: From Teacher to Study Assistants in Personalized Learning. In: Luis Gómez Chova; Chelo González Martínez; Joanna Lees (Ed.), EDULEARN24 Proceedings: 16th International Conference on Education and New Learning Technologies 1-3 July, 2024, Palma, Spain. Paper presented at 16th International Conference on Education and New Learning Technologies (EDULEARN24), Palma, Spain, July 1-3, 2024 (pp. 1295-1303). IATED Academy
Enhancing Educational Paradigms with Large Language Models: From Teacher to Study Assistants in Personalized Learning
2024 (English). In: EDULEARN24 Proceedings: 16th International Conference on Education and New Learning Technologies, 1-3 July, 2024, Palma, Spain / [ed] Luis Gómez Chova; Chelo González Martínez; Joanna Lees, IATED Academy, 2024, p. 1295-1303. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the application of large language models (LLMs) in the educational field, specifically focusing on roles like "Teacher Assistant" and "Study Assistant" to enhance personalized and adaptive learning. The significance of integrating AI in educational frameworks is underscored, given the shift towards AI-powered educational tools. The methodology of this research is structured and multifaceted, examining the dynamics between prompt engineering, methodological approaches, and LLM outputs with the help of indexed documents. The study bifurcates its approach into prompt structuring and advanced prompt engineering techniques. Initial investigations revolve around persona and template prompts to evaluate their individual and collective effects on LLM outputs. Advanced techniques, including few-shot and chain-of-thought prompting, are analyzed for their potential to elevate the quality and specificity of LLM responses. The "Study Assistant" aspect of the study involves applying these techniques to educational content across disciplines such as biology, mathematics, and physics. Findings from this research are poised to contribute significantly to the evolution of AI in education, offering insights into the variables that enhance LLM performance. This paper not only enriches the academic discourse on LLMs but also provides actionable insights for the development of sophisticated AI-based educational tools. As the educational landscape continues to evolve, this research underscores the imperative for continuous exploration and refinement in the application of AI to fully realize its benefits in education.
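
To make the prompting terms above concrete, here is a minimal illustrative sketch (not the paper's implementation) of how a persona/template prompt, few-shot examples, and a chain-of-thought instruction can be combined for a hypothetical "Study Assistant" question in biology; all strings and names are invented.

PERSONA = "You are a patient study assistant helping a secondary-school student."

FEW_SHOT = [
    ("What does a mitochondrion do?",
     "It produces most of the cell's ATP through cellular respiration."),
]

def build_prompt(question, use_cot=True):
    """Compose persona + few-shot demonstrations + (optional) chain-of-thought cue."""
    parts = [PERSONA]
    for q, a in FEW_SHOT:                                   # few-shot demonstrations
        parts.append(f"Q: {q}\nA: {a}")
    cot = "Think step by step before answering.\n" if use_cot else ""
    parts.append(f"{cot}Q: {question}\nA:")
    return "\n\n".join(parts)

print(build_prompt("Why do plant cells have both chloroplasts and mitochondria?"))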

Place, publisher, year, edition, pages
IATED Academy, 2024
Keywords
Teacher assistant, Student assistant, Large language model, AI4Education
National Category
Pedagogy Computer Systems
Research subject
Machine Learning; Education
Identifiers
urn:nbn:se:ltu:diva-108936 (URN); 10.21125/edulearn.2024.0435 (DOI)
Conference
16th International Conference on Education and New Learning Technologies (EDULEARN24), Palma, Spain, July 1-3, 2024
Projects
AI4EDU
Funder
European Commission, 101087451
Note

ISBN for host publication: 978-84-09-62938-1

Available from: 2024-08-24. Created: 2024-08-24. Last updated: 2024-08-29. Bibliographically approved.
Mokayed, H., Alsayed, G., Lodin, F., Hagner, O. & Backe, B. (2024). Enhancing Object Detection in Snowy Conditions: Evaluating YOLO v9 Models with Augmentation Techniques. In: Muhannad Quwaider, Fahed Alkhabbas, Yaser Jararweh (Ed.), 2024 11th International Conference on Internet of Things: Systems, Management and Security, IOTSMS 2024: . Paper presented at 11th International Conference on Internet of Things: Systems, Management & Security (IOTSMS 2024), Malmö, Sweden, September 2-5, 2024 (pp. 198-203). IEEE
Enhancing Object Detection in Snowy Conditions: Evaluating YOLO v9 Models with Augmentation Techniques
2024 (English). In: 2024 11th International Conference on Internet of Things: Systems, Management and Security, IOTSMS 2024 / [ed] Muhannad Quwaider, Fahed Alkhabbas, Yaser Jararweh, IEEE, 2024, p. 198-203. Conference paper, Published paper (Refereed)
Abstract [en]

In the pursuit of enhancing smart city infrastructure, computer vision serves as a pivotal element for traffic management, scene understanding, and security applications. This research investigates the performance of the YOLO v9-c and YOLO v9-e object detection models in identifying vehicles under snowy weather conditions, leveraging various data augmentation techniques. The study highlights that, historically, object detection relied on complex, handcrafted features, but deep learning advancements have enabled more efficient and accurate end-to-end learning directly from raw data. Despite these advancements, detecting objects in adverse weather conditions like snow remains challenging, affecting the safety and effectiveness of autonomous systems. The study examines the performance of YOLO v9-c and YOLO v9-e under four different scenarios: no augmentation, snow accumulation, snow overlay, and snow depth mapping. Results indicate that both models achieve their highest precision without augmentation, with YOLO v9-c and YOLO v9-e reaching precisions of 82% and 80%, respectively. However, the snow accumulation method severely impacts detection accuracy, with precision dropping to 36% for YOLO v9-c and 43% for YOLO v9-e. Snow overlay augmentation shows better adaptability, with YOLO v9-c achieving 68% and YOLO v9-e 76% precision. Snow depth mapping results in moderate impacts, with precisions of 59% for YOLO v9-c and 61% for YOLO v9-e. The findings emphasize the importance of careful selection and tuning of augmentation techniques to improve object detection models’ robustness under snowy weather conditions, thereby enhancing the safety and efficiency of autonomous systems. The study suggests a tuned augmentation that helps YOLO v9-c and YOLO v9-e reach precisions of 85% and 83%, respectively. Future research should focus more on optimizing augmentation parameters, diversifying training data, and employing domain randomization to further enhance the robustness and generalization capabilities of these models. This approach aims to ensure more reliable performance of autonomous systems in real-world conditions where adverse weather is a common occurrence. The code and the dataset will be available at https://nvd.Itu-ai.dev/
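
For illustration only, the sketch below shows one plausible way to implement a snow-overlay style augmentation of the kind compared in the abstract; the density, brightness, and blending parameters are assumptions, not the authors' exact method.

import numpy as np

def snow_overlay(image, density=0.02, brightness=255.0, alpha=0.7, rng=None):
    """Return a copy of an HxWx3 uint8 image with synthetic snow flecks blended in."""
    rng = rng or np.random.default_rng()
    out = image.astype(np.float32)                           # work in float, keep the original intact
    mask = rng.random(image.shape[:2]) < density             # random snow-pixel locations
    out[mask] = (1.0 - alpha) * out[mask] + alpha * brightness
    return out.clip(0, 255).astype(np.uint8)

# Example: augment a dummy frame before it reaches a detector's training pipeline.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
snowy = snow_overlay(frame, density=0.05)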

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Vehicle detection, Snowy Weather Conditions, Snow augmentation, Autonomous Systems
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110854 (URN); 10.1109/IOTSMS62296.2024.10710270 (DOI); 2-s2.0-85208058240 (Scopus ID)
Conference
11th International Conference on Internet of Things: Systems, Management & Security (IOTSMS 2024), Malmö, Sweden, September 2-5, 2024
Note

ISBN for host publication: 979-8-3503-6650-1

Available from: 2024-11-27. Created: 2024-11-27. Last updated: 2024-11-27. Bibliographically approved.
Mokayed, H., Ulehla, C., Shurdhaj, E., Nayebiastaneh, A., Alkhaled, L., Hagner, O. & Hum, Y. C. (2024). Fractional B-Spline Wavelets and U-Net Architecture for Robust and Reliable Vehicle Detection in Snowy Conditions. Sensors, 24(12), Article ID 3938.
Fractional B-Spline Wavelets and U-Net Architecture for Robust and Reliable Vehicle Detection in Snowy Conditions
2024 (English). In: Sensors, E-ISSN 1424-8220, Vol. 24, no 12, article id 3938. Article in journal (Refereed), Published
Abstract [en]

This paper addresses the critical need for advanced real-time vehicle detection methodologies in Vehicle Intelligence Systems (VIS), especially in the context of using Unmanned Aerial Vehicles (UAVs) for data acquisition in severe weather conditions, such as heavy snowfall typical of the Nordic region. Traditional vehicle detection techniques, which often rely on custom-engineered features and deterministic algorithms, fall short in adapting to diverse environmental challenges, leading to a demand for more precise and sophisticated methods. The limitations of current architectures, particularly when deployed in real-time on edge devices with restricted computational capabilities, are highlighted as significant hurdles in the development of efficient vehicle detection systems. To bridge this gap, our research focuses on the formulation of an innovative approach that combines the fractional B-spline wavelet transform with a tailored U-Net architecture, operational on a Raspberry Pi 4. This method aims to enhance vehicle detection and localization by leveraging the unique attributes of the NVD dataset, which comprises drone-captured imagery under the harsh winter conditions of northern Sweden. The dataset, featuring 8450 annotated frames with 26,313 vehicles, serves as the foundation for evaluating the proposed technique. The comparative analysis of the proposed method against state-of-the-art detectors, such as YOLO and Faster RCNN, in both accuracy and efficiency on constrained devices, emphasizes the capability of our method to balance the trade-off between speed and accuracy, thereby broadening its utility across various domains.
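
As a rough illustration of the segmentation backbone discussed above, here is a minimal U-Net-style encoder-decoder in PyTorch. It is a sketch under stated assumptions, not the paper's tailored architecture, and the fractional B-spline wavelet preprocessing that the paper pairs with its U-Net is not reproduced.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level encoder-decoder with skip connections; output is a per-pixel vehicle mask."""
    def __init__(self, in_ch=3, out_ch=1):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2, self.dec2 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.up1, self.dec1 = nn.ConvTranspose2d(32, 16, 2, stride=2), conv_block(32, 16)
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

mask_logits = TinyUNet()(torch.randn(1, 3, 256, 256))          # -> shape (1, 1, 256, 256)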

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
fractional B-spline, harsh weathers, U-Net, vehicle detection
National Category
Computer Sciences Computer Systems
Research subject
Machine Learning; Centre - ProcessIT Innovations
Identifiers
urn:nbn:se:ltu:diva-108316 (URN); 10.3390/s24123938 (DOI); 001255850100001 (); 38931720 (PubMedID); 2-s2.0-85197187443 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-09-03 (joosat);

Full text: CC BY License

Available from: 2024-07-09. Created: 2024-07-09. Last updated: 2024-09-03. Bibliographically approved.
Saleh, Y. S., Mokayed, H., Nikolaidou, K., Alkhaled, L. & Hum, Y. C. (2024). How GANs assist in Covid-19 pandemic era: a review. Multimedia tools and applications, 83(10), 29915-29944
How GANs assist in Covid-19 pandemic era: a review
2024 (English). In: Multimedia tools and applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 83, no 10, p. 29915-29944. Article, review/survey (Refereed), Published
Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Computer Sciences Computer Vision and Robotics (Autonomous Systems)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-101620 (URN); 10.1007/s11042-023-16597-y (DOI); 001182559400057 (); 2-s2.0-85170832380 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-04-09 (hanlid)

Available from: 2023-10-11. Created: 2023-10-11. Last updated: 2024-11-20. Bibliographically approved.
Voon, W., Chai Hum, Y., Kai Tee, Y., Yap, W.-S., Wee Lai, K., Nisar, H. & Mokayed, H. (2024). IMAML-IDCG: Optimization-based meta-learning with ImageNet feature reusing for few-shot invasive ductal carcinoma grading. Expert systems with applications, 257, Article ID 124969.
IMAML-IDCG: Optimization-based meta-learning with ImageNet feature reusing for few-shot invasive ductal carcinoma grading
2024 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 257, article id 124969. Article in journal (Refereed), Published
Place, publisher, year, edition, pages
Elsevier, 2024
National Category
Computer Sciences Computer Vision and Robotics (Autonomous Systems)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108468 (URN); 10.1016/j.eswa.2024.124969 (DOI); 001295417200001 (); 2-s2.0-85200987898 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-08-15 (signyg);

Funder: Universiti Tunku Abdul Rahman Research Fund (Reference: IPSR/RMC/UTARRF/2022-C1/H01)

Available from: 2024-08-06. Created: 2024-08-06. Last updated: 2024-11-20. Bibliographically approved.
Mishra, A. R., Kumar, R., Gupta, V., Prabhu, S., Upadhyay, R., Chhipa, P. C., . . . Saini, R. (2024). SignEEG v1.0: Multimodal Dataset with Electroencephalography and Hand-written Signature for Biometric Systems. Scientific Data, 11, Article ID 718.
SignEEG v1.0: Multimodal Dataset with Electroencephalography and Hand-written Signature for Biometric Systems
2024 (English). In: Scientific Data, E-ISSN 2052-4463, Vol. 11, article id 718. Article in journal (Refereed), Published
Abstract [en]

Handwritten signatures in biometric authentication leverage unique individual characteristics for identification, offering high specificity through dynamic and static properties. However, this modality faces significant challenges from sophisticated forgery attempts, underscoring the need for enhanced security measures in common applications. To address forgery in signature-based biometric systems, integrating a forgery-resistant modality, namely noninvasive electroencephalography (EEG), which captures unique brain activity patterns, can significantly enhance system robustness by leveraging multimodality's strengths. By combining EEG, a physiological modality, with handwritten signatures, a behavioral modality, our approach capitalizes on the strengths of both, significantly fortifying the robustness of biometric systems through this multimodal integration. In addition, EEG's resistance to replication offers a high security level, making it a robust addition to user identification and verification. This study presents a new multimodal SignEEG v1.0 dataset based on EEG and hand-drawn signatures from 70 subjects. EEG signals and hand-drawn signatures have been collected with Emotiv Insight and Wacom One sensors, respectively. The multimodal data consist of three paradigms based on mental imagery, motor imagery, and physical execution: (i) thinking of the signature's image, (ii) drawing the signature mentally, and (iii) drawing a signature physically. Extensive experiments have been conducted to establish a baseline with machine learning classifiers. The results demonstrate that multimodality in biometric systems significantly enhances robustness, achieving high reliability even with limited sample sizes. We release the raw and pre-processed data and easy-to-follow implementation details.
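
The baseline experiments mentioned above use standard machine-learning classifiers. The sketch below shows, on entirely synthetic features and labels, what a simple feature-level fusion of EEG and signature features followed by an SVM could look like; it is not the released SignEEG v1.0 pipeline, and the accuracy it prints on fake data is meaningless.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_subjects = 700, 70                      # e.g. ten trials per subject (invented)
eeg_feats = rng.normal(size=(n_samples, 40))         # hypothetical EEG band-power features
sig_feats = rng.normal(size=(n_samples, 20))         # hypothetical signature shape/dynamics features
X = np.hstack([eeg_feats, sig_feats])                # simple feature-level fusion
y = rng.integers(0, n_subjects, size=n_samples)      # subject-identity labels (random here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))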

Place, publisher, year, edition, pages
Nature Research, 2024
National Category
Computer Sciences Signal Processing
Research subject
Operation and Maintenance Engineering; Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108479 (URN); 10.1038/s41597-024-03546-z (DOI); 001261561300002 (); 38956046 (PubMedID); 2-s2.0-85197457964 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-08-07 (hanlid);

Full text license: CC BY

Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2024-08-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-6158-3543
