Publications (10 of 38)
Ogun, S., Owodunni, A. T., Olatunji, T., Alese, E., Oladimeji, B., Afonja, T., . . . Adewumi, T. (2024). 1000 African Voices: Advancing inclusive multi-speaker multi-accent speech synthesis. In: Itshak Lapidot, Sharon Gannot (Ed.), Interspeech 2024. Paper presented at Interspeech 2024, 1-5 September 2024, Kos, Greece (pp. 1855-1859). International Speech Communication Association.
1000 African Voices: Advancing inclusive multi-speaker multi-accent speech synthesis
2024 (English). In: Interspeech 2024 / [ed] Itshak Lapidot, Sharon Gannot, International Speech Communication Association, 2024, p. 1855-1859. Conference paper, Published paper (Refereed)
Abstract [en]

Recent advances in speech synthesis have enabled many useful applications like audio directions in Google Maps, screen readers, and automated content generation on platforms like TikTok. However, these systems are mostly dominated by voices sourced from data-rich geographies with personas representative of their source data. Although 3000 of the world's languages are domiciled in Africa, African voices and personas are under-represented in these systems. As speech synthesis becomes increasingly democratized, it is desirable to increase the representation of African English accents. We present Afro-TTS, the first pan-African accented English speech synthesis system able to generate speech in 86 African accents, with 1000 personas representing the rich phonological diversity across the continent for downstream application in Education, Public Health, and Automated Content Creation. Speaker interpolation retains naturalness and accentedness, enabling the creation of new voices.
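
The speaker-interpolation idea mentioned in the abstract can be illustrated with a minimal sketch: a new voice is obtained by linearly blending the speaker embeddings a multi-speaker TTS model conditions on. The function and embedding dimensionality below are illustrative assumptions, not the Afro-TTS API.

```python
import numpy as np

def interpolate_speakers(emb_a: np.ndarray, emb_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Linearly blend two speaker embeddings; alpha=0 gives speaker A, alpha=1 gives speaker B."""
    if emb_a.shape != emb_b.shape:
        raise ValueError("speaker embeddings must have the same dimensionality")
    return (1.0 - alpha) * emb_a + alpha * emb_b

# Hypothetical usage: in practice the embeddings come from the model's speaker encoder.
emb_a = np.random.randn(256)   # placeholder for an existing accented voice
emb_b = np.random.randn(256)   # placeholder for another voice
new_voice = interpolate_speakers(emb_a, emb_b, alpha=0.3)
```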

Place, publisher, year, edition, pages
International Speech Communication Association, 2024
Keywords
text-to-speech, African-accented TTS, accented speech, multi-accent TTS, multi-speaker TTS
National Category
Language Technology (Computational Linguistics); Specific Languages; Human Computer Interaction
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110840 (URN); 10.21437/interspeech.2024-2281 (DOI)
Conference
Interspeech 2024, 1-5 September 2024, Kos, Greece
Available from: 2024-11-27. Created: 2024-11-27. Last updated: 2024-11-27. Bibliographically approved.
Wang, J., Adelani, D. I., Agrawal, S., Masiak, M., Rei, R., Briakou, E., . . . Stenetorp, P. (2024). AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages. In: Duh K.; Gomez H.; Bethard S. (Ed.), Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024. Paper presented at 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico, June 16-21, 2024 (pp. 5997-6023). Association for Computational Linguistics (ACL), Article ID 200463.
AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages
2024 (English). In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 / [ed] Duh K.; Gomez H.; Bethard S., Association for Computational Linguistics (ACL), 2024, p. 5997-6023, article id 200463. Conference paper, Published paper (Refereed)
Abstract [en]

Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AFRICOMET: COMET evaluation metrics for African languages by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create the state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
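
The reported 0.441 is a Spearman rank correlation between metric scores and human judgments. The snippet below is a minimal illustration of how such a figure is computed; the scores are made up, not the AfriMTE/AfriCOMET data.

```python
from scipy.stats import spearmanr

# Illustrative segment-level scores (placeholders, not real evaluation data).
metric_scores = [0.71, 0.42, 0.88, 0.55, 0.63]   # e.g., learned-metric outputs
human_da      = [75,   40,   90,   52,   70]     # direct assessment (DA) ratings

rho, p_value = spearmanr(metric_scores, human_da)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```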

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2024
National Category
Language Technology (Computational Linguistics)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108639 (URN); 10.18653/v1/2024.naacl-long.334 (DOI); 2-s2.0-85199581086 (Scopus ID)
Conference
2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024), Mexico City, Mexico, June 16-21, 2024
Note

Funder: UTTER (101070631); Portuguese Recovery and Resilience Plan (C645008882-00000055); Landmark Development Initiative Africa; European Commission; Fundação para a Ciência e a Tecnologia;

ISBN for host publication: 979-889176114-8; 

Fulltext license: CC BY. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License.

Available from: 2024-08-20. Created: 2024-08-20. Last updated: 2024-11-27. Bibliographically approved.
Pagliai, I., van Boven, G., Adewumi, T., Alkhaled, L., Gurung, N., Södergren, I. & Barney, E. (2024). Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead. In: Mourad Abbas; Abed Alhakim Freihat (Ed.), Proceedings of the 7th International Conference on Natural Language and Speech Processing (ICNLSP-2024). Paper presented at 7th International Conference on Natural Language and Speech Processing (ICNLSP 2024), Trento, Italy, October 19-20, 2024 (pp. 34-46). Association for Computational Linguistics, Article ID 2024.icnlsp-1.5.
Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead
2024 (English). In: Proceedings of the 7th International Conference on Natural Language and Speech Processing (ICNLSP-2024) / [ed] Mourad Abbas; Abed Alhakim Freihat, Association for Computational Linguistics, 2024, p. 34-46, article id 2024.icnlsp-1.5. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
Association for Computational Linguistics, 2024
National Category
Computer and Information Sciences; General Language Studies and Linguistics
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110841 (URN)
Conference
7th International Conference on Natural Language and Speech Processing (ICNLSP 2024), Trento, Italy, October 19-20, 2024
Note

ISBN for host publication: 9798891761650;

Funder: Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation; Luleå University of Technology (LTU);

Available from: 2024-11-27. Created: 2024-11-27. Last updated: 2024-11-27. Bibliographically approved.
Adewumi, O., Gerdes, M., Chaltikyan, G., Fernandes, F., Lindsköld, L., Liwicki, M. & Catta-Preta, M. (2024). DigiHealth-AI: Outcomes of the First Blended Intensive Programme (BIP) on AI for Health – a Cross-Disciplinary Multi-Institutional Short Teaching Course. In: JAIR - Journal of Applied Interdisciplinary Research Special Issue (2024): Proceedings of the DigiHealthDay 2023. Paper presented at DigiHealthDay-2023, International Scientific Symposium, Pfarrkirchen, Germany, Nov 10, 2023 (pp. 75-85). Deggendorf Institute of Technology
DigiHealth-AI: Outcomes of the First Blended Intensive Programme (BIP) on AI for Health – a Cross-Disciplinary Multi-Institutional Short Teaching Course
2024 (English). In: JAIR - Journal of Applied Interdisciplinary Research Special Issue (2024): Proceedings of the DigiHealthDay 2023, Deggendorf Institute of Technology, 2024, p. 75-85. Conference paper, Published paper (Refereed)
Abstract [en]

We reflect on the experiences in organizing and implementing a high-quality Blended Intensive Programme (BIP) as a joint international event. A BIP is a short programme that combines physical mobility with a virtual part. The 6-day event, titled “DigiHealth-AI: Practice, Research, Ethics, and Regulation”, was organized in collaboration with partners from five European nations and support from the EU’s ERASMUS+ programme in November 2023. We introduced a new learning method called ProCoT, involving large language models (LLMs), for preventing cheating by students in writing. We designed an online survey of key questions, which was conducted at the beginning and the end of the BIP. The highlights of the survey are as follows: By the end of the BIP, 84% of the respondents agreed that the intended learning outcomes (ILOs) were fulfilled, 100% strongly agreed that artificial intelligence (AI) benefits the healthcare sector, 62% disagreed that they are concerned about AI potentially eliminating jobs in the healthcare sector (compared to 57% initially), 60% were concerned about their privacy when using AI, and 56% could identify at least two known sources of bias in AI systems (compared to only 43% prior to the BIP). A total of 541 votes were cast by 40 students, who were the respondents. The minimum and maximum numbers of students who answered any particular survey question at a given period are 25 and 40, respectively.

Place, publisher, year, edition, pages
Deggendorf Institute of Technology, 2024
Keywords
Machine learning, healthcare, pedagogy
National Category
Educational Sciences; Health Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110792 (URN); 10.25929/dcmwch54 (DOI)
Conference
DigiHealthDay-2023, International Scientific Symposium, Pfarrkirchen, Germany, Nov 10, 2023
Note

Full text license: CC BY-SA 4.0;

Funder: Knut and Alice Wallenberg Foundations; LTU counterpart fund;

Available from: 2024-11-25. Created: 2024-11-25. Last updated: 2024-11-25. Bibliographically approved.
Pettersson, J., Hult, E., Eriksson, T. & Adewumi, O. (2024). Generative AI and Teachers - For Us or Against Us? A Case Study. In: Florian Westphal, Einav Peretz-Andersson, Maria Riveiro, Kerstin Bach, Fredrik Heintz (Ed.), 14th Scandinavian Conference on Artificial Intelligence SCAI 2024. Paper presented at Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden (pp. 37-43). Linköping University Electronic Press.
Generative AI and Teachers - For Us or Against Us? A Case Study
2024 (English). In: 14th Scandinavian Conference on Artificial Intelligence SCAI 2024 / [ed] Florian Westphal, Einav Peretz-Andersson, Maria Riveiro, Kerstin Bach, Fredrik Heintz, Linköping University Electronic Press, 2024, p. 37-43. Conference paper, Published paper (Refereed)
Abstract [en]

We present insightful results of a survey on the adoption of generative artificial intelligence (GenAI) by university teachers in their teaching activities. The transformation of education by GenAI, particularly large language models (LLMs), presents both opportunities and challenges, including cheating by students. We prepared the online survey according to best practices, and the questions were created by the authors, who have pedagogy experience. The survey contained 12 questions and a pilot study was first conducted. The survey was then sent to all teachers in multiple departments across different campuses of the university of interest in Sweden: Luleå University of Technology. The survey was available in both Swedish and English. The results show that 35 of the 67 respondents (more than half) use GenAI. Preparation is the teaching activity for which GenAI is used most frequently, and ChatGPT is the most commonly used GenAI tool. 59% say GenAI has impacted their teaching; however, 55% say there should be legislation around its use, especially as inaccuracies and cheating are the biggest concerns.

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2024
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 208
National Category
Didactics
Research subject
Machine Elements
Identifiers
urn:nbn:se:ltu:diva-110803 (URN); 10.3384/ecp208005 (DOI)
Conference
Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden
Note

Fulltext license: CC BY;

ISBN for host publication: 978-91-8075-709-6

Available from: 2024-11-25. Created: 2024-11-25. Last updated: 2024-11-25. Bibliographically approved.
Adewumi, T., Habib, N., Alkhaled, L. & Barney, E. (2024). Instruction Makes a Difference. In: Giorgos Sfikas; George Retsinas (Ed.), Document Analysis Systems: 16th IAPR International Workshop, DAS 2024, Athens, Greece, August 30–31, 2024, Proceedings. Paper presented at 16th IAPR International Workshop on Document Analysis Systems (DAS 2024), Athens, Greece, August 30-31, 2024 (pp. 71-88). Springer Science and Business Media Deutschland GmbH
Instruction Makes a Difference
2024 (English). In: Document Analysis Systems: 16th IAPR International Workshop, DAS 2024, Athens, Greece, August 30–31, 2024, Proceedings / [ed] Giorgos Sfikas; George Retsinas, Springer Science and Business Media Deutschland GmbH, 2024, p. 71-88. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce the Instruction Document Visual Question Answering (iDocVQA) dataset and the Large Language Document (LLaDoc) model, for training Language-Vision (LV) models for document analysis and predictions on document images, respectively. Usually, deep neural networks for the DocVQA task are trained on datasets lacking instructions. We show that using instruction-following datasets improves performance. We compare performance across document-related datasets using the recent state-of-the-art (SotA) Large Language and Vision Assistant (LLaVA)1.5 as the base model. We also evaluate the performance of the derived models for object hallucination using the Polling-based Object Probing Evaluation (POPE) dataset. The results show that instruction-tuning performance ranges from 11x to 32x of zero-shot performance and from 0.1% to 4.2% over non-instruction (traditional task) finetuning. Despite the gains, these still fall short of human performance (94.36%), implying there’s much room for improvement.
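
As a rough illustration of the difference between a plain DocVQA sample and an instruction-following one, the sketch below builds both prompt styles for the same question; the wording and field layout are assumptions for illustration, not the actual iDocVQA schema.

```python
def plain_sample(question: str) -> str:
    # Traditional task format: the question alone.
    return f"Question: {question} Answer:"

def instruction_sample(question: str) -> str:
    # Instruction-following format: an explicit task description precedes the question.
    return (
        "You are given an image of a document. "
        "Read the document carefully and answer the question concisely.\n"
        f"Question: {question}\nAnswer:"
    )

q = "What is the invoice total?"   # hypothetical document question
print(plain_sample(q))
print(instruction_sample(q))
```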

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14994
Keywords
DocVQA, instruction-tuning, LLM, LMM
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110169 (URN); 10.1007/978-3-031-70442-0_5 (DOI); 2-s2.0-85204640516 (Scopus ID)
Conference
16th IAPR International Workshop on Document Analysis Systems (DAS 2024), Athens, Greece, August 30-31, 2024
Funder
Knut and Alice Wallenberg Foundation
Note

Funder: Wallenberg AI, Autonomous Systems and Software Program (WASP)

ISBN for host publication: 978-3-031-70441-3, 978-3-031-70442-0

Available from: 2024-10-04. Created: 2024-10-04. Last updated: 2024-10-04. Bibliographically approved.
Abid, N., Noman, M. K., Kovács, G., Islam, S. M., Adewumi, T., Lavery, P., . . . Liwicki, M. (2024). Seagrass classification using unsupervised curriculum learning (UCL). Ecological Informatics, 83, Article ID 102804.
Seagrass classification using unsupervised curriculum learning (UCL)
2024 (English). In: Ecological Informatics, ISSN 1574-9541, E-ISSN 1878-0512, Vol. 83, article id 102804. Article in journal (Refereed), Published
Abstract [en]

Seagrass ecosystems are pivotal in marine environments, serving as crucial habitats for diverse marine species and contributing significantly to carbon sequestration. Accurate classification of seagrass species from underwater images is imperative for monitoring and preserving these ecosystems. This paper introduces Unsupervised Curriculum Learning (UCL) to seagrass classification using the DeepSeagrass dataset. UCL progressively learns from simpler to more complex examples, enhancing the model's ability to discern seagrass features in a curriculum-driven manner. Experiments employing state-of-the-art deep learning architectures, convolutional neural networks (CNNs), show that UCL achieved overall 90.12 % precision and 89 % recall, which significantly improves classification accuracy and robustness, outperforming some traditional supervised learning approaches like SimCLR, and unsupervised approaches like Zero-shot CLIP. The methodology of UCL involves four main steps: high-dimensional feature extraction, pseudo-label generation through clustering, reliable sample selection, and fine-tuning the model. The iterative UCL framework refines CNN's learning of underwater images, demonstrating superior accuracy, generalization, and adaptability to unseen seagrass and background samples of undersea images. The findings presented in this paper contribute to the advancement of seagrass classification techniques, providing valuable insights into the conservation and management of marine ecosystems. The code and dataset are made publicly available and can be accessed here: https://github.com/nabid69/Unsupervised-Curriculum-Learning—UCL.
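
The four UCL steps named in the abstract (feature extraction, clustering-based pseudo-labels, reliable-sample selection, fine-tuning) can be sketched roughly as below. The clustering choice, selection rule, and threshold are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def ucl_round(features: np.ndarray, n_classes: int, keep_fraction: float = 0.5):
    """One UCL iteration: cluster CNN features, keep samples closest to their centroid."""
    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(features)
    pseudo_labels = kmeans.labels_
    # Distance of each sample to its own cluster centre.
    dists = np.linalg.norm(features - kmeans.cluster_centers_[pseudo_labels], axis=1)
    # "Reliable" samples: the easiest (closest) fraction; later rounds could raise this.
    cutoff = np.quantile(dists, keep_fraction)
    reliable_idx = np.where(dists <= cutoff)[0]
    return reliable_idx, pseudo_labels[reliable_idx]

# Hypothetical usage with pre-extracted CNN features for seagrass image patches:
features = np.random.randn(1000, 512)           # placeholder feature matrix
idx, labels = ucl_round(features, n_classes=4)  # the CNN would then be fine-tuned on (idx, labels)
```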

 

Place, publisher, year, edition, pages
Elsevier B.V., 2024
Keywords
Seagrass, Deep learning, Unsupervised classification, Curriculum learning, Unsupervised curriculum learning, Underwater digital imaging
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-109778 (URN); 10.1016/j.ecoinf.2024.102804 (DOI); 001307982900001 (); 2-s2.0-85202895926 (Scopus ID)
Note

Validated;2024;Level 2;2024-09-09 (hanlid);

Full text license: CC BY

Available from: 2024-09-09. Created: 2024-09-09. Last updated: 2024-11-20. Bibliographically approved.
Adewumi, T., Adeyemi, M., Anuoluwapo, A., Peters, B., Buzaaba, H., Samuel, O., . . . Liwicki, M. (2023). AfriWOZ: Corpus for Exploiting Cross-Lingual Transfer for Dialogue Generation in Low-Resource, African Languages. In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings. Paper presented at 2023 International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, June 18-23, 2023. Institute of Electrical and Electronics Engineers Inc.
AfriWOZ: Corpus for Exploiting Cross-Lingual Transfer for Dialogue Generation in Low-Resource, African Languages
2023 (English). In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings, Institute of Electrical and Electronics Engineers Inc., 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Dialogue generation is an important NLP task fraught with many challenges. The challenges become more daunting for low-resource African languages. To enable the creation of dialogue agents for African languages, we contribute the first high-quality dialogue datasets for 6 African languages: Swahili, Wolof, Hausa, Nigerian Pidgin English, Kinyarwanda & Yorùbá. There are a total of 9,000 turns, each language having 1,500 turns, which we translate from a portion of the English multi-domain MultiWOZ dataset. Subsequently, we benchmark by investigating & analyzing the effectiveness of modelling through transfer learning by utilizing state-of-the-art (SoTA) deep monolingual models: DialoGPT and BlenderBot. We compare the models with a simple seq2seq baseline using perplexity. Besides this, we conduct human evaluation of single-turn conversations by using majority votes and measure inter-annotator agreement (IAA). We find that the hypothesis that deep monolingual models learn some abstractions that generalize across languages holds. We observe human-like conversations, to different degrees, in 5 out of the 6 languages. The language with the most transferable properties is the Nigerian Pidgin English, with a human-likeness score of 78.1%, of which 34.4% are unanimous. We freely provide the datasets and host the model checkpoints/demos on the HuggingFace hub for public access.
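
Perplexity comparisons of the kind used for this benchmarking can be reproduced with a short, generic sketch using Hugging Face Transformers; the checkpoint and the single example turn below are placeholders, not the AfriWOZ setup itself.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/DialoGPT-small"   # placeholder checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

turn = "Habari, naomba kupata hoteli yenye bei nafuu."  # illustrative Swahili turn
inputs = tokenizer(turn, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the returned loss is the mean token-level cross-entropy.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity = {math.exp(loss.item()):.2f}")
```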

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
Series
Proceedings of the International Joint Conference on Neural Networks, ISSN 2161-4393, E-ISSN 2161-4407
Keywords
crosslingual, dialogue systems, low-resource, multilingual, NLG
National Category
Language Technology (Computational Linguistics); Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-101305 (URN); 10.1109/IJCNN54540.2023.10191208 (DOI); 001046198701045 (); 2-s2.0-85169561924 (Scopus ID); 978-1-6654-8868-6 (ISBN); 978-1-6654-8867-9 (ISBN)
Conference
2023 International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, June 18-23, 2023
Available from: 2023-09-12. Created: 2023-09-12. Last updated: 2024-03-07. Bibliographically approved.
Alkhaled, L., Adewumi, O. & Sabry, S. S. (2023). Bipol: A novel multi-axes bias evaluation metric with explainability for NLP. Natural Language Processing Journal, 4, Article ID 100030.
Bipol: A novel multi-axes bias evaluation metric with explainability for NLP
2023 (English). In: Natural Language Processing Journal, ISSN 2949-7191, Vol. 4, article id 100030. Article in journal (Refereed), Published
Abstract [en]

We introduce bipol, a new metric with explainability, for estimating social bias in text data. Harmful bias is prevalent in many online sources of data that are used for training machine learning (ML) models. In a step to address this challenge we create a novel metric that involves a two-step process: corpus-level evaluation based on model classification and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to classify bias using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2) and the WinoBias dataset. As an additional contribution, we created a large English dataset (with almost 2 million labeled samples) for training models in bias classification and make it publicly available. We also make our code public.
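
The two-step structure described above (corpus-level classification, then sentence-level term frequency) can be caricatured in a few lines. The scoring rule, the stand-in classifier, and the tiny gender lexicon below are simplifying assumptions for illustration, not the published bipol formula.

```python
from typing import Callable, Dict, List, Tuple

def corpus_level(sentences: List[str], is_biased: Callable[[str], bool]) -> float:
    """Step 1: fraction of sentences a trained bias classifier flags as biased."""
    flags = [is_biased(s) for s in sentences]
    return sum(flags) / len(flags) if flags else 0.0

def sentence_level(sentences: List[str],
                   axes: Dict[str, Tuple[List[str], List[str]]]) -> float:
    """Step 2: average frequency imbalance between the two term groups of each axis."""
    tokens = " ".join(sentences).lower().split()
    imbalances = []
    for group_a, group_b in axes.values():
        a = sum(tokens.count(t) for t in group_a)
        b = sum(tokens.count(t) for t in group_b)
        imbalances.append(abs(a - b) / (a + b) if (a + b) else 0.0)
    return sum(imbalances) / len(imbalances) if imbalances else 0.0

# Illustrative usage with a toy classifier and a toy gender axis.
sentences = ["He is a natural leader.", "She should follow his lead."]
toy_classifier = lambda s: "should follow" in s.lower()      # stand-in, not the SotA model
axes = {"gender": (["he", "his", "him"], ["she", "her", "hers"])}
print(corpus_level(sentences, toy_classifier), sentence_level(sentences, axes))
```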

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Bipol, MAB dataset, NLP, Bias
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-102419 (URN); 10.1016/j.nlp.2023.100030 (DOI)
Note

Approved;2023;Level 0;2023-11-13 (joosat);

CC BY 4.0 License

Available from: 2023-11-13. Created: 2023-11-13. Last updated: 2023-11-13. Bibliographically approved.
Adewumi, T., Södergren, I., Alkhaled, L., Sabry, S. S., Liwicki, F. & Liwicki, M. (2023). Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets. In: Galia Angelova, Maria Kunilovskaya and Ruslan Mitkov (Ed.), Proceedings of Recent Advances in Natural Language Processing. Paper presented at International Conference Recent Advances In Natural Language Processing (RANLP 2023), Varna, Bulgaria, September 4-6, 2023 (pp. 1-10). Incoma Ltd.
Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets
2023 (English). In: Proceedings of Recent Advances in Natural Language Processing / [ed] Galia Angelova, Maria Kunilovskaya and Ruslan Mitkov, Incoma Ltd., 2023, p. 1-10. Conference paper, Published paper (Refereed)
Abstract [en]

We investigate five English NLP benchmark datasets (on the superGLUE leaderboard) and two Swedish datasets for bias, along multiple axes. The datasets are the following: Boolean Question (Boolq), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AXg), Recognising Textual Entailment (RTE), Swedish CB, and SWEDN. Bias can be harmful and it is known to be common in data, which ML models learn from. In order to mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to estimate and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common. Hence, we also contribute a new, large Swedish bias-labeled dataset (of 2 million samples), translated from the English version and train the SotA mT5 model on it. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We make the codes, model, and new dataset publicly available.

Place, publisher, year, edition, pages
Incoma Ltd., 2023
Series
International conference Recent advances in natural language processing, E-ISSN 2603-2813 ; 2023
National Category
Language Technology (Computational Linguistics)
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-103097 (URN); 10.26615/978-954-452-092-2_001 (DOI); 2-s2.0-85179181932 (Scopus ID)
Conference
International Conference Recent Advances In Natural Language Processing (RANLP 2023), Varna, Bulgaria, September 4-6, 2023
Note

ISBN for host publication: 978-954-452-092-2

Available from: 2023-11-30. Created: 2023-11-30. Last updated: 2024-11-20. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-5582-2031
