Publications (10 of 161)
Dorigo, T., Brown, G. D., Casonato, C., Cerda, A., Ciarrochi, J., Lio, M. D., . . . Yazdanpanah, N. (2025). Artificial Intelligence in Science and Society: the Vision of USERN. IEEE Access
2025 (English). In: IEEE Access, E-ISSN 2169-3536. Article, research review (Peer reviewed). Epub ahead of print
Abstract [en]

The recent rise in relevance and diffusion of Artificial Intelligence (AI)-based systems and the increasing number and power of applications of AI methods invite a profound reflection on the impact of these innovative systems on scientific research and society at large. The Universal Scientific Education and Research Network (USERN), an organization that promotes initiatives to support interdisciplinary science and education across borders and actively works to improve science policy, collects here the vision of its Advisory Board members, together with a selection of AI experts, to summarize how we see developments in this exciting technology impacting science and society in the foreseeable future. In this review, we first attempt to establish clear definitions of intelligence and consciousness, then provide an overview of AI’s state of the art and its applications. This is followed by a discussion of the implications, opportunities, and liabilities of the diffusion of AI for research in a few representative fields of science. Finally, we address the potential risks of AI to modern society, suggest strategies for mitigating those risks, and present our conclusions and recommendations.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
HSV category
Research programme
Machine Learning; Distributed Computer Systems
Identifiers
urn:nbn:se:ltu:diva-111443 (URN); 10.1109/ACCESS.2025.3529357 (DOI); 2-s2.0-85215252598 (Scopus ID)
Note

Full text license: CC BY;

For funding information, see: 10.1109/ACCESS.2025.3529357

Available from: 2025-01-28 Created: 2025-01-28 Last updated: 2025-01-28
Chhipa, P. C., Vashishtha, G., Anantha sai Settur, J., Saini, R., Shah, M. & Liwicki, M. (2025). ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks. In: ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks. Paper presented at International Conference on Learning Representations (ICLR) 2025.
2025 (English). In: ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks, 2025. Conference paper, published paper (Peer reviewed)
Abstract [en]

Existing self-supervised adversarial training (self-AT) methods rely on hand-crafted adversarial attack strategies for PGD attacks, which fail to adapt to the evolving learning dynamics of the model and do not account for instance-specific characteristics of images. This results in sub-optimal adversarial robustness and limits the alignment between clean and adversarial data distributions. To address this, we propose ASTrA (Adversarial Self-supervised Training with Adaptive-Attacks), a novel framework introducing a learnable, self-supervised attack strategy network that autonomously discovers optimal attack parameters through exploration-exploitation in a single training episode. ASTrA leverages a reward mechanism based on contrastive loss, optimized with REINFORCE, enabling adaptive attack strategies without labeled data or additional hyperparameters. We further introduce a mixed contrastive objective to align the distribution of clean and adversarial examples in representation space. ASTrA achieves state-of-the-art results on CIFAR10, CIFAR100, and STL10 while integrating seamlessly as a plug-and-play module for other self-AT methods. ASTrA shows scalability to larger datasets, demonstrates strong semi-supervised performance, and is resilient to robust overfitting, backed by explainability analysis on optimal attack strategies. The project page, with source code and other details, is at https://prakashchhipa.github.io/projects/ASTrA.
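
The REINFORCE-over-attack-parameters idea can be illustrated with a toy loop. This is a hypothetical sketch, not the paper's implementation: the categorical "strategy network", the quadratic stand-in reward, and all constants below are invented for illustration; the real method scores attacks with a contrastive loss over learned representations.

```python
import numpy as np

# Toy sketch of learning an attack strategy with REINFORCE (illustrative only).
# A categorical distribution over PGD perturbation budgets stands in for the
# strategy network; the quadratic reward stands in for the contrastive loss.
rng = np.random.default_rng(0)
epsilons = np.array([2, 4, 8, 16]) / 255.0
logits = np.zeros(len(epsilons))          # learnable strategy parameters
rewards = -((epsilons - 8 / 255.0) * 255.0 / 8.0) ** 2  # peaks at eps = 8/255

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

baseline, lr = 0.0, 0.5
for _ in range(3000):
    probs = softmax(logits)
    a = rng.choice(len(epsilons), p=probs)        # sample an attack strength
    advantage = rewards[a] - baseline
    baseline = 0.9 * baseline + 0.1 * rewards[a]  # running-mean baseline
    grad = -probs
    grad[a] += 1.0                                # grad of log pi(a) w.r.t. logits
    logits += lr * advantage * grad               # REINFORCE update
```

Over many steps the policy concentrates probability on the budget with the highest (stand-in) reward, which is the exploration-exploitation behaviour the abstract describes.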

HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111564 (URN)
Conference
International Conference on Learning Representations (ICLR) 2025
Available from: 2025-02-07 Created: 2025-02-07 Last updated: 2025-02-07
Nikolaidou, K., Retsinas, G., Sfikas, G. & Liwicki, M. (2025). DiffusionPen: Towards Controlling the Style of Handwritten Text Generation. In: Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol (Ed.), Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXXXV. Paper presented at 18th European Conference on Computer Vision (ECCV 2024), Milano, Italy, September 29 - October 4, 2024 (pp. 417-434). Springer Science and Business Media Deutschland GmbH, LXXXV
2025 (English). In: Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXXXV / [ed] Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol, Springer Science and Business Media Deutschland GmbH, 2025, Vol. LXXXV, pp. 417-434. Conference paper, published paper (Peer reviewed)
Abstract [en]

Handwritten Text Generation (HTG) conditioned on text and style is a challenging task due to the variability of inter-user characteristics and the unlimited combinations of characters that form new words unseen during training. Diffusion Models have recently shown promising results in HTG but still remain under-explored. We present DiffusionPen (DiffPen), a 5-shot style handwritten text generation approach based on Latent Diffusion Models. By utilizing a hybrid style extractor that combines metric learning and classification, our approach manages to capture both textual and stylistic characteristics of seen and unseen words and styles, generating realistic handwritten samples. Moreover, we explore several variation strategies of the data with multi-style mixtures and noisy embeddings, enhancing the robustness and diversity of the generated data. Extensive experiments using the IAM offline handwriting database show that our method outperforms existing methods qualitatively and quantitatively, and its additional generated data can improve the performance of Handwriting Text Recognition (HTR) systems.

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15143
Keywords
Handwriting Generation, Latent Diffusion Models, Few-shot Style Representation
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111074 (URN); 10.1007/978-3-031-73013-9_24 (DOI); 2-s2.0-85211230972 (Scopus ID)
Conference
18th European Conference on Computer Vision (ECCV 2024), Milano, Italy, September 29 - October 4, 2024
Note

ISBN for host publication: 978-3-031-73012-2, 978-3-031-73013-9

Available from: 2024-12-17 Created: 2024-12-17 Last updated: 2025-02-01. Bibliographically checked
Chippa, M. S., Chhipa, P. C., De, K., Liwicki, M. & Saini, R. (2025). LCM: Log Conformal Maps for Robust Representation Learning to Mitigate Perspective Distortion. In: Minsu Cho; Ivan Laptev; Du Tran; Angela Yao; Hongbin Zha (Ed.), Computer Vision – ACCV 2024: 17th Asian Conference on Computer Vision, Hanoi, Vietnam, December 8–12, 2024, Proceedings, Part VIII. Paper presented at 17th Asian Conference on Computer Vision (ACCV 2024), Hanoi, Vietnam, December 8-12, 2024 (pp. 175-191). Springer Nature
2025 (English). In: Computer Vision – ACCV 2024: 17th Asian Conference on Computer Vision, Hanoi, Vietnam, December 8–12, 2024, Proceedings, Part VIII / [ed] Minsu Cho; Ivan Laptev; Du Tran; Angela Yao; Hongbin Zha, Springer Nature, 2025, pp. 175-191. Conference paper, published paper (Peer reviewed)
Abstract [en]

Perspective distortion (PD) leads to substantial alterations in the shape, size, orientation, angles, and spatial relationships of visual elements in images. Accurately determining camera intrinsic and extrinsic parameters is challenging, making it hard to synthesize perspective distortion effectively. Current distortion correction methods couple distortion removal with learning the vision task, making it a multi-step process that often compromises performance. Recent work leverages the Möbius transform for mitigating perspective distortions (MPD) to synthesize perspective distortions without estimating camera parameters. However, the Möbius transform requires tuning multiple interdependent and interrelated parameters and involves complex arithmetic operations, leading to substantial computational complexity. To address these challenges, we propose Log Conformal Maps (LCM), a method leveraging the logarithmic function to approximate perspective distortions with fewer parameters and reduced computational complexity. We provide a detailed foundation, complemented with experiments, to demonstrate that LCM approximates MPD with fewer parameters. We show that LCM integrates well with supervised and self-supervised representation learning, outperforms standard models, and matches state-of-the-art performance in mitigating perspective distortion over multiple benchmarks, namely ImageNet-PD, ImageNet-E, and ImageNet-X. Furthermore, LCM integrates seamlessly with person re-identification and improves its performance. Source code is made publicly available at https://github.com/meenakshi23/Log-Conformal-Maps.
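
The core idea of using a logarithm as a cheap conformal warp can be sketched in a few lines. This is an illustrative sketch under our own assumptions: the function name, the `center` and `scale` parameters, and their defaults are invented here, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: warp pixel coordinates with the complex logarithm.
# log(z) is conformal (angle-preserving) away from the origin, giving a
# perspective-like distortion controlled by a single scale parameter.
def log_conformal_grid(h, w, center=None, scale=1.0):
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w / 2.0, h / 2.0) if center is None else center
    z = (xs - cx) + 1j * (ys - cy)
    z = np.where(z == 0, 1e-9 + 0j, z)   # avoid log(0) at the center pixel
    wz = scale * np.log(z)               # conformal warp
    return wz.real, wz.imag              # warped sampling coordinates

u, v = log_conformal_grid(8, 8)
```

In an augmentation pipeline, `u` and `v` would be used as sampling coordinates to remap the image; the point is that a single-parameter analytic map replaces the multiple interdependent Möbius coefficients.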

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15479
Keywords
Perspective Distortion, Robust Representation Learning, Self-supervised Learning
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111235 (URN); 10.1007/978-981-96-0966-6_11 (DOI); 2-s2.0-85212922792 (Scopus ID)
Conference
17th Asian Conference on Computer Vision (ACCV 2024), Hanoi, Vietnam, December 8-12, 2024
Research funder
Knut and Alice Wallenberg Foundation
Note

ISBN for host publication: 978-981-96-0965-9;

Available from: 2025-01-08 Created: 2025-01-08 Last updated: 2025-02-07. Bibliographically checked
Chhipa, P. C., Chippa, M. S., De, K., Saini, R., Liwicki, M. & Shah, M. (2025). Möbius Transform for Mitigating Perspective Distortions in Representation Learning. In: Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol (Ed.), Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXXIII. Paper presented at 18th European Conference on Computer Vision (ECCV 2024), Milano, Italy, September 29 - October 4, 2024 (pp. 345-363). Springer Science and Business Media Deutschland GmbH
2025 (English). In: Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXXIII / [ed] Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol, Springer Science and Business Media Deutschland GmbH, 2025, pp. 345-363. Conference paper, published paper (Peer reviewed)
Abstract [en]

Perspective distortion (PD) causes unprecedented changes in shape, size, orientation, angles, and other spatial relationships of visual concepts in images. Precisely estimating camera intrinsic and extrinsic parameters is challenging, which hinders synthesizing perspective distortion. The non-availability of dedicated training data poses a critical barrier to developing robust computer vision methods. Additionally, distortion correction methods turn other computer vision tasks into multi-step pipelines and compromise performance. In this work, we propose mitigating perspective distortion (MPD) by employing fine-grained parameter control on a specific family of Möbius transforms to model real-world distortion without estimating camera intrinsic and extrinsic parameters and without the need for actual distorted data. We also present a dedicated perspectively distorted benchmark dataset, ImageNet-PD, to benchmark the robustness of deep learning models against this new dataset. The proposed method outperforms on the existing benchmarks ImageNet-E and ImageNet-X. Additionally, it significantly improves performance on ImageNet-PD while performing consistently on the standard data distribution. Notably, our method shows improved performance on three PD-affected real-world applications—crowd counting, fisheye image recognition, and person re-identification—and one PD-affected challenging CV task: object detection. The source code, dataset, and models are available on the project webpage at https://prakashchhipa.github.io/projects/mpd.
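
The transform itself is compact enough to sketch directly. The coefficients below are invented for illustration; the paper controls a specific family of them, which this minimal sketch does not reproduce.

```python
import numpy as np

# Illustrative sketch: a Mobius transform f(z) = (a*z + b) / (c*z + d),
# with a*d - b*c != 0, applied to pixel coordinates viewed as complex
# numbers. A small |c| bends an otherwise affine map, mimicking a mild
# perspective-like distortion.
def mobius_warp(xs, ys, a=1.0, b=0.0, c=0.002j, d=1.0):
    assert a * d - b * c != 0, "degenerate transform"
    z = xs + 1j * ys
    w = (a * z + b) / (c * z + d)
    return w.real, w.imag

# a=1, b=0, c=0, d=1 is the identity map; the default c bends the grid mildly.
ys, xs = np.mgrid[0:8, 0:8].astype(float)
u, v = mobius_warp(xs, ys)
```

As with any coordinate warp, `u` and `v` would feed an image-resampling step; the `a*d - b*c != 0` check guards against a degenerate (constant) map.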

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15131
Keywords
Perspective Distortion, Self-supervised Learning, Robust Representation Learning
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111233 (URN); 10.1007/978-3-031-73464-9_21 (DOI); 2-s2.0-85212279211 (Scopus ID)
Conference
18th European Conference on Computer Vision (ECCV 2024), Milano, Italy, September 29 - October 4, 2024
Research funder
Knut and Alice Wallenberg Foundation
Note

ISBN for host publication: 978-3-031-73463-2, 978-3-031-73464-9

Available from: 2025-01-08 Created: 2025-01-08 Last updated: 2025-02-07. Bibliographically checked
Belay, B. H., Guyon, I., Mengiste, T., Tilahun, B., Liwicki, M., Tegegne, T. & Egele, R. (2024). A Historical Handwritten Dataset for Ethiopic OCR with Baseline Models and Human-Level Performance. In: Elisa H. Barney Smith; Marcus Liwicki; Liangrui Peng (Ed.), Document Analysis and Recognition, ICDAR 2024: 18th International Conference, Athens, Greece, August 30 – September 4, 2024, Proceedings, Part III. Paper presented at 18th International Conference on Document Analysis and Recognition (ICDAR 2024), Athens, Greece, August 30–September 4, 2024 (pp. 23-38). Springer Science and Business Media Deutschland GmbH, 3
2024 (English). In: Document Analysis and Recognition, ICDAR 2024: 18th International Conference, Athens, Greece, August 30 – September 4, 2024, Proceedings, Part III / [ed] Elisa H. Barney Smith; Marcus Liwicki; Liangrui Peng, Springer Science and Business Media Deutschland GmbH, 2024, Vol. 3, pp. 23-38. Conference paper, published paper (Peer reviewed)
Abstract [en]

This paper introduces a new OCR dataset for historical handwritten Ethiopic script, characterized by a unique syllabic writing system, low-resource availability, and complex orthographic diacritics. The dataset consists of roughly 80,000 annotated text-line images from 1700 pages of 18th to 20th century documents, including a training set with text-line images from the 19th to 20th century and two test sets. One is distributed similarly to the training set with nearly 6,000 text-line images, and the other contains only images from the 18th century manuscripts, with around 16,000 images. The former test set allows us to check baseline performance in the classical IID setting (Independently and Identically Distributed), while the latter addresses a more realistic setting in which the test set is drawn from a different distribution than the training set (Out-Of-Distribution or OOD). Multiple annotators labeled all text-line images for the HHD-Ethiopic dataset, and an expert supervisor double-checked them. We assessed human-level recognition performance and compared it with state-of-the-art (SOTA) OCR models using the Character Error Rate (CER) and Normalized Edit Distance (NED) metrics. Our results show that the model performed comparably to human-level recognition on the 18th century test set and outperformed humans on the IID test set. However, the unique challenges posed by the Ethiopic script, such as detecting complex diacritics, still present difficulties for the models. Our baseline evaluation and dataset will encourage further research on Ethiopic script recognition. The dataset and source code can be accessed at https://github.com/bdu-birhanu/HHD-Ethiopic.
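
The Character Error Rate used to compare models and humans above is the Levenshtein edit distance normalized by reference length. The sketch below is the standard formulation, not necessarily the paper's exact implementation.

```python
# Minimal sketch of the Character Error Rate (CER) used to score OCR output:
# Levenshtein edit distance between prediction and reference, normalized by
# the reference length.
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance in O(len(a)*len(b)).
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def cer(prediction, reference):
    return edit_distance(prediction, reference) / max(len(reference), 1)
```

For example, `cer("sitten", "kitten")` is 1/6: one substitution over a six-character reference.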

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14806
Keywords
Historical Ethiopic script, Human-level recognition performance, HHD-Ethiopic, Normalized edit distance, Text recognition
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110171 (URN); 10.1007/978-3-031-70543-4_2 (DOI); 001336394400002 (); 2-s2.0-85204650159 (Scopus ID)
Conference
18th International Conference on Document Analysis and Recognition (ICDAR 2024), Athens, Greece, August 30–September 4, 2024
Research funder
EU, Horizon 2020, 952215
Note

Funder: ANR Chair of Artificial Intelligence HUMANIA (ANR-19-CHIA-0022); ChaLearn; ICT4D Research Center of Bahir Dar Institute of Technology;

ISBN for host publication: 978-3-031-70542-7, 978-3-031-70543-4

Available from: 2024-10-02 Created: 2024-10-02 Last updated: 2025-02-01. Bibliographically checked
Nilsson, J., Javed, S., Albertsson, K., Delsing, J., Liwicki, M. & Sandin, F. (2024). AI Concepts for System of Systems Dynamic Interoperability. Sensors, 24(9), Article ID 2921.
2024 (English). In: Sensors, E-ISSN 1424-8220, Vol. 24, no. 9, article id 2921. Journal article (Peer reviewed). Published
Abstract [en]

Interoperability is a central problem in digitization and SoS (system of systems) engineering, concerning the capacity of systems to exchange information and cooperate. Dynamically establishing interoperability between heterogeneous CPS (cyber-physical systems) at run-time is a challenging problem. Different aspects of the interoperability problem have been studied in fields such as SoS, neural translation, and agent-based systems, but there are no unifying solutions beyond domain-specific standardization efforts. The problem is complicated by the uncertain and variable relations between physical processes and human-centric symbols, which result from, e.g., latent physical degrees of freedom, maintenance, re-configurations, and software updates. Therefore, we surveyed the literature for concepts and methods needed to automatically establish SoS with purposeful CPS communication, focusing on machine learning and connecting approaches that are not integrated in the present literature. Here, we summarize recent developments relevant to the dynamic interoperability problem, such as representation learning for ontology alignment and inference on heterogeneous linked data; neural networks for transcoding of text and code; concept learning-based reasoning; and emergent communication. We find that there has been a recent interest in deep learning approaches to establishing communication under different assumptions about the environment, language, and nature of the communicating entities. Furthermore, we present examples of architectures and discuss open problems associated with AI-enabled solutions in relation to SoS interoperability requirements. Although these developments open new avenues for research, there are still no examples that bridge the concepts necessary to establish dynamic interoperability in complex SoS, and realistic testbeds are needed.

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
system of systems, dynamic interoperability, AI for cyber-physical systems, representation learning
HSV category
Research programme
Cyber-Physical Systems; Machine Learning
Identifiers
urn:nbn:se:ltu:diva-87246 (URN); 10.3390/s24092921 (DOI); 001219942200001 (); 38733028 (PubMedID); 2-s2.0-85192703355 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-05-03 (joosat);

Funder: European Commission and Arrowhead Tools project (ECSEL JU grant agreement No. 826452);

Full text: CC BY License

Available from: 2021-09-28 Created: 2021-09-28 Last updated: 2024-11-20. Bibliographically checked
Saini, R., Liwicki, M. & Jara-Valera, A. J. (2024). Data Analytics and Artificial Intelligence. In: Sébastien Ziegler, Renáta Radócz, Adrian Quesada Rodriguez, Sara Nieves Matheu Garcia (Ed.), Springer Handbooks: (pp. 427-442). Springer Science and Business Media Deutschland GmbH, Part F3575
2024 (English). In: Springer Handbooks / [ed] Sébastien Ziegler, Renáta Radócz, Adrian Quesada Rodriguez, Sara Nieves Matheu Garcia, Springer Science and Business Media Deutschland GmbH, 2024, Vol. Part F3575, pp. 427-442. Book chapter, part of anthology (Other academic)
Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2024
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111223 (URN); 10.1007/978-3-031-39650-2_18 (DOI); 2-s2.0-85212114810 (Scopus ID)
Available from: 2025-01-07 Created: 2025-01-07 Last updated: 2025-02-05
Pihlgren, G. G., Sandin, F. & Liwicki, M. (2024). Deep Perceptual Similarity is Adaptable to Ambiguous Contexts. In: Tetiana Lutchyn; Adin Ramirez Rivera; Benjamin Ricaud (Ed.), Proceedings of Machine Learning Research, PMLR: Volume 233: Northern Lights Deep Learning Conference, 9-11 January 2024, UiT The Arctic University, Tromsø, Norway. Paper presented at 5th Northern Lights Deep Learning Conference (NLDL 2024), Tromsø, Norway, January 9-11, 2024 (pp. 212-219). Proceedings of Machine Learning Research
2024 (English). In: Proceedings of Machine Learning Research, PMLR: Volume 233: Northern Lights Deep Learning Conference, 9-11 January 2024, UiT The Arctic University, Tromsø, Norway / [ed] Tetiana Lutchyn; Adin Ramirez Rivera; Benjamin Ricaud, Proceedings of Machine Learning Research, 2024, pp. 212-219. Conference paper, published paper (Peer reviewed)
Abstract [en]

This work examines the adaptability of Deep Perceptual Similarity (DPS) metrics to contexts beyond those that align with average human perception and contexts in which the standard metrics have been shown to perform well. Prior works have shown that DPS metrics are good at estimating human perception of similarity, so-called perceptual similarity. However, it remains unknown whether such metrics can be adapted to other contexts. In this work, DPS metrics are evaluated for their adaptability to different contradictory similarity contexts. Such contexts are created by randomly ranking six image distortions. Metrics are adapted to consider distortions more or less disruptive to similarity depending on their place in the random rankings. This is done by training pretrained CNNs to measure similarity according to given contexts. The adapted metrics are also evaluated on a perceptual similarity dataset to evaluate whether adapting to a ranking affects their prior performance. The findings show that DPS metrics can be adapted with high performance. While the adapted metrics have difficulties with the same contexts as the baselines, performance is improved in 99% of cases. Finally, it is shown that the adaptation is not significantly detrimental to prior performance on perceptual similarity. The implementation of this work is available online.
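
A DPS-style metric in this sense can be illustrated in a few lines. This is a toy stand-in: random linear layers replace the pretrained CNN, and the per-layer distance follows the common normalized-feature-difference form; none of this is the paper's code.

```python
import numpy as np

# Toy sketch of a deep perceptual similarity (DPS) style metric: the distance
# is an average of per-layer feature differences. Random linear "layers"
# stand in for a pretrained CNN's activations.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)), rng.standard_normal((32, 16))]

def features(x):
    feats = []
    for w in layers:
        x = np.maximum(x @ w, 0.0)   # ReLU "layer"
        feats.append(x)
    return feats

def dps_distance(x, y):
    # Mean squared difference of L2-normalized activations, averaged over layers.
    d = 0.0
    for fx, fy in zip(features(x), features(y)):
        fx = fx / (np.linalg.norm(fx) + 1e-9)
        fy = fy / (np.linalg.norm(fy) + 1e-9)
        d += np.mean((fx - fy) ** 2)
    return d / len(layers)
```

Adapting such a metric to a distortion ranking, as the abstract describes, amounts to fine-tuning the feature extractor so that higher-ranked distortions move features further apart.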

Place, publisher, year, edition, pages
Proceedings of Machine Learning Research, 2024
Series
Proceedings of Machine Learning Research, E-ISSN 2640-3498 ; 233
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-105093 (URN); 2-s2.0-85189301791 (Scopus ID)
Conference
5th Northern Lights Deep Learning Conference (NLDL 2024), Tromsø, Norway, January 9-11, 2024
Note

Full text license: CC BY 4.0;

Available from: 2024-04-15 Created: 2024-04-15 Last updated: 2024-04-15. Bibliographically checked
Adewumi, O., Gerdes, M., Chaltikyan, G., Fernandes, F., Lindsköld, L., Liwicki, M. & Catta-Preta, M. (2024). DigiHealth-AI: Outcomes of the First Blended Intensive Programme (BIP) on AI for Health – a Cross-Disciplinary Multi-Institutional Short Teaching Course. In: JAIR - Journal of Applied Interdisciplinary Research Special Issue (2024): Proceedings of the DigiHealthDay 2023. Paper presented at DigiHealthDay-2023, International Scientific Symposium, Pfarrkirchen, Germany, Nov 10, 2023 (pp. 75-85). Deggendorf Institute of Technology
2024 (English). In: JAIR - Journal of Applied Interdisciplinary Research Special Issue (2024): Proceedings of the DigiHealthDay 2023, Deggendorf Institute of Technology, 2024, pp. 75-85. Conference paper, published paper (Peer reviewed)
Abstract [en]

We reflect on the experiences of organizing and implementing a high-quality Blended Intensive Programme (BIP) as a joint international event. A BIP is a short programme that combines physical mobility with a virtual part. The 6-day event, titled “DigiHealth-AI: Practice, Research, Ethics, and Regulation”, was organized in collaboration with partners from five European nations and support from the EU’s ERASMUS+ programme in November 2023. We introduced a new learning method called ProCoT, involving large language models (LLMs), for preventing cheating by students in writing. We designed an online survey of key questions, which was conducted at the beginning and the end of the BIP. The highlights of the survey are as follows: by the end of the BIP, 84% of the respondents agreed that the intended learning outcomes (ILOs) were fulfilled, 100% strongly agreed that artificial intelligence (AI) benefits the healthcare sector, 62% disagreed that they are concerned about AI potentially eliminating jobs in the healthcare sector (compared to 57% initially), 60% were concerned about their privacy when using AI, and 56% could identify at least two known sources of bias in AI systems (compared to only 43% prior to the BIP). A total of 541 votes were cast by the 40 students who responded. The minimum and maximum numbers of students who answered any particular survey question at a given time were 25 and 40, respectively.

Place, publisher, year, edition, pages
Deggendorf Institute of Technology, 2024
Keywords
Machine learning, healthcare, pedagogy
HSV category
Research programme
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-110792 (URN); 10.25929/dcmwch54 (DOI)
Conference
DigiHealthDay-2023, International Scientific Symposium, Pfarrkirchen, Germany, Nov 10, 2023
Note

Full text license: CC BY-SA 4.0;

Funder: Knut and Alice Wallenberg Foundations; LTU counterpart fund;

Available from: 2024-11-25 Created: 2024-11-25 Last updated: 2024-11-25. Bibliographically checked
Organisations
Identifiers
ORCID iD: orcid.org/0000-0003-4029-6574