2024 (English). In: Sensors, E-ISSN 1424-8220, Vol. 24, no. 1, article id 185. Article in journal (Refereed). Published.
Abstract [en]
Direct policy learning (DPL) is a widely used approach in imitation learning for time-efficient and effective convergence when training mobile robots. However, the use of DPL in real-world applications remains underexplored, owing to the inherent challenges of mobilizing direct human expertise and the difficulty of measuring comparative performance. Furthermore, autonomous systems are often resource-constrained, limiting the potential application and implementation of highly effective deep learning models. In this work, we present a lightweight DPL-based approach to train mobile robots in navigational tasks. We integrate a safety policy alongside the navigational policy to safeguard both the robot and the environment. The approach was evaluated in simulation and real-world settings and compared with recent work in this space. The results of these experiments, together with the efficient transfer from simulation to real-world settings, demonstrate that our approach outperforms its hardware-intensive counterparts. We show that, using the proposed methodology, the training agent achieves performance close to that of the expert within the first 15 training iterations in both simulation and real-world settings.
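To make the abstract's workflow concrete, the following is a minimal, hypothetical sketch of DPL with a safety policy layered over the learner's navigational policy. The toy 1-D navigation task, the nearest-neighbour learner, and all function names are assumptions for illustration only; they are not the paper's actual implementation.

```python
import random

def expert_policy(state):
    """Expert steers toward the goal at position 0 on a 1-D track."""
    return -1 if state > 0 else 1

def safety_policy(state, action):
    """Safety override: block actions that would leave the corridor [-10, 10]."""
    nxt = state + action
    if nxt < -10 or nxt > 10:
        return 0  # stop instead of leaving the safe corridor
    return action

def train_dpl(iterations=15, episode_len=20, seed=0):
    """DAgger-style direct policy learning: query the expert in every
    visited state, but execute the learner's (safety-filtered) action."""
    rng = random.Random(seed)
    dataset = []  # aggregated (state, expert_action) pairs

    def learner_policy(state):
        # Nearest-neighbour lookup over the aggregated demonstrations;
        # acts randomly before any data has been collected.
        if not dataset:
            return rng.choice([-1, 1])
        _, a = min(dataset, key=lambda pair: abs(pair[0] - state))
        return a

    for _ in range(iterations):
        state = rng.randint(-5, 5)
        for _ in range(episode_len):
            dataset.append((state, expert_policy(state)))  # data aggregation
            action = safety_policy(state, learner_policy(state))
            state += action
    return learner_policy

policy = train_dpl()
```

After training, the learned policy steers toward the goal from either side (e.g. `policy(100)` returns `-1`), while the safety policy keeps every executed rollout inside the allowed corridor regardless of how immature the learner is.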
Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2024
Keywords
autonomous navigation, direct policy learning, imitation learning, mobile robots
National Category
Robotics and Automation; Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-103858 (URN)
10.3390/s24010185 (DOI)
001140602200001 ()
38203047 (PubMedID)
2-s2.0-85181972214 (Scopus ID)
Note
Validated; 2024; Level 2; 2024-01-22 (joosat);
Full text license: CC BY
2024-01-22; 2024-01-22; 2025-02-05; Bibliographically approved