2024 (English) In: 2024 24th International Conference on Control, Automation and Systems (ICCAS), IEEE, 2024, p. 1212-1217. Conference paper, Published paper (Refereed)
Abstract [en]
Visual servoing plays a crucial role in robotics, spanning a broad spectrum of applications from autonomous cars to aerial manipulation. This article proposes TFMarker, a novel tangible fiducial pattern for enabling camera-assisted guided landing of UAVs, using the visual features of color markers as the main source of information. TFMarker is structured around a 4-point fiducial marker, allowing for accurate, precise, and consistent pose estimation in different environments and lighting conditions, while also offering resilience to motion blur. The presented detection framework is based on a three-step architecture: the first step uses Gaussian and color filtering, in addition to morphological operations, to robustly detect the markers. The second step uses the Gift Wrapping Algorithm to organize the same-color markers based on their relative positioning with respect to the off-color marker. Finally, the Perspective-n-Point optimization problem is solved to extract the pose (i.e. position and orientation) of the proposed pattern with respect to the vision sensor. The efficacy of the proposed scheme has been extensively validated in indoor and SubT environments for the task of autonomous landing using a custom-made UAV. The experimental results showcase the performance of the proposed method, which achieves a better detection rate in both environments while retaining accuracy and precision similar to the baseline approach. For the video of the experimental evaluation please refer to the following link: https://youtu.be/Zh13OObp15Q
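The second step of the pipeline above, ordering the same-color markers with the Gift Wrapping (Jarvis march) algorithm relative to the off-color marker, can be sketched in a few lines. This is a minimal, self-contained illustration, not the authors' implementation: the function names and the "rotate to start at the corner nearest the off-color marker" convention are assumptions for the example.

```python
def gift_wrap(points):
    """Gift Wrapping (Jarvis march): return the convex hull of 2-D
    points, starting from the leftmost point and walking the boundary.
    With image coordinates (y pointing down) the order is
    counter-clockwise on screen."""
    if len(points) < 3:
        return list(points)
    start = min(points)            # leftmost point, ties broken by y
    hull = [start]
    current = start
    while True:
        # Pick any other point as the initial candidate edge endpoint.
        candidate = points[0] if points[0] != current else points[1]
        for p in points:
            if p == current:
                continue
            # Cross product of (current->candidate) x (current->p):
            # positive means p lies to the left of the candidate edge,
            # so p is a "more extreme" wrap and replaces the candidate.
            cross = ((candidate[0] - current[0]) * (p[1] - current[1])
                     - (candidate[1] - current[1]) * (p[0] - current[0]))
            if cross > 0:
                candidate = p
        if candidate == start:     # wrapped back around: hull is closed
            return hull
        hull.append(candidate)
        current = candidate

def order_relative_to(hull, anchor):
    """Rotate the hull so it starts at the corner nearest `anchor`
    (e.g. the detected off-color marker), giving every frame the same
    consistent corner ordering for the later PnP step."""
    dists = [(p[0] - anchor[0]) ** 2 + (p[1] - anchor[1]) ** 2 for p in hull]
    i = dists.index(min(dists))
    return hull[i:] + hull[:i]

# Four corner detections plus a spurious interior blob; the hull drops
# the interior point, and the anchor fixes which corner comes first.
corners = gift_wrap([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
ordered = order_relative_to(corners, (2.5, -0.5))
```

With the ordered 2-D corners in hand, the final step would feed them, together with the known 3-D marker layout, to a PnP solver (e.g. OpenCV's `cv2.solvePnP`) to recover the pattern pose.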
Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Autonomous Drone Landing, Perception in Perceptually Degraded Conditions, Pose-based Visual Servoing
National Category
Computer Graphics and Computer Vision; Computer Sciences
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111328 (URN)10.23919/ICCAS63016.2024.10773374 (DOI)2-s2.0-85214368108 (Scopus ID)
Conference
24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024
Note
ISBN for host publication: 978-89-93215-38-0
Available from: 2025-01-20, Created: 2025-01-20, Last updated: 2025-01-20. Bibliographically approved