TFMarker: A Tangible Fiducial Pattern for Enabling Camera-assisted Guided Landing in SubT Environments
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-8132-4178
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0002-0020-6020
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0003-3498-3765
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-6762-7172
2024 (English). In: 2024 24th International Conference on Control, Automation and Systems (ICCAS), IEEE, 2024, p. 1212-1217. Conference paper, published paper (refereed).
Abstract [en]

Visual servoing plays a crucial role in robotics, spanning a broad spectrum of applications from autonomous cars to aerial manipulation. This article proposes TFMarker, a novel tangible fiducial pattern for enabling camera-assisted guided landing of UAVs, using the visual features of color markers as the main source of information. TFMarker is structured around a 4-point fiducial marker, allowing for accurate, precise, and consistent pose estimation across different environments and lighting conditions, while also offering resilience to motion blur. The presented detection framework is based on a three-step architecture. The first step uses Gaussian and color filtering, in addition to morphological operations, to generate a robust detection of the markers. The second step uses the Gift Wrapping Algorithm to organize the same-color markers based on their relative positioning with respect to the off-color marker. Finally, the Perspective-n-Point optimization problem is solved to extract the pose (i.e., position and orientation) of the proposed pattern with respect to the vision sensor. The efficacy of the proposed scheme has been extensively validated in indoor and SubT environments for the task of autonomous landing using a custom-made UAV. The experimental results showcase the performance of the proposed method, which achieves a higher detection rate in both environments while retaining accuracy and precision similar to the baseline approach. For the video of the experimental evaluation, please refer to the following link: https://youtu.be/Zh13OObp15Q
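The second step of the pipeline, ordering the same-color markers around the off-color one via the Gift Wrapping Algorithm, can be sketched in pure Python. This is a minimal illustration, not the authors' implementation; the function names, the assumption that marker centres arrive as 2D pixel tuples, and the choice to rotate the hull so it starts at the off-color marker are all illustrative:

```python
def gift_wrap(points):
    """Convex hull of 2D points via gift wrapping (Jarvis march),
    returned in a consistent winding order."""
    start = min(points)          # lexicographically smallest point is always on the hull
    hull = [start]
    while True:
        current = hull[-1]
        # pick any point other than the current one as the initial candidate
        candidate = points[0] if points[0] != current else points[1]
        for p in points:
            if p == current:
                continue
            # cross product of (candidate - current) and (p - current):
            # positive means p lies further around, so it replaces candidate
            cross = ((candidate[0] - current[0]) * (p[1] - current[1])
                     - (candidate[1] - current[1]) * (p[0] - current[0]))
            if cross > 0:
                candidate = p
        if candidate == start:   # wrapped all the way around the hull
            break
        hull.append(candidate)
    return hull

def order_markers(same_color, off_color):
    """Order the four marker centres starting at the off-colour marker,
    giving a repeatable 2D-3D correspondence for the later PnP step."""
    hull = gift_wrap(same_color + [off_color])
    i = hull.index(off_color)
    return hull[i:] + hull[:i]
```

For example, with three same-color centres at (0, 0), (2, 0), (2, 2) and the off-color centre at (0, 2), `order_markers` returns `[(0, 2), (2, 2), (2, 0), (0, 0)]`, so the correspondence fed to the Perspective-n-Point solver always begins at the distinguishable marker regardless of camera rotation.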

Place, publisher, year, edition, pages
IEEE, 2024. p. 1212-1217
Keywords [en]
Autonomous Drone Landing, Perception in Perceptually Degraded Conditions, Pose-based Visual Servoing
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Robotics and Artificial Intelligence
Identifiers
URN: urn:nbn:se:ltu:diva-111328
DOI: 10.23919/ICCAS63016.2024.10773374
Scopus ID: 2-s2.0-85214368108
OAI: oai:DiVA.org:ltu-111328
DiVA id: diva2:1929392
Conference
24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024
Note

ISBN for host publication: 978-89-93215-38-0

Available from: 2025-01-20. Created: 2025-01-20. Last updated: 2025-10-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Saucedo, Mario A. V.; Patel, Akash; Dahlquist, Niklas; Bai, Yifan; Lindqvist, Björn; Kanellakis, Christoforos; Nikolakopoulos, George
