Experimental Evaluation of a Geometry-Aware Aerial Visual Inspection Framework in a Constrained Environment
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0002-4383-7316
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0003-1437-1809
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0003-3922-1735
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-8870-6718
2022 (English). In: 2022 30th Mediterranean Conference on Control and Automation (MED), IEEE, 2022, p. 468-474. Conference paper, Published paper (Refereed).
Abstract [en]

This article presents an experimental evaluation of an offline, geometry-aware aerial visual inspection framework for geometrically fractured objects in a constrained environment, employing an autonomous unmanned aerial vehicle (UAV) equipped with on-board sensors. Based on a model-centric approach, the proposed inspection framework generates inspection viewpoints around the geometrically fractured object, subject to augmented static bounds that prevent collisions. The visual inspection framework presented in this article aims to mitigate challenges arising from the spatially constrained environment, such as limited configuration space and collisions with the object under inspection, by accounting for the geometric information of the vehicle to be inspected. The efficacy of the proposed scheme is experimentally evaluated through large-scale field trials with a mining machine.
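
As an illustrative aside beyond the record itself, the minimal Python sketch below shows one generic way model-centric viewpoint generation with a collision margin can be expressed. The function name, the fixed standoff distance, and the inflated axis-aligned bounding box used as a collision bound are all assumptions for illustration; they are not the paper's actual augmented static bounds.

import numpy as np

# Illustrative only: one candidate camera pose per mesh face, obtained by
# offsetting the face centroid along its outward normal, then discarding
# candidates that fall inside an inflated bounding box of the object
# (a crude stand-in for geometry-aware collision bounds, not the paper's method).
def generate_viewpoints(face_centroids, face_normals, standoff=2.0, margin=1.0):
    normals = face_normals / np.linalg.norm(face_normals, axis=1, keepdims=True)
    positions = face_centroids + standoff * normals   # candidate camera positions
    view_dirs = -normals                               # each camera looks back at its patch
    lo = face_centroids.min(axis=0) - margin           # inflated axis-aligned bound
    hi = face_centroids.max(axis=0) + margin
    keep = np.any((positions < lo) | (positions > hi), axis=1)
    return positions[keep], view_dirs[keep]

# Toy object: two faces of a unit-scale block, one facing +x and one facing +y.
centroids = np.array([[0.5, 0.0, 0.0], [0.0, 0.5, 0.0]])
normals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
positions, directions = generate_viewpoints(centroids, normals)
print(positions)
print(directions)

In such a sketch, each kept position/direction pair would serve as one candidate inspection waypoint for the UAV's planner; the paper itself evaluates its own viewpoint-generation scheme in field trials with a mining machine.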

Place, publisher, year, edition, pages
IEEE, 2022. p. 468-474
Series
Mediterranean Conference on Control and Automation (MED), ISSN 2325-369X, E-ISSN 2473-3504
Keywords [en]
geometry-aware, visual inspection, constrained environment, autonomous unmanned aerial vehicle, geometrically fractured object
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
URN: urn:nbn:se:ltu:diva-92699
DOI: 10.1109/MED54222.2022.9837166
ISI: 000854013700078
Scopus ID: 2-s2.0-85136307583
ISBN: 978-1-6654-0673-4 (electronic)
ISBN: 978-1-6654-0674-1 (print)
OAI: oai:DiVA.org:ltu-92699
DiVA, id: diva2:1691348
Conference
30th Mediterranean Conference on Control and Automation (MED), Vouliagmeni, Greece, June 28 - July 1, 2022
Available from: 2022-08-30. Created: 2022-08-30. Last updated: 2025-02-09. Bibliographically approved.
In thesis
1. One Image, Many Insights: A Synergistic Approach Towards Enabling Autonomous Visual Inspection
2023 (English). Licentiate thesis, comprehensive summary (Other academic).
Alternative title [sv]
En bild, många insikter: ett synergistiskt tillvägagångssätt för att möjliggöra autonom visuell inspektion
Abstract [en]

Visual inspection in autonomous robotics is a task in which autonomous agents are required to gather visual information about objects of interest in a manner that ensures safety, efficiency, and comprehensive coverage. It is therefore crucial for identifying key landmarks, detecting cracks or defects, and reconstructing the observed object for detailed analysis. This thesis delves into the challenges encountered by autonomous agents in executing such tasks and presents frameworks for scenarios ranging from operations by multiple spacecraft in close proximity to celestial bodies in deep space to terrestrial deployments of Unmanned Aerial Vehicles (UAVs) for inspection of large-scale infrastructure. The research pursues two main directions. First, a novel formation control strategy is developed to enable autonomous agents to perform proximity operations safely, efficiently, and accurately in order to map the surface of Small Celestial Bodies (SCBs). This investigation encompasses control and coordination strategies, leveraging a realistic astrodynamic model of the orbital environment to navigate safely around SCBs. Along this direction, the contributions focus on enabling a distributed autonomy framework in the form of a cooperative stereo configuration between two spacecraft, allowing acquisition of 3D topological information of the candidate SCB. The framework employs a Leader-Follower approach, treating the maintenance of the desired stereo formation as a 6 Degree-of-Freedom (DoF) nonlinear model predictive control (NMPC) problem.

The second research direction addresses the problem of enabling robotic inspection for terrestrial applications. With the growing demand for efficient and reliable inspection techniques to improve in-situ situational awareness, the research concentrates on obtaining a detailed visual scan of available structures without any a priori knowledge of either the environment or the structures. The key contributions of the presented work thus reside in the implementation of a unified autonomy, with the unification drawing its roots from the merging of two distinct research perspectives: inspection and exploration planning. The contribution establishes a novel solution by introducing a map-independent approach with a synergistic formulation of a reactive, profile-adaptive view-planner coupled with a hierarchical exploration strategy and an environment-invariant scene recognition module. By integrating exploration and inspection methodologies, this research seeks to enhance the capabilities of UAVs in navigating and inspecting unknown structures in unfamiliar environments.

Through theoretical developments, extensive simulations, and experimental validations, this thesis contributes to advancing the state of the art in visual inspection with autonomous robots. Moreover, the findings extend the current capabilities of autonomous agents in the field of space exploration as well as in disaster response and complex infrastructure inspection.
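
As a hedged illustration of the Leader-Follower stereo-formation NMPC mentioned in the abstract, and not the thesis's actual formulation, the Python sketch below poses a toy receding-horizon problem in which a follower tracks a desired baseline relative to a leader. The double-integrator dynamics, horizon length, cost weights, and the use of scipy.optimize.minimize are assumptions made purely for illustration; the thesis considers a full 6-DoF problem driven by a realistic astrodynamic model.

import numpy as np
from scipy.optimize import minimize

# Simplified stand-in dynamics: a 3-DoF translational double integrator.
# The thesis treats a 6-DoF model with realistic orbital dynamics; this
# toy version only conveys the receding-horizon tracking structure.
DT, HORIZON = 1.0, 5

def rollout(x0, u_seq):
    """Propagate follower state x = [position, velocity] under piecewise-constant accelerations."""
    x = x0.copy()
    traj = []
    for u in u_seq.reshape(HORIZON, 3):
        p, v = x[:3], x[3:]
        p = p + v * DT + 0.5 * u * DT**2
        v = v + u * DT
        x = np.concatenate([p, v])
        traj.append(x)
    return np.array(traj)

def nmpc_cost(u_seq, x0, leader_traj, desired_offset, w_track=10.0, w_u=0.1):
    """Penalize deviation from the desired stereo baseline relative to the leader, plus control effort."""
    traj = rollout(x0, u_seq)
    err = traj[:, :3] - (leader_traj + desired_offset)
    return w_track * np.sum(err**2) + w_u * np.sum(u_seq**2)

# Toy scenario: the leader drifts along +x, the follower should hold a 1 m lateral baseline.
leader_traj = np.array([[0.1 * (k + 1), 0.0, 0.0] for k in range(HORIZON)])
desired_offset = np.array([0.0, 1.0, 0.0])
x0 = np.zeros(6)  # follower starts at the origin, at rest

res = minimize(nmpc_cost, np.zeros(3 * HORIZON),
               args=(x0, leader_traj, desired_offset), method="L-BFGS-B")
first_control = res.x[:3]  # receding horizon: apply only the first input, then re-solve
print("first acceleration command:", first_control)

Only the first optimized input would be applied before re-solving at the next step, which is the defining receding-horizon pattern of model predictive control.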

Place, publisher, year, edition, pages
Luleå: Luleå University of Technology, 2023
Series
Licentiate thesis / Luleå University of Technology, ISSN 1402-1757
Keywords
Unified autonomy, aerial robotics, space robotics
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
URN: urn:nbn:se:ltu:diva-101301
ISBN: 978-91-8048-367-4
ISBN: 978-91-8048-368-1
Presentation
2023-11-09, A1545, Luleå University of Technology, Luleå, 09:00 (English)
Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kottayam Viswanathan, Vignesh; Satpute, Sumeet Gajanan; Lindqvist, Björn; Kanellakis, Christoforos; Nikolakopoulos, George

