MSL3D: Pointcloud-based muck pile Segmentation and Localization in Unknown SubT Environments
Saucedo, Mario Alberto Valdes, Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-8132-4178
Kanellakis, Christoforos, Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-8870-6718
Nikolakopoulos, George, Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0003-0126-1897
2023 (English). In: 2023 31st Mediterranean Conference on Control and Automation, MED 2023, Institute of Electrical and Electronics Engineers Inc., 2023, p. 269-274. Conference paper, Published paper (Refereed)
Abstract [en]

This article presents MSL3D, a novel framework for pointcloud-based muck pile Segmentation and Localization in unknown Subterranean (SubT) environments. The proposed framework progressively segments muck piles and extracts their locations in a globally constructed point cloud map, using the autonomy sensor payload of mining or robotic platforms. MSL3D is structured in a novel two-layer architecture that relies on the geometric properties of muck piles in underground tunnels: the first layer extracts a local Volume Of Interest (VOI) proposal area from the registered point cloud, and the second layer refines the muck pile extraction of each VOI proposal in the globally optimized point cloud map. The first layer extracts local VOIs bounded in the look-ahead surroundings of the platform. More specifically, the ceiling, the left and right walls, and the ground are continuously segmented using progressive RANSAC, searching for inclined regions in the segmented ground area to keep as the next-best local VOI. Once a local VOI is extracted, it is transmitted to the second layer, where it is converted to world frame coordinates. Subsequently, a morphological filter is applied to segment ground and non-ground points, followed once again by RANSAC to extract the remaining points corresponding to the right and left walls. Finally, Euclidean clustering is utilized to keep the cluster with the majority of points, which is assumed to belong to the muck pile. The efficacy of the proposed scheme was experimentally validated in real, large-scale SubT environments using a custom-made UAV.
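Since the paper's implementation is not reproduced here, the following is a minimal Python sketch of the second-layer refinement step as described above, assuming the Open3D library. DBSCAN stands in for the Euclidean clustering named in the abstract, a single RANSAC plane fit stands in for the morphological ground filter, and all thresholds as well as the extract_muck_pile helper are illustrative assumptions, not the authors' code.

    # Hypothetical sketch of MSL3D's second-layer VOI refinement (Open3D assumed).
    # Steps mirror the abstract: remove ground, remove the wall planes via RANSAC,
    # then keep the largest remaining cluster as the muck pile candidate.
    import numpy as np
    import open3d as o3d

    def extract_muck_pile(voi_points: np.ndarray) -> o3d.geometry.PointCloud:
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(voi_points)

        # Ground removal: the paper uses a morphological filter; a single
        # RANSAC plane fit is substituted here for brevity.
        _, ground_idx = pcd.segment_plane(distance_threshold=0.1,
                                          ransac_n=3, num_iterations=1000)
        rest = pcd.select_by_index(ground_idx, invert=True)

        # Remove the two dominant wall planes with successive RANSAC fits.
        for _ in range(2):
            _, wall_idx = rest.segment_plane(distance_threshold=0.1,
                                             ransac_n=3, num_iterations=1000)
            rest = rest.select_by_index(wall_idx, invert=True)

        # Cluster the remainder and keep the cluster with the most points,
        # assumed to belong to the muck pile. DBSCAN stands in for Euclidean
        # clustering; eps and min_points are illustrative values.
        labels = np.asarray(rest.cluster_dbscan(eps=0.3, min_points=20))
        valid = labels[labels >= 0]
        if valid.size == 0:
            return rest  # no cluster found; return the filtered cloud as-is
        largest = np.bincount(valid).argmax()
        return rest.select_by_index(np.where(labels == largest)[0])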

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023. p. 269-274
Series
Mediterranean Conference on Control and Automation, ISSN 2325-369X, E-ISSN 2473-3504
Keywords [en]
Automatic muck pile extraction, Muck pile localization, Muck pile segmentation, Pointcloud processing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer and Information Sciences
Research subject
Robotics and Artificial Intelligence
Identifiers
URN: urn:nbn:se:ltu:diva-101102
DOI: 10.1109/MED59994.2023.10185912
ISI: 001042336800045
Scopus ID: 2-s2.0-85167798931
ISBN: 979-8-3503-1544-8 (print)
ISBN: 979-8-3503-1543-1 (electronic)
OAI: oai:DiVA.org:ltu-101102
DiVA, id: diva2:1792765
Conference
31st Mediterranean Conference on Control and Automation, MED 2023, Limassol, Cyprus, June 26-29, 2023
Available from: 2023-08-30. Created: 2023-08-30. Last updated: 2024-05-03. Bibliographically approved.
In thesis
1. Towards human-inspired perception in robotic systems by leveraging computational methods for semantic understanding
2024 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents a collection of developments and results towards research on human-like semantic understanding of the environment for robotic systems. Achieving a level of understanding in robots comparable to humans has proven to be a significant challenge: although modern sensors such as stereo cameras and neuromorphic cameras enable robots to perceive the world in a manner akin to human senses, extracting and interpreting semantic information remains significantly less efficient by comparison. This thesis explores different aspects of the machine vision field, leveraging computational methods to address real-life challenges in semantic scene understanding, in both everyday environments and challenging unstructured environments.

The works included in this thesis present key contributions towards three main research directions. The first direction establishes novel perception algorithms for object detection and localization, aimed at real-life deployment on onboard mobile devices in perceptually degraded unstructured environments. Along this direction, the contributions focus on the development of robust detection pipelines as well as fusion strategies for different sensor modalities, including stereo cameras, neuromorphic cameras, and LiDARs.

The second research direction establishes a computational method for leveraging semantic information into meaningful knowledge representations that enable human-inspired behaviors for traversability estimation in reactive navigation. The contribution presents a novel decay function for traversability soft image generation based on exponential decay, fusing semantic and geometric information to obtain density images that represent the pixel-wise traversability of the scene. Additionally, it presents a novel lightweight encoder-decoder network architecture for coarse semantic segmentation of terrain, integrated with a memory module based on a dynamic certainty filter.
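As a rough illustration of the exponential-decay idea (not the thesis' actual formulation; the class costs, decay rate, and fusion rule below are assumptions), a traversability soft image can be composed from a semantic label image and the distance to the nearest blocked pixel:

    # Hypothetical sketch of an exponential-decay traversability image.
    # Semantic input: per-pixel class labels; geometric input: distance to
    # the nearest non-traversable pixel. All constants are assumed.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    # Assumed per-class costs in [0, 1]: 0 = free, 1 = blocked.
    CLASS_COST = {0: 0.0,   # e.g. smooth ground
                  1: 0.5,   # e.g. rough terrain
                  2: 1.0}   # e.g. obstacle

    def traversability_image(labels: np.ndarray, decay: float = 0.2) -> np.ndarray:
        """Return a dense [0, 1] image: 1 = fully traversable, 0 = blocked."""
        cost = np.vectorize(CLASS_COST.get)(labels).astype(float)
        # Distance (in pixels) from each pixel to the nearest blocked pixel.
        dist = distance_transform_edt(cost < 1.0)
        # Exponential decay: traversability is zero at blocked pixels and
        # recovers with distance; the semantic cost scales each pixel's value.
        return (1.0 - cost) * (1.0 - np.exp(-decay * dist))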

Finally, the third research direction establishes the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. The research presents an approach to meaningfully incorporate unobserved objects as nodes into an incomplete 3D scene graph using the proposed method, Computation of Expectation based on Correlation Information (CECI), which reasonably approximates the probability distribution of the scene by learning histograms from available training data. Extensive simulations and real-life experimental setups support the results and assumptions presented in this work.
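A minimal sketch of what such histogram learning could look like (the data layout, scoring rule, and threshold are assumptions made for illustration; CECI's actual formulation is given in the included paper):

    # Hypothetical sketch: learn object co-occurrence histograms from training
    # scenes, then score unobserved candidates for a partially observed room.
    from collections import Counter, defaultdict

    def learn_cooccurrence(training_scenes):
        """training_scenes: list of sets of object labels seen together."""
        cooc = defaultdict(Counter)
        for scene in training_scenes:
            for a in scene:
                for b in scene:
                    if a != b:
                        cooc[a][b] += 1
        return cooc

    def expected_objects(observed, cooc, threshold=0.5):
        """Score unseen labels by normalized co-occurrence with observations."""
        scores = Counter()
        for o in observed:
            total = sum(cooc[o].values()) or 1
            for b, n in cooc[o].items():
                if b not in observed:
                    scores[b] += n / total
        return {b for b, s in scores.items() if s >= threshold}

    # Usage: cooc = learn_cooccurrence([{"desk", "chair", "monitor"}, ...])
    #        expected_objects({"desk", "monitor"}, cooc) -> e.g. {"chair"}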

Place, publisher, year, edition, pages
Luleå: Luleå University of Technology, 2024
Series
Licentiate thesis / Luleå University of Technology, ISSN 1402-1757
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105329 (URN)
978-91-8048-568-5 (ISBN)
978-91-8048-569-2 (ISBN)
Presentation
2024-06-17, A117, Luleå University of Technology, Luleå, 09:00 (English)
Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA
No full text in DiVA