RecNet: An Invertible Point Cloud Encoding through Range Image Embeddings for Multi-Robot Map Sharing and Reconstruction
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0002-0108-6286
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-8132-4178
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0001-8235-2728
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. ORCID iD: 0000-0003-0126-1897
2024 (English). In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 4883-4889. Conference paper, Published paper (Refereed).
Abstract [en]

Motivated by the demands of resource-constrained robots and the need for effective place recognition in multi-robot systems, this article introduces RecNet, a novel approach that addresses both challenges concurrently. The core of RecNet's methodology is a transformative process: it projects 3D point clouds into range images, compresses them using an encoder-decoder framework, and subsequently reconstructs the range image, restoring the original point cloud. Additionally, RecNet utilizes the latent vector extracted from this process for efficient place recognition tasks. This approach not only achieves comparable place recognition results but also maintains a compact representation, suitable for sharing among robots to reconstruct their collective maps. The evaluation of RecNet encompasses an array of metrics, including place recognition performance, the structural similarity of the reconstructed point clouds, and the bandwidth savings gained from sharing only the latent vectors. Our proposed approach is assessed on both a publicly available dataset and field experiments, confirming its efficacy and potential for real-world applications.
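The projection step described in the abstract, from a 3D point cloud to a range image, follows the standard spherical projection used by range-image-based LiDAR pipelines. The sketch below is illustrative only: the function name, image resolution, and vertical field of view are assumptions, not parameters taken from the paper.

```python
import numpy as np

def point_cloud_to_range_image(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project an (N, 3) point cloud into an (h, w) range image via
    spherical projection. Pixels with no return stay at 0."""
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                      # range per point
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w                       # column
    v = (1.0 - (pitch - fov_down_rad) / fov) * h            # row

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r   # later points overwrite earlier ones at the same pixel
    return image
```

Feeding the resulting image to an encoder yields the kind of compact latent vector the paper shares between robots; note that this sketch keeps the last return per pixel, whereas a careful implementation would keep the closest.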

Place, publisher, year, edition, pages
IEEE, 2024. p. 4883-4889
National Category
Computer graphics and computer vision; Computer Sciences; Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
URN: urn:nbn:se:ltu:diva-108594
DOI: 10.1109/ICRA57147.2024.10611602
ISI: 001294576203112
Scopus ID: 2-s2.0-85193767104
OAI: oai:DiVA.org:ltu-108594
DiVA, id: diva2:1889791
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, May 13-17, 2024
Note

ISBN for host publication: 979-8-3503-8457-4

Available from: 2024-08-16. Created: 2024-08-16. Last updated: 2025-06-24. Bibliographically approved.
In thesis
1. On Autonomous Map-merging for Multi-Robot Systems
2024 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Three-dimensional (3D) point cloud map merging is a pivotal technology in robotics and automation, enabling the integration of multiple 3D point cloud maps into a single, comprehensive representation of the environment. This technique is particularly advantageous in multi-robot coordination, where multiple robots collaborate to explore and map extensive areas. Each robot generates a local map within its own frame, which serves as crucial data for localization, collision avoidance, navigation, and path planning, and can later be shared and fused into a global map. In addition, human operators in industry can find this advantageous, as it allows for faster and more efficient inspections without the need for manual map alignment. This thesis introduces a modular framework for autonomous 3D point cloud map merging in multi-robot systems, addressing the challenge of aligning local maps by identifying acceptable spatial coordinate transformations. The framework facilitates real-time map merging during multi-robot exploration, enhancing mapping efficiency by preventing redundant exploration of already mapped areas. The first contribution stems from formulating and addressing the map merging problem through a modular pipeline and evaluating each of its components. Two methods are then presented that improve place recognition performance, a fundamental step in the process. The first extends the place recognition pipeline with a topological classification module, enhancing performance in challenging environments and autonomously triggering the map merging pipeline for higher success rates. The second integrates additional data modalities, such as an inexpensive Wi-Fi module, to enhance place recognition performance.

Furthermore, the thesis addresses communication challenges in multi-robot systems. A solution for centralized systems is proposed, where a control mechanism regulates map data transmission to ensure critical information is preserved and the map merging process is not compromised. Additionally, a combined solution for place recognition descriptors is presented, which compresses LiDAR data to improve transmission efficiency. Finally, the map merging framework serves as the backbone of a change detection algorithm. Both the map merging framework and the change detection algorithm are evaluated through a series of use-case deployments, including autonomous Unmanned Aerial Vehicle (UAV) operations in mining areas and a safety inspection mission following a real blast.
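The alignment described above, once a rigid transformation between two local frames has been estimated, amounts to re-expressing one map in the other's frame and fusing the results. The following is a minimal sketch under that assumption; the function name and voxel size are illustrative and not taken from the thesis.

```python
import numpy as np

def merge_maps(map_a, map_b, R, t, voxel=0.1):
    """Fuse two local point cloud maps into one global map, given an
    estimated rigid transform (R, t) mapping points from map_b's frame
    into map_a's frame, then voxel-downsample to drop redundant points
    where the maps overlap.
    map_a: (N, 3), map_b: (M, 3), R: (3, 3) rotation, t: (3,) translation."""
    map_b_in_a = map_b @ R.T + t                  # re-express map_b in map_a's frame
    merged = np.vstack([map_a, map_b_in_a])
    # Keep one point per occupied voxel to remove overlap redundancy.
    _, idx = np.unique(np.floor(merged / voxel).astype(np.int64),
                       axis=0, return_index=True)
    return merged[np.sort(idx)]
```

Estimating (R, t) itself is the hard part the thesis addresses through its place recognition and registration pipeline; this sketch only shows the final fusion step.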

Place, publisher, year, edition, pages
Luleå University of Technology, 2024. p. 234
Series
Licentiate thesis / Luleå University of Technology, ISSN 1402-1757
Keywords
Robotics, Multi-Robot Systems, Map merging
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-110608 (URN)
978-91-8048-696-5 (ISBN)
978-91-8048-697-2 (ISBN)
Presentation
2024-12-06, A3024, Luleå University of Technology, Luleå, 09:00 (English)
Available from: 2024-10-31. Created: 2024-10-31. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Stathoulopoulos, Nikolaos; Saucedo, Mario Alberto Valdes; Koval, Anton; Nikolakopoulos, George
