Kanellakis, Christoforos (ORCID iD: orcid.org/0000-0001-8870-6718)
Publications (10 of 89)
Stathoulopoulos, N., Kanellakis, C. & Nikolakopoulos, G. (2026). A Minimal Subset Approach for Informed Keyframe Sampling in Large-Scale SLAM. IEEE Robotics and Automation Letters, 11(1), 738-745
2026 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 11, no. 1, pp. 738-745. Article in journal (Refereed). Published.
Abstract [en]

Typical LiDAR SLAM architectures feature a front-end for odometry estimation and a back-end for refining and optimizing the trajectory and map, commonly through loop closures. However, loop closure detection in large-scale missions presents significant computational challenges due to the need to identify, verify, and process numerous candidate pairs for pose graph optimization. Keyframe sampling bridges the front-end and back-end by selecting frames for storing and processing during global optimization. This article proposes an online keyframe sampling approach that constructs the pose graph using the most impactful keyframes for loop closure. We introduce the Minimal Subset Approach (MSA), which optimizes two key objectives: redundancy minimization and information preservation, implemented within a sliding window framework. By operating in the feature space rather than 3-D space, MSA efficiently reduces redundant keyframes while retaining essential information. Evaluations on diverse public datasets show that the proposed approach outperforms naive methods in reducing false positive rates in place recognition, while delivering superior ATE and RPE in metric localization, without the need for manual parameter tuning. Additionally, MSA demonstrates efficiency and scalability by reducing memory usage and computational overhead during loop closure detection and pose graph optimization.
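The redundancy-minimization idea in the abstract can be caricatured as a greedy pass over keyframe feature descriptors: keep a frame only if it is sufficiently dissimilar from everything already kept. This is an illustrative sketch, not the authors' MSA implementation; the cosine metric and the threshold value are our assumptions.

```python
import numpy as np

def minimal_subset(descriptors, redundancy_thresh=0.2):
    # Greedily keep a keyframe only if its cosine distance to every
    # previously kept keyframe exceeds the threshold (redundancy
    # minimization in feature space). Returns indices of kept frames.
    kept_idx, kept_vecs = [], []
    for i, d in enumerate(descriptors):
        v = d / np.linalg.norm(d)
        if all(1.0 - float(v @ k) >= redundancy_thresh for k in kept_vecs):
            kept_idx.append(i)
            kept_vecs.append(v)
    return kept_idx

# Three near-identical descriptors collapse to one kept keyframe;
# the clearly distinct fourth descriptor is preserved.
descs = [np.array([1.0, 0.0]), np.array([0.99, 0.05]),
         np.array([1.0, 0.01]), np.array([0.0, 1.0])]
print(minimal_subset(descs))  # → [0, 3]
```

The actual MSA additionally optimizes an information-preservation objective inside a sliding window, which this toy selection rule omits.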

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2026
Keywords
SLAM, Place Recognition, Loop Closures
National Category
Robotics and automation Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-115724 (URN); 10.1109/LRA.2025.3636035 (DOI); 2-s2.0-105023167593 (Scopus ID)
Funder
EU, Horizon Europe, 101138330
Note

Validated;2025;Level 2;2025-12-08 (u8);

Available from: 2025-12-08. Created: 2025-12-08. Last updated: 2025-12-08. Bibliographically approved.
Patel, A., Saucedo, M. A., Stathoulopoulos, N., Sankaranarayanan, V. N., Tevetzidis, I., Kanellakis, C. & Nikolakopoulos, G. (2025). A Hierarchical Graph-Based Terrain-Aware Autonomous Navigation Approach for Complementary Multimodal Ground-Aerial Exploration. In: 2025 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at the IEEE International Conference on Robotics and Automation (ICRA 2025), May 19-23, 2025, Atlanta, USA (pp. 15879-15885). Institute of Electrical and Electronics Engineers Inc.
2025 (English). In: 2025 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers Inc., 2025, pp. 15879-15885. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-115007 (URN); 10.1109/ICRA55743.2025.11128079 (DOI); 2-s2.0-105016572481 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation, (ICRA 2025), May 19-23, 2025, Atlanta, USA
Note

ISBN for host publication: 979-8-3315-4139-2;

Funder: European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 101138451 PERSEPHONE);

Available from: 2025-10-06. Created: 2025-10-06. Last updated: 2025-10-21. Bibliographically approved.
Saucedo, M. A., Kottayam Viswanathan, V., Kanellakis, C. & Nikolakopoulos, G. (2025). Estimating Commonsense Scene Composition on Belief Scene Graphs. In: 2025 IEEE International Conference on Robotics and Automation, ICRA 2025. Paper presented at the 2025 IEEE International Conference on Robotics & Automation (ICRA 2025), Atlanta, USA, May 19-23, 2025 (pp. 2861-2867). Institute of Electrical and Electronics Engineers Inc.
2025 (English). In: 2025 IEEE International Conference on Robotics and Automation (ICRA 2025), Institute of Electrical and Electronics Engineers Inc., 2025, pp. 2861-2867. Conference paper, Published paper (Refereed).
Abstract [en]

This work establishes the concept of commonsense scene composition, with a focus on extending Belief Scene Graphs by estimating the spatial distribution of unseen objects. Specifically, the commonsense scene composition capability refers to the understanding of the spatial relationships among related objects in the scene, which in this article is modeled as a joint probability distribution for all possible locations of the semantic object class. The proposed framework includes two variants of a Correlation Information (CECI) model for learning probability distributions: (i) a baseline approach based on a Graph Convolutional Network, and (ii) a neuro-symbolic extension that integrates a spatial ontology based on Large Language Models (LLMs). Furthermore, this article provides a detailed description of the dataset generation process for such tasks. Finally, the framework has been validated through multiple runs on simulated data, as well as in a real-world indoor environment, demonstrating its ability to spatially interpret scenes across different room types. For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/f0tqtPVFZ2A

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-115068 (URN); 10.1109/ICRA55743.2025.11127920 (DOI); 2-s2.0-105016688189 (Scopus ID)
Conference
2025 IEEE International Conference on Robotics & Automation (ICRA 2025), Atlanta, USA, May 19-23, 2025
Funder
EU, Horizon Europe, 101119774 SPEAR
Note

ISBN for host publication: 979-8-3315-4139-2

Available from: 2025-10-10. Created: 2025-10-10. Last updated: 2025-10-21. Bibliographically approved.
Stathoulopoulos, N., Kanellakis, C. & Nikolakopoulos, G. (2025). Have We Scene It All? Scene Graph-Aware Deep Point Cloud Compression. IEEE Robotics and Automation Letters, 10(12), 12477-12484
2025 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 10, no. 12, pp. 12477-12484. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-115374 (URN); 10.1109/LRA.2025.3623045 (DOI); 001604871400013 (); 2-s2.0-105019596880 (Scopus ID)
Funder
EU, Horizon Europe, 101138451
Note

Validated;2025;Level 2;2025-11-12 (u2);

Available from: 2025-11-12. Created: 2025-11-12. Last updated: 2025-12-04. Bibliographically approved.
Bai, Y., Kotpalliwar, S., Kanellakis, C. & Nikolakopoulos, G. (2025). Multi-agent Path Planning Based on Conflict-Based Search (CBS) Variations for Heterogeneous Robots. Journal of Intelligent and Robotic Systems, 111(1), Article ID 26.
2025 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 111, no. 1, article id 26. Article in journal (Refereed). Published.
Abstract [en]

This article introduces a novel Multi-agent path planning scheme based on Conflict Based Search (CBS) for heterogeneous holonomic and non-holonomic agents, designated as Heterogeneous CBS (HCBS). The proposed methodology employs a hybrid A∗ algorithm for non-holonomic car-like robots and a conventional A∗ algorithm for holonomic robots. Following this, a body conflict detection strategy is utilized to construct the conflict tree, bridging the initial path planning with the resolution of conflicts among agents. Moreover, we present two variants of HCBS: the Enhanced Conflict-Based Search (EHCBS) and the Depth-First Conflict-Based Search (DFHCBS). We evaluate the efficacy of our proposed algorithms—HCBS, EHCBS, and DFHCBS—against a standard prioritized planning algorithm, focusing on success rates and computational efficiency in environments with varying numbers of agents and obstacles. The empirical results demonstrate that EHCBS exhibits superior computational efficiency in small, dense environments, while DFHCBS performs well in larger-scale environments. This highlights the adaptability of our proposed approaches in various settings, proving the computational advantage of EHCBS and DFHCBS over traditional methods.
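To make the high-level/low-level split behind CBS concrete, here is a toy vertex-conflict-only CBS on a 4-connected grid. This is an illustrative sketch under our own simplifications (a BFS space-time low-level planner; no edge-conflict, body-conflict, or heterogeneous-kinematics handling), not the HCBS/EHCBS/DFHCBS algorithms evaluated in the paper:

```python
import heapq
from collections import deque
from itertools import count

def low_level(grid, start, goal, constraints, horizon=50):
    # BFS over (cell, time) states; a constraint (cell, t) forbids
    # occupying that cell at that timestep. Waiting in place is allowed.
    frontier = deque([(start, 0)])
    parent = {(start, 0): None}
    while frontier:
        (x, y), t = frontier.popleft()
        if (x, y) == goal:
            node, path = ((x, y), t), []
            while node is not None:
                path.append(node[0])
                node = parent[node]
            return path[::-1]
        if t >= horizon:
            continue
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            state = ((nx, ny), t + 1)
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0
                    and state not in parent and state not in constraints):
                parent[state] = ((x, y), t)
                frontier.append(state)
    return None

def first_conflict(paths):
    # Pad finished agents on their goal cell; report the first vertex conflict.
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        seen = {}
        for agent, p in enumerate(paths):
            cell = p[min(t, len(p) - 1)]
            if cell in seen:
                return seen[cell], agent, cell, t
            seen[cell] = agent
    return None

def cbs(grid, starts, goals):
    # High-level conflict tree: branch by forbidding the conflicting
    # cell/time for each of the two agents involved, then replan that agent.
    n = len(starts)
    cons = [set() for _ in range(n)]
    paths = [low_level(grid, starts[i], goals[i], cons[i]) for i in range(n)]
    tie = count()
    open_list = [(sum(map(len, paths)), next(tie), cons, paths)]
    while open_list:
        _, _, cons, paths = heapq.heappop(open_list)
        conflict = first_conflict(paths)
        if conflict is None:
            return paths
        a1, a2, cell, t = conflict
        for agent in (a1, a2):
            new_cons = [set(s) for s in cons]
            new_cons[agent].add((cell, t))
            new_path = low_level(grid, starts[agent], goals[agent], new_cons[agent])
            if new_path is not None:
                new_paths = list(paths)
                new_paths[agent] = new_path
                heapq.heappush(open_list, (sum(map(len, new_paths)),
                                           next(tie), new_cons, new_paths))
    return None

# Two agents swap ends of a 2x3 grid; CBS resolves the head-on meeting.
grid = [[0, 0, 0],
        [0, 0, 0]]
sol = cbs(grid, [(0, 0), (0, 2)], [(0, 2), (0, 0)])
print(sol is not None and first_conflict(sol) is None)  # → True
```

The paper's variants change this skeleton's search order (depth-first in DFHCBS) and low-level planner (hybrid A∗ for car-like robots), which this sketch does not attempt to reproduce.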

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Autonomous robots, Multi-robot systems, Multi-agent path-finding, Conflict-based search
National Category
Robotics and automation Computer Sciences
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-112025 (URN); 10.1007/s10846-025-02229-0 (DOI); 001508745400002 (); 2-s2.0-85219748748 (Scopus ID)
Funder
Swedish Energy Agency, SUM
Note

Validated;2025;Level 2;2025-03-19 (u5);

Full text license: CC BY 4.0;

Funder: LKAB (SUM);

Available from: 2025-03-19. Created: 2025-03-19. Last updated: 2025-12-04. Bibliographically approved.
Calzolari, G., Sumathy, V., Kanellakis, C. & Nikolakopoulos, G. (2025). Reinforcement Learning Driven Multi-Robot Exploration via Explicit Communication and Density-Based Frontier Search. In: 2025 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at the IEEE International Conference on Robotics and Automation (ICRA 2025), May 19-23, 2025, Atlanta, USA (pp. 11406-11412). Institute of Electrical and Electronics Engineers Inc.
2025 (English). In: 2025 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers Inc., 2025, pp. 11406-11412. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-115008 (URN); 10.1109/ICRA55743.2025.11128566 (DOI); 2-s2.0-105016579834 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation, (ICRA 2025), May 19-23, 2025, Atlanta, USA
Note

ISBN for host publication: 979-8-3315-4139-2;

Funder: Wallenberg AI, Autonomous Systems and Software Program (WASP), Knut and Alice Wallenberg Foundation; European Union's Horizon Europe Research and Innovation Programme (Grant Agreement No. 101119774 SPEAR);

Available from: 2025-10-06. Created: 2025-10-06. Last updated: 2025-10-21. Bibliographically approved.
Nordström, S., Stathoulopoulos, N., Dahlquist, N., Lindqvist, B., Tevetzidis, I., Kanellakis, C. & Nikolakopoulos, G. (2025). Safety Inspections and Gas Monitoring in Hazardous Mining Areas Shortly After Blasting Using Autonomous UAVs. Journal of Field Robotics, 42(5), 2076-2094
2025 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 42, no. 5, pp. 2076-2094. Article in journal (Refereed). Published.
Abstract [en]

This article presents the first-ever fully autonomous UAV (Unmanned Aerial Vehicle) mission to perform gas measurements after a real blast in an underground mine. The demonstration mission was deployed around 40 minutes after the blast took place, and as such realistic gas levels were measured. We also present multiple field robotics experiments in different mines detailing the development process. The presented novel autonomy stack, denoted as the Routine Inspection Autonomy (RIA) framework, combines risk-aware 3D path planning (D∗+) with 3D LiDAR-based global relocalization on a known map, and is integrated on custom hardware and a sensing stack with an onboard gas-sensing device. In the presented framework, the autonomous UAV can be deployed in incredibly harsh conditions (dust, significant deformations of the map) shortly after blasting to perform inspections of lingering gases that present a significant safety risk to workers. We also present a change detection framework that can extract and visualize the areas that were changed by the blasting procedure, a critical parameter for planning the extraction of materials and for updating existing mine maps. As will be demonstrated, the RIA stack can enable robust autonomy in harsh conditions, and provides reliable and safe navigation behavior for autonomous Routine Inspection missions.

Place, publisher, year, edition, pages
John Wiley & Sons, 2025
Keywords
Field Robotics, Mining Robotics, Unmanned Aerial Vehicles, Gas Monitoring, Change Detection
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111152 (URN); 10.1002/rob.22500 (DOI); 001536479200038 (); 2-s2.0-85215285844 (Scopus ID)
Funder
EU, Horizon 2020, 101003591
Note

Validated;2025;Level 2;2025-08-07 (u4);

Funder: Sustainable Underground Mining, SUM (SP14);

Full text license: CC BY-NC 4.0

Available from: 2024-12-30. Created: 2024-12-30. Last updated: 2025-12-10. Bibliographically approved.
Kottayam Viswanathan, V., Sumathy, V., Kanellakis, C. & Nikolakopoulos, G. (2024). A Surface Adaptive First-Look Inspection Planner for Autonomous Remote Sensing of Open-Pit Mines. In: 2024 IEEE International Conference on Robotics and Biomimetics (ROBIO). Paper presented at the 2024 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2024), December 10-14, Bangkok, Thailand (pp. 280-285). Institute of Electrical and Electronics Engineers Inc.
2024 (English). In: 2024 IEEE International Conference on Robotics and Biomimetics (ROBIO), Institute of Electrical and Electronics Engineers Inc., 2024, pp. 280-285. Conference paper, Published paper (Refereed).
Abstract [en]

In this work, we present an autonomous inspection framework for remote sensing tasks in active open-pit mines. Specifically, the contributions are focused on developing a methodology where an initial approximate operator-defined inspection plan is exploited by an online view-planner to predict an inspection path that can adapt to changes in the current mine-face morphology caused by routine mining activities. The proposed inspection framework leverages instantaneous 3D LiDAR and localization measurements, coupled with a modelled sensor footprint, for view-planning that satisfies desired viewing and photogrammetric conditions. The efficacy of the proposed framework has been demonstrated through simulation in the Feiring-Bruk open-pit mine environment and through hardware-based outdoor experimental trials. A video showcasing the performance of the proposed work can be found here: https://youtu.be/uWWbDfoBvFc

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-112345 (URN); 10.1109/ROBIO64047.2024.10907551 (DOI); 001480002600046 (); 2-s2.0-105001475600 (Scopus ID)
Conference
2024 IEEE International Conference on Robotics and Biomimetics (IEEE ROBIO 2024), December 10-14, Bangkok, Thailand
Funder
EU, Horizon Europe, 101091462
Note

ISBN for host publication: 979-8-3315-0964-4

Available from: 2025-04-11. Created: 2025-04-11. Last updated: 2025-11-28. Bibliographically approved.
Morrell, B., Otsu, K., Agha, A., Fan, D. D., Kim, S.-K., Ginting, M. F., . . . Burdick, J. (2024). An Addendum to NeBula: Toward Extending Team CoSTAR’s Solution to Larger Scale Environments. IEEE Transactions on Field Robotics, 1, 476-526
2024 (English). In: IEEE Transactions on Field Robotics, E-ISSN 2997-1101, Vol. 1, pp. 476-526. Article in journal (Refereed). Published.
Abstract [en]

This article presents an appendix to the original NeBula autonomy solution developed by the Team Collaborative SubTerranean Autonomous Robots (CoSTAR), participating in the DARPA Subterranean Challenge. Specifically, this article presents extensions to NeBula's hardware, software, and algorithmic components that focus on increasing the range and scale of the exploration environment. From the algorithmic perspective, we discuss the following extensions to the original NeBula framework: 1) large-scale geometric and semantic environment mapping; 2) an adaptive positioning system; 3) probabilistic traversability analysis and local planning; 4) large-scale partially observable Markov decision process (POMDP)-based global motion planning and exploration behavior; 5) large-scale networking and decentralized reasoning; 6) communication-aware mission planning; and 7) multimodal ground-aerial exploration solutions. We demonstrate the application and deployment of the presented systems and solutions in various large-scale underground environments, including limestone mine exploration scenarios as well as deployment in the DARPA Subterranean Challenge.

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Aerial autonomy, communication-aware mission planning, exploration planning, geometric and semantic mapping, multirobot systems, probabilistic traversability analysis
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-114066 (URN); 10.1109/tfr.2024.3430891 (DOI)
Note

Validated;2025;Level 1;2025-11-24 (u8);

Funder: Jet Propulsion Laboratory, California Institute of Technology, through the National Aeronautics and Space Administration (80NM0018D0004); Defense Advanced Research Projects Agency (DARPA); Spanish Ministry of Science and Innovation, Ministerio de Ciencia e Innovación (MCIN)/Agencia Estatal de Investigación (AEI); European Union NextGenerationEU/Plan de Recuperación, Transformación y Resiliencia (PRTR) through the AUDEL Project (TED2021-131759A-I00); Consolidated Research Group Mobile Robotics and Artificial Intelligence Group (RAIG) of the Departament de Recerca i Universitats, Generalitat de Catalunya (SGR 00510)

Available from: 2025-07-11. Created: 2025-07-11. Last updated: 2025-11-24. Bibliographically approved.
Saucedo, M. A., Patel, A., Saradagi, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation. Paper presented at the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024), Yokohama, Japan, May 13-17, 2024 (pp. 9441-9447). IEEE.
2024 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this article, we propose the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. We propose a graph-based learning methodology for the computation of belief (also referred to as expectation) on any given 3D scene graph, which is then used to strategically add new nodes (referred to as blind nodes) that are relevant to a robotic mission. We propose the method of Computation of Expectation based on Correlation Information (CECI) to reasonably approximate real belief/expectation by learning histograms from available training data. A novel Graph Convolutional Neural Network (GCN) model is developed to learn CECI from a repository of 3D scene graphs. As no database of 3D scene graphs exists for the training of the novel CECI model, we present a novel methodology for generating a 3D scene graph dataset based on semantically annotated real-life 3D spaces. The generated dataset is then utilized to train the proposed CECI model and for extensive validation of the proposed method. We establish the novel concept of Belief Scene Graphs (BSG) as a core component to integrate expectations into abstract representations. This new concept is an evolution of the classical 3D scene graph concept and aims to enable high-level reasoning for task planning and optimization of a variety of robotics missions. The efficacy of the overall framework has been evaluated in an object search scenario, and has also been tested in a real-life experiment to emulate human common sense of unseen objects.

For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/hsGlSCa12iY
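The "learning histograms from available training data" step can be caricatured without the GCN: count, over training scene graphs, how often each object class appears with a given anchor, and normalize. This is purely illustrative; the function name and data layout are our assumptions, and the paper regresses these distributions with a learned model rather than by direct counting.

```python
from collections import Counter, defaultdict

def expectation_histograms(training_scenes):
    # training_scenes: list of dicts mapping an anchor (e.g. a room type)
    # to the object classes observed together with it. Returns, per
    # anchor, a normalized histogram P(class | anchor); the kind of
    # "expectation" a CECI-style model is trained to approximate.
    counts = defaultdict(Counter)
    for scene in training_scenes:
        for anchor, objects in scene.items():
            counts[anchor].update(objects)
    return {anchor: {cls: n / sum(c.values()) for cls, n in c.items()}
            for anchor, c in counts.items()}

scenes = [{"kitchen": ["fridge", "sink", "sink"]},
          {"kitchen": ["fridge", "oven"]}]
print(expectation_histograms(scenes)["kitchen"]["fridge"])  # → 0.4
```

A belief scene graph would then use such per-anchor histograms to decide which blind nodes (expected but unseen objects) to attach to a partial scene.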

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105326 (URN); 10.1109/ICRA57147.2024.10611352 (DOI); 001369728000049 (); 2-s2.0-85202433848 (Scopus ID)
Conference
The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024
Note

Funder: European Union's Horizon Europe Research and Innovation Programme (101119774 SPEAR);

ISBN for host publication: 979-8-3503-8457-4;

Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2025-10-21. Bibliographically approved.