Kanellakis, Christoforos (ORCID iD: orcid.org/0000-0001-8870-6718)
Publications (10 of 80)
Nordström, S., Stathoulopoulos, N., Dahlquist, N., Lindqvist, B., Tevetzidis, I., Kanellakis, C. & Nikolakopoulos, G. (2025). Safety Inspections and Gas Monitoring in Hazardous Mining Areas Shortly After Blasting Using Autonomous UAVs. Journal of Field Robotics
2025 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

This article presents the first fully autonomous UAV (Unmanned Aerial Vehicle) mission to perform gas measurements after a real blast in an underground mine. The demonstration mission was deployed around 40 minutes after the blast took place, and as such realistic gas levels were measured. We also present multiple field-robotics experiments in different mines detailing the development process. The presented novel autonomy stack, denoted as the Routine Inspection Autonomy (RIA) framework, combines the risk-aware 3D path planner D*+ with 3D LiDAR-based global relocalization on a known map, and is integrated on custom hardware and a sensing stack with an onboard gas sensing device. In the presented framework, the autonomous UAV can be deployed in extremely harsh conditions (dust, significant deformations of the map) shortly after blasting to inspect lingering gases that present a significant safety risk to workers. We also present a change detection framework that can extract and visualize the areas that were changed by the blasting procedure, a critical parameter for planning the extraction of materials and for updating existing mine maps. As demonstrated, the RIA stack enables robust autonomy in harsh conditions and provides reliable and safe navigation behavior for autonomous Routine Inspection missions.
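The change-detection idea described above, comparing maps recorded before and after a blast, can be illustrated with a minimal voxel-occupancy diff. The voxel size and point format below are illustrative assumptions, not the authors' implementation, which operates on full 3D LiDAR maps.

```python
# Minimal sketch of map change detection via voxel occupancy comparison.
# Voxel size and point-cloud format are illustrative assumptions.

def voxelize(points, voxel=1.0):
    """Map 3D points to the set of occupied voxel indices."""
    return {(int(x // voxel), int(y // voxel), int(z // voxel))
            for x, y, z in points}

def changed_voxels(map_before, map_after, voxel=1.0):
    """Return voxels removed (blasted away) and added (new debris)."""
    before = voxelize(map_before, voxel)
    after = voxelize(map_after, voxel)
    return before - after, after - before

# Toy example: one wall voxel disappears after the blast, rubble appears.
pre = [(0.2, 0.3, 0.1), (2.4, 0.1, 0.0)]
post = [(0.2, 0.3, 0.1), (3.6, 0.2, 0.0)]
removed, added = changed_voxels(pre, post)
```

The removed set marks excavated regions for extraction planning; the added set flags debris for map updates.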

Place, publisher, year, edition, pages
John Wiley & Sons, 2025
Keywords
Field Robotics, Mining Robotics, Unmanned Aerial Vehicles, Gas Monitoring, Change Detection
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111152 (URN); 10.1002/rob.22500 (DOI); 2-s2.0-85215285844 (Scopus ID)
Funder
EU, Horizon 2020, 101003591
Note

Full text license: CC BY-NC 4.0; 

Funder: Sustainable Underground Mining, SUM (SP14)

Available from: 2024-12-30. Created: 2024-12-30. Last updated: 2025-02-09.
Saucedo, M. A., Patel, A., Saradagi, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation. Paper presented at The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024 (pp. 9441-9447). IEEE
2024 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this article, we propose the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. We propose a graph-based learning methodology for the computation of belief (also referred to as expectation) on any given 3D scene graph, which is then used to strategically add new nodes (referred to as blind nodes) that are relevant to a robotic mission. We propose the method of Computation of Expectation based on Correlation Information (CECI) to reasonably approximate the real belief/expectation by learning histograms from available training data. A novel Graph Convolutional Neural Network (GCN) model is developed to learn CECI from a repository of 3D scene graphs. As no database of 3D scene graphs exists for training the novel CECI model, we present a novel methodology for generating a 3D scene graph dataset based on semantically annotated real-life 3D spaces. The generated dataset is then utilized to train the proposed CECI model and for extensive validation of the proposed method. We establish the novel concept of Belief Scene Graphs (BSG) as a core component to integrate expectations into abstract representations. This new concept is an evolution of the classical 3D scene graph and aims to enable high-level reasoning for task planning and optimization of a variety of robotics missions. The efficacy of the overall framework has been evaluated in an object search scenario, and has also been tested in a real-life experiment to emulate human common sense of unseen objects.

For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/hsGlSCa12iY
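The expectation computation above can be sketched in miniature: the paper learns belief with a GCN over 3D scene graphs, but plain co-occurrence histograms convey the idea of predicting blind nodes. The room types, objects, and `top_k` parameter are invented for illustration.

```python
from collections import Counter, defaultdict

# Illustrative-only sketch: co-occurrence histograms stand in for the
# paper's learned GCN model of expectation over 3D scene graphs.

def learn_histograms(training_rooms):
    """training_rooms: list of (room_type, [objects]) pairs."""
    hist = defaultdict(Counter)
    for room_type, objects in training_rooms:
        hist[room_type].update(objects)
    return hist

def expected_blind_nodes(hist, room_type, observed, top_k=2):
    """Objects most expected in this room type but not yet observed."""
    candidates = [(obj, n) for obj, n in hist[room_type].most_common()
                  if obj not in observed]
    return [obj for obj, _ in candidates[:top_k]]

# Hypothetical training data and a partial kitchen scene.
train = [("kitchen", ["fridge", "sink", "oven"]),
         ("kitchen", ["fridge", "sink"]),
         ("office", ["desk", "chair"])]
hist = learn_histograms(train)
blind = expected_blind_nodes(hist, "kitchen", observed={"sink"})
```

The returned blind nodes would be added to the partial scene graph as candidate locations for a search mission.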

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105326 (URN); 10.1109/ICRA57147.2024.10611352 (DOI); 2-s2.0-85202433848 (Scopus ID)
Conference
The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024
Note

Funder: European Union's Horizon Europe Research and Innovation Programme (101119774 SPEAR);

ISBN for host publication: 979-8-3503-8457-4;

Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2025-02-07. Bibliographically approved.
Saucedo, M. A., Stathoulopoulos, N., Mololoth, V., Kanellakis, C. & Nikolakopoulos, G. (2024). BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization. In: 2024 32nd Mediterranean Conference on Control and Automation (MED). Paper presented at The 32nd Mediterranean Conference on Control and Automation (MED2024), Chania, Crete, Greece, June 11-14, 2024. IEEE
2024 (English). In: 2024 32nd Mediterranean Conference on Control and Automation (MED), IEEE, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

Object detection and global localization play a crucial role in robotics, spanning a great spectrum of applications from autonomous cars to multi-layered 3D Scene Graphs for semantic scene understanding. This article proposes BOX3D, a novel multi-modal and lightweight scheme for localizing objects of interest by fusing information from an RGB camera and a 3D LiDAR. BOX3D is structured around a three-layered architecture, building up from the local perception of the incoming sequential sensor data to a global perception refinement that accounts for outliers and the overall consistency of each object's observation. More specifically, the first layer handles the low-level fusion of camera and LiDAR data for initial 3D bounding box extraction. The second layer converts the 3D bounding boxes of each LiDAR scan to the world coordinate frame and applies a spatial pairing and merging mechanism to maintain the uniqueness of objects observed from different viewpoints. Finally, BOX3D integrates a third layer that iteratively supervises the consistency of the results on the global map, using a point-to-voxel comparison to identify all points in the global map that belong to the object. Benchmarking results of the proposed novel architecture are showcased in multiple experimental trials on a public state-of-the-art large-scale dataset of urban environments.
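The second-layer pairing-and-merging step can be sketched as follows: per-scan boxes already in the world frame are greedily merged when their centroids nearly coincide. The centroid-distance rule, the threshold, and the axis-aligned box format are illustrative assumptions, not the paper's exact test.

```python
import math

# Sketch of BOX3D's spatial pairing/merging idea: unify 3D boxes of the
# same object seen from different viewpoints. Threshold is an assumption.

def centroid(box):
    (x0, y0, z0), (x1, y1, z1) = box
    return ((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)

def merge_boxes(boxes, thresh=0.5):
    """Greedily merge axis-aligned (mins, maxs) boxes with nearby centroids."""
    merged = []
    for mins, maxs in boxes:
        c = centroid((mins, maxs))
        for i, (kmins, kmaxs) in enumerate(merged):
            if math.dist(c, centroid((kmins, kmaxs))) < thresh:
                # Union of the two boxes keeps the object unique.
                merged[i] = (tuple(map(min, kmins, mins)),
                             tuple(map(max, kmaxs, maxs)))
                break
        else:
            merged.append((mins, maxs))
    return merged

# Two views of the same object plus one distant object: 3 boxes in, 2 out.
scans = [((0.0, 0.0, 0.0), (2.0, 1.0, 1.0)),
         ((0.1, 0.1, 0.0), (2.1, 1.1, 1.0)),
         ((9.0, 9.0, 0.0), (9.5, 9.5, 2.0))]
objects = merge_boxes(scans)
```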

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105327 (URN); 10.1109/MED61351.2024.10566236 (DOI); 2-s2.0-85198221796 (Scopus ID)
Conference
The 32nd Mediterranean Conference on Control and Automation (MED2024), Chania, Crete, Greece, June 11-14, 2024
Funder
Swedish Energy Agency; EU, Horizon Europe, 101091462 m4mining
Note

Funder: SP14 ‘Autonomous Drones for Underground Mining Operations’;

ISBN for host publication: 979-8-3503-9545-7; 979-8-3503-9544-0

Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2025-02-07. Bibliographically approved.
Bai, Y., Lindqvist, B., Nordström, S., Kanellakis, C. & Nikolakopoulos, G. (2024). Cluster-based Multi-Robot Task Assignment, Planning, and Control. International Journal of Control, Automation and Systems, 22(8), 2537-2550
2024 (English). In: International Journal of Control, Automation and Systems, ISSN 1598-6446, E-ISSN 2005-4092, Vol. 22, no 8, p. 2537-2550. Article in journal (Refereed). Published.
Abstract [en]

This paper presents a complete system architecture for multi-robot coordination with unbalanced task assignments, where a number of robots are supposed to visit and accomplish missions at different locations. The proposed method first groups the tasks into as many clusters as there are robots; the assignment is then done one-cluster-to-one-robot, followed by solving the traveling salesman problem (TSP) to determine the visiting order of tasks within each cluster. A nonlinear model predictive controller (NMPC) is designed for the robots to navigate to their assigned tasks while avoiding collisions with other robots. Several simulations are conducted to evaluate the feasibility of the proposed architecture. Video examples of the simulations can be viewed at https://youtu.be/5C7zTnv2sfo and https://youtu.be/-JtSg5V2fTI?si=7PfzZbleOOsRdzRd. In addition, we compare the cluster-based assignment with a simulated annealing (SA) algorithm, one of the typical solutions for the multiple traveling salesman problem (mTSP); the results reveal that, with a similar optimization effect, the cluster-based assignment demonstrates a notable reduction in computation time. This efficiency becomes increasingly pronounced as the task-to-agent ratio grows.
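The cluster-assign-TSP pipeline above can be sketched end to end. Here a tiny Lloyd-style k-means does the clustering, brute-force search over permutations stands in for the Hungarian algorithm, and a nearest-neighbor heuristic stands in for an exact TSP solve; all three are illustrative substitutes, not the paper's implementation.

```python
import math
from itertools import permutations

def kmeans(points, k, iters=20):
    """Tiny Lloyd-style k-means; one cluster per robot."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return clusters, centers

def assign(robots, centers):
    """One-cluster-to-one-robot, minimizing summed robot-to-centroid distance
    (brute force over permutations, standing in for Hungarian assignment)."""
    return list(min(permutations(range(len(centers))),
                    key=lambda perm: sum(math.dist(robots[i], centers[j])
                                         for i, j in enumerate(perm))))

def tsp_order(start, tasks):
    """Greedy nearest-neighbor visiting order within a cluster."""
    order, pos, left = [], start, list(tasks)
    while left:
        nxt = min(left, key=lambda t: math.dist(pos, t))
        order.append(nxt)
        left.remove(nxt)
        pos = nxt
    return order

# Hypothetical 2-robot, 4-task instance.
robots = [(0.0, 0.0), (10.0, 0.0)]
tasks = [(1.0, 1.0), (2.0, 0.0), (9.0, 1.0), (11.0, 0.0)]
clusters, centers = kmeans(tasks, k=2)
perm = assign(robots, centers)
routes = [tsp_order(robots[i], clusters[perm[i]]) for i in range(2)]
```

Each robot then tracks its route with the NMPC layer described in the abstract.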

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Autonomous robots, Hungarian algorithm, multi-robot systems, task assignment
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103890 (URN); 10.1007/s12555-023-0745-4 (DOI); 001282795800023 (); 2-s2.0-85200365780 (Scopus ID)
Projects
Autonomous Drones for Underground Mining Operations
Funder
Swedish Energy Agency; EU, Horizon 2020, 101003591
Note

Validated; 2024; Level 2; 2024-08-12 (hanlid);

Funder: LKAB

Available from: 2024-01-23. Created: 2024-01-23. Last updated: 2025-02-09. Bibliographically approved.
Calzolari, G., Sumathy, V., Kanellakis, C. & Nikolakopoulos, G. (2024). Decentralized Multi-Agent Reinforcement Learning Exploration with Inter-Agent Communication-Based Action Space. In: 2024 24th International Conference on Control, Automation and Systems (ICCAS). Paper presented at 24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024 (pp. 319-324). IEEE
2024 (English). In: 2024 24th International Conference on Control, Automation and Systems (ICCAS), IEEE, 2024, p. 319-324. Conference paper, Published paper (Refereed).
Abstract [en]

A new and challenging area of research in autonomous systems focuses on the collaborative multi-agent exploration of unknown environments where a reliable communication infrastructure among the robotic platforms is absent. Factors like the proximity between agents, the characteristics of the network nodes, and environmental conditions can significantly impact data transmission in real-world applications. We present a novel decentralized collaborative architecture based on multi-agent reinforcement learning to address this challenge. In this framework, homogeneous agents autonomously decide whether to communicate, that is, to share locally collected maps with other agents in the same communication network, or to navigate and explore the environment further. The agents' policies are trained using the heterogeneous-agent proximal policy optimization (HAPPO) algorithm and a novel reward function that balances inter-agent communication and exploratory behaviors. The proposed architecture enhances mapping efficiency and robustness while minimizing redundant inter-agent data transmission. Finally, this paper demonstrates the advantages of the investigated approach compared to a strategy that does not incentivize communicative behaviors.
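The communicate-or-explore trade-off above hinges on reward shaping. The sketch below illustrates one plausible shaping in that spirit; the specific terms and weights are assumptions, not the trained HAPPO rewards from the paper.

```python
# Illustrative reward shaping for the communicate-or-explore trade-off.
# Terms and weights are assumptions, not the paper's reward function.

def step_reward(new_cells, shared_cells, communicated, collision,
                w_explore=1.0, w_comm=0.5, w_collide=10.0):
    """Reward = exploration gain + value of map content shared on contact,
    minus a collision penalty."""
    r = w_explore * new_cells
    if communicated:
        r += w_comm * shared_cells   # reward only map content new to the peer
    if collision:
        r -= w_collide
    return r

# Exploring 12 unseen cells with no contact vs. sharing 30 cells on contact.
r_explore = step_reward(new_cells=12, shared_cells=0,
                        communicated=False, collision=False)
r_share = step_reward(new_cells=0, shared_cells=30,
                      communicated=True, collision=False)
```

Tuning `w_comm` against `w_explore` shifts the learned policy between exploratory and communicative behavior, which is the balance the abstract describes.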

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Collaborative Multi-Agent Exploration, Action Space with Communication, Communication-Constrained Environments, HAPPO
National Category
Robotics and automation; Computer Sciences; Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-110754 (URN); 10.23919/ICCAS63016.2024.10773383 (DOI); 2-s2.0-85214352354 (Scopus ID)
Conference
24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 3750102; EU, Horizon Europe, 375224, 101119774 SPEAR
Note

ISBN for host publication: 978-89-93215-38-0;

Available from: 2024-11-18. Created: 2024-11-18. Last updated: 2025-02-05. Bibliographically approved.
Calzolari, G., Sumathy, V., Kanellakis, C. & Nikolakopoulos, G. (2024). D-MARL: A Dynamic Communication-Based Action Space Enhancement for Multi Agent Reinforcement Learning Exploration of Large Scale Unknown Environments. In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE, October 14-18, 2024 (pp. 3470-3475). IEEE
2024 (English). In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2024, p. 3470-3475. Conference paper, Published paper (Refereed).
Abstract [en]

In this article, we propose a novel communication-based action space enhancement for the D-MARL exploration algorithm to improve the efficiency of mapping an unknown environment, represented by an occupancy grid map. In general, communication between autonomous systems is crucial when exploring large and unstructured environments. In such real-world scenarios, data transmission is limited and relies heavily on inter-agent proximity and the attributes of the autonomous platforms. In the proposed approach, each agent's policy is optimized using the heterogeneous-agent proximal policy optimization algorithm to autonomously choose whether to communicate or to explore the environment. To accomplish this, multiple novel reward functions are formulated by integrating inter-agent communication and exploration. The investigated approach aims to increase efficiency and robustness in the mapping process, minimize exploration overlap, and prevent agent collisions. The D-MARL policies trained on different reward functions have been compared to understand the effect of the different reward terms on the collaborative behavior of the homogeneous agents. Finally, multiple simulation results are provided to prove the efficacy of the proposed scheme.

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Robotics and automation; Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-110753 (URN); 10.1109/IROS58592.2024.10801319 (DOI); 2-s2.0-85215988132 (Scopus ID)
Conference
The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE, October 14-18, 2024
Funder
EU, Horizon Europe, 375224, 101119774 SPEAR; Wallenberg AI, Autonomous Systems and Software Program (WASP), 3750102
Note

ISBN for host publication: 979-8-3503-7770-5

Available from: 2024-11-18. Created: 2024-11-18. Last updated: 2025-02-17. Bibliographically approved.
Stathoulopoulos, N., Sumathy, V., Kanellakis, C. & Nikolakopoulos, G. (2024). Does Sample Space Matter? Preliminary Results on Keyframe Sampling Optimization for LiDAR-based Place Recognition. Paper presented at Standing the Test of Time Workshop: Retrospective and Future of World Representations for Lifelong Robotics at 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, United Arab Emirates, October 14-15, 2024.
2024 (English). Conference paper, Published paper (Other academic).
National Category
Robotics and automation; Computer Sciences; Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-110585 (URN)
Conference
Standing the Test of Time Workshop: Retrospective and Future of World Representations for Lifelong Robotics at 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, United Arab Emirates, October 14-15, 2024
Available from: 2024-10-29. Created: 2024-10-29. Last updated: 2025-02-05. Bibliographically approved.
Patel, A., Fredriksson, S., Nordström, S., Pagliari, E., Tevetzidis, I., Kanellakis, C. & Nikolakopoulos, G. (2024). DRONECOM: Dynamic Relay Operations for Network Efficient Communication in Mines. In: 2024 24th International Conference on Control, Automation and Systems (ICCAS). Paper presented at 24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024 (pp. 1206-1211). IEEE
2024 (English). In: 2024 24th International Conference on Control, Automation and Systems (ICCAS), IEEE, 2024, p. 1206-1211. Conference paper, Published paper (Refereed).
Abstract [en]

As routine mining operations become increasingly automated, it is crucial to develop autonomous solutions that are independent of fixed infrastructure. Achieving this is challenging due to the ever-changing landscape of mines, which complicates infrastructure development. In response, this paper introduces a robust framework employing drones to gather data from hard-to-access areas in mines and deliver it back to the base station for routine monitoring purposes. These tasks include gathering data from operational vehicles (mine trucks, loaders, etc.) and from various sensors (e.g., monitoring rock bolts), and relaying the data to the mine's base station. The proposed framework is based on autonomous navigation using a known point cloud map of the mine and proximity detection via Ultra-WideBand (UWB) radios, while data transfer is accomplished through the IEEE 802.15.4 communication standard, operating in the 868 MHz ISM band, with the aim of guaranteeing long-range operation. On the mission level, the drones act as data mules capable of autonomously extracting data from operating vehicles, storing the data onboard, and eventually delivering it to the base station, enabled through a Point and Click (PAC) autonomy framework based on global planning, reactive navigation, communication-link management, and behavior management. The efficacy of this framework has been demonstrated through real-world experiments conducted at a test mine in Sweden, validating the overall architecture of the proposed solution.

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Data Link Extender, Subterranean Environment, Mining Automation
National Category
Communication Systems; Embedded Systems
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111327 (URN); 10.23919/ICCAS63016.2024.10773079 (DOI); 2-s2.0-85214367921 (Scopus ID)
Conference
24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024
Funder
EU, Horizon 2020
Note

ISBN for host publication: 978-89-93215-38-0;

Available from: 2025-01-20. Created: 2025-01-20. Last updated: 2025-01-20. Bibliographically approved.
Saucedo, M. A. V., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2024). EAT: Environment Agnostic Traversability for reactive navigation. Expert systems with applications, 244, Article ID 122919.
2024 (English). In: Expert Systems with Applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 244, article id 122919. Article in journal (Refereed). Published.
Abstract [en]

This work presents EAT (Environment Agnostic Traversability for Reactive Navigation), a novel framework for traversability estimation in indoor, outdoor, subterranean (SubT), and other unstructured environments. The architecture provides online updates on traversable regions during the mission and adapts to varying environments, while being robust to noisy semantic image segmentation. The proposed framework performs terrain prioritization based on a novel decay exponential function to fuse the semantic information and geometric features extracted from RGB-D images and obtain the traversability of the scene. Moreover, EAT introduces an obstacle inflation mechanism on the traversability image, based on a mean-window weighting module, allowing it to adapt the proximity to untraversable regions. The overall architecture uses two LRASPP MobileNet V3 Large Convolutional Neural Networks (CNNs) for semantic segmentation over RGB images, where the first classifies the terrain types and the second classifies see-through obstacles in the scene. Additionally, the geometric features profile the underlying surface properties of the local scene by extracting normals from depth images. The proposed scheme was integrated with a control architecture in reactive navigation scenarios and was experimentally validated in indoor, outdoor, and subterranean environments with a Pioneer 3AT mobile robot.
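The semantic-geometric fusion above can be illustrated per pixel: an exponential decay over a semantic terrain rank is combined with a slope cue from the surface normal. The terrain classes, decay rate, and tilt threshold below are hypothetical stand-ins for EAT's learned segmentation and tuned parameters.

```python
import math

# Sketch of decay-exponential terrain prioritization fused with a geometric
# slope cue. Class ranks, decay rate, and tilt threshold are assumptions.

PRIORITY_RANK = {"paved": 0, "gravel": 1, "grass": 2, "mud": 3}  # hypothetical

def traversability(terrain, surface_normal_z, decay=0.7, min_normal_z=0.8):
    """Per-pixel score in [0, 1]; 0 means untraversable."""
    if surface_normal_z < min_normal_z:    # surface too steep for the platform
        return 0.0
    rank = PRIORITY_RANK.get(terrain, len(PRIORITY_RANK))
    semantic = math.exp(-decay * rank)     # decay-exponential prioritization
    return semantic * surface_normal_z     # fuse with geometric (slope) cue

flat_paved = traversability("paved", surface_normal_z=1.0)
flat_mud = traversability("mud", surface_normal_z=1.0)
steep_paved = traversability("paved", surface_normal_z=0.5)
```

Preferred flat terrain scores highest, low-priority terrain decays exponentially, and steep surfaces are vetoed regardless of their semantic class.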

Place, publisher, year, edition, pages
Elsevier Ltd, 2024
Keywords
Navigation in unstructured environments, Traversability estimation with RGB-D data, Traversability guided reactive navigation, Vision based autonomous systems
National Category
Computer graphics and computer vision; Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103739 (URN); 10.1016/j.eswa.2023.122919 (DOI); 001144940800001 (); 2-s2.0-85180941472 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-02-12 (joosat);

Funder: European Unions Horizon 2020 Research and Innovation Programme (101003591 NEXGEN-SIMS);

Full text license: CC BY

Available from: 2024-01-16. Created: 2024-01-16. Last updated: 2025-02-05. Bibliographically approved.
Saucedo, M. A. V., Stathoulopoulos, N., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Leveraging Computation of Expectation Models for Commonsense Affordance Estimation on 3D Scene Graphs. In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE, October 14-18, 2024 (pp. 9797-9802). IEEE
2024 (English). In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2024, p. 9797-9802. Conference paper, Published paper (Refereed).
Abstract [en]

This article studies the commonsense object affordance concept for enabling close-to-human task planning and task optimization of embodied robotic agents in urban environments. The focus of object affordance is on reasoning about how to effectively identify an object's inherent utility during task execution, which in this work is enabled through the analysis of contextual relations in the sparse information of 3D scene graphs. The proposed framework develops a Computation of Expectation based on Correlation Information (CECI) model to learn probability distributions using a Graph Convolutional Network, allowing the extraction of commonsense affordance for individual members of a semantic class. The overall framework was experimentally validated in a real-world indoor environment, showcasing the ability of the method to match human commonsense. For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/BDCMVx2GiQE

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111636 (URN); 10.1109/IROS58592.2024.10802560 (DOI); 2-s2.0-85216468381 (Scopus ID)
Conference
The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE, October 14-18, 2024
Funder
EU, Horizon Europe, 101119774 SPEAR
Note

ISBN for host publication: 979-8-3503-7770-5

Available from: 2025-02-17. Created: 2025-02-17. Last updated: 2025-02-17. Bibliographically approved.