Nikolakopoulos, George (ORCID: orcid.org/0000-0003-0126-1897)
Publications (10 of 292)
Stathoulopoulos, N., Koval, A. & Nikolakopoulos, G. (2024). 3DEG: Data-Driven Descriptor Extraction for Global re-localization in subterranean environments. Expert systems with applications, 237(part B), Article ID 121508.
3DEG: Data-Driven Descriptor Extraction for Global re-localization in subterranean environments
2024 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 237, no. part B, article id 121508. Article in journal (Refereed). Published.
Abstract [en]

Localization algorithms that rely on 3D LiDAR scanners often encounter temporary failures due to various factors, such as sensor faults, dust particles, or drifting. These failures can result in a misalignment between the robot’s estimated pose and its actual position in the global map. To address this issue, the process of global re-localization becomes essential, as it involves accurately estimating the robot’s current pose within the given map. In this article, we propose a novel global re-localization framework that addresses the limitations of current algorithms heavily reliant on scan matching and direct point cloud feature extraction. Unlike most methods, our framework eliminates the need for an initial guess and provides multiple top-k candidates for selection, enhancing robustness and flexibility. Furthermore, we introduce an event-based re-localization trigger module, enabling autonomous robotic missions. Focusing on subterranean environments with low features, we leverage range image descriptors derived from 3D LiDAR scans to preserve depth information. Our approach enhances a state-of-the-art data-driven descriptor extraction framework for place recognition and orientation regression by incorporating a junction detection module that utilizes the descriptors for classification purposes. The effectiveness of the proposed approach was evaluated across three distinct real-life subterranean environments.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Global re-localization, Sparse 3D LiDAR scans, Deep learning, Subterranean
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-101413 (URN)
10.1016/j.eswa.2023.121508 (DOI)
001081895400001 ()
2-s2.0-85171330587 (Scopus ID)
Funder
EU, Horizon 2020, No. 869379 illuMINEation, No. 101003591 NEXGEN-SIMS
Note

Validated; 2023; Level 2; 2023-09-22 (joosat)

Full text license: CC BY 4.0

Available from: 2023-09-22 Created: 2023-09-22 Last updated: 2024-03-07. Bibliographically approved.
Stathoulopoulos, N., Koval, A. & Nikolakopoulos, G. (2024). A Comparative Field Study of Global Pose Estimation Algorithms in Subterranean Environments. International Journal of Control, Automation and Systems, 22(2), 690-704
A Comparative Field Study of Global Pose Estimation Algorithms in Subterranean Environments
2024 (English). In: International Journal of Control, Automation and Systems, ISSN 1598-6446, E-ISSN 2005-4092, Vol. 22, no. 2, p. 690-704. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-104216 (URN)
10.1007/s12555-023-0026-2 (DOI)
001155974000016 ()
2-s2.0-85183702313 (Scopus ID)
Funder
EU, Horizon 2020, 101003591
Note

Validated; 2024; Level 2; 2024-02-07 (joosat)

Available from: 2024-02-07 Created: 2024-02-07 Last updated: 2024-03-12. Bibliographically approved.
Stamatopoulos, M.-N., Banerjee, A. & Nikolakopoulos, G. (2024). A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing. Journal of Intelligent and Robotic Systems, 110(2), Article ID 53.
A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing
2024 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 110, no. 2, article id 53. Article in journal (Refereed). Published.
Abstract [en]

Aerial 3D printing is a pioneering technology, still in its conceptual stage, that combines the frontiers of 3D printing and unmanned aerial vehicles (UAVs), aiming to autonomously construct large-scale structures in remote and hard-to-reach locations. The envisioned technology will enable a paradigm shift in the construction and manufacturing industries by utilizing UAVs as precision flying construction workers. However, the limited payload-carrying capacity of UAVs, along with the intricate dexterity required for manipulation and planning, imposes a formidable barrier to overcome. Aiming to surpass these issues, a novel decomposition-based and scheduling framework for aerial 3D printing is presented in this article, which considers a near-optimal decomposition of the original 3D shape of the model into smaller, more manageable sub-parts called chunks. This is achieved by searching for planar cuts based on a heuristic function that incorporates the necessary constraints on the interconnectivity between sub-parts, while avoiding any possibility of collision between the UAV’s extruder and the generated chunks. Additionally, an autonomous task-allocation framework is presented, which determines a priority-based sequence for assigning each printable chunk to a UAV for manufacturing. The efficacy of the proposed framework is demonstrated using the physics-based Gazebo simulation engine, where various primitive CAD-based aerial 3D constructions are established, accounting for the nonlinear UAV dynamics, the associated motion planning, and reactive navigation through model predictive control.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Aerial 3D printing, Mesh decomposition, Robotic construction
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-104930 (URN)
10.1007/s10846-024-02081-8 (DOI)
2-s2.0-85188520998 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-04-02 (marisr)

Full text license: CC BY

Available from: 2024-04-02 Created: 2024-04-02 Last updated: 2024-04-02. Bibliographically approved.
Seisa, A. S., Lindqvist, B., Satpute, S. G. & Nikolakopoulos, G. (2024). An Edge Architecture for Enabling Autonomous Aerial Navigation with Embedded Collision Avoidance Through Remote Nonlinear Model Predictive Control. Journal of Parallel and Distributed Computing, 188, Article ID 104849.
An Edge Architecture for Enabling Autonomous Aerial Navigation with Embedded Collision Avoidance Through Remote Nonlinear Model Predictive Control
2024 (English). In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 188, article id 104849. Article in journal (Refereed). Published.
Abstract [en]

In this article, we present an edge-based architecture for enhancing the autonomous capabilities of resource-constrained aerial robots by enabling a remote nonlinear model predictive control (NMPC) scheme, which can be too computationally heavy to run on the aerial robots' onboard processors. The NMPC is used to control the trajectory of an unmanned aerial vehicle while detecting and preventing potential collisions. The proposed edge architecture enables trajectory recalculation for resource-constrained unmanned aerial vehicles in near real-time, which allows them to exhibit fully autonomous behaviors. The architecture is implemented with a remote Kubernetes cluster on the edge side and is evaluated on an unmanned aerial vehicle as our controllable robot, while the Robot Operating System (ROS) is used for managing the source code and the overall communication. With the utilization of edge computing and the architecture presented in this work, we can overcome the computational limitations of resource-constrained robots and provide or improve features that are essential for autonomous missions. At the same time, we can minimize the relative travel-time delays for time-critical missions over the edge, in comparison to the cloud. We investigate the validity of this hypothesis by evaluating the system's behavior through a series of experiments that utilize either the unmanned aerial vehicle or the edge resources for the collision-avoidance mission.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Edge computing, Kubernetes, Robotics, Nonlinear model predictive control (NMPC)
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-101399 (URN)
10.1016/j.jpdc.2024.104849 (DOI)
001182184100001 ()
2-s2.0-85184517086 (Scopus ID)
Funder
EU, Horizon 2020
Note

Validated; 2024; Level 2; 2024-04-04 (signyg)

Full text license: CC BY

Available from: 2023-09-20 Created: 2023-09-20 Last updated: 2024-04-04. Bibliographically approved.
Perez-Cerrolaza, J., Abella, J., Borg, M., Donzella, C., Cerquides, J., Cazorla, F. J., . . . Flores, J. L. (2024). Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey. ACM Computing Surveys, 56(7), Article ID 176.
Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey
2024 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 56, no. 7, article id 176. Article in journal (Refereed). Published.
Abstract [en]

Artificial Intelligence (AI) can enable the development of next-generation autonomous safety-critical systems in which Machine Learning (ML) algorithms learn optimized and safe solutions. AI can also support and assist human safety engineers in developing safety-critical systems. However, reconciling both cutting-edge and state-of-the-art AI technology with safety engineering processes and safety standards is an open challenge that must be addressed before AI can be fully embraced in safety-critical systems. Many works already address this challenge, resulting in a vast and fragmented literature. Focusing on the industrial and transportation domains, this survey structures and analyzes challenges, techniques, and methods for developing AI-based safety-critical systems, from traditional functional safety systems to autonomous systems. AI trustworthiness spans several dimensions, such as engineering, ethics and legal, and this survey focuses on the safety engineering dimension.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2024
National Category
Embedded Systems; Computer Systems
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105483 (URN)
10.1145/3626314 (DOI)
2-s2.0-85191063705 (Scopus ID)
Note

Full text license: CC BY

Available from: 2024-05-15 Created: 2024-05-15 Last updated: 2024-05-15
Saucedo, M. A., Patel, A., Saradagi, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation. Paper presented at The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024. IEEE
Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation
2024 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this article, we propose the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. We propose a graph-based learning methodology for the computation of belief (also referred to as expectation) on any given 3D scene graph, which is then used to strategically add new nodes (referred to as blind nodes) that are relevant to a robotic mission. We propose the method of Computation of Expectation based on Correlation Information (CECI) to reasonably approximate the real belief/expectation by learning histograms from available training data. A novel Graph Convolutional Network (GCN) model is developed to learn CECI from a repository of 3D scene graphs. As no database of 3D scene graphs exists for training the novel CECI model, we present a novel methodology for generating a 3D scene graph dataset based on semantically annotated real-life 3D spaces. The generated dataset is then utilized to train the proposed CECI model and for extensive validation of the proposed method. We establish the novel concept of Belief Scene Graphs (BSG) as a core component for integrating expectations into abstract representations. This new concept is an evolution of the classical 3D scene graph concept and aims to enable high-level reasoning for task planning and the optimization of a variety of robotic missions. The efficacy of the overall framework has been evaluated in an object-search scenario and has also been tested in a real-life experiment to emulate the human common sense of unseen objects.

For a video of the article, showcasing the experimental demonstration, see: https://youtu.be/hsGlSCa12iY

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105326 (URN)
Conference
The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024
Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2024-05-07
Saucedo, M. A., Stathoulopoulos, N., Mololoth, V., Kanellakis, C. & Nikolakopoulos, G. (2024). BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization. Paper presented at The 32nd Mediterranean Conference on Control and Automation (MED2024), Chania, Crete, Greece, June 11-14, 2024. IEEE
BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization
2024 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Object detection and global localization play a crucial role in robotics, spanning a great spectrum of applications from autonomous cars to multi-layered 3D scene graphs for semantic scene understanding. This article proposes BOX3D, a novel multi-modal and lightweight scheme for localizing objects of interest by fusing information from an RGB camera and a 3D LiDAR. BOX3D is structured around a three-layered architecture, building up from the local perception of the incoming sequential sensor data to a global perception refinement that accounts for outliers and the overall consistency of each object's observations. More specifically, the first layer handles the low-level fusion of camera and LiDAR data for initial 3D bounding-box extraction. The second layer converts the 3D bounding boxes of each LiDAR scan to the world coordinate frame and applies a spatial pairing and merging mechanism to maintain the uniqueness of objects observed from different viewpoints. Finally, BOX3D integrates a third layer that iteratively supervises the consistency of the results on the global map, using a point-to-voxel comparison to identify all points in the global map that belong to the object. Benchmarking results of the proposed novel architecture are showcased in multiple experimental trials on a public state-of-the-art large-scale dataset of urban environments.

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105327 (URN)
Conference
The 32nd Mediterranean Conference on Control and Automation (MED2024) - Chania, Crete, Greece, June 11-14, 2024
Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2024-05-07
Bai, Y., Lindqvist, B., Karlsson, S., Kanellakis, C. & Nikolakopoulos, G. (2024). Cluster-based Multi-Robot Task Assignment, Planning, and Control.
Cluster-based Multi-Robot Task Assignment, Planning, and Control
2024 (English). Article in journal (Other academic). Submitted.
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103890 (URN)
Available from: 2024-01-23 Created: 2024-01-23 Last updated: 2024-01-24
Damigos, G., Stathoulopoulos, N., Koval, A., Lindgren, T. & Nikolakopoulos, G. (2024). Communication-Aware Control of Large Data Transmissions via Centralized Cognition and 5G Networks for Multi-Robot Map merging. Journal of Intelligent and Robotic Systems, 110(1), Article ID 22.
Communication-Aware Control of Large Data Transmissions via Centralized Cognition and 5G Networks for Multi-Robot Map merging
2024 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 110, no. 1, article id 22. Article in journal (Refereed). Published.
Abstract [en]

Multiple modern robotic applications benefit from centralized cognition and processing schemes. However, modern, well-equipped robotic platforms can output large amounts of data, which may exceed the capabilities of modern wireless communication systems if all data are transmitted without further consideration. This research presents a multi-agent, centralized, and real-time 3D point cloud map-merging scheme for ceaselessly connected robotic agents. Centralized architectures enable mission awareness for all agents at all times, making tasks such as search and rescue more effective. The centralized component is placed on an edge server, ensuring low communication latency, while all agents access the server through a fifth-generation (5G) network. In addition, the proposed solution introduces a communication-aware control function that regulates the transmission of map instances to prevent significant data congestion and communication latencies, and to address conditions in which the robotic agents traverse areas with limited to no coverage. The presented framework is agnostic to the localization and mapping procedure used, while utilizing the full power of an edge server. Finally, the efficiency of the established framework is experimentally validated in multiple scenarios.

Place, publisher, year, edition, pages
Springer, 2024
Keywords
5G, Edge, Map merging, Multi-agent
National Category
Communication Systems
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103941 (URN)
10.1007/s10846-023-02045-4 (DOI)
001148732200002 ()
2-s2.0-85182956096 (Scopus ID)
Funder
EU, Horizon 2020, 953454
Note

Validated; 2024; Level 2; 2024-01-26 (joosat)

Full text license: CC BY 4.0

Available from: 2024-01-26 Created: 2024-01-26 Last updated: 2024-04-15. Bibliographically approved.
Stamatopoulos, M.-N., Banerjee, A. & Nikolakopoulos, G. (2024). Conflict-free optimal motion planning for parallel aerial 3D printing using multiple UAVs. Expert systems with applications, 246, Article ID 123201.
Conflict-free optimal motion planning for parallel aerial 3D printing using multiple UAVs
2024 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 246, article id 123201. Article in journal (Refereed). Published.
Abstract [en]

This article introduces a novel collaborative optimal motion-planning framework for parallel aerial 3D printing. The proposed framework is capable of efficiently handling conflicts between the utilized Unmanned Aerial Vehicles (UAVs) as they follow predefined paths, allowing for a seamless enhancement of aerial 3D printing capabilities by employing multiple UAVs that collaborate in a parallel printing process. The established approach formulates the UAVs' motion planning as a multi-constraint optimization problem, ensuring minimal adjustments to their velocities within specified limits. This guarantees smooth and uninterrupted printing while preventing collisions and adhering to the requirements of aerial printing. To substantiate the effectiveness of the proposed motion-planning algorithm, an extensive array of simulation studies has been undertaken, encompassing scenarios in which multiple UAVs engage in the fabrication of diverse construction shapes. The overall novel concept is extensively validated in simulations, while the obtained results show promise for enhancing the viability and advancing the landscape of aerial additive manufacturing.

Keywords
Aerial 3D printing, Parallel printing, Conflict resolution, Multi-agent, Robotics
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103866 (URN)
10.1016/j.eswa.2024.123201 (DOI)
001164176400001 ()
2-s2.0-85182503125 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-01-22 (signyg)

Full text license: CC BY 4.0

Available from: 2024-01-23 Created: 2024-01-23 Last updated: 2024-04-04. Bibliographically approved.