Nikolakopoulos, George (ORCID: orcid.org/0000-0003-0126-1897)
Publications (10 of 292)
Stathoulopoulos, N., Koval, A. & Nikolakopoulos, G. (2024). 3DEG: Data-Driven Descriptor Extraction for Global re-localization in subterranean environments. Expert systems with applications, 237(part B), Article ID 121508.
3DEG: Data-Driven Descriptor Extraction for Global re-localization in subterranean environments
2024 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 237, part B, article id 121508. Journal article (peer-reviewed), published.
Abstract [en]

Localization algorithms that rely on 3D LiDAR scanners often encounter temporary failures due to various factors, such as sensor faults, dust particles, or drifting. These failures can result in a misalignment between the robot’s estimated pose and its actual position in the global map. To address this issue, the process of global re-localization becomes essential, as it involves accurately estimating the robot’s current pose within the given map. In this article, we propose a novel global re-localization framework that addresses the limitations of current algorithms heavily reliant on scan matching and direct point cloud feature extraction. Unlike most methods, our framework eliminates the need for an initial guess and provides multiple top-K candidates for selection, enhancing robustness and flexibility. Furthermore, we introduce an event-based re-localization trigger module, enabling autonomous robotic missions. Focusing on subterranean environments with low features, we leverage range image descriptors derived from 3D LiDAR scans to preserve depth information. Our approach enhances a state-of-the-art data-driven descriptor extraction framework for place recognition and orientation regression by incorporating a junction detection module that utilizes the descriptors for classification purposes. The effectiveness of the proposed approach was evaluated across three distinct real-life subterranean environments.
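The range-image representation mentioned in the abstract can be illustrated with a minimal sketch: each 3D LiDAR point is binned by azimuth and elevation, and the pixel stores the range of the nearest return, so depth information is preserved. This is an illustrative projection only, not the authors' 3DEG pipeline; the sensor parameters (`h`, `w`, `v_fov`) are assumed values.

```python
import numpy as np

def pointcloud_to_range_image(points, h=16, w=360, v_fov=(-15.0, 15.0)):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image.

    Each pixel stores the range of the closest point falling into that
    azimuth/elevation bin; empty bins stay at 0.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                               # [-pi, pi]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))

    col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    v_min, v_max = v_fov
    row = (v_max - elevation) / (v_max - v_min) * (h - 1)
    row = np.clip(row.round().astype(int), 0, h - 1)

    image = np.zeros((h, w), dtype=np.float32)
    # keep only the nearest return per pixel
    for ri, ci, ra in zip(row, col, r):
        if image[ri, ci] == 0 or ra < image[ri, ci]:
            image[ri, ci] = ra
    return image
```

Descriptors for place recognition would then be extracted from such images by the learned network described in the article.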

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Global re-localization, Sparse 3D LiDAR scans, Deep learning, Subterranean
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-101413 (URN); 10.1016/j.eswa.2023.121508 (DOI); 001081895400001 (); 2-s2.0-85171330587 (Scopus ID)
Research funder
EU, Horizon 2020, No. 869379 illuMINEation, No. 101003591 NEXGEN-SIMS
Note

Validated; 2023; Level 2; 2023-09-22 (joosat)

CC BY 4.0 License

Available from: 2023-09-22. Created: 2023-09-22. Last updated: 2024-03-07. Bibliographically approved.
Stathoulopoulos, N., Koval, A. & Nikolakopoulos, G. (2024). A Comparative Field Study of Global Pose Estimation Algorithms in Subterranean Environments. International Journal of Control, Automation and Systems, 22(2), 690-704
A Comparative Field Study of Global Pose Estimation Algorithms in Subterranean Environments
2024 (English). In: International Journal of Control, Automation and Systems, ISSN 1598-6446, E-ISSN 2005-4092, Vol. 22, no. 2, pp. 690-704. Journal article (peer-reviewed), published.
Place, publisher, year, edition, pages
Springer Nature, 2024
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-104216 (URN); 10.1007/s12555-023-0026-2 (DOI); 001155974000016 (); 2-s2.0-85183702313 (Scopus ID)
Research funder
EU, Horizon 2020, 101003591
Note

Validated; 2024; Level 2; 2024-02-07 (joosat)

Available from: 2024-02-07. Created: 2024-02-07. Last updated: 2024-03-12. Bibliographically approved.
Stamatopoulos, M.-N., Banerjee, A. & Nikolakopoulos, G. (2024). A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing. Journal of Intelligent and Robotic Systems, 110(2), Article ID 53.
A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing
2024 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 110, no. 2, article id 53. Journal article (peer-reviewed), published.
Abstract [en]

Aerial 3D printing is a pioneering technology, still at its conceptual stage, that combines the frontiers of 3D printing and Unmanned Aerial Vehicles (UAVs), aiming to construct large-scale structures in remote and hard-to-reach locations autonomously. The envisioned technology will enable a paradigm shift in the construction and manufacturing industries by utilizing UAVs as precision flying construction workers. However, the limited payload-carrying capacity of UAVs, along with the intricate dexterity required for manipulation and planning, imposes a formidable barrier to overcome. Aiming to surpass these issues, a novel aerial decomposition-based and scheduling 3D printing framework is presented in this article, which considers a near-optimal decomposition of the original 3D shape of the model into smaller, more manageable sub-parts called chunks. This is achieved by searching for planar cuts based on a heuristic function incorporating the necessary constraints associated with the interconnectivity between sub-parts, while avoiding any possibility of collision between the UAV’s extruder and the generated chunks. Additionally, an autonomous task allocation framework is presented, which determines a priority-based sequence to assign each printable chunk to a UAV for manufacturing. The efficacy of the proposed framework is demonstrated using the physics-based Gazebo simulation engine, where various primitive CAD-based aerial 3D constructions are established, accounting for the nonlinear UAV dynamics and the associated motion planning and reactive navigation through Model Predictive Control.
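The priority-based assignment of printable chunks to UAVs described above can be sketched as a simple greedy scheduler. This is a minimal stand-in, not the paper's algorithm: the chunk durations, the dependency sets induced by the interconnectivity constraints, and the earliest-free-UAV rule are all assumptions for illustration.

```python
def schedule_chunks(durations, deps, num_uavs):
    """Greedily assign printable chunks to UAVs.

    durations: {chunk: print_time}. deps: {chunk: set of prerequisite
    chunks that must be printed first}. A chunk becomes printable once
    all its prerequisites are finished; it is then given to the UAV
    that frees up earliest. Returns {chunk: (uav_id, start_time)}.
    """
    finish = {}                        # chunk -> completion time
    uav_free = [0.0] * num_uavs        # next free time per UAV
    plan = {}
    remaining = set(durations)
    while remaining:
        # chunks whose prerequisites have all been printed
        ready = sorted(c for c in remaining
                       if deps.get(c, set()) <= set(finish))
        if not ready:
            raise ValueError("cyclic dependency between chunks")
        for c in ready:
            uav = min(range(num_uavs), key=lambda i: uav_free[i])
            dep_done = max((finish[d] for d in deps.get(c, set())),
                           default=0.0)
            start = max(uav_free[uav], dep_done)
            finish[c] = start + durations[c]
            uav_free[uav] = finish[c]
            plan[c] = (uav, start)
        remaining -= set(ready)
    return plan
```

Greedy list scheduling of this kind is not optimal in general, which is why the article searches for a near-optimal decomposition and sequence instead.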

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Aerial 3D printing, Mesh decomposition, Robotic construction
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-104930 (URN); 10.1007/s10846-024-02081-8 (DOI); 2-s2.0-85188520998 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-04-02 (marisr)

Full text license: CC BY

Available from: 2024-04-02. Created: 2024-04-02. Last updated: 2024-04-02. Bibliographically approved.
Seisa, A. S., Lindqvist, B., Satpute, S. G. & Nikolakopoulos, G. (2024). An Edge Architecture for Enabling Autonomous Aerial Navigation with Embedded Collision Avoidance Through Remote Nonlinear Model Predictive Control. Journal of Parallel and Distributed Computing, 188, Article ID 104849.
An Edge Architecture for Enabling Autonomous Aerial Navigation with Embedded Collision Avoidance Through Remote Nonlinear Model Predictive Control
2024 (English). In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 188, article id 104849. Journal article (peer-reviewed), published.
Abstract [en]

In this article, we present an edge-based architecture for enhancing the autonomous capabilities of resource-constrained aerial robots by enabling a remote nonlinear model predictive control (NMPC) scheme, which can be too computationally heavy to run on the aerial robots' onboard processors. The NMPC is used to control the trajectory of an unmanned aerial vehicle while detecting and preventing potential collisions. The proposed edge architecture enables near real-time trajectory recalculation for resource-constrained unmanned aerial vehicles, allowing them to exhibit fully autonomous behaviors. The architecture is implemented with a remote Kubernetes cluster on the edge side and evaluated on an unmanned aerial vehicle as our controllable robot, while the Robot Operating System (ROS) is used for managing the source code and the overall communication. With the utilization of edge computing and the architecture presented in this work, we can overcome the computational limitations of resource-constrained robots and provide or improve features that are essential for autonomous missions. At the same time, we can minimize the relative travel-time delays for time-critical missions over the edge, in comparison to the cloud. We investigate the validity of this hypothesis by evaluating the system's behavior through a series of experiments utilizing either the unmanned aerial vehicle or the edge resources for the collision avoidance mission.
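The core trade-off in the abstract, offloading the heavy NMPC solve to the edge while keeping the robot safe when the round trip exceeds a time-critical deadline, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the latency budget and the hover fallback are assumed values.

```python
import time

LATENCY_BUDGET = 0.05   # seconds; assumed control deadline

def hover_command():
    """Conservative onboard fallback: hold position."""
    return {"vx": 0.0, "vy": 0.0, "vz": 0.0}

def get_control(remote_solver, state, budget=LATENCY_BUDGET):
    """Query the (remote) NMPC solver; fall back locally on a miss.

    remote_solver(state) stands in for the round trip to the edge
    cluster. If it takes longer than the budget, the stale result is
    discarded and a safe local command is used instead. Returns
    (command, used_edge).
    """
    t0 = time.monotonic()
    command = remote_solver(state)
    if time.monotonic() - t0 > budget:
        return hover_command(), False   # deadline missed -> fallback
    return command, True                # edge result used in time
```

A real deployment would issue the request asynchronously and keep flying on the last valid plan, but the deadline/fallback logic is the same.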

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Edge computing, Kubernetes, Robotics, Nonlinear model predictive control (NMPC)
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-101399 (URN); 10.1016/j.jpdc.2024.104849 (DOI); 001182184100001 (); 2-s2.0-85184517086 (Scopus ID)
Research funder
EU, Horizon 2020
Note

Validated; 2024; Level 2; 2024-04-04 (signyg)

Full text license: CC BY

Available from: 2023-09-20. Created: 2023-09-20. Last updated: 2024-04-04. Bibliographically approved.
Perez-Cerrolaza, J., Abella, J., Borg, M., Donzella, C., Cerquides, J., Cazorla, F. J., . . . Flores, J. L. (2024). Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey. ACM Computing Surveys, 56(7), Article ID 176.
Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey
2024 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 56, no. 7, article id 176. Journal article (peer-reviewed), published.
Abstract [en]

Artificial Intelligence (AI) can enable the development of next-generation autonomous safety-critical systems in which Machine Learning (ML) algorithms learn optimized and safe solutions. AI can also support and assist human safety engineers in developing safety-critical systems. However, reconciling cutting-edge, state-of-the-art AI technology with safety engineering processes and safety standards is an open challenge that must be addressed before AI can be fully embraced in safety-critical systems. Many works already address this challenge, resulting in a vast and fragmented literature. Focusing on the industrial and transportation domains, this survey structures and analyzes the challenges, techniques, and methods for developing AI-based safety-critical systems, from traditional functional safety systems to autonomous systems. AI trustworthiness spans several dimensions, such as the engineering, ethical, and legal ones; this survey focuses on the safety engineering dimension.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2024
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-105483 (URN); 10.1145/3626314 (DOI); 2-s2.0-85191063705 (Scopus ID)
Note

Full text license: CC BY

Available from: 2024-05-15. Created: 2024-05-15. Last updated: 2024-05-15.
Saucedo, M. A., Patel, A., Saradagi, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation. In: : . Paper presented at The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024. IEEE
Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation
2024 (English). Conference paper, published paper (peer-reviewed).
Abstract [en]

In this article, we propose the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. We propose a graph-based learning methodology for the computation of belief (also referred to as expectation) on any given 3D scene graph, which is then used to strategically add new nodes (referred to as blind nodes) that are relevant to a robotic mission. We propose the method of Computation of Expectation based on Correlation Information (CECI) to reasonably approximate the real belief/expectation by learning histograms from available training data. A novel Graph Convolutional Network (GCN) model is developed to learn CECI from a repository of 3D scene graphs. As no database of 3D scene graphs exists for training the novel CECI model, we present a novel methodology for generating a 3D scene graph dataset based on semantically annotated real-life 3D spaces. The generated dataset is then utilized to train the proposed CECI model and for extensive validation of the proposed method. We establish the novel concept of Belief Scene Graphs (BSG) as a core component to integrate expectations into abstract representations. This new concept is an evolution of the classical 3D scene graph concept and aims to enable high-level reasoning for task planning and optimization of a variety of robotics missions. The efficacy of the overall framework has been evaluated in an object search scenario and has also been tested in a real-life experiment to emulate human common sense of unseen objects.

For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/hsGlSCa12iY

Place, publisher, year, edition, pages
IEEE, 2024
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-105326 (URN)
Conference
The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024
Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2024-05-07.
Saucedo, M. A., Stathoulopoulos, N., Mololoth, V., Kanellakis, C. & Nikolakopoulos, G. (2024). BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization. In: : . Paper presented at The 32nd Mediterranean Conference on Control and Automation (MED2024) - Chania, Crete, Greece, June 11-14, 2024. IEEE
BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization
2024 (English). Conference paper, published paper (peer-reviewed).
Abstract [en]

Object detection and global localization play a crucial role in robotics, spanning a great spectrum of applications from autonomous cars to multi-layered 3D scene graphs for semantic scene understanding. This article proposes BOX3D, a novel multi-modal and lightweight scheme for localizing objects of interest by fusing information from an RGB camera and a 3D LiDAR. BOX3D is structured around a three-layered architecture, building up from the local perception of the incoming sequential sensor data to a global perception refinement that accounts for outliers and the overall consistency of each object's observations. More specifically, the first layer handles the low-level fusion of camera and LiDAR data for initial 3D bounding box extraction. The second layer converts the 3D bounding boxes of each LiDAR scan to the world coordinate frame and applies a spatial pairing and merging mechanism to maintain the uniqueness of objects observed from different viewpoints. Finally, the third layer iteratively supervises the consistency of the results on the global map, using a point-to-voxel comparison to identify all points in the global map that belong to each object. Benchmarking results of the proposed novel architecture are showcased in multiple experimental trials on a public state-of-the-art large-scale dataset of urban environments.
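The second layer's frame transformation and spatial pairing can be illustrated with a short sketch. The 4x4 pose matrix, the pairing radius, and averaging as the merge rule are assumptions for illustration, not BOX3D's actual parameters.

```python
import numpy as np

def boxes_to_world(centers, T):
    """Transform (N, 3) box centers from the sensor frame to the
    world frame using a 4x4 homogeneous pose T (world from sensor)."""
    homo = np.hstack([centers, np.ones((len(centers), 1))])
    return (T @ homo.T).T[:, :3]

def merge_detections(world_centers, existing, radius=1.0):
    """Spatial pairing: a new detection within `radius` of a tracked
    object is treated as a re-observation and merged (here by naive
    averaging); otherwise it starts a new object. Returns the
    updated object list."""
    objects = [np.asarray(o, dtype=float) for o in existing]
    for c in world_centers:
        dists = [np.linalg.norm(c - o) for o in objects]
        if dists and min(dists) < radius:
            i = int(np.argmin(dists))
            objects[i] = (objects[i] + c) / 2.0   # merge re-observation
        else:
            objects.append(np.asarray(c, dtype=float))
    return objects
```

Working in the world frame is what lets observations of the same object from different viewpoints collapse into a single track.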

Place, publisher, year, edition, pages
IEEE, 2024
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-105327 (URN)
Conference
The 32nd Mediterranean Conference on Control and Automation (MED2024), Chania, Crete, Greece, June 11-14, 2024
Available from: 2024-05-03. Created: 2024-05-03. Last updated: 2024-05-07.
Bai, Y., Lindqvist, B., Karlsson, S., Kanellakis, C. & Nikolakopoulos, G. (2024). Cluster-based Multi-Robot Task Assignment, Planning, and Control.
Cluster-based Multi-Robot Task Assignment, Planning, and Control
2024 (English). Journal article (other academic), submitted.
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-103890 (URN)
Available from: 2024-01-23. Created: 2024-01-23. Last updated: 2024-01-24.
Damigos, G., Stathoulopoulos, N., Koval, A., Lindgren, T. & Nikolakopoulos, G. (2024). Communication-Aware Control of Large Data Transmissions via Centralized Cognition and 5G Networks for Multi-Robot Map merging. Journal of Intelligent and Robotic Systems, 110(1), Article ID 22.
Communication-Aware Control of Large Data Transmissions via Centralized Cognition and 5G Networks for Multi-Robot Map merging
2024 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 110, no. 1, article id 22. Journal article (peer-reviewed), published.
Abstract [en]

Multiple modern robotic applications benefit from centralized cognition and processing schemes. However, modern, well-equipped robotic platforms can output a large amount of data, which may exceed the capabilities of modern wireless communication systems if all data is transmitted without further consideration. This research presents a multi-agent, centralized, and real-time 3D point cloud map merging scheme for ceaselessly connected robotic agents. Centralized architectures enable mission awareness for all agents at all times, making tasks such as search and rescue more effective. The centralized component is placed on an edge server, ensuring low communication latency, while all agents access the server through a fifth-generation (5G) network. In addition, the proposed solution introduces a communication-aware control function that regulates the transmission of map instances to prevent significant data congestion and communication latencies, as well as to address conditions where the robotic agents traverse areas with limited or no coverage. The presented framework is agnostic to the localization and mapping procedure used, while it utilizes the full power of an edge server. Finally, the efficiency of the novel established framework is experimentally validated in multiple scenarios.
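The communication-aware control function's core decision, hold or send a map increment depending on the predicted link delay, can be sketched as below. The thresholding rule and all parameters are illustrative assumptions, not the paper's actual controller.

```python
def should_transmit(map_bytes, link_bps, max_delay_s, backlog_bytes=0):
    """Gate a map-increment transmission on the predicted delay.

    Transmit only if pushing `map_bytes` (on top of any queued
    `backlog_bytes`) over a link estimated at `link_bps` stays within
    the mission's latency budget; otherwise hold the update and merge
    it into the next one. Returns (transmit, predicted_delay_s).
    """
    if link_bps <= 0:                     # no coverage: always hold
        return False, float("inf")
    delay = (backlog_bytes + map_bytes) * 8.0 / link_bps
    return delay <= max_delay_s, delay
```

Holding updates in poor coverage and batching them when the link recovers is what prevents the congestion and latency spikes the abstract describes.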

Place, publisher, year, edition, pages
Springer, 2024
Keywords
5G, Edge, Map merging, Multi-agent
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-103941 (URN); 10.1007/s10846-023-02045-4 (DOI); 001148732200002 (); 2-s2.0-85182956096 (Scopus ID)
Research funder
EU, Horizon 2020, 953454
Note

Validated; 2024; Level 2; 2024-01-26 (joosat)

Full text: CC BY 4.0 License

Available from: 2024-01-26. Created: 2024-01-26. Last updated: 2024-04-15. Bibliographically approved.
Stamatopoulos, M.-N., Banerjee, A. & Nikolakopoulos, G. (2024). Conflict-free optimal motion planning for parallel aerial 3D printing using multiple UAVs. Expert systems with applications, 246, Article ID 123201.
Conflict-free optimal motion planning for parallel aerial 3D printing using multiple UAVs
2024 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 246, article id 123201. Journal article (peer-reviewed), published.
Abstract [en]

This article introduces a novel collaborative optimal motion planning framework for parallel aerial 3D printing. The proposed framework efficiently handles conflicts between the utilized Unmanned Aerial Vehicles (UAVs) as they follow predefined paths, allowing for a seamless enhancement of aerial 3D printing capabilities by employing multiple UAVs that collaborate in a parallel printing process. The established approach formulates the UAVs’ motion planning as a multi-constraint optimization problem, ensuring minimal adjustments to their velocities within specified limits. This guarantees smooth and uninterrupted printing while preventing collisions and adhering to the requirements of aerial printing. To substantiate the effectiveness of the proposed motion planning algorithm, an extensive array of simulation studies has been undertaken, encompassing scenarios where multiple UAVs engage in the fabrication of diverse construction shapes. The overall novel concept is extensively validated in simulations, and the obtained results show promise for enhancing the viability of aerial additive manufacturing and advancing its landscape.
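The idea of resolving conflicts through minimal velocity adjustments along predefined paths can be illustrated with a simplified sketch: instead of the paper's multi-constraint optimization, a hypothetical scale factor on one UAV's speed is reduced until a safety separation holds over a discretized horizon. The horizon, time step, safety distance, and backoff factor are all assumed values.

```python
import numpy as np

def min_separation(p1, v1, p2, v2, horizon=10.0, dt=0.1):
    """Minimum distance between two agents moving at constant
    velocities, evaluated over a discretized horizon."""
    t = np.arange(0.0, horizon, dt)
    d = (np.asarray(p1) + np.outer(t, v1)) - (np.asarray(p2) + np.outer(t, v2))
    return float(np.linalg.norm(d, axis=1).min())

def resolve_conflict(p1, v1, p2, v2, safe_dist=1.0, step=0.9):
    """Scale agent 2's speed down (keeping its path unchanged) until
    the pair maintains `safe_dist` of separation; returns the scale
    factor, or 0.0 if slowing down alone cannot resolve the conflict."""
    scale = 1.0
    while min_separation(p1, v1, p2, np.asarray(v2) * scale) < safe_dist:
        scale *= step
        if scale < 1e-3:
            return 0.0
    return scale
```

Slowing one agent shifts its arrival at the crossing point in time, which is exactly the kind of minimal velocity adjustment the framework seeks, though the real planner optimizes all agents jointly under the printing constraints.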

Keywords
Aerial 3D printing, Parallel printing, Conflict resolution, Multi-agent, Robotics
HSV category
Research subject
Robotics and artificial intelligence
Identifiers
urn:nbn:se:ltu:diva-103866 (URN); 10.1016/j.eswa.2024.123201 (DOI); 001164176400001 (); 2-s2.0-85182503125 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-01-22 (signyg)

Full text license: CC BY 4.0

Available from: 2024-01-23. Created: 2024-01-23. Last updated: 2024-04-04. Bibliographically approved.
Organisations
Identifiers
ORCID iD: orcid.org/0000-0003-0126-1897