Kanellakis, Christoforos (ORCID: orcid.org/0000-0001-8870-6718)
Publications (10 of 69)
Bai, Y., Lindqvist, B., Karlsson, S., Kanellakis, C. & Nikolakopoulos, G. (2024). Cluster-based Multi-Robot Task Assignment, Planning, and Control.
2024 (English). Article in journal (Other academic). Submitted
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103890 (URN)
Available from: 2024-01-23 Created: 2024-01-23 Last updated: 2024-01-24
Saucedo, M. A. V., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2024). EAT: Environment Agnostic Traversability for reactive navigation. Expert systems with applications, 244, Article ID 122919.
2024 (English) In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 244, article id 122919. Article in journal (Refereed), Published
Abstract [en]

This work presents EAT (Environment Agnostic Traversability for Reactive Navigation), a novel framework for traversability estimation in indoor, outdoor, subterranean (SubT) and other unstructured environments. The architecture provides online updates on traversable regions during the mission and adapts to varying environments, while remaining robust to noisy semantic image segmentation. The proposed framework applies terrain prioritization based on a novel exponential decay function to fuse the semantic information and geometric features extracted from RGB-D images and obtain the traversability of the scene. Moreover, EAT introduces an obstacle inflation mechanism on the traversability image, based on a mean-window weighting module, allowing the proximity to untraversable regions to be adapted. The overall architecture uses two LRASPP MobileNet V3 Large Convolutional Neural Networks (CNNs) for semantic segmentation over RGB images, where the first classifies the terrain types and the second classifies see-through obstacles in the scene. Additionally, the geometric features profile the underlying surface properties of the local scene by extracting normals from depth images. The proposed scheme was integrated with a control architecture in reactive navigation scenarios and was experimentally validated in indoor, outdoor and subterranean environments with a Pioneer 3AT mobile robot.
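The decay-based fusion and the mean-window inflation described above can be illustrated with a short sketch (not the authors' implementation; the class priorities, decay rate and window size below are hypothetical placeholders):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hypothetical terrain priorities (higher = preferred). The actual classes and
# weights used by EAT are defined in the paper; these are illustrative only.
TERRAIN_PRIORITY = {0: 0.0,   # see-through / hard obstacle
                    1: 0.4,   # rough terrain
                    2: 1.0}   # preferred, easily traversable terrain

def traversability(semantic_map, normals, decay=3.0, window=15):
    """Fuse semantic priorities with surface geometry via exponential decay.

    semantic_map : (H, W) integer class ids from the segmentation CNN
    normals      : (H, W, 3) unit surface normals estimated from the depth image
    decay        : rate at which the score drops as the surface tilts
    window       : mean-window size used to soften edges near untraversable pixels
    """
    # Semantic term: per-pixel terrain priority (unknown classes -> 0).
    sem = np.vectorize(lambda c: TERRAIN_PRIORITY.get(int(c), 0.0))(semantic_map)

    # Geometric term: tilt of the surface normal away from the up axis.
    tilt = np.arccos(np.clip(normals[..., 2], -1.0, 1.0))   # radians
    geo = np.exp(-decay * tilt)                             # 1 when flat, -> 0 when steep

    # Mean-window weighting: averaging pulls scores down near low-scoring
    # pixels, acting as a soft obstacle inflation.
    return uniform_filter(sem * geo, size=window)
```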

Place, publisher, year, edition, pages
Elsevier Ltd, 2024
Keywords
Navigation in unstructured environments, Traversability estimation with RGB-D data, Traversability guided reactive navigation, Vision based autonomous systems
National Category
Computer Vision and Robotics (Autonomous Systems); Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103739 (URN), 10.1016/j.eswa.2023.122919 (DOI), 2-s2.0-85180941472 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-02-12 (joosat);

Funder: European Union's Horizon 2020 Research and Innovation Programme (101003591 NEXGEN-SIMS);

Full text license: CC BY

Available from: 2024-01-16 Created: 2024-01-16 Last updated: 2024-02-12. Bibliographically approved
Kottayam Viswanathan, V., Mansouri, S. S. & Kanellakis, C. (2023). Aerial infrastructures inspection. In: George Nikolakopoulos, Sina Sharif Mansouri, Christoforos Kanellakis (Eds.), Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications (pp. 175-211). Elsevier
2023 (English) In: Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications / [ed] George Nikolakopoulos, Sina Sharif Mansouri, Christoforos Kanellakis, Elsevier, 2023, p. 175-211. Chapter in book (Other academic)
Abstract [en]

This chapter presents the application of autonomous Aerial Robotic Workers (ARWs) to the visual inspection of 3D infrastructures, utilizing single and multiple agents. To address this problem, the developed framework combines the fundamental tasks of path planning, localization, and mapping, which are the essential components of autonomous robotic navigation systems. In the presented approach, the ARWs deployed to inspect the structure rely only on their onboard computer and sensory system. Initially, the path-planning problem is discussed and mathematically formulated, leading to the development of a geometry-based approach for coverage of complex structures. The navigation of the platform is handled by the localization component, which provides accurate pose estimation for the vehicle using a visual-inertial estimation scheme. During the coverage mission, the agents collect image data for post-processing and mapping using Visual SLAM and Structure from Motion techniques. The performance of the proposed framework has been experimentally evaluated in multiple indoor and realistic outdoor infrastructure inspection experiments, demonstrating the merits of the autonomous navigation system (path planning and localization) and the 3D model building of the inspected object and infrastructure.
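As a minimal illustration of what a geometry-based coverage pattern can look like, the sketch below generates camera waypoints on rings around a simple cylindrical structure; the standoff distance and ring spacing are hypothetical placeholders, and the chapter's planner handles far more general 3D geometries:

```python
import numpy as np

def cylinder_coverage_waypoints(radius, height, standoff=2.0,
                                n_per_ring=12, vertical_step=1.5):
    """Generate (x, y, z, yaw) camera waypoints on rings around a cylinder.

    The yaw at each waypoint points toward the cylinder axis, so consecutive
    rings sweep the surface from bottom to top for full visual coverage.
    """
    r = radius + standoff                       # keep a safety standoff
    waypoints = []
    for z in np.arange(0.0, height + 1e-6, vertical_step):
        for theta in np.linspace(0.0, 2.0 * np.pi, n_per_ring, endpoint=False):
            x, y = r * np.cos(theta), r * np.sin(theta)
            yaw = np.arctan2(-y, -x)            # look toward the axis
            waypoints.append((x, y, z, yaw))
    return waypoints
```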

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Infrastructure inspections, ARWs, Autonomy, Visual inspection, 3D point cloud
National Category
Control Engineering; Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-97393 (URN), 10.1016/B978-0-12-814909-6.00017-2 (DOI), 2-s2.0-85150100105 (Scopus ID), 978-0-12-814909-6 (ISBN)
Available from: 2023-05-24 Created: 2023-05-24 Last updated: 2023-05-24. Bibliographically approved
Nikolakopoulos, G., Mansouri, S. S. & Kanellakis, C. (Eds.). (2023). Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications. Elsevier
2023 (English). Collection (editor) (Other academic)
Place, publisher, year, edition, pages
Elsevier, 2023. p. 265
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-96926 (URN), 10.1016/C2017-0-02260-7 (DOI), 2-s2.0-85150084021 (Scopus ID), 978-0-12-814909-6 (ISBN)
Available from: 2023-04-24 Created: 2023-04-24 Last updated: 2023-04-24. Bibliographically approved
Lindqvist, B., Mansouri, S. S., Kanellakis, C. & Kottayam Viswanathan, V. (2023). ARW deployment for subterranean environments. In: George Nikolakopoulos, Sina Sharif Mansouri, Christoforos Kanellakis (Eds.), Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications (pp. 213-243). Elsevier
2023 (English) In: Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications / [ed] George Nikolakopoulos, Sina Sharif Mansouri, Christoforos Kanellakis, Elsevier, 2023, p. 213-243. Chapter in book (Other academic)
Abstract [en]

This chapter presents the deployment of fully autonomous Aerial Robotic Workers (ARWs) for inspection and exploration tasks in subterranean environments. The framework focuses on the navigation, control, and perception capabilities of resource-constrained aerial platforms, contributing to the development of consumable scouting robotic platforms for real-life applications in extreme environments. In this approach, the aerial platform is treated as a floating object, commanded by a velocity controller on the x-y axes, a height controller, and a heading correction module that aligns the platform with the open space of the mining tunnel. Multiple experimentally verified methods for the heading correction module in dark environments with limited texture, using either a visual camera or a 2D LiDAR, are presented and demonstrated in real mining environments.
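One simple way to realize a heading correction of this kind from a 2D LiDAR scan is sketched below, under the assumption that the deepest smoothed range indicates the tunnel's open space; this is an illustration, not the chapter's exact method, and the gain and window size are hypothetical:

```python
import numpy as np

def heading_correction(ranges, angles, window=15, k_yaw=0.8):
    """Return a yaw-rate command steering toward the deepest open space.

    ranges : 1-D array of LiDAR ranges (m), NaN/inf where there is no return
    angles : matching beam angles (rad), 0 = straight ahead
    window : number of beams averaged together to suppress single-beam noise
    k_yaw  : proportional gain on the heading error (illustrative value)
    """
    r = np.nan_to_num(ranges, nan=0.0, posinf=0.0)
    # Smooth the scan so a single long stray beam does not dominate.
    kernel = np.ones(window) / window
    smoothed = np.convolve(r, kernel, mode="same")
    # Assume the tunnel's open space lies in the direction of maximum depth.
    target_angle = angles[int(np.argmax(smoothed))]
    return k_yaw * target_angle
```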

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Subterranean, Autonomous, Reactive navigation, Collision avoidance
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-97388 (URN), 10.1016/B978-0-12-814909-6.00018-4 (DOI), 2-s2.0-85150120419 (Scopus ID), 978-0-12-814909-6 (ISBN)
Available from: 2023-05-24 Created: 2023-05-24 Last updated: 2023-05-24. Bibliographically approved
Karlsson, S., Koval, A., Kanellakis, C. & Nikolakopoulos, G. (2023). D+∗: A risk aware platform agnostic heterogeneous path planner. Expert systems with applications, 215, Article ID 119408.
2023 (English) In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 215, article id 119408. Article, review/survey (Refereed), Published
Abstract [en]

This article establishes D+*, a novel risk-aware and platform-agnostic heterogeneous global path planner for robotic navigation in complex environments. The proposed planner addresses a fundamental bottleneck of occupancy-based path planners related to their dependency on accurate and dense maps. More specifically, their performance is highly affected by poorly reconstructed or sparse areas (e.g. holes in the walls or ceilings), leading to faulty paths generated outside the physical boundaries of the 3-dimensional space. As will be presented, D+* addresses this challenge with three novel contributions integrated into one solution, namely: (a) the proximity risk, (b) the modeling of the unknown space, and (c) the map updates. By adding a risk layer to spaces that are close to occupied ones, some holes are filled, and the problematic short-cutting through them towards the final goal is prevented. D+* also provides safety margins to walls and other obstacles, a property that results in paths that do not cut corners that could potentially disrupt the platform operation. D+* additionally has the capability to model the unknown space as risk-free areas that should keep the paths inside, e.g. in a tunnel environment, thereby heavily reducing the risk of larger shortcuts through openings in the walls. D+* also introduces a dynamic map handling capability that continuously updates with the latest information acquired during the map building process, allowing the planner to handle constant map growth and resolve cases of planning over outdated, sparser map reconstructions. The proposed path planner can plan both 2D and 3D paths by only changing the input map to a 2D or 3D map, and it is independent of the dynamics of the robotic platform. The efficiency of the proposed scheme is experimentally evaluated in multiple real-life experiments, where D+* successfully produces properly planned paths, either in 2D in the use case of the Boston Dynamics Spot robot or in 3D in the case of an unmanned aerial vehicle, in varying and challenging scenarios.
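The proximity-risk idea can be illustrated on a 2D occupancy grid: cells near obstacles receive an additional traversal cost before a standard graph search is run, so shortest paths keep a safety margin and avoid sneaking through small reconstruction holes. The sketch below is a minimal Dijkstra-based illustration with hypothetical gains, not the published D+* formulation:

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def risk_layer(occ, inflation=5, risk_gain=10.0):
    """Add cost near occupied cells so shortest paths keep a safety margin."""
    dist = distance_transform_edt(occ == 0)        # distance to nearest obstacle (cells)
    risk = np.where(dist < inflation, risk_gain * (inflation - dist), 0.0)
    risk[occ == 1] = np.inf                        # occupied cells are untraversable
    return risk

def plan(occ, start, goal):
    """Dijkstra over a 4-connected grid with traversal cost 1 + risk."""
    risk = risk_layer(occ)
    h, w = occ.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    prev, pq = {}, [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + risk[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    if not np.isfinite(dist[goal]):
        return None                                # goal unreachable
    # Walk back from goal to start to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```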

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
DSP, Path planning, Risk aware, Platform agnostic
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-94859 (URN), 10.1016/j.eswa.2022.119408 (DOI), 000906357700001 (), 2-s2.0-85144050427 (Scopus ID)
Funder
EU, Horizon 2020, 869379 illuMINEation
Note

Validated; 2023; Level 2; 2023-01-01 (hanlid)

Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2023-03-03. Bibliographically approved
Koval, A., Mansouri, S. S. & Kanellakis, C. (2023). Machine learning for ARWs. In: George Nikolakopoulos, Sina Sharif Mansouri, Christoforos Kanellakis (Eds.), Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications (pp. 159-174). Elsevier
2023 (English) In: Aerial Robotic Workers: Design, Modeling, Control, Vision, and Their Applications / [ed] George Nikolakopoulos, Sina Sharif Mansouri, Christoforos Kanellakis, Elsevier, 2023, p. 159-174. Chapter in book (Other academic)
Abstract [en]

Navigation in underground mine environments is a challenging area for aerial robotic workers. Mines usually have complex geometries, including multiple crossings between different tunnels. Moreover, improving the safety of mines requires drones to be able to detect human workers. Thus, in this chapter, we introduce frameworks for junction and human detection in underground mine environments.
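A minimal transfer-learning sketch in the spirit of such detection frameworks is shown below; it assumes an ImageNet-pretrained MobileNetV3 backbone and illustrative class names, and is not the chapter's actual training setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical classes for an underground-mine scene classifier head.
CLASSES = ["tunnel", "junction", "human"]

def build_transfer_model(num_classes=len(CLASSES)):
    """MobileNetV3-Large pretrained on ImageNet, with a new classification head."""
    model = models.mobilenet_v3_large(weights="IMAGENET1K_V1")
    for p in model.features.parameters():        # freeze the convolutional backbone
        p.requires_grad = False
    in_features = model.classifier[3].in_features
    model.classifier[3] = nn.Linear(in_features, num_classes)  # new head
    return model

model = build_transfer_model()
# Only the classifier head is optimized; the backbone features are reused as-is.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```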

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
CNN, Human detection, Junction recognition, Transfer learning
National Category
Other Civil Engineering
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-97392 (URN), 10.1016/B978-0-12-814909-6.00016-0 (DOI), 2-s2.0-85150104596 (Scopus ID), 978-0-12-814909-6 (ISBN)
Available from: 2023-05-24 Created: 2023-05-24 Last updated: 2023-09-15. Bibliographically approved
Saucedo, M. A. V., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2023). Memory Enabled Segmentation of Terrain for Traversability based Reactive Navigation. In: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO). Paper presented at 2023 IEEE International Conference on Robotics and Biomimetics, ROBIO 2023, Koh Samui, Thailand, December 4-9, 2023. IEEE
2023 (English) In: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2023. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE, 2023
National Category
Computer Vision and Robotics (Autonomous Systems); Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103974 (URN), 10.1109/ROBIO58561.2023.10354930 (DOI), 2-s2.0-85182558371 (Scopus ID), 979-8-3503-2570-6 (ISBN), 979-8-3503-2571-3 (ISBN)
Conference
2023 IEEE International Conference on Robotics and Biomimetics, ROBIO 2023, Koh Samui, Thailand, December 4-9, 2023
Funder
EU, Horizon 2020, 101003591
Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2024-01-29. Bibliographically approved
Saucedo, M. A., Kanellakis, C. & Nikolakopoulos, G. (2023). MSL3D: Pointcloud-based muck pile Segmentation and Localization in Unknown SubT Environments. In: 2023 31st Mediterranean Conference on Control and Automation, MED 2023. Paper presented at 31st Mediterranean Conference on Control and Automation, MED 2023, Limassol, Cyprus, June 26-29, 2023 (pp. 269-274). Institute of Electrical and Electronics Engineers Inc.
2023 (English) In: 2023 31st Mediterranean Conference on Control and Automation, MED 2023, Institute of Electrical and Electronics Engineers Inc., 2023, p. 269-274. Conference paper, Published paper (Refereed)
Abstract [en]

This article presents MSL3D, a novel framework for pointcloud-based muck pile segmentation and localization in unknown subterranean (SubT) environments. The proposed framework is capable of progressively segmenting muck piles and extracting their location in a globally constructed point cloud map, using the autonomy sensor payload of mining or robotic platforms. MSL3D is structured in a novel two-layer architecture that relies on the geometric properties of muck piles in underground tunnels, where the first layer extracts a local Volume Of Interest (VOI) proposal area from the registered point cloud and the second layer refines the muck pile extraction of each VOI proposal in the globally optimized point cloud map. The first layer of MSL3D extracts local VOIs bounded in the look-ahead surroundings of the platform. More specifically, the ceiling, the left and right walls, and the ground are continuously segmented using progressive RANSAC, searching for inclination in the segmented ground area to keep as the next-best local VOI. Once a local VOI is extracted, it is transmitted to the second layer, where it is converted to world frame coordinates. Subsequently, a morphological filter is applied to segment ground and non-ground points, followed once again by RANSAC to extract the remaining points corresponding to the right and left walls. In this approach, Euclidean clustering is utilized to keep the cluster with the majority of points, which is assumed to belong to the muck pile. The efficacy of the proposed novel scheme was successfully and experimentally validated in real, large-scale SubT environments utilizing a custom-made UAV.
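The plane-removal-then-clustering stage can be sketched with off-the-shelf point cloud tooling, here Open3D's RANSAC plane segmentation and DBSCAN-based Euclidean clustering; the thresholds are illustrative placeholders and this is not the published MSL3D pipeline:

```python
import numpy as np
import open3d as o3d

def segment_muck_pile(pcd, n_planes=4, dist_thresh=0.15):
    """Remove dominant planes (ground, walls, ceiling), then keep the largest cluster.

    pcd : open3d.geometry.PointCloud of a local volume of interest (VOI)
    """
    rest = pcd
    for _ in range(n_planes):
        # RANSAC fits one dominant plane at a time (progressive removal).
        _, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                        ransac_n=3, num_iterations=1000)
        rest = rest.select_by_index(inliers, invert=True)

    # Euclidean clustering of the remaining points; the biggest cluster is
    # assumed to belong to the muck pile.
    labels = np.asarray(rest.cluster_dbscan(eps=0.3, min_points=20))
    if labels.size == 0 or labels.max() < 0:
        return None                              # nothing left to cluster
    biggest = np.argmax(np.bincount(labels[labels >= 0]))
    return rest.select_by_index(np.where(labels == biggest)[0].tolist())
```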

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
Series
Mediterranean Conference on Control and Automation, ISSN 2325-369X, E-ISSN 2473-3504
Keywords
Automatic muck pile extraction, Muck pile Localization, Muck pile segmentation, Pointcloud processing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer and Information Sciences
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-101102 (URN), 10.1109/MED59994.2023.10185912 (DOI), 2-s2.0-85167798931 (Scopus ID), 979-8-3503-1544-8 (ISBN), 979-8-3503-1543-1 (ISBN)
Conference
31st Mediterranean Conference on Control and Automation, MED 2023, Limassol, Cyprus, June 26-29, 2023
Available from: 2023-08-30 Created: 2023-08-30 Last updated: 2023-08-30. Bibliographically approved
Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2023). Multi Agent Coordination Strategy for Collaborative Exploration of GPS-denied Environments. In: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO). Paper presented at 2023 IEEE International Conference on Robotics and Biomimetics, ROBIO 2023, Koh Samui, Thailand, December 4-9, 2023. IEEE
2023 (English) In: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2023. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE, 2023
National Category
Robotics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103976 (URN), 10.1109/ROBIO58561.2023.10354634 (DOI), 2-s2.0-85182582431 (Scopus ID), 979-8-3503-2570-6 (ISBN), 979-8-3503-2571-3 (ISBN)
Conference
2023 IEEE International Conference on Robotics and Biomimetics, ROBIO 2023, Koh Samui, Thailand, December 4-9, 2023
Funder
EU, Horizon 2020, 101003591
Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2024-01-29. Bibliographically approved