Saucedo, Mario Alberto Valdes (ORCID: orcid.org/0000-0001-8132-4178)
Publications (10 of 11)
Saucedo, M. A., Patel, A., Saradagi, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation. Paper presented at The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024 (pp. 9441-9447). IEEE
Belief Scene Graphs: Expanding Partial Scenes with Objects through Computation of Expectation
2024 (English) Conference paper, Published paper (Refereed)
Abstract [en]

In this article, we propose the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. We propose a graph-based learning methodology for the computation of belief (also referred to as expectation) on any given 3D scene graph, which is then used to strategically add new nodes (referred to as blind nodes) that are relevant to a robotic mission. We propose the method of Computation of Expectation based on Correlation Information (CECI) to reasonably approximate the real belief/expectation by learning histograms from available training data. A novel Graph Convolutional Neural Network (GCN) model is developed to learn CECI from a repository of 3D scene graphs. As no database of 3D scene graphs exists for the training of the novel CECI model, we present a novel methodology for generating a 3D scene graph dataset based on semantically annotated real-life 3D spaces. The generated dataset is then utilized to train the proposed CECI model and for extensive validation of the proposed method. We establish the novel concept of Belief Scene Graphs (BSG) as a core component to integrate expectations into abstract representations. This new concept is an evolution of the classical 3D scene graph concept and aims to enable high-level reasoning for task planning and optimization of a variety of robotics missions. The efficacy of the overall framework has been evaluated in an object search scenario, and has also been tested in a real-life experiment to emulate human common sense of unseen objects.

For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/hsGlSCa12iY
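
As an illustration of the learning step described above, the hypothetical toy example below runs a single graph-convolution pass over a two-room partial scene graph and turns the result into per-node expectation histograms, from which candidate "blind nodes" are thresholded. The node features, class list, weights and threshold are placeholder assumptions, not the authors' trained CECI/GCN model.

```python
# Hypothetical sketch of the CECI idea: one graph-convolution step aggregates
# neighbour information, a softmax turns the result into a per-node histogram
# over object classes, and classes with high expectation become "blind node"
# proposals. Features, weights and class names are illustrative placeholders.
import numpy as np

CLASSES = ["chair", "table", "monitor", "sink"]        # assumed label set

def normalized_adjacency(A):
    """Symmetric normalisation D^-1/2 (A + I) D^-1/2 used by standard GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def expected_histograms(A, X, W):
    """One GCN layer followed by a softmax -> per-node class histograms."""
    H = normalized_adjacency(A) @ X @ W                 # message passing
    H = np.exp(H - H.max(axis=1, keepdims=True))        # numerically stable softmax
    return H / H.sum(axis=1, keepdims=True)

def blind_nodes(hist, threshold=0.25):
    """Classes whose expectation clears the threshold are proposed as blind nodes."""
    return [[CLASSES[c] for c in np.where(h >= threshold)[0]] for h in hist]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.array([[0.0, 1.0], [1.0, 0.0]])              # two connected room nodes
    X = rng.random((2, 6))                               # placeholder node features
    W = rng.random((6, len(CLASSES)))                    # untrained weights
    print(blind_nodes(expected_histograms(A, X, W)))
```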

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105326 (URN), 10.1109/ICRA57147.2024.10611352 (DOI), 2-s2.0-85202433848 (Scopus ID)
Conference
The 2024 IEEE International Conference on Robotics and Automation (ICRA2024), Yokohama, Japan, May 13-17, 2024
Note

Funder: European Union's Horizon Europe Research and Innovation Programme (101119774 SPEAR);

ISBN for host publication: 979-8-3503-8457-4;

Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2025-02-07. Bibliographically approved
Saucedo, M. A., Stathoulopoulos, N., Mololoth, V., Kanellakis, C. & Nikolakopoulos, G. (2024). BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization. In: 2024 32nd Mediterranean Conference on Control and Automation (MED). Paper presented at The 32nd Mediterranean Conference on Control and Automation (MED2024), Chania, Crete, Greece, June 11-14, 2024. IEEE
BOX3D: Lightweight Camera-LiDAR Fusion for 3D Object Detection and Localization
2024 (English) In: 2024 32nd Mediterranean Conference on Control and Automation (MED), IEEE, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Object detection and global localization play a crucial role in robotics, spanning a great spectrum of applications from autonomous cars to multi-layered 3D Scene Graphs for semantic scene understanding. This article proposes BOX3D, a novel multi-modal and lightweight scheme for localizing objects of interest by fusing the information from an RGB camera and a 3D LiDAR. BOX3D is structured around a three-layered architecture, building up from the local perception of the incoming sequential sensor data to a global perception refinement that compensates for outliers and enforces the consistency of each object's observations. More specifically, the first layer handles the low-level fusion of camera and LiDAR data for initial 3D bounding box extraction. The second layer converts each LiDAR scan's 3D bounding boxes to the world coordinate frame and applies a spatial pairing and merging mechanism to maintain the uniqueness of objects observed from different viewpoints. Finally, BOX3D integrates a third layer that iteratively supervises the consistency of the results on the global map, using a point-to-voxel comparison to identify all points in the global map that belong to the object. Benchmarking results of the proposed novel architecture are showcased in multiple experimental trials on a public state-of-the-art large-scale dataset of urban environments.
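
A minimal sketch of the spatial pairing and merging idea in the second layer is given below, assuming axis-aligned boxes already transformed to the world frame; the box format and the distance gate are illustrative assumptions rather than the paper's implementation.

```python
# Hypothetical sketch of a BOX3D-like pairing/merging step: per-scan 3D boxes
# (already in the world frame) are merged when their centres fall within a
# distance gate, so an object seen from several viewpoints keeps a single
# global box. Box layout [cx, cy, cz, dx, dy, dz] and gate value are assumed.
import numpy as np

def merge_world_boxes(boxes, gate=1.0):
    """boxes: (N, 6) array of [cx, cy, cz, dx, dy, dz]; returns merged boxes."""
    merged = []
    for b in boxes:
        for i, m in enumerate(merged):
            if np.linalg.norm(b[:3] - m[:3]) < gate:       # spatial pairing
                lo = np.minimum(b[:3] - b[3:] / 2, m[:3] - m[3:] / 2)
                hi = np.maximum(b[:3] + b[3:] / 2, m[:3] + m[3:] / 2)
                merged[i] = np.concatenate([(lo + hi) / 2, hi - lo])
                break
        else:
            merged.append(b.astype(float))                 # new unique object
    return np.vstack(merged)

if __name__ == "__main__":
    scans = np.array([[0.0, 0.0, 0.5, 1, 1, 1],            # same object, two views
                      [0.2, 0.1, 0.5, 1, 1, 1],
                      [5.0, 0.0, 0.5, 2, 1, 1]])           # a different object
    print(merge_world_boxes(scans))
```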

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105327 (URN), 10.1109/MED61351.2024.10566236 (DOI), 2-s2.0-85198221796 (Scopus ID)
Conference
The 32nd Mediterranean Conference on Control and Automation (MED2024), Chania, Crete, Greece, June 11-14, 2024
Funder
Swedish Energy Agency; EU, Horizon Europe, 101091462 m4mining
Note

Funder: SP14 ‘Autonomous Drones for Underground Mining Operations’;

ISBN for host publication: 979-8-3503-9545-7; 979-8-3503-9544-0

Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2025-02-07. Bibliographically approved
Saucedo, M. A. V., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2024). EAT: Environment Agnostic Traversability for reactive navigation. Expert systems with applications, 244, Article ID 122919.
EAT: Environment Agnostic Traversability for reactive navigation
2024 (English) In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 244, article id 122919. Article in journal (Refereed) Published
Abstract [en]

This work presents EAT (Environment Agnostic Traversability for Reactive Navigation), a novel framework for traversability estimation in indoor, outdoor, subterranean (SubT) and other unstructured environments. The architecture provides online updates of traversable regions during the mission and adapts to varying environments, while remaining robust to noisy semantic image segmentation. The proposed framework performs terrain prioritization based on a novel exponential decay function that fuses the semantic information and the geometric features extracted from RGB-D images to obtain the traversability of the scene. Moreover, EAT introduces an obstacle inflation mechanism on the traversability image, based on a mean-window weighting module, which allows the proximity to untraversable regions to be adapted. The overall architecture uses two LRASPP MobileNet V3 Large Convolutional Neural Networks (CNNs) for semantic segmentation of RGB images, where the first one classifies the terrain types and the second one classifies see-through obstacles in the scene. Additionally, the geometric features profile the underlying surface properties of the local scene by extracting normals from depth images. The proposed scheme was integrated with a control architecture in reactive navigation scenarios and was experimentally validated in indoor, outdoor and subterranean environments with a Pioneer 3AT mobile robot.
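
The fusion idea can be illustrated with a short, assumed sketch: per-pixel semantic terrain priorities are combined with an exponential decay over the surface-normal angle, and a mean-window filter then inflates the influence of untraversable regions. The class priorities, decay rate and window size below are placeholders, not the published parameters or equations.

```python
# Minimal sketch (assumed formulation, not the paper's exact equations) of
# fusing semantic terrain priorities with geometric surface information via an
# exponential decay, followed by a mean-window "inflation" around obstacles.
import numpy as np
from scipy.ndimage import uniform_filter

PRIORITY = {0: 1.0, 1: 0.6, 2: 0.0}   # e.g. 0 = paved, 1 = gravel, 2 = see-through obstacle

def traversability(labels, normal_angle_rad, decay=3.0):
    """Per-pixel traversability in [0, 1] from semantics and surface normals."""
    semantic = np.vectorize(PRIORITY.get)(labels).astype(float)
    geometric = np.exp(-decay * np.abs(normal_angle_rad))   # steep surface -> low score
    return semantic * geometric

def inflate_obstacles(trav, window=5):
    """Mean-window weighting: pixels near low-traversability regions are damped."""
    return np.minimum(trav, uniform_filter(trav, size=window))

if __name__ == "__main__":
    labels = np.zeros((6, 6), dtype=int)
    labels[2:4, 2:4] = 2                                     # see-through obstacle patch
    angles = np.zeros((6, 6))                                # flat ground everywhere
    print(inflate_obstacles(traversability(labels, angles)).round(2))
```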

Place, publisher, year, edition, pages
Elsevier Ltd, 2024
Keywords
Navigation in unstructured environments, Traversability estimation with RGB-D data, Traversability guided reactive navigation, Vision based autonomous systems
National Category
Computer graphics and computer vision; Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103739 (URN), 10.1016/j.eswa.2023.122919 (DOI), 001144940800001 (), 2-s2.0-85180941472 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-02-12 (joosat);

Funder: European Union's Horizon 2020 Research and Innovation Programme (101003591 NEXGEN-SIMS);

Full text license: CC BY

Available from: 2024-01-16 Created: 2024-01-16 Last updated: 2025-02-05. Bibliographically approved
Saucedo, M. A., Stathoulopoulos, N., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2024). Leveraging Computation of Expectation Models for Commonsense Affordance Estimation on 3D Scene Graphs. In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE, October 14-18, 2024 (pp. 9797-9802). IEEE
Leveraging Computation of Expectation Models for Commonsense Affordance Estimation on 3D Scene Graphs
2024 (English) In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2024, p. 9797-9802. Conference paper, Published paper (Refereed)
Abstract [en]

This article studies the commonsense object affordance concept for enabling close-to-human task planning and task optimization of embodied robotic agents in urban environments. The focus of the object affordance is on reasoning about how to effectively identify an object's inherent utility during task execution, which in this work is enabled through the analysis of contextual relations over the sparse information of 3D scene graphs. The proposed framework develops a Computation of Expectation based on Correlation Information (CECI) model to learn probability distributions using a Graph Convolutional Network, allowing the commonsense affordance to be extracted for individual members of a semantic class. The overall framework was experimentally validated in a real-world indoor environment, showcasing the ability of the method to align with human commonsense. For a video of the article, showcasing the experimental demonstration, please refer to the following link: https://youtu.be/BDCMVx2GiQE

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111636 (URN), 10.1109/IROS58592.2024.10802560 (DOI), 2-s2.0-85216468381 (Scopus ID)
Conference
The 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, UAE, October 14-18, 2024
Funder
EU, Horizon Europe, 101119774 SPEAR
Note

ISBN for host publication: 979-8-3503-7770-5

Available from: 2025-02-17 Created: 2025-02-17 Last updated: 2025-02-17. Bibliographically approved
Stathoulopoulos, N., Saucedo, M. A., Koval, A. & Nikolakopoulos, G. (2024). RecNet: An Invertible Point Cloud Encoding through Range Image Embeddings for Multi-Robot Map Sharing and Reconstruction. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, May 13-17, 2024 (pp. 4883-4889). IEEE
RecNet: An Invertible Point Cloud Encoding through Range Image Embeddings for Multi-Robot Map Sharing and Reconstruction
2024 (English) In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 4883-4889. Conference paper, Published paper (Refereed)
Abstract [en]

Motivated by resource-constrained robots and the need for effective place recognition in multi-robot systems, this article introduces RecNet, a novel approach that addresses both challenges concurrently. The core of RecNet's methodology is a transformative process: it projects 3D point clouds into range images, compresses them using an encoder-decoder framework, and subsequently reconstructs the range image, restoring the original point cloud. Additionally, RecNet utilizes the latent vector extracted from this process for efficient place recognition tasks. This approach not only achieves comparable place recognition results but also maintains a compact representation, suitable for sharing among robots to reconstruct their collective maps. The evaluation of RecNet encompasses an array of metrics, including place recognition performance, the structural similarity of the reconstructed point clouds, and the transmission-bandwidth advantages derived from sharing only the latent vectors. Our proposed approach is assessed using both a publicly available dataset and field experiments, confirming its efficacy and potential for real-world applications.
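
The invertible encoding rests on a spherical projection between a 3D point cloud and a range image; the sketch below illustrates that projection and its inverse under assumed image-resolution and vertical field-of-view values (the encoder-decoder compression and the place-recognition head are omitted).

```python
# Hypothetical sketch of the projection at the heart of a RecNet-like pipeline:
# a point cloud becomes a range image (which an encoder-decoder could compress)
# and is then projected back to points. Resolution and FOV are assumed values.
import numpy as np

def to_range_image(points, h=32, w=360, fov=(-0.4363, 0.2618)):  # ~(-25 deg, +15 deg)
    """Spherical projection of an (N, 3) point cloud into an (h, w) range image."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw, pitch = np.arctan2(y, x), np.arcsin(z / np.maximum(r, 1e-9))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip(((fov[1] - pitch) / (fov[1] - fov[0]) * h).astype(int), 0, h - 1)
    img = np.zeros((h, w))
    img[v, u] = r                          # keep the last return per pixel
    return img

def to_points(img, fov=(-0.4363, 0.2618)):
    """Inverse projection: recover one point per non-empty range-image pixel."""
    h, w = img.shape
    v, u = np.nonzero(img)
    r = img[v, u]
    yaw = u / w * 2 * np.pi - np.pi
    pitch = fov[1] - v / h * (fov[1] - fov[0])
    return np.stack([r * np.cos(pitch) * np.cos(yaw),
                     r * np.cos(pitch) * np.sin(yaw),
                     r * np.sin(pitch)], axis=1)

if __name__ == "__main__":
    cloud = np.random.default_rng(0).normal(size=(1000, 3)) * [10, 10, 1]
    print(to_points(to_range_image(cloud)).shape)
```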

Place, publisher, year, edition, pages
IEEE, 2024
National Category
Computer graphics and computer vision; Computer Sciences; Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-108594 (URN), 10.1109/ICRA57147.2024.10611602 (DOI), 2-s2.0-85193767104 (Scopus ID)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, May 13-17, 2024
Note

ISBN for host publication: 979-8-3503-8457-4

Available from: 2024-08-16 Created: 2024-08-16 Last updated: 2025-02-05. Bibliographically approved
Patel, A., Saucedo, M. A., Kanellakis, C. & Nikolakopoulos, G. (2024). STAGE: Scalable and Traversability-Aware Graph based Exploration Planner for Dynamically Varying Environments. In: 2024 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, May 13-17, 2024 (pp. 5949-5955). IEEE
STAGE: Scalable and Traversability-Aware Graph based Exploration Planner for Dynamically Varying Environments
2024 (English) In: 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024, p. 5949-5955. Conference paper, Published paper (Refereed)
Abstract [en]

In this article, we propose a novel navigation framework that leverages a two-layered graph representation of the environment for efficient large-scale exploration, while integrating a novel uncertainty-awareness scheme to handle dynamic scene changes in previously explored areas. The framework is structured around a novel goal-oriented graph representation that consists of (i) a local sub-graph layer and (ii) a global graph layer. The local sub-graphs encode local volumetric-gain locations as frontiers, based on direct point-cloud visibility, allowing fast graph building and path planning. Additionally, the global graph is built efficiently, using node-edge information exchange only on overlapping regions of sequential sub-graphs. Different from state-of-the-art graph-based exploration methods, the proposed approach efficiently re-uses sub-graphs built in previous iterations to construct the global navigation layer. Another merit of the proposed scheme is the ability to handle scene changes (e.g., blocked pathways), adaptively updating the obstructed part of the global graph from traversable to not-traversable. This operation involves sampling the space along the oriented path segment in the global graph layer and removing the respective edges from connected nodes of the global graph in case of obstructions. As such, the exploration behavior directs the robot to follow another route in the global re-positioning phase through path-way updates in the global graph. Finally, we showcase the performance of the method both in simulation runs and deployed in a real-world scene involving a legged robot carrying camera and LiDAR sensors.
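
The global-layer update on obstruction can be pictured with a small, hypothetical sketch using networkx: a blocked path segment is marked not-traversable by removing its edge, and the next shortest-path query re-routes the robot through an alternative global path. The graph topology and node names are made up for illustration and are not the planner's actual data structures.

```python
# Illustrative sketch of the global-graph update described above: when a path
# segment turns out to be obstructed, its edge is dropped from the global graph
# and the subsequent route query yields an alternative global path.
import networkx as nx

def build_global_graph():
    G = nx.Graph()
    # short route A-B-D and a longer fallback A-C-E-D to the frontier "D"
    G.add_edges_from([("A", "B"), ("B", "D"), ("A", "C"), ("C", "E"), ("E", "D")])
    return G

def handle_obstruction(G, blocked_segment):
    """Mark a traversable segment as not-traversable by removing its edge."""
    if G.has_edge(*blocked_segment):
        G.remove_edge(*blocked_segment)

if __name__ == "__main__":
    G = build_global_graph()
    print("initial route:", nx.shortest_path(G, "A", "D"))
    handle_obstruction(G, ("B", "D"))          # e.g. a blocked pathway
    print("updated route:", nx.shortest_path(G, "A", "D"))
```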

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Autonomous navigation, GPS-denied environments, exploration, dynamic environments, aerial and legged robots
National Category
Computer Sciences; Robotics and automation; Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-109777 (URN), 10.1109/ICRA57147.2024.10610939 (DOI), 2-s2.0-85202437949 (Scopus ID)
Conference
2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, May 13-17, 2024
Note

Funder: Sustainable Underground Mining Academy SP14 project;

ISBN for host publication: 979-8-3503-8457-4;

Available from: 2024-09-09 Created: 2024-09-09 Last updated: 2025-02-05. Bibliographically approved
Saucedo, M. A. V., Patel, A., Dahlquist, N., Bai, Y., Lindqvist, B., Kanellakis, C. & Nikolakopoulos, G. (2024). TFMarker: A Tangible Fiducial Pattern for Enabling Camera-assisted Guided Landing in SubT Environments. In: 2024 24th International Conference on Control, Automation and Systems (ICCAS). Paper presented at 24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024 (pp. 1212-1217). IEEE
TFMarker: A Tangible Fiducial Pattern for Enabling Camera-assisted Guided Landing in SubT Environments
2024 (English) In: 2024 24th International Conference on Control, Automation and Systems (ICCAS), IEEE, 2024, p. 1212-1217. Conference paper, Published paper (Refereed)
Abstract [en]

Visual servoing plays a crucial role in robotics, spanning a great spectrum of applications from autonomous cars to aerial manipulation. This article proposes TFMarker, a novel tangible fiducial pattern for enabling camera-assisted guided landing of UAVs, using the visual features from color markers as the main source of information. TFMarker is structured around a 4-point fiducial marker, allowing for accurate, precise, and consistent pose estimation in different environments and lighting conditions, while also offering resilience to motion blur. The presented detection framework is based on a three-step architecture, where the first step uses Gaussian and color filtering in addition to morphological operations in order to robustly detect the markers. The second step uses the Gift Wrapping Algorithm to organize the same-color markers based on their relative positioning with respect to the off-color marker. Finally, the Perspective-n-Point optimization problem is solved in order to extract the pose (i.e., position and orientation) of the proposed pattern with respect to the vision sensor. The efficacy of the proposed scheme has been extensively validated in indoor and SubT environments for the task of autonomous landing using a custom-made UAV. The experimental results showcase the performance of the proposed method, which presents a better detection rate in both environments while retaining similar accuracy and precision to the baseline approach. For the video of the experimental evaluation please refer to the following link: https://youtu.be/Zh13OObp15Q
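
The final step, recovering the pattern pose from the four ordered marker detections, is a standard Perspective-n-Point problem; the sketch below solves it with OpenCV's solvePnP using a made-up marker layout, camera intrinsics and pixel detections, so it illustrates the step rather than reproducing the paper's detector.

```python
# Hedged sketch of the pose-recovery step: once the four markers are detected
# and ordered, the pattern pose is obtained by solving PnP. The marker layout,
# intrinsics and pixel coordinates below are illustrative values only.
import numpy as np
import cv2

# Assumed square pattern, expressed in the pattern frame (metres).
PATTERN_3D = np.array([[-0.2, -0.2, 0.0],
                       [ 0.2, -0.2, 0.0],
                       [ 0.2,  0.2, 0.0],
                       [-0.2,  0.2, 0.0]], dtype=np.float32)

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])                 # illustrative pinhole intrinsics

def pattern_pose(ordered_pixels):
    """Position and orientation of the pattern in the camera frame via PnP."""
    ok, rvec, tvec = cv2.solvePnP(PATTERN_3D, ordered_pixels, K, None)
    if not ok:
        raise RuntimeError("PnP failed")
    return rvec, tvec

if __name__ == "__main__":
    # Pixel detections of the four markers, ordered to match PATTERN_3D.
    detections = np.array([[300.0, 220.0], [340.0, 220.0],
                           [340.0, 260.0], [300.0, 260.0]], dtype=np.float32)
    rvec, tvec = pattern_pose(detections)
    print("translation (m):", tvec.ravel())     # roughly (0, 0, 6) for this synthetic square
```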

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Autonomous Drone Landing, Perception in Perceptually Degraded Conditions, Pose-based Visual Servoing
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-111328 (URN), 10.23919/ICCAS63016.2024.10773374 (DOI), 2-s2.0-85214368108 (Scopus ID)
Conference
24th International Conference on Control, Automation and Systems (ICCAS 2024), Jeju, Korea, October 29 - November 1, 2024
Note

ISBN for host publication: 978-89-93215-38-0;

Available from: 2025-01-20 Created: 2025-01-20 Last updated: 2025-01-20. Bibliographically approved
Saucedo, M. A. (2024). Towards human-inspired perception in robotic systems by leveraging computational methods for semantic understanding. (Licentiate dissertation). Luleå: Luleå University of Technology
Towards human-inspired perception in robotic systems by leveraging computational methods for semantic understanding
2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents a collection of developments and results towards human-like semantic understanding of the environment for robotic systems. Achieving a level of understanding in robots comparable to that of humans has proven to be a significant challenge: although modern sensors such as stereo cameras and neuromorphic cameras enable robots to perceive the world in a manner akin to the human senses, extracting and interpreting semantic information remains significantly less efficient by comparison. This thesis explores different aspects of the machine vision field, leveraging computational methods to address real-life challenges in semantic scene understanding, in both everyday environments and challenging unstructured environments.

The works included in this thesis present key contributions towards three main research directions. The first direction establishes novel perception algorithms for object detection and localization, aimed at real-life deployments on onboard mobile devices in perceptually degraded, unstructured environments. Along this direction, the contributions focus on the development of robust detection pipelines as well as fusion strategies for different sensor modalities, including stereo cameras, neuromorphic cameras, and LiDARs.

The second research direction establishes a computational method for leveraging semantic information into meaningful knowledge representations to enable human-inspired behaviors for the task of traversability estimation for reactive navigation. The contribution presents a novel decay function for traversability soft-image generation based on exponential decay, fusing semantic and geometric information to obtain density images that represent the pixel-wise traversability of the scene. Additionally, it presents a novel lightweight Encoder-Decoder network architecture for coarse semantic segmentation of terrain, integrated with a memory module based on a dynamic certainty filter.

Finally, the third research direction establishes the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. The research thus presents an approach to meaningfully incorporate unobserved objects as nodes into an incomplete 3D scene graph using the proposed method, Computation of Expectation based on Correlation Information (CECI), to reasonably approximate the probability distribution of the scene by learning histograms from available training data. Extensive simulations and real-life experimental setups support the results and assumptions presented in this work.

Place, publisher, year, edition, pages
Luleå: Luleå University of Technology, 2024
Series
Licentiate thesis / Luleå University of Technology, ISSN 1402-1757
National Category
Computer graphics and computer vision
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-105329 (URN), 978-91-8048-568-5 (ISBN), 978-91-8048-569-2 (ISBN)
Presentation
2024-06-17, A117, Luleå University of Technology, Luleå, 09:00 (English)
Opponent
Supervisors
Available from: 2024-05-03 Created: 2024-05-03 Last updated: 2025-02-07. Bibliographically approved
V. Saucedo, M. A., Patel, A., Sawlekar, R., Saradagi, A., Kanellakis, C., Agha-Mohammadi, A.-A. & Nikolakopoulos, G. (2023). Event Camera and LiDAR based Human Tracking for Adverse Lighting Conditions in Subterranean Environments. In: Hideaki Ishii; Yoshio Ebihara; Jun-ichi Imura; Masaki Yamakita (Ed.), 22nd IFAC World Congress: Proceedings. Paper presented at 22nd IFAC World Congress, Yokohama, Japan, July 9-14, 2023 (pp. 9257-9262). Elsevier, 56(2)
Event Camera and LiDAR based Human Tracking for Adverse Lighting Conditions in Subterranean Environments
2023 (English) In: 22nd IFAC World Congress: Proceedings / [ed] Hideaki Ishii; Yoshio Ebihara; Jun-ichi Imura; Masaki Yamakita, Elsevier, 2023, Vol. 56, no 2, p. 9257-9262. Conference paper, Published paper (Refereed)
Abstract [en]

In this article, we propose a novel LiDAR and event camera fusion modality for subterranean (SubT) environments for fast and precise object and human detection in a wide variety of adverse lighting conditions, such as low or no light, high-contrast zones, and the presence of blinding light sources. In the proposed approach, information from the event camera and LiDAR is fused to localize a human or an object-of-interest in a robot's local frame. The local detection is then transformed into the inertial frame and used to set references for a Nonlinear Model Predictive Controller (NMPC) for reactive tracking of humans or objects in SubT environments. The proposed novel fusion uses intensity filtering and K-means clustering on the LiDAR point cloud, and frequency filtering and connectivity clustering on the events induced in an event camera by the returning LiDAR beams. The centroids of the clusters in the event camera and LiDAR streams are then paired to localize reflective markers present on safety vests and signs in SubT environments. The efficacy of the proposed scheme has been experimentally validated in a real SubT environment (a mine) with a Pioneer 3AT mobile robot. The experimental results show real-time performance for human detection, and the NMPC-based controller allows for reactive tracking of a human or object of interest, even in complete darkness.
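
The centroid-pairing idea can be sketched as follows: high-intensity LiDAR returns are clustered with K-means, the cluster centroids are projected into the camera, and each projected centroid is matched to the nearest event-cluster centroid. The intensity threshold, intrinsics, cluster count and synthetic data below are assumptions for illustration, not the paper's configuration.

```python
# Hypothetical sketch of the pairing idea: bright LiDAR returns (reflective
# markers) are clustered, projected into the event camera, and matched to the
# nearest event-cluster centroid. All parameters and data are made up.
import numpy as np
from sklearn.cluster import KMeans

K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics

def lidar_marker_centroids(points_xyzi, intensity_thr=0.8, k=2):
    """Intensity filtering followed by K-means on the retained 3D points."""
    bright = points_xyzi[points_xyzi[:, 3] > intensity_thr, :3]
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(bright).cluster_centers_

def project(points_cam):
    """Pinhole projection of camera-frame 3D points to pixel coordinates."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def pair_centroids(lidar_centroids_cam, event_centroids_px):
    """Match each projected LiDAR centroid to its nearest event-cluster centroid."""
    px = project(lidar_centroids_cam)
    d = np.linalg.norm(px[:, None, :] - event_centroids_px[None, :, :], axis=2)
    return d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = np.concatenate([rng.normal([ 1, 0, 5, 0.9], 0.02, (50, 4)),   # marker 1
                            rng.normal([-1, 0, 5, 0.9], 0.02, (50, 4)),   # marker 2
                            rng.uniform(-5, 5, (500, 4)) * [1, 1, 1, 0.05]], axis=0)
    events = np.array([[400.0, 240.0], [240.0, 240.0]])   # event-cluster centroids (pixels)
    print(pair_centroids(lidar_marker_centroids(cloud), events))
```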

Place, publisher, year, edition, pages
Elsevier, 2023
Series
IFAC-PapersOnLine, ISSN 2405-8971, E-ISSN 2405-8963
Keywords
Event-based vision, Event camera and LiDAR fusion, Human detection and tracking, NMPC-based tracking
National Category
Computer graphics and computer vision; Atom and Molecular Physics and Optics
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-104460 (URN), 10.1016/j.ifacol.2023.10.008 (DOI), 001122557300481 (), 2-s2.0-85183658513 (Scopus ID)
Conference
22nd IFAC World Congress, Yokohama, Japan, July 9-14, 2023
Funder
EU, Horizon 2020, 101003591
Note

Full text license: CC BY-NC-ND

Available from: 2024-03-12 Created: 2024-03-12 Last updated: 2025-02-01. Bibliographically approved
Saucedo, M. A., Patel, A., Kanellakis, C. & Nikolakopoulos, G. (2023). Memory Enabled Segmentation of Terrain for Traversability based Reactive Navigation. In: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO). Paper presented at 2023 IEEE International Conference on Robotics and Biomimetics, ROBIO 2023, Koh Samui, Thailand, December 4-9, 2023. IEEE
Memory Enabled Segmentation of Terrain for Traversability based Reactive Navigation
2023 (English) In: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2023. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE, 2023
National Category
Computer graphics and computer vision; Robotics and automation
Research subject
Robotics and Artificial Intelligence
Identifiers
urn:nbn:se:ltu:diva-103974 (URN), 10.1109/ROBIO58561.2023.10354930 (DOI), 2-s2.0-85182558371 (Scopus ID), 979-8-3503-2570-6 (ISBN), 979-8-3503-2571-3 (ISBN)
Conference
2023 IEEE International Conference on Robotics and Biomimetics, ROBIO 2023, Koh Samui, Thailand, December 4-9, 2023
Funder
EU, Horizon 2020, 101003591
Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2025-02-05. Bibliographically approved