Multi-agent Exploration with Reinforcement Learning
Department of Electrical and Computer Engineering, University of Patras, Greece.
Department of Electrical and Computer Engineering, University of Patras, Greece.
Luleå tekniska universitet, Institutionen för system- och rymdteknik, Signaler och system. ORCID iD: 0000-0003-0126-1897
Department of Electrical and Computer Engineering, University of Patras, Greece.
2022 (English). In: 2022 30th Mediterranean Conference on Control and Automation (MED), IEEE, 2022, pp. 630-635. Conference paper, published paper (peer-reviewed)
Abstract [en]

Modern robots are used in many exploration, search, and rescue applications. They are typically coordinated by human operators and collaborate with inspection or rescue teams. Over time, robots (agents) have become more sophisticated and more autonomous, operating in complex environments. The purpose of this paper is therefore to present an approach for autonomous multi-agent coordination for exploring and covering unknown environments. The method we suggest combines reinforcement learning with multiple neural networks (deep learning) to plan the path of each agent separately and achieve collaborative behavior among them. Specifically, we have applied two recent techniques, namely the target neural network and prioritized experience replay, which have been proven to stabilize and accelerate the training process. Agents should also avoid obstacles (walls, objects, etc.) throughout the exploration without prior information or knowledge about the environment; thus, only local information available at any time instant is used to make each agent's decision. Furthermore, two neural networks are used for generating actions, accompanied by an extra neural network with a switching logic that chooses between them. The exploration of the unknown environment is conducted in a two-dimensional (2D) model using multiple agents on various maps, ranging from small to large. Finally, the efficiency of the exploration is investigated for different numbers of agents and various types of neural networks.
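The abstract names two stabilization techniques, the target network and prioritized experience replay, without detailing them. The sketch below is a minimal illustration of both ideas in isolation, not the authors' implementation; all class names, parameters (`alpha`, `tau`), and the list-based storage are assumptions made for brevity.

```python
import random

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (illustrative sketch).

    Transitions with larger TD error receive larger priorities and are
    therefore sampled more often, which tends to speed up DQN training.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew the sampling
        self.buffer = []        # stored (state, action, reward, next_state) tuples
        self.priorities = []    # one priority per stored transition

    def add(self, transition, td_error=1.0):
        # Evict the oldest transition when full; new ones get a priority
        # derived from their TD error so they are revisited at least once.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority.
        total = sum(self.priorities)
        weights = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, indices, td_errors):
        # After a training step, refresh priorities with the new TD errors.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha


def sync_target(online_weights, target_weights, tau=0.01):
    """Soft target-network update: the target slowly tracks the online
    network, keeping the bootstrapped Q-targets stable during training."""
    return [tau * w + (1 - tau) * t for w, t in zip(online_weights, target_weights)]
```

In a full DQN loop, each agent would push its local observations into such a buffer, train the online network on prioritized batches, and periodically (or softly, as here) synchronize the target network.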

Place, publisher, year, edition, pages
IEEE, 2022, pp. 630-635
Series
Mediterranean Conference on Control and Automation (MED), ISSN 2325-369X, E-ISSN 2473-3504
HSV category
Research subject
Robotics and Artificial Intelligence
Identifiers
URN: urn:nbn:se:ltu:diva-92639
DOI: 10.1109/MED54222.2022.9837168
ISI: 000854013700103
Scopus ID: 2-s2.0-85136300911
OAI: oai:DiVA.org:ltu-92639
DiVA, id: diva2:1689466
Conference
30th Mediterranean Conference on Control and Automation (MED), Vouliagmeni, Greece, June 28 - July 1, 2022
Note

ISBN for host publication: 978-1-6654-0673-4 (electronic), 978-1-6654-0674-1 (print)

Available from: 2022-08-23 Created: 2022-08-23 Last updated: 2025-10-21, bibliographically checked

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
Scopus

Person

Nikolakopoulos, George
