Model-Free Event-Triggered Optimal Consensus Control of Multiple Euler-Lagrange Systems via Reinforcement Learning
Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science; School of Electrical and Data Engineering, University of Technology Sydney, Australia; Department of Computer Science and Technology, Fuzhou University, Fuzhou 350116, China. ORCID iD: 0000-0003-1902-9877
2021 (English). In: IEEE Transactions on Network Science and Engineering, E-ISSN 2327-4697, Vol. 8, no. 1, pp. 246-258. Article in journal (Refereed). Published.
Abstract [en]

This paper develops a model-free approach to the event-triggered optimal consensus of multiple Euler-Lagrange systems (MELSs) via reinforcement learning (RL). First, an augmented system is constructed by defining a pre-compensator to circumvent the dependence on system dynamics. Second, the Hamilton-Jacobi-Bellman (HJB) equations are applied to derive the model-free event-triggered optimal controller. Third, we present a policy iteration (PI) algorithm derived from RL, which converges to the optimal policy. The value function of each agent is then represented through a neural network to realize the PI algorithm, and gradient descent is used to update the neural network only at a series of discrete event-triggered instants. A specific form of the event-triggered condition is then proposed, under which the closed-loop augmented system is guaranteed to be uniformly ultimately bounded (UUB) and Zeno behavior is excluded. Finally, the validity of the approach is verified by a simulation example.
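The pipeline in the abstract can be illustrated with a deliberately tiny sketch. Here a single scalar system x_{k+1} = a*x_k + b*u_k with stage cost x² + u² stands in for one agent's augmented Euler-Lagrange dynamics; the critic is a quadratic V(x) ≈ w·x², and both the control input and the critic's gradient-descent step are refreshed only at event-triggered instants. All names, dynamics, gains, and thresholds are illustrative assumptions, not the paper's formulation, and (unlike the paper's model-free scheme) the model coefficients are used directly in the policy-improvement step for brevity.

```python
# Illustrative scalar stand-in: a, b, eta, thresh are assumed values.
a, b = 0.9, 0.5

def simulate(steps=200, eta=0.05, thresh=0.05):
    """Critic weight w (V(x) ~ w*x^2) and control u are refreshed only
    at event instants, i.e. when the state has drifted far enough from
    the value sampled at the last trigger."""
    x, w = 1.0, 1.0      # initial state and critic weight
    x_hat, u = x, 0.0    # last sampled state, held control input
    events = 0
    for _ in range(steps):
        # Event-triggered condition on the measurement error.
        if (x - x_hat) ** 2 > thresh * x * x:
            x_hat, events = x, events + 1
            # Policy improvement: u = argmin_u [x^2 + u^2 + w*(a*x + b*u)^2]
            u = -w * a * b * x_hat / (1.0 + w * b * b)
            # Critic update: one gradient step on the squared Bellman residual.
            x_next = a * x_hat + b * u
            td = x_hat ** 2 + u ** 2 + w * x_next ** 2 - w * x_hat ** 2
            w -= eta * td * (x_next ** 2 - x_hat ** 2)
        x = a * x + b * u  # plant keeps evolving between events
    return x, w, events
```

Between triggers the critic and actor stay fixed while the plant runs on the held input, so the threshold trades update frequency against tracking accuracy; this mirrors why the closed loop is guaranteed UUB rather than asymptotically stable.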

Place, publisher, year, edition, pages
IEEE, 2021. Vol. 8, no 1, p. 246-258
Keywords [en]
Reinforcement learning, event-triggered control, Euler-Lagrange system, augmented system
National Category
Computer Sciences
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-81463
DOI: 10.1109/TNSE.2020.3036604
ISI: 000631202700021
Scopus ID: 2-s2.0-85096869102
OAI: oai:DiVA.org:ltu-81463
DiVA, id: diva2:1502244
Note

Validated; 2021; Level 2; 2021-04-06 (alebob)

Funders: National Key Research and Development Program of China (2018YFC0809302); National Natural Science Foundation of China (61751305, 61673176); Program of Shanghai Academic Research Leader (20XD1401300); Programme of Introducing Talents of Discipline to Universities (B17017)

Available from: 2020-11-19. Created: 2020-11-19. Last updated: 2024-01-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Vasilakos, Athanasios V.

