Few-shot Federated Learning in Randomized Neural Networks via Hyperdimensional Computing
Dept. of Information Engineering, Electronics and Telecommunications (DIET), University of Rome “La Sapienza”, Via Eudossiana 18, 00184 Rome, Italy.
Dept. of Information Engineering, Electronics and Telecommunications (DIET), University of Rome “La Sapienza”, Via Eudossiana 18, 00184 Rome, Italy.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0003-0069-640X
University of California, Berkeley, USA; Research Institutes of Sweden, Kista, Sweden.
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, published paper (refereed).
Abstract [en]

The recent interest in federated learning has spurred the investigation of efficient models deployable in scenarios with strict communication and computational constraints. Furthermore, the inherent privacy concerns in decentralized and federated learning call for an efficient distribution of information in a network of interconnected agents. We therefore propose a novel distributed classification solution based on shallow randomized networks equipped with a compression mechanism used for sharing the local model in the federated context. We make extensive use of hyperdimensional computing both in the local network model and in the compressed communication protocol, which is enabled by the binding and superposition operations. The accuracy, precision, and stability of the proposed approach are demonstrated on a collection of datasets with several network topologies and different data partitioning schemes.
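As an illustration of the binding and superposition operations the abstract refers to, here is a minimal NumPy sketch of how bipolar hypervectors can bundle several key–value associations into one vector and later recover them. The dimensionality, bipolar encoding, and all variable names are illustrative assumptions, not the paper's actual model or communication protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative choice)

# Random bipolar role ("key") and filler ("value") hypervectors.
keys = rng.choice([-1, 1], size=(3, D))
vals = rng.choice([-1, 1], size=(3, D))

# Binding: element-wise multiplication associates each key with its value.
bound = keys * vals

# Superposition: summing the bound pairs and taking the sign bundles all
# three associations into a single bipolar vector of the same size D.
memory = np.sign(bound.sum(axis=0)).astype(int)

# Unbinding: multiplying by a key (its own inverse under element-wise
# multiplication) recovers a noisy copy of the associated value.
recovered = keys[0] * memory
similarity = recovered @ vals[0] / D  # high for the stored value
distractor = recovered @ vals[1] / D  # near zero for an unrelated value
```

The key property used here is that binding is self-inverse for bipolar vectors and that the superposed `memory` stays the same size no matter how many pairs it bundles, which is what makes it attractive as a fixed-size compressed representation.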

Place, publisher, year, edition, pages
IEEE, 2022.
National Category
Computer Sciences; Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
URN: urn:nbn:se:ltu:diva-94152
DOI: 10.1109/IJCNN55064.2022.9892007
ISI: 000867070901017
Scopus ID: 2-s2.0-85140772045
OAI: oai:DiVA.org:ltu-94152
DiVA id: diva2:1711790
Conference
IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Note

ISBN for host publication: 978-1-7281-8671-9

Available from: 2022-11-18. Created: 2022-11-18. Last updated: 2023-05-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Osipov, Evgeny
