Evaluating Complex Sparse Representation of Hypervectors for Unsupervised Machine Learning
Centre for Data Analytics and Cognition at La Trobe University, Melbourne, Australia.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0003-0069-640X
Centre for Data Analytics and Cognition at La Trobe University, Melbourne, Australia.
Centre for Data Analytics and Cognition at La Trobe University, Melbourne, Australia.
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

The increasing use of Vector Symbolic Architectures (VSA) in machine learning has contributed towards energy-efficient computation, short training cycles and improved performance. A further advancement of VSA is to leverage sparse representations, where the VSA-encoded hypervectors are sparsified to represent receptive-field properties when encoding sensory inputs. The hyperseed algorithm is an unsupervised machine learning algorithm based on VSA for fast learning of a topology-preserving feature map of unlabelled data. In this paper, we implement two methods of sparse block-codes in the hyperseed algorithm: selecting the maximum element of each block, and selecting a random element of each block, as the nonzero element. Finally, the sparsified hyperseed algorithm is empirically evaluated for performance on three distinct benchmark tasks: Iris classification, classification and visualisation of synthetic datasets from the Fundamental Clustering Problems Suite, and language classification using n-gram statistics.
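The two block-sparsification strategies described in the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function names, the dense-input assumption, and the block layout (a hypervector partitioned into contiguous equal-sized blocks) are our assumptions.

```python
import numpy as np

def sparsify_max(hv, block_size):
    """Keep only the maximum element of each block, zeroing the rest.
    Assumes len(hv) is a multiple of block_size (illustrative only)."""
    blocks = hv.reshape(-1, block_size)
    out = np.zeros_like(blocks)
    rows = np.arange(blocks.shape[0])
    idx = np.argmax(blocks, axis=1)          # winner per block
    out[rows, idx] = blocks[rows, idx]
    return out.ravel()

def sparsify_random(hv, block_size, seed=None):
    """Keep one randomly chosen element of each block as the nonzero element."""
    rng = np.random.default_rng(seed)
    blocks = hv.reshape(-1, block_size)
    out = np.zeros_like(blocks)
    rows = np.arange(blocks.shape[0])
    idx = rng.integers(0, block_size, size=blocks.shape[0])
    out[rows, idx] = blocks[rows, idx]
    return out.ravel()

# Example: a 12-dimensional hypervector with block size 4 keeps
# exactly one element per block, i.e. at most 3 nonzeros.
hv = np.random.default_rng(0).normal(size=12)
sparse = sparsify_max(hv, 4)
print(np.count_nonzero(sparse))
```

Both variants yield the same sparsity level (one active element per block); they differ only in whether the surviving element is chosen deterministically by magnitude or at random.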

Place, publisher, year, edition, pages
IEEE, 2022.
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
URN: urn:nbn:se:ltu:diva-94787
DOI: 10.1109/IJCNN55064.2022.9892981
ISI: 000867070908091
Scopus ID: 2-s2.0-85140777444
OAI: oai:DiVA.org:ltu-94787
DiVA, id: diva2:1717489
Conference
IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Note

ISBN for host publication: 978-1-7281-8671-9

Available from: 2022-12-08. Created: 2022-12-08. Last updated: 2023-05-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Osipov, Evgeny
