Evaluating Complex Sparse Representation of Hypervectors for Unsupervised Machine Learning
2022 (English) In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed)
Abstract [en]
The increasing use of Vector Symbolic Architectures (VSA) in machine learning has contributed towards energy-efficient computation, short training cycles and improved performance. A further advancement of VSA is to leverage sparse representations, where the VSA-encoded hypervectors are sparsified to represent receptive field properties when encoding sensory inputs. The Hyperseed algorithm is an unsupervised machine learning algorithm based on VSA for fast learning of a topology-preserving feature map of unlabelled data. In this paper, we implement two sparse block-code methods in the Hyperseed algorithm: selecting the maximum element of each block, and selecting a random element of each block, as the nonzero element. Finally, the sparsified Hyperseed algorithm is empirically evaluated for performance on three distinct benchmark tasks: Iris classification, classification and visualisation of synthetic datasets from the Fundamental Clustering Problems Suite, and language classification using n-gram statistics.
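As a rough illustration of the two sparsification schemes mentioned in the abstract (exactly one nonzero element retained per block, chosen either as the block maximum or at random), the following minimal Python sketch operates on a real-valued hypervector. The function name, block layout and use of NumPy are assumptions for illustration only; the paper itself works with complex sparse representations, and this is not the authors' implementation.

import numpy as np

def sparsify_block_code(hv, block_size, method="max", rng=None):
    # Split the hypervector into equal-sized blocks and keep one nonzero per block.
    rng = np.random.default_rng() if rng is None else rng
    assert hv.size % block_size == 0, "hypervector length must be divisible by block size"
    blocks = hv.reshape(-1, block_size)              # one row per block
    if method == "max":
        idx = np.argmax(blocks, axis=1)              # keep the largest element of each block
    else:
        idx = rng.integers(0, block_size, size=blocks.shape[0])  # keep a random element
    rows = np.arange(blocks.shape[0])
    sparse = np.zeros_like(blocks)
    sparse[rows, idx] = blocks[rows, idx]
    return sparse.reshape(-1)

# Example: a 1000-dimensional hypervector split into 100 blocks of 10 elements
hv = np.random.standard_normal(1000)
sparse_hv = sparsify_block_code(hv, block_size=10, method="max")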
Place, publisher, year, edition, pages: IEEE, 2022.
National Category
Computer Sciences
Research subject: Dependable Communication and Computation Systems
Identifiers: URN: urn:nbn:se:ltu:diva-94787; DOI: 10.1109/IJCNN55064.2022.9892981; ISI: 000867070908091; Scopus ID: 2-s2.0-85140777444; OAI: oai:DiVA.org:ltu-94787; DiVA id: diva2:1717489
Conference: IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Note: ISBN for host publication: 978-1-7281-8671-9
Available from: 2022-12-08 Created: 2022-12-08 Last updated: 2023-05-08 Bibliographically approved