Publications (10 of 11)
Smets, L., Rachkovskij, D., Osipov, E., Van Leekwijck, W., Volkov, O. & Latré, S. (2025). Margin-Based Training of HDC Classifiers. Big Data and Cognitive Computing, 9(3), Article ID 68.
2025 (English) In: Big Data and Cognitive Computing, E-ISSN 2504-2289, Vol. 9, no. 3, article id 68. Article in journal (Refereed) Published
Abstract [en]

The explicit kernel transformation of input data vectors into distributed high-dimensional representations has recently been receiving increasing attention in the field of hyperdimensional computing (HDC). The main argument is that such representations allow simple final-stage classification models, often referred to as HDC classifiers. HDC models have clear advantages over resource-intensive deep learning models in use cases that require fast, energy-efficient computation for both model training and deployment. Recent approaches to training HDC classifiers have primarily focused on methods for selecting individual learning rates for incorrectly classified samples. In contrast, we propose an alternative strategy in which the decision to learn is based on a margin applied to the classifier scores. This ensures that even correctly classified samples within the specified margin contribute to training the model, which improves test performance while retaining a basic learning rule with a fixed (unit) learning rate. We propose and empirically evaluate two such strategies, incorporating either an additive or a multiplicative margin, on the standard subset of the UCI collection, consisting of 121 datasets. Our approach demonstrates superior mean accuracy compared to other HDC classifiers with iterative error-correcting training.
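
To make the margin rule concrete, below is a minimal Python sketch of the additive-margin idea. The function name, raw dot-product scoring, and the rival-class update are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def train_margin_hdc(H, y, num_classes, margin=0.2, epochs=10):
    """Toy additive-margin training of an HDC classifier.

    H: (n, D) hypervector encodings of training samples; y: (n,) class labels.
    A sample triggers an update not only when misclassified but also when its
    score lead over the best rival class is below `margin`.
    """
    W = np.zeros((num_classes, H.shape[1]))        # one class hypervector per class
    for _ in range(epochs):
        for h, c in zip(H, y):
            scores = W @ h
            masked = np.where(np.arange(num_classes) == c, -np.inf, scores)
            rival = int(np.argmax(masked))         # strongest competing class
            if scores[c] - scores[rival] < margin: # inside the margin: learn
                W[c] += h                          # fixed (unit) learning rate
                W[rival] -= h
    return W
```

With margin=0 this reduces to the usual error-driven update; a positive margin also recruits correctly classified samples whose score lead over the strongest rival is small.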

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2025
Keywords
hyperdimensional computing, HDC classifier, compositional representation, hypervector, margin classifier, confidence
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-112274 (URN); 10.3390/bdcc9030068 (DOI); 2-s2.0-105001165098 (Scopus ID)
Funder
Swedish Foundation for Strategic Research (UKR22-0024, UKR24-0014); Swedish Research Council (2022-04657); Luleå University of Technology
Note

Validated;2025;Level 1;2025-04-07 (u8);

Funder: Swedish Section for Scholars at Risk (SAR-Sweden) (GU 2022/1963); National Research Fund of Ukraine (2023.04/0082);

Full text license: CC BY

Available from: 2025-04-07. Created: 2025-04-07. Last updated: 2025-04-07. Bibliographically approved.
Schlegel, K., Rachkovskij, D. A., Osipov, E., Protzel, P. & Neubert, P. (2024). Learnable Weighted Superposition in HDC and its Application to Multi-channel Time Series Classification. In: 2024 International Joint Conference on Neural Networks (IJCNN). Paper presented at 13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024. IEEE
2024 (English) In: 2024 International Joint Conference on Neural Networks (IJCNN), IEEE, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

The vector superposition operation plays a central role in Hyperdimensional Computing (HDC), enabling compositionality of hypervectors without expanding the dimensionality, unlike concatenation. However, a problem arises when the number of superimposed vectors surpasses a certain threshold, determined by the hypervector's information capacity relative to its dimensionality. Beyond this point, cross-talk noise incrementally obscures the distinctiveness of individual hypervectors and information is lost. To address this challenge, we introduce a novel method for weighting individual hypervectors within the superposition, ensuring that the hypervectors crucial for a given task are prioritized. The weights are learned end-to-end using the backpropagation algorithm in a neural network. Our method is characterized by two key features: (1) the resulting weighting model is exceptionally compact, as the number of trainable weights equals the total number of hypervectors in the superposition; (2) the model offers enhanced explainability due to the compositional nature of its encoding. These features collectively contribute to the efficiency and effectiveness of the proposed classification approach using hyperdimensional computing. We illustrate our approach on the multi-channel time series classification task. In this framework, each channel is encoded as a hypervector-descriptor, and these are subsequently composed into a single hypervector via superposition. This superimposed vector forms the basis for training the neural-network classification model. Applying weighted superposition to this task improved classification performance compared to standard superposition or concatenation of feature vectors, especially for larger numbers of channels.
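
A minimal sketch of the weighted-superposition idea in Python, assuming a PyTorch-style setup; the module name and tensor shapes are hypothetical, and the hypervector encoding and downstream classifier of the full pipeline are omitted.

```python
import torch
import torch.nn as nn

class WeightedSuperposition(nn.Module):
    """Bundle per-channel hypervectors with one learnable weight per channel."""
    def __init__(self, num_channels: int):
        super().__init__()
        # as many trainable weights as there are hypervectors in the superposition
        self.w = nn.Parameter(torch.ones(num_channels))

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        # H: (batch, channels, dim) hypervector-descriptors, one per channel
        return torch.einsum("c,bcd->bd", self.w, H)  # weighted bundling
```

Because there is exactly one weight per channel, the trainable-weight count stays equal to the number of hypervectors in the superposition, which is what keeps the model compact.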

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
HDC/VSA, neural networks, superposition, weighted superposition, bundling
National Category
Computer Sciences; Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-110295 (URN); 10.1109/IJCNN60899.2024.10650604 (DOI); 2-s2.0-85204954431 (Scopus ID)
Conference
13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024
Note

Funder: German Federal Ministry for Economic Affairs and Climate Action; Swedish Foundation for Strategic Research (UKR22-0024, UKR24-0014); Swedish Research Council (VR SAR grant no. GU 2022/1963); LTU support grant; Swedish Research Council (VR grant no. 2022-04657);

ISBN for host publication: 978-8-3503-5931-2;

Available from: 2024-10-10. Created: 2024-10-10. Last updated: 2024-10-10. Bibliographically approved.
Kleyko, D. & Rachkovskij, D. A. (2024). On Design Choices in Similarity-Preserving Sparse Randomized Embeddings. In: 2024 International Joint Conference on Neural Networks (IJCNN). Paper presented at 13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024. IEEE
2024 (English) In: 2024 International Joint Conference on Neural Networks (IJCNN), IEEE, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Expand & Sparsify is a principle observed in anatomically similar neural circuits found in the mushroom body (insects) and the cerebellum (mammals). Sensory data are randomly projected to a much higher dimensionality (the expand part), where only a few of the most strongly excited neurons are activated (the sparsify part). This principle has been leveraged to design the FlyHash algorithm, which forms similarity-preserving sparse embeddings that have been found useful for tasks such as novelty detection, pattern recognition, and similarity search. Despite its simplicity, FlyHash involves a number of design choices, such as the preprocessing of the input data, the choice of sparsifying activation function, and the formation of the random projection matrix. In this paper, we explore the effect of these choices on the performance of similarity search with FlyHash embeddings. We find that the right combination of design choices can lead to drastic differences in search performance.
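
The Expand & Sparsify pipeline behind FlyHash can be sketched in a few lines of Python; the dimensions, matrix density, and activity level below are illustrative choices rather than the settings studied in the paper.

```python
import numpy as np

def flyhash(x, M, k):
    """Expand & Sparsify: random expansion followed by top-k winner-take-all."""
    y = M @ x                         # expand to dimension D >> len(x)
    h = np.zeros(len(y))
    h[np.argsort(y)[-k:]] = 1.0       # sparsify: keep the k most excited units
    return h

rng = np.random.default_rng(0)
d, D, k = 50, 2000, 40
M = (rng.random((D, d)) < 0.1).astype(float)  # sparse binary projection matrix
x = rng.random(d)
print(int(flyhash(x, M, k).sum()))            # -> 40 active units
```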

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
random projection, Winner-Take-All, sparse representations, hyperdimensional computing, expand & sparsify
National Category
Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-110298 (URN); 10.1109/IJCNN60899.2024.10651277 (DOI); 2-s2.0-85205027306 (Scopus ID)
Conference
13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024
Note

Funder: Horizon 2020 (839179); Swedish Foundation for Strategic Research (UKR22-0024 & UKR24-0014); Swedish Research Council Scholars at Risk Sweden (2022/1963);

ISBN for host publication: 978-8-3503-5931-2;

Available from: 2024-10-10. Created: 2024-10-10. Last updated: 2024-10-10. Bibliographically approved.
Rachkovskij, D. A. (2024). Shift-Equivariant Similarity-Preserving Hypervector Representations of Sequences. Cognitive Computation, 16, 909-923
2024 (English) In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 16, p. 909-923. Article in journal (Refereed) Published
Abstract [en]

Hyperdimensional Computing (HDC), also known as Vector-Symbolic Architectures (VSA), is a promising framework for the development of cognitive architectures and artificial intelligence systems, as well as for technical applications and emerging neuromorphic and nanoscale hardware. HDC/VSA operate with hypervectors, i.e., neural-like distributed vector representations of large fixed dimension (usually > 1000). One of the key ingredients of HDC/VSA is the set of methods for encoding various data types (from numeric scalars and vectors to graphs) as hypervectors. In this paper, we propose an approach for forming hypervectors of sequences that is equivariant with respect to sequence shift and preserves the similarity of sequences with identical elements at nearby positions. Our methods represent the sequence elements by compositional hypervectors and exploit permutations of hypervectors to represent the order of sequence elements. We experimentally explored the proposed representations on a diverse set of tasks with data in the form of symbolic strings. Although we did not use any features here (the hypervector of a sequence was formed just from the hypervectors of its symbols at their positions), the proposed approach demonstrated performance on a par with methods that exploit various features, such as subsequences. The proposed techniques were designed for the HDC/VSA model known as Sparse Binary Distributed Representations. However, they can be adapted to hypervectors in the formats of other HDC/VSA models, as well as to representing sequences of types other than symbolic strings. Directions for further research are discussed.
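
The shift-equivariance mechanism can be illustrated with a toy Python sketch that uses dense bipolar hypervectors and a cyclic shift as the permutation. Note that the paper targets Sparse Binary Distributed Representations and additionally uses position representations that keep nearby positions similar, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
codebook = {c: rng.choice([-1, 1], size=D) for c in "abcd"}  # random symbol hypervectors

def encode(seq, offset=0):
    # position i is represented by applying a fixed permutation
    # (here: cyclic shift) i times to the symbol's hypervector
    return sum(np.roll(codebook[c], offset + i) for i, c in enumerate(seq))

# shift equivariance: shifting the sequence permutes its encoding
assert np.array_equal(np.roll(encode("abcd"), 1), encode("abcd", offset=1))
```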

Place, publisher, year, edition, pages
Springer, 2024
Keywords
Brain-like distributed representations, Hyperdimensional computing, Hypervector permutation, Sequence representation, Similarity preserving transformation, Vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-104883 (URN); 10.1007/s12559-024-10258-4 (DOI); 001181130200001; 2-s2.0-85187426558 (Scopus ID)
Funder
Luleå University of Technology
Note

Validated;2024;Level 2;2024-06-07 (joosat);

Funder: Swedish Foundation for Strategic Research (UKR22-0024, GU 2022/1963);

Full text license: CC BY

Available from: 2024-03-26. Created: 2024-03-26. Last updated: 2024-11-20. Bibliographically approved.
Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Computing Surveys, 55(6), Article ID 130.
2023 (English) In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no. 6, article id 130. Article in journal (Refereed) Published
Abstract [en]

This two-part comprehensive survey is devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Notable models in the HDC/VSA family are Tensor Product Representations, Holographic Reduced Representations, Multiply-Add-Permute, Binary Spatter Codes, and Sparse Binary Distributed Representations, but there are other models too. HDC/VSA is a highly interdisciplinary field with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science, which makes it challenging to create a thorough overview of the field. However, due to a surge of new researchers joining the field in recent years, the need for a comprehensive survey has become pressing. This Part I therefore covers important aspects of the field, such as the known computational models of HDC/VSA and the transformations of various input data types to high-dimensional distributed representations. Part II of this survey [84] is devoted to applications, cognitive computing and architectures, as well as directions for future work. The survey is written to be useful for both newcomers and practitioners.
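
As a taste of the algebraic operations such models rely on, the following generic Python sketch shows binding, unbinding, and superposition with bipolar hypervectors in the Multiply-Add-Permute style; it is a textbook-style illustration, not an excerpt from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
role   = rng.choice([-1, 1], size=D)    # e.g., a "color" role hypervector
filler = rng.choice([-1, 1], size=D)    # e.g., a "red" filler hypervector
other  = rng.choice([-1, 1], size=D)

pair = role * filler                    # binding: elementwise product (self-inverse)
assert np.array_equal(pair * role, filler)   # unbinding recovers the filler exactly

composite = pair + other                # superposition (bundling) of two hypervectors
noisy = composite * role                # unbinding a bundle yields a noisy filler
print(np.dot(noisy, filler) / D)        # ~1.0: similar to the filler despite cross-talk
```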

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-95133 (URN); 10.1145/3538531 (DOI); 000893245700022; 2-s2.0-85146491559 (Scopus ID)
Funder
EU, Horizon 2020 (839179); Swedish Foundation for Strategic Research (UKR22-0024)
Note

Validated;2023;Level 2;2023-01-03 (joosat);

Funder: AFOSR (FA9550-19-1-0241); National Academy of Sciences of Ukraine (0120U000122, 0121U000016, 0117U002286); Ministry of Education and Science of Ukraine (0121U000228, 0122U000818);

Available from: 2023-01-03. Created: 2023-01-03. Last updated: 2024-03-07. Bibliographically approved.
Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. ACM Computing Surveys, 55(9), Article ID 175.
2023 (English) In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no. 9, article id 175. Article in journal (Refereed) Published
Abstract [en]

This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Holographic Reduced Representations [321, 326] is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family; however, for the sake of consistency, we use HDC/VSA to refer to the field. Part I of this survey [222] covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain; however, we also cover other applications to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023
Keywords
analogical reasoning, applications, Artificial intelligence, binary spatter codes, cognitive architectures, cognitive computing, distributed representations, geometric analogue of holographic reduced representations, holographic reduced representations, hyperdimensional computing, machine learning, matrix binding of additive terms, modular composite representations, multiply-add-permute, sparse binary distributed representations, sparse block codes, tensor product representations, vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-95673 (URN); 10.1145/3558000 (DOI); 000924882300001; 2-s2.0-85147845869 (Scopus ID)
Funder
EU, Horizon 2020 (839179); Swedish Foundation for Strategic Research (UKR22-0024)
Note

Validated;2023;Level 2;2023-02-21 (joosat);

Funder: AFOSR (FA9550-19-1-0241); National Academy of Sciences of Ukraine (grant no. 0120U000122, 0121U000016, 0122U002151, 0117U002286); Ministry of Education and Science of Ukraine (grant no. 0121U000228, 0122U000818)

Available from: 2023-02-21. Created: 2023-02-21. Last updated: 2024-03-28. Bibliographically approved.
Tyshchuk, O. V., Desiateryk, O. O., Volkov, O. E., Revunova, E. G. & Rachkovskij, D. (2022). A Linear System Output Transformation for Sparse Approximation. Cybernetics and Systems Analysis, 58(5), 840-850
2022 (English) In: Cybernetics and Systems Analysis, ISSN 1060-0396, E-ISSN 1573-8337, Vol. 58, no. 5, p. 840-850. Article in journal (Refereed) Published
Abstract [en]

We propose an approach that provides a stable transformation of the output of a linear system into the output of a system with a desired basis. The matrix of basis functions of the linear system has a large condition number, and its singular values gradually decrease to zero. Two types of methods for stable output transformation are developed, using matrix approximations based on the truncated Singular Value Decomposition and on Random Projection with different types of random matrices. We show that using the output transformation as a preprocessing step increases the accuracy of solving sparse approximation problems. An example of using the method to determine the activity of weak radiation sources is considered.
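
A hedged Python sketch of the truncated-SVD variant: the measured output is filtered through the top-r left singular subspace of the ill-conditioned matrix before sparse recovery. The function and the toy system below are illustrative and do not reproduce the paper's transformation to a desired basis.

```python
import numpy as np

def svd_output_transform(A, y, r):
    """Project measured output y ~= A @ x onto the top-r left singular
    subspace of A, suppressing noise along small-singular-value directions."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    Ur = U[:, :r]
    return Ur @ (Ur.T @ y)

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)  # ill-conditioned basis matrix
x_true = np.zeros(12); x_true[[2, 7]] = 1.0                # sparse ground truth
y = A @ x_true + 1e-3 * rng.standard_normal(40)            # noisy system output
y_stable = svd_output_transform(A, y, r=6)                 # preprocessed for sparse recovery
```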

Place, publisher, year, edition, pages
Springer, 2022
Keywords
discrete ill-posed problem, random projection, singular value decomposition, sparse approximation
National Category
Computational Mathematics; Control Engineering
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94998 (URN); 10.1007/s10559-022-00517-3 (DOI); 000895709900018; 2-s2.0-85143236339 (Scopus ID)
Note

Validated;2023;Level 2;2023-01-01 (marisr);

Translated from Kibernetyka ta Systemnyi Analiz, No. 5, September–October, 2022, pp. 189–202.

Available from: 2022-12-27. Created: 2022-12-27. Last updated: 2024-12-06. Bibliographically approved.
Rachkovskij, D. A. & Kleyko, D. (2022). Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences. In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings. Paper presented at IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022. IEEE
2022 (English) In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

Hyperdimensional computing (HDC), also known as vector symbolic architectures (VSA), is a computing framework used within artificial intelligence and cognitive computing that operates with distributed vector representations of large fixed dimensionality. A critical step in designing HDC/VSA solutions is to obtain such representations from the input data. Here, we focus on sequences, a widespread data type, and propose their transformation to distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant with respect to sequence shift. These properties are enabled by forming representations of sequence positions using recursive binding and superposition operations. The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity. The obtained results are on a par with more sophisticated approaches from the literature. The proposed transformation was designed for the HDC/VSA model known as Fourier Holographic Reduced Representations; however, it can be adapted to some other HDC/VSA models.
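
A toy Python sketch of recursive binding in the Fourier Holographic Reduced Representations style, where hypervectors are complex phasors and binding is the elementwise product; building position codes as integer powers of a single random phasor is an illustrative assumption, not necessarily the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096

def rand_phasor():
    # FHRR hypervector: unit-magnitude complex phasors; binding = elementwise product
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

z = rand_phasor()                        # base hypervector encoding "advance one position"
items = {c: rand_phasor() for c in "abc"}

def encode(seq, offset=0):
    pos = z ** offset                    # position code for the starting offset
    s = np.zeros(D, dtype=complex)
    for c in seq:
        s += items[c] * pos              # bind each symbol to its position code
        pos = pos * z                    # recursive binding: derive the next position
    return s

# shift equivariance: shifting the whole sequence binds its encoding with z
print(np.allclose(encode("abc") * z, encode("abc", offset=1)))  # -> True
```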

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
data structures, distributed representation, hyperdimensional computing, hypervector, recursive binding, sequence representation, shift equivariance, similarity preserving transformation, vector symbolic architectures
National Category
Computer Sciences; Information Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94150 (URN); 10.1109/IJCNN55064.2022.9892462 (DOI); 000867070904096; 2-s2.0-85137519384 (Scopus ID)
Conference
IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Funder
EU, Horizon 2020 (839179); Swedish Foundation for Strategic Research (UKR22-0024)
Note

Funder: AFOSR (FA9550-19-1-0241), Intel’s THWAI program, National Academy of Sciences of Ukraine (0121U000016), Ministry of Education and Science of Ukraine (0121U000228, 0122U000818);

ISBN for host publication: 978-1-7281-8671-9

Available from: 2022-11-18. Created: 2022-11-18. Last updated: 2023-05-08. Bibliographically approved.
Rachkovskij, D. A. (2022). Representation of spatial objects by shift-equivariant similarity-preserving hypervectors. Neural Computing & Applications, 34(24), 22387-22403
2022 (English) In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 34, no. 24, p. 22387-22403. Article in journal (Refereed) Published
Abstract [en]

Hyperdimensional Computing (HDC), also known as Vector-Symbolic Architectures (VSA), is an approach that has been proposed to combine the advantages of distributed vector representations and symbolic structured data representations in Artificial Intelligence, Machine Learning, and Pattern Recognition problems. HDC/VSA operate with hypervectors, i.e., brain-like distributed representations of large fixed dimension. The key problem of HDC/VSA is how to transform data of various types into hypervectors. In this paper, we propose a novel approach for forming hypervectors of spatial objects, such as images, that provides both equivariance with respect to the shift of objects and preserves the similarity of objects described by similar features at nearby positions. In contrast to known hypervector formation methods, we represent the features by compositional hypervectors and exploit permutations of hypervectors to represent the positions of features. We experimentally explored the proposed approach in tasks that exploit various descriptions of two-dimensional (2D) images. In terms of standard accuracy measures such as error rate or mean average precision, our results are on a par with or better than those of other methods and are obtained without feature learning. The proposed techniques were designed for the HDC/VSA model known as Sparse Binary Distributed Representations. However, they can be adapted to hypervectors in the formats of other HDC/VSA models, as well as to representing spatial objects other than 2D images.
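
The two-dimensional analogue of the permutation idea can be sketched in Python with cyclic shifts standing in for two independent axis permutations; the strides and the dense integer summation are toy assumptions, whereas the paper uses Sparse Binary Distributed Representations and similarity-preserving position codes.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
# sparse binary feature hypervectors (SBDR-style), ~1% active components
features = {f: (rng.random(D) < 0.01).astype(np.uint32) for f in ("edge", "corner")}

X_STEP, Y_STEP = 1, 101   # distinct cyclic strides standing in for two axis permutations

def encode(points):
    """Superimpose feature hypervectors permuted according to their (x, y) positions."""
    v = np.zeros(D, dtype=np.uint32)
    for f, x, y in points:
        v += np.roll(features[f], x * X_STEP + y * Y_STEP)
    return v

img     = encode([("edge", 3, 2), ("corner", 7, 5)])
shifted = encode([("edge", 4, 3), ("corner", 8, 6)])   # same image shifted by (1, 1)
assert np.array_equal(np.roll(img, X_STEP + Y_STEP), shifted)
```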

Place, publisher, year, edition, pages
Springer Nature, 2022
Keywords
Hyperdimensional computing, Vector symbolic architectures, Spatial object representation, Neural-like distributed representations, Shift equivariance, Image classification and retrieval
National Category
Information Systems; Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-93082 (URN); 10.1007/s00521-022-07619-1 (DOI); 000850761000006; 2-s2.0-85137526660 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, UKR22-0024
Note

Validated;2022;Level 2;2022-11-29 (hanlid);

Funder: National Academy of Sciences of Ukraine (0120U000122, 0121U000016, 0117U002286); Ministry of Education and Science of Ukraine (0121U000228, 0122U000818)

Available from: 2022-09-19. Created: 2022-09-19. Last updated: 2022-11-29. Bibliographically approved.
Volkov, O., Komar, M., Rachkovskij, D. & Volosheniuk, D. (2022). Technology of Autonomous Take-Off and Landing for the Modern Flight and Navigation Complex of an Unmanned Aerial Vehicle. Cybernetics and Systems Analysis, 58(6), 882-888
2022 (English) In: Cybernetics and Systems Analysis, ISSN 1060-0396, E-ISSN 1573-8337, Vol. 58, no. 6, p. 882-888. Article in journal (Refereed) Published
Place, publisher, year, edition, pages
Springer Nature, 2022
National Category
Control Engineering
Identifiers
urn:nbn:se:ltu:diva-96473 (URN); 10.1007/s10559-023-00521-1 (DOI); 000927817900001; 2-s2.0-85145743550 (Scopus ID)
Available from: 2023-04-13. Created: 2023-04-13. Last updated: 2023-07-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3414-5334
