Publications (10 of 122)
Smets, L., Rachkovskij, D., Osipov, E., Volkov, O., Van Leekwijck, W. & Latré, S. (2025). Classification Performance of Confidence-Driven Centroids. Cybernetics and Systems Analysis, 61(2), 289-304
2025 (English)In: Cybernetics and Systems Analysis, ISSN 1060-0396, E-ISSN 1573-8337, Vol. 61, no 2, p. 289-304Article in journal (Refereed) Published
Abstract [en]

Hyperdimensional computing (HDC) is a powerful algorithmic framework at the intersection of symbolic and neural-network artificial intelligence. In particular, HDC has received significant attention as a suitable candidate for low-resource machine learning tasks, exemplified by the wearable Internet of Things. To solve classification tasks, HDC transforms input data into a high-dimensional space and uses simple component-wise vector operations to create, train, and operate the classification model. While the classical centroid model has often been used in HDC, iteratively updating centroids with wrongly classified samples improves classification performance. In this paper, using a large and varied collection of 121 UCI datasets, we explore how confidence-driven training of centroids formed from HDC representations further improves classification accuracy.
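As a minimal sketch (not the authors' code), the centroid-with-error-driven-updates scheme described in the abstract can be illustrated as follows; the random-projection encoder, dimensionality, and toy data are assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_features, n_classes = 1000, 4, 2

# Random bipolar projection: one simple way to map inputs into HD space
# (an assumption of this sketch; HDC offers many encoding schemes).
proj = rng.choice([-1.0, 1.0], size=(n_features, D))

def encode(x):
    """Map an input vector to a bipolar hypervector."""
    return np.sign(x @ proj)

def train_centroids(X, y, epochs=5):
    H = np.stack([encode(x) for x in X])
    # Classical centroid model: per-class sum of encoded training samples.
    C = np.zeros((n_classes, D))
    for h, c in zip(H, y):
        C[c] += h
    # Iterative refinement: update centroids only with misclassified samples.
    for _ in range(epochs):
        for h, c in zip(H, y):
            pred = int(np.argmax(C @ h))
            if pred != c:
                C[c] += h       # pull the true class towards the sample
                C[pred] -= h    # push the wrongly predicted class away
    return C

X = rng.normal(size=(40, n_features))
y = (X[:, 0] > 0).astype(int)          # toy labels for demonstration
C = train_centroids(X, y)
accuracy = np.mean([np.argmax(C @ encode(x)) == c for x, c in zip(X, y)])
```

The confidence-driven variant studied in the paper additionally conditions updates on classifier confidence; the sketch above shows only the baseline error-driven rule.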

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
centroid, linear classifier, non-linear data transformation, hyperdimensional computing, vector symbolic architecture
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-112622 (URN); 10.1007/s10559-025-00768-w (DOI); 001476154700001 (); 2-s2.0-105003685198 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, UKR22-0024, UKR24-0014; Swedish Research Council, GU 2022/1963, 2022-04657; Luleå University of Technology
Note

Validated; 2025; Level 1; 2025-05-14 (u8)

Funder: Flemish Government; Scholars at Risk (SAR) (GU 2022/1963);

This article has been translated from Kibernetyka ta Systemnyi Analiz, vol. 61, no. 2, March-April 2025, pp. 142-160, DOI: 10.34229/KCA2522-9664.25.2.13

Available from: 2025-05-14. Created: 2025-05-14. Last updated: 2025-10-21. Bibliographically approved.
Sumanasena, V., de Silva, D., Osipov, E., Rachkovskij, D. A. & Gayler, R. W. (2025). Implementing Holographic Reduced Representations for Spiking Neural Networks. IEEE Access, 13, 116606-116620
2025 (English)In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 116606-116620Article in journal (Refereed) Published
Abstract [en]

Neuromorphic computing surpasses conventional von Neumann architectures in terms of energy efficiency, parallelisation, scalability, and stochasticity. Given their inherent structure of neurons and synapses, neuromorphic computers can directly implement spiking neural networks. Despite these advantages, neuromorphic computing applications have hitherto been limited to benchmark datasets and empirical demonstrations. This is primarily due to the lack of a unifying computing framework that provides a middle-layer abstraction between the actual neuromorphic hardware and the required application functionality. Drawing on their distributed vector representation of symbolic and numerical data structures and a robust dual interface with diverse operational primitives, Vector Symbolic Architectures (VSA) have been positioned as a suitable candidate to fill this middle-layer void. In this paper, we explore the potential of VSA as an intermediary abstraction layer for advancing practical neuromorphic computing applications. We introduce a novel vectorised framework that efficiently processes parallel streams of spiking data by combining and computing them through VSA for real-time downstream learning tasks, leveraging spike latency encoding. Our implementation utilises containerised methods within Lava, an open-source framework for neuromorphic computing.
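The spike-latency encoding mentioned in the abstract can be illustrated with a minimal sketch; the function names, the `[0, 1]` normalization, and the time scale are assumptions of this example, not the paper's API:

```python
import numpy as np

def latency_encode(x, t_max=100.0):
    """Time-to-first-spike coding: stronger inputs spike earlier.

    Assumes x is normalized to [0, 1]; returns spike times in [0, t_max],
    where 0 means 'fires immediately' and t_max means 'fires last'."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return (1.0 - x) * t_max

def latency_decode(t, t_max=100.0):
    """Invert the encoding: earlier spikes decode to larger values."""
    return 1.0 - np.asarray(t, dtype=float) / t_max

values = np.array([0.0, 0.25, 1.0])
times = latency_encode(values)        # strongest input fires at t = 0
recovered = latency_decode(times)     # round-trips back to the inputs
```

This linear mapping is only one possible latency code; the point is that value information is carried in spike timing, which VSA operations can then act on downstream.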

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2025
Keywords
Spiking neural networks (SNN), vector symbolic architecture (VSA), holographic reduced representations, neuromorphic computing
National Category
Computer Sciences; Networked, Parallel and Distributed Computing
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-114170 (URN); 10.1109/ACCESS.2025.3580582 (DOI); 001531374800002 (); 2-s2.0-105008652088 (Scopus ID)
Funder
Swedish Research Council, 2022-04657, 2022-06725, 2022/1963; Swedish Foundation for Strategic Research, UKR22-0024, UKR24-0014
Note

Validated; 2025; Level 2; 2025-08-05 (u5)

Full text license: CC BY-NC-ND 4.0;

For funding information, see: https://ieeexplore.ieee.org/document/11037669

Available from: 2025-08-05. Created: 2025-08-05. Last updated: 2025-11-28. Bibliographically approved.
Smets, L., Rachkovskij, D., Osipov, E., Van Leekwijck, W., Volkov, O. & Latré, S. (2025). Margin-Based Training of HDC Classifiers. Big Data and Cognitive Computing, 9(3), Article ID 68.
2025 (English)In: Big Data and Cognitive Computing, E-ISSN 2504-2289, Vol. 9, no 3, article id 68Article in journal (Refereed) Published
Abstract [en]

The explicit kernel transformation of input data vectors into distributed high-dimensional representations has recently received increasing attention in the field of hyperdimensional computing (HDC). The main argument is that such representations allow simple final-stage classification models, often referred to as HDC classifiers, to perform well. HDC models have obvious advantages over resource-intensive deep learning models for use cases requiring fast, energy-efficient computation in both model training and deployment. Recent approaches to training HDC classifiers have primarily focused on various methods for selecting individual learning rates for incorrectly classified samples. In contrast to these methods, we propose an alternative strategy in which the decision to learn is based on a margin applied to the classifier scores. This approach ensures that even correctly classified samples within the specified margin are used to train the model, leading to improved test performance while maintaining a basic learning rule with a fixed (unit) learning rate. We propose and empirically evaluate two such strategies, incorporating either an additive or a multiplicative margin, on the standard subset of the UCI collection, consisting of 121 datasets. Our approach demonstrates superior mean accuracy compared to other HDC classifiers with iterative error-correcting training.
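The additive-margin idea can be sketched as follows; this is an illustration of the principle under assumed toy data and a hand-picked margin, not the paper's implementation:

```python
import numpy as np

def margin_train(H, y, n_classes, margin=0.2, epochs=10):
    """Centroid training with an additive margin: a sample triggers an
    update not only when it is misclassified, but also whenever the correct
    class's score fails to beat the best rival score by at least `margin`.
    A fixed (unit) learning rate is used throughout."""
    C = np.zeros((n_classes, H.shape[1]))
    classes = np.arange(n_classes)
    for _ in range(epochs):
        for h, c in zip(H, y):
            scores = C @ h
            rival = int(np.argmax(np.where(classes == c, -np.inf, scores)))
            if scores[c] - scores[rival] < margin:
                C[c] += h        # includes correct-but-low-margin samples
                C[rival] -= h
    return C

# Toy data: two well-separated clusters of normalized vectors.
rng = np.random.default_rng(1)
H = np.vstack([rng.normal(+1.0, 1.0, size=(10, 50)),
               rng.normal(-1.0, 1.0, size=(10, 50))])
H /= np.linalg.norm(H, axis=1, keepdims=True)
y = np.array([0] * 10 + [1] * 10)
C = margin_train(H, y, n_classes=2)
preds = np.argmax(H @ C.T, axis=1)
```

With `margin=0` this reduces to the plain error-driven rule; a positive margin also recruits correctly classified samples near the decision boundary.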

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2025
Keywords
hyperdimensional computing, HDC classifier, compositional representation, hypervector, margin classifier, confidence
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-112274 (URN); 10.3390/bdcc9030068 (DOI); 001452882100001 (); 2-s2.0-105001165098 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, UKR22-0024, UKR24-0014; Swedish Research Council, 2022-04657; Luleå University of Technology
Note

Validated; 2025; Level 1; 2025-04-07 (u8)

Funder: Swedish Section for Scholars at Risk (SAR-Sweden) (GU 2022/1963); National Research Fund of Ukraine (2023.04/0082);

Full text license: CC BY

Available from: 2025-04-07. Created: 2025-04-07. Last updated: 2025-10-21. Bibliographically approved.
Glover, T. E., Osipov, E. & Nichele, S. (2025). When is Reservoir Computing with Cellular Automata Beneficial?. In: Timoteo Carletti; Thierry-Sainclair Njougouo; Elio Tuci (Ed.), Artificial Life and Evolutionary Computation: 18th Italian Workshop, WIVACE 2024, Namur, Belgium, September 11–13, 2024, Revised Selected Papers. Paper presented at 18th International Workshop on Artificial Life and Evolutionary Computation, Namur, Belgium, September 11-13, 2024 (pp. 42-55). Springer Science and Business Media Deutschland GmbH
2025 (English)In: Artificial Life and Evolutionary Computation: 18th Italian Workshop, WIVACE 2024, Namur, Belgium, September 11–13, 2024, Revised Selected Papers / [ed] Timoteo Carletti; Thierry-Sainclair Njougouo; Elio Tuci, Springer Science and Business Media Deutschland GmbH , 2025, p. 42-55Conference paper, Published paper (Refereed)
Abstract [en]

Reservoir Computing with Cellular Automata (ReCA) is a relatively novel and promising approach. It consists of three steps: encoding the problem into the CA, iterating the CA, and a simple classification step. This paper demonstrates that the ReCA concept is effective even in arguably the simplest implementation of a ReCA system. However, we also report a failed attempt on the UCR Time Series Classification Archive, where ReCA seems to work, but only because of the encoding scheme, not the CA. This highlights the need for ablation testing, i.e., internally comparing a model against versions of itself with sub-parts removed, and also raises an open question as to what kinds of tasks ReCA is best suited for.
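The three ReCA steps (encode, iterate, classify) can be sketched minimally; the rule number, lattice size, and single-cell encoding below are assumptions of this example, and the readout classifier is omitted:

```python
import numpy as np

def ca_step(state, rule=90):
    """One synchronous update of a 1-D binary CA with periodic boundaries.
    `rule` is a Wolfram rule number; rule 90 (XOR of the two neighbours)
    is used here purely as an illustrative choice."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighbourhood = (left << 2) | (state << 1) | right   # values 0..7
    lookup = (rule >> np.arange(8)) & 1                  # rule as truth table
    return lookup[neighbourhood]

def ca_reservoir(state, n_iter=8):
    """Encode-and-iterate: the concatenated trajectory of CA states serves
    as the expanded feature vector fed to a simple (e.g. linear) readout."""
    trajectory = [state]
    for _ in range(n_iter):
        trajectory.append(ca_step(trajectory[-1]))
    return np.concatenate(trajectory)

seed = np.zeros(16, dtype=np.int64)
seed[8] = 1                       # a single active cell as the encoded input
features = ca_reservoir(seed)     # length 16 * (8 + 1)
```

The ablation point in the abstract corresponds to replacing `ca_reservoir` with the raw encoding alone: if the readout performs just as well, the CA contributes nothing on that task.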

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2352
Keywords
Cellular Automata, Reservoir Computing, ReCA, MNIST, UCRArchive
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-115065 (URN); 10.1007/978-3-031-93631-9_4 (DOI); 2-s2.0-105012253639 (Scopus ID)
Conference
18th International Workshop on Artificial Life and Evolutionary Computation, Namur, Belgium, September 11-13, 2024
Funder
The Research Council of Norway, 286558
Note

ISBN for host publication: 978-3-031-93630-2, 978-3-031-93631-9

Available from: 2025-10-10. Created: 2025-10-10. Last updated: 2025-10-21. Bibliographically approved.
Kempitiya, T., Alahakoon, D., Osipov, E., Kahawala, S. & De Silva, D. (2024). A Two-Layer Self-Organizing Map with Vector Symbolic Architecture for Spatiotemporal Sequence Learning and Prediction. Biomimetics, 9(3), Article ID 175.
2024 (English)In: Biomimetics, E-ISSN 2313-7673, Vol. 9, no 3, article id 175Article in journal (Refereed) Published
Abstract [en]

We propose a new nature- and neuroscience-inspired algorithm for spatiotemporal learning and prediction based on sequential recall and vector symbolic architecture. A key novelty is that spatial and temporal patterns are learned as decoupled concepts, with temporal pattern sequences constructed using the learned spatial patterns as an alphabet of elements. This decoupling, motivated by cognitive neuroscience research, provides the flexibility for fast and adaptive learning under dynamic changes to data and concept drift, and as such is better suited for real-time learning and prediction. The algorithm further addresses several key computational requirements for predicting the next occurrences from real-life spatiotemporal data, which have proven challenging for current state-of-the-art algorithms. Firstly, spatial and temporal patterns are detected using unsupervised learning from unlabeled data streams in changing environments; secondly, vector symbolic architecture (VSA) is used to manage variable-length sequences; and thirdly, hyperdimensional (HD) computing-based associative memory is used to facilitate the continuous prediction of the next occurrences in sequential patterns. The algorithm has been empirically evaluated on two benchmark and three time-series datasets to demonstrate its advantages over the state of the art in spatiotemporal unsupervised sequence learning: the proposed ST-SOM algorithm achieves a 45% error reduction compared to the HTM algorithm.

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2024
Keywords
hierarchical temporal memory, self-organizing maps, spatiotemporal sequence learning, vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-104938 (URN); 10.3390/biomimetics9030175 (DOI); 001191336900001 (); 38534860 (PubMedID); 2-s2.0-85188743995 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-04-02 (marisr)

Full text license: CC BY

Available from: 2024-04-02. Created: 2024-04-02. Last updated: 2025-10-21. Bibliographically approved.
Samarajeewa, C., De Silva, D., Osipov, E., Alahakoon, D. & Manic, M. (2024). Causal Reasoning in Large Language Models using Causal Graph Retrieval Augmented Generation. In: 2024 16th International Conference on Human System Interaction (HSI): . Paper presented at 16th International Conference on Human System Interaction (HSI 2024), Paris, France, July 8-11, 2024. IEEE
2024 (English)In: 2024 16th International Conference on Human System Interaction (HSI), IEEE, 2024Conference paper, Published paper (Refereed)
Abstract [en]

Large Language Models (LLMs) are leading the Generative Artificial Intelligence transformation in natural language understanding. Beyond language understanding, LLMs have demonstrated capabilities in reasoning tasks, including commonsense, logical, and mathematical reasoning. However, their proficiency in causal understanding has been limited, owing to the complex nature of causal reasoning. Several recent studies have discussed the role of external causal models in improving causal understanding. Building on the success of Retrieval-Augmented Generation (RAG) for factual reasoning in LLMs, this paper introduces a novel approach that utilizes Causal Graphs as external sources for establishing causal relationships between complex vectors. The method is empirically evaluated on two benchmark datasets, across the metrics of Context Relevance, Answer Relevance, and Grounding, for its ability to retrieve relevant context with causal alignment. Its retrieval effectiveness is further compared with traditional RAG methods based on semantic proximity.

Place, publisher, year, edition, pages
IEEE, 2024
Series
International Conference on Human System Interaction, HSI, ISSN 2158-2246, E-ISSN 2158-2254
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-108948 (URN); 10.1109/HSI61632.2024.10613566 (DOI); 001294372700043 (); 2-s2.0-85201523447 (Scopus ID)
Conference
16th International Conference on Human System Interaction (HSI 2024), Paris, France, July 8-11, 2024
Note

ISBN for host publication: 979-8-3503-6291-6

Available from: 2024-08-29. Created: 2024-08-29. Last updated: 2025-10-21. Bibliographically approved.
Sumanasena, V., Fernando, H., De Silva, D., Thileepan, B., Pasan, A., Samarawickrama, J., . . . Alahakoon, D. (2024). Hardware Efficient Direct Policy Imitation Learning for Robotic Navigation in Resource-Constrained Settings. Sensors, 24(1), Article ID 185.
2024 (English)In: Sensors, E-ISSN 1424-8220, Vol. 24, no 1, article id 185Article in journal (Refereed) Published
Abstract [en]

Direct policy learning (DPL) is a widely used approach in imitation learning for time-efficient and effective convergence when training mobile robots. However, the use of DPL in real-world applications has not been sufficiently explored, due to the inherent challenges of mobilizing direct human expertise and the difficulty of measuring comparative performance. Furthermore, autonomous systems are often resource-constrained, limiting the potential application and implementation of highly effective deep learning models. In this work, we present a lightweight DPL-based approach for training mobile robots in navigational tasks. We integrated a safety policy alongside the navigational policy to safeguard the robot and the environment. The approach was evaluated in simulations and real-world settings and compared with recent work in this space. The results of these experiments, and the efficient transfer from simulation to real-world settings, demonstrate improved performance compared to hardware-intensive counterparts. We show that, using the proposed methodology, the training agent achieves performance close to the expert's within the first 15 training iterations in both simulation and real-world settings.

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2024
Keywords
autonomous navigation, direct policy learning, imitation learning, mobile robots
National Category
Robotics and automation Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-103858 (URN); 10.3390/s24010185 (DOI); 001140602200001 (); 38203047 (PubMedID); 2-s2.0-85181972214 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-01-22 (joosat)

Full text license: CC BY

Available from: 2024-01-22. Created: 2024-01-22. Last updated: 2025-10-21. Bibliographically approved.
Osipov, E., Kahawala, S., Haputhanthri, D., Kempitiya, T., De Silva, D., Alahakoon, D. & Kleyko, D. (2024). Hyperseed: Unsupervised Learning With Vector Symbolic Architectures. IEEE Transactions on Neural Networks and Learning Systems, 35(5), 6583-6597
2024 (English)In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 35, no 5, p. 6583-6597Article in journal (Refereed) Published
Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Hyperseed, neuromorphic hardware, self-organizing maps (SOMs), vector symbolic architectures (VSAs)
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94924 (URN); 10.1109/TNNLS.2022.3211274 (DOI); 000890842400001 (); 36383581 (PubMedID); 2-s2.0-85142777444 (Scopus ID)
Funder
The Swedish Foundation for International Cooperation in Research and Higher Education (STINT), MG2020-8842
Note

Validated; 2024; Level 2; 2024-05-21 (joosat)

Funder: Intel Neuromorphic Research Community Grant to Luleå University of Technology; Russian Science Foundation during the period of 2020-2021 (Grant 20-71-10116); Centre for Data Analytics and Cognition (CDAC); European Union's Horizon 2020 Research and Innovation Program, Marie Skłodowska-Curie (Grant 839179)

Available from: 2022-12-20. Created: 2022-12-20. Last updated: 2025-10-21. Bibliographically approved.
Kahawala, S., Madhusanka, N., De Silva, D., Osipov, E., Mills, N., Manic, M. & Jennings, A. (2024). Hypervector Approximation of Complex Manifolds for Artificial Intelligence Digital Twins in Smart Cities. Smart Cities, 7(6), 3371-3387
2024 (English)In: Smart Cities, E-ISSN 2624-6511, Vol. 7, no 6, p. 3371-3387Article in journal (Refereed) Published
Abstract [en]

The United Nations Sustainable Development Goal 11 aims to make cities and human settlements inclusive, safe, resilient and sustainable. Smart cities have been studied extensively as an overarching framework for addressing the needs of increasing urbanisation and the targets of SDG 11. Digital twins and artificial intelligence are foundational technologies that enable the rapid prototyping, development and deployment of systems and solutions within this overarching framework. In this paper, we present a novel AI approach for hypervector approximation of complex manifolds in high-dimensional datasets and data streams such as those encountered in smart city settings. The approach is based on hypervectors, few-shot learning and a learning rule based on single-vector operations, which collectively maintain low computational complexity. Starting with high-level clusters generated by the K-means algorithm, the approach interrogates these clusters with the Hyperseed algorithm, which approximates the complex manifold into fine-grained local variations that can be tracked for anomalies and temporal changes. The approach is empirically evaluated in the smart city setting of a multi-campus tertiary education institution, where diverse sensor, building and people-movement data streams are collected, analysed and processed for insights and decisions.

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
artificial intelligence, hypervectors, digital twin, complex manifold, hyperseed
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-111212 (URN); 10.3390/smartcities7060131 (DOI); 001387647900001 (); 2-s2.0-85213454192 (Scopus ID)
Funder
Swedish Research Council, 2022-04657
Note

Full text license: CC BY 4.0;

Funder: Australian Federal Government (ICIRN000077)

Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-10-21. Bibliographically approved.
Schlegel, K., Rachkovskij, D. A., Osipov, E., Protzel, P. & Neubert, P. (2024). Learnable Weighted Superposition in HDC and its Application to Multi-channel Time Series Classification. In: 2024 International Joint Conference on Neural Networks (IJCNN): . Paper presented at 13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024. IEEE
2024 (English)In: 2024 International Joint Conference on Neural Networks (IJCNN), IEEE, 2024Conference paper, Published paper (Refereed)
Abstract [en]

The vector superposition operation plays a central role in Hyperdimensional Computing (HDC), enabling compositionality of hypervectors without expanding the dimensionality, unlike concatenation. However, a problem arises when the number of superimposed vectors surpasses a certain threshold, determined by the hypervector's information capacity relative to its dimensionality. Beyond this point, cross-talk noise incrementally obscures the distinctiveness of individual hypervectors and information is lost. To address this challenge, we introduce a novel method for weighting individual hypervectors within the superposition, ensuring that only those hypervectors crucial for a given task are prioritized. The weights are learned end-to-end using the backpropagation algorithm in a neural network. Our method is characterized by two key features: (1) the resulting weighting model is exceptionally compact, as the number of trainable weights equals the total number of hypervectors in the superposition; (2) the model offers enhanced explainability due to the compositional nature of its encoding. These features collectively contribute to the efficiency and effectiveness of the proposed classification approach using hyperdimensional computing. We illustrate the approach on the multi-channel time series classification task: each channel is encoded as a hypervector-descriptor, and these are subsequently composed into a single hypervector via superposition. This superimposed vector forms the basis for training the neural-network classification model. Applying weighted superposition to this task improved classification performance compared to standard superposition or concatenation of feature vectors, especially for larger numbers of channels.
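A minimal numerical sketch of the weighting idea follows. The end-to-end backpropagation of the weights is omitted here (they are set by hand to show the effect); the dimensions, channel count, and weight values are all assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_channels = 2000, 30

# One random bipolar hypervector-descriptor per channel.
H = rng.choice([-1.0, 1.0], size=(n_channels, D))

def bundle(H, w):
    """Weighted superposition: a single D-dimensional vector regardless
    of how many hypervectors are combined (unlike concatenation)."""
    return w @ H

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

uniform = bundle(H, np.ones(n_channels))

# Suppose only the first three channels matter for the task. Up-weighting
# them raises their similarity to the bundle and suppresses the cross-talk
# noise contributed by the remaining channels.
w = np.ones(n_channels)
w[:3] = 10.0
weighted = bundle(H, w)
```

In the paper these weights are trainable parameters of a neural network, so the model size stays at one scalar per superimposed hypervector.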

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
HDC/VSA, neural networks, superposition, weighted superposition, bundling
National Category
Computer Sciences Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-110295 (URN); 10.1109/IJCNN60899.2024.10650604 (DOI); 001315691505103 (); 2-s2.0-85204954431 (Scopus ID)
Conference
13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024
Note

Funder: German Federal Ministry for Economic Affairs and Climate Action; Swedish Foundation for Strategic Research (UKR22-0024, UKR24-0014); Swedish Research Council (VR SAR grant no. GU 2022/1963); LTU support grant; Swedish Research Council (VR grant no. 2022-04657)

ISBN for host publication: 978-8-3503-5931-2;

Available from: 2024-10-10 Created: 2024-10-10 Last updated: 2025-10-21Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-0069-640X
