Publications (10 of 119)
Smets, L., Rachkovskij, D., Osipov, E., Van Leekwijck, W., Volkov, O. & Latré, S. (2025). Margin-Based Training of HDC Classifiers. Big Data and Cognitive Computing, 9(3), Article ID 68.
2025 (English). In: Big Data and Cognitive Computing, E-ISSN 2504-2289, Vol. 9, no 3, article id 68. Article in journal (Refereed). Published.
Abstract [en]

The explicit kernel transformation of input data vectors to their distributed high-dimensional representations has recently been receiving increasing attention in the field of hyperdimensional computing (HDC). The main argument is that such representations enable simpler last-leg classification models, often referred to as HDC classifiers. HDC models have obvious advantages over resource-intensive deep learning models for use cases requiring fast, energy-efficient computation, both for training and deployment. Recent approaches to training HDC classifiers have primarily focused on methods for selecting individual learning rates for incorrectly classified samples. In contrast to these methods, we propose an alternative strategy in which the decision to learn is based on a margin applied to the classifier scores, so that even correctly classified samples within the specified margin are used to train the model. This leads to improved test performance while maintaining a basic learning rule with a fixed (unit) learning rate. We propose and empirically evaluate two such strategies, incorporating either an additive or a multiplicative margin, on the standard subset of the UCI collection, consisting of 121 datasets. Our approach demonstrates superior mean accuracy compared to other HDC classifiers with iterative error-correcting training.
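The additive-margin idea in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the prototype-based classifier, and the exact update on misclassification are assumptions; only the core rule, updating even on correct predictions whose score margin falls below a threshold, with a fixed unit learning rate, follows the abstract.

```python
import numpy as np

def train_margin_hdc(X, y, num_classes, margin=0.2, epochs=10):
    """Sketch of additive-margin training of an HDC classifier.

    X: (n_samples, d) hypervectors (inputs already encoded into HD space).
    A sample triggers an update not only when misclassified, but also when
    correctly classified with a score margin below `margin`; the learning
    rate is fixed at 1 (unit).
    """
    n, d = X.shape
    W = np.zeros((num_classes, d))       # class prototype hypervectors
    for _ in range(epochs):
        for x, c in zip(X, y):
            scores = W @ x
            pred = int(np.argmax(scores))
            runner_up = np.partition(scores, -2)[-2]   # second-highest score
            # update on error, or on a correct but low-confidence prediction
            if pred != c or scores[c] - runner_up < margin:
                W[c] += x                # reinforce the true class
                if pred != c:
                    W[pred] -= x         # penalise the wrong winner
    return W
```

A multiplicative variant would instead compare the ratio of the top two scores against a threshold.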

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2025
Keywords
hyperdimensional computing, HDC classifier, compositional representation, hypervector, margin classifier, confidence
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-112274 (URN); 10.3390/bdcc9030068 (DOI); 2-s2.0-105001165098 (Scopus ID)
Funder
Swedish Foundation for Strategic Research (UKR22-0024, UKR24-0014); Swedish Research Council (2022-04657); Luleå University of Technology
Note

Validated;2025;Level 1;2025-04-07 (u8);

Funder: Swedish Section for Scholars at Risk (SAR-Sweden) (GU 2022/1963); National Research Fund of Ukraine (2023.04/0082);

Full text license: CC BY

Available from: 2025-04-07. Created: 2025-04-07. Last updated: 2025-04-07. Bibliographically approved.
Kempitiya, T., Alahakoon, D., Osipov, E., Kahawala, S. & De Silva, D. (2024). A Two-Layer Self-Organizing Map with Vector Symbolic Architecture for Spatiotemporal Sequence Learning and Prediction. Biomimetics, 9(3), Article ID 175.
2024 (English). In: Biomimetics, E-ISSN 2313-7673, Vol. 9, no 3, article id 175. Article in journal (Refereed). Published.
Abstract [en]

We propose a new nature- and neuroscience-inspired algorithm for spatiotemporal learning and prediction based on sequential recall and vector symbolic architecture. A key novelty is the learning of spatial and temporal patterns as decoupled concepts, where the temporal pattern sequences are constructed using the learned spatial patterns as an alphabet of elements. The decoupling, motivated by cognitive neuroscience research, provides the flexibility for fast and adaptive learning under dynamic changes to data and concept drift, and as such is better suited for real-time learning and prediction. The algorithm further addresses several key computational requirements for predicting the next occurrences based on real-life spatiotemporal data, which have been found to be challenging with current state-of-the-art algorithms. Firstly, spatial and temporal patterns are detected using unsupervised learning from unlabeled data streams in changing environments; secondly, vector symbolic architecture (VSA) is used to manage variable-length sequences; and thirdly, hyperdimensional (HD) computing-based associative memory is used to facilitate the continuous prediction of the next occurrences in sequential patterns. The algorithm has been empirically evaluated using two benchmark and three time-series datasets to demonstrate its advantages over the state of the art in spatiotemporal unsupervised sequence learning; the proposed ST-SOM algorithm achieves a 45% error reduction compared to the HTM algorithm.
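The VSA mechanism for variable-length sequences mentioned in the abstract can be illustrated with a standard permutation-based sequence encoding. This is a generic VSA sketch, not the paper's ST-SOM implementation: the use of cyclic shift (`np.roll`) as the permutation and the helper names are assumptions.

```python
import numpy as np

def encode_sequence(seq, alphabet):
    """Encode a variable-length sequence of symbol indices as a single
    hypervector by bundling position-permuted symbol vectors:
    s = sum_i roll(alphabet[seq[i]], i). The permutation tags positions,
    so order survives inside the superposition."""
    return sum(np.roll(alphabet[sym], i) for i, sym in enumerate(seq))

def decode_position(s, alphabet, i):
    """Recover the symbol at position i: undo the permutation, then clean
    up against the alphabet by nearest-hypervector match."""
    probe = np.roll(s, -i)
    return int(np.argmax(alphabet @ probe))
```

Here the learned spatial patterns would play the role of the alphabet hypervectors, and the associative memory would perform the cleanup step.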

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2024
Keywords
hierarchical temporal memory, self-organizing maps, spatiotemporal sequence learning, vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-104938 (URN); 10.3390/biomimetics9030175 (DOI); 001191336900001 (); 38534860 (PubMedID); 2-s2.0-85188743995 (Scopus ID)
Note

Validated;2024;Level 2;2024-04-02 (marisr);

Full text license: CC BY

Available from: 2024-04-02. Created: 2024-04-02. Last updated: 2024-12-20. Bibliographically approved.
Samarajeewa, C., De Silva, D., Osipov, E., Alahakoon, D. & Manic, M. (2024). Causal Reasoning in Large Language Models using Causal Graph Retrieval Augmented Generation. In: 2024 16th International Conference on Human System Interaction (HSI). Paper presented at 16th International Conference on Human System Interaction (HSI 2024), Paris, France, July 8-11, 2024. IEEE
2024 (English). In: 2024 16th International Conference on Human System Interaction (HSI), IEEE, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

Large Language Models (LLMs) are leading the Generative Artificial Intelligence transformation in natural language understanding. Beyond language understanding, LLMs have demonstrated capabilities in reasoning tasks, including commonsense, logical, and mathematical reasoning. However, their proficiency in causal understanding has been limited due to the complex nature of causal reasoning. Several recent studies have discussed the role of external causal models for improved causal understanding. Building on the success of Retrieval-Augmented Generation (RAG) for factual reasoning in LLMs, this paper introduces a novel approach that utilizes Causal Graphs as external sources for establishing causal relationships between complex vectors. The method is empirically evaluated on its ability to retrieve relevant context with causal alignment, using two benchmark datasets and the metrics of Context Relevance, Answer Relevance, and Grounding. Its retrieval effectiveness is further compared with traditional RAG methods based on semantic proximity.

Place, publisher, year, edition, pages
IEEE, 2024
Series
International Conference on Human System Interaction, HSI, ISSN 2158-2246, E-ISSN 2158-2254
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-108948 (URN); 10.1109/HSI61632.2024.10613566 (DOI); 001294372700043 (); 2-s2.0-85201523447 (Scopus ID)
Conference
16th International Conference on Human System Interaction (HSI 2024), Paris, France, July 8-11, 2024
Note

ISBN for host publication: 979-8-3503-6291-6

Available from: 2024-08-29. Created: 2024-08-29. Last updated: 2024-11-20. Bibliographically approved.
Sumanasena, V., Fernando, H., De Silva, D., Thileepan, B., Pasan, A., Samarawickrama, J., . . . Alahakoon, D. (2024). Hardware Efficient Direct Policy Imitation Learning for Robotic Navigation in Resource-Constrained Settings. Sensors, 24(1), Article ID 185.
2024 (English). In: Sensors, E-ISSN 1424-8220, Vol. 24, no 1, article id 185. Article in journal (Refereed). Published.
Abstract [en]

Direct policy learning (DPL) is a widely used approach in imitation learning for time-efficient and effective convergence when training mobile robots. However, the use of DPL in real-world applications remains insufficiently explored, owing to the inherent challenges of mobilizing direct human expertise and the difficulty of measuring comparative performance. Furthermore, autonomous systems are often resource-constrained, limiting the potential application and implementation of highly effective deep learning models. In this work, we present a lightweight DPL-based approach to train mobile robots in navigational tasks. We integrated a safety policy alongside the navigational policy to safeguard the robot and the environment. The approach was evaluated in simulations and real-world settings and compared with recent work in this space. The results of these experiments, and the efficient transfer from simulations to real-world settings, demonstrate improved performance compared to hardware-intensive counterparts. We show that, using the proposed methodology, the training agent achieves performance close to that of the expert within the first 15 training iterations in both simulation and real-world settings.

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2024
Keywords
autonomous navigation, direct policy learning, imitation learning, mobile robots
National Category
Robotics and Automation; Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-103858 (URN); 10.3390/s24010185 (DOI); 001140602200001 (); 38203047 (PubMedID); 2-s2.0-85181972214 (Scopus ID)
Note

Validated;2024;Level 2;2024-01-22 (joosat);

Full text license: CC BY

Available from: 2024-01-22. Created: 2024-01-22. Last updated: 2025-02-05. Bibliographically approved.
Osipov, E., Kahawala, S., Haputhanthri, D., Kempitiya, T., De Silva, D., Alahakoon, D. & Kleyko, D. (2024). Hyperseed: Unsupervised Learning With Vector Symbolic Architectures. IEEE Transactions on Neural Networks and Learning Systems, 35(5), 6583-6597
2024 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 35, no 5, p. 6583-6597. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
IEEE, 2024
Keywords
Hyperseed, neuromorphic hardware, self-organizing maps (SOMs), vector symbolic architectures (VSAs)
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94924 (URN); 10.1109/TNNLS.2022.3211274 (DOI); 000890842400001 (); 36383581 (PubMedID); 2-s2.0-85142777444 (Scopus ID)
Funder
The Swedish Foundation for International Cooperation in Research and Higher Education (STINT), MG2020-8842
Note

Validated;2024;Level 2;2024-05-21 (joosat);

Funder: Intel Neuromorphic Research Community Grant to Luleå University of Technology; Russian Science Foundation during the period of 2020–2021 (Grant 20-71-10116); Centre for Data Analytics and Cognition (CDAC); European Union's Horizon 2020 Research and Innovation Program, Marie Skłodowska-Curie (Grant 839179);

Available from: 2022-12-20. Created: 2022-12-20. Last updated: 2024-05-21. Bibliographically approved.
Kahawala, S., Madhusanka, N., De Silva, D., Osipov, E., Mills, N., Manic, M. & Jennings, A. (2024). Hypervector Approximation of Complex Manifolds for Artificial Intelligence Digital Twins in Smart Cities. Smart Cities, 7(6), 3371-3387
2024 (English). In: Smart Cities, E-ISSN 2624-6511, Vol. 7, no 6, p. 3371-3387. Article in journal (Refereed). Published.
Abstract [en]

The United Nations Sustainable Development Goal 11 aims to make cities and human settlements inclusive, safe, resilient and sustainable. Smart cities have been studied extensively as an overarching framework to address the needs of increasing urbanisation and the targets of SDG 11. Digital twins and artificial intelligence are foundational technologies that enable the rapid prototyping, development and deployment of systems and solutions within this overarching framework of smart cities. In this paper, we present a novel AI approach for hypervector approximation of complex manifolds in high-dimensional datasets and data streams such as those encountered in smart city settings. This approach is based on hypervectors, few-shot learning and a learning rule based on a single-vector operation, which collectively maintain low computational complexity. Starting with high-level clusters generated by the K-means algorithm, the approach interrogates these clusters with the Hyperseed algorithm, which approximates the complex manifold into fine-grained local variations that can be tracked for anomalies and temporal changes. The approach is empirically evaluated in the smart city setting of a multi-campus tertiary education institution where diverse sensor, building and people-movement data streams are collected, analysed and processed for insights and decisions.

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
artificial intelligence, hypervectors, digital twin, complex manifold, hyperseed
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-111212 (URN); 10.3390/smartcities7060131 (DOI); 001387647900001 (); 2-s2.0-85213454192 (Scopus ID)
Funder
Swedish Research Council, 2022-04657
Note

Full text license: CC BY 4.0;

Funder: Australian Federal Government (ICIRN000077)

Available from: 2025-01-07. Created: 2025-01-07. Last updated: 2025-01-07. Bibliographically approved.
Schlegel, K., Rachkovskij, D. A., Osipov, E., Protzel, P. & Neubert, P. (2024). Learnable Weighted Superposition in HDC and its Application to Multi-channel Time Series Classification. In: 2024 International Joint Conference on Neural Networks (IJCNN). Paper presented at 13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024. IEEE
2024 (English). In: 2024 International Joint Conference on Neural Networks (IJCNN), IEEE, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

The vector superposition operation plays a central role in Hyperdimensional Computing (HDC), enabling compositionality of hypervectors without expanding the dimensionality, unlike concatenation. However, a problem arises when the quantity of superimposed vectors surpasses a certain threshold, which is determined by the hypervector's information capacity relative to its dimensionality. Beyond this point, cross-talk noise incrementally obscures the distinctiveness of individual hypervectors and information is lost. To address this challenge, we introduce a novel method for weighting individual hypervectors within the superposition, ensuring that only those hypervectors crucial for a given task are prioritized. The weights are learned end-to-end using the backpropagation algorithm in a neural network. Our method is characterized by two key features: (1) the resultant weighting model is exceptionally compact, as the number of trainable weights is equal to the total number of hypervectors in the superposition; (2) the model offers enhanced explainability due to the compositional nature of its encoding. These features collectively contribute to the efficiency and effectiveness of our proposed classification approach using hyperdimensional computing. We illustrate our approach through the multi-channel time series classification task. In this framework, each channel is encoded as a hypervector descriptor, and these are subsequently composed into a single hypervector via superposition. This superimposed vector forms the basis for training the neural-network classification model. Applying weighted superposition to this task improved classification performance compared to standard superposition or concatenation of feature vectors, especially for larger numbers of channels.
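The weighted-superposition idea can be sketched as a tiny trainable model. This is a minimal illustration, not the authors' implementation: the plain-gradient-descent loop, the logistic classifier, and all names are assumptions; only the core idea, one learnable weight per superimposed channel hypervector trained end-to-end, follows the abstract.

```python
import numpy as np

def train_weighted_superposition(H_list, y, num_classes, lr=0.01, epochs=200):
    """Learn per-channel weights w for the superposition s = sum_c w[c]*H[c],
    jointly with a linear classifier V, by gradient descent on cross-entropy.
    The number of trainable superposition weights equals the channel count,
    which is what keeps the weighting model compact."""
    C, d = H_list[0].shape
    w = np.ones(C)                                # channel weights
    V = 0.01 * np.random.default_rng(0).normal(size=(num_classes, d))
    for _ in range(epochs):
        for H, c in zip(H_list, y):               # H: (C, d) channel HVs
            s = w @ H                             # weighted superposition
            logits = V @ s
            p = np.exp(logits - logits.max()); p /= p.sum()
            p[c] -= 1.0                           # dLoss/dlogits (softmax CE)
            dV = np.outer(p, s)                   # gradient w.r.t. classifier
            dw = H @ (V.T @ p)                    # gradient w.r.t. weights
            V -= lr * dV
            w -= lr * dw
    return w, V
```

After training, inspecting `w` indicates which channels the model relies on, which is the explainability aspect the abstract highlights.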

Place, publisher, year, edition, pages
IEEE, 2024
Keywords
HDC/VSA, neural networks, superposition, weighted superposition, bundling
National Category
Computer Sciences Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-110295 (URN); 10.1109/IJCNN60899.2024.10650604 (DOI); 2-s2.0-85204954431 (Scopus ID)
Conference
13th IEEE World Congress on Computational Intelligence (WCCI 2024), Yokohama, Japan, June 30 - July 5, 2024
Note

Funder: German Federal Ministry for Economic Affairs and Climate Action; Swedish Foundation for Strategic Research (UKR22-0024, UKR24-0014); Swedish Research Council (VR SAR grant no. GU 2022/1963); LTU support grant; Swedish Research Council (VR grant no. 2022-04657);

ISBN for host publication: 978-8-3503-5931-2;

Available from: 2024-10-10. Created: 2024-10-10. Last updated: 2024-10-10. Bibliographically approved.
Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Computing Surveys, 55(6), Article ID 130.
2023 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no 6, article id 130. Article in journal (Refereed). Published.
Abstract [en]

This two-part comprehensive survey is devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to combine the advantages of structured symbolic representations and distributed vector representations. Notable models in the HDC/VSA family are Tensor Product Representations, Holographic Reduced Representations, Multiply-Add-Permute, Binary Spatter Codes, and Sparse Binary Distributed Representations, but there are other models too. HDC/VSA is a highly interdisciplinary field with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science, which makes it challenging to create a thorough overview of the field. However, with a surge of new researchers joining the field in recent years, a comprehensive survey has become all the more necessary. Among other aspects of the field, this Part I surveys the known computational models of HDC/VSA and the transformations of various input data types to high-dimensional distributed representations. Part II of this survey [84] is devoted to applications, cognitive computing and architectures, as well as directions for future work. The survey is written to be useful for both newcomers and practitioners.
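As a concrete taste of one of the models listed above, the Multiply-Add-Permute (MAP) model uses elementwise multiplication for binding and elementwise addition for bundling over random bipolar hypervectors. The toy sketch below follows those standard definitions; the helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10_000                               # high dimensionality is essential

def hv():
    """Random bipolar hypervector, the atomic representation in MAP."""
    return rng.choice([-1, 1], size=d)

def sim(a, b):
    """Normalised similarity; ~1 for identical, ~0 for unrelated HVs."""
    return a @ b / d

# Binding (multiply) associates a role with a filler; bundling (add)
# superimposes several bound pairs into one composite hypervector.
color, shape = hv(), hv()                # role hypervectors
red, circle = hv(), hv()                 # filler hypervectors
record = color * red + shape * circle    # {color: red, shape: circle}

# Multiplying by a role unbinds a noisy copy of its filler, since
# bipolar binding is self-inverse: color * color * red == red.
noisy_red = color * record
assert sim(noisy_red, red) > 0.4         # close to the stored filler
assert abs(sim(noisy_red, circle)) < 0.1 # near-orthogonal to other fillers
```

The residual term `color * shape * circle` in `noisy_red` is the cross-talk noise that the algebra leaves behind; at high dimensionality it is nearly orthogonal to every stored item, which is why the cleanup works.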

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-95133 (URN); 10.1145/3538531 (DOI); 000893245700022 (); 2-s2.0-85146491559 (Scopus ID)
Funder
EU, Horizon 2020 (839179); Swedish Foundation for Strategic Research (UKR22-0024)
Note

Validated;2023;Level 2;2023-01-03 (joosat);

Funder: AFOSR (FA9550-19-1-0241); National Academy of Sciences of Ukraine (0120U000122, 0121U000016, 0117U002286); Ministry of Education and Science of Ukraine (0121U000228, 0122U000818);

Available from: 2023-01-03. Created: 2023-01-03. Last updated: 2024-03-07. Bibliographically approved.
Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. ACM Computing Surveys, 55(9), Article ID 175.
2023 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no 9, article id 175. Article in journal (Refereed). Published.
Abstract [en]

This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to combine the advantages of structured symbolic representations and distributed vector representations. Holographic Reduced Representations [321, 326] is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family; however, for the sake of consistency, we use HDC/VSA to refer to the field. Part I of this survey [222] covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain; however, we also cover other applications to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023
Keywords
analogical reasoning, applications, Artificial intelligence, binary spatter codes, cognitive architectures, cognitive computing, distributed representations, geometric analogue of holographic reduced representations, holographic reduced representations, hyperdimensional computing, machine learning, matrix binding of additive terms, modular composite representations, multiply-add-permute, sparse binary distributed representations, sparse block codes, tensor product representations, vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-95673 (URN); 10.1145/3558000 (DOI); 000924882300001 (); 2-s2.0-85147845869 (Scopus ID)
Funder
EU, Horizon 2020 (839179); Swedish Foundation for Strategic Research (UKR22-0024)
Note

Validated;2023;Level 2;2023-02-21 (joosat);

Funder: AFOSR (FA9550-19-1-0241); National Academy of Sciences of Ukraine (grant no. 0120U000122, 0121U000016, 0122U002151, 0117U002286); Ministry of Education and Science of Ukraine (grant no. 0121U000228, 0122U000818)

Available from: 2023-02-21. Created: 2023-02-21. Last updated: 2024-03-28. Bibliographically approved.
Glover, T. E., Lind, P., Yazidi, A., Osipov, E. & Nichele, S. (2023). Investigating Rules and Parameters of Reservoir Computing with Elementary Cellular Automata, with a Criticism of Rule 90 and the Five-Bit Memory Benchmark. Complex Systems, 32(3), 309-351
2023 (English). In: Complex Systems, ISSN 0891-2513, Vol. 32, no 3, p. 309-351. Article in journal (Refereed). Published.
Abstract [en]

Reservoir computing with cellular automata (ReCAs) is a promising concept by virtue of its potential for effective hardware implementation. In this paper, we explore elementary cellular automata rules in the context of ReCAs and the 5-bit memory benchmark. We combine elementary cellular automaton theory with our results and use them to identify and explain some of the patterns found. Furthermore, we use these findings to expose weaknesses in the 5-bit memory benchmark as it is typically applied in ReCAs, such as pointing out what features it selects for or solving it using random vectors. We look deeply into previously successful rules in ReCAs such as rule 90 and explain some of the consequences of its additive properties as well as the correlation between grid size and performance. Additionally, we present results from exhaustively exploring ReCAs on key parameters such as distractor period, iterations and grid size. The findings of this paper should motivate the ReCAs community to move away from using the 5-bit memory benchmark as it is being applied today.

Place, publisher, year, edition, pages
Complex Systems Publications, Inc, 2023
Keywords
Cellular Automata, Edge of Chaos, Reservoir Computing, Reservoir Computing with Cellular Automata (ReCAs)
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-103857 (URN); 10.25088/ComplexSystems.32.3.309 (DOI); 001181026300005 (); 2-s2.0-85182398645 (Scopus ID)
Funder
Swedish Research Council, 2022-04657
Note

Validated;2024;Level 1;2024-01-25 (hanlid);

Funder: Research Council of Norway (286558); 

Full text license: CC BY (Complex Systems is Platinum Open Access. This means permanent and free access to published scientific works for readers with no publication fees for the authors—100% free. All articles are published under the most flexible reuse standard—the CC BY license)

Available from: 2024-01-25. Created: 2024-01-25. Last updated: 2024-11-20. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0069-640X
