Publications (10 of 114)
Sumanasena, V., Fernando, H., De Silva, D., Thileepan, B., Pasan, A., Samarawickrama, J., . . . Alahakoon, D. (2024). Hardware Efficient Direct Policy Imitation Learning for Robotic Navigation in Resource-Constrained Settings. Sensors, 24(1), Article ID 185.
2024 (English). In: Sensors, E-ISSN 1424-8220, Vol. 24, no 1, article id 185. Article in journal (Refereed). Published.
Abstract [en]

Direct policy learning (DPL) is a widely used approach in imitation learning for time-efficient and effective convergence when training mobile robots. However, the use of DPL in real-world applications remains insufficiently explored, owing to the inherent challenges of mobilizing direct human expertise and the difficulty of measuring comparative performance. Furthermore, autonomous systems are often resource-constrained, which limits the application and implementation of highly effective deep learning models. In this work, we present a lightweight DPL-based approach to train mobile robots in navigational tasks. We integrated a safety policy alongside the navigational policy to safeguard the robot and the environment. The approach was evaluated in simulations and real-world settings and compared with recent work in this space. The results of these experiments, and the efficient transfer from simulation to the real world, demonstrate improved performance compared to hardware-intensive counterparts. Using the proposed methodology, the training agent achieves performance close to that of the expert within the first 15 training iterations, in both simulation and real-world settings.
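The core loop of direct policy imitation can be sketched in a few lines. The following is a minimal toy illustration, not the paper's actual models: the 1-D "corridor" environment, the nearest-neighbour learner, and the safety rule are all assumptions made here for the sake of a runnable example. The learner aggregates expert labels on the states it actually visits, while a separate safety policy filters out actions that would leave the safe region.

```python
import numpy as np

def expert_action(state):
    # Hypothetical expert: steer toward the goal at x = 0.
    return -np.sign(state)

def safety_filter(state, action):
    # Hypothetical safety policy: forbid moves leaving the corridor [-5, 5].
    if abs(state + action) > 5:
        return 0.0
    return action

class NearestNeighbourPolicy:
    """Imitates the expert by recalling the action of the closest seen state."""
    def __init__(self):
        self.states, self.actions = [], []
    def update(self, s, a):
        self.states.append(s)
        self.actions.append(a)
    def act(self, s):
        if not self.states:
            return 0.0
        i = int(np.argmin(np.abs(np.array(self.states) - s)))
        return self.actions[i]

rng = np.random.default_rng(0)
policy = NearestNeighbourPolicy()
for _ in range(15):              # the abstract reports convergence within ~15 iterations
    state = rng.uniform(-4, 4)
    for _ in range(10):
        a = safety_filter(state, policy.act(state))   # learner acts, safety overrides
        policy.update(state, expert_action(state))    # aggregate expert labels
        state += a

print(policy.act(3.0))
```

The point of the sketch is the division of labour: the navigational policy is trained purely from expert labels, while the safety policy is a fixed rule applied at execution time, so a poorly trained learner cannot damage the robot or the environment.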

Place, publisher, year, edition, pages
Multidisciplinary Digital Publishing Institute (MDPI), 2024
Keywords
autonomous navigation, direct policy learning, imitation learning, mobile robots
National Category
Robotics; Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-103858 (URN); 10.3390/s24010185 (DOI); 001140602200001 (); 38203047 (PubMedID); 2-s2.0-85181972214 (Scopus ID)
Note

Validated;2024;Level 2;2024-01-22 (joosat);

Full text license: CC BY

Available from: 2024-01-22 Created: 2024-01-22 Last updated: 2024-03-07. Bibliographically approved.
Kleyko, D., Rachkovskij, D. A., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Computing Surveys, 55(6), Article ID 130.
2023 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no 6, article id 130. Article in journal (Refereed). Published.
Abstract [en]

This two-part comprehensive survey is devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to combine the advantages of structured symbolic representations and distributed vector representations. Notable models in the HDC/VSA family include Tensor Product Representations, Holographic Reduced Representations, Multiply-Add-Permute, Binary Spatter Codes, and Sparse Binary Distributed Representations, among others. HDC/VSA is a highly interdisciplinary field with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science, which makes a thorough overview challenging. With the surge of new researchers joining the field in recent years, however, a comprehensive survey has become essential. This Part I therefore covers the known computational models of HDC/VSA and the transformation of various input data types into high-dimensional distributed representations. Part II of this survey [84] is devoted to applications, cognitive computing and architectures, as well as directions for future work. The survey is written to be useful for both newcomers and practitioners.
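The key operations shared by the models named above can be illustrated with Binary Spatter Codes, one of the simplest members of the family. In this sketch (dimensionality and seed are arbitrary choices), binding is element-wise XOR, bundling is a majority vote, and similarity is normalized Hamming distance; unbinding a role from a bundled record recovers a noisy but recognizable copy of the filler.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000                                    # typical hypervector dimensionality

def random_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):                               # XOR binding: bind(bind(a, b), b) == a
    return np.bitwise_xor(a, b)

def bundle(*hvs):                             # majority vote across hypervectors
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def hamming(a, b):                            # normalized distance in [0, 1]
    return float(np.mean(a != b))

colour, shape = random_hv(), random_hv()      # roles
red, circle = random_hv(), random_hv()        # fillers

record = bundle(bind(colour, red), bind(shape, circle), random_hv())

recovered = bind(record, colour)              # unbind the role "colour"
print(hamming(recovered, red))                # well below 0.5 (chance level)
print(hamming(recovered, circle))             # near 0.5: unrelated vector
```

This is exactly the combination the abstract describes: the structure (role-filler pairs) is symbolic, but every element of it lives in the same fixed-width distributed representation.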

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-95133 (URN); 10.1145/3538531 (DOI); 000893245700022 (); 2-s2.0-85146491559 (Scopus ID)
Funder
EU, Horizon 2020, 839179; Swedish Foundation for Strategic Research, UKR22-0024
Note

Validated;2023;Level 2;2023-01-03 (joosat);

Funder: AFOSR (FA9550-19-1-0241); National Academy of Sciences of Ukraine (0120U000122, 0121U000016, 0117U002286); Ministry of Education and Science of Ukraine (0121U000228, 0122U000818);

Available from: 2023-01-03 Created: 2023-01-03 Last updated: 2024-03-07. Bibliographically approved.
Kleyko, D., Rachkovskij, D., Osipov, E. & Rahimi, A. (2023). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. ACM Computing Surveys, 55(9), Article ID 175.
2023 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no 9, article id 175. Article in journal (Refereed). Published.
Abstract [en]

This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to combine the advantages of structured symbolic representations and distributed vector representations. Holographic Reduced Representations [321, 326] is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family; for the sake of consistency, however, we use HDC/VSA to refer to the field. Part I of this survey [222] covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain, but we also cover other applications to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.
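Since the abstract singles out Holographic Reduced Representations, here is a minimal sketch of its binding operation: circular convolution, computed via the FFT, with approximate unbinding by convolving with the involution of the key. Dimensionality, seed, and the normalization convention are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2048

def random_hrr():
    # Approximately unit-norm real vectors, as used in HRR.
    return rng.normal(0, 1 / np.sqrt(D), D)

def cconv(a, b):
    # Binding: circular convolution via the FFT (O(D log D)).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # a*[i] = a[-i mod D]; convolving with it approximates correlation.
    return np.roll(a[::-1], 1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

key, value, other = random_hrr(), random_hrr(), random_hrr()
bound = cconv(key, value)                 # bind value to key
decoded = cconv(bound, involution(key))   # approximate unbinding

print(cosine(decoded, value))             # clearly positive: value recovered
print(cosine(decoded, other))             # near zero: unrelated vector
```

Unlike XOR-based binding, HRR unbinding is only approximate (the decoded vector is a noisy copy), which is why HRR systems follow it with a clean-up memory over known item vectors.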

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023
Keywords
analogical reasoning, applications, Artificial intelligence, binary spatter codes, cognitive architectures, cognitive computing, distributed representations, geometric analogue of holographic reduced representations, holographic reduced representations, hyperdimensional computing, machine learning, matrix binding of additive terms, modular composite representations, multiply-add-permute, sparse binary distributed representations, sparse block codes, tensor product representations, vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-95673 (URN); 10.1145/3558000 (DOI); 000924882300001 (); 2-s2.0-85147845869 (Scopus ID)
Funder
EU, Horizon 2020, 839179; Swedish Foundation for Strategic Research, UKR22-0024
Note

Validated;2023;Level 2;2023-02-21 (joosat);

Funder: AFOSR (FA9550-19-1-0241); National Academy of Sciences of Ukraine (grant no. 0120U000122, 0121U000016, 0122U002151, 0117U002286); Ministry of Education and Science of Ukraine (grant no. 0121U000228, 0122U000818)

Available from: 2023-02-21 Created: 2023-02-21 Last updated: 2023-05-08. Bibliographically approved.
Glover, T. E., Lind, P., Yazidi, A., Osipov, E. & Nichele, S. (2023). Investigating Rules and Parameters of Reservoir Computing with Elementary Cellular Automata, with a Criticism of Rule 90 and the Five-Bit Memory Benchmark. Complex Systems, 32(3), 309-351
2023 (English). In: Complex Systems, ISSN 0891-2513, Vol. 32, no 3, p. 309-351. Article in journal (Refereed). Published.
Abstract [en]

Reservoir computing with cellular automata (ReCAs) is a promising concept by virtue of its potential for effective hardware implementation. In this paper, we explore elementary cellular automata rules in the context of ReCAs and the 5-bit memory benchmark. We combine elementary cellular automaton theory with our results and use them to identify and explain some of the patterns found. Furthermore, we use these findings to expose weaknesses in the 5-bit memory benchmark as it is typically applied in ReCAs, such as pointing out what features it selects for or solving it using random vectors. We look deeply into previously successful rules in ReCAs such as rule 90 and explain some of the consequences of its additive properties as well as the correlation between grid size and performance. Additionally, we present results from exhaustively exploring ReCAs on key parameters such as distractor period, iterations and grid size. The findings of this paper should motivate the ReCAs community to move away from using the 5-bit memory benchmark as it is being applied today.
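The additivity of rule 90 that the abstract refers to is easy to demonstrate: each cell's next state is the XOR of its two neighbours, so the rule is linear over GF(2) and the evolution of a sum (XOR) of configurations equals the XOR of their separate evolutions. A minimal sketch, assuming periodic boundary conditions:

```python
import numpy as np

def rule90(state):
    # Next cell = left neighbour XOR right neighbour (periodic boundary).
    return np.roll(state, 1) ^ np.roll(state, -1)

rng = np.random.default_rng(7)
a = rng.integers(0, 2, 32, dtype=np.uint8)
b = rng.integers(0, 2, 32, dtype=np.uint8)

# Additivity: rule90(a XOR b) == rule90(a) XOR rule90(b), exactly.
assert np.array_equal(rule90(a ^ b), rule90(a) ^ rule90(b))
```

This linearity is precisely why a rule-90 reservoir behaves so differently from non-additive rules: superposed inputs never interact, which constrains the feature space the readout can exploit.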

Place, publisher, year, edition, pages
Complex Systems Publications, Inc, 2023
Keywords
Cellular Automata, Edge of Chaos, Reservoir Computing, Reservoir Computing with Cellular Automata (ReCAs)
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-103857 (URN); 10.25088/ComplexSystems.32.3.309 (DOI); 2-s2.0-85182398645 (Scopus ID)
Funder
Swedish Research Council, 2022-04657
Note

Validated;2024;Level 1;2024-01-25 (hanlid);

Funder: Research Council of Norway (286558); 

Full text license: CC BY (Complex Systems is Platinum Open Access. This means permanent and free access to published scientific works for readers with no publication fees for the authors—100% free. All articles are published under the most flexible reuse standard—the CC BY license)

Available from: 2024-01-25 Created: 2024-01-25 Last updated: 2024-01-25. Bibliographically approved.
Haputhanthri, D., Osipov, E., Kahawala, S., De Silva, D., Kempitiya, T. & Alahakoon, D. (2022). Evaluating Complex Sparse Representation of Hypervectors for Unsupervised Machine Learning. In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings. Paper presented at IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022. IEEE
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

The increasing use of Vector Symbolic Architectures (VSA) in machine learning has contributed towards energy-efficient computation, short training cycles and improved performance. A further advancement of VSA is to leverage sparse representations, where the VSA-encoded hypervectors are sparsified to represent receptive field properties when encoding sensory inputs. The hyperseed algorithm is an unsupervised machine learning algorithm based on VSA for fast learning of a topology-preserving feature map of unlabelled data. In this paper, we implement two sparse block-code methods in the hyperseed algorithm: selecting the maximum element of each block, or selecting a random element of each block, as the nonzero element. Finally, the sparsified hyperseed algorithm is empirically evaluated on three distinct benchmark tasks: Iris classification, classification and visualisation of synthetic datasets from the Fundamental Clustering Problems Suite, and language classification using n-gram statistics.
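The two sparsification methods can be sketched directly: the hypervector is partitioned into equal blocks, and exactly one element per block is kept nonzero — either the block's maximum or a randomly chosen element. Vector length, block size, and the value written at the nonzero position are illustrative choices here, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def sparsify_max(hv, block):
    # Keep the maximum element of each block as the single nonzero.
    out = np.zeros_like(hv)
    for start in range(0, hv.size, block):
        i = start + int(np.argmax(hv[start:start + block]))
        out[i] = 1.0
    return out

def sparsify_random(hv, block, rng):
    # Keep a random element of each block as the single nonzero.
    out = np.zeros_like(hv)
    for start in range(0, hv.size, block):
        out[start + rng.integers(block)] = 1.0
    return out

hv = rng.normal(size=512)
s = sparsify_max(hv, block=32)
print(int(np.count_nonzero(s)))   # one nonzero per block: 512 / 32 = 16
```

Both methods produce a sparse block code with a fixed sparsity of one active element per block; they differ in whether the surviving position is data-dependent (max) or data-independent (random).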

Place, publisher, year, edition, pages
IEEE, 2022
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94787 (URN); 10.1109/IJCNN55064.2022.9892981 (DOI); 000867070908091 (); 2-s2.0-85140777444 (Scopus ID)
Conference
IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Note

ISBN for host publication: 978-1-7281-8671-9

Available from: 2022-12-08 Created: 2022-12-08 Last updated: 2023-05-08. Bibliographically approved.
Rosato, A., Panella, M., Osipov, E. & Kleyko, D. (2022). Few-shot Federated Learning in Randomized Neural Networks via Hyperdimensional Computing. In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings. Paper presented at IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022. IEEE
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

The recent interest in federated learning has initiated the investigation of efficient models deployable in scenarios with strict communication and computational constraints. Furthermore, the inherent privacy concerns in decentralized and federated learning call for efficient distribution of information in a network of interconnected agents. We therefore propose a novel distributed classification solution based on shallow randomized networks equipped with a compression mechanism for sharing the local model in the federated context. We make extensive use of hyperdimensional computing both in the local network model and in the compressed communication protocol, which is enabled by the binding and superposition operations. The accuracy, precision, and stability of the proposed approach are demonstrated on a collection of datasets with several network topologies and different data partitioning schemes.
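The compression idea via binding and superposition can be sketched as follows. Each parameter block is bound to a random bipolar key (binding = element-wise multiplication) and all bound blocks are superposed (added) into one message vector; a receiver holding the same keys unbinds approximate copies. The block names, shapes, keys, and the bipolar model are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(5)
D = 4096

# Shared random bipolar keys, one per parameter block (hypothetical names).
keys = {name: rng.choice([-1.0, 1.0], D) for name in ("w1", "w2")}
weights = {name: rng.normal(size=D) for name in ("w1", "w2")}

# Binding = element-wise multiplication; superposition = addition.
# Everything is compressed into a single D-dimensional message.
message = sum(keys[n] * weights[n] for n in weights)

# Unbinding with a key recovers that block's weights plus crosstalk noise
# (the residual is the other bound term, uncorrelated with w1).
recovered = keys["w1"] * message

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(recovered, weights["w1"]))   # high: w1 dominates
print(cosine(recovered, weights["w2"]))   # near zero: crosstalk only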

Place, publisher, year, edition, pages
IEEE, 2022
National Category
Computer Sciences; Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94152 (URN); 10.1109/IJCNN55064.2022.9892007 (DOI); 000867070901017 (); 2-s2.0-85140772045 (Scopus ID)
Conference
IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Note

ISBN for host publication: 978-1-7281-8671-9

Available from: 2022-11-18 Created: 2022-11-18 Last updated: 2023-05-08. Bibliographically approved.
Osipov, E., Kahawala, S., Haputhanthri, D., Kempitiya, T., De Silva, D., Alahakoon, D. & Kleyko, D. (2022). Hyperseed: Unsupervised Learning With Vector Symbolic Architectures. IEEE Transactions on Neural Networks and Learning Systems
2022 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Motivated by recent innovations in biologically inspired neuromorphic hardware, this article presents a novel unsupervised machine learning algorithm named Hyperseed that draws on the principles of vector symbolic architectures (VSAs) for fast learning of a topology-preserving feature map of unlabeled data. It relies on two major VSA operations: binding and bundling. The algorithmic part of Hyperseed is expressed within the Fourier holographic reduced representations (FHRR) model, which is specifically suited for implementation on spiking neuromorphic hardware. The two primary contributions of the Hyperseed algorithm are few-shot learning and a learning rule based on a single vector operation. These properties are empirically evaluated on synthetic datasets and on illustrative benchmark use cases: Iris classification and a language identification task using n-gram statistics. The results of these experiments confirm the capabilities of Hyperseed and its applications in neuromorphic hardware.
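In the FHRR model the abstract names, a hypervector is a vector of unit-magnitude complex phasors; binding is element-wise complex multiplication, and unbinding is multiplication by the complex conjugate, which makes binding an exact inverse pair. A minimal sketch (dimensionality and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(11)
D = 1024

def random_fhrr():
    # A vector of unit phasors with uniformly random phases.
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def bind(a, b):
    return a * b                      # element-wise complex multiplication

def unbind(a, b):
    return a * np.conj(b)             # exact inverse of bind

def similarity(a, b):
    # Real part of the normalized inner product; 1 for identical vectors.
    return float(np.real(np.vdot(a, b)) / a.size)

x, s = random_fhrr(), random_fhrr()
assert np.allclose(unbind(bind(x, s), s), x)   # binding is exactly invertible
print(similarity(x, s))                         # near zero for random vectors
```

The phasor form is what makes FHRR attractive for spiking hardware: each component can be realized as a spike phase, and binding becomes phase addition.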

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Hyperseed, neuromorphic hardware, self-organizing maps (SOMs), vector symbolic architectures (VSAs)
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94924 (URN); 10.1109/TNNLS.2022.3211274 (DOI); 000890842400001 (); 36383581 (PubMedID); 2-s2.0-85142777444 (Scopus ID)
Available from: 2022-12-20 Created: 2022-12-20 Last updated: 2023-02-24
Kleyko, D., Frady, E. P., Kheffache, M. & Osipov, E. (2022). Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware. IEEE Transactions on Neural Networks and Learning Systems, 33(4), 1688-1701
2022 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 33, no 4, p. 1688-1701. Article in journal (Refereed). Published.
Abstract [en]

We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. The architecture yields dramatic improvements in memory footprint and computational efficiency with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy efficient than the conventional ESN.
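The reservoir update described above can be sketched in a few lines: the recurrent matrix multiply becomes a cyclic shift of the integer state, the input is added as a quantized bipolar vector, and values are clipped so the state stays within a small integer range. Reservoir size, clipping bound, and the sign-based input encoding are illustrative parameters, not the paper's exact configuration.

```python
import numpy as np

N, kappa = 500, 3                    # reservoir size; clipping bound |x| <= kappa
rng = np.random.default_rng(2)
proj = rng.choice([-1, 1], N).astype(np.int8)   # fixed bipolar input projection

def step(state, u):
    recurrent = np.roll(state, 1)               # cyclic shift replaces W @ state
    inp = (proj * int(np.sign(u))).astype(np.int8)
    # Clipping keeps every component representable with very few bits.
    return np.clip(recurrent + inp, -kappa, kappa).astype(np.int8)

state = np.zeros(N, dtype=np.int8)
for u in [1.0, -1.0, 1.0, 1.0]:
    state = step(state, u)

print(state.dtype, int(np.max(np.abs(state))))
```

The design point is that the state never leaves int8 range: memory footprint is N small integers, and the "recurrence" is a pointer rotation, which is what makes the architecture cheap on FPGAs and other digital hardware.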

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Dynamic systems modeling, echo state networks (ESNs), hyperdimensional computing (HDC), memory capacity, reservoir computing (RC), time-series classification, vector symbolic architectures
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-82337 (URN); 10.1109/TNNLS.2020.3043309 (DOI); 000778930100029 (); 33351770 (PubMedID); 2-s2.0-85098778990 (Scopus ID)
Funder
Swedish Research Council, 2015-04677; EU, Horizon 2020, 839179
Note

Validated;2022;Level 2;2022-04-13 (sofila);

Funder: Defense Advanced Research Projects Agency

Available from: 2021-01-13 Created: 2021-01-13 Last updated: 2023-10-28. Bibliographically approved.
Kempitiya, T., De Silva, D., Kahawala, S., Haputhanthri, D., Alahakoon, D. & Osipov, E. (2022). Parameterization of Vector Symbolic Approach for Sequence Encoding Based Visual Place Recognition. In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings. Paper presented at IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022. IEEE
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper, Published paper (Refereed).
Abstract [en]

Sequence-based methods for visual place recognition (VPR) are important because sequences capture additional information relative to single-image comparison. Vector symbolic architectures (VSA) have started to gain attention within these methods owing to their unique capability of representing variable-length sequences as single high-dimensional vectors. However, the effect of different sequence parameters on the visual place recognition task is yet to be explored. In this work, we explore the parametrization of sequence encoding with VSA in the SeqNet variant of sequence-based visual place recognition and introduce a new hierarchical VPR method that utilizes the proposed parametrization. We show that with our parametrization, the VSA realization of sequence-based visual place recognition achieves results on par with conventional algorithms, while being implementable on novel neuromorphic hardware for efficient execution.
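The core of VSA sequence encoding can be sketched as follows: each frame's (bipolar) descriptor is permuted according to its position in the sequence, and all permuted frames are superposed into one fixed-width vector. The cyclic-shift permutation, sequence length, and dimensionality are illustrative assumptions; they are exactly the kind of parameters whose effect the paper studies.

```python
import numpy as np

rng = np.random.default_rng(9)
D, L = 8192, 5
# Hypothetical bipolar image descriptors for a sequence of L frames.
frames = [rng.choice([-1.0, 1.0], D) for _ in range(L)]

def encode_sequence(frames):
    # Permute frame t by a cyclic shift of t positions, then superpose.
    return sum(np.roll(f, t) for t, f in enumerate(frames))

seq = encode_sequence(frames)

def match(query_frames, reference):
    # Dot-product similarity between an encoded query and a reference.
    return float(encode_sequence(query_frames) @ reference) / len(query_frames)

# The encoding is order-sensitive: the same frames in reverse order match
# far more weakly than the original ordering.
print(match(frames, seq), match(frames[::-1], seq))
```

Because the position permutation decorrelates out-of-order frames, a single dot product against a database of sequence vectors performs the whole-sequence comparison that SeqSLAM-style methods compute frame by frame.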

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
HD computing, SeqSLAM, Vector symbolic architecture, visual place recognition
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-94151 (URN); 10.1109/IJCNN55064.2022.9892397 (DOI); 2-s2.0-85140753596 (Scopus ID)
Conference
IEEE World Congress on Computational Intelligence (WCCI 2022), International Joint Conference on Neural Networks (IJCNN 2022), Padua, Italy, July 18-23, 2022
Note

ISBN for host publication: 978-1-7281-8671-9

Available from: 2022-11-18 Created: 2022-11-18 Last updated: 2023-09-13. Bibliographically approved.
Kovalev, A. K., Shaban, M., Osipov, E. & Panov, A. I. (2022). Vector Semiotic Model for Visual Question Answering. Cognitive Systems Research, 71, 52-63
2022 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 71, p. 52-63. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we propose a Vector Semiotic Model as a possible solution to the symbol grounding problem in the context of Visual Question Answering. The Vector Semiotic Model combines the advantages of a semiotic approach, implemented in the Sign-Based World Model, with Vector Symbolic Architectures. The Sign-Based World Model represents information about a scene depicted in an input image in a structured way and grounds abstract objects in an agent's sensory input. We use the Vector Symbolic Architecture to represent the elements of the Sign-Based World Model on a computational level. The properties of a high-dimensional space and the operations defined for high-dimensional vectors allow the whole scene to be encoded into a single high-dimensional vector with preservation of its structure, which in turn enables explainable reasoning to answer an input question. We conducted experiments on the CLEVR dataset and show results comparable to the state of the art. The proposed combination of approaches, first, points toward a solution of the symbol grounding problem and, second, allows extending the current results to other intelligent tasks (collaborative robotics, embodied intellectual assistance, etc.).
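The structure-preserving scene encoding can be sketched with a toy CLEVR-style example: an object is a superposition of role-filler bindings (bipolar vectors, binding = element-wise product), and answering "what colour is it?" becomes unbinding the role and matching against the vocabulary. The vocabulary and the single-object scene are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 8192
# Hypothetical vocabulary of roles and fillers as random bipolar hypervectors.
vocab = {w: rng.choice([-1, 1], D)
         for w in ("colour", "shape", "red", "blue", "cube", "sphere")}

def bind(a, b):
    return a * b            # element-wise product; self-inverse for bipolar vectors

# Encode one object ("a red cube") as a superposition of role-filler pairs.
obj = bind(vocab["shape"], vocab["cube"]) + bind(vocab["colour"], vocab["red"])

# Query the colour: unbind the role, then match against the vocabulary.
query = bind(obj, vocab["colour"])
scores = {w: int(query @ vocab[w]) for w in ("red", "blue", "cube", "sphere")}
answer = max(scores, key=scores.get)
print(answer)
```

Because every reasoning step is an algebraic operation on named vocabulary vectors, the intermediate `scores` can be inspected directly, which is the sense in which the reasoning is explainable.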

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Vector-symbolic architecture, Semiotic approach, Symbol grounding problem, Causal network, Visual Question Answering
National Category
Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
urn:nbn:se:ltu:diva-87680 (URN); 10.1016/j.cogsys.2021.09.001 (DOI); 000721354800002 (); 2-s2.0-85118894608 (Scopus ID)
Note

Validated;2021;Level 2;2021-11-10 (beamah);

Funder: Russian Science Foundation (20-71-10116)

Available from: 2021-10-28 Created: 2021-10-28 Last updated: 2021-11-29. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0069-640x