1 - 7 of 7
  • 1.
    Kleyko, Denis
    et al.
    Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA 94720, USA; Intelligent Systems Laboratory, Research Institutes of Sweden, 16440 Kista, Sweden.
    Davies, Mike
    Neuromorphic Computing Laboratory, Intel Labs, Santa Clara, CA, USA.
    Frady, Edward Paxon
    Neuromorphic Computing Laboratory, Intel Labs, Santa Clara, CA, USA.
    Kanerva, Pentti
    Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA.
    Kent, Spencer J.
    Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA.
    Olshausen, Bruno A.
    Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rabaey, Jan M.
    Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA, USA.
    Rachkovskij, Dmitri A.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. International Research and Training Center for Information Technologies and Systems, Kyiv, Ukraine.
    Rahimi, Abbas
    IBM Research–Zurich, Rüschlikon, Switzerland.
    Sommer, Friedrich T.
    Redwood Center for Theoretical Neuroscience, University of California at Berkeley, Berkeley, CA, USA; Neuromorphic Computing Laboratory, Intel Labs, Santa Clara, CA, USA.
    Vector Symbolic Architectures as a Computing Framework for Emerging Hardware. 2022. In: Proceedings of the IEEE, ISSN 0018-9219, E-ISSN 1558-2256, Vol. 110, no 10, p. 1538-1571. Article in journal (Refereed)
    Abstract [en]

    This article reviews recent progress in the development of the computing framework vector symbolic architectures (VSA), also known as hyperdimensional computing. This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for artificial intelligence (AI). We demonstrate in this article that the field-like algebraic structure of VSA offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of VSA, “computing in superposition,” which sets it apart from conventional computing and opens the door to efficient solutions of the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that VSA are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind VSA, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.

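To make the abstract's "simple but powerful operations" concrete, here is a minimal sketch of the core VSA operations (binding, superposition, similarity) in the Multiply-Add-Permute style with bipolar hypervectors. The dimensionality, variable names, and the key-value record example are illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (illustrative)

def random_hv():
    """Random bipolar hypervector; two random ones are nearly orthogonal."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding by elementwise multiplication; bind(bind(a, b), b) == a."""
    return a * b

def bundle(*vs):
    """Superposition with sign thresholding (ties left at 0)."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product: ~0 for unrelated vectors, 1 for identical."""
    return a @ b / D

# Encode the record {NAME: ALICE, AGE: FORTY_TWO} as a single hypervector.
NAME, AGE, ALICE, FORTY_TWO = (random_hv() for _ in range(4))
record = bundle(bind(NAME, ALICE), bind(AGE, FORTY_TWO))

# "Computing in superposition": unbinding a key queries the whole record
# at once and yields a noisy version of the stored filler.
print(sim(bind(record, NAME), ALICE))      # high (about 0.5 here)
print(sim(bind(record, NAME), FORTY_TWO))  # near 0
```
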
  • 2.
    Kleyko, Denis
    et al.
    University of California at Berkeley, USA; Research Institutes of Sweden, Kista, Sweden.
    Rachkovskij, Dmitri A.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. International Research and Training Center for Information Technologies and Systems, Ukraine.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rahimi, Abbas
    IBM Research – Zurich, Zurich, Switzerland.
    A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. 2023. In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no 6, article id 130. Article in journal (Refereed)
    Abstract [en]

    This two-part comprehensive survey is devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Notable models in the HDC/VSA family are Tensor Product Representations, Holographic Reduced Representations, Multiply-Add-Permute, Binary Spatter Codes, and Sparse Binary Distributed Representations, but there are other models too. HDC/VSA is a highly interdisciplinary field with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science, which makes it challenging to create a thorough overview of the field. However, with a surge of new researchers joining the field in recent years, a comprehensive survey has become increasingly necessary. Therefore, this Part I covers important aspects of the field, such as the known computational models of HDC/VSA and the transformations of various input data types to high-dimensional distributed representations. Part II of this survey [84] is devoted to applications, cognitive computing and architectures, as well as directions for future work. The survey is written to be useful for both newcomers and practitioners.

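As a taste of one of the models the abstract names, the following sketch implements the key operations of Binary Spatter Codes: XOR binding, majority-vote bundling, and Hamming-based similarity. The parameters and the recovery example are illustrative assumptions, not code from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality (illustrative)

def random_hv():
    """Random dense binary hypervector."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binary Spatter Codes bind by elementwise XOR; XOR is its own inverse."""
    return a ^ b

def bundle(vs):
    """Bitwise majority vote across hypervectors; ties broken randomly."""
    counts = np.sum(vs, axis=0)
    out = (2 * counts > len(vs)).astype(np.uint8)
    ties = 2 * counts == len(vs)
    out[ties] = rng.integers(0, 2, size=int(ties.sum()), dtype=np.uint8)
    return out

def sim(a, b):
    """1 - normalized Hamming distance: ~0.5 for unrelated vectors."""
    return 1.0 - float(np.mean(a ^ b))

x, y, z = random_hv(), random_hv(), random_hv()
s = bundle([bind(x, y), z])  # store the bound pair (x, y) together with z
print(sim(bind(s, y), x))    # ~0.75: x is recoverable from s via the key y
print(sim(bind(s, y), z))    # ~0.5: unrelated
```
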
  • 3.
    Kleyko, Denis
    et al.
    University of California at Berkeley, Berkeley, CA; Research Institutes of Sweden.
    Rachkovskij, Dmitri
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. International Research and Training Center for Information Technologies and Systems, Ukraine.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rahimi, Abbas
    IBM Research Zurich, Zurich, Switzerland.
    A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges. 2023. In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 55, no 9, article id 175. Article in journal (Refereed)
    Abstract [en]

    This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and distributed vector representations. Holographic Reduced Representations [321, 326] is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family; however, for the sake of consistency, we use HDC/VSA to refer to the field. Part I of this survey [222] covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain; however, we also cover other applications to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.

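Since the abstract singles out Holographic Reduced Representations as the model often used to refer to the whole family, here is a minimal sketch of HRR binding by circular convolution and approximate unbinding by binding with the involution of the role vector. Dimensionality and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4096  # hypervector dimensionality (illustrative)

def random_hv():
    """HRR hypervector: i.i.d. Gaussian entries with variance 1/D."""
    return rng.normal(0.0, 1.0 / np.sqrt(D), size=D)

def bind(a, b):
    """HRR binding: circular convolution, computed via the FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(c, a):
    """Approximate inverse: bind with the involution a*(i) = a(-i mod D)."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

role, filler = random_hv(), random_hv()
trace = bind(role, filler)
print(cos(unbind(trace, role), filler))  # ~0.7: filler recovered up to noise
print(cos(trace, filler))                # ~0: binding hides the filler
```
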
  • 4.
    Rachkovskij, Dmitri A.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. International Research and Training Center for Information Technologies and Systems, Kiev, 03680, Ukraine.
    Representation of spatial objects by shift-equivariant similarity-preserving hypervectors. 2022. In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 34, no 24, p. 22387-22403. Article in journal (Refereed)
    Abstract [en]

    Hyperdimensional Computing (HDC), also known as Vector-Symbolic Architectures (VSA), is an approach that has been proposed to combine the advantages of distributed vector representations and symbolic structured data representations in Artificial Intelligence, Machine Learning, and Pattern Recognition problems. HDC/VSA operate with hypervectors, i.e., brain-like distributed representations of large fixed dimension. The key problem of HDC/VSA is how to transform data of various types into hypervectors. In this paper, we propose a novel approach to forming hypervectors of spatial objects, such as images, that both provides equivariance with respect to object shifts and preserves the similarity of objects described by similar features at nearby positions. In contrast to known hypervector formation methods, we represent the features by compositional hypervectors and exploit permutations of hypervectors to represent the positions of features. We experimentally explored the proposed approach in tasks that exploit various descriptions of two-dimensional (2D) images. In terms of standard accuracy measures such as error rate or mean average precision, our results are on a par with or better than those of other methods and are obtained without feature learning. The proposed techniques were designed for the HDC/VSA model known as Sparse Binary Distributed Representations. However, they can be adapted to hypervectors in the formats of other HDC/VSA models, as well as to representing spatial objects other than 2D images.

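A one-dimensional toy version of the permutation idea from the abstract is sketched below: a feature at position p is represented by its hypervector permuted p times (here the permutation is a cyclic shift), and translating the object permutes the encoding correspondingly. This illustrates only the equivariance property under assumed toy parameters; the paper's similarity preservation for nearby positions and its 2D image encoding require the additional machinery described there.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 8192  # hypervector dimensionality (illustrative)
feature = {c: rng.choice([-1, 1], size=D) for c in "abcd"}  # feature codebook

def encode(seq, offset=0):
    """Superpose feature hypervectors, each permuted once per position:
    the feature at position p is represented by rolling its hypervector
    p (+ offset) steps."""
    return np.sum([np.roll(feature[c], p + offset)
                   for p, c in enumerate(seq)], axis=0)

# Shift equivariance: translating the object by s permutes its encoding by s.
e0 = encode("abcd", offset=0)
e2 = encode("abcd", offset=2)
print(np.array_equal(np.roll(e0, 2), e2))  # True
```
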
  • 5.
    Rachkovskij, Dmitri A.
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. International Research and Training Center for Information Technologies and Systems, Kiev, Ukraine.
    Kleyko, Denis
    University of California at Berkeley, Redwood Center for Theoretical Neuroscience, Berkeley, USA; Research Institutes of Sweden, Intelligent Systems Lab, Kista, Sweden.
    Recursive Binding for Similarity-Preserving Hypervector Representations of Sequences. 2022. In: 2022 International Joint Conference on Neural Networks (IJCNN): 2022 Conference Proceedings, IEEE, 2022. Conference paper (Refereed)
    Abstract [en]

    Hyperdimensional computing (HDC), also known as vector symbolic architectures (VSA), is a computing framework used within artificial intelligence and cognitive computing that operates with distributed vector representations of large fixed dimensionality. A critical step in designing HDC/VSA solutions is to obtain such representations from the input data. Here, we focus on sequences, a widespread data type, and propose their transformation to distributed representations that both preserve the similarity of identical sequence elements at nearby positions and are equivariant with respect to sequence shift. These properties are enabled by forming representations of sequence positions using recursive binding and superposition operations. The proposed transformation was experimentally investigated with symbolic strings used for modeling human perception of word similarity. The obtained results are on a par with those of more sophisticated approaches from the literature. The proposed transformation was designed for the HDC/VSA model known as Fourier Holographic Reduced Representations; however, it can be adapted to some other HDC/VSA models.

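The paper builds its position codes by recursive binding in the Fourier Holographic Reduced Representations (FHRR) model; that recursion is not reproduced here. As a related illustration of how FHRR can make nearby positions similar, the sketch below uses fractional power encoding, a different but standard FHRR position code, with an assumed "bandwidth" parameter; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 4096     # hypervector dimensionality (illustrative)
beta = 10.0  # bandwidth: larger beta spreads similarity over more positions

# FHRR hypervectors are vectors of unit-magnitude complex phasors;
# binding is elementwise complex multiplication.
base = np.exp(1j * rng.uniform(-np.pi / beta, np.pi / beta, size=D))

def pos(p):
    """Encode position p by binding the base phasor vector with itself p times."""
    return base ** p

def sim(a, b):
    """Mean phasor agreement: 1.0 for identical hypervectors, ~0 for unrelated."""
    return float(np.real(np.mean(a * np.conj(b))))

for d in (0, 1, 2, 5, 10):
    print(d, round(sim(pos(10), pos(10 + d)), 3))  # similarity decays with d
```
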
  • 6.
    Tyshchuk, O. V.
    et al.
    Roku Inc., Kyiv, Ukraine.
    Desiateryk, O. O.
    Taras Shevchenko National University of Kyiv, Kyiv, Ukraine.
    Volkov, O. E.
    International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and the MES of Ukraine, Kyiv, Ukraine.
    Revunova, E. G.
    International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and the MES of Ukraine, Kyiv, Ukraine.
    Rachkovskij, Dmitri
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. International Research and Training Center for Information Technologies and Systems of the NAS of Ukraine and the MES of Ukraine, Kyiv, Ukraine.
    A Linear System Output Transformation for Sparse Approximation. 2022. In: Cybernetics and Systems Analysis, ISSN 1060-0396, E-ISSN 1573-8337, Vol. 58, no 5, p. 840-850. Article in journal (Refereed)
    Abstract [en]

    We propose an approach that provides a stable transformation of the output of a linear system into the output of a system with the desired basis. The matrix of basis functions of the linear system has a large condition number, and the sequence of its singular values gradually decreases to zero. Two types of methods for stable output transformation are developed, using matrix approximations based on the truncated Singular Value Decomposition and on Random Projection with different types of random matrices. It is shown that using the output transformation as preprocessing increases the accuracy of solving sparse approximation problems. An example of applying the method to determine the activity of weak radiation sources is considered.

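Based only on the abstract, the truncated-SVD variant can be sketched as follows: transform the observed output y ≈ Ax of the ill-conditioned system into the output Bx of a system with the desired basis B by applying B to a rank-k truncated pseudo-inverse of A. All matrices, sizes, signal models, and noise levels below are made-up stand-ins, and the Random Projection variant is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 100, 60, 12  # outputs, basis functions, truncation rank

# An ill-conditioned basis matrix A: singular values decay gradually to zero.
U, _ = np.linalg.qr(rng.normal(size=(m, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.exp(-0.5 * np.arange(n))
A = (U * s) @ V.T
B = rng.normal(size=(m, n))  # the desired basis (illustrative)

x = V @ (rng.normal(size=n) * np.exp(-0.3 * np.arange(n)))  # compressible signal
y = A @ x + 1e-8 * rng.normal(size=m)                       # noisy system output

# Stable transformation: rank-k truncated-SVD pseudo-inverse of A.
Us, sv, Vt = np.linalg.svd(A, full_matrices=False)
x_trunc = (Vt[:k].T / sv[:k]) @ (Us[:, :k].T @ y)
x_full = np.linalg.pinv(A) @ y  # unregularized: tiny singular values blow up

for name, xh in (("truncated SVD", x_trunc), ("full pinv", x_full)):
    err = np.linalg.norm(B @ xh - B @ x) / np.linalg.norm(B @ x)
    print(f"{name}: relative error {err:.2e}")  # truncation is far more stable
```
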
  • 7.
    Volkov, O.
    et al.
    International Research and Training Center for Information Technologies and Systems of National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Kyiv, Ukraine.
    Komar, M.
    International Research and Training Center for Information Technologies and Systems of National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Kyiv, Ukraine.
    Rachkovskij, Dmitri
    International Research and Training Center for Information Technologies and Systems of National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Kyiv, Ukraine.
    Volosheniuk, D.
    International Research and Training Center for Information Technologies and Systems of National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Kyiv, Ukraine.
    Technology of Autonomous Take-Off and Landing for the Modern Flight and Navigation Complex of an Unmanned Aerial Vehicle. 2022. In: Cybernetics and Systems Analysis, ISSN 1060-0396, E-ISSN 1573-8337, Vol. 58, no 6, p. 882-888. Article in journal (Refereed)