Classification and Recall With Binary Hyperdimensional Computing: Tradeoffs in Choice of Density and Mapping Characteristics
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0002-6032-6155
University of California at Berkeley, Berkeley.
International Research and Training Center for Information Technologies and Systems, Kiev, Ukraine.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0003-0069-640X
2018 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 29, no. 12, p. 5880-5898. Article in journal (Refereed). Published.
Abstract [en]

Hyperdimensional (HD) computing is a promising paradigm for future intelligent electronic appliances operating at low power. This paper discusses tradeoffs in selecting the parameters of binary HD representations when applied to pattern recognition tasks. Particular design choices include the density of representations and strategies for mapping data from the original representation. It is demonstrated that for the considered pattern recognition tasks (using synthetic and real-world data) both sparse and dense representations behave nearly identically. This paper also discusses implementation peculiarities that may favor one type of representation over the other. Finally, the capacity of representations of various densities is discussed.
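
To make the density tradeoff concrete, the following sketch (an illustration only, not the paper's procedure; the dimensionality D and the sparsity level are assumed values) generates dense and sparse binary hypervectors and computes the similarity measure typically paired with each.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # assumed dimensionality of the hyperdimensional space

def dense_vector():
    """Dense binary hypervector: each bit is 1 with probability 0.5."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def sparse_vector(p_on=0.01):
    """Sparse binary hypervector: only a small fraction of bits are 1."""
    return (rng.random(D) < p_on).astype(np.uint8)

def hamming_similarity(a, b):
    """Normalized similarity for dense vectors (1.0 means identical)."""
    return 1.0 - np.count_nonzero(a != b) / D

def overlap(a, b):
    """Number of shared active bits, the usual measure for sparse vectors."""
    return int(np.count_nonzero(a & b))

# Unrelated dense vectors are nearly orthogonal: similarity close to 0.5.
print(hamming_similarity(dense_vector(), dense_vector()))
# Unrelated sparse vectors share almost no active bits: overlap close to 0.
print(overlap(sparse_vector(), sparse_vector()))
```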

Place, publisher, year, edition, pages
IEEE, 2018. Vol. 29, no. 12, p. 5880-5898
National Category
Computer Systems
Research subject
Dependable Communication and Computation Systems
Identifiers
URN: urn:nbn:se:ltu:diva-68400
DOI: 10.1109/TNNLS.2018.2814400
ISI: 000451230100008
PubMedID: 29993669
Scopus ID: 2-s2.0-85045214003
OAI: oai:DiVA.org:ltu-68400
DiVA, id: diva2:1198610
Funder
Swedish Research Council, 2015-04677
Note

Validated; 2018; Level 2; 2018-12-05 (svasva)

Available from: 2018-04-18. Created: 2018-04-18. Last updated: 2019-03-04. Bibliographically approved.
In thesis
1. Vector Symbolic Architectures and their applications: Computing with random vectors in a hyperdimensional space
2018 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title [sv]
Vektor symboliska Arkitekturer och deras tillämpningar: Beräkning med slumpmässiga vektorer i ett hyperdimensionellt utrymme
Abstract [en]

The main focus of this thesis lies in a rather narrow subfield of Artificial Intelligence. Like any beloved child, it has many names. The most common ones are Vector Symbolic Architectures and Hyperdimensional Computing. Vector Symbolic Architectures are a family of bio-inspired methods for representing and manipulating concepts and their meanings in a high-dimensional space (hence Hyperdimensional Computing). Information in Vector Symbolic Architectures is evenly distributed across representational units; therefore, they are said to operate with distributed representations. Representational units can be of a different nature; however, the thesis concentrates on the case where units take either binary or integer values.
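
For readers new to the paradigm, the sketch below shows the core operations of one common binary formulation (binding by element-wise XOR, bundling by bitwise majority, similarity by normalized Hamming distance). The choice of operations, dimensionality, and the role-filler example are assumptions made for illustration and not necessarily the exact variant studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # assumed dimensionality

def random_hv():
    """Random dense binary hypervector representing an atomic concept."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding by element-wise XOR: associates two concepts."""
    return a ^ b

def bundle(*vectors):
    """Bundling by bitwise majority (ties broken toward 0 for simplicity)."""
    return (np.sum(vectors, axis=0) > len(vectors) / 2).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity: about 0.5 for unrelated vectors."""
    return 1.0 - np.count_nonzero(a != b) / D

# Encode the record {COLOR: RED, SHAPE: ROUND} as a single hypervector.
COLOR, RED, SHAPE, ROUND = (random_hv() for _ in range(4))
record = bundle(bind(COLOR, RED), bind(SHAPE, ROUND))

# Unbinding with the COLOR role recovers a noisy version of RED.
noisy_red = bind(record, COLOR)
print(similarity(noisy_red, RED))    # well above chance (~0.5)
print(similarity(noisy_red, ROUND))  # near chance
```

In practice the noisy result is cleaned up by comparing it against an item memory of known hypervectors and keeping the closest match.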

This thesis includes eleven scientific papers and extends the research area in three directions: theory of Vector Symbolic Architectures, their applications for pattern recognition, and unification of Vector Symbolic Architectures with other neural-like computational approaches. 

Previously, Vector Symbolic Architectures have been used mainly in the area of cognitive computing for representing and reasoning upon semantically bound information, for example, in analogy-based reasoning. This thesis significantly extends the applicability of Vector Symbolic Architectures to the area of pattern recognition. Pattern recognition is an area that is constantly enlarging its theoretical and practical horizons. Applications of pattern recognition and machine learning can be found in many areas of the present-day world, including health care, robotics, manufacturing, economics, automation, and transportation. Despite this success in many domains, pattern recognition algorithms are still far from their biological counterpart, the brain. In particular, one of the challenges is the large amount of training data required by conventional machine learning algorithms. It is therefore important to look for new possibilities in the area by exploring biologically inspired approaches.

All application scenarios considered in the thesis contribute to the development of the global strategy of creating an information society. Specifically, such important applications as biomedical signal processing, automation systems, and text processing were considered. All application scenarios used the novel methods of mapping data to Vector Symbolic Architectures proposed in the thesis.

In the domain of biomedical signal processing, Vector Symbolic Architectures were applied to three tasks: classification of the modality of medical images, gesture recognition, and assessment of the synchronization of cardiovascular signals. In the domain of automation systems, Vector Symbolic Architectures were used for data-driven fault isolation. In the domain of text processing, Vector Symbolic Architectures were used to search for the longest common substring and to recognize permuted words.

The theoretical contributions of the thesis come in four aspects. First, the thesis proposes several methods for mapping data from its original representation into a distributed representation suitable for further manipulation by Vector Symbolic Architectures. These methods can be used for one-shot learning of patterns of generic sensor stimuli. Second, the thesis presents an analysis of the information capacity of Vector Symbolic Architectures in the case of binary distributed representations. Third, it is shown how to represent finite state automata using Vector Symbolic Architectures. Fourth, the thesis describes an approach for combining Vector Symbolic Architectures with a cellular automaton.
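
As an illustration of the third aspect, the sketch below stores a toy automaton's transition table in a single binary hypervector using binding, permutation, and bundling, and then queries it. This is one possible encoding chosen for illustration; it is not necessarily the construction developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # assumed dimensionality

def random_hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    return a ^ b  # element-wise XOR

def permute(a):
    """Cyclic shift; marks the 'next state' role in a transition."""
    return np.roll(a, 1)

def bundle(vectors):
    """Bitwise majority over an odd number of vectors."""
    return (2 * np.sum(vectors, axis=0) > len(vectors)).astype(np.uint8)

def similarity(a, b):
    return 1.0 - np.count_nonzero(a != b) / D

# Toy automaton: random hypervectors for states and input symbols.
states = {s: random_hv() for s in ["q0", "q1", "q2"]}
symbols = {x: random_hv() for x in ["a", "b"]}
transitions = [("q0", "a", "q1"), ("q1", "b", "q2"), ("q2", "a", "q0")]

# Each transition (s, x, s') becomes s XOR x XOR permute(s');
# the whole automaton is the majority vote over these codes.
automaton = bundle([bind(bind(states[s], symbols[x]), permute(states[t]))
                    for (s, x, t) in transitions])

# Query: from q0 on input 'a', unbind and undo the permutation,
# then pick the most similar known state vector.
query = np.roll(bind(automaton, bind(states["q0"], symbols["a"])), -1)
print(max(states, key=lambda s: similarity(query, states[s])))  # 'q1'
```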

Finally, the thesis presents the results of unifying two computational approaches with Vector Symbolic Architectures. This is one of the most interesting cross-disciplinary contributions of the thesis. First, it is shown that Bloom filters, an important data structure for approximate membership queries, can be treated in terms of Vector Symbolic Architectures. This allows the process of building the filter to be generalized. Second, Vector Symbolic Architectures and Echo State Networks (a special kind of recurrent neural network) were combined. It is shown that Echo State Networks can be implemented using only integer values in the network's units and a much simpler recurrence operation, while preserving the dynamics of the network. This results in a simpler architecture with lower requirements on memory and operations.
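
The Bloom filter connection can be made concrete with a short sketch: each item maps to a sparse binary hypervector whose active bits are its hash positions, the filter is the bitwise OR (a form of bundling) of the item vectors, and a membership query checks whether an item's active bits are all contained in the filter. The filter length and number of hash functions below are illustrative assumptions, not values taken from the thesis.

```python
import hashlib
import numpy as np

M = 1 << 14  # assumed filter length (dimensionality of the sparse vectors)
K = 4        # assumed number of hash functions per item

def item_vector(item: str) -> np.ndarray:
    """Sparse binary hypervector with (at most) K active bits from hashing."""
    v = np.zeros(M, dtype=np.uint8)
    for i in range(K):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        v[int.from_bytes(h[:8], "big") % M] = 1
    return v

def add(filter_vec, item):
    """Adding an item is bundling by bitwise OR (the Bloom filter update)."""
    return filter_vec | item_vector(item)

def maybe_contains(filter_vec, item):
    """Membership query: every active bit of the item must be set."""
    iv = item_vector(item).astype(bool)
    return bool(np.all(filter_vec[iv]))

bloom = np.zeros(M, dtype=np.uint8)
for word in ["hyperdimensional", "binary", "sparse"]:
    bloom = add(bloom, word)

print(maybe_contains(bloom, "binary"))  # True
print(maybe_contains(bloom, "dense"))   # False with high probability
```

Viewed this way, replacing the OR with another bundling operation, or the hash-derived item vectors with another sparse mapping, yields generalized variants of the filter, which is one sense in which the construction can be generalized.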

Place, publisher, year, edition, pages
Luleå: Luleå University of Technology, 2018
Series
Doctoral thesis / Luleå University of Technology, ISSN 1402-1544
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering; Computer Systems; Computer Sciences
Research subject
Dependable Communication and Computation Systems
Identifiers
URN: urn:nbn:se:ltu:diva-68338
ISBN: 978-91-7790-110-5
ISBN: 978-91-7790-111-2
Public defence
2018-06-11, A109, Luleå, 10:00 (English)
Funder
Swedish Research Council, 2015-04677
Available from: 2018-04-16. Created: 2018-04-13. Last updated: 2018-05-31. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Kleyko, Denis; Osipov, Evgeny
