1 - 8 of 8
  • 1.
    Emruli, Blerim
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Simple principles of cognitive computation with distributed representations (2012). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Brains and computers represent and process sensory information in different ways. Bridging that gap is essential for managing and exploiting the deluge of unprocessed and complex data in modern information systems. The development of brain-like computers that learn from experience and process information in a non-numeric cognitive way will open up new possibilities in the design and operation of both sensor and information communication systems.

    This thesis presents a set of simple computational principles with cognitive qualities, which can enable computers to learn interesting relationships in large amounts of data streaming from complex and changing real-world environments. More specifically, this work focuses on the construction of a computational model for analogical mapping and the development of a method for semantic analysis with high-dimensional arrays.

    A key function of cognitive systems is the ability to make analogies. A computational model of analogical mapping that learns to generalize from experience is presented in this thesis. This model is based on high-dimensional random distributed representations and a sparse distributed associative memory. The model has a one-shot learning process and an ability to recall distinct mappings. After learning a few similar mapping examples the model generalizes and performs analogical mapping of novel inputs. As a major improvement over related models, the proposed model uses associative memory to learn multiple analogical mappings in a coherent way.

    Random Indexing (RI) is a brain-inspired dimension reduction method that was developed for natural language processing to identify semantic relationships in text. A generalized mathematical formulation of RI is presented, which enables N-way Random Indexing (NRI) of multidimensional arrays. NRI is an approximate, incremental, scalable, and lightweight dimension reduction method for large non-sparse arrays. In addition, it provides low and predictable storage requirements, and also enables the range of array indices to be further extended without modification of the data representation. Numerical simulations of two-way and ordinary one-way RI are presented that illustrate when the approach is feasible. In conclusion, it is suggested that NRI can be used as a tool to manage and exploit Big Data, for instance in data mining, information retrieval, social network analysis, and other machine learning applications.
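    As a rough illustration of the Random Indexing idea summarized in this abstract, the sketch below accumulates context vectors from streamed co-occurrence events using sparse ternary index vectors. The vocabulary, parameter values and function names are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 10_000, 20          # dimensionality and number of nonzero entries

def index_vector():
    """Sparse ternary index vector: K/2 entries +1 and K/2 entries -1."""
    v = np.zeros(D)
    idx = rng.choice(D, size=K, replace=False)
    v[idx[:K // 2]] = 1.0
    v[idx[K // 2:]] = -1.0
    return v

words = ["brain", "computer", "cortex", "neuron", "memory"]
index = {w: index_vector() for w in words}     # fixed random index vectors
context = {w: np.zeros(D) for w in words}      # incrementally updated

def observe(pairs):
    """Stream of co-occurrence events; each update is O(D), no re-analysis."""
    for w, c in pairs:
        context[w] += index[c]

observe([("brain", "neuron"), ("cortex", "neuron"), ("computer", "memory")])

def sim(a, b):
    """Cosine similarity of accumulated context vectors."""
    va, vb = context[a], context[b]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
```

    Because the index vectors are high-dimensional and sparse, unrelated words end up nearly orthogonal, while words sharing contexts become similar; this is the property the thesis generalizes to N-way indexing of arrays.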

  • 2.
    Emruli, Blerim
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Ubiquitous Cognitive Computing: A Vector Symbolic Approach (2014). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A wide range of physical things are currently being integrated with the infrastructure of cyberspace in a process that is creating the so-called Internet of Things. It is expected that Internet-connected devices will vastly outnumber people on the planet in the near future. Such devices need to be easily deployed and integrated, otherwise the resulting systems will be too costly to configure and maintain. This is challenging to accomplish using conventional technology, especially when dealing with complex or heterogeneous systems consisting of diverse components that implement functionality and standards in different ways. In addition, artificial systems that interact with humans, the environment and one another need to deal with complex and imprecise information, which is difficult to represent in a flexible and standardized manner using conventional methods.

    This thesis investigates the use of cognitive computing principles that offer new ways to represent information and design such devices and systems. The core idea underpinning the work presented herein is that functioning systems can potentially emerge autonomously by learning from user interactions and the environment, provided that each component of the system conforms to a set of general information-coding and communication rules.

    The proposed learning approach uses vector-based representations of information, which are common in models of cognition and semantic spaces. Vector symbolic architectures (VSAs) are a class of biology-inspired models that represent and manipulate structured representations of information, which can be used to model high-level cognitive processes such as analogy-making. Analogy-making is a central element of cognition that enables animals to identify and manage new information by generalizing past experiences, possibly from a few learned examples.

    The work presented herein is based on a VSA and a binary associative memory model known as sparse distributed memory. The thesis outlines a learning architecture for the automated configuration and interoperation of devices operating in heterogeneous and ubiquitous environments. To this end, the sparse distributed memory model is extended with a VSA-based analogy-making mechanism that enables generalization from a few learned examples, thereby facilitating rapid learning. The thesis also presents a generalization of random indexing, which is an incremental and lightweight feature extraction method for streaming data that is commonly used to generate vector representations of semantic spaces.

    The impact of this thesis is twofold. First, the appended papers extend previous theoretical and empirical work on vector-based cognitive models, in particular for analogy-making and learning. Second, a new approach for designing the next generation of ubiquitous cognitive systems is outlined, which in principle can enable heterogeneous devices and systems to autonomously learn how to interoperate.

  • 3.
    Emruli, Blerim
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Gayler, Ross W.
    La Trobe University.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Analogical mapping and inference with binary spatter codes and sparse distributed memory (2013). In: The 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, Texas, 4-9 Aug 2013. Piscataway, NJ: IEEE Communications Society, 2013, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    Analogy-making is a key function of human cognition. Therefore, the development of computational models of analogy that automatically learn from examples can lead to significant advances in cognitive systems. Analogies require complex, relational representations of learned structures, which is challenging for both symbolic and neurally inspired models. Vector symbolic architectures (VSAs) are a class of connectionist models for the representation and manipulation of compositional structures, which can be used to model analogy. We study a novel VSA network for the analogical mapping of compositional structures, which integrates an associative memory known as sparse distributed memory (SDM). The SDM enables non-commutative binding of compositional structures, which makes it possible to predict novel patterns in sequences. To demonstrate this property we apply the network to a commonly used intelligence test called Raven’s Progressive Matrices. We present results of simulation experiments for the Raven’s task and calculate the probability of prediction error at 95% confidence level. We find that non-commutative binding requires sparse activation of the SDM and that 10–20% concept-specific activation of neurons is optimal. The optimal dimensionality of the binary distributed representations of the VSA is of the order 10^4, which is comparable with former results and the average synapse count of neurons in the cerebral cortex.
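    The binary spatter code operations this paper builds on can be sketched as follows: binding by elementwise XOR and superposition by bitwise majority vote, with unbinding recovering a noisy copy of the bound filler. This is a minimal illustration of the representation only, not the paper's network; all names and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000                               # dimensionality of the binary vectors

def rand_bits():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding by elementwise XOR; XOR is its own inverse."""
    return a ^ b

def bundle(vs):
    """Superposition by bitwise majority rule; ties broken at random."""
    s = np.sum(vs, axis=0, dtype=np.int32)
    out = (2 * s > len(vs)).astype(np.uint8)
    tie = 2 * s == len(vs)
    out[tie] = rand_bits()[tie]
    return out

def hamming(a, b):
    return float(np.mean(a != b))

# Holistic encoding of a two-role compositional structure:
shape, color = rand_bits(), rand_bits()      # role vectors
circle, red = rand_bits(), rand_bits()       # filler vectors
record = bundle([bind(shape, circle), bind(color, red)])

# Unbinding with a role yields a noisy copy of its filler,
# while unrelated vectors stay at chance-level Hamming distance.
probe = bind(record, color)
```

    In the paper, the sparse distributed memory provides the non-commutative binding that plain XOR (which is commutative) lacks; this sketch shows only the commutative spatter-code layer.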

  • 4.
    Emruli, Blerim
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Analogical mapping with sparse distributed memory: a simple model that learns to generalize from examples (2014). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no 1, p. 74-88. Article in journal (Refereed)
    Abstract [en]

    We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type x_i → y_i and computing analogies x_j → y_j for novel inputs x_j. The model has a one-shot learning process, is randomly initialized and has three exogenous parameters: the dimensionality D of representations, the memory size S and the probability χ for activation of the memory. After learning three examples the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of χ, S and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order 10^4, which is consistent with a known analytical estimate and the synapse count for most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships and we calculate the error probabilities for recall and generalization.
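    A minimal sketch of the holistic-mapping-vector idea, assuming a systematic XOR transformation between inputs and outputs; the paper additionally stores mappings in a sparse distributed memory, which this toy example omits, and all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

def rand_bits():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def noisy(v, p=0.1):
    """Flip each bit with probability p."""
    return v ^ (rng.random(D) < p).astype(np.uint8)

def bundle(vs):
    """Bitwise majority vote; ties broken at random."""
    s = np.sum(vs, axis=0, dtype=np.int32)
    out = (2 * s > len(vs)).astype(np.uint8)
    tie = 2 * s == len(vs)
    out[tie] = rand_bits()[tie]
    return out

# An unknown systematic transformation T relates each x_i to y_i = x_i ^ T.
T = rand_bits()
xs = [rand_bits() for _ in range(3)]
ys = [noisy(x ^ T) for x in xs]          # three noisy mapping examples

# Holistic mapping vector: bundle the XOR of each learned example pair.
M = bundle([x ^ y for x, y in zip(xs, ys)])

# The bundled mapping generalizes to a novel input it has never seen:
x_new = rand_bits()
y_pred = M ^ x_new
err = float(np.mean(y_pred != (x_new ^ T)))
```

    The error rate after three noisy examples is well below the flip probability of a single example, which is the few-shot generalization effect the abstract describes.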

  • 5.
    Emruli, Blerim
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Delsing, Jerker
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Vector space architecture for emergent interoperability of systems by learning from demonstration (2015). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, E-ISSN 2212-6848, Vol. 11, p. 53-64. Article in journal (Refereed)
    Abstract [en]

    The rapid integration of physical systems with cyberspace infrastructure, the so-called Internet of Things, is likely to have a significant effect on how people interact with the physical environment and design information and communication systems. Internet-connected systems are expected to vastly outnumber people on the planet in the near future, leading to grand challenges in software engineering and automation in application domains involving complex and evolving systems. Several decades of artificial intelligence research suggests that conventional approaches to making such systems interoperable using handcrafted "semantic" descriptions of services and information are difficult to apply. In this paper we outline a bioinspired learning approach to creating interoperable systems, which does not require handcrafted semantic descriptions and rules. Instead, the idea is that a functioning system (of systems) can emerge from an initial pseudorandom state through learning from examples, provided that each component conforms to a set of information coding rules. We combine a binary vector symbolic architecture (VSA) with an associative memory known as sparse distributed memory (SDM) to model context-dependent prediction by learning from examples. We present simulation results demonstrating that the proposed architecture can enable system interoperability by learning, for example by human demonstration.
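    Sparse distributed memory (SDM), the associative memory this architecture builds on, can be sketched in a few lines: fixed random hard locations, radius-based activation, and counter-based storage with majority-vote readout. Parameter values are illustrative assumptions; the paper combines the memory with a VSA, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)
D, M = 256, 2_000          # word length and number of hard locations
RADIUS = 108               # activation radius in Hamming distance

addresses = rng.integers(0, 2, size=(M, D), dtype=np.uint8)  # fixed, random
counters = np.zeros((M, D), dtype=np.int32)

def active(cue):
    """Boolean mask of hard locations within RADIUS of the cue address."""
    return np.count_nonzero(addresses != cue, axis=1) <= RADIUS

def write(addr, word):
    """Add +1/-1 per bit to the counters of every active location."""
    counters[active(addr)] += np.where(word == 1, 1, -1).astype(np.int32)

def read(addr):
    """Majority vote over the counters of the active locations."""
    return (counters[active(addr)].sum(axis=0) > 0).astype(np.uint8)

# Heteroassociative use, as in context-dependent prediction:
# store a prediction y at address x, then recall it from the cue.
x = rng.integers(0, 2, size=D, dtype=np.uint8)
y = rng.integers(0, 2, size=D, dtype=np.uint8)
write(x, y)
recalled = read(x)
```

    Because many locations are activated per write, the stored word is distributed across the memory, which is what makes recall robust to noisy cues in the full architecture.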

  • 6.
    Emruli, Blerim
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Delsing, Jerker
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Vector space architecture for emergent interoperability of systems by learning from demonstration (2014). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, E-ISSN 2212-6848, Vol. 9, p. 33-45. Article in journal (Refereed)
    Abstract [en]

    The rapid integration of physical systems with cyberspace infrastructure, the so-called Internet of Things, is likely to have a significant effect on how people interact with the physical environment and design information and communication systems. Internet-connected systems are expected to vastly outnumber people on the planet in the near future, leading to grand challenges in software engineering and automation in application domains involving complex and evolving systems. Several decades of artificial intelligence research suggests that conventional approaches to making such systems interoperable using handcrafted "semantic" descriptions of services and information are difficult to apply. In this paper we outline a bioinspired learning approach to creating interoperable systems, which does not require handcrafted semantic descriptions and rules. Instead, the idea is that a functioning system (of systems) can emerge from an initial pseudorandom state through learning from examples, provided that each component conforms to a set of information coding rules. We combine a binary vector symbolic architecture (VSA) with an associative memory known as sparse distributed memory (SDM) to model context-dependent prediction by learning from examples. We present simulation results demonstrating that the proposed architecture can enable system interoperability by learning, for example by human demonstration.

  • 7.
    Sandin, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Emruli, Blerim
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sahlgren, Magnus
    Swedish Institute of Computer Science, Stockholm.
    Incremental dimension reduction of tensors with random index (2011). Report (Other academic)
    Abstract [en]

    We present an incremental, scalable and efficient dimension reduction technique for tensors that is based on sparse random linear coding. Data is stored in a compactified representation with fixed size, which makes memory requirements low and predictable. Component encoding and decoding are performed on-line without computationally expensive re-analysis of the data set. The range of tensor indices can be extended dynamically without modifying the component representation. This idea originates from a mathematical model of semantic memory and a method known as random indexing in natural language processing. We generalize the random-indexing algorithm to tensors and present signal-to-noise-ratio simulations for representations of vectors and matrices. We also present a mathematical analysis of the approximate orthogonality of high-dimensional ternary vectors, which is a property that underpins this and other similar random-coding approaches to dimension reduction. To further demonstrate the properties of random indexing we present results of a synonym identification task. The method presented here has some similarities with random projection and Tucker decomposition, but it performs well at high dimensionality only (n>10^3). Random indexing is useful for a range of complex practical problems, e.g., in natural language processing, data mining, pattern recognition, event detection, graph searching and search engines. Prototype software is provided. It supports encoding and decoding of tensors of order >= 1 in a unified framework, i.e., vectors, matrices and higher order tensors.
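    The approximate orthogonality of high-dimensional ternary vectors, which underpins the method, can be demonstrated numerically in a few lines; the parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
D, K, N = 10_000, 20, 100   # dimensionality, nonzeros per vector, sample size

def ternary():
    """Random ternary vector with K/2 entries +1 and K/2 entries -1."""
    v = np.zeros(D)
    idx = rng.choice(D, size=K, replace=False)
    v[idx[:K // 2]] = 1.0
    v[idx[K // 2:]] = -1.0
    return v

V = np.vstack([ternary() for _ in range(N)])

# Every vector has squared norm K, so V @ V.T / K is the cosine matrix.
G = V @ V.T / K
off_diag = G - np.eye(N)    # pairwise cosines of distinct vectors
```

    The maximal off-diagonal cosine stays close to zero, i.e. independently drawn sparse ternary vectors are nearly orthogonal with high probability, which is the property the report analyses mathematically.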

  • 8.
    Sandin, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Emruli, Blerim
    SICS Swedish ICT, SE-722 13 Västerås.
    Sahlgren, Magnus
    SICS Swedish ICT, SE-164 29 Kista.
    Random indexing of multi-dimensional data (2017). In: Knowledge and Information Systems, ISSN 0219-1377, E-ISSN 0219-3116, Vol. 52, no 1, p. 267-290. Article in journal (Refereed)
    Abstract [en]

    Random indexing (RI) is a lightweight dimension reduction method, which is used for example to approximate vector-semantic relationships in online natural language processing systems. Here we generalise RI to multi-dimensional arrays and thereby enable approximation of higher-order statistical relationships in data. The generalised method is a sparse implementation of random projections, which is the theoretical basis also for ordinary RI and other randomisation approaches to dimensionality reduction and data representation. We present numerical experiments which demonstrate that a multi-dimensional generalisation of RI is feasible, including comparisons with ordinary RI and principal component analysis (PCA). The RI method is well suited for online processing of data streams because relationship weights can be updated incrementally in a fixed-size distributed representation, and inner products can be approximated on the fly at low computational cost. An open source implementation of generalised RI is provided.
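    A rough sketch of the two-way case in the spirit of this generalisation: logical matrix entries are encoded incrementally into a fixed-size array via outer products of sparse ternary index vectors and decoded with inner products. The normalisation and all parameter values are simplifying assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
D, K, n = 2_000, 10, 50     # reduced dimension, nonzeros, logical matrix size

def ternary():
    """Random ternary index vector with K/2 entries +1 and K/2 entries -1."""
    v = np.zeros(D)
    idx = rng.choice(D, size=K, replace=False)
    v[idx[:K // 2]] = 1.0
    v[idx[K // 2:]] = -1.0
    return v

rows = np.vstack([ternary() for _ in range(n)])   # row index vectors
cols = np.vstack([ternary() for _ in range(n)])   # column index vectors
state = np.zeros((D, D))                          # fixed-size representation

def encode(S, i, j, w):
    """Incrementally add weight w for logical entry (i, j); O(D^2) update."""
    S += w * np.outer(rows[i], cols[j])

def decode(S, i, j):
    """Approximate the stored weight via inner products with index vectors."""
    return rows[i] @ S @ cols[j] / K**2   # each index vector has K nonzeros

encode(state, 3, 7, 5.0)
encode(state, 1, 2, -2.0)
```

    The storage size never depends on n, and decoding a single entry needs only inner products, which is why updates and queries suit streaming data.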
