1 - 3 of 3
  • 1.
    Frady, E. Paxon (Redwood Center for Theoretical Neuroscience, University of California, Berkeley)
    Kleyko, Denis (Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science)
    Sommer, Friedrich T. (Redwood Center for Theoretical Neuroscience, University of California, Berkeley)
    A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks (2018). In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 30, no. 6, p. 1449-1513. Article in journal (Refereed)
    Abstract [en]

    To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and cross-talk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are aligned. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
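
An illustrative sketch (not code from the paper): the abstract describes recurrent networks whose orthogonal recurrent weights index sequence positions and whose symbolic readout uses winner-take-all error correction. The toy below uses a fixed cyclic shift as the orthogonal recurrent map and matches against a random bipolar codebook, in the general spirit of the VSA memory buffers the article analyzes; the dimension, alphabet size, and sequence length are arbitrary assumptions.

```python
# Minimal VSA-style sequence memory: store symbols by shift-and-superpose,
# recall by undoing the shifts and taking a winner-take-all codebook match.
import numpy as np

rng = np.random.default_rng(0)
dim, n_symbols, seq_len = 2000, 27, 10

codebook = rng.choice([-1.0, 1.0], size=(n_symbols, dim))   # random symbol vectors
sequence = rng.integers(0, n_symbols, size=seq_len)

# Store: each new symbol is superposed after shifting the memory state once,
# so the symbol presented t steps ago sits t shifts "deep" in the trace.
state = np.zeros(dim)
for s in sequence:
    state = np.roll(state, 1) + codebook[s]

# Recall the item presented `lag` steps ago: undo the shifts, then pick the
# codebook entry with the largest inner product (winner-take-all readout).
def recall(state, lag):
    probe = np.roll(state, -lag)
    return int(np.argmax(codebook @ probe))

recalled = [recall(state, seq_len - 1 - t) for t in range(seq_len)]
print("stored:  ", sequence.tolist())
print("recalled:", recalled)  # matches with high probability when seq_len << dim
```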

  • 2.
    Gustafsson, Lennart (Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab)
    A Case of Near-Optimal Sensory Integration Based on Kohonen Self-Organizing Maps (2019). In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 31, no. 7, p. 1419-1429. Article in journal (Refereed)
    Abstract [en]

    This letter shows by digital simulation that a simple rule, applied to one-dimensional self-organizing maps that integrate sensory perceptions from two identical sources yielding position information as integers corrupted by independent noise, produces nearly statistically optimal position estimates, as determined by maximum likelihood estimation. There is no learning of the corrupting noise sources, nor is any information about the statistics of the noise sources available to the integrating process. The simple rule employed also yields a measure of the quality of the estimated source position. The letter further shows that if the Bayesian estimates, which are rational numbers, are rounded in order to comply with the stipulation that integers be identified, the Bayesian estimation has a larger variance than the proposed integration.
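
To make the comparison in this abstract concrete, here is a small baseline sketch. It is not the paper's SOM rule; it only shows the textbook maximum-likelihood fusion of two noisy readings of the same integer position (inverse-variance weighting), followed by rounding to an integer as the abstract stipulates. The true position and the noise levels are arbitrary assumptions.

```python
# Maximum-likelihood (flat-prior Bayesian) fusion of two noisy position readings.
import numpy as np

rng = np.random.default_rng(1)
true_pos = 37                    # assumed integer source position
sigma1, sigma2 = 2.0, 3.0        # assumed independent noise levels
n_trials = 100_000

x1 = true_pos + rng.normal(0.0, sigma1, n_trials)
x2 = true_pos + rng.normal(0.0, sigma2, n_trials)

# Inverse-variance weighted average: the ML estimate for independent Gaussian noise.
w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
fused = (w1 * x1 + w2 * x2) / (w1 + w2)

print("variance of sensor 1      :", x1.var())
print("variance of ML fusion     :", fused.var())            # about 1/(w1 + w2)
print("variance after rounding   :", np.round(fused).var())  # rounding adds quantization error
```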

  • 3.
    Jantvik, Tamas (Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab)
    Gustafsson, Lennart (Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab)
    Paplinski, Andrew (Clayton School of Information Technology, CSIT, Monash University)
    A self-organized artificial neural network architecture for sensory integration with applications to letter-phoneme integration (2011). In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 23, no. 8, p. 2101-2139. Article in journal (Refereed)
    Abstract [en]

    The multimodal self-organizing network (MMSON), an artificial neural network architecture carrying out sensory integration, is presented here. The architecture is designed using neurophysiological findings and imaging studies that pertain to sensory integration and consists of interconnected lattices of artificial neurons. In this artificial neural architecture, the degree of recognition of stimuli, that is, the perceived reliability of stimuli in the various subnetworks, is included in the computation. The MMSON's behavior is compared to aspects of brain function that deal with sensory integration. According to human behavioral studies, integration of signals from sensory receptors of different modalities enhances perception of objects and events and also reduces time to detection. In neocortex, integration takes place in bimodal and multimodal association areas and results not only in feedback-mediated enhanced unimodal perception and shortened reaction time, but also in robust bimodal or multimodal percepts. Simulation data from the presented artificial neural network architecture show that it replicates these important psychological and neuroscientific characteristics of sensory integration.
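
As a rough illustration of the kind of architecture this abstract outlines, the sketch below wires two small unimodal self-organizing lattices into a bimodal stage that weights each unimodal response by a crude "degree of recognition". This is an assumption-laden toy, not the MMSON implementation: the lattice sizes, the confidence heuristic, and the synthetic data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

class Lattice:
    """A tiny 1-D self-organizing lattice trained with the classic SOM rule."""
    def __init__(self, n_units, dim):
        self.w = rng.normal(size=(n_units, dim))

    def train(self, data, epochs=20, lr=0.2, radius=2.0):
        for _ in range(epochs):
            for x in rng.permutation(data):
                win = np.argmin(np.linalg.norm(self.w - x, axis=1))
                dist = np.abs(np.arange(len(self.w)) - win)
                h = np.exp(-dist**2 / (2 * radius**2))[:, None]  # neighbourhood kernel
                self.w += lr * h * (x - self.w)

    def respond(self, x):
        d = np.linalg.norm(self.w - x, axis=1)
        win = int(np.argmin(d))
        confidence = float(np.exp(-d[win]))   # crude "degree of recognition" heuristic
        return win, confidence

# Two modalities observing the same underlying classes (toy data, different noise).
classes = rng.normal(size=(3, 4))
data_a = classes[rng.integers(0, 3, 200)] + 0.1 * rng.normal(size=(200, 4))
data_b = classes[rng.integers(0, 3, 200)] + 0.3 * rng.normal(size=(200, 4))

som_a, som_b = Lattice(6, 4), Lattice(6, 4)
som_a.train(data_a)
som_b.train(data_b)

# Bimodal stage: combine the two unimodal responses, weighting each by its
# confidence, so the more reliable modality dominates the joint percept.
x_a, x_b = data_a[0], data_b[0]
(win_a, c_a), (win_b, c_b) = som_a.respond(x_a), som_b.respond(x_b)
joint = np.zeros(6)
joint[win_a] += c_a
joint[win_b] += c_b
print("unimodal winners:", win_a, win_b, "bimodal winner:", int(np.argmax(joint)))
```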
