A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks
Redwood Center for Theoretical Neuroscience, University of California, Berkeley.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0002-6032-6155
Redwood Center for Theoretical Neuroscience, University of California, Berkeley.
2018 (English). In: Neural Computation, ISSN 0899-7667, E-ISSN 1530-888X, Vol. 30, no. 6, pp. 1449-1513. Article in journal (Refereed) Published
Abstract [en]

To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and cross-talk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data. The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are aligned. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
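
The storage-and-readout scheme sketched in the abstract can be illustrated with a small toy example. The code below is not the authors' implementation; it assumes one particular VSA-style instantiation (a fixed random permutation as the orthogonal recurrent weights, a random bipolar codebook as the input weights, and winner-take-all readout against the codebook), and all sizes are hypothetical. The cross-talk between superimposed items is the noise source whose effect on retrieval accuracy the paper's theory quantifies.

import numpy as np

rng = np.random.default_rng(0)
N = 2000           # vector / network dimension (hypothetical choice)
num_symbols = 27   # alphabet size (hypothetical choice)
seq_len = 10       # length of the stored sequence

# Random bipolar codebook: one N-dimensional vector per symbol (random input weights).
codebook = rng.choice([-1.0, 1.0], size=(num_symbols, N))

# Orthogonal recurrent weights realized as a fixed random permutation.
perm = rng.permutation(N)
inv_perm = np.argsort(perm)

# Store: x_{t+1} = P x_t + c_{s_t}; symbols stored earlier are permuted more times.
sequence = rng.integers(num_symbols, size=seq_len)
memory = np.zeros(N)
for s in sequence:
    memory = memory[perm] + codebook[s]

# Retrieve: undo the permutation for each delay, then winner-take-all against the codebook.
recalled = []
for t in range(seq_len):
    probe = memory
    for _ in range(seq_len - 1 - t):      # symbol t was permuted (seq_len - 1 - t) times
        probe = probe[inv_perm]
    recalled.append(int(np.argmax(codebook @ probe)))

print("stored:  ", sequence.tolist())
print("recalled:", recalled)   # agrees with high probability; cross-talk is the only noise

With N large relative to the sequence length, the dot product with the correct codebook entry dominates the cross-talk from the other superimposed items, which is why winner-take-all readout recovers the sequence reliably in this regime.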

Place, publisher, year, edition, pages
MIT Press, 2018. Vol. 30, no. 6, pp. 1449-1513
National subject category
Computer Sciences
Research subject
Communication and Computation Systems
Identifiers
  • URN: urn:nbn:se:ltu:diva-68365
  • DOI: 10.1162/neco_a_01084
  • ISI: 000432863200001
  • PubMedID: 29652585
  • Scopus ID: 2-s2.0-85047470315
  • OAI: oai:DiVA.org:ltu-68365
  • DiVA, id: diva2:1197913
Note

Validated; 2018; Level 2; 2018-06-07 (andbra)

Available from: 2018-04-16 Created: 2018-04-16 Last updated: 2018-06-08 Bibliographically reviewed

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | PubMed | Scopus
