Scene analysis by mid-level attribute learning using 2D LSTM networks and an application to web-image tagging
University of Kaiserslautern, Kaiserslautern, Germany; German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.
University of Kaiserslautern, Kaiserslautern, Germany. ORCID iD: 0000-0003-4029-6574
University of Kaiserslautern, Kaiserslautern, Germany.
2015 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 63, p. 23-29. Article in journal (Refereed). Published.
Abstract [en]

This paper describes an approach to scene analysis based on supervised training of 2D Long Short-Term Memory recurrent neural networks (LSTM networks). Unlike previous methods, our approach requires no manual construction of feature hierarchies or incorporation of other prior knowledge. Rather, like deep learning approaches using convolutional networks, our recognition networks are trained directly on raw pixel values. However, in contrast to convolutional neural networks, our approach uses 2D LSTM networks at all levels. Our networks yield per-pixel mid-level classifications of input images; since training data for such applications is not available in large quantities, we describe an approach to generating artificial training data and then evaluate the trained networks on real-world images. Our approach performed significantly better than other methods, including Convolutional Neural Networks (ConvNets), while using two orders of magnitude fewer parameters. We further report experiments on a recently published outdoor scene attribute dataset, which allows a fair comparison of scene attribute learning and on which our approach achieves a significant performance improvement (ca. 21%). Finally, our approach is successfully applied to a real-world application, automatic web-image tagging.
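
To make the mechanism described in the abstract concrete, the sketch below shows, in NumPy, one scan direction of a generic 2D (multi-dimensional) LSTM that turns raw pixel values into per-pixel hidden states, followed by a per-pixel softmax over mid-level attribute classes. This is only an illustration of the general 2D LSTM idea and not the architecture, layer sizes, or training setup used in the paper; names such as hidden_size, num_attributes, and the weight keys are hypothetical placeholders.

# Minimal sketch of one scan direction of a 2D (multi-dimensional) LSTM with a
# per-pixel softmax classifier. Illustrative only; not the paper's actual model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mdlstm_scan(image, W, hidden_size):
    """Scan an H x W x C image from the top-left corner with a 2D LSTM cell.

    Each gate sees the current pixel plus the hidden states of the upper and
    left neighbours. Returns per-pixel hidden states of shape H x W x hidden_size.
    """
    H, Wd, C = image.shape
    h = np.zeros((H + 1, Wd + 1, hidden_size))   # zero-padded border for the neighbours
    c = np.zeros((H + 1, Wd + 1, hidden_size))
    for i in range(1, H + 1):
        for j in range(1, Wd + 1):
            x = image[i - 1, j - 1]
            z = np.concatenate([x, h[i - 1, j], h[i, j - 1]])  # pixel + two neighbours
            i_g = sigmoid(W["i"] @ z)            # input gate
            f_v = sigmoid(W["fv"] @ z)           # forget gate, vertical neighbour
            f_h = sigmoid(W["fh"] @ z)           # forget gate, horizontal neighbour
            o_g = sigmoid(W["o"] @ z)            # output gate
            g = np.tanh(W["g"] @ z)              # candidate cell input
            c[i, j] = f_v * c[i - 1, j] + f_h * c[i, j - 1] + i_g * g
            h[i, j] = o_g * np.tanh(c[i, j])
    return h[1:, 1:]

def per_pixel_attributes(hidden, W_out):
    """Softmax over mid-level attribute classes, independently at each pixel."""
    logits = hidden @ W_out.T                    # H x W x num_attributes
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden_size, channels, num_attributes = 8, 3, 5
    in_dim = channels + 2 * hidden_size
    W = {k: rng.normal(scale=0.1, size=(hidden_size, in_dim))
         for k in ("i", "fv", "fh", "o", "g")}
    W_out = rng.normal(scale=0.1, size=(num_attributes, hidden_size))
    img = rng.random((16, 16, channels))         # raw pixel values, no hand-crafted features
    probs = per_pixel_attributes(mdlstm_scan(img, W, hidden_size), W_out)
    print(probs.shape)                           # (16, 16, 5)

In a full MDLSTM, four such scans (one from each image corner) are typically combined before the output layer; the single scan above is kept deliberately small to show only the recurrence over the two spatial dimensions.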

Place, publisher, year, edition, pages
2015. Vol. 63, p. 23-29
Keywords [en]
LSTM, Mid-level attribute learning, Recurrent neural network, Scene analysis, Web-image tagging
Identifiers
URN: urn:nbn:se:ltu:diva-72203
DOI: 10.1016/j.patrec.2015.06.003
OAI: oai:DiVA.org:ltu-72203
DiVA, id: diva2:1271595
Available from: 2018-12-17. Created: 2018-12-17. Last updated: 2019-01-29. Bibliographically approved.

Open Access in DiVA

fulltext (2043 kB), 44 downloads
File information
File name: FULLTEXT01.pdf. File size: 2043 kB. Checksum: SHA-512
27e904ef6d072a8344122eec65bca2119a4a0f1632fdf70fff4548ce694b436ecfb62818dd3ab16bdc8cad2ca3c3e915d398a727e548ffc568f4ce750a11c7f7
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text: http://www.sciencedirect.com/science/article/pii/S0167865515001634

Search in DiVA

By author/editor
Liwicki, Marcus
In the same journal
Pattern Recognition Letters

