A Blended Attention-CTC Network Architecture for Amharic Text-image Recognition
Bahir Dar Institute of Technology, Bahir Dar, Ethiopia.
Technical University of Kaiserslautern, Kaiserslautern, Germany.
Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. ORCID iD: 0000-0003-4029-6574
DFKI, Augmented Vision Department, Kaiserslautern, Germany.
Show others and affiliations
2021 (English). In: Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods (ICPRAM), SciTePress, 2021, pp. 435-441. Conference paper, published paper (refereed)
Abstract [en]

In this paper, we propose a blended Attention-Connectionist Temporal Classification (CTC) network architecture for text-image recognition of Amharic, a unique script. Amharic is an indigenous Ethiopic script with 34 consonant characters, each of which has 7 vowel variants, and 50 labialized characters derived, with a small change, from the 34 consonants. The change involves modifying the structure of these characters by adding a straight line, shortening and/or elongating one of their main legs, or adding small diacritics to the right, left, top or bottom of the character. Such small changes affect the orthographic identity of a character and result in shape similarity among characters, which makes Amharic an interesting but challenging subject for OCR research. Motivated by the recent success of attention mechanisms in neural machine translation, we propose an attention-based CTC approach designed by blending the attention mechanism directly into the CTC network. The proposed model consists of an encoder module, an attention module, and a transcription module in a unified framework. Evaluation on Amharic shows that the attention mechanism allows the model to learn powerful representations by integrating information from different time steps. Our method outperforms state-of-the-art methods and achieves character error rates of 1.04% and 0.93% on the ADOCR test datasets.
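The encoder-attention-transcription pipeline described in the abstract can be sketched roughly as follows. This is a minimal, untrained NumPy illustration, not the authors' implementation: the encoder features stand in for CNN+BLSTM outputs, the additive-attention parameters are randomly initialized for demonstration, and the greedy decoder only shows how a CTC output layer collapses repeated labels and blanks.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, C = 12, 16, 5          # time steps, feature dim, classes (index 0 = CTC blank)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Encoder module: stand-in for CNN + BLSTM features over a text-line image.
H = rng.standard_normal((T, D))

# Attention module (additive attention, randomly initialized):
# score(i, j) = v . tanh(Wq h_i + Wk h_j), normalized over j.
Wq = 0.1 * rng.standard_normal((D, D))
Wk = 0.1 * rng.standard_normal((D, D))
v = 0.1 * rng.standard_normal(D)
scores = np.tanh((H @ Wq)[:, None, :] + (H @ Wk)[None, :, :]) @ v   # (T, T)
A = softmax(scores, axis=1)          # attention weights per time step
context = A @ H                      # context integrating information across time steps

# "Blending": the attention context is combined with the original features
# and fed into the CTC path.
blended = H + context

# Transcription module: per-time-step class distribution for a CTC loss/decoder.
Wo = 0.1 * rng.standard_normal((D, C))
probs = softmax(blended @ Wo, axis=1)   # (T, C), each row sums to 1

def ctc_greedy_decode(probs, blank=0):
    """Best-path CTC decoding: collapse repeated labels, then drop blanks."""
    best, out, prev = probs.argmax(axis=1), [], blank
    for b in best:
        if b != prev and b != blank:
            out.append(int(b))
        prev = b
    return out

labels = ctc_greedy_decode(probs)
```

In a trained model the parameter matrices would be learned jointly with the CTC loss, and best-path decoding would typically be replaced by beam search; the sketch only shows how attention context is blended into the per-time-step CTC predictions.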

Place, publisher, year, edition, pages
SciTePress, 2021. pp. 435-441
Keywords [en]
Amharic Script, Blended Attention-CTC, BLSTM, CNN, Encoder-decoder, Network Architecture, OCR, Pattern Recognition
HSV category
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:ltu:diva-86383
DOI: 10.5220/0010284204350441
ISI: 000662835900050
Scopus ID: 2-s2.0-85103829482
OAI: oai:DiVA.org:ltu-86383
DiVA, id: diva2:1580729
Conference
10th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2021, Online Streaming, February 4-6, 2021
Note

ISBN of host publication: 978-989-758-486-2

Available from: 2021-07-15. Created: 2021-07-15. Last updated: 2022-12-19. Bibliographically checked.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text, Scopus

Person

Liwicki, Marcus
