SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Trippstadter Str. 122, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany; MindGarage, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Trippstadter Str. 122, 67663 Kaiserslautern, Germany.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. ORCID iD: 0000-0003-4029-6574
Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Trippstadter Str. 122, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany; MindGarage, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 120781-120791. Article in journal (Refereed). Published.
Abstract [en]

Depth completion involves recovering a dense depth map from a sparse depth map and an RGB image. Recent approaches focus on utilizing color images as guidance images to recover depth at invalid pixels. However, color images alone are not enough to provide the necessary semantic understanding of the scene. Consequently, the depth completion task suffers from sudden illumination changes in RGB images (e.g., shadows). In this paper, we propose a novel three-branch backbone comprising color-guided, semantic-guided, and depth-guided branches. Specifically, the color-guided branch takes a sparse depth map and an RGB image as input and generates a color depth map that captures color cues (e.g., object boundaries) of the scene. The predicted dense depth map of the color-guided branch, along with the semantic image and the sparse depth map, is passed as input to the semantic-guided branch for estimating semantic depth. The depth-guided branch takes the sparse, color, and semantic depths to generate the final dense depth map. The color depth, semantic depth, and guided depth are adaptively fused to produce the output of our proposed three-branch backbone. In addition, we propose a semantic-aware multi-modal attention-based fusion block (SAMMAFB) to fuse features between all three branches. We further use CSPN++ with atrous convolutions to refine the dense depth map produced by the three-branch backbone. Extensive experiments show that our model achieves state-of-the-art performance on the KITTI depth completion benchmark at the time of submission.
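
The three-branch data flow described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: each branch is reduced to a small placeholder network, the SAMMAFB fusion blocks and the CSPN++ refinement are omitted, the confidence-weighted averaging is one assumed reading of "adaptively fused", and all names (ThreeBranchBackbone, tiny_branch) and the 3-channel semantic input are illustrative assumptions.

import torch
import torch.nn as nn


def tiny_branch(in_ch: int) -> nn.Sequential:
    # Placeholder for a full encoder-decoder branch; predicts a depth map
    # (channel 0) and a confidence map (channel 1).
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 2, 3, padding=1),
    )


class ThreeBranchBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.color_branch = tiny_branch(3 + 1)         # RGB + sparse depth
        self.semantic_branch = tiny_branch(3 + 1 + 1)  # semantic image + sparse + color depth
        self.depth_branch = tiny_branch(1 + 1 + 1)     # sparse + color + semantic depths

    def forward(self, rgb, semantic, sparse):
        # Color-guided branch: sparse depth + RGB -> color depth.
        c = self.color_branch(torch.cat([rgb, sparse], dim=1))
        color_depth, color_conf = c[:, :1], c[:, 1:]

        # Semantic-guided branch: semantic image + sparse depth + color depth -> semantic depth.
        s = self.semantic_branch(torch.cat([semantic, sparse, color_depth], dim=1))
        sem_depth, sem_conf = s[:, :1], s[:, 1:]

        # Depth-guided branch: sparse + color + semantic depths -> guided depth.
        d = self.depth_branch(torch.cat([sparse, color_depth, sem_depth], dim=1))
        guided_depth, guided_conf = d[:, :1], d[:, 1:]

        # Adaptive fusion (assumed form): confidence-weighted average of the three depth maps.
        weights = torch.softmax(torch.cat([color_conf, sem_conf, guided_conf], dim=1), dim=1)
        depths = torch.cat([color_depth, sem_depth, guided_depth], dim=1)
        return (weights * depths).sum(dim=1, keepdim=True)


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 64, 64)
    semantic = torch.rand(1, 3, 64, 64)  # assumed color-coded semantic map
    sparse = torch.rand(1, 1, 64, 64)
    print(ThreeBranchBackbone()(rgb, semantic, sparse).shape)  # torch.Size([1, 1, 64, 64])

In the paper itself, the fused output of the backbone is further refined by CSPN++ with atrous convolutions; that stage is deliberately left out of this sketch.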

Place, publisher, year, edition, pages
IEEE, 2022. Vol. 10, p. 120781-120791
Keywords [en]
Attention-based fusion for depth completion, Benchmark testing, Reliability, Semantic-guided depth completion, State-of-the-art Depth Completion approach on KITTI depth completion benchmark
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:ltu:diva-93839
DOI: 10.1109/ACCESS.2022.3214316
ISI: 000890063400001
Scopus ID: 2-s2.0-85140796312
OAI: oai:DiVA.org:ltu-93839
DiVA, id: diva2:1709213
Note

Validated; 2022; Level 2; 2022-11-28 (sofila)

Funder: The European Project INFINITY (grant no. 883293)

Available from: 2022-11-08. Created: 2022-11-08. Last updated: 2023-02-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Liwicki, Marcus
