DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen 518066, China.
Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
2022 (English). In: IEEE Journal on Selected Areas in Communications, ISSN 0733-8716, E-ISSN 1558-0008, Vol. 40, no. 2, p. 611-625. Article in journal (Refereed). Published.
Abstract [en]

Deep learning at the network edge has become an emerging topic as a way to reduce uploading bandwidth and address privacy concerns. Typically, edge devices collaboratively train a shared model on data generated in real time, using the Parameter Server framework. Although the edge devices share the computing workload, distributed training over edge networks remains time-consuming because of the parameter and gradient transmissions between parameter servers and edge devices. To accelerate distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission into several segments to achieve optimal layer-wise overlapping of communication and computation at run time. Through experiments, we verify that DynaComm achieves optimal layer-wise scheduling in all cases compared to competing strategies, while model accuracy remains unaffected.

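To make the scheduling idea in the abstract concrete, the following minimal sketch illustrates layer-wise overlapping of segmented gradient uploads with backward-pass computation in a simplified parameter-server setting. It is only an illustration of the general technique, not the authors' implementation; all names (push_segment, backward_layer, SEGMENTS) and the segment count are hypothetical.

```python
# Illustrative sketch: overlap each layer's segmented gradient upload with the
# computation of earlier layers' gradients. Names and timings are hypothetical
# stand-ins, not taken from the DynaComm paper.

from concurrent.futures import ThreadPoolExecutor
import time

import numpy as np

SEGMENTS = 4  # assumed number of chunks each layer's gradient is split into


def push_segment(layer_id: int, seg_id: int, segment: np.ndarray) -> None:
    """Stand-in for sending one gradient segment to a parameter server."""
    time.sleep(0.01 * segment.size / 1e4)  # pretend latency grows with segment size


def backward_layer(layer_id: int, grad_size: int) -> np.ndarray:
    """Stand-in for computing one layer's gradient during backpropagation."""
    time.sleep(0.02)  # pretend compute time
    return np.random.randn(grad_size).astype(np.float32)


def train_step(layer_sizes: list[int]) -> None:
    """Run the backward pass from the last layer to the first, pushing each
    layer's gradient in segments asynchronously so communication overlaps
    with the remaining computation."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        pending = []
        for layer_id in reversed(range(len(layer_sizes))):
            grad = backward_layer(layer_id, layer_sizes[layer_id])
            # Split the gradient and push each segment without blocking,
            # so the next layer's gradient can be computed in parallel.
            for seg_id, segment in enumerate(np.array_split(grad, SEGMENTS)):
                pending.append(pool.submit(push_segment, layer_id, seg_id, segment))
        for f in pending:
            f.result()  # wait for all uploads before the next iteration


if __name__ == "__main__":
    train_step(layer_sizes=[50_000, 20_000, 5_000])
```

In this toy setup the upload of later layers' gradients proceeds while earlier layers are still being computed; the scheduling question the paper addresses is how to choose the segmentation so that this overlap is maximized without delaying the computation.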
Place, publisher, year, edition, pages
IEEE, 2022. Vol. 40, no 2, p. 611-625
Keywords [en]
Edge computing, deep learning training, dynamic scheduling, convolutional neural network
National Category
Computer Systems
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-87417
DOI: 10.1109/jsac.2021.3118419
ISI: 000742724700013
Scopus ID: 2-s2.0-85119617658
OAI: oai:DiVA.org:ltu-87417
DiVA, id: diva2:1601138
Note

Validated; 2022; Level 2; 2022-03-07 (joosat);

Funder: National Key R&D Program of China (2019YFB2101700, 2018YFB0804402); National Science Foundation of China (U1736115); Key Research and Development Project of Sichuan Province (21SYSX0082)

Available from: 2021-10-07. Created: 2021-10-07. Last updated: 2022-07-04. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Vasilakos, Athanasios V.
