DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen 518066, China.
Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
2022 (English). In: IEEE Journal on Selected Areas in Communications, ISSN 0733-8716, E-ISSN 1558-0008, Vol. 40, no. 2, pp. 611-625. Article in journal (Refereed). Published.
Abstract [en]

To reduce uploading bandwidth and address privacy concerns, deep learning at the network edge has become an emerging topic. Typically, edge devices collaboratively train a shared model on real-time generated data through the Parameter Server framework. Although the edge devices share the computing workload, distributed training over edge networks is still time-consuming because of the parameter and gradient transmissions between parameter servers and edge devices. Focusing on accelerating distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission into several segments to achieve optimal layer-wise overlapping of communication and computation at run-time. Through experiments, we verify that DynaComm achieves optimal layer-wise scheduling in all cases compared to competing strategies, while model accuracy remains unaffected.
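
To make the scheduling idea above concrete, here is a minimal Python sketch of layer-wise communication/computation overlapping. It is an illustration only, not DynaComm itself: the layer names, sleep timings, the compute_grad/push_to_server helpers, and the fixed segment count are invented stand-ins, whereas DynaComm chooses the decomposition dynamically at run-time. The sketch shows the core overlap: gradients that finish early in the backward pass are pushed to the parameter server in segments while the remaining layers are still being computed.

import queue
import threading
import time

# Toy stand-ins: layer names, timings, and the fixed segment count are
# invented for illustration; the actual scheduler picks the decomposition
# dynamically at run-time.
LAYERS = ["conv1", "conv2", "fc1", "fc2"]

def compute_grad(layer):
    time.sleep(0.05)  # pretend to run backprop for this layer
    return f"grad({layer})"

def push_to_server(grad, segments=4):
    # Decompose one transmission into segments so sending interleaves
    # with the computation of the layers that are still pending.
    for s in range(segments):
        time.sleep(0.01)  # pretend to send one segment
        print(f"sent segment {s + 1}/{segments} of {grad}")

def trainer(grad_queue):
    # The backward pass visits layers in reverse order; each finished
    # gradient is handed to the sender immediately rather than after
    # the whole pass completes.
    for layer in reversed(LAYERS):
        grad_queue.put(compute_grad(layer))
    grad_queue.put(None)  # sentinel: backward pass finished

def sender(grad_queue):
    while (grad := grad_queue.get()) is not None:
        push_to_server(grad)

q = queue.Queue()
workers = [threading.Thread(target=trainer, args=(q,)),
           threading.Thread(target=sender, args=(q,))]
for t in workers:
    t.start()
for t in workers:
    t.join()

Note that the sketch hard-codes four segments per transmission; the scheduling problem the paper addresses is precisely choosing that layer-wise decomposition at run-time so that communication hides behind computation as much as possible.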

Place, publisher, year, edition, pages
IEEE, 2022. Vol. 40, no. 2, pp. 611-625
Keywords [en]
Edge computing, deep learning training, dynamic scheduling, convolutional neural network
National subject category
Research subject
Distributed Computer Systems
Identifiers
URN: urn:nbn:se:ltu:diva-87417
DOI: 10.1109/jsac.2021.3118419
ISI: 000742724700013
Scopus ID: 2-s2.0-85119617658
OAI: oai:DiVA.org:ltu-87417
DiVA, id: diva2:1601138
Note

Validated; 2022; Level 2; 2022-03-07 (joosat);

Funder: National Key R&D Program of China (2019YFB2101700, 2018YFB0804402); National Science Foundation of China (U1736115); Key Research and Development Project of Sichuan Province (21SYSX0082)

Available from: 2021-10-07 Created: 2021-10-07 Last updated: 2022-07-04 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text
Scopus

Person

Vasilakos, Athanasios V.
