DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China; Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen 518066, China.
Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
2022 (English) In: IEEE Journal on Selected Areas in Communications, ISSN 0733-8716, E-ISSN 1558-0008, Vol. 40, no. 2, pp. 611-625. Article in journal (Refereed) Published
Abstract [en]

To reduce uploading bandwidth and address privacy concerns, deep learning at the network edge has emerged as an important topic. Typically, edge devices collaboratively train a shared model on data generated in real time through the Parameter Server framework. Although edge devices share the computing workload, distributed training over edge networks remains time-consuming because of the parameter and gradient transmission between parameter servers and edge devices. Focusing on accelerating distributed Convolutional Neural Network (CNN) training at the network edge, we present DynaComm, a novel scheduler that dynamically decomposes each transmission procedure into several segments to achieve optimal layer-wise overlapping of communication and computation at run time. Through experiments, we verify that DynaComm achieves optimal layer-wise scheduling in all cases compared to competing strategies, while model accuracy remains unaffected.
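The record contains no code, but the abstract's central mechanism, splitting each parameter/gradient transfer into segments so that communication overlaps with layer-wise backward computation, can be illustrated. Below is a minimal Python sketch under assumed numbers: the uplink bandwidth, per-layer timings, gradient sizes, and the fixed segment size are all hypothetical and not taken from the paper, which instead chooses segment boundaries dynamically at run time.

```python
# Hypothetical sketch: overlapping per-layer gradient uploads with
# backward computation, in the spirit of DynaComm's segmented scheduling.
# All numbers are illustrative assumptions; times are in milliseconds.

BANDWIDTH_MB_PER_MS = 0.05          # assumed edge uplink: 50 MB/s

# (backward compute time in ms, gradient size in MB) per layer,
# ordered from output layer to input layer.
layers = [(4.0, 2.0), (6.0, 8.0), (3.0, 1.0), (5.0, 4.0)]

def sequential_time():
    """Baseline: finish the whole backward pass, then upload all gradients."""
    compute = sum(c for c, _ in layers)
    upload = sum(s for _, s in layers) / BANDWIDTH_MB_PER_MS
    return compute + upload

def overlapped_time(segment_mb=1.0):
    """Segmented schedule: each layer's gradient is cut into fixed-size
    segments that start uploading as soon as that layer's backward pass
    completes, instead of waiting for the full backward pass."""
    now = 0.0          # wall clock as backward proceeds layer by layer
    link_free = 0.0    # time at which the uplink becomes idle
    for compute_ms, grad_mb in layers:
        now += compute_ms                  # backward pass for this layer
        remaining = grad_mb
        while remaining > 0:               # enqueue segments onto the uplink
            seg = min(segment_mb, remaining)
            start = max(now, link_free)    # wait only if the link is busy
            link_free = start + seg / BANDWIDTH_MB_PER_MS
            remaining -= seg
    return max(now, link_free)             # done when compute and link finish

print(f"sequential : {sequential_time():.1f} ms")   # 318.0 ms
print(f"overlapped : {overlapped_time():.1f} ms")   # 304.0 ms
```

In this toy model the uplink dominates, so the saving equals the backward-pass time hidden behind transmission; the fixed segment size here is only a stand-in for DynaComm's actual contribution, which is deciding the decomposition dynamically at run time.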

Place, publisher, year, edition, pages
IEEE, 2022. Vol. 40, no. 2, pp. 611-625
Keywords [en]
Edge computing, deep learning training, dynamic scheduling, convolutional neural network
National subject category
Computer Systems
Research subject
Distributed Computer Systems
Identifiers
URN: urn:nbn:se:ltu:diva-87417
DOI: 10.1109/jsac.2021.3118419
ISI: 000742724700013
Scopus ID: 2-s2.0-85119617658
OAI: oai:DiVA.org:ltu-87417
DiVA, id: diva2:1601138
Note

Validated; 2022; Level 2; 2022-03-07 (joosat);

Funder: National Key R&D Program of China (2019YFB2101700, 2018YFB0804402); National Science Foundation of China (U1736115); Key Research and Development Project of Sichuan Province (21SYSX0082)

Available from: 2021-10-07 Created: 2021-10-07 Last updated: 2022-07-04 Bibliographically reviewed

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text
Scopus

Person

Vasilakos, Athanasios V.
