A Free Stale Synchronous Parallel Strategy for Distributed Machine Learning
2019 (English). In: BDE 2019: Proceedings of the 2019 International Conference on Big Data Engineering, Association for Computing Machinery (ACM), 2019, p. 23-29. Conference paper, Published paper (Refereed)
Abstract [en]
As machine learning applications process larger and more complex data, people increasingly use multiple computing nodes to execute machine learning tasks in a distributed way. In practice, however, a few nodes in the system often exhibit poor performance and drag down the efficiency of the whole system. Under existing parallel strategies such as bulk synchronous parallel (BSP) and stale synchronous parallel (SSP), these poorly performing nodes may not be monitored and detected in time. To address this problem, we propose a free stale synchronous parallel (FSSP) strategy that frees the system from the negative impact of such nodes. Our experimental results on classical machine learning algorithms and datasets demonstrate that the FSSP strategy outperforms other existing parallel computing strategies.
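To make the abstract's contrast concrete, the following is a minimal sketch (not taken from the paper) of the general SSP idea and of one hypothetical way a barrier could "free" itself from persistent stragglers. The function names, the median-based exclusion rule, and the `slack` parameter are all illustrative assumptions, not the authors' actual FSSP mechanism.

```python
# Illustrative sketch only: SSP-style progress rule plus a hypothetical
# straggler-exclusion step. The exclusion rule below is an assumption
# for illustration, not the FSSP algorithm from the paper.

def can_advance(worker, clocks, staleness, excluded=frozenset()):
    """SSP rule: a worker may start its next iteration only if it is
    fewer than `staleness` iterations ahead of the slowest worker
    that has not been excluded from the barrier."""
    active = [c for w, c in clocks.items() if w not in excluded]
    return clocks[worker] - min(active) < staleness

def detect_stragglers(clocks, staleness, slack=2):
    """Hypothetical exclusion rule: a worker lagging more than
    slack * staleness iterations behind the median clock is treated
    as a straggler and removed from the synchronization barrier."""
    ranked = sorted(clocks.values())
    median = ranked[len(ranked) // 2]
    return {w for w, c in clocks.items() if median - c > slack * staleness}

# Example: w2 is a persistent straggler that would stall plain SSP.
clocks = {"w0": 10, "w1": 9, "w2": 1}
excluded = detect_stragglers(clocks, staleness=3)
print(excluded)                                 # {'w2'}
print(can_advance("w0", clocks, 3))             # False: blocked by w2
print(can_advance("w0", clocks, 3, excluded))   # True: barrier is freed
```

Under plain SSP, the fast worker `w0` is blocked because it is 9 iterations ahead of `w2`; once `w2` is excluded, the remaining workers proceed within the staleness bound.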
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019. p. 23-29
National Category
Computer Sciences
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-85972
DOI: 10.1145/3341620.3341625
ISI: 000506861000003
Scopus ID: 2-s2.0-85071570962
OAI: oai:DiVA.org:ltu-85972
DiVA, id: diva2:1572766
Conference
International Conference on Big Data Engineering (BDE 2019), Hong Kong, June 11-13, 2019
Note
ISBN of host publication: 978-1-4503-6091-3
Funder: Xinjiang Natural Science Foundation (2016D01B010)
2021-06-24. Bibliographically approved