Fine-grained Powercap Allocation for Power-constrained Systems based on Multi-objective Machine Learning
School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, China.
Cyberspace Security Research Center, Peng Cheng Laboratory, Shenzhen, China.
2021 (English). In: IEEE Transactions on Parallel and Distributed Systems, ISSN 1045-9219, E-ISSN 1558-2183, Vol. 32, no. 7, p. 1789-1801. Article in journal (Refereed). Published.
Abstract [en]

Power capping is an important technique for keeping a system within a fixed power constraint. On power-constrained systems, however, the powercap must be allocated sensibly according to the workloads of the compute nodes to achieve trade-offs among performance, energy, and powercap, so it is necessary to model performance and energy and to predict the optimal powercap allocation strategies. Existing power-allocation approaches offer insufficient granularity within nodes, and existing modeling approaches usually model performance and energy separately, ignoring the correlation between objectives and failing to expose the Pareto-optimal powercap configurations. This paper therefore proposes an approach to predicting the Pareto-optimal powercap configurations on a power-constrained system for MPI and OpenMP parallel applications. It uses elaborately designed micro-benchmarks and a small number of existing benchmarks to build the training set, then applies a multi-objective machine learning algorithm that combines stacked single-target regression with extreme gradient boosting to build multi-objective models. These models predict the optimal PKG and DRAM powercap settings, helping compute nodes perform fine-grained powercap allocation. Compared with the reference configuration, the models achieve an average powercap reduction of up to 48% and an average energy reduction of up to around 20%, with only a 10%-30% maximum performance drop.
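The two ideas the abstract combines can be illustrated with a toy sketch: stacked single-target (SST) multi-objective regression, followed by filtering the predicted configurations down to a Pareto front. This is a minimal, hypothetical illustration only; it uses a 1-nearest-neighbour regressor as a stand-in for the paper's extreme gradient boosting models, and all function names and data here are made up, not taken from the paper.

```python
# Toy sketch of stacked single-target (SST) multi-objective regression plus a
# Pareto filter. A 1-nearest-neighbour regressor stands in for gradient
# boosting; the data and names are illustrative, not the paper's.

def nn_predict(train_X, train_y, x):
    """1-NN regression: return the target of the closest training point."""
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best]

def sst_predict(X, Y, queries):
    """Stacked single-target: stage 1 fits one model per objective; stage 2
    augments the features with every stage-1 prediction, so each meta-model
    can exploit correlation between objectives (e.g. performance vs energy)."""
    n_targets = len(Y[0])
    cols = [[row[t] for row in Y] for t in range(n_targets)]
    # Stage-1 predictions on the training set itself become extra features.
    stage1 = [[nn_predict(X, cols[t], x) for x in X] for t in range(n_targets)]
    X_aug = [x + [stage1[t][i] for t in range(n_targets)]
             for i, x in enumerate(X)]
    preds = []
    for q in queries:
        q_aug = q + [nn_predict(X, cols[t], q) for t in range(n_targets)]
        preds.append([nn_predict(X_aug, cols[t], q_aug)
                      for t in range(n_targets)])
    return preds

def pareto_front(points):
    """Keep the points not dominated by any other (lower is better here)."""
    return [p for p in points
            if not any(q != p and all(q[k] <= p[k] for k in range(len(p)))
                       for q in points)]
```

A candidate grid of (PKG, DRAM) powercap settings could then be scored with `sst_predict` and filtered with `pareto_front`, mirroring the paper's idea of exposing only the Pareto-optimal powercap configurations rather than the full configuration space.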

Place, publisher, year, edition, pages
IEEE, 2021. Vol. 32, no. 7, p. 1789-1801
Keywords [en]
Power capping, Performance and energy modeling, Pareto front, Multi-objective machine learning
National Category
Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-82362
DOI: 10.1109/TPDS.2020.3045983
ISI: 000621405200024
Scopus ID: 2-s2.0-85098785097
OAI: oai:DiVA.org:ltu-82362
DiVA, id: diva2:1517083
Note

Validated; 2021; Level 2; 2021-02-22 (johcin);

Funder: National Key Research and Development Program of China (2017YFB0202901), Key-Area Research and Development Program of Guangdong Province (2019B010136001), National Natural Science Foundation of China (61672186), Shenzhen Science and Technology Research and Development Foundation (JCYJ20190806143418198)

Available from: 2021-01-13. Created: 2021-01-13. Last updated: 2021-03-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Vasilakos, Athanasios V.
