CoMA: Resource Monitoring of Docker Containers
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0002-3437-4540
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0002-4031-2872
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
2015 (English). In: Proceedings of the 5th International Conference on Cloud Computing and Services Science (CLOSER 2015), SCITEPRESS Digital Library, 2015, Vol. 1, pp. 145-154. Conference paper, published paper (refereed).
Abstract [en]

This research paper presents CoMA, a Container Monitoring Agent that oversees the resource consumption of operating-system-level virtualization platforms, primarily targeting container-based platforms such as Docker. The core contribution is CoMA itself, together with a quantitative evaluation verifying the validity of the measurements reported by the agent for three metrics: CPU, memory, and block I/O. The proof-of-concept is implemented for Docker-based systems and consists of CoMA, the Ganglia Monitoring System, and the Host sFlow agent. This research is in line with the rising trend of container adoption, driven by containers' resource efficiency and ease of deployment. These characteristics have positioned containers to topple virtual machines as the reigning virtualization technology in data centers.
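CoMA's source code is not part of this record, but the kind of per-container counters the paper validates (CPU, memory, block I/O) can be sketched by reading a container's control-group files directly. The sketch below is illustrative only, assuming the cgroup v1 layout that Docker used around 2015; the paths and file names are assumptions, not CoMA's actual implementation, and hosts running cgroup v2 expose different files.

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"  # assumed cgroup v1 mount point


def read_counter(path):
    """Read a single-value integer counter file; return None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None


def read_blkio_bytes(path):
    """Sum per-device byte counts in a blkio service-bytes file.

    Data lines look like "8:0 Read 4096"; the trailing "Total N" summary
    line has only two fields and is skipped.
    """
    total = 0
    try:
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 3 and parts[2].isdigit():
                    total += int(parts[2])
    except OSError:
        return None
    return total


def sample_container(container_id):
    """Sample the three metrics the paper evaluates: CPU, memory, block I/O."""
    sub = os.path.join("docker", container_id)
    return {
        "cpu_ns": read_counter(
            os.path.join(CGROUP_ROOT, "cpuacct", sub, "cpuacct.usage")),
        "mem_bytes": read_counter(
            os.path.join(CGROUP_ROOT, "memory", sub, "memory.usage_in_bytes")),
        "blkio_bytes": read_blkio_bytes(
            os.path.join(CGROUP_ROOT, "blkio", sub,
                         "blkio.throttle.io_service_bytes")),
    }
```

A monitoring agent would sample these counters periodically and ship the deltas to a collector (in the paper's setup, via the Host sFlow agent into Ganglia); missing values (container gone, file absent) surface as None rather than exceptions.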

Place, publisher, year, edition, pages
SCITEPRESS Digital Library, 2015. Vol. 1, pp. 145-154
Keywords [en]
Docker, Containers, Containerization, OS-level Virtualization, Operating System Level Virtualization, Virtualization, Resource Monitoring, Cloud Computing, Data Centers, Ganglia, sFlow, Linux, Open-source, Virtual Machines
National Category
Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-31992
DOI: 10.5220/0005448001450154
Local ID: 65372d05-6cc9-4916-b19f-df00f4a146dc
ISBN: 978-989-758-104-5 (electronic)
OAI: oai:DiVA.org:ltu-31992
DiVA id: diva2:1005226
Conference
International Conference on Cloud Computing and Services Science, 20/05/2015 - 22/05/2015
Projects
Cloudberry Datacenters
Note

Approved; 2015; Bibliographic note: The full text of this paper is only available to INSTICC members. 20150821 (larjim)

Available from: 2016-09-30. Created: 2016-09-30. Last updated: 2023-09-05. Bibliographically approved.
In thesis
1. Decentralized Location-aware Orchestration of Containerized Microservice Applications: Enabling Distributed Intelligence at the Edge
2020 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Services that operate on public, private, or hybrid clouds should always be available and reachable to their end-users or clients. However, a shift in the demand for current and future services has led to new requirements on network infrastructure, service orchestration, and Quality-of-Service (QoS). Services related to, for example, online gaming, video streaming, smart cities, smart homes, connected cars, or other Internet-of-Things (IoT) powered use cases are data-intensive and often have real-time and locality requirements. These requirements have pushed for a new computing paradigm, Edge computing, based on moving some intelligence from the cloud to the edge of the network to minimize latency and data transfer. This situation has set new challenges for cloud providers, telecommunications operators, and content providers. This thesis addresses two issues in this problem area that call for distinct approaches and solutions. Both issues share the common objectives of improving energy efficiency and mitigating network congestion by minimizing data transfer to boost service performance, particularly concerning latency, a prevalent QoS metric.

The first issue is the demand for a highly scalable orchestrator that can manage a geographically distributed infrastructure to deploy services efficiently at clouds, edges, or a combination of these. We present an orchestrator that uses process containers as the virtualization technology for efficient infrastructure deployment in the cloud and at the edge. The work focuses on a proof-of-concept design and analysis of a scalable and resilient decentralized orchestrator for containerized applications, and a scalable monitoring solution for containerized processes. The proposed orchestrator deals with the complexity of managing a geographically dispersed and heterogeneous infrastructure to efficiently deploy and manage applications that operate across different geographical locations, thus facilitating the pursuit of bringing some of the intelligence from the cloud to the edge in a way that is transparent to the applications. The results show this orchestrator's ability to scale to 20 000 nodes and to deploy 30 000 applications in parallel. The resource search algorithm employed and the impact of location awareness on the orchestrator's deployment capabilities were also analyzed and deemed favorable.

The second issue is enabling fast real-time predictions and minimizing data transfer in data-intensive scenarios by deploying machine learning models on devices, decreasing both the need for data processing by upper tiers and the prediction latency. Many IoT or edge devices are resource-scarce, such as FPGAs, ASICs, or low-level microcontrollers. On such limited devices, running well-known machine learning algorithms that are too complex or too resource-consuming is infeasible. Consequently, we explore developing novel supervised machine learning algorithms that run efficiently in settings demanding low power and resource consumption and real-time responses. The classifiers proposed are computationally inexpensive, suitable for parallel processing, and have a small memory footprint. They are therefore a viable choice for pervasive systems with one or a combination of these limitations, as they facilitate increasing battery life and achieving reduced predictive latency. An implementation of one of the developed classifiers deployed to an off-the-shelf FPGA resulted in a predictive throughput of 57.1 million classifications per second, or one classification every 17.485 ns.
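The thesis itself specifies the orchestrator's resource search and placement algorithms; as a toy illustration only (the data model and selection rule below are hypothetical, not taken from the thesis), location-aware placement can be reduced to choosing, among the nodes with enough free capacity, the one geographically closest to the client:

```python
import math


def pick_node(nodes, client_pos, cpu_needed):
    """Toy location-aware placement: among nodes with enough free CPU,
    pick the one closest to the client, to reduce latency and data transfer."""
    feasible = [n for n in nodes if n["free_cpu"] >= cpu_needed]
    if not feasible:
        return None  # no node can host the application
    return min(feasible, key=lambda n: math.dist(n["pos"], client_pos))
```

A real decentralized edge orchestrator must also handle node churn, network topology, and distributed state; this sketch captures only the intuition that location awareness steers deployments toward nearby edge nodes when they have spare capacity, and falls back to the cloud otherwise.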

Place, publisher, year, edition, pages
Luleå University of Technology, 2020
Series
Doctoral thesis / Luleå University of Technology, ISSN 1402-1544
National Category
Computer Systems; Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-79135
ISBN: 978-91-7790-617-9
ISBN: 978-91-7790-618-6
Public defence
2020-09-28, A1545, LTU, Luleå, 09:00 (English)
Available from: 2020-06-09. Created: 2020-06-05. Last updated: 2023-09-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: http://www.scitepress.org/DigitalLibrary/PublicationsDetail.aspx?ID=mmpx5QPWYWI=&t=1

Authority records

Jimenez, Lara Lorna; Simon, Miguel Gomez; Schelén, Olov; Kristiansson, Johan; Synnes, Kåre; Åhlund, Christer
