DOCMA: A Decentralized Orchestrator for Containerized Microservice Applications
Jiménez, Lara Lorna. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science (Pervasive and Mobile Computing (PMC)). ORCID iD: 0000-0002-3437-4540
Schelén, Olov. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. ORCID iD: 0000-0002-4031-2872
2019 (English). In: 2019 3rd IEEE International Conference on Cloud and Fog Computing Technologies and Applications: IEEE Cloud Summit 2019. Washington, D.C., USA: IEEE, 2019, p. 45-51. Conference paper, Published paper (Refereed).
Abstract [en]

The advent of the Internet-of-Things and its associated applications is a key business and technological driver in industry. It poses challenges that change the playing field for the Internet and cloud service providers who must enable this new context. Applications and services must now be deployed not only to clusters in data centers but also across data centers and all the way to the edge. A more dynamic and scalable approach to deploying applications in the edge-computing paradigm is therefore necessary. We propose DOCMA, a fully distributed and decentralized orchestrator for containerized microservice applications, built on peer-to-peer principles to enable vast scalability and resiliency. DOCMA provides secure ownership and control of each application without requiring any designated orchestration nodes, as the system is automatic and self-healing. Experimental results on DOCMA's performance highlight its ability to scale.

Place, publisher, year, edition, pages
Washington, D.C., USA: IEEE, 2019. p. 45-51
Series
IEEE Cloud Summit
Keywords [en]
Containers, microservices, orchestration, edge computing, cloud computing, virtualization
National Category
Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
URN: urn:nbn:se:ltu:diva-78413
DOI: 10.1109/CloudSummit47114.2019.00014
ISI: 000652194400008
Scopus ID: 2-s2.0-85085467704
OAI: oai:DiVA.org:ltu-78413
DiVA id: diva2:1422653
Conference
2019 3rd IEEE International Conference on Cloud and Fog Computing Technologies and Applications (IEEE Cloud Summit 2019), 8-10 August 2019, Washington, DC, USA
Note

ISBN for host publication: 978-1-7281-3101-6, 978-1-7281-3102-3

Available from: 2020-04-08. Created: 2020-04-08. Last updated: 2023-09-05. Bibliographically approved.
In thesis
1. Decentralized Location-aware Orchestration of Containerized Microservice Applications: Enabling Distributed Intelligence at the Edge
2020 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Services that operate on public, private, or hybrid clouds should always be available and reachable to their end-users or clients. However, a shift in the demand for current and future services has led to new requirements on network infrastructure, service orchestration, and Quality-of-Service (QoS). Services related to, for example, online gaming, video streaming, smart cities, smart homes, connected cars, or other Internet-of-Things (IoT) powered use cases are data-intensive and often have real-time and locality requirements. These demands have pushed for a new computing paradigm, edge computing, which moves some intelligence from the cloud to the edge of the network to minimize latency and data transfer. This situation sets new challenges for cloud providers, telecommunications operators, and content providers. This thesis addresses two issues in this problem area that call for distinct approaches and solutions. Both share the common objectives of improving energy efficiency and mitigating network congestion by minimizing data transfer to boost service performance, particularly with respect to latency, a prevalent QoS metric. The first issue is the demand for a highly scalable orchestrator that can manage a geographically distributed infrastructure to deploy services efficiently at clouds, edges, or a combination of these. We present an orchestrator that uses process containers as the virtualization technology for efficient infrastructure deployment in the cloud and at the edge. The work focuses on a proof-of-concept design and analysis of a scalable and resilient decentralized orchestrator for containerized applications, and a scalable monitoring solution for containerized processes.
The proposed orchestrator deals with the complexity of managing a geographically dispersed and heterogeneous infrastructure to efficiently deploy and manage applications that operate across different geographical locations, thus facilitating the effort to bring some of the intelligence from the cloud to the edge in a way that is transparent to the applications. The results show this orchestrator's ability to scale to 20 000 nodes and to deploy 30 000 applications in parallel. The resource-search algorithm employed and the impact of location awareness on the orchestrator's deployment capabilities were also analyzed and found favorable. The second issue is enabling fast real-time predictions and minimizing data transfer in data-intensive scenarios by deploying machine learning models on devices, decreasing both the need for data processing by upper tiers and the prediction latency. Many IoT or edge devices, such as FPGAs, ASICs, or low-level microcontrollers, are resource-scarce, and well-known machine learning algorithms are often too complex or too resource-consuming to run on them. Consequently, we explore supervised machine learning algorithms designed to run efficiently in settings that demand low power consumption, low resource consumption, and real-time responses. The proposed classifiers are computationally inexpensive, suitable for parallel processing, and have a small memory footprint. They are therefore a viable choice for pervasive systems with one or a combination of these limitations, as they help increase battery life and reduce predictive latency. An implementation of one of the developed classifiers deployed to an off-the-shelf FPGA achieved a predictive throughput of 57.1 million classifications per second, or one classification every 17.485 ns.
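The reported FPGA throughput and per-classification latency are reciprocal views of the same measurement; as a quick consistency check (rounded figures, not part of the original record):

$$
\frac{1}{57.1 \times 10^{6}\,\text{s}^{-1}} \approx 17.5\,\text{ns}
\qquad\Longleftrightarrow\qquad
\frac{1}{17.485\,\text{ns}} \approx 57.2 \times 10^{6}\,\text{s}^{-1}
$$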

Place, publisher, year, edition, pages
Luleå University of Technology, 2020
Series
Doctoral thesis / Luleå University of Technology, ISSN 1402-1544
National Category
Computer Systems; Media and Communication Technology
Research subject
Pervasive Mobile Computing
Identifiers
urn:nbn:se:ltu:diva-79135 (URN)
978-91-7790-617-9 (ISBN)
978-91-7790-618-6 (ISBN)
Public defence
2020-09-28, A1545, LTU, Luleå, 09:00 (English)
Available from: 2020-06-09. Created: 2020-06-05. Last updated: 2023-09-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Jiménez, Lara Lorna; Schelén, Olov
