Recently, the convergence of Blockchain and IoT has attracted attention in many domains including, but not limited to, healthcare, supply chain, agriculture, and telecommunication. Both Blockchain and IoT are sophisticated technologies whose feasibility and performance in large-scale environments are difficult to evaluate. Consequently, a trustworthy Blockchain-based IoT simulator presents an alternative to costly and complicated actual implementation. Our preliminary analysis finds that, to date, there has been no satisfactory simulator for the creation and assessment of blockchain-based IoT applications, which is the principal impetus for our effort. Therefore, this study gathers expert opinions on the development of a simulation environment for blockchain-based IoT applications. To do this, we conducted two different investigations. First, a questionnaire was created to determine whether the development of such a simulator would be of substantial use. Second, interviews were conducted to obtain participants' opinions on the most pressing challenges they encounter with blockchain-based IoT applications. The outcome is a conceptual architecture for simulating blockchain-based IoT applications that we evaluate using two research methods: a questionnaire and a focus group with experts. All in all, we find that the proposed architecture is generally well received due to its comprehensive range of key features and capabilities for blockchain-based IoT purposes.
Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications hosted on single/multiple cloud provider platforms have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). This paper proposes, develops and validates CLAMBS—Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service for efficient QoS monitoring and benchmarking of cloud applications hosted on multi-cloud environments. The major highlight of CLAMBS is its capability of monitoring and benchmarking individual application components, such as databases and web servers, distributed across cloud layers (*-aaS) and spread among multiple cloud providers. We validate CLAMBS using a prototype implementation and extensive experimentation, and show that CLAMBS efficiently monitors and benchmarks application components on multi-cloud platforms including Amazon EC2 and Microsoft Azure.
The service delivery model of cloud computing acts as a key enabler for big data analytics applications, enhancing productivity and efficiency and reducing costs. The ever-increasing flood of data generated from smart phones and sensors such as RFID readers and traffic cams requires innovative provisioning and QoS monitoring approaches to continuously support big data analytics. To provide essential information for effective and efficient big data analytics application QoS monitoring, in this paper we propose and develop CLAMS—Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework. The proposed framework: (a) performs multi-cloud monitoring; and (b) addresses the issue of cross-layer monitoring of applications. We implement and demonstrate CLAMS functions on real-world multi-cloud platforms such as Amazon and Azure.
Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, data processing frameworks, etc.) platforms. Application services hosted on single/multiple cloud provider platforms have diverse characteristics that require extensive monitoring mechanisms to aid in controlling run-time quality of service (e.g., access latency and the number of requests being served per second). To provide essential real-time information for effective and efficient cloud application quality of service (QoS) monitoring, in this paper we propose, develop and validate CLAMS—Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework. The proposed framework is capable of: (a) performing QoS monitoring of application components (e.g., database, web server, application server, etc.) that may be deployed across multiple cloud platforms (e.g., Amazon and Azure); and (b) giving visibility into the QoS of individual application components, which is something not supported by current monitoring services and techniques. We conduct experiments on real-world multi-cloud platforms such as Amazon and Azure to empirically evaluate our framework and the results validate that CLAMS efficiently monitors applications running across multiple clouds.
Cloud monitoring involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VMs, storage, network, appliances, etc.), the physical resources they share, the applications running on them and the data hosted on them. Configuring applications and resources in a cloud computing environment is quite challenging given the large number of heterogeneous cloud resources. Further, at any given point in time, the cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) may need to change to meet application QoS requirements under uncertainties (resource failure, resource overload, workload spikes, etc.). Hence, cloud monitoring tools can assist cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting for service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how the aforementioned research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.
Cyber-Physical Systems (CPS) are highly complex systems for which a new management layer must be developed. This chapter presents the benefits and challenges of deploying real-time Scheduling, Monitoring, and End-to-End SLA (SMeSLA) in CPS. We propose an SMeSLA conceptual architecture which allows end-users to submit their service level requirements to an SLA manager; as a result, the scheduling and monitoring managers operate accordingly. The SMeSLA management layer empowers a CPS to meet consumers' satisfaction and achieve optimal performance. However, in order to successfully deploy SMeSLA in CPS, many technical and general challenges must be addressed.
Within cloud-based Internet of Things (IoT) applications, cloud providers typically employ Service Level Agreements (SLAs) to ensure the quality of their provisioned services. Similar to any other contractual method, an SLA is not immune to breaches. Ideally, an SLA stipulates consequences (e.g., penalties) imposed on cloud providers when they fail to conform to SLA terms. The current practice assumes trust in service providers to acknowledge SLA breach incidents and to execute the associated consequences. Recently, the Blockchain paradigm has introduced compelling capabilities that may enable us to address SLA enforcement more elegantly. This paper proposes and implements a blockchain-based approach for assessing SLA compliance and enforcing consequences. It employs a diagnostic accuracy method for validating the dependability of the proposed solution. The paper also benchmarks Hyperledger Fabric to investigate its feasibility as an underlying blockchain infrastructure with respect to latency and transaction success/fail rates.
A Service Level Agreement (SLA) establishes the trustworthiness of service providers and consumers in several domains, including the Internet of Things (IoT). Given the proliferation of Blockchain technology, we find it compelling to reconsider the assumption of trust and centralised governance typically practised in SLA management, including monitoring, compliance assessment, and penalty enforcement. Therefore, we argue that such critical tasks should be operated by blockchain-based smart contracts in a non-repudiable manner beyond the influence of any SLA party. This paper envisions an IoT scenario wherein a firefighting station outsources end-to-end IoT operations to a specialised service provider. The contractual relationship between them is governed by an SLA which stipulates a set of quality requirements and violation consequences. The main contribution of this paper lies in designing, deploying and empirically evaluating a novel blockchain-based SLA monitoring and compliance assessment framework in the context of IoT. This is done by utilising Hyperledger Fabric (HLF), an enterprise-grade blockchain technology. Our work highlights a set of considerations and best practices on two sides: the IoT application monitoring side and the blockchain side. Moreover, it experimentally validates the reliability of the proposed monitoring approach, which collects relevant metrics from each IoT component and examines them against the quality requirements stated in the SLA. Finally, we propose a novel design for smart contracts at the blockchain side, analyse and benchmark the performance, and demonstrate that the new design successfully handles Multiversion Concurrency Control (MVCC) conflicts typically encountered in blockchain applications, while maintaining sound throughput and latency.
One of the main drivers behind blockchain adoption is a lack of trust among entities serving a common goal, but with different interests. Following the success of Bitcoin, several blockchain platforms have emerged, such as Ethereum and Hyperledger Fabric, to enable conducting distrusted processes in a non-repudiable manner. However, it is not safe to assume the applicability of conventional software design strategies to blockchain-based solutions. In this paper, we assume an untrusted SLA (service level agreement) relationship between an IoT service provider and its consumer. We adopt Hyperledger Fabric for the purpose of implementing SLA compliance assessment. We design a smart contract that takes blockchain's unique features into consideration. The design particularly accounts for the MVCC (multiversion concurrency control) mechanism, which, while effective for resolving the double-spending problem, causes read-write conflicts when high transmission rates are experienced between the IoT application and the blockchain. Using a fire station event monitoring scenario, we describe our smart contract design and solution for conflicting transactions. We experimentally evaluate our solution and demonstrate clear performance improvements in terms of throughput and latency.
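To make the conflict concrete, the following minimal Python simulation (not the paper's Hyperledger Fabric chaincode) mimics version-checked commits: funnelling every reading through one shared aggregate key makes concurrent transactions fail validation on stale reads, whereas per-device composite keys keep write sets disjoint.

```python
# Illustrative simulation of MVCC read-write conflicts (not Fabric chaincode).
# Each transaction records the version of every key it read; the commit is
# rejected if any of those versions changed in the meantime, mirroring the
# validation phase of an MVCC-based ledger.

class VersionedStore:
    def __init__(self):
        self.data = {}      # key -> value
        self.version = {}   # key -> version number

    def read(self, key):
        return self.data.get(key), self.version.get(key, 0)

    def commit(self, read_set, write_set):
        # read_set: {key: version_seen}, write_set: {key: new_value}
        for key, seen in read_set.items():
            if self.version.get(key, 0) != seen:
                return False            # MVCC conflict: stale read
        for key, value in write_set.items():
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True


store = VersionedStore()
readings = [("device-A", 21.5), ("device-B", 19.0)]

# Naive design: every transaction reads and updates one shared aggregate key.
# Two concurrent transactions both read version 0, so the second commit fails.
_, v = store.read("aggregate")
tx1 = ({"aggregate": v}, {"aggregate": [readings[0]]})
tx2 = ({"aggregate": v}, {"aggregate": [readings[1]]})
print(store.commit(*tx1), store.commit(*tx2))    # True False -> conflict

# Per-device composite keys: concurrent transactions touch disjoint keys,
# so both commits succeed and throughput is preserved.
ok = [store.commit({f"metric~{d}": store.read(f"metric~{d}")[1]},
                   {f"metric~{d}": val}) for d, val in readings]
print(ok)                                        # [True, True]
```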
In pursuit of effective service level agreement (SLA) monitoring and enforcement in the context of Internet of Things (IoT) applications, this article regards SLA management as a distrusted process that should not be handled by a single authority. Here, we aim to justify our view on the matter and propose a conceptual blockchain-based framework to cope with some limitations associated with traditional SLA management approaches.
As the Internet of Things (IoT) becomes a reality, millions of devices will be connected to IoT platforms in smart cities. These devices will cater to several areas within a smart city such as healthcare, logistics, and transportation. These devices are expected to generate significant amounts of data requests at high data rates, thereby necessitating performance benchmarking of IoT platforms to ascertain whether they can efficiently handle such devices. In this article, we present our results gathered from extensive performance evaluation of the cloud-based IoT platform, FIWARE. In particular, to study FIWARE's performance, we developed a testbed and generated CoAP and MQTT data to emulate large-scale IoT deployments, crucial for future smart cities. We performed extensive tests and studied FIWARE's performance regarding vertical and horizontal scalability. We present bottlenecks and limitations regarding FIWARE components and their cloud deployment. Finally, we discuss cost-efficient FIWARE deployment strategies that can be extremely beneficial to stakeholders aiming to deploy FIWARE as an IoT platform for smart cities.
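As a rough illustration of how such load can be emulated, the sketch below publishes batches of synthetic sensor messages over MQTT using the paho-mqtt client (1.x-style API assumed); the broker address, topic and payload format are placeholders, not the testbed configuration used in the study.

```python
# Minimal MQTT load-generation sketch (placeholder broker/topic/payload).
import json
import time
import paho.mqtt.client as mqtt

BROKER = "iot-platform.example.org"   # hypothetical broker endpoint
TOPIC = "/sensors/temperature"        # hypothetical topic
N_DEVICES = 100                       # emulated devices
MESSAGES_PER_DEVICE = 10

client = mqtt.Client()                # paho-mqtt 2.x additionally expects a CallbackAPIVersion
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()                   # network loop in a background thread

start = time.time()
for i in range(MESSAGES_PER_DEVICE):
    for device_id in range(N_DEVICES):
        payload = json.dumps({"id": f"dev-{device_id}", "t": 21.0 + i})
        client.publish(TOPIC, payload, qos=1)
    time.sleep(1)                     # publish one batch per second

client.loop_stop()
print(f"published {N_DEVICES * MESSAGES_PER_DEVICE} messages "
      f"in {time.time() - start:.1f} s")
```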
The rapidly emerging Internet of Things supports many diverse applications, including environmental monitoring. Air quality, both indoors and outdoors, has proved to be a significant comfort and health factor for people. This paper proposes a smart context-aware system for indoor air quality monitoring and prediction called DisCPAQ. The system uses data streams from air quality measurement sensors to provide a real-time personalised air quality service to users through a mobile app. The proposed system is agnostic to the sensor infrastructure. The paper proposes a context model based on Context Spaces Theory, presents the architecture of the system and identifies challenges in developing large-scale IoT applications. DisCPAQ implementation, evaluation and lessons learned are all discussed in the paper.
Modern big data processing systems are becoming very complex in terms of large scale, high concurrency and multi-tenancy. Thus, many failures and performance reductions only happen at run-time and are very difficult to capture. Moreover, some issues may only be triggered when certain components are executed. To analyze the root cause of these types of issues, we have to capture the dependencies of each component in real-time. In this paper, we propose SmartMonit, a real-time big data monitoring system, which collects infrastructure information such as the process status of each task. At the same time, we develop a real-time stream processing framework to analyze the coordination among the tasks and the infrastructure. This coordination information is essential for troubleshooting the causes of failures and performance reduction, especially those propagated from other causes.
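As an illustration of the kind of per-task infrastructure telemetry such an agent might collect, the following sketch uses psutil to snapshot host metrics and the status of selected processes; the process filter, message format and shipping mechanism are assumptions, not SmartMonit's actual implementation.

```python
# Sketch of process-level telemetry collection (psutil assumed; the real
# agent, filters and message schema are not reproduced here).
import json
import socket
import time
import psutil

def snapshot(name_filter=("java",)):            # e.g., JVM-based task executors
    procs = []
    for p in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]):
        if any(f in (p.info["name"] or "") for f in name_filter):
            procs.append(p.info)
    return {
        "host": socket.gethostname(),
        "ts": time.time(),
        "cpu": psutil.cpu_percent(interval=None),   # host-wide CPU utilisation
        "mem": psutil.virtual_memory().percent,     # host-wide memory utilisation
        "tasks": procs,                             # per-task process status
    }

# In a real agent this record would be streamed to the analysis framework.
print(json.dumps(snapshot(), indent=2))
```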
Big data processing systems, such as Hadoop and Spark, usually work on large-scale, highly-concurrent, and multi-tenant environments that can easily cause hardware and software malfunctions or failures, thereby leading to performance degradation. Several systems and methods exist to detect big data processing systems' performance degradation, perform root-cause analysis, and even overcome the issues causing such degradation. However, these solutions focus on specific problems such as straggler and inefficient resource utilization. There is a lack of a generic and extensible framework to support the real-time diagnosis of big data systems. In this paper, we propose, develop and validate AutoDiagn. This generic and flexible framework provides holistic monitoring of a big data system while detecting performance degradation and enabling root-cause analysis. We present the implementation and evaluation of AutoDiagn that interacts with a Hadoop cluster deployed on a public cloud and tested with real-world benchmark applications. Experimental results show that AutoDiagn has a small resource footprint, high throughput and low latency.
Exploiting future opportunities and avoiding problematic upcoming events is the main characteristic of a proactively adapting system, leading to several benefits such as uninterrupted and efficient services. In an era when IoT applications are a tangible part of our reality, with interconnected devices almost everywhere, there is potential to leverage the diversity and volume of the data they generate in order to act and take proactive decisions in several use cases, such as smart waste management. Our work focuses on devising a system for proactive adaptation of behavior, named ProAdaWM. We propose a reasoning model and system architecture that handles waste collection disruptions due to severe weather in a sustainable and efficient way using decision theory concepts. The proposed approach is validated by implementing a system prototype and conducting a case study.
The rapid evolution of the Internet of Things (IoT) facilitates the development of IoT applications in domains such as manufacturing, smart cities, retail, agriculture, etc. Such IoT applications collect data, analyze, and extract insightful information to enable decision-making and actuation. There is an unprecedented growth of IoT applications that automate decision-making and actuation without requiring human intervention, which we term autonomic IoT applications. The increasing scale of such applications necessitates holistic measurement and evaluation of application quality. Existing literature has evaluated quality from an end-user perspective, which may be unsuitable when dealing with the complexity of modern IoT applications, especially when they are autonomic. In this paper, we refer to IoT application quality as the aggregate quantitative value of various IoT quality metrics measured at each stage of the autonomic IoT application life cycle. We present an in-depth survey of current state-of-the-art techniques and approaches for evaluating quality of IoT applications. In particular, we survey various definitions to identify the factors that contribute to understanding and evaluating quality in IoT. Furthermore, we present open issues and identify future research directions towards realizing fine-grained quality evaluation of IoT applications. We envision that the identified research directions will, in turn, enable real-time diagnostics of IoT applications and make them quality-aware. This survey can serve as the basis for designing and developing modern, resilient quality-aware autonomic IoT applications.
The rapid evolution of the Internet of Things (IoT) is making way for the development of several IoT applications that require minimal or no human involvement in the data collection, transformation, knowledge extraction, and decision-making (actuation) process. To ensure that such IoT applications (we term them autonomic) function as expected, it is necessary to measure and evaluate their quality, which is challenging in the absence of any human involvement or feedback. Existing Quality of Experience (QoE) literature and most QoE definitions focus on evaluating application quality through the lens of humans receiving application services. However, in autonomic IoT applications, poor quality of decisions and the resulting actions can degrade the application quality, leading to economic and social losses. In this paper, we present a vision, survey and future directions for QoE research in IoT. We review existing QoE definitions followed by a survey of techniques and approaches in the literature used to evaluate QoE in IoT. We identify and review the role of data from the perspective of IoT architectures, which is a critical factor when evaluating the QoE of IoT applications. We conclude the paper by identifying and presenting our vision for future research in evaluating the QoE of autonomic IoT applications.
The MediaWise project aims to expand the scope of existing media delivery systems with novel cloud, personalization and collaboration capabilities that can serve the needs of more users, communities, and businesses. The project develops a MediaWise Cloud platform that supports do-it-yourself creation, search, management, and consumption of multimedia content. The MediaWise Cloud supports pay-as-you-go models and elasticity that are similar to those offered by commercially available cloud services. However, unlike existing commercial CDN service providers such as Limelight Networks and Akamai, the MediaWise Cloud requires no ownership of computing infrastructure and instead relies on the public Internet and public cloud services (e.g., commercial cloud storage to store its content). In addition to integrating such public cloud services into a public cloud-based Content Delivery Network, the MediaWise Cloud also provides the advanced Quality of Service (QoS) management required for the delivery of streamed and interactive high-resolution multimedia content. In this paper, we give a brief overview of the MediaWise Cloud architecture and present a comprehensive discussion of research objectives related to its service components. Finally, we also compare the features supported by existing CDN services against the envisioned objectives of the MediaWise Cloud.
Cloud of Things (CoT) is a vision inspired by the Internet of Things (IoT) and cloud computing, where IoT devices are connected to clouds via the Internet for data storage, processing, analytics and visualization. The CoT ecosystem will encompass heterogeneous clouds, networks and devices to provide seamless service delivery, for example, in smart cities. To enable efficient service delivery, there is a need to guarantee a certain level of quality of service from both the cloud and network perspectives. This paper discusses the Cloud of Things, cloud computing, networks and the new quality of service management research issues arising from the realisation of the CoT ecosystem vision.
The rise in the aging population worldwide is already negatively impacting healthcare systems due to the lack of resources. It is envisioned that the development of novel Internet of Things (IoT)-enabled smart city healthcare systems may not only alleviate the stress on current healthcare systems but may significantly improve the overall quality of life of the elderly. As more elderly homes are fitted with IoT, and intelligent healthcare becomes the norm, there is a need to develop innovative augmented reality (AR) based applications and services that make it easier for caregivers to interact with such systems and assist the elderly on a daily basis. This paper proposes, develops, and validates an AR and IoT-enabled healthcare system to be used by caregivers. The proposed system is based on a smart city IoT middleware platform and provides a standardized, intuitive and non-intrusive way to deliver an elderly person's information to caregivers. We present our prototype, and our experimental results show the efficiency of our system in IoT object detection and relevant information retrieval tasks. The end-to-end execution time, including object detection, communication with the server, and rendering the results in the application, ranges on average between 767 ms and 1,283 ms.
We are in the era of information explosion and overload. Recommender systems can help people quickly find the information they expect when facing an enormous flood of data. Therefore, researchers in both industry and academia are paying increasing attention to this area. The Collaborative Filtering (CF) algorithm is one of the most widely used algorithms in recommender systems. However, it has difficulty dealing with the problems of data sparsity and scalability. This paper presents the Category Preferred Canopy-K-means based Collaborative Filtering Algorithm (CPCKCF) to address the challenges of data sparsity and scalability. In particular, CPCKCF defines the User-Item Category Preferred Ratio (UICPR) and uses it to compute a UICPR matrix. The results can be applied to cluster the user data and find the nearest users to obtain prediction ratings. Our experimental results using the MovieLens dataset demonstrate that, compared with the traditional user-based Collaborative Filtering algorithm, the proposed CPCKCF algorithm improves computational efficiency and recommendation accuracy by 2.81%.
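The sketch below illustrates the general idea of category-preference clustering for collaborative filtering; the exact UICPR formula, the Canopy pre-clustering stage and the MovieLens data handling are not reproduced here, so the ratio definition (share of each user's rating mass per category) should be treated as an assumption.

```python
# Sketch of category-preference clustering for CF (assumed UICPR definition;
# Canopy stage omitted; toy data instead of MovieLens).
import numpy as np
from sklearn.cluster import KMeans

# toy data: ratings[u, i] (0 = unrated) and one category per item
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [0, 1, 5, 4],
                    [1, 0, 4, 5]], dtype=float)
item_category = np.array([0, 0, 1, 1])     # e.g., 0 = drama, 1 = comedy
n_categories = 2

# Assumed UICPR: fraction of each user's total rating mass falling in a category
uicpr = np.zeros((ratings.shape[0], n_categories))
for c in range(n_categories):
    uicpr[:, c] = ratings[:, item_category == c].sum(axis=1)
uicpr /= uicpr.sum(axis=1, keepdims=True)

# Cluster users by category preference, then predict a rating from the
# cluster-mates who actually rated the target item.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(uicpr)

def predict(user, item):
    mates = (labels == labels[user]) & (ratings[:, item] > 0)
    return ratings[mates, item].mean() if mates.any() else ratings[ratings > 0].mean()

print(predict(0, 2))   # user 0's predicted rating for item 2
```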
Cloud computing is a paradigm shift that enables applications and related content (audio, video, text, images, etc.) to be provisioned in an on-demand manner and accessed by anyone, anywhere in the world, without the need to own expensive computing and storage infrastructures. Interactive multimedia content-driven applications in the domains of healthcare, aged-care, and education have emerged as one of the new classes of big data applications. This new generation of applications needs to support complex content operations including production, deployment, consumption, personalisation, and distribution. However, to efficiently provision these applications on Cloud data centres, there is a need to understand their run-time resource configurations. For example: (i) where to store and distribute the content to and from, driven by end-user Service Level Agreements (SLAs)? (ii) how many content distribution servers to provision? and (iii) what Cloud VM configuration (number of instances, types, speed, etc.) to provision? In this paper, we present concepts and factors related to engineering such content-driven applications over public Clouds. Based on these concepts and factors, we propose a performance evaluation methodology for quantifying and understanding the runtime configuration of these classes of applications. Finally, we conduct several benchmark-driven experiments to validate the feasibility of the proposed methodology.
The cloud computing paradigm is continually evolving, and with it, the size and the complexity of its infrastructure. Assessing the performance of a cloud environment is an essential but arduous task. Further, the energy consumed by data centers is steadily increasing, and major components such as storage systems need to become more energy efficient. Cloud simulation tools have proved quite useful to study these issues. However, these simulation tools lack mechanisms to study energy-efficient storage in cloud systems. This paper contributes in the area of cloud computing by extending the widely used cloud simulator CloudSim. In this paper, we propose CloudSimDisk, a scalable module for modeling and simulation of energy-aware storage in cloud systems. We show how CloudSimDisk can be used to simulate energy-aware storage, and how it can be extended to study new algorithms for energy-awareness in cloud systems. Our simulation results proved to be in accordance with the analytical models developed to model the energy consumption of hard disk drives in cloud systems. The source code of CloudSimDisk is also made available to the research community for further testing and development.
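For intuition about the kind of analytical model involved, the following sketch computes disk energy with a simple two-state (active/idle) power model; the power and transfer-rate figures are placeholders, not the HDD characteristics or the exact model used to validate CloudSimDisk.

```python
# Two-state disk energy model sketch (placeholder figures, not CloudSimDisk's
# validated HDD models).
ACTIVE_POWER_W = 9.5      # assumed power while seeking/transferring
IDLE_POWER_W = 4.5        # assumed power while spinning idle
TRANSFER_RATE_MBPS = 150  # assumed sustained transfer rate, MB/s

def disk_energy_joules(request_sizes_mb, observation_window_s):
    """Energy = active_power * busy_time + idle_power * remaining idle time."""
    busy_time = sum(request_sizes_mb) / TRANSFER_RATE_MBPS
    idle_time = max(observation_window_s - busy_time, 0.0)
    return ACTIVE_POWER_W * busy_time + IDLE_POWER_W * idle_time

# e.g., 200 requests of 50 MB each over a 5-minute observation window
print(f"{disk_energy_joules([50] * 200, 300):.0f} J")
```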
Today's research on Quality of Experience (QoE) mainly addresses multimedia services. With the introduction of the Internet of Things (IoT), there is a need for new ways of evaluating QoE. Emerging IoT services, such as autonomous vehicles (AVs), are more complex and involve additional quality requirements, such as those related to the machine-to-machine communication that enables self-driving. In fully autonomous cases, it is intelligent machines that operate the vehicles. Thus, it is not clear how intelligent machines will impact end-user QoE, nor how end users can alter and affect a self-driving vehicle. This article argues for a paradigm shift in the QoE area to cover the relationship between humans and intelligent machines. We introduce the term Quality of IoT-experience (QoIoT) within the context of AVs, where the quality evaluation, besides end users, considers quantifying the perspectives of intelligent machines with objective metrics. Hence, we propose a novel architecture that considers Quality of Data (QoD), Quality of Network (QoN), and Quality of Context (QoC) to determine the overall QoIoT in the context of AVs. Finally, we present a case study to illustrate the use of QoIoT.
Connected and automated vehicles (CAVs) are envisioned to revolutionize the transportation industry, enabling autonomous processes and real-time exchange of information among vehicles and infrastructure. To safely navigate the roadways, CAVs rely on sensor readings and data from the surrounding vehicles. Hence, a fault or anomaly arising from the hardware, software, or the network can lead to devastating consequences for safety. This study investigates potential performance degradation caused by anomalies by analyzing real-life vehicles' sensory and network-related data. The aim is to utilize unsupervised learning for anomaly detection, with the goal of describing the cause and effect of the detected anomalies from a performance perspective. The results show around 93% F1-score when detecting anomalies imposed by the cellular network and the vehicle's sensors. Moreover, with approximately 90% F1-score, we can detect anomalous predictions from a deployed network-related ML model predicting cellular throughput and describe the root causes behind the detected anomalies.
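A minimal sketch of the unsupervised detection step is shown below, using an Isolation Forest on synthetic vehicle and network features; the algorithm choice, feature set, contamination level and data are assumptions and do not reproduce the study's dataset or pipeline.

```python
# Unsupervised anomaly-detection sketch on synthetic CAV-style samples
# (illustrative only; not the study's data, features or chosen method).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# synthetic stand-in for per-second samples:
# [cellular throughput (Mbit/s), RTT (ms), speed (m/s), sensor update rate (Hz)]
normal = rng.normal([40, 50, 15, 10], [5, 10, 3, 1], size=(1000, 4))
anomalous = rng.normal([5, 300, 15, 2], [2, 50, 3, 0.5], size=(30, 4))
X = np.vstack([normal, anomalous])

model = IsolationForest(contamination=0.03, random_state=0)
flags = model.fit_predict(StandardScaler().fit_transform(X))  # -1 = anomaly

print("flagged", int((flags == -1).sum()), "of", len(X), "samples as anomalous")
```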
The use of video streaming services is increasing in cellular networks, implying a need to monitor video quality to meet users' Quality of Experience (QoE). The so-called no-reference (NR) models for estimating video quality metrics mainly rely on packet-header and bitstream information. However, there are situations where the availability of such information is limited due to tightened security and encryption, which necessitates the exploration of alternative parameters for conducting video QoE assessment. In this study, we collect real-life in-smartphone measurements describing the radio link of the LTE connection while streaming reference videos in the uplink. The radio measurements include metrics such as RSSI, RSRP, RSRQ, and CINR. We then use these radio metrics to train a Random Forest machine learning model against calculated video quality metrics from the reference videos. The aim is to estimate the Mean Opinion Score (MOS), PSNR, frame delay, frame skips, and blurriness. Our results show 94% classification accuracy, and 85% model accuracy (R^2 value) when predicting the MOS using regression. Correspondingly, we achieve 89%, 84%, 85%, and 82% classification accuracy when predicting PSNR, frame delay, frame skips, and blurriness, respectively. Further, we achieve 81%, 77%, 79%, and 75% model accuracy (R^2 value) for the same parameters using regression.
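To illustrate the modelling step, the sketch below trains a Random Forest regressor to map radio-link metrics to MOS on synthetic data; the value ranges, synthetic labels and hyper-parameters are assumptions rather than the study's actual measurements or training setup.

```python
# Radio-metrics-to-MOS regression sketch (synthetic data, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 2000

# features: RSSI (dBm), RSRP (dBm), RSRQ (dB), CINR (dB)
X = np.column_stack([rng.uniform(-95, -55, n),   # RSSI
                     rng.uniform(-120, -80, n),  # RSRP
                     rng.uniform(-20, -5, n),    # RSRQ
                     rng.uniform(0, 30, n)])     # CINR
# synthetic MOS that improves with signal quality, clipped to the 1..5 scale
mos = np.clip(1 + 0.04 * (X[:, 0] + 95) + 0.1 * X[:, 3]
              + rng.normal(0, 0.3, n), 1, 5)

X_tr, X_te, y_tr, y_te = train_test_split(X, mos, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {r2_score(y_te, model.predict(X_te)):.2f}")
```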
The Internet of Things (IoT) brings a set of unique and complex challenges to the field of Quality of Experience (QoE) evaluation. The state-of-the-art research in QoE mainly targets multimedia services, such as voice, video, and the Web, to determine the quality perceived by end-users. Therein, the main evaluation metrics involve subjective and objective human factors and network quality factors. Emerging IoT services may also include intelligent machines within domains such as health-care, logistics, and manufacturing. The integration of new technologies such as machine-to-machine communications and artificial intelligence within IoT services may lead to service quality degradation caused by machines. In this article, we argue that evaluating QoE in IoT services should also involve novel metrics for measuring the performance of the machines alongside metrics for end-users' QoE. This article extends the legacy QoE definition in the area of IoT and defines conceptual metrics for evaluating QoE using an industrial IoT case study.
The emergence of novel cellular network technologies within 5G is envisioned as a key enabler of a new set of use-cases, including industrial automation, intelligent transportation, and the tactile internet. The critical nature of the traffic requirements ranges from ultra-reliable communications and massive connectivity to enhanced mobile broadband. Thus, the growing research on cellular network monitoring and prediction aims to ensure a satisfied user base and the fulfillment of service level agreements. The scope of this study is to develop an approach for predicting the cellular link throughput of end-users, with the goal of benchmarking the performance of network slices. First, we report and analyze a measurement study involving real-life cases, such as driving in urban, sub-urban, and rural areas, as well as tests in large crowded areas. Second, we develop machine learning models using lower-layer metrics describing the radio environment to predict the available throughput. The models are initially validated on an LTE network and then applied to a non-standalone 5G network. Finally, we suggest scaling the proposed model to future standalone 5G networks. We achieve 93% and 84% R^2 accuracy, with 0.06 and 0.17 mean squared error, when predicting the end-user's throughput in the LTE and non-standalone 5G networks, respectively.
In mobile computing systems, users can access network services anywhere and anytime using mobile devices such as tablets and smart phones. Users usually have some expectations about the services provided to them by different service providers, for example, telecommunication operators and network providers. Users' expectations, along with additional factors such as cognitive and behavioural states, cost, and network quality of service (QoS), may determine their quality of experience (QoE). If users are not satisfied with their QoE, they may switch to different providers or may stop using a particular application or service. QoE measurement and prediction can help users avail personalized services from service providers. On the other hand, it can help service providers achieve lower user-operator switch-over. Users with mobile devices can roam in heterogeneous access networks (HANs). A mobile device may go through handoffs while roaming in HANs, i.e., it may switch from one access point (AP) to another. These APs within a network can belong to different network technologies, for example, WLAN or 4G. Handoffs may cause severe QoE degradation due to increased delay and packet losses. Thus, there is a need to facilitate QoE-aware handoffs for users roaming in HANs. Mobile devices can learn from prior network conditions and users' QoE to make timely and proactive QoE-aware handoffs. In this thesis, we propose, develop and validate a novel method, CaQoEM-Context-aware Quality of Experience Modelling, Measurement and Prediction. CaQoEM is based on Bayesian networks and utility theory. It provides a straightforward and efficient way of dealing with a plethora of parameters to model, measure and predict QoE under uncertainty on a single scale. We validate CaQoEM using a number of case studies, user tests and simulations performed in OPNET. Our results validate that CaQoEM can efficiently model, measure and predict users' QoE. It achieves an average QoE prediction accuracy of 98.93% in stochastic wireless network conditions such as wireless signal fading, handoffs and wireless network congestion. We further extend CaQoEM to develop SCaQoEM-Sequential Context-aware Quality of Experience Measurement and Prediction, where we use dynamic Bayesian networks and utility theory to model, measure and predict users' QoE over time. We performed a case study and our results validate the efficiency of SCaQoEM. In this thesis, we also propose, develop and validate a novel approach called PRONET-Proactive Context-aware Mobility Support in HANs. PRONET incorporates a novel method for QoE estimation and prediction using hidden Markov models and Multi-homed Mobile IP. It also incorporates a method for QoE-aware handoffs using a Q-learning function. Using extensive simulations and experimental analysis, we show that PRONET achieves an average QoE prediction accuracy of 97%. Further, PRONET can maximize users' QoE by reducing the average number of handoffs by 60.65% compared to state-of-the-art methods. The outcomes of this thesis have resulted in eleven peer-reviewed conference, workshop and journal papers along with three technical reports.
The next-generation cyber-physical systems (CPSs) will not be limited to industries but will span multiple application areas in smart cities and regions. These CPSs will leverage recent advancements in the areas of cloud computing, the Internet of Things and big data to provision citizen-centric applications and services such as smart hybrid energy grids, smart waste management, smart healthcare and smart transportation. Challenges regarding context-awareness, quality of service and quality of experience, mobility management, middleware platforms, service level agreements, trust, and privacy need to be solved to realize such CPSs. This chapter discusses these challenges in detail and proposes ICICLE – a context-aware IoT-enabled cyber-physical system as a blueprint for next-generation CPSs.
Emergency management systems deal with the dynamic processing of data, where response teams must continuously adapt to the changing conditions at the scene of the emergency. Response teams must make critical decisions in highly demanding situations using large volumes of sensor data. Mobile devices have limited processing, storage, and battery resources; therefore, the sensed data from the scene of the emergency must be transmitted and processed quickly using the best available networks and clouds. Mobile cloud computing (MCC) is expected to play a critical role in the computation and storage offloading of sensor data to the best available clouds. However, applications running on mobile devices using clouds and heterogeneous access networks such as Wi-Fi and 3G are prone to unpredictable cloud workloads, network congestion, and handoffs. This article presents M2C2, a system for mobility management in MCC, that supports mechanisms for multihoming, cloud and network probing, and cloud and network selection. Through a prototype implementation and experiments, the authors show that M2C2 supports mobility efficiently in MCC scenarios such as emergency management.
Mobile devices have become an integral part of our daily lives. Applications running on these devices may avail storage and compute resources from the cloud(s). Further, a mobile device may also connect to heterogeneous access networks (HANs) such as WiFi and LTE to provide ubiquitous network connectivity to mobile applications. These devices have limited resources (compute, storage and battery) that may lead to service disruptions. In this context, mobile cloud computing enables offloading of computation and storage to the cloud. However, applications running on mobile devices using clouds and HANs are prone to unpredictable cloud workloads, network congestion and handoffs. To run these applications efficiently, the mobile device requires the best possible cloud and network resources while roaming in HANs. This paper proposes, develops and validates a novel system called M2C2 which supports mechanisms for: (i) multihoming, (ii) cloud and network probing, and (iii) cloud and network selection. We built a prototype system and performed extensive experimentation to validate the proposed M2C2. Our results analysis shows that the proposed system supports mobility efficiently in mobile cloud computing.
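A minimal sketch of the probing-and-selection idea is given below: it measures TCP connect latency to candidate endpoints and picks the fastest. The endpoints are hypothetical, and M2C2's actual probing and selection logic (covering both clouds and access networks) is considerably richer.

```python
# Latency-probing sketch for cloud/network selection (hypothetical endpoints;
# not M2C2's implementation).
import socket
import time

CANDIDATES = {                      # hypothetical cloud endpoints
    "cloud-a": ("cloud-a.example.org", 443),
    "cloud-b": ("cloud-b.example.org", 443),
}

def probe(host, port, attempts=3, timeout=1.0):
    """Median TCP connect time in milliseconds, or infinity if unreachable."""
    samples = []
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                samples.append((time.monotonic() - start) * 1000)
        except OSError:
            samples.append(float("inf"))
    return sorted(samples)[len(samples) // 2]

latencies = {name: probe(*addr) for name, addr in CANDIDATES.items()}
best = min(latencies, key=latencies.get)
print(latencies, "-> selected:", best)
```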
Cloud performance diagnosis and prediction is a challenging problem due to the stochastic nature of cloud systems. Cloud performance is affected by a large set of factors such as virtual machine types, regions, workloads, and wide area network delay and bandwidth, necessitating the determination of the complex relationships between these factors. The current research in this area does not address the challenge of modeling the uncertain and complex relationships between these factors. Further, the challenge of cloud performance prediction under uncertainty has not garnered sufficient attention. This paper proposes, develops and validates ALPINE, a Bayesian system for cloud performance diagnosis and prediction. ALPINE incorporates Bayesian networks to model the uncertain and complex relationships between the factors mentioned above. It handles missing, scarce and sparse data to diagnose and predict stochastic cloud performance efficiently. We validate the proposed system using extensive real data and show that it predicts cloud performance with a high accuracy of 91.93%.
In this paper, we develop a novel context-aware approach for quality of experience (QoE) modeling, reasoning and inferencing in mobile and pervasive computing environments. The proposed model is based upon a state-space approach and Bayesian networks for QoE modeling and reasoning. We further extend this context model to incorporate influence diagrams for efficient QoE inferencing. Our approach accommodates user, device and quality of service (QoS) related context parameters to determine the overall QoE of the user. This helps in user-related media, network and device adaptation. We perform experimentation to validate the proposed approach and the results verify its modeling and inferencing capabilities.
Quality of Experience (QoE) as an aggregate of Quality of Service (QoS) and human user-related metrics will be the key success factor for current and future mobile computing systems. QoE measurement and prediction are complex tasks as they may involve a large parameter space such as location, delay, jitter, packet loss and user satisfaction just to name a few. These tasks necessitate the development of practical context-aware QoE models that efficiently determine relationships between user context and QoE parameters. In this paper, we propose, develop and validate a novel decision-theoretic approach called CaQoEM for QoE modelling, measurement and prediction. We address the challenge of QoE measurement and prediction where each QoE parameter can be measured on a different scale and may involve different units of measurement. CaQoEM is context-aware and uses Bayesian networks and utility theory to measure and predict users' QoE under uncertainty. We validate CaQoEM using extensive experimentation, user studies and simulations. The results soundly demonstrate that CaQoEM correctly measures range-defined QoE using a bipolar scale. For QoE prediction, an overall accuracy of 98.93% was achieved using 10-fold cross validation in multiple diverse network conditions such as vertical handoffs, wireless signal fading and wireless network congestion.
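For intuition, the following sketch wires a tiny discrete Bayesian network to a bipolar utility function and computes the expected QoE utility under partial evidence; the structure, probability tables and utilities are placeholders, not CaQoEM's learned model.

```python
# Minimal Bayesian-network-plus-utility sketch (placeholder CPTs and utilities;
# not CaQoEM's model). Parents: network condition and handoff; child: QoE.
P_net = {"good": 0.7, "bad": 0.3}
P_handoff = {"no": 0.8, "yes": 0.2}
# P(QoE = "high" | network, handoff); P(low) is the complement
P_qoe_high = {("good", "no"): 0.9, ("good", "yes"): 0.6,
              ("bad", "no"): 0.4, ("bad", "yes"): 0.1}
utility = {"high": 1.0, "low": -1.0}   # bipolar utility scale

def qoe_posterior(evidence):
    """P(QoE=high | evidence) by enumerating the unobserved parents."""
    num = den = 0.0
    for net, p_n in P_net.items():
        for ho, p_h in P_handoff.items():
            if evidence.get("network", net) != net or evidence.get("handoff", ho) != ho:
                continue            # inconsistent with the observed evidence
            w = p_n * p_h
            num += w * P_qoe_high[(net, ho)]
            den += w
    return num / den

def expected_qoe_utility(evidence):
    p_high = qoe_posterior(evidence)
    return p_high * utility["high"] + (1 - p_high) * utility["low"]

print(expected_qoe_utility({"network": "bad"}))                 # e.g., during congestion
print(expected_qoe_utility({"network": "good", "handoff": "no"}))
```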
This paper presents a novel context-aware methodology for modelling and measuring user-perceived quality of experience (QoE) over time. In particular, we create a context-aware model for QoE modelling and measurement using dynamic Bayesian networks (DBN) and a context-aware state-space approach. The proposed model is then used to infer and determine users' QoE in a sequential manner. We performed experimentation to validate the proposed model. The results show that it can efficiently model, reason about and measure users' QoE.
In this paper, we pioneer a context-aware approach for quality of experience (QoE) modeling, reasoning and inferencing in mobile and pervasive computing environments. The proposed model is based upon Context Spaces Theory (CST) and influence diagrams (IDs) to handle uncertain and hidden complex inter-dependencies between user-perceived and network level QoS and to calculate overall QoE of the users. This helps in user-related media, network and device adaptation, creating user-level SLAs and minimizing network churn. We perform experimentation to validate the proposed approach and the results verify its modeling and inferencing capabilities.
This paper presents a blueprint for a proactive context-aware mobility support architecture for heterogeneous access networks called PRONET. In particular, we leverage the principles of cognitive networking to support proactive context-awareness for user-centric application adaptation via quality-of-experience (QoE) provisioning. Our proposed architecture is built upon the Port-based Multi-homed Mobile IPv6 (PM-MIPv6) solution to support several applications via path diversity. In this paper, our contributions are two-fold. First, we identify and present gaps in our research domain related to mobility, QoE, cognitive networks and cross-layer design. We then present our architecture for providing seamless mobility in heterogeneous access networks. Currently, we are in the process of collecting results via our test bed and prototype implementation for 802.11g and HSDPA wireless networks.
This paper presents a pioneering context-aware approach for quality of experience (QoE) measurement and prediction. The proposed approach incorporates an intuitive context-aware framework and decision theory. It is capable of incorporating several QoE-related classes and context information to correctly measure and predict the overall QoE on a single scale. Our approach can be used to measure and predict QoE in both lab and living-lab settings based on user, device and network related context parameters. The predicted QoE can be beneficial for network operators to minimize network churn and can help application developers build smart user-centric applications. We perform extensive experimentation and the results validate our approach.
Measuring and predicting users' quality of experience (QoE) in dynamic network conditions is a challenging task. This paper presents results related to a decision-theoretic methodology incorporating Bayesian networks (BNs) and utility theory for QoE measurement and prediction in mobile computing scenarios. In particular, we show how both generative and discriminative BNs can be used to measure and predict users' QoE accurately for voice applications under several wireless network conditions such as wireless signal fading, vertical handoffs, wireless network congestion and normal hotspot traffic. Through extensive simulation studies and results analysis, we show that our proposed methodology can achieve an average accuracy of 98.70% using three different types of Bayesian networks.
Quality of Experience (QoE) based handoffs in heterogeneous access networks (HANs) necessitate accurate QoE estimation and prediction. The current approaches to QoE-aware handoffs are limited: they either lack an underlying probing mechanism or lack a QoE estimation and prediction mechanism. In this paper, we propose, develop and validate a novel method for QoE estimation and prediction using passive probing mechanisms. Our method is based on hidden Markov models and a multi-homed mobility management protocol. Using extensive simulations and experimental studies, we show that our method achieves a QoE estimation accuracy of 100% and a prediction accuracy of 97% in HANs without using additional probe packets for QoE estimation and prediction.
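The sketch below illustrates the estimation idea with a Gaussian hidden Markov model fitted to passively observed delay and loss samples (hmmlearn assumed); the features, number of states and data are illustrative and do not reproduce the proposed method's actual configuration.

```python
# Passive-probing QoE-state sketch using a Gaussian HMM (illustrative only).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)

# synthetic passive observations per second: [delay (ms), packet loss (%)]
good = rng.normal([40, 0.2], [5, 0.1], size=(300, 2))
degraded = rng.normal([180, 3.0], [30, 1.0], size=(100, 2))
X = np.vstack([good, degraded, good])          # good -> degraded -> good

model = GaussianHMM(n_components=2, covariance_type="diag",
                    n_iter=50, random_state=0).fit(X)
states = model.predict(X)                       # estimated hidden QoE states
next_state = int(np.argmax(model.transmat_[states[-1]]))  # one-step prediction

print("current state:", states[-1], "predicted next state:", next_state)
```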
Forecasting the thermal load demand of residential buildings assists in optimizing energy production and developing demand response strategies in a smart grid system. However, the presence of a large number of factors, such as outdoor temperature, district heating operational parameters, building characteristics and occupant behavior, makes thermal load forecasting a challenging task. This paper presents an efficient model for thermal load forecasting in buildings with different variations of heat load consumption across both the winter and spring seasons using a Bayesian network. The model has been validated using realistic district heating data from three residential buildings in the district heating grid of the city of Skellefteå, Sweden, over a period of four months. The results from our model show that the current heat load consumption and the outdoor temperature forecast have the most influence on the heat load forecast. Further, our model outperforms state-of-the-art methods for heat load forecasting by achieving a higher average accuracy of 77.97% while utilizing only 10% of the training data for a forecast horizon of 1 hour.
The ageing population worldwide is constantly rising, both in urban and regional areas. There is a need for IoT-based remote health monitoring systems that take care of the health of elderly people without compromising their convenience and preference for staying at home. However, such systems may generate large amounts of data. The key research challenge addressed in this paper is to efficiently transmit healthcare data within the limits of the existing network infrastructure, especially in remote areas. In this paper, we identified the key network requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, we studied network communication protocols such as CoAP, MQTT and HTTP to understand the needs of such a system, in particular the bandwidth requirements and the volume of generated data. Subsequently, we proposed IReHMo - an IoT-based remote health monitoring architecture that efficiently delivers healthcare data to the servers. The CoAP-based IReHMo implementation reduces the volume of generated data by up to 90% for a single sensor event and the required bandwidth by up to 56% for a healthcare scenario. Finally, we conducted a scalability analysis to determine the feasibility of deploying IReHMo at large scale in the regions of northern Sweden.
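As a back-of-the-envelope illustration of why CoAP reduces data volume, the sketch below compares the byte count of one small observation sent as a minimal HTTP/1.1 request versus a minimal CoAP message; the header layouts are simplified, transport framing is ignored, and the resource path and payload are made up, so the numbers are indicative only.

```python
# Per-message size comparison: minimal HTTP/1.1 POST vs. minimal CoAP POST
# (simplified RFC 7252 framing; UDP/TCP/IP overhead not counted).
import json

payload = json.dumps({"id": "sensor-12", "hr": 72}).encode()

# Minimal HTTP/1.1 POST: request line plus a few typical headers
http_msg = (
    "POST /observations HTTP/1.1\r\n"
    "Host: health.example.org\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(payload)}\r\n"
    "\r\n"
).encode() + payload

# Minimal CoAP POST: 4-byte fixed header, 2-byte token, one Uri-Path option,
# payload marker, then the same payload
token = b"\x1a\x2b"
uri_path = b"observations"
coap_msg = (
    bytes([0x40 | len(token), 0x02, 0x00, 0x01])   # Ver=1/Type=CON/TKL, Code=0.02 (POST), Message ID
    + token
    + bytes([0xB0 | len(uri_path)]) + uri_path     # option 11 (Uri-Path), length 12
    + b"\xff" + payload                            # payload marker + payload
)

print(f"payload: {len(payload)} B, "
      f"HTTP overhead: {len(http_msg) - len(payload)} B, "
      f"CoAP overhead: {len(coap_msg) - len(payload)} B")
```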
Microservices have emerged as a new approach for developing and deploying cloud applications that require higher levels of agility, scale, and reliability. To this end, a microservice-based cloud application architecture advocates decomposing monolithic application components into independent software components called "microservices". As these microservices can be developed, deployed, and updated independently of each other, this leads to complex run-time performance monitoring and management challenges. To solve this problem, we propose a generic monitoring framework, Multi-microservices Multi-virtualization Multi-cloud (M3), that monitors the performance of microservices deployed across heterogeneous virtualization platforms in a multi-cloud environment. We validated the efficacy and efficiency of M3 using a Book-Shop application executing across AWS and Azure.