The popularity of network virtualization has recently regained considerable momentum because of the emergence of OpenFlow technology. OpenFlow essentially decouples the data plane from the control plane and promotes hardware programmability, thereby facilitating the implementation of network virtualization. This study provides an overview of different approaches to creating a virtual network using OpenFlow technology. The paper also presents the OpenFlow components and compares conventional network architecture with OpenFlow network architecture, particularly in terms of virtualization. A thematic OpenFlow network virtualization taxonomy is devised to categorize network virtualization approaches. Several testbeds that support OpenFlow network virtualization are discussed, with case studies that show the capabilities of OpenFlow virtualization. Moreover, the advantages of popular OpenFlow controllers designed to enhance network virtualization are compared and analyzed. Finally, we present key research challenges, focusing mainly on security, scalability, reliability, isolation, and monitoring in the OpenFlow virtual environment. Numerous potential directions to tackle the problems related to OpenFlow network virtualization are likewise discussed.
Wireless Sensor Networks (WSNs) have developed to the point where they can be used in agriculture to enable optimal irrigation scheduling. In the absence of widely available methods to support effective agricultural practice under different weather conditions, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system that incorporates an interoperable IP-based WSN, using the protocol stacks and standards of the Internet of Things paradigm. The fundamental performance of this network is emulated on Tmote Sky motes for 6LoWPAN over an IEEE 802.15.4 radio link, using the Contiki OS and the Cooja simulator. The simulation results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. These results will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of sensor power consumption.
Because of the increased popularity and rapid expansion of the Internet and the Internet of Things, networks are growing rapidly in every corner of society. As a result, a huge amount of data travels across computer networks, which threatens data integrity, confidentiality, and reliability. Network security is therefore a pressing issue for preserving the integrity of systems and data. Traditional safeguards such as firewalls with access control lists are no longer sufficient to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalies with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating characteristic curve, accuracy, F-score, and model build time, this paper presents a performance comparison between the Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
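As an aside, the metrics named in the comparison above (precision, recall, accuracy, F-score) can all be derived from a binary confusion matrix. The sketch below is purely illustrative and not taken from the paper; the example counts are hypothetical.

```python
# Illustrative sketch: computing the evaluation metrics used to compare
# classifiers, given the four cells of a binary confusion matrix
# (tp = true positives, fp = false positives, fn = false negatives,
#  tn = true negatives).
def classification_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity or detection rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_score = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f_score": f_score}

# Hypothetical counts for one classifier on a test split:
m = classification_metrics(tp=90, fp=10, fn=20, tn=80)
```

Tools such as WEKA report these same quantities per class; the formulas above simply make the definitions explicit.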
Despite evidence of the rising popularity of video on the web (VOW), little is known about how users access video. Such a characterization can greatly benefit the design of multimedia systems such as web video proxies and VOW servers. Hence, this paper presents an analysis of trace data obtained from an ongoing VOW experiment at Luleå University of Technology, Sweden. This experiment is unique in that video material is distributed over a high-bandwidth network, allowing users to make access decisions without the network being a major factor. Our analysis revealed a number of interesting findings regarding VOW access. For example, accesses display high temporal locality: several requests for the same video title often occur within a short time span. Accesses also exhibit spatial locality of reference, whereby a small number of machines account for a large share of overall requests. Another finding was a browsing pattern in which users preview the initial portion of a video to find out if they are interested; if they like it, they continue watching, otherwise they halt it. This pattern suggests that caching the first several minutes of video data should prove effective. Lastly, the analysis shows that, contrary to previous studies, the ranking of video titles by popularity did not fit a Zipfian distribution.
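The caching implication of the browsing pattern above can be sketched as a prefix cache that stores only the opening window of each title. This is a hypothetical illustration, not the study's design; the `PrefixCache` class and the 300-second window are assumptions for the sketch.

```python
# Hypothetical sketch of prefix caching: keep only the first few minutes
# of each video, since many viewers preview the opening before deciding.
PREFIX_SECONDS = 300  # cache the first five minutes, per the finding above

class PrefixCache:
    def __init__(self):
        self.store = {}  # title -> list of cached segments

    def put(self, title, segments):
        # Retain only segments that begin inside the prefix window.
        self.store[title] = [s for s in segments
                             if s["start"] < PREFIX_SECONDS]

    def get(self, title, t):
        # Serve from cache only while the playback position is in the prefix.
        if title in self.store and t < PREFIX_SECONDS:
            return next((s for s in self.store[title]
                         if s["start"] <= t < s["end"]), None)
        return None  # fall through to the origin VOW server
```

A request past the prefix window always goes to the origin server, so the cache stays small while still absorbing the many short preview sessions.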
Online social media has completely transformed how we communicate with each other. While online discussion platforms are available in the form of applications and websites, an emergent outcome of this transformation is the phenomenon of ‘opinion leaders’. A number of previous studies have attempted to identify opinion leaders in online discussion networks. In particular, Feng (2016 Comput. Hum. Behav. 54, 43–53. (doi:10.1016/j.chb.2015.07.052)) identified five different types of central users and outlined their communication patterns in an online communication network. However, that work focuses on a limited time span. The question remains as to whether similar communication patterns exist that will stand the test of time over longer periods. Here, we present a critical analysis of the Feng framework for both short-term and longer periods. Additionally, for validation, we take another case study presented by Udanor et al. (2016 Program 50, 481–507. (doi:10.1108/PROG-02-2016-0011)) to further understand these dynamics. Results indicate that not all Feng-based central users may be identifiable in the longer term. Conversation starters and influencers were noted as opinion leaders in the network; these users play an important role as information sources in long-term discussions, whereas network builders and active engagers help in connecting otherwise sparse communities. Furthermore, we discuss the changing positions of opinion leaders and their power to keep isolates interested in an online discussion network.
This study examines the research landscape of smart learning environments by conducting a comprehensive bibliometric analysis of the field over the years. The study focuses on research trends, scholars’ productivity, and the thematic focus of scientific publications in the field of smart learning environments. A total of 1081 peer-reviewed articles were retrieved from the Scopus database. A bibliometric approach was applied to analyse the data for a comprehensive overview of the trends, thematic focus, and scientific production in the field. The analysis indicates that the first paper on smart learning environments was published in 2002, marking the beginning of the field. Among other sources, “Computers & Education,” “Smart Learning Environments,” and “Computers in Human Behaviour” are the most relevant outlets publishing articles on smart learning environments. The work of Kinshuk et al., published in 2016, stands out as the most cited among the analysed documents. The United States has the highest number of scientific productions and remains the most relevant country in the smart learning environment field. The results also identify the most prolific scholars and the most relevant institutions in the field. Keywords such as “learning analytics,” “adaptive learning,” “personalized learning,” “blockchain,” and “deep learning” remain the trending keywords. Furthermore, thematic analysis shows that “digital storytelling” and its associated components, such as “virtual reality,” “critical thinking,” and “serious games,” are emerging themes in smart learning environments but need to be further developed to establish stronger ties with “smart learning.” The study provides a useful contribution to the field by clearly presenting a comprehensive overview of the research hotspots, thematic focus, and future directions of the field.
These findings can guide scholars, especially early-career researchers in the field of smart learning environments, in defining their research focus and the aspects of smart learning that can be explored.
Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges), but many graphs on the Web - e.g. microblogging Web sites, trust networks or the Web graph itself - are often directed. Few community detection algorithms deal with directed graphs, and we lack an experimental comparison of them. In this paper we evaluate several community detection algorithms in terms of accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) is explicitly designed to manage directed graphs, while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs: on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by the other algorithms, while Label Propagation showed outstanding scalability on both artificial and real graphs. The Infomap algorithm showed the best trade-off between accuracy and computational performance and can therefore be considered a promising tool for Web Data Analytics purposes.
Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Explicitly, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include, but are not limited to, storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to play a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles.
The explosive growth in the number of devices connected to the Internet of Things (IoT) and the exponential increase in data consumption only reflect how the growth of big data perfectly overlaps with that of IoT. The management of big data in a continuously expanding network gives rise to non-trivial concerns regarding data collection efficiency, data processing, analytics, and security. To address these concerns, researchers have examined the challenges associated with the successful deployment of IoT. Despite the large number of studies on big data, analytics, and IoT, the convergence of these areas creates several opportunities for flourishing big data and analytics for IoT systems. In this paper, we explore the recent advances in big data analytics for IoT systems as well as the key requirements for managing big data and for enabling analytics in an IoT environment. We taxonomize the literature based on important parameters. We identify the opportunities resulting from the convergence of big data, analytics, and IoT, and discuss the role of big data analytics in IoT applications. Finally, several open challenges are presented as future research directions.
Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, utilizing hospital resources effectively, and minimizing the mortality rate during the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. The experts’ knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed. Several experiments are performed using data from 485 COVID-19 patients to evaluate the effectiveness of the proposed system. Experimental results show that, after optimization, the conjunctive belief rule-based system achieves an accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while for the disjunctive belief rule base they are 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, the proposed method shows superior performance compared with four traditional machine learning algorithms: LR, SVM, DT, and ANN. All these results validate the effectiveness of the proposed method. The proposed system will help hospital authorities identify severe and non-severe COVID-19 patients and adopt optimal treatment plans during pandemic situations.
Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands greater accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), together with courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and the MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it with existing techniques in terms of network lifetime, throughput, path loss, transmission loss, and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss, and scalability.
It is my pleasure to welcome you to the sixth Demonstration Session at the IEEE Conference on Local Computer Networks (LCN) 2014. We sought demonstrations covering all topics of the main conference as well as of all the workshops held in conjunction with it. Technical demonstrations were strongly encouraged to show innovative and original research. The main purpose of the demo session is to present demonstrations that validate important research issues and/or show innovative prototypes.
A smart grid can be considered as a complex network where each node represents a generation unit or a consumer, whereas links represent transmission lines. One way to study complex systems is the agent-based modeling paradigm, which represents a complex system as autonomous agents interacting with each other. Previously, a number of studies in the smart grid domain have made use of the agent-based modeling paradigm. However, to the best of our knowledge, none of these studies has focused on the specification aspect of the model. The model specification is important not only for understanding but also for replication of the model. To fill this gap, this study focuses on specification methods for smart grid modeling. We adopt two specification methods, namely Overview, Design concepts, and Details (ODD) and Descriptive agent-based modeling. Using these specification methods, we provide tutorials and guidelines for developing smart grid models, from conceptual modeling to a validated agent-based model through simulation. The specification study is exemplified through a case study from the smart grid domain. In the case study, we consider a large network in which different consumers and power generation units are connected with each other in different configurations. In such a network, communication takes place between consumers and generating units for energy transmission and data routing. We demonstrate how to effectively model a complex system such as a smart grid using specification methods. We analyze the two specification approaches both qualitatively and quantitatively. Extensive experiments demonstrate that Descriptive agent-based modeling is a more useful approach than the ODD method for modeling, as well as for replicating models, in the smart grid domain.
Mosquitoes are responsible for the largest number of deaths every year throughout the world, and Bangladesh also suffers greatly from this problem. Dengue, malaria, chikungunya, zika, yellow fever, and other diseases are caused by dangerous mosquito bites. The three main types of mosquitoes found in Bangladesh are Aedes, Anopheles, and Culex. Their identification is crucial for taking the necessary steps to eliminate them in an area. Hence, a convolutional neural network (CNN) model is developed so that mosquitoes can be classified from their images. We prepared a local dataset consisting of 442 images collected from various sources. An accuracy of 70% was achieved by running the proposed CNN model on the collected dataset. However, after augmenting the dataset to 3,600 images, the accuracy increases to 93%. We also compare the CNN method with VGG-16, Random Forest, XGBoost, and SVM. Our proposed CNN method outperforms these methods in terms of mosquito classification accuracy. Thus, this research forms an example of humanitarian technology, where data science can be used to support mosquito classification, enabling the treatment of various mosquito-borne diseases.
This article describes the architecture and service enablers developed in the NIMO project. Furthermore, it identifies future challenges and knowledge gaps in upcoming ICT service development for public sector units, empowering citizens with enhanced tools for interaction and participation. We foresee crowdsourced applications where citizens contribute dynamic, timely, and geographically distributed information.
An Internet-of-Things (IoT) and Belief Rule Base (BRB) based hybrid system is introduced to assess Autism Spectrum Disorder (ASD). This smart system can automatically collect sign and symptom data from various autistic children in real time and classify the children. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight, and degree of belief. The IoT-BRB system classifies children as autistic based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with those of fuzzy-based and expert-based systems. The proposed system outperforms the state-of-the-art fuzzy and expert systems.
Service Oriented Architecture (SOA) offers a flexible paradigm for information flow among collaborating organizations. As information moves out of an organization's boundary, various security concerns may arise, such as confidentiality, integrity, and authenticity, that need to be addressed. Moreover, verifying the correctness of the communication protocol is also an important factor. This paper focuses on the formal verification of the xDAuth protocol, which is one of the prominent protocols for identity management in cross-domain scenarios. We have modeled the information flow of the xDAuth protocol using High Level Petri Nets (HLPN) to understand the protocol's information flow in a distributed environment. We analyze the rules of information flow using the Z language, while the Z3 SMT solver is used for verification of the model. Our formal analysis and verification results reveal that the protocol fulfills its intended purpose and provides security for the defined protocol-specific properties, e.g. secure secret-key authentication and the Chinese wall security policy, as well as secrecy-specific properties, e.g. confidentiality, integrity, and authenticity.
We have designed a smart middleware and are working on implementing and deploying it as a landmark architectural piece run by nodes of the Future Internet. The main goal of the proposed middleware is to make applications and services fully network-aware, in the sense that they utilize network resources and tune their own demands based on what the underlying networks can offer. Conversely, the smart middleware makes networks service-aware, in the sense that networks also adapt their configuration to service demands. Challenges of the Future Internet, such as scalability, increased interoperability, and smart collaboration between engines, are addressed by our middleware. Focusing on 3GPP networks, including IP Multimedia Subsystem (IMS) networks, this paper presents the architectural design of our middleware and shows in two showcases how and why it is capable of addressing future challenges. Scenarios such as service- and network-aware coordinated load balancing and "always best connected" as a service verify the merits of our smart middleware design.
The fifth generation (5G) wireless network is expected to have dense deployments of cells in order to provide efficient Internet and cellular connections. The cloud radio access network (C-RAN) emerges as one of the 5G solutions to steer the network architecture and control resources beyond the legacy radio access technologies. The C-RAN decouples traffic management operations from the radio access technologies, leading to a new combination of virtualized network core and fronthaul architecture. In this paper, we first investigate the power consumption impact of aggressive deployments of low-power neighborhood femtocell networks (NFNs) under the umbrella of a coordinated multipoint (CoMP) macrocell. We show that the power savings obtained from employing a low-power NFN start to decline as the density of deployed femtocells exceeds a certain threshold. The analysis considers two CoMP sites, at the cell-edge and intra-cell areas. Second, to restore power efficiency and network stability, a C-RAN model is proposed to restructure the NFN into clusters to ease the energy burden in evolving 5G systems. Tailored to the traffic load, selected clusters are switched off to save power when they operate with low traffic loads.
Cloud monitoring activity involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VMs, storage, network, appliances, etc.), the physical resources they share, the applications running on them, and the data hosted on them. Configuring applications and resources in a cloud computing environment is quite challenging given the large number of heterogeneous cloud resources. Further, at any given point in time, the cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) may need to change to meet application QoS requirements under uncertainties (resource failure, resource overload, workload spikes, etc.). Hence, cloud monitoring tools can assist cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting for service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools. We further discuss how these research dimensions and design issues are handled by current academic research as well as by commercial monitoring tools.
Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications hosted on single/multiple cloud provider platforms have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). This paper proposes, develops and validates CLAMBS—Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service for efficient QoS monitoring and benchmarking of cloud applications hosted in multi-cloud environments. The major highlight of CLAMBS is its capability of monitoring and benchmarking individual application components, such as databases and web servers, distributed across cloud layers (*-aaS) and spread among multiple cloud providers. We validate CLAMBS using a prototype implementation and extensive experimentation, and show that CLAMBS efficiently monitors and benchmarks application components on multi-cloud platforms including Amazon EC2 and Microsoft Azure.
The service delivery model of cloud computing acts as a key enabler for big data analytics applications, enhancing productivity and efficiency while reducing costs. The ever-increasing flood of data generated from smartphones and sensors such as RFID readers and traffic cams requires innovative provisioning and QoS monitoring approaches to continuously support big data analytics. To provide essential information for effective and efficient big data analytics application QoS monitoring, in this paper we propose and develop CLAMS-Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework. The proposed framework: (a) performs multi-cloud monitoring, and (b) addresses the issue of cross-layer monitoring of applications. We implement and demonstrate CLAMS functions on real-world multi-cloud platforms such as Amazon and Azure.
Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, data processing frameworks, etc.) platforms. Application services hosted on single/multiple cloud provider platforms have diverse characteristics that require extensive monitoring mechanisms to aid in controlling run-time quality of service (e.g., access latency and number of requests served per second). To provide essential real-time information for effective and efficient cloud application quality of service (QoS) monitoring, in this paper we propose, develop and validate CLAMS—Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework. The proposed framework is capable of: (a) performing QoS monitoring of application components (e.g., database, web server, application server, etc.) that may be deployed across multiple cloud platforms (e.g., Amazon and Azure); and (b) giving visibility into the QoS of individual application components, which is not supported by current monitoring services and techniques. We conduct experiments on real-world multi-cloud platforms such as Amazon and Azure to empirically evaluate our framework, and the results validate that CLAMS efficiently monitors applications running across multiple clouds.
Distributed ledgers and blockchain technologies can improve system security and trustworthiness by providing immutable replicated histories of data. Blockchain is a linked list of blocks containing digitally signed transactions, a cryptographic hash of the previous block, and a timestamp stored in a decentralized and distributed network. The Internet of Things (IoT) is one of the application domains in which security based on blockchain is discussed. In this article, we review the structure and architectures of distributed IoT systems and explain the motivations, challenges, and needs of blockchain to secure such systems. However, there are substantial threats and attacks to blockchain that must be understood, as well as suitable approaches to mitigate them. We, therefore, survey the most common attacks to blockchain systems and the solutions to mitigate them, with the objective of assessing how malicious these attacks are in the IoT context.
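The linked-list structure described above, in which each block carries a cryptographic hash of its predecessor, can be illustrated with a minimal sketch. This is a deliberately simplified model (hypothetical field names, no signatures, consensus, or networking), not a description of any particular blockchain implementation.

```python
import hashlib
import json

# Minimal sketch of a hash-chained block: each block stores the hash of
# its predecessor, a timestamp, and a list of transactions, and its own
# hash is computed over all of these fields.
def make_block(transactions, prev_hash, timestamp):
    header = json.dumps({"prev": prev_hash, "ts": timestamp,
                         "txs": transactions}, sort_keys=True)
    return {"prev": prev_hash, "ts": timestamp, "txs": transactions,
            "hash": hashlib.sha256(header.encode()).hexdigest()}

genesis = make_block([], "0" * 64, 0)
block1 = make_block(["tx-a", "tx-b"], genesis["hash"], 1)

def chain_valid(chain):
    # Each block must point at the hash of the block before it.
    return all(chain[i + 1]["prev"] == chain[i]["hash"]
               for i in range(len(chain) - 1))
```

Because every block's hash covers its contents, altering any earlier block changes its hash and breaks the link to its successor, which is the immutability property the survey builds on.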
Evolving traceability requirements increasingly challenge manufacturing supply chain actors to collect tamper-proof and auditable evidence about what inputs they process, in what way these inputs are used, and what the resulting process outputs are. Traceability solutions based on blockchain technology have shown ways to satisfy the requirements of creating a tamper-proof and auditable trail of traceability data. However, the existing solutions struggle to meet the increasing storage requirements necessary to create an evidence trail using manufacturing data. In this paper, we show a way to create a tamper-proof and auditable evolving product story that uses a decentralized file system called the InterPlanetary File System (IPFS). We also show how using linked data can help auditors derive a traceable product story from such an accumulating evidence trail. The solution proposed herein can supplement existing blockchain-based traceability solutions and enable traceability in global manufacturing supply chains where forming a consortium incurs prohibitive costs and where storage requirements are high.
We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs): the age of data based on periodic requests, the popularity of on-demand requests, the communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data so as to retain the most valuable information in the cache for prolonged time periods: the higher the value, the longer the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and difficult-to-retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects of WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). The simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as those experienced in WBANs, since it allows the retrieval of content for a long period even under severe in-network node failures.
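The value-based eviction idea above can be sketched as follows. The paper's exact VoI function is not reproduced here; the sketch assumes a simple weighted combination of the four named parameters, with hypothetical weights, and evicts the lowest-valued item when the cache is full.

```python
# Illustrative sketch only (assumed weighted-sum form, hypothetical weights):
# assign each cached item a value from the four parameters named above and
# evict the item with the lowest value.
def voi(age, popularity, interference_cost, active_duration,
        weights=(0.3, 0.4, 0.2, 0.1)):
    w_age, w_pop, w_int, w_act = weights
    freshness = 1.0 / (1.0 + age)  # newer periodic readings are worth more
    # Interference cost lowers the value; the other factors raise it.
    return (w_age * freshness + w_pop * popularity
            - w_int * interference_cost + w_act * active_duration)

def evict_lowest_voi(cache):
    """cache maps item name -> (age, popularity, interference, duration)."""
    victim = min(cache, key=lambda name: voi(*cache[name]))
    del cache[victim]
    return victim
```

Note how a reading that is costly to re-acquire (long active duration) scores higher and therefore survives longer in the cache, which matches the stated goal of retaining difficult-to-retrieve data.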
Within cloud-based Internet of Things (IoT) applications, cloud providers typically employ Service Level Agreements (SLAs) to ensure the quality of their provisioned services. Like any other contractual method, an SLA is not immune to breaches. Ideally, an SLA stipulates consequences (e.g., penalties) imposed on cloud providers when they fail to conform to SLA terms. Current practice assumes trust in service providers to acknowledge SLA breach incidents and execute the associated consequences. Recently, the blockchain paradigm has introduced compelling capabilities that may enable us to address SLA enforcement more elegantly. This paper proposes and implements a blockchain-based approach for assessing SLA compliance and enforcing consequences, and employs a diagnostic accuracy method to validate the dependability of the proposed solution. The paper also benchmarks Hyperledger Fabric to investigate its feasibility as an underlying blockchain infrastructure with respect to latency and transaction success/fail rates.
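The compliance-assessment logic at the heart of such a scheme can be sketched as below. This is a minimal illustration, not the paper's Hyperledger Fabric chaincode: the function names, the tolerated-violation-ratio rule, and the per-breach penalty are all assumptions.

```python
def assess_sla(latencies_ms, max_latency_ms, tolerated_ratio, penalty_per_breach):
    """Flag an SLA breach when too many latency measurements exceed the
    agreed bound, and compute the resulting penalty.

    Illustrative sketch only; in a blockchain deployment this logic would
    run as chaincode so no single party can suppress a breach verdict.
    """
    violations = sum(1 for m in latencies_ms if m > max_latency_ms)
    ratio = violations / len(latencies_ms)
    breached = ratio > tolerated_ratio
    penalty = violations * penalty_per_breach if breached else 0
    return breached, penalty
```

Executing this check on a shared ledger, rather than in the provider's own monitoring stack, is what removes the need to trust the provider to acknowledge its own breaches.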
In pursuit of effective service level agreement (SLA) monitoring and enforcement in the context of Internet of Things (IoT) applications, this article regards SLA management as a process that should not be entrusted to a single authority. Here, we justify our view on the matter and propose a conceptual blockchain-based framework to cope with some limitations of traditional SLA management approaches.
With a view to evolving the Internet infrastructure, ICN promotes a communication model that is fundamentally different from the traditional IP address-centric model. The ICN approach consists of retrieving content by (unique) name, regardless of the origin server's location (i.e., IP address), application, and distribution channel, thus enabling in-network caching/replication and content-based security. The expected benefits in terms of improved data dissemination efficiency and robustness in challenging communication scenarios indicate the high potential of ICN as an innovative networking paradigm in the IoT domain. IoT is a challenging environment, mainly due to the large number of heterogeneous and potentially constrained networked devices and its unique and heavy traffic patterns. Applying ICN principles in such a context opens new opportunities while requiring careful design choices. This article critically discusses potential ways toward this goal by surveying the current literature, after presenting several possible motivations for introducing ICN in the context of IoT. Major challenges and opportunities are also highlighted, serving as guidelines for progress beyond the state of the art in this timely and increasingly relevant topic.
Assigned as expert reviewer in the national quality assessment of higher education in the area of Computer Science/ICT/Media technology.
Opponent/External Reviewer for Licentiate thesis. Tahir Nawaz Minhas: "Network Impact on Quality of Experience of Mobile Video". Blekinge Institute of Technology, Karlskrona, Sweden on March 28, 2012
Demands from users of future generation networks will include seamless access to services across multiple wireless access networks. Handsets and laptops will typically be equipped with several network interfaces and have enough computing power and battery capacity to connect to several access networks simultaneously, within and across administrative domains, in a multi-homed manner. Existing technologies and solutions solve parts of the problems raised in such a scenario. However, an important remaining research challenge is to find an optimal mobility management solution offering terminal mobility as well as session, service, and personal mobility. Furthermore, a uniform and scalable mechanism for quality of service provisioning needs to be defined and integrated into an overall architecture offering differentiated handling of real-time and non-real-time flows. The architecture will be based on the Internet Protocol as its least common denominator, but many research questions remain to be answered. This article surveys existing solutions and outlines future work in the area of cross-layer designed mobility management solutions and integrated quality of service support using a cross-layer design and policy-based approach.
Fourth generation (4G) wireless systems targeting 100 Mb/s for highly mobile scenarios and 1 Gb/s for low-mobility communication are soon to be deployed on a broad basis, with LTE-Advanced and IEEE 802.16m as the two candidate systems. Traditional applications spanning everything from voice, video, and data to new machine-to-machine (M2M) applications, with billions of connected devices transmitting sensor data, will soon use these networks. Still, interworking solutions integrating these new 4G networks with existing legacy wireless networks are important building blocks for achieving cost-efficient solutions, offering smooth migration paths from legacy systems, and providing means for load balancing among different radio access technologies. This article categorizes and analyzes different interworking solutions for heterogeneous wireless networks and provides suggestions for further research.
The aim of this thesis is to define a solution offering end-users seamless mobility in a multi-radio access technology environment. Today, an increasing portion of cell phones and PDAs have more than one radio access technology, and wireless access networks of various types are commonly available with overlapping coverage. This creates a heterogeneous network environment in which mobile devices can use several networks in parallel. In such an environment, the device needs to select the best network for each application in order to use the available networks wisely. Selecting the best network for individual applications constitutes the core problem. The thesis proposes a host-based solution for access network selection in heterogeneous wireless networking environments. Host-based solutions use only information available in mobile devices and are independent of information available in the networks to which these devices are attached. The host-based decision mechanism proposed in this thesis takes a number of constraints into account, including network characteristics and mobility patterns in terms of the movement speed of the user. The thesis also proposes a solution for network-based mobility management, in contrast to the other proposals, which use a host-based approach. Finally, this thesis proposes an architecture supporting mobility for roaming users in heterogeneous environments that avoids the need to scan the medium when performing vertical handovers. Results include reduced handover latencies achieved by allowing hosts to use multihoming, bandwidth savings on the wireless interface by removing the tunneling overhead, and handover guidance through the use of directory-based solutions instead of scanning the medium. User-perceived quality of voice calls, measured on the MOS (Mean Opinion Score) scale, shows no or very little impact from the mobility support procedures proposed in this thesis.
Results also include simulation models, real-world prototypes, and testbeds that could all be used in future work. The proposed solutions in this thesis are mainly evaluated using simulations and experiments with prototypes in live testbeds. Analytical methods are used to complement some results from simulations and experiments.
This thesis proposes and evaluates architectures and algorithms for access network selection in heterogeneous networking environments. The ultimate goal is to select the best access network at any time, taking a number of constraints into account, including user requirements and network characteristics. The proposed architecture enables global roaming between access networks within an operator's domain, as well as across operators, without requiring any changes to the data and control planes of the access networks. The architecture also includes an algorithm for measuring the performance of access networks that can be used on a number of access technologies, whether wired or wireless. The proposed access network selection algorithm also has an end-to-end perspective, giving a network performance indication of the user traffic being communicated. The contributions of this thesis include an implementation of a simulation model in OPNET Modeler, a proposal for a network-layer metric for heterogeneous access networks, an implementation of a real-world prototype, a study of the effect of multimedia applications on perceived quality of service, an access network selection algorithm for highly mobile users and vehicular networks, and an extension of that algorithm to support cross-layer decision making taking application-layer and datalink-layer metrics into account.
IP mobility management, allowing handover and location management to be handled at the network layer, has been around for two decades or so. A number of protocols have been proposed, with the GPRS Tunneling Protocol (GTP), Mobile IP, Proxy Mobile IP, and the Locator/Identifier Separation Protocol (LISP) being the best known and most discussed in the research literature. This paper surveys existing solutions, suggests a new distributed and dynamic mobility management scheme, and evaluates it against existing static approaches.
Mobility management is today handled at various layers in the network stack, including the datalink layer, the network layer, the transport layer, and the application layer. Cross-layer designed solutions also exist; the best-known example is the work performed by the IEEE, which is currently standardizing media-independent handover services under the name IEEE 802.21. This paper proposes a new interworking scheme for Mobile IP and the Session Initiation Protocol (SIP), the most popular solutions for mobility management at the network and application layers, respectively. The goal is to deliver seamless mobility for both TCP-based and UDP-based applications, taking the best features from each mobility management scheme. In short, this paper proposes that TCP connections are handled through Mobile IP, while UDP-based connection-less applications may use the Session Initiation Protocol for handling mobility. The two mobility management solutions are integrated into one common solution.
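The dispatch rule that the interworking scheme is built around can be sketched in a few lines. The function name and return labels are illustrative assumptions; the rule itself (TCP via Mobile IP, UDP via SIP) is the one stated above.

```python
def mobility_scheme(transport: str) -> str:
    """Per-flow dispatch sketch: TCP flows keep their endpoint addresses
    stable through Mobile IP tunneling, while connection-less UDP
    applications re-register their new location via SIP signaling."""
    schemes = {"tcp": "mobile-ip", "udp": "sip"}
    try:
        return schemes[transport.lower()]
    except KeyError:
        raise ValueError(f"unsupported transport: {transport}")
```

The attraction of this split is that each protocol is used where it is strongest: Mobile IP preserves long-lived TCP connections transparently, while SIP avoids tunneling overhead for real-time UDP media.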
Many cell phones and Personal Digital Assistants (PDAs) are equipped with multiple radio interfaces. Devices therefore need efficient ways of selecting the most suitable access network across multiple technologies, based on the physical location of the device as well as user-defined parameters such as cost, bandwidth, and battery consumption. The IEEE has standardized the 802.21 framework for media-independent handovers, in which dynamic selection of network interfaces is an important feature. This paper describes and evaluates a novel architecture that extends the IEEE 802.21 information service. The architecture is based on a three-layer structure with Location-to-Service Translation (LoST) servers, service provider servers, and independent evaluator servers. Evaluator servers are populated with information on coverage and quality of service provided by trusted users. The proposed architecture allows for competition at all levels and scales well due to its distributed nature. A prototype has been developed and is presented in the paper.
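The three-layer lookup can be sketched as a chain of directory queries. The dictionaries below are stand-ins for the LoST, provider, and evaluator servers, and the scoring is a deliberately simple assumption; the point is the layered resolution from location, to providers, to candidate networks, to a ranked choice.

```python
def select_network(location, lost_servers, provider_servers, evaluator_scores):
    """Three-layer lookup sketch: a LoST server maps the location to
    service providers, provider servers list candidate networks, and
    evaluator servers rank them using trusted-user reports on coverage
    and quality of service. All data structures here are illustrative."""
    providers = lost_servers.get(location, [])
    candidates = [net for p in providers
                  for net in provider_servers.get(p, [])]
    if not candidates:
        return None
    # Pick the candidate with the best evaluator score.
    return max(candidates, key=lambda net: evaluator_scores.get(net, 0.0))
```

Because each layer is just another queryable server, independent evaluators can compete on score quality and providers on coverage, which is what gives the architecture its competition-friendly, distributed scaling properties.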