A large proportion of the population around the world suffers from various disabilities. Disabilities affect not only children but also adults of different professions. Smart technology can assist the disabled population and lead to a comfortable life in an enhanced living environment (ELE). In this paper, we propose an effective voice pathology assessment system that works in a smart home framework. The proposed system takes input from various sensors and processes the acquired voice and electroglottography (EGG) signals. Co-occurrence matrices in different directions and neighborhoods are obtained from the spectrograms of these signals. Several features, such as energy, entropy, contrast, and homogeneity, are calculated from these matrices and fed into a Gaussian mixture model-based classifier. Experiments were performed with a publicly available database, namely, the Saarbrucken voice database. The results demonstrate the feasibility of the proposed system in light of its high accuracy and speed. The proposed system can be extended to assess other disabilities in an ELE.
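The feature pipeline described above (a co-occurrence matrix computed from a spectrogram, then energy, entropy, contrast, and homogeneity) can be sketched as follows; the quantization level, the single offset direction, and the random stand-in spectrogram are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Co-occurrence matrix of a quantized 2-D array (e.g. a spectrogram)
    for one direction, plus four Haralick-style texture features."""
    q = np.floor(img * levels / (img.max() + 1e-12)).clip(0, levels - 1).astype(int)
    dr, dc = offset
    rows, cols = q.shape
    glcm = np.zeros((levels, levels))
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()                      # normalize to a probability matrix
    i, j = np.indices(p.shape)
    energy      = np.sum(p ** 2)
    entropy     = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast    = np.sum((i - j) ** 2 * p)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return energy, entropy, contrast, homogeneity

spec = np.abs(np.random.randn(64, 32))         # stand-in for a real spectrogram
feats = glcm_features(spec)
```

In the full system, such feature vectors (over several directions and neighborhoods) would be stacked and passed to a GMM-based classifier.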
Biofeedback signals are important elements in critical care applications, such as monitoring a patient's ECG data, discovering patterns in large ECG data sets, and detecting outliers in ECG data. Because the signal data update continuously and the sampling rates may differ, time-series data streams are harder to deal with than traditional historical time-series data. For the pattern discovery problem on time-series streams, Toyoda proposed the CrossMatch (CM) approach to discover patterns between two time-series data streams (sequences), which requires only O(n) time per data update, where n is the length of one sequence. CM, however, does not support normalization, which is required for some kinds of sequences (e.g., EEG data, ECG data). Therefore, we propose a normalized-CrossMatch approach (NCM) that extends CM to enforce normalization while maintaining the same performance capabilities.
Time series data stream mining has attracted considerable research interest in recent years. Pattern discovery is a challenging problem in time series data stream mining. Because the data update continuously and the sampling rates may be different, dynamic time warping (DTW)-based approaches are used to solve the pattern discovery problem in time series data streams. However, the naive form of the DTW-based approach is computationally expensive. Therefore, Toyoda proposed the CrossMatch (CM) approach to discover the patterns between two time series data streams (sequences), which requires only O(n) time per data update, where n is the length of one sequence. CM, however, does not support normalization, which is required for some kinds of sequences (e.g. stock prices, ECG data). Therefore, we propose a normalized-CrossMatch approach that extends CM to enforce normalization while maintaining the same performance capabilities.
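The abstract contrasts CM with the naive DTW approach it accelerates; a minimal sketch of z-normalization followed by the naive O(nm) DTW distance (not the CM/NCM algorithm itself, which maintains scores incrementally per update) might look like this:

```python
import numpy as np

def znorm(x):
    """Z-normalize a sequence so scale and offset do not affect the distance."""
    x = np.asarray(x, float)
    return (x - x.mean()) / (x.std() + 1e-12)

def dtw(a, b):
    """Naive dynamic-time-warping distance: O(len(a) * len(b)) per pair,
    which is exactly the cost that CrossMatch avoids on streams."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

s1 = znorm([1, 2, 3, 4, 3, 2])
s2 = znorm([10, 20, 30, 40, 30, 20])   # same shape at a different scale
d = dtw(s1, s2)                        # essentially zero after z-normalization
```

The example shows why normalization matters for sequences such as stock prices or ECG data: without `znorm`, the two sequences above would appear far apart despite having identical shapes.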
There is broad consensus that remote health monitoring will benefit all stakeholders in the healthcare system and that it has the potential to save billions of dollars. Among the major concerns preventing patients from widely adopting this technology are data privacy and security. Wireless Medical Sensor Networks (MSNs) are the building blocks for remote health monitoring systems. This paper helps to identify the most challenging security issues in the existing authentication protocols for remote patient monitoring and presents a lightweight public-key-based authentication protocol for MSNs. In MSNs, the nodes are classified into sensors that report measurements about the human body and actuators that receive commands from the medical staff and perform actions. Authenticating these commands is a critical security issue, as any alteration may lead to serious consequences. The proposed protocol is based on the Rabin authentication algorithm, which is modified in this paper to improve its signature signing process, making it suitable for delay-sensitive MSN applications. To prove the efficiency of the Rabin algorithm, we implemented the algorithm with different hardware settings using Tmote Sky motes and also programmed the algorithm on an FPGA to evaluate its design and performance. Furthermore, the proposed protocol is implemented and tested using the MIRACL (Multiprecision Integer and Rational Arithmetic C/C++) library. The results show that secure, direct, instant and authenticated commands can be delivered from the medical staff to the MSN nodes.
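Rabin verification is attractive for resource-constrained sensors because it amounts to a single modular squaring, while signing requires square roots modulo the prime factors. A toy sketch with deliberately small primes follows; the paper's modified signing process, padding, and real key sizes (1024+ bits) are not reproduced here, and the counter-based residue search is one common textbook variant:

```python
import hashlib

# Toy Rabin parameters; both primes are ≡ 3 (mod 4), which makes
# modular square roots a single exponentiation.
p, q = 10007, 10039
n = p * q

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes):
    """Tweak a counter until H(msg||ctr) is a quadratic residue mod p and q,
    then combine the two square roots via the Chinese Remainder Theorem."""
    for c in range(1000):
        m = H(msg + c.to_bytes(2, "big"))
        if pow(m, (p - 1) // 2, p) == 1 and pow(m, (q - 1) // 2, q) == 1:
            rp = pow(m, (p + 1) // 4, p)       # sqrt of m mod p
            rq = pow(m, (q + 1) // 4, q)       # sqrt of m mod q
            s = (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n
            return s, c
    raise RuntimeError("no quadratic residue found")

def verify(msg: bytes, s: int, c: int) -> bool:
    """Verification is one modular squaring: cheap enough for a sensor node."""
    return pow(s, 2, n) == H(msg + c.to_bytes(2, "big"))

sig, ctr = sign(b"actuator: open valve 3")
```

Roughly one in four counters succeeds, so the loop terminates quickly; the paper's contribution is precisely in speeding up this signing side for delay-sensitive commands.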
Limited fossil fuel reserves and the call for a sustainable environment have brought about new technologies for the highly efficient use of fossil fuels and the introduction of renewable energy. The smart grid is an emerging technology that can fulfill such demands by incorporating advanced information and communications technology (ICT). The pervasive deployment of advanced ICT, especially smart metering, will generate big energy data in terms of volume, velocity, and variety. The generated big data can bring huge benefits to better energy planning and more efficient energy generation and distribution. As such data involve end users' privacy and the secure operation of critical infrastructure, new security issues will arise. This paper surveys and discusses new findings and developments in existing big energy data analytics and security. Several taxonomies are proposed to express the intriguing relationships among the various variables in the field.
Millions of dedicated sensors are deployed in smart cities to enhance the quality of urban living. Communication technologies are critical for connecting these sensors and transmitting events to the sink. In control systems of mobile wireless sensor networks (MWSNs), mobile nodes are constantly moving to detect events, while static nodes constitute the communication infrastructure for information transmission. Therefore, how to communicate with the sink quickly and effectively is an important research issue for control systems of MWSNs. In this paper, a communication scheme named first relay node selection based on fast response and multihop relay transmission with variable duty cycle (FRAVD) is proposed. The scheme can effectively reduce network delay by combining first relay node selection with the setting of node duty cycles. In the FRAVD scheme, for first relay node selection, we propose a strategy based on fast response: select as the first relay the node among adjacent nodes in the communication range with the shortest response time, while guaranteeing that its remaining energy and distance from the sink are better than the average. Then, for multihop data transmission among static nodes, a variable duty cycle is introduced as a novelty, which utilizes residual energy to increase the duty cycle of nodes in the far-sink area; because nodes adopt a sleep-wake asynchronous mode, increasing the duty cycle can significantly improve network performance in terms of delay and transmission reliability. Our comprehensive performance analysis demonstrates that, compared with a communication scheme with a fixed duty cycle, the FRAVD scheme reduces network delay by 24.17% and improves the probability of finding the first relay node by 17.68%, while also ensuring that the network lifetime is not less than that in previous research, making it a relatively efficient low-latency communication scheme.
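The first-relay filter described above (fastest responder among neighbors whose residual energy and sink distance beat the average) can be sketched as follows; the field names, units, and the fallback when no neighbor passes both filters are assumptions for illustration:

```python
import math

def dist(node, sink):
    """Euclidean distance from a node to the sink."""
    return math.hypot(node["x"] - sink[0], node["y"] - sink[1])

def first_relay(candidates, sink):
    """Hypothetical FRAVD-style pick: keep neighbors with above-average
    residual energy and below-average distance to the sink, then choose
    the fastest responder among them."""
    avg_e = sum(c["energy"] for c in candidates) / len(candidates)
    avg_d = sum(dist(c, sink) for c in candidates) / len(candidates)
    ok = [c for c in candidates
          if c["energy"] >= avg_e and dist(c, sink) <= avg_d]
    return min(ok or candidates, key=lambda c: c["response"])

sink = (0.0, 0.0)
nodes = [
    {"x": 30, "y": 0, "energy": 0.9, "response": 5},  # strong but far from sink
    {"x": 10, "y": 0, "energy": 0.8, "response": 9},  # close, strong, slower
    {"x": 12, "y": 0, "energy": 0.2, "response": 1},  # close, fast, nearly drained
]
chosen = first_relay(nodes, sink)
```

Note how the filter rejects the fastest node because its residual energy is below average, matching the abstract's point that response time alone is not enough.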
This paper first investigates the problem of uplink power control in cognitive radio networks (CRNs) with multiple primary users (PUs) and multiple secondary users (SUs), considering channel outage constraints and interference power constraints, where PUs and SUs compete with each other to maximize their utilities. We formulate a Stackelberg game to model this hierarchical competition, where PUs and SUs are considered leaders and followers, respectively. We theoretically prove the existence and uniqueness of the robust Stackelberg equilibrium for the noncooperative approach. Then, we apply the Lagrange dual decomposition method to solve this problem, and an efficient iterative algorithm is proposed to search for the Stackelberg equilibrium. Simulation results show that the proposed algorithm improves performance compared with existing game-based schemes.
Spreading across many parts of the world and presently striking California hard, extended droughts could even potentially threaten reliable electricity production and local water supplies, both of which are critical for data center operation. While numerous efforts have been dedicated to reducing data centers' energy consumption, the enormity of data centers' water footprints has been largely neglected and, if left unchecked, may handicap service availability during droughts. In this paper, we propose a water-aware workload management algorithm, called WATCH (WATer-constrained workload sCHeduling in data centers), which caps data centers' long-term water consumption by exploiting spatio-temporal diversities of water efficiency and dynamically dispatching workloads among distributed data centers. We demonstrate the effectiveness of WATCH both analytically and empirically using simulations: based on only online information, WATCH results in a provably low operational cost while successfully capping water consumption under a desired level. Our results also show that WATCH can cut water consumption by 20 percent while incurring only a negligible cost increase, even compared to a state-of-the-art cost-minimizing but water-oblivious solution. Sensitivity studies are conducted to validate WATCH under various settings.
Securing real-time data about goods in transit in supply chains requires bandwidth capacity that current infrastructure does not provide. Hence, 5G-enabled Internet of Things (IoT) in mobile edge computing is intended to substantially increase this capacity. To this end, we design a new efficient lightweight blockchain-enabled RFID-based authentication protocol for supply chains in the 5G mobile edge computing environment, called LBRAPS. LBRAPS is based only on bitwise exclusive-or (XOR), one-way cryptographic hash, and bitwise rotation operations. LBRAPS is shown to be secure against various attacks. Moreover, simulation-based formal security verification using the broadly accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool assures that LBRAPS is secure. Finally, it is shown that LBRAPS achieves a better trade-off among its security and functionality features, communication costs, and computation costs as compared to existing protocols.
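The three primitives LBRAPS builds on are all cheap enough for RFID-class hardware. A sketch of them follows, with SHA-256 standing in for the protocol's hash, a 128-bit word size assumed, and a purely hypothetical masking step that is not one of the actual protocol messages:

```python
import hashlib

def h(data: bytes) -> bytes:
    """One-way hash; SHA-256 stands in for the protocol's hash function."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def rotl(x: int, r: int, width: int = 128) -> int:
    """Bitwise left rotation of a width-bit integer."""
    r %= width
    return ((x << r) | (x >> (width - r))) & ((1 << width) - 1)

# Hypothetical masking step: a tag hides a fresh nonce by rotating it and
# XOR-ing it against a shared secret; the verifier undoes the rotation.
secret = int.from_bytes(h(b"shared-key")[:16], "big")
nonce = int.from_bytes(h(b"fresh-nonce")[:16], "big")
masked = secret ^ rotl(nonce, 7)
recovered = rotl(masked ^ secret, 128 - 7)   # inverse rotation recovers the nonce
```

XOR and rotation are involutive or trivially invertible with the key, while the hash is one-way, which is the usual division of labor in such lightweight protocols.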
This paper presents two new energy-balanced routing protocols for Underwater Acoustic Sensor Networks (UASNs): the Efficient and Balanced Energy consumption Technique (EBET) and Enhanced EBET (EEBET). The first proposed protocol avoids direct transmission over long distances to save a significant amount of the energy consumed in the routing process. The second protocol overcomes the deficiencies of both the Balanced Transmission Mechanism (BTM) and EBET techniques. EBET selects the relay node on the basis of an optimal distance threshold, which prolongs the network lifetime. The initial energy of each sensor node is divided into energy levels for balanced energy consumption. Selecting a high-energy-level node within the transmission range avoids long-distance direct data transmission. EEBET incorporates a depth threshold to minimize the number of hops between the source node and the sink while eradicating backward data transmission. The EBET technique balances energy consumption within successive ring sectors, while EEBET balances energy consumption of the entire network. In EEBET, the optimal number of energy levels is also calculated to further enhance the network lifetime. The effectiveness of the proposed schemes is validated through simulations comparing them with two existing routing protocols in terms of network lifetime, transmission loss, and throughput. The simulations are conducted under different network radii and varied numbers of nodes.
Research into the requirements for mobile services has seen a growing interest in the fields of cloud technology and vehicular applications. Integrating cloud computing and storage with vehicles is a way to increase accessibility to multimedia services and inspire myriad potential applications and research topics. This paper presents an overview of the characteristics of cloud computing and introduces the basic concepts of vehicular networks. An architecture for multimedia cloud computing is proposed to suit subscription service mechanisms. The tendency to equip vehicles with advanced embedded devices such as diverse sensors increases the capability of vehicles to provide computation and collection of multimedia content in the form of a vehicular network. Then, a taxonomy of cloud-based vehicular networks is addressed from the standpoint of the service relationship between cloud computing and vehicular networks. In this paper, we identify the main considerations and challenges for cloud-based vehicular networks regarding multimedia services, and propose potential research directions to make multimedia services achievable. More specifically, we quantitatively evaluate the performance metrics of these approaches. For example, in the proposed broadcast storm mitigation scheme for vehicular networks, the packet delivery ratio and the normalized throughput can both reach about 90%, making the proposed scheme a useful candidate for multimedia data exchange. Moreover, in the video uplinking scenarios, the proposed scheme compares favorably with two well-known schedulers, M-LWDF and EXP, with performance much closer to the optimum.
With advancements in information and communication technology (ICT), there is an increase in the number of users availing themselves of remote healthcare applications. The data collected about patients in these applications varies with respect to volume, velocity, variety, veracity, and value. Processing such a large collection of heterogeneous data is one of the biggest challenges and needs a specialized approach. To address this issue, a new fuzzy rule-based classifier for big data handling using cloud-based infrastructure is presented in this paper, with the aim of providing Healthcare-as-a-Service (HaaS) to users located at remote locations. The proposed scheme is based upon cluster formation using the modified Expectation-Maximization (EM) algorithm and processing of the big data in the cloud environment. Then, a fuzzy rule-based classifier is designed for efficient decision making about data classification in the proposed scheme. The proposed scheme is evaluated with respect to different evaluation metrics such as classification time, response time, accuracy, and false positive rate. The results obtained are compared with standard techniques to confirm the effectiveness of the proposed scheme.
The prospect of Line-of-Business Services (LoBSs) for the infrastructure of Emerging Sensor Networks (ESNs) is exciting. Access control remains a top challenge in this scenario, as the service provider's server contains many valuable resources. LoBSs' users are very diverse, as they may come from a wide range of locations with vastly different characteristics. The cost of joining could be low, and in many cases intruders are eligible users conducting malicious actions. As a result, user access should be adjusted dynamically. Assessing LoBSs' risk dynamically based on both the frequency and the threat degree of malicious operations is therefore necessary. In this paper, we propose a Quantitative Risk Assessment Model (QRAM) involving frequency and threat degree based on value at risk. To quantify the threat degree as elementary intrusion effort, we amend the influence coefficient of risk indexes in the network security situation assessment model. To quantify threat frequency as intrusion trace effort, we make use of multiple behavior information fusion. Under the influence of intrusion traces, we adapt the historical simulation method of value at risk to dynamically assess LoBSs' risk. Simulation based on existing data is used to select appropriate parameters for QRAM. Our simulation results show that the duration influence on elementary intrusion effort is reasonable when the normalized parameter is 1000. Likewise, the time window of the intrusion trace and the weight between objective risk and subjective risk can be set to 10 s and 0.5, respectively. While our focus is to develop QRAM for dynamically assessing the risk of LoBSs for the infrastructure of ESNs involving frequency and threat degree, we believe it is also appropriate for other scenarios in cloud computing.
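The historical-simulation flavor of value at risk that QRAM adapts can be sketched generically: collect past per-window risk scores and read off an upper quantile. The exponential scores below are synthetic, and QRAM's actual risk indexes, fusion weights, and normalized parameters are not reproduced:

```python
import numpy as np

def historical_var(losses, alpha=0.95):
    """Value at risk via historical simulation: the alpha-quantile of the
    observed loss (here, risk-score) distribution, with no distributional
    assumption beyond the empirical sample."""
    return float(np.quantile(np.asarray(losses, float), alpha))

np.random.seed(1)
scores = np.random.exponential(scale=2.0, size=500)  # synthetic per-window risk scores
var95 = historical_var(scores, 0.95)                 # "risk exceeded only 5% of the time"
```

In a QRAM-like setting, the loss series would be the fused intrusion-effort scores over the sliding time window (10 s in the paper's parameterization), re-evaluated as each window closes.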
With the ongoing shift of network services to the application layer, monitoring systems also focus more on data from the application layer. The increasing speed of network links, together with the increased complexity of application protocol processing, requires a new approach to hardware acceleration. We propose a new concept of hardware acceleration for flexible flow-based application-level traffic monitoring, which we call Software Defined Monitoring. Application-layer processing is performed by monitoring tasks implemented in software in conjunction with a configurable hardware accelerator. The accelerator is a high-speed application-specific processor tailored to stateful flow processing. The software monitoring tasks control the level of detail retained by the hardware for each flow in such a way that usable information is always retained, while the remaining data is processed by simpler methods. Flexibility of the concept is provided by a plugin-based design of both the hardware and the software, which ensures adaptability in the evolving world of network monitoring. Our high-speed implementation, using an FPGA acceleration board in a commodity server, is able to perform 100 Gb/s flow traffic measurement augmented by selected application-level protocol analysis.
We propose a framed slotted Aloha-based adaptive method for robust communication between autonomous wireless nodes competing to access a channel under unknown network conditions such as adversarial disruptions. With energy as a scarce resource, we show that our method forces a reactive adversary to incur a higher energy cost, relative to a legitimate node, in order to disrupt communications. Consequently, the adversary depletes its energy resources and stops attacking the network. Using the proposed method, a transmitter node changes the number of selected time slots and the access probability in each selected time slot based on the number of unsuccessful transmissions of a data packet. On the receiver side, a receiver node changes the probability of listening in a time slot based on the number of unsuccessful communication attempts for a packet. We compare the proposed method with two other framed slotted Aloha-based methods in terms of average energy consumption and average time required to communicate a packet. For performance evaluation, we consider scenarios in which: (1) multiple nodes compete to access a channel; (2) nodes compete in the presence of adversarial attacks; (3) nodes compete in the presence of channel errors and the capture effect.
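The energy asymmetry claimed above can be illustrated with a back-of-the-envelope model, which is not the paper's analysis: a sender that transmits in each slot of a frame independently with probability p pays roughly p·F energy units per frame, while a reactive jammer that cannot predict the chosen slots must cover all F slots to guarantee disruption:

```python
def frame_energy(frame_len, p_tx, e_slot=1.0):
    """Expected per-frame energy for a probabilistic sender versus a
    reactive jammer that must cover every slot (illustrative worst case
    with equal per-slot cost; not the paper's exact model)."""
    sender = p_tx * frame_len * e_slot   # transmits in each slot w.p. p_tx
    jammer = frame_len * e_slot          # must jam/sense all slots
    return sender, jammer

s_e, j_e = frame_energy(frame_len=16, p_tx=0.25)
```

Under this simplification, lowering the access probability (as the adaptive method does after failures) widens the sender-to-jammer energy gap, which is the mechanism by which the adversary is eventually depleted.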
In this tutorial paper, we discuss and compare cooperative content delivery (CCD) techniques that exploit the multiple wireless interfaces available on mobile devices to efficiently satisfy the already massive and rapidly growing user demand for content. The discussed CCD techniques include the simultaneous use, opportunistic use, and aggregate use of wireless interfaces. We provide a taxonomy of the different ways in which multiple wireless interfaces are exploited for CCD, and also discuss real measurement studies that evaluate the content delivery performance of different wireless interfaces in terms of energy consumption and throughput. We describe several challenges related to the design of CCD methods using multiple interfaces, and also explain how new technological developments can help accelerate the performance of such CCD methods. The new technological developments discussed in this paper include wireless interface aggregation, network caching, and the use of crowdsourcing. We provide a case study on the selection of devices in a group for CCD using multiple interfaces. This case study is based on the observation that, in general, different CCD users can have different link qualities in terms of transmit/receive performance, and selecting users with good link qualities for CCD can improve the content delivery performance of wireless networks. Finally, we discuss some open research issues relating to CCD using multiple interfaces.
In Named-Data Networking (NDN), content is cached in network nodes and served for future requests. This property of NDN allows attackers to inject poisoned content into the network and isolate users from valid content sources. Since a digital signature is embedded in every piece of content in the NDN architecture, poisoned content is discarded if routers perform signature verification; however, verifying every piece of content at every router would be prohibitively expensive. In our preliminary work, we suggested a content verification scheme that minimizes unnecessary verification and favors already-verified content in the content store, which reduces the verification overhead by as much as 90% without failing to detect any piece of poisoned content. Under this scheme, however, routers are vulnerable to a verification attack, in which a large amount of unverified content is accessed to exhaust system resources. In this paper, we carefully examine the possible concerns with our preliminary work, including the verification attack, and present a simple but effective solution. The proposed solution mitigates the weakness of our preliminary work and allows the scheme to be deployed in real-world applications.
Millimeter-wave (mmWave) communication is a rising technology for next-generation wireless transmission. Benefiting from its abundant bandwidth and short wavelength, mmWave excels at multi-gigabit transmission and beamforming. However, the short wavelength also makes mmWave easily blocked by obstacles. To bypass these obstacles, relays are widely needed in mmWave communications. Unmanned autonomous vehicles (UAVs), such as drones and self-driving robots, enable mobile relays in real applications. Nevertheless, it is challenging for a UAV to find its optimal relay location automatically. On the one hand, it is difficult to find the location accurately due to the complex and dynamic wireless environment; on the other hand, most applications require the relay to forward data immediately, so the autonomous process should be fast. To tackle this challenge, we propose a novel method, AutoRelay, specialized for mmWave communications. In AutoRelay, the UAV samples the link qualities of mmWave beams while moving. Based on the real-time sampling, the UAV gradually adjusts its path to approach the optimal location by leveraging compressive sensing theory to estimate the link qualities in the candidate space, which increases accuracy and saves time. Performance results demonstrate that AutoRelay outperforms existing methods in achieving an accurate and efficient relay strategy.
Due to the widespread popularity of Internet-enabled devices, the Industrial Internet of Things (IIoT) has become popular in recent years. However, as smart devices share information with each other over an open channel, i.e., the Internet, the security and privacy of the shared information remain a paramount concern. Some solutions exist in the literature for preserving security and privacy in the IIoT environment. However, due to their heavy computation and communication overheads, these solutions may not be applicable to a wide category of applications in the IIoT environment. Hence, in this paper, we propose a new Biometric-based Privacy Preserving User Authentication (BP2UA) scheme for cloud-based IIoT deployment. BP2UA provides strong authentication between users and smart devices using pre-established key agreement between smart devices and the gateway node. A formal security analysis of BP2UA using the well-known ROR model is provided to prove its session key security. Moreover, an informal security analysis of BP2UA is also given to show its robustness against various types of known attacks. The computation and communication costs of BP2UA, in comparison to other existing schemes of its category, demonstrate its effectiveness in the IIoT environment. Finally, a practical demonstration of BP2UA is also given using NS2 simulation.
Energy is one of the most valuable resources of the modern era and needs to be consumed in an optimized manner through the intelligent usage of various smart devices, which are major sources of energy consumption nowadays. With the popularity of low-voltage DC appliances such as LEDs, computers, and laptops, there arises a need to design new solutions for self-sustainable smart energy buildings containing these appliances. These smart buildings constitute the next-generation smart cities. With this focus, this article proposes a cloud-assisted DC nanogrid for self-sustainable smart buildings in next-generation smart cities. As there may be a large number of such smart buildings in different smart cities in the near future, a huge amount of data with respect to the demand and generation of electricity is expected to be generated from all such buildings. This data would be of heterogeneous types, as it would be generated from different types of appliances in these smart buildings. To handle this situation, we use a cloud-based infrastructure to make intelligent decisions with respect to the energy usage of various appliances. This results in an uninterrupted DC power supply to all low-voltage DC appliances with minimal dependence on the grid. Hence, the extra burden on the main grid during peak hours is reduced, as buildings in smart cities would be self-sustainable with respect to their energy demands.
We propose a genetic algorithm (GA) that takes into account the correlation between the current best candidate and the other candidates in the population. In this paper, we propose a new selection operator and, in addition, design a prediction operator that works on an archive of selected candidates. We test the proposed algorithm on the problem definitions for the CEC 2014 special session and competition on single-objective real-parameter numerical optimization.
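As a rough illustration of correlation-aware selection (the abstract does not specify the actual operator, so the scoring rule and the 0.5 penalty weight below are assumptions), one could penalize candidates that are strongly correlated with the incumbent best:

```python
import numpy as np

def correlation_aware_select(pop, fitness, k, penalty=0.5):
    """Hypothetical selection operator in the spirit of the abstract:
    score candidates by fitness (minimization) plus a penalty proportional
    to their absolute correlation with the current best candidate, so the
    mating pool stays decorrelated from the incumbent."""
    fitness = np.asarray(fitness, float)
    best = pop[np.argmin(fitness)]
    def corr(x):
        c = np.corrcoef(x, best)[0, 1]
        return abs(c) if np.isfinite(c) else 1.0
    score = fitness + penalty * np.array([corr(ind) for ind in pop])
    return pop[np.argsort(score)[:k]]    # k lowest combined scores survive

rng = np.random.default_rng(0)
pop = rng.normal(size=(20, 10))          # 20 candidates, 10 genes each
fit = (pop ** 2).sum(axis=1)             # sphere function as a toy objective
parents = correlation_aware_select(pop, fit, k=5)
```

A prediction operator over an archive of such selected candidates could then extrapolate promising directions, but its form is not described in the abstract and is omitted here.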
Over the years, advanced IT technologies have facilitated the emergence of new ways of generating and gathering data rapidly, continuously, and in large volumes, and are associated with a new research and application branch, namely data stream mining (DSM). Among the many scenarios of DSM, the Internet of Things (IoT) plays a significant role as a typical example of a tough and challenging computational case of big data. In this paper, we describe a self-adaptive approach to the pre-processing step of data stream classification. The proposed algorithm allows different divisions, with both variable numbers and variable lengths of sub-windows, under a whole sliding window on an input stream, and clustering-based particle swarm optimization (CPSO) is adopted as the main metaheuristic search method to guarantee that the stream segmentations are effective and self-adaptive. In order to create a richer search space, statistical feature extraction (SFX) is applied after the variable partitioning of the entire sliding window. We validate and test our algorithm against other temporal methods on several IoT environmental sensor monitoring datasets. The experiments yield encouraging outcomes, suggesting that heuristically picking appropriate variant sub-window segmentations with an integrated clustering technique allows the approach to perform better than the others.
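The SFX step after window partitioning might look like the following sketch; here the sub-window boundaries are fixed by hand, whereas the paper's CPSO searches for them, and the particular statistics per sub-window are assumptions:

```python
import numpy as np

def sfx(window, boundaries):
    """Statistical feature extraction over variable-length sub-windows:
    `boundaries` are the split points (which CPSO would optimize); each
    sub-window contributes mean, std, min, max, and linear slope."""
    feats = []
    for seg in np.split(np.asarray(window, float), boundaries):
        t = np.arange(len(seg))
        slope = np.polyfit(t, seg, 1)[0] if len(seg) > 1 else 0.0
        feats += [seg.mean(), seg.std(), seg.min(), seg.max(), slope]
    return np.array(feats)

window = np.sin(np.linspace(0, 6, 60))       # one sliding window of sensor readings
features = sfx(window, boundaries=[20, 45])  # sub-windows of length 20, 25, 15
```

Each candidate segmentation thus maps the same raw window to a different feature vector, which is what makes the segmentation choice worth optimizing before classification.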
With the improvement of mobile device performance, the requirement for equivalent dose description in intensity-modulated radiation therapy (IMRT) is increasing in mobile multimedia for healthcare. The emergence of mobile cloud computing will provide cloud servers and storage for IMRT mobile applications, thus realizing visualized radiotherapy in a real sense. Equivalent uniform dose (EUD) is a biomedical indicator based on the dose measure. In this study, the dose volume histogram is used to describe the dose distribution of different tissues in target and nontarget regions. Traditional definitions of equivalent uniform dose, such as the exponential form and the linear form, have only a few parameters in the model for fast calculation. However, there is no close relationship between these traditional definitions and the dose volume histogram. In order to establish consistency between the equivalent uniform dose and the dose volume histogram, this paper proposes a novel definition of equivalent uniform dose based on the volume dose curve, called VD-EUD. By using a unique organic volume weight curve, it is easy to calculate the VD-EUD for different dose distributions. In the definition, different weight curves are used to represent the biological effects of different organs. For the target area, we should be more careful about those voxels with a low dose (cold spots); thus, the weight curve is monotonically decreasing, while for the nontarget area, the curve is monotonically increasing. Furthermore, we present the curves for parallel, serial, and mixed organs of nontarget areas separately, and we define the weight curve form with only two parameters. Medical doctors can adjust the curve interactively according to different patients and organs.
We also propose a fluence map optimization model with the VD-EUD constraint, which means the proposed EUD constraint leads to a large feasible solution space. We compare the generalized equivalent uniform dose (gEUD) and the proposed VD-EUD through experiments, which show that the VD-EUD has a closer relationship with the dose volume histogram. If the biological survival probability is equivalent to the VD-EUD, the feasible solution space would be large, and the target areas can be covered. By establishing a personalized organic weight curve, medical doctors can have a unique VD-EUD for each patient. By using this flexible and adjustable equivalent uniform dose definition, we can establish a VD-EUD-based fluence map optimization model, which leads to a larger solution space than the traditional dose-volume-constraint-based model. The VD-EUD is a new definition; thus, it needs more clinical testing and verification.
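A minimal sketch of how a volume-weight curve could turn a sorted dose distribution into a single VD-EUD-like number follows. The paper's actual two-parameter curve form and organ models are not reproduced; the linear curves and dose values below are invented for illustration:

```python
import numpy as np

def vd_eud(voxel_doses, weight_curve):
    """Hypothetical VD-EUD sketch: sort voxel doses (as in a cumulative DVH),
    weight each volume fraction by an organ-specific curve, and return the
    weighted mean dose. For a target the curve decreases, so cold spots
    dominate; for a serial organ it increases, so hot spots dominate."""
    d = np.sort(np.asarray(voxel_doses, float))    # ascending dose
    v = (np.arange(len(d)) + 0.5) / len(d)         # volume-fraction axis in (0, 1)
    w = weight_curve(v)
    return float(np.sum(w * d) / np.sum(w))

target_curve = lambda v: 2.0 - v   # decreasing: emphasize low-dose (cold) voxels
serial_curve = lambda v: 1.0 + v   # increasing: emphasize high-dose (hot) voxels

doses = np.array([58, 60, 60, 62, 45])   # Gy; one voxel is under-dosed
eud_target = vd_eud(doses, target_curve)
eud_serial = vd_eud(doses, serial_curve)
```

With the same dose distribution, the target curve pulls the equivalent dose below the plain mean (penalizing the cold spot) while the serial-organ curve pushes it above, which is the qualitative behavior the abstract describes.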
In recent years, Radio Frequency Identification (RFID) applications of various kinds have been blooming. However, along with this stunning advancement have come all sorts of security and privacy issues, for RFID tags oftentimes store private data, and so the permission to read a tag, or any other kind of access, needs to be carefully controlled. Therefore, of all the RFID-related research released so far, a large portion focuses on the issue of authentication. There have been many cases where legal access to or control over a tag needs to be switched from one reader to another, which has encouraged the development of quite a number of different kinds of ownership transfer protocols. On the other hand, not only has the need for ownership transfer been increasing, but part of it has also been evolving from individual ownership transfer into group ownership transfer. However, in spite of the growing need for practical group ownership transfer services, little research has been done to answer this need. In this paper, we present a new RFID time-bound group ownership delegate protocol based on homomorphic encryption and quadratic residues. In addition, in order to provide a more comprehensive service, on top of mutual authentication and ownership delegation, we also offer options for e-th-time verification as well as the revocation of an earlier delegation.
Wireless body area networks (WBANs) support the interoperability of biomedical sensors and medical institutions conveniently and efficiently, making them an appropriate solution for pervasive healthcare. Typically, WBANs comprise in-body or around-body sensor nodes that collect physiological data. An efficient medium access control (MAC) protocol is therefore of paramount importance for coordinating these devices and forwarding data to the medical center in an efficient and reliable way. However, the extensive use of the wireless channel and the coexistence of WBANs may result in inevitable interference, which causes performance degradation. Besides, contention-based access on a single channel is inefficient for dense medical traffic on account of large packet delay, high energy consumption, and the starvation of low-priority traffic. To address these issues, we propose a multi-channel MAC (MC-MAC) scheme to obtain better network performance. Considering the characteristics and emergency degree of medical traffic, we introduce a novel channel mapping and selection mechanism, cooperating with a conflict avoidance strategy, to organize nodes' access to available channels without collisions. In addition, we have evaluated the performance of MC-MAC and the standard IEEE 802.15.6 via simulation and a hardware test, conducted on a hardware platform based on a prototype WBAN system. Both the analysis and the simulation results show that MC-MAC outperforms IEEE 802.15.6 in terms of packet delay, throughput, packet error rate, and frame error rate.
To accommodate the trend toward mass customization launched by intelligent manufacturing, this paper proposes adopting the model-integrated computing (MIC) paradigm in the motion control system development process to enhance flexibility and robustness. Hierarchical structural and behavioral diversity in motion control systems is considered during the implementation of the MIC paradigm. For the design phase, a motion-control-domain-specific modeling language is developed and formal semantics are integrated. For the execution phase, a real-time runtime framework compliant with the IEC 61499 standard is proposed, with extensions for function block (FB) chains and priority-based event propagation, and a dynamically extendable FB type library for the motion control domain is constructed. A prototype three-axis motion control system is modeled using the proposed modeling language and then deployed to the implemented framework to demonstrate the feasibility of adopting the MIC paradigm in the motion control domain.
Graphics processing unit (GPU) accelerated processing achieves significant efficiency gains in many multimedia applications. With the development of GPU cloud computing, more and more cloud providers are focusing on GPU-accelerated services. Because of high maintenance costs and application-dependent speedups, GPU-accelerated services need a different pricing strategy. Thus, in this paper, we propose an optimal pricing strategy for GPU-accelerated multimedia processing services that maximizes the profits of both the cloud provider and its users. We first analyze the revenues and costs of the cloud provider and users when users adopt GPU-accelerated multimedia processing services, and then state the profit functions of both parties. With a game-theory-based method, we find the optimal solutions of both the cloud provider's and the users' profit functions. Finally, large-scale simulations show that our pricing strategy brings higher profit to the cloud provider and users than the original pricing strategy of GPU cloud services.
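The leader-follower structure of such a pricing game can be illustrated with a toy Stackelberg sketch: the provider posts a unit price, a user best-responds with its demand, and the provider's optimum is found by search. The linear demand model and all parameters are illustrative assumptions, not the paper's profit functions.

```python
import numpy as np

# Toy Stackelberg pricing: provider (leader) posts a unit price, the user
# (follower) best-responds with linear demand; parameters are made up.
a, b, cost = 10.0, 2.0, 1.0       # demand intercept/slope, provider's unit cost

def user_demand(price):
    """Follower's best response under the assumed linear demand model."""
    return max(a - b * price, 0.0)

prices = np.linspace(0.0, 5.0, 501)
profit = np.array([(p - cost) * user_demand(p) for p in prices])
p_star = float(prices[np.argmax(profit)])
print(p_star)    # analytic optimum is (a + b*cost) / (2*b) = 3.0
```

In the full game-theoretic setting of the paper, the user side is itself an optimization over adopting GPU versus CPU services, but the search-for-the-leader's-optimum structure is the same.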
The impact of both maximal ratio combining (MRC) and relay selection on physical layer security in wireless communication systems is studied by analyzing several important factors, including the probability characteristics of the end-to-end signal-to-noise ratio (SNR) at the legitimate receiver (Bob) and the malicious eavesdropper (Eve), the secrecy outage probability, and the average secrecy channel capacity over Rayleigh fading channels. We assume that Bob receives its data from both the relay and the source by cooperative communication, with MRC employed at the receiver. Compared to conventional MRC methods, a higher spatial diversity order can be exploited by performing relay selection in the proposed method, as validated via both theoretical analysis and numerical simulation. The theoretical closed-form expressions for the figures of merit, e.g., the secrecy outage probability and the average secrecy capacity, are all consistent with the numerical results.
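The benefit of MRC diversity on secrecy outage can be illustrated with a Monte Carlo sketch under standard assumptions: i.i.d. Rayleigh fading gives exponentially distributed per-branch SNRs, and MRC adds them. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def secrecy_outage(mean_snr_bob, mean_snr_eve, n_branches, r_s, n=200_000):
    """Monte Carlo estimate of the secrecy outage probability: Bob combines
    n_branches i.i.d. Rayleigh-faded branches with MRC (exponential per-branch
    SNRs summed), while Eve has a single branch."""
    snr_b = rng.exponential(mean_snr_bob, size=(n, n_branches)).sum(axis=1)
    snr_e = rng.exponential(mean_snr_eve, size=n)
    c_s = np.maximum(np.log2(1 + snr_b) - np.log2(1 + snr_e), 0.0)
    return float(np.mean(c_s < r_s))

p1 = secrecy_outage(10.0, 1.0, n_branches=1, r_s=1.0)
p2 = secrecy_outage(10.0, 1.0, n_branches=2, r_s=1.0)
print(p1, p2)   # more diversity branches at Bob lower the outage probability
```

The drop from p1 to p2 mirrors the higher spatial diversity order that relay selection plus MRC exploits in the paper's analysis.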
There have been many recent advances in wireless communication technologies, particularly in the area of wireless sensor networks, which have undergone rapid development and been successfully applied in the consumer electronics market. As a result, wireless networks (WNs) have been attracting more attention from academic communities and other domains. From an industrial perspective, WNs offer many advantages, including flexibility, low cost, and easy deployment. WNs can therefore play a vital role in the Industry 4.0 framework and be used for smart factories and intelligent manufacturing systems. In this paper, we present an overview of industrial WNs (IWNs), discuss IWN features and related techniques, and then provide a new architecture based on quality of service and quality of data for IWNs. We also discuss applications and standards for IWNs. Then, we use a case from our previous work to explain how to design an IWN under Industry 4.0. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make IWNs truly ubiquitous for a wide range of applications.
To protect confidential communications from eavesdropping attacks in wireless networks, we propose a novel anti-eavesdropping scheme named AE-Shelter. In the proposed scheme, a number of friendly jammers are placed at a circular boundary to protect legitimate communications. By sending artificial noise, the jammers degrade the eavesdroppers' capability of wiretapping the confidential information. We also establish a theoretical model to evaluate the performance of AE-Shelter. Our results show that AE-Shelter can significantly reduce the eavesdropping risk without noticeably degrading the network performance.
Modelling the human brain as a complex network has provided a powerful mathematical framework to characterize the structural and functional architectures of the brain. In the past decade, the combination of non-invasive neuroimaging techniques and graph theoretical approaches has enabled us to map human structural and functional connectivity patterns (i.e., the connectome) at the macroscopic level. One of the most influential findings is that human brain networks exhibit prominent small-world organization. Such a network architecture facilitates efficient information segregation and integration at low wiring and energy costs, which presumably results from natural selection under the pressure of a cost-efficiency balance. Moreover, the small-world organization undergoes continuous changes during normal development and aging and exhibits dramatic alterations in neurological and psychiatric disorders. In this review, we survey recent advances regarding the small-world architecture in human brain networks and highlight its potential implications and applications in multidisciplinary fields, including cognitive neuroscience, medicine and engineering. Finally, we highlight several challenging issues and areas for future research in this rapidly growing field.
The rapid development of the latest distributed computing paradigm, i.e., cloud computing, has generated a highly fragmented cloud market composed of numerous cloud providers, and offers tremendous parallel computing ability to handle Big Data problems. One of the biggest challenges in multi-cloud environments is efficient workflow scheduling. Although the workflow scheduling problem has been studied extensively, very few existing works are tailored for multi-cloud environments. Moreover, the existing works either fail to satisfy the Quality of Service (QoS) requirements or do not consider some fundamental features of cloud computing, such as the heterogeneity and elasticity of computing resources. In this paper, a scheduling algorithm called Multi-Clouds Partial Critical Paths with Pretreatment (MCPCPP), for Big Data workflows in multi-cloud environments, is presented. This algorithm incorporates the concept of partial critical paths and aims to minimize the execution cost of a workflow while satisfying a defined deadline constraint. Our approach takes into consideration the essential characteristics of multi-cloud environments, such as charging per time interval, various instance types from different cloud providers, and homogeneous intra-bandwidth versus heterogeneous inter-bandwidth. Various types of workflows are used for evaluation purposes, and our experimental results show that MCPCPP is promising.
Ubiquitous health monitoring is a mobile health service with the aim of monitoring patients' conditions anytime and anywhere by collecting and transferring biosignal data from patients to health-service providers (e.g., healthcare centers). As a critical issue in ubiquitous health monitoring, wireless resource allocation influences the performance of health monitoring, and the majority of work on wireless resource allocation for health monitoring has focused on quality-of-service oriented allocation schemes, with the primary challenges at the physical and MAC layers. Recently, quality-of-experience (QoE) oriented resource allocation schemes in wireless health monitoring have attracted attention as a promising approach to better healthcare monitoring services. In this paper, we review the metrics for assessing the quality of medical images and discuss the performance of these metrics in QoE-oriented resource allocation for health monitoring. We start by addressing the state-of-the-art QoE metrics, providing a taxonomy of the different metrics employed in assessing medical images. We then present the design of resource allocation schemes for health monitoring. After that, we present a case study comparing the performance of different classes of metrics in designing resource allocation schemes. We end the paper with a few open issues in the design of novel QoE metrics for resource allocation in health monitoring.
A device-to-device (D2D) assisted cellular network is a pervasive way to support ubiquitous healthcare applications, since it is expected to bring significant benefits such as improved user throughput and extended battery life of mobiles. However, D2D and cellular communications in the same network may cause cross-tier interference (CTI) to each other. Another critical issue of using D2D-assisted cellular networks in a healthcare scenario is the electromagnetic interference (EMI) caused by RF transmission, since a high level of EMI may lead to critical malfunctions of medical equipment. In consideration of both CTI and EMI, we study the problem of optimizing the energy efficiency (EE) across mobile users of different priorities (levels of emergency) within the Internet of Vehicles for mobile health, and propose a penalty-function power control algorithm to solve this problem. Numerical results demonstrate that the proposed algorithm achieves remarkable EE improvements while ensuring an allowable level of EMI at medical equipment.
A device-to-device (D2D) assisted cellular network is a pervasive way to support ubiquitous healthcare applications, since it is expected to bring significant benefits such as improved user throughput and extended battery life of mobiles. However, D2D and cellular communications in the same network may cause cross-tier interference (CTI) to each other. Another critical issue of using D2D-assisted cellular networks in a healthcare scenario is the electromagnetic interference (EMI) caused by RF transmission, since a high level of EMI may lead to critical malfunctions of medical equipment. In consideration of both CTI and EMI, we study the problem of optimizing the individual channel rates of mobile users of different priorities (levels of emergency) within the Internet of Vehicles for mobile health, and propose a transmit power control algorithm under a game-theoretic framework to solve this problem. Numerical results show that the proposed algorithm converges linearly to the optimum while ensuring an allowable level of EMI at medical equipment.
This survey provides an overview of the most recent applications of neural networks to computer-aided medical diagnosis (CAMD) over the past decade. CAMD can facilitate the automation of decision making and the extraction and visualization of complex characteristics for clinical diagnosis purposes. Over the past decade, neural networks have attracted considerable research interest and are widely employed in complex CAMD systems in diverse clinical application domains, such as disease detection, disease classification, and testing the compatibility of new drugs. Overall, this paper reviews the state of the art of neural networks for CAMD. It helps readers understand the topic by summarizing the findings addressed in recent academic papers and presenting a few open issues in developing research on this topic.
Wireless technologies are pervasive in supporting ubiquitous healthcare applications. However, RF transmission in wireless technologies can cause electromagnetic interference (EMI) on medical sensors in a healthcare scenario, and a high level of EMI may lead to critical malfunctions of medical sensors. In view of EMI to medical sensors, we propose a joint power and rate control algorithm under a game-theoretic framework to schedule data transmission at each wireless sensor. The objective of the game is to maximize the utility of each wireless user subject to the EMI constraints of the medical sensors. We show that the proposed game has a unique Nash equilibrium and that our joint power and rate control algorithm converges to it. Numerical results illustrate that the proposed algorithm achieves robust performance against variations in mobile hospital environments.
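Convergence to a Nash equilibrium of this kind can be sketched with a best-response iteration for an illustrative two-user power game. The utility shape, channel gains, and the power cap standing in for the EMI constraint are all assumptions for illustration, not the paper's model.

```python
import numpy as np

# Illustrative two-user power game:
#   utility_i = ln(1 + g_i p_i / (n0 + h p_j)) - c p_i, with cap p_max (EMI proxy).
g = np.array([1.0, 0.8])          # direct channel gains
h, n0, c, p_max = 0.2, 0.1, 1.0, 2.0

def best_response(p_other, g_i):
    """Closed-form argmax of the concave utility, clipped to the power cap."""
    return float(np.clip(1.0 / c - (n0 + h * p_other) / g_i, 0.0, p_max))

p = np.array([0.0, 0.0])
for _ in range(100):              # iterate simultaneous best responses
    p = np.array([best_response(p[1], g[0]),
                  best_response(p[0], g[1])])
print(p)                          # the fixed point is the Nash equilibrium
```

Because each best response is a contraction here, the iterates converge geometrically, which is the qualitative behavior the abstract's convergence claim describes.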
Wireless technologies are pervasive in supporting ubiquitous healthcare applications. However, a critical issue of using wireless communications in a healthcare scenario is the electromagnetic interference (EMI) caused by RF transmission, and a high level of EMI may lead to critical malfunctions of medical sensors. In consideration of EMI on medical sensors, we study the optimization of quality of service (QoS) across the whole Internet of Vehicles for e-health and propose a novel model that optimizes QoS by allocating the transmit power of each user. Our results show that the optimal power control policy depends on the optimization objective: a greedy policy is optimal for maximizing the sum of the users' QoS, whereas a fair policy is optimal for maximizing their product. Algorithms are developed to derive the optimal policies, and numerical results are presented for both objectives under QoS constraints.
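The sum-versus-product distinction can be illustrated with a two-user grid search, modeling QoS as a logarithmic achievable rate under a shared power cap. This is an assumed toy model; the paper's QoS definition and EMI constraints differ.

```python
import numpy as np

g = np.array([2.0, 1.0])            # hypothetical channel gains of two users
P = 1.0                             # shared transmit-power budget (EMI proxy)

p_1 = np.linspace(0.0, P, 1001)     # user 1's power; user 2 gets the rest
q1 = np.log2(1.0 + g[0] * p_1)      # QoS modeled as achievable rate
q2 = np.log2(1.0 + g[1] * (P - p_1))

sum_best = float(p_1[np.argmax(q1 + q2)])    # sum-QoS optimum
prod_best = float(p_1[np.argmax(q1 * q2)])   # product-QoS (fair) optimum
print(sum_best, prod_best)
```

The sum objective pushes most of the budget to the stronger user, while the product objective splits it far more evenly, mirroring the greedy-versus-fair dichotomy in the abstract.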
Presents an introduction to the issue, which focuses on data mining in cyber, physical, and social computing.
We propose a cooperative MAC protocol with rapid relay selection (RRS-CMAC) to improve the cooperation efficiency and multiple access performance in wireless ad hoc networks. In this protocol, if the data rate between a sender and its recipient is low, an optimal relay is selected through a rate differentiation phase (RDP), a priority differentiation phase (PDP), and a contention resolution phase (CRP) for relays with the same priority. In the RDP, each contending relay determines its data rate level based on the data rate from the sender to itself and that from itself to the recipient, and then broadcasts busy tones to its neighbor nodes or senses the channel according to the values of its binary digits, which are determined by its data rate level. Relays with the highest data rate levels win and continue to the next phase. In the PDP, these winning relays send busy tones or sense the channel according to their own priority values, with the highest-priority relays winning this phase. Then the CRP is performed using k-round contention resolution (k-CR) to select a unique optimal relay. Relays sending busy tones earliest and for the longest duration proceed to the next round, while the others, on sensing a busy tone, withdraw from contention. A packet piggyback mechanism is adopted to allow data packet transmission without reservation if the winning relay has a packet to send and the direct transmission rate to its recipient is high. This reduces reservation overhead and improves channel utilization. Both theoretical analysis and simulation results show that the throughput of the proposed protocol is better than those of the CoopMACA and 2rcMAC protocols.
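The busy-tone contention in the RDP and PDP resembles binary countdown arbitration: in each bit slot, relays whose current bit is 1 assert a busy tone, and relays whose bit is 0 withdraw if they sense one. A simplified sketch of that elimination logic (the real protocol's timing and signaling are richer):

```python
def binary_countdown(levels, n_bits):
    """Relays contend bit-by-bit, most significant bit first; a relay whose
    current bit is 0 withdraws if any surviving relay asserts a busy tone."""
    survivors = list(range(len(levels)))
    for bit in reversed(range(n_bits)):
        asserted = [i for i in survivors if (levels[i] >> bit) & 1]
        if asserted:                 # someone sent a busy tone this slot
            survivors = asserted     # silent (bit = 0) relays withdraw
    return survivors

print(binary_countdown([5, 3, 6, 6], 3))   # relays 2 and 3 tie at the top level
```

Relays that survive with equal levels (the tie above) are exactly what the subsequent k-round contention resolution phase is needed to break.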
Time-domain enhanced inter-cell interference coordination (eICIC) is an effective technique to reduce the cross-tier inter-cell interference (ICI) in long term evolution (LTE)-based heterogeneous small cell networks (HetSCNs). This paper first clarifies the two main communication scenarios in HetSCNs, i.e., macrocells deployed with femtocells (macro-femto) and with picocells (macro-pico). Then, the main challenges in HetSCNs are analyzed, particularly the severe cross-tier ICI caused in macro-femto by femtocells with closed subscriber group (CSG) access and in macro-pico by picocells with range expansion. Based on the prominent feature of dominant interference in HetSCNs, the main idea of time-domain interference coordination and the two basic schemes in the eICIC standardization, i.e., the almost blank subframe (ABS) and the orthogonal frequency division multiplexing symbol shift, are presented, with a systematic introduction to the interactions of these techniques with other network functions. Then, for both macro-femto and macro-pico HetSCNs, an overview is provided of the advanced designs of ABS-based eICIC, including self-optimized designs with regard to key parameters such as the ABS muting ratio, and jointly optimized designs of ABS-based eICIC and other radio resource management techniques, such as user association and power control. Finally, the open issues and future research directions are discussed.
We consider the crucial problem of maximizing the system-wide performance, which takes into account request processing throughput, smartphone user experience, and system stability, in a participatory sensing system with cooperative smartphones. Three important controls need to be made: 1) request admission control, 2) task allocation, and 3) task scheduling on smartphones. It is highly challenging to achieve the optimal system-wide performance, given arbitrary and unknown arrivals of sensing requests, the intrinsic tradeoff between request processing throughput and smartphone user experience degradation, and heterogeneous requests. Little existing work has studied this crucial problem of maximizing the system-wide performance of a participatory sensing system as a whole. In response to these challenges, we propose an optimal online control approach to maximize the system-wide performance of a participatory sensing system. Exploiting stochastic Lyapunov optimization techniques, it derives the optimal online control strategies for request admission control, task allocation, and task scheduling on smartphones. The most salient feature of our approach is that the achieved system-wide performance is arbitrarily close to the optimum, despite unpredictable and arbitrary request arrivals. Rigorous theoretical analysis and comprehensive simulation evaluation jointly demonstrate the efficacy of our online control approach.
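The flavor of a Lyapunov drift-plus-penalty admission rule can be sketched as a backlog threshold: admit a slot's requests only while the utility weight V outweighs the current queue length, trading throughput against backlog. This is a generic illustration of the technique, not the paper's exact control policy.

```python
def admit_control(arrivals, services, V):
    """Drift-plus-penalty threshold rule: admit a slot's arrivals only while
    the utility weight V outweighs the current backlog Q; Q is then served."""
    Q, admitted = 0.0, 0
    for a, s in zip(arrivals, services):
        x = a if V >= Q else 0       # admission decision for this slot
        admitted += x
        Q = max(Q + x - s, 0.0)      # queue (backlog) update
    return Q, admitted

arrivals, services = [3] * 50, [2] * 50
q_lo, adm_lo = admit_control(arrivals, services, V=2.0)
q_hi, adm_hi = admit_control(arrivals, services, V=10.0)
print(adm_lo, adm_hi)   # larger V admits more, at the cost of a larger backlog
```

The O(1/V) utility gap versus O(V) queue backlog tradeoff visible here is the standard guarantee behind "arbitrarily close to the optimum" claims in stochastic Lyapunov optimization.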
The Internet of Things (IoT) will serve communities across different domains of life. Tracking mobile targets is one important system engineering application in IoT, and the resources of embedded devices and objects working under IoT implementations are constrained. Thus, building a scheme that makes full use of energy is a key issue for mobile target tracking applications. To achieve both energy efficiency and high monitoring performance, an effective Knowledge-aware Proactive Nodes Selection (KPNS) system is proposed in this paper. The innovations of KPNS are as follows: 1) the number of proactive nodes is dynamically adjusted based on the prediction accuracy of the target trajectory. If the prediction accuracy is high, the number of proactive nodes in the non-main predicted area is decreased; if the prediction accuracy is low, a large number of proactive nodes are selected to enhance monitoring quality. 2) KPNS takes full advantage of the available energy to further enhance target tracking performance by properly selecting more proactive nodes in the network. We evaluated the efficiency of KPNS with both theoretical analysis and simulation-based experiments. The experimental results demonstrate that compared with the Probability-based target Prediction and Sleep Scheduling strategy (PPSS), KPNS improves energy efficiency by 60%, and reduces the target missing rate and tracking delay to 66% and 75%, respectively.
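Innovation 1 can be sketched as a simple mapping from prediction accuracy to the proactive-node count outside the main predicted area. The linear rule and all parameter values are illustrative assumptions, not KPNS's actual selection logic.

```python
def proactive_counts(pred_accuracy, main_nodes=5, side_max=10):
    """Sketch of the KPNS idea: the better the trajectory prediction, the
    fewer proactive nodes are kept awake outside the main predicted area;
    poor prediction activates up to side_max of them."""
    side_nodes = round(side_max * (1.0 - pred_accuracy))
    return main_nodes, side_nodes

print(proactive_counts(0.9), proactive_counts(0.3))
```

Waking fewer side-area nodes when the trajectory is predictable is where the energy savings over always-on monitoring come from.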
Efficient urban transportation systems are widely accepted as essential infrastructure for smart cities, and they can greatly increase a city's vitality and convenience for residents. The three core pillars of smart cities can be considered to be data mining technology, IoT, and mobile wireless networks. Enormous data from IoT is stimulating our cities to become smarter than ever before. In transportation systems, data-driven management can dramatically enhance operating efficiency by providing a clear and insightful image of passengers' transportation behavior. In this article, we focus on the data validity problem in a cellular network based transportation data collection system from two aspects: internal time discrepancy and data loss. First, the essence of time discrepancy is analyzed for both automated fare collection (AFC) and automated vehicular location (AVL) systems, and it is found that time discrepancies can be identified and rectified by analyzing the passenger origin inference success rate with different time shift values and evolutionary algorithms. Second, an algorithmic framework to handle location data loss and time discrepancy is provided. Third, the spatial distribution characteristics of location data loss events are analyzed, and we discover that they have a strong positive relationship with both high passenger volume and shadowing effects in urbanized areas, which can cause severe biases in passenger traffic analysis. Our research proposes data-driven methodologies to increase data validity and provides insights into the influence of IoT-level data loss on public transportation systems for smart cities.
While cloud computing is gaining popularity, diverse security and privacy issues are emerging that hinder the rapid adoption of this new computing paradigm, and the development of defensive solutions is lagging behind. To ensure a secure and trustworthy cloud environment, it is essential to identify the limitations of existing solutions and envision directions for future research. In this paper, we survey critical security and privacy challenges in cloud computing, categorize diverse existing solutions, compare their strengths and limitations, and envision future research directions.
In recent years, there has been increasing interest in using path identifiers (PIDs) as inter-domain routing objects. However, the PIDs used in existing approaches are static, which makes it easy for attackers to launch distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper we present the design, implementation, and evaluation of D-PID, a framework that uses PIDs negotiated between neighboring domains as inter-domain routing objects. In D-PID, the PID of an inter-domain path connecting two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate PIDs and how ongoing communications are maintained when PIDs change. We build a 42-node prototype comprising six domains to verify D-PID's feasibility, and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.
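One way to realize secret, dynamically changing identifiers is a keyed hash over a shared secret and an epoch counter, with the previous epoch's value honored briefly so ongoing flows survive a change. This is a hypothetical sketch of the idea, not D-PID's actual negotiation protocol.

```python
import hashlib
import hmac

def pid(secret: bytes, epoch: int) -> str:
    """Hypothetical PID derivation: both neighboring domains, sharing
    `secret`, compute the same epoch-dependent path identifier."""
    return hmac.new(secret, epoch.to_bytes(8, "big"), hashlib.sha256).hexdigest()[:16]

def accept(secret: bytes, epoch: int, candidate: str) -> bool:
    """Accept the current or the immediately previous epoch's PID, so that
    ongoing communications survive a PID change (a grace-period sketch)."""
    return candidate in (pid(secret, epoch), pid(secret, epoch - 1))
```

An attacker who cannot learn the shared secret cannot predict the next epoch's PID, which is the property that frustrates PID-targeted DDoS flooding.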
In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsically complex structure, and the lack of ground truth for the underlying neural activities, modeling tfMRI data is hard and challenging. Previously proposed data modeling methods, including independent component analysis (ICA) and sparse dictionary learning, provide only shallow models based on blind source separation, under the strong assumption that the original fMRI signals can be linearly decomposed into time series components with corresponding spatial maps. Given the successes of convolutional neural networks (CNNs) in learning hierarchical abstractions from low-level data such as tfMRI time series, in this work we propose a novel, scalable, distributed deep CNN autoencoder model and apply it to fMRI big data analysis. This model aims both to learn the complex hierarchical structure of tfMRI big data and to leverage the processing power of multiple GPUs in a distributed fashion. To deploy such a model, we have created an enhanced processing pipeline on top of Apache Spark and TensorFlow, leveraging a large cluster of GPU nodes in the cloud. Experimental results from applying the model to Human Connectome Project (HCP) data show that the proposed model is efficient and scalable for tfMRI big data modeling and analytics, thus enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI data.
Cloud providers provision their various resources, such as CPUs, memory, and storage, in the form of virtual machine (VM) instances, which are then allocated to users. The users are charged based on a pay-as-you-go model, and their payments should be determined by considering both their incentives and those of the cloud providers. Auction markets can capture such incentives, allowing users to name their own prices for their requested VMs. We design an auction-based online mechanism for VM provisioning, allocation, and pricing in clouds that considers several types of resources. Our proposed online mechanism makes no assumptions about the future demand for VMs, which is the case in real cloud settings. The mechanism is invoked as soon as a user places a request or some of the allocated resources are released and become available. It allocates VM instances to selected users for the period they are requested for, and ensures that the users will continue using their VM instances for the entire requested period. In addition, the mechanism determines the payment the users have to pay for using the allocated resources. We prove that the mechanism is incentive-compatible, that is, it gives users incentives to report their true valuations for the requested VMs.
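The flavor of density-ordered allocation with threshold payments can be sketched for a single resource: a simplified uniform-price variant, not the paper's actual multi-resource online mechanism.

```python
def uniform_price_auction(bids, capacity):
    """Single-resource sketch: sort bids (price, demand) by price density,
    allocate greedily, and charge every winner a uniform per-unit price equal
    to the density of the first rejected bid (0 if nothing is rejected)."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][0] / bids[i][1], reverse=True)
    winners, used, clearing = [], 0.0, 0.0
    for i in order:
        price, demand = bids[i]
        if used + demand <= capacity:
            winners.append(i)
            used += demand
        else:                      # first bid that no longer fits sets the price
            clearing = price / demand
            break
    return winners, {i: bids[i][1] * clearing for i in winners}

bids = [(10.0, 1.0), (9.0, 3.0), (4.0, 2.0)]   # (willingness to pay, VM demand)
print(uniform_price_auction(bids, capacity=4.0))
```

Charging winners a threshold price set by losing bids, rather than their own bids, is the standard lever for incentive compatibility: a winner cannot lower its payment by shading its bid.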