  • 1.
    Abdelaziz, Ahmed
    et al.
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Ang, Tanfong
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Sookhak, Mehdi
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Khan, Suleman
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Liew, Cheesun
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Akhunzada, Adnan
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Survey on network virtualization using OpenFlow: Taxonomy, opportunities, and open issues. 2016. In: KSII Transactions on Internet and Information Systems, ISSN 1976-7277, Vol. 10, no. 10, pp. 4902-4932. Article in journal (Refereed)
    Abstract [en]

    The popularity of network virtualization has recently regained considerable momentum because of the emergence of OpenFlow technology. OpenFlow essentially decouples the data plane from the control plane and promotes hardware programmability; it thus facilitates the implementation of network virtualization. This study aims to provide an overview of different approaches to creating a virtual network using OpenFlow technology. The paper also presents the OpenFlow components to compare conventional network architecture with OpenFlow network architecture, particularly in terms of virtualization. A thematic OpenFlow network virtualization taxonomy is devised to categorize network virtualization approaches. Several testbeds that support OpenFlow network virtualization are discussed, with case studies to show the capabilities of OpenFlow virtualization. Moreover, the advantages of popular OpenFlow controllers that are designed to enhance network virtualization are compared and analyzed. Finally, we present key research challenges that mainly focus on security, scalability, reliability, isolation, and monitoring in the OpenFlow virtual environment. Numerous potential directions to tackle the problems related to OpenFlow network virtualization are likewise discussed.
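
    As a rough illustration of the flow-space slicing idea that underlies OpenFlow-based network virtualization (FlowVisor-style hypervisors), the sketch below dispatches packets to per-tenant controllers according to which slice of the flow space they fall in. The FlowSlice structure, the match field, and the controller addresses are hypothetical, invented for illustration; they are not components defined in the survey.

    ```python
    # Minimal sketch of FlowVisor-style flow-space slicing (hypothetical
    # structure, not the survey's taxonomy): each tenant controller owns a
    # slice of the flow space, and packets are dispatched to the controller
    # whose slice matches the packet's header fields.
    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network

    @dataclass
    class FlowSlice:
        name: str          # tenant identifier (hypothetical)
        ip_prefix: str     # slice of the flow space owned by this tenant
        controller: str    # address of the tenant's OpenFlow controller

    SLICES = [
        FlowSlice("tenant-a", "10.0.0.0/24", "tcp:10.1.0.1:6653"),
        FlowSlice("tenant-b", "10.0.1.0/24", "tcp:10.1.0.2:6653"),
    ]

    def dispatch(packet_dst_ip: str) -> str:
        """Return the controller responsible for this packet, or drop it."""
        for s in SLICES:
            if ip_address(packet_dst_ip) in ip_network(s.ip_prefix):
                return s.controller
        return "drop"

    print(dispatch("10.0.1.7"))   # -> tcp:10.1.0.2:6653
    ```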

  • 2.
    Agreste, Santa
    et al.
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    De Meo, Pasquale
    Department of Ancient and Modern Civilizations, University of Messina.
    Fiumara, Giacomo
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    Piccione, Giuseppe
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    Piccolo, Sebastiano
    Department of Management Engineering - Engineering Systems Division at the Technical University of Denmark.
    Rosaci, Domenico
    DIIES Department, University of Reggio Calabria Via Graziella.
    Sarné, Giuseppe M. L.
    DICEAM Department, University of Reggio Calabria Via Graziella.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An empirical comparison of algorithms to find communities in directed graphs and their application in Web Data Analytics. 2017. In: IEEE Transactions on Big Data, E-ISSN 2332-7790, Vol. 3, no. 3, pp. 289-306. Article in journal (Refereed)
    Abstract [en]

    Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges), but many graphs on the Web - e.g., microblogging Web sites, trust networks or the Web graph itself - are directed. Few community detection algorithms deal with directed graphs, and a thorough experimental comparison of them has been lacking. In this paper we evaluate several community detection algorithms in terms of accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) is explicitly designed to manage directed graphs, while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs; on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by the other algorithms, while Label Propagation showed outstanding scalability on both artificial and real graphs. The Infomap algorithm showed the best trade-off between accuracy and computational performance and should therefore be considered a promising tool for Web Data Analytics purposes.
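
    As a minimal, hedged illustration of the third strategy described above (symmetrize the directed graph, then detect communities on the undirected version), the sketch below uses networkx with a made-up toy edge list and a generic modularity-based detector as a stand-in; it is not a reproduction of the paper's experimental setup.

    ```python
    # Toy illustration of the "symmetrize, then detect" strategy (made-up
    # edges, generic detector; not the paper's benchmark setup).
    import networkx as nx

    D = nx.DiGraph()
    D.add_edges_from([
        ("a", "b"), ("b", "c"), ("c", "a"),   # one densely linked group
        ("x", "y"), ("y", "z"), ("z", "x"),   # another group
        ("c", "x"),                           # a single cross-group link
    ])

    U = D.to_undirected()                     # drop edge directionality
    communities = nx.algorithms.community.greedy_modularity_communities(U)
    print([sorted(c) for c in communities])   # e.g. [['a','b','c'], ['x','y','z']]
    ```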

  • 3.
    Ahmad, Iftikhar
    et al.
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Noor, Rafidah Md
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Ali, Ihsan
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Characterizing the role of vehicular cloud computing in road traffic management. 2017. In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Vol. 13, no. 5. Article in journal (Refereed)
    Abstract [en]

    Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Specifically, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include but are not limited to storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of the current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to have a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles.

  • 4.
    Ahmed, Ejaz
    et al.
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Yaqoob, Ibrar
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Targio Hashem, Ibrahim Abaker
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Khan, Imran
    Schneider Electric Industries, Grenoble.
    Ahmed, Abdelmuttlib Ibrahim Abdalla
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    The role of big data analytics in Internet of Things. 2017. In: Computer Networks, ISSN 1389-1286, E-ISSN 1872-7069. Article in journal (Refereed)
    Abstract [en]

    The explosive growth in the number of devices connected to the Internet of Things (IoT) and the exponential increase in data consumption only reflect how the growth of big data perfectly overlaps with that of IoT. The management of big data in a continuously expanding network gives rise to non-trivial concerns regarding data collection efficiency, data processing, analytics, and security. To address these concerns, researchers have examined the challenges associated with the successful deployment of IoT. Despite the large number of studies on big data, analytics, and IoT, the convergence of these areas creates several opportunities for big data and analytics to flourish in IoT systems. In this paper, we explore the recent advances in big data analytics for IoT systems as well as the key requirements for managing big data and for enabling analytics in an IoT environment. We taxonomize the literature based on important parameters. We identify the opportunities resulting from the convergence of big data, analytics, and IoT and discuss the role of big data analytics in IoT applications. Finally, several open challenges are presented as future research directions.

  • 5.
    Akbar, Mariam
    et al.
    COMSATS Institute of Information Technology, Islamabad.
    Javaid, Nadeem
    COMSATS Institute of Information Technology, Islamabad.
    Kahn, Ayesha Hussain
    COMSATS Institute of Information Technology, Islamabad.
    Imran, Muhammad Al
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Shoaib, Muhammad
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility. 2016. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 16, no. 3, article no. 404. Article in journal (Refereed)
    Abstract [en]

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), and also courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; later on, the CNs forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability.

  • 6.
    Alam, Quratulain
    et al.
    Department of Computer Sciences, Institute of Management Sciences, Peshawar.
    Tabbasum, Saher
    Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad.
    Malik, Saif U.R.
    Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad.
    Malik, Masoom
    Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad.
    Ali, Tamleek
    Department of Computer Sciences, Institute of Management Sciences, Peshawar.
    Akhunzada, Adnan
    Center for Mobile Cloud Computing Research (C4MCCR), University of Malaya, 50603 Kuala Lumpur.
    Khan, Samee U.
    Department of electrical and computer engineering, North Dakota State University, Fargo, ND.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Buyya, Rajkumar
    Cloud Computing and Distributed Systems, (CLOUDS) Laboratory, Department of Computing and Information Systems, The University of Melbourne.
    Formal Verification of the xDAuth Protocol. 2016. In: IEEE Transactions on Information Forensics and Security, ISSN 1556-6013, E-ISSN 1556-6021, Vol. 11, no. 9, pp. 1956-1969. Article in journal (Refereed)
    Abstract [en]

    Service Oriented Architecture (SOA) offers a flexible paradigm for information flow among collaborating organizations. As information moves out of an organization's boundary, various security concerns may arise, such as confidentiality, integrity, and authenticity, that need to be addressed. Moreover, verifying the correctness of the communication protocol is also an important factor. This paper focuses on the formal verification of the xDAuth protocol, which is one of the prominent protocols for identity management in cross-domain scenarios. We have modeled the information flow of the xDAuth protocol using High Level Petri Nets (HLPN) to understand the protocol's information flow in a distributed environment. We analyze the rules of information flow using the Z language, while the Z3 SMT solver is used for verification of the model. Our formal analysis and verification results reveal that the protocol fulfills its intended purpose and provides security for the defined protocol-specific properties (e.g., secure secret key authentication and the Chinese wall security policy) and secrecy-specific properties (e.g., confidentiality, integrity, and authenticity).
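
    The verification step relies on the Z3 SMT solver; as a toy, hedged illustration of that style of checking (not an actual xDAuth rule or the paper's HLPN model), the snippet below encodes a made-up Chinese-wall-like access rule and asks Z3 whether any assignment can violate it.

    ```python
    # Toy Z3 check in the spirit of the paper's verification step.  The rule
    # and variable names are hypothetical, not properties of the xDAuth model.
    from z3 import Bools, Solver, Implies, And, Not, sat

    accessed_A, accessed_B = Bools("accessed_A accessed_B")

    policy = Implies(accessed_A, Not(accessed_B))   # Chinese-wall-style rule
    violation = And(accessed_A, accessed_B)         # what a breach would look like

    s = Solver()
    s.add(policy, violation)
    # unsat: no assignment satisfies both the enforced policy and the violation.
    print("policy can be violated" if s.check() == sat else "policy holds")
    ```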

  • 7.
    Al-Dulaimi, Anwer
    et al.
    ECE, University of Toronto.
    Anpalagan, Alagan
    WINCORE Lab, Ryerson University, Toronto.
    Bennis, Mehdi
    University of Oulu, Centre for Wireless Communications, University of Oulu.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    5G Green Communications: C-RAN Provisioning of CoMP and Femtocells for Power Management. 2016. In: 2015 IEEE International Conference on Ubiquitous Wireless Broadband (ICUWB), Montreal, Canada, 4-7 October 2015. Piscataway, NJ: IEEE Communications Society, 2016, article no. 7324392. Conference paper (Refereed)
    Abstract [en]

    The fifth generation (5G) wireless network is expected to have dense deployments of cells in order to provide efficient Internet and cellular connections. The cloud radio access network (C-RAN) emerges as one of the 5G solutions to steer the network architecture and control resources beyond the legacy radio access technologies. The C-RAN decouples the traffic management operations from the radio access technologies, leading to a new combination of virtualized network core and fronthaul architecture. In this paper, we first investigate the power consumption impact of aggressive deployments of low-power neighborhood femtocell networks (NFNs) under the umbrella of a coordinated multipoint (CoMP) macrocell. We show that the power savings obtained from employing low-power NFNs start to decline as the density of deployed femtocells exceeds a certain threshold. The analysis considers two CoMP sites, at the cell-edge and intra-cell areas. Second, to restore power efficiency and network stabilization, a C-RAN model is proposed to restructure the NFN into clusters to ease the energy burden in the evolving 5G systems. Tailoring this to the traffic load, selected clusters will be switched off to save power when they operate with low traffic loads.

  • 8.
    Al-Turjman, Fadi M.
    et al.
    Department of Computer Engineering, Middle East Technical University, Northern Cyprus Campus.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Value-Based Caching in Information-Centric Wireless Body Area Networks. 2017. In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no. 1, article no. 181. Article in journal (Refereed)
    Abstract [en]

    We propose a resilient cache replacement approach based on a Value of sensed Information (VoI) policy. To resolve and fetch content when the origin is not available due to isolated in-network nodes (fragmentation) and harsh operational conditions, we exploit a content caching approach. Our approach depends on four functional parameters in sensory Wireless Body Area Networks (WBANs). These four parameters are: the age of data based on periodic requests, the popularity of on-demand requests, the communication interference cost, and the duration for which the sensor node is required to operate in active mode to capture the sensed readings. These parameters are considered together to assign a value to the cached data so as to retain the most valuable information in the cache for prolonged time periods. The higher the value, the longer the duration for which the data will be retained in the cache. This caching strategy provides significant availability for the most valuable and difficult-to-retrieve data in WBANs. Extensive simulations are performed to compare the proposed scheme against other significant caching schemes in the literature while varying critical aspects in WBANs (e.g., data popularity, cache size, publisher load, connectivity degree, and severe probabilities of node failures). These simulation results indicate that the proposed VoI-based approach is a valid tool for the retrieval of cached content in disruptive and challenging scenarios, such as those experienced in WBANs, since it allows the retrieval of content for a long period even while experiencing severe in-network node failures.
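
    A minimal sketch of a value-based replacement policy in the spirit of the four parameters listed above: each cached item receives a score and the lowest-valued entry is evicted first. The weights and the linear scoring form are assumptions made for illustration, not the paper's VoI function.

    ```python
    # Sketch of a value-of-information cache replacement policy (weights and
    # the linear scoring form are illustrative assumptions, not the paper's
    # actual VoI formula).
    from dataclasses import dataclass

    @dataclass
    class CachedItem:
        name: str
        age: float            # seconds since the reading was produced
        popularity: float     # on-demand request rate
        interference: float   # communication interference cost to re-fetch
        active_time: float    # time the sensor must stay active to re-sense

    def value(item: CachedItem, w=(1.0, 2.0, 1.5, 1.0)) -> float:
        # Fresher, more popular, and harder-to-refetch data is worth keeping.
        freshness = 1.0 / (1.0 + item.age)
        return (w[0] * freshness + w[1] * item.popularity
                + w[2] * item.interference + w[3] * item.active_time)

    def evict(cache):
        """Remove and return the least valuable item."""
        victim = min(cache, key=value)
        cache.remove(victim)
        return victim

    cache = [CachedItem("ecg", 5, 0.9, 0.7, 0.4),
             CachedItem("temp", 120, 0.1, 0.2, 0.1)]
    print(evict(cache).name)   # -> temp
    ```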

  • 9.
    Amadeo, Marcia
    et al.
    Universita Mediterranea di Reggio Calabria.
    Campolo, Claudia
    Universita Mediterranea di Reggio Calabria. Telecommunications.
    Quevedo, Jose
    Universidade de Aveiro, Inst Telecomunicacoes.
    Corujo, Daniel
    Universidade de Aveiro, Inst Telecomunicacoes.
    Molinaro, Antonella
    Universita Mediterranea di Reggio Calabria. Telecommunications.
    Iera, Antonio
    Universita Mediterranea di Reggio Calabria.
    Aguiar, Rui L.
    Universidade de Aveiro, Inst Telecomunicacoes.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Information-Centric Networking for the Internet of Things: Challenges and Opportunities. 2016. In: IEEE Network, ISSN 0890-8044, E-ISSN 1558-156X, Vol. 30, no. 2, pp. 92-100, article no. 7437030. Article in journal (Refereed)
    Abstract [en]

    In view of evolving the Internet infrastructure, ICN is promoting a communication model that is fundamentally different from the traditional IP address-centric model. The ICN approach consists of the retrieval of content by (unique) names, regardless of origin server location (i.e., IP address), application, and distribution channel, thus enabling in-network caching/replication and content-based security. The expected benefits in terms of improved data dissemination efficiency and robustness in challenging communication scenarios indicate the high potential of ICN as an innovative networking paradigm in the IoT domain. IoT is a challenging environment, mainly due to the high number of heterogeneous and potentially constrained networked devices, and unique and heavy traffic patterns. The application of ICN principles in such a context opens new opportunities, while requiring careful design choices. This article critically discusses potential ways toward this goal by surveying the current literature after presenting several possible motivations for the introduction of ICN in the context of IoT. Major challenges and opportunities are also highlighted, serving as guidelines for progress beyond the state of the art in this timely and increasingly relevant topic.

  • 10.
    Baqer Mollah, Muhammad
    et al.
    Jahangirnagar University.
    Azad, Md. Abul Kalam
    Jahangirnagar University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Secure Data Sharing and Searching at the Edge of Cloud-Assisted Internet of Things. 2017. In: IEEE Cloud Computing, E-ISSN 2325-6095, Vol. 4, no. 1, pp. 34-42, article no. 7879139. Article in journal (Refereed)
    Abstract [en]

    Over the last few years, smart devices have become able to communicate with each other and with the Internet/cloud over short to long ranges. As a consequence, a new paradigm called the Internet of Things (IoT) has been introduced. By utilizing cloud computing, resource-limited IoT smart devices can obtain various benefits, such as offloading data storage and processing burdens to the cloud. However, to support latency-sensitive, real-time data processing, mobility, and high-data-rate IoT applications, working at the edge of the network offers more benefits than the cloud. In this paper, we propose an efficient data sharing scheme that allows smart devices to securely share data with others at the edge of cloud-assisted IoT. In addition, we propose a secure searching scheme to search for desired data within a device's own or shared data in storage. Finally, we analyze the performance of the proposed schemes based on their processing time. The results demonstrate that our scheme has the potential to be effectively used in IoT applications.

  • 11.
    Baqer Mollah, Muhammad
    et al.
    Department of Computer Science and Engineering, Jahangirnagar University, Dhaka.
    Kalam Azad, Md. Abul
    Department of Computer Science and Engineering, Jahangirnagar University, Dhaka.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Security and privacy challenges in mobile cloud computing: Survey and way ahead. 2017. In: Journal of Network and Computer Applications, ISSN 1084-8045, E-ISSN 1095-8592, Vol. 84, pp. 38-54. Article in journal (Refereed)
    Abstract [en]

    The rapid growth of mobile computing is seriously challenged by resource-constrained mobile devices. However, mobile computing can be enhanced by integrating it with cloud computing, and hence a new computing paradigm called mobile cloud computing emerges. Here, data is stored in cloud infrastructure and the actual execution is shifted to the cloud environment, so that a mobile user is freed from the resource constraints of existing mobile devices. Moreover, to avail of the cloud services, the communications between mobile devices and clouds are held over a wireless medium. Thus, some new classes of security and privacy challenges are introduced. The purpose of this survey is to present the main security and privacy challenges in this field, which have attracted much interest from academia and the research community. Although there are many challenges, corresponding security solutions have been proposed and identified in the literature by many researchers to counter them. We also briefly present these recent works. Furthermore, we compare these works based on different security and privacy requirements, and finally present open issues.

  • 12.
    Batalla, Jordi Mongay
    et al.
    National Institute of Telecommunications, Warsaw University of Technology.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Gajewski, Mariusz
    National Institute of Telecommunications, Poland.
    Secure Smart Homes: Opportunities and Challenges. 2017. In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 50, no. 5, pp. 75:1-75:32. Article in journal (Refereed)
    Abstract [en]

    The Smart Home concept integrates smart applications into daily human life. In recent years, Smart Homes have faced increased security and management challenges due to the low capacity of small sensors, multiple connections to the Internet for efficient applications (use of big data and cloud computing), and the heterogeneity of home systems, which require inexpert users to configure devices and micro-systems. This article presents current security and management approaches in Smart Homes and shows the good practices imposed on the market for developing secure systems in houses. Finally, we propose future solutions for efficiently and securely managing Smart Homes.

  • 13.
    Bera, Samaresh
    et al.
    Computer Science and Engineering Department, Indian Institute of Technology, Kharagpur, 721302, India.
    Misra, Sudip
    Computer Science and Engineering Department, Indian Institute of Technology, Kharagpur, 721302, India.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Software-Defined Networking for Internet of Things: A Survey. 2017. In: IEEE Internet of Things Journal, ISSN 2327-4662, Vol. 12, no. 12, pp. 2933-2944. Article in journal (Refereed)
    Abstract [en]

    Internet of things (IoT) facilitates billions of devices to be enabled with network connectivity to collect and exchange real-time information for providing intelligent services. Thus, IoT allows connected devices to be controlled and accessed remotely in the presence of adequate network infrastructure. Unfortunately, traditional network technologies, such as enterprise networks and classic timeout-based transport protocols, are not capable of handling such requirements of IoT in an efficient, scalable, seamless, and cost-effective manner. Besides, the advent of software-defined networking (SDN) introduces features that allow network operators and users to control and access network devices remotely, while leveraging the global view of the network. In this respect, we provide a comprehensive survey of different SDN-based technologies that are useful to fulfill the requirements of IoT, from different networking aspects: edge, access, core, and data center networking. In these areas, the utility of SDN-based technologies is discussed, and their challenges and requirements in the context of IoT applications are presented. We present a synthesized overview of the current state of IoT development. We also highlight some future research directions and open research issues based on the limitations of the existing SDN-based technologies.

  • 14.
    Cai, Hongming
    et al.
    School of Software, Shanghai Jiao Tong University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Web of Things Data Storage. 2017. In: Managing the Web of Things: Linking the Real World to the Web, Elsevier, 2017, pp. 325-354. Chapter in book (Refereed)
    Abstract [en]

    With the widespread adoption of Web of Things (WoT) technology, massive data are generated by huge numbers of distributed sensors and different applications. WoT-related applications have emerged as an important area for both engineers and researchers. As a consequence, how to acquire, integrate, store, process and use these data has become an urgent and important problem for enterprises aiming to achieve their business goals. Based on an analysis of data processing functionality, a framework is provided to identify the representation, management, and disposal areas of WoT data. Several associated functional modules are defined and described in terms of their key characteristics and capabilities. Then, current research on WoT applications is organized and compared to show the state-of-the-art achievements in the literature from a data-processing perspective. Next, some WoT storage techniques are discussed to enable WoT applications to move onto cloud platforms. Lastly, based on an analysis of application requirements, some future technical trends are also proposed.

  • 15.
    Cai, Hongming
    et al.
    School of Software, Shanghai Jiao Tong University.
    Xu, Boyi
    College of Economics and Management, Shanghai Jiao Tong University.
    Jiang, Lihong
    School of Software, Shanghai Jiao Tong University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    IoT-Based Big Data Storage Systems in Cloud Computing: Perspectives and Challenges. 2017. In: IEEE Internet of Things Journal, ISSN 2327-4662, Vol. 4, no. 1, pp. 75-87, article no. 7600359. Article in journal (Refereed)
    Abstract [en]

    Internet of Things (IoT)-related applications have emerged as an important field for both engineers and researchers, reflecting the magnitude and impact of data-related problems to be solved in contemporary business organizations, especially in cloud computing. This paper first provides a functional framework that identifies the acquisition, management, processing and mining areas of IoT big data, and several associated technical modules are defined and described in terms of their key characteristics and capabilities. Then, current research on IoT applications is analyzed, and the challenges and opportunities associated with IoT big data research are identified. We also report a study of critical IoT application publications and research topics based on related academic and industry publications. Finally, some open issues and some typical examples are given under the proposed IoT-related research framework.

  • 16.
    Cao, Liang
    et al.
    Nanjing University of Posts and Telecommunications.
    Wang, Yufeng
    Nanjing University of Posts and Telecommunications.
    Zhang, Bo
    Nanjing University of Posts and Telecommunications.
    Jin, Qun
    Waseda University, Japan.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    GCHAR: An efficient Group-based Context-aware human activity recognition on smartphone. 2017. In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848. Article in journal (Refereed)
    Abstract [en]

    With smartphones increasingly becoming ubiquitous and being equipped with various sensors, there is nowadays a trend towards implementing HAR (human activity recognition) algorithms and applications on smartphones, including health monitoring, self-managing systems, fitness tracking, etc. However, one of the main issues of existing HAR schemes is that the classification accuracy is relatively low, and improving the accuracy requires high computation overhead. In this paper, GCHAR, an efficient Group-based Context-aware classification method for human activity recognition on smartphones, is proposed, which exploits a hierarchical group-based scheme to improve classification efficiency and reduces classification errors through context awareness rather than intensive computation. Specifically, GCHAR designs a two-level hierarchical classification structure, i.e., inter-group and inner-group, and utilizes the previous state and transition logic (so-called context awareness) to detect transitions among activity groups. In comparison with other popular classifiers such as RandomTree, Bagging, J48, BayesNet, KNN and Decision Table, thorough experiments on a realistic dataset (the UCI HAR repository) demonstrate that GCHAR achieves the best classification accuracy, reaching 94.1636%, and that the training time of GCHAR is four times shorter than that of the simple Decision Table, while its classification time is 72.21% lower than that of BayesNet.
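
    The two-level, context-aware flow can be sketched as follows: a coarse inter-group classifier picks an activity group, a transition table built from the previous state vetoes implausible group jumps, and an inner-group classifier then assigns the fine-grained activity. The stand-in classifiers, thresholds, and transition table below are hypothetical placeholders, not GCHAR's trained models.

    ```python
    # Sketch of GCHAR's two-level, context-aware classification flow.
    # The stand-in classifiers and transition table are hypothetical.

    TRANSITIONS = {                      # groups reachable from the previous group
        "static":  {"static", "moving"},
        "moving":  {"moving", "static"},
        "cycling": {"cycling", "static"},   # e.g. no direct jump cycling -> moving
    }

    def inter_group(features):           # level 1: coarse group decision
        return "moving" if features["accel_var"] > 0.5 else "static"

    INNER = {                            # level 2: fine-grained activity per group
        "static":  lambda f: "sitting" if f["tilt"] < 0.3 else "standing",
        "moving":  lambda f: "walking" if f["step_rate"] < 2.5 else "running",
        "cycling": lambda f: "cycling",
    }

    def classify(features, prev_group):
        group = inter_group(features)
        if group not in TRANSITIONS[prev_group]:
            group = prev_group           # context awareness vetoes an unlikely jump
        return group, INNER[group](features)

    print(classify({"accel_var": 0.9, "tilt": 0.1, "step_rate": 3.0},
                   prev_group="static"))   # -> ('moving', 'running')
    ```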

  • 17.
    Challa, Srinavi
    et al.
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad .
    Kumar Das, Ashok
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad .
    Odelu, Vanga
    Department of Computer Science and Engineering, Indian Institute of Information Technology Chittoor.
    Kumar, Neeraj
    Department of Computer Science and Engineering, Thapar University, Patiala .
    Kumari, Sari
    Department of Mathematics, Ch. Charan Singh University, Meerut .
    Khan, Muhammad Khurram
    Center of Excellence in Information Assurance, King Saud University, Riyadh.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An efficient ECC-based provably secure three-factor user authentication and key agreement protocol for wireless healthcare sensor networks. 2017. In: Computers & Electrical Engineering, ISSN 0045-7906, E-ISSN 1879-0755. Article in journal (Refereed)
    Abstract [en]

    We first show the security limitations of a recent user authentication scheme proposed for wireless healthcare sensor networks. We then present a provably secure three-factor user authentication and key agreement protocol for wireless healthcare sensor networks. The proposed scheme supports functionality features such as dynamic sensor node addition, password as well as biometrics update, and smart card revocation, along with the other usual features required for user authentication in wireless sensor networks. Our scheme is shown to be secure through rigorous formal security analysis under the Real-Or-Random (ROR) model and the broadly accepted Burrows-Abadi-Needham (BAN) logic. Furthermore, simulation with the widely known Automated Validation of Internet Security Protocols and Applications (AVISPA) tool shows that our scheme is also secure. High security and low communication and computation costs make our scheme more suitable for practical deployment in healthcare applications compared to other related existing schemes.
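
    The scheme's cryptographic core is elliptic-curve key agreement. As a hedged illustration of that building block only (not the paper's three-factor protocol, its biometric step, or its smart-card handling), the snippet below derives a shared session key with ECDH using a recent version of the cryptography package.

    ```python
    # Illustrates only the ECC key-agreement building block (ECDH), not the
    # paper's full three-factor protocol.
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    user_priv = ec.generate_private_key(ec.SECP256R1())   # user side
    gw_priv = ec.generate_private_key(ec.SECP256R1())     # gateway/sensor side

    # Each side combines its private key with the peer's public key.
    user_secret = user_priv.exchange(ec.ECDH(), gw_priv.public_key())
    gw_secret = gw_priv.exchange(ec.ECDH(), user_priv.public_key())
    assert user_secret == gw_secret

    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"session").derive(user_secret)
    print(session_key.hex())
    ```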

  • 18.
    Chatterjee, Santanu
    et al.
    Research Center Imarat, Defence Research and Development Organization, Hyderabad.
    Roy, Sandip
    Department of Computer Science and Engineering, Asansol Engineering College, Asansol.
    Kumar Das, Ashok
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad.
    Chattopadhyay, Samiran
    Department of Information Technology, Jadavpur University, Salt Lake City.
    Kumar, Neeraj
    Department of Computer Science and Engineering, Thapar University, Patiala.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Secure Biometric-Based Authentication Scheme using Chebyshev Chaotic Map for Multi-Server Environment. 2016. In: IEEE Transactions on Dependable and Secure Computing, ISSN 1545-5971, E-ISSN 1941-0018. Article in journal (Refereed)
    Abstract [en]

    A multi-server environment is the most common scenario for a large number of enterprise-class applications. In this environment, user registration at each server is not recommended. Using a multi-server authentication architecture, a user can manage authentication to various servers using a single identity and password. We introduce a new authentication scheme for multi-server environments using the Chebyshev chaotic map. In our scheme, we use the Chebyshev chaotic map and biometric verification along with password verification for authorization and access to various application servers. The proposed scheme is lightweight compared to other related schemes. We only use the Chebyshev chaotic map, a cryptographic hash function and symmetric key encryption-decryption in the proposed scheme. Our scheme provides strong authentication, and also supports biometric and password change by a legitimate user at any time locally, as well as dynamic server addition. We perform formal security verification using the broadly accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that the presented scheme is secure. In addition, we carry out formal security analysis using the Burrows-Abadi-Needham (BAN) logic along with random oracle models and prove that our scheme is secure against different known attacks. High security and significantly low computation and communication costs make our scheme very suitable for multi-server environments as compared to other existing related schemes.
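
    The algebraic property that makes Chebyshev chaotic maps usable for key agreement is the semigroup identity T_r(T_s(x)) = T_{rs}(x) = T_s(T_r(x)). The snippet below only demonstrates this identity numerically; actual schemes, presumably including this one, use the extended Chebyshev map over a large prime field rather than floating point.

    ```python
    # Demonstrates the Chebyshev semigroup property that underpins chaotic-map
    # key agreement: T_r(T_s(x)) == T_s(T_r(x)) == T_{r*s}(x).
    from math import cos, acos

    def chebyshev(n: int, x: float) -> float:
        """T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
        return cos(n * acos(x))

    x, r, s = 0.53, 7, 11                   # public seed and two secret exponents
    alice_pub = chebyshev(r, x)             # Alice publishes T_r(x)
    bob_pub = chebyshev(s, x)               # Bob publishes T_s(x)

    k_alice = chebyshev(r, bob_pub)         # T_r(T_s(x))
    k_bob = chebyshev(s, alice_pub)         # T_s(T_r(x))
    print(abs(k_alice - k_bob) < 1e-9,
          abs(k_alice - chebyshev(r * s, x)) < 1e-9)   # -> True True
    ```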

  • 19.
    Chen, Feng
    et al.
    Parallel Computing Laboratory, Institute of Software Chinese Academy of Sciences, Beijing.
    Deng, Pan
    Parallel Computing Laboratory, Institute of Software Chinese Academy of Sciences, Beijing.
    Wan, Jiafu
    School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou.
    Zhang, Daqiang
    School of Software Engineering, Tongji University, Shanghai.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rong, Xiaohui
    Chinese Academy of Civil Aviation Science and Technology, Beijing.
    Data mining for the internet of things: Literature review and challenges. 2015. In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Vol. 2015, article no. 431047. Article in journal (Refereed)
    Abstract [en]

    The massive data generated by the Internet of Things (IoT) are considered of high business value, and data mining algorithms can be applied to the IoT to extract hidden information from these data. In this paper, we give a systematic review of data mining from the knowledge view, technique view, and application view, including classification, clustering, association analysis, time series analysis and outlier analysis. The latest application cases are also surveyed. As more and more devices are connected to the IoT, large volumes of data must be analyzed, and the latest algorithms must be adapted to big data. We review these algorithms and discuss challenges and open research issues. Finally, a suggested big data mining system is proposed.

  • 20.
    Chen, Lin
    et al.
    Laboratoire de Recherche en Informatique (LRI, CNRS UMR 8623), Université Paris-Sud.
    Li, Yong
    Department of Electronic Engineering, Tsinghua University, Beijing.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Oblivious Neighbor Discovery for Wireless Devices with Directional Antennas. 2016. In: IEEE INFOCOM 2016: The 35th Annual IEEE International Conference on Computer Communications, San Francisco, 10-14 April 2016. Piscataway, NJ: IEEE Communications Society, 2016, article no. 7524570. Conference paper (Refereed)
    Abstract [en]

    Neighbor discovery, the process of discovering all neighbors in a device's communication range, is one of the bootstrapping networking primitives of paramount importance and is particularly challenging when devices have directional antennas instead of omni-directional ones. In this paper, we study the following fundamental problem, which we term oblivious neighbor discovery: How can neighbor nodes with heterogeneous antenna configurations and without clock synchronization discover each other within a bounded delay in a fully decentralised manner without any prior coordination? We first establish a theoretical framework on oblivious neighbor discovery and establish the performance bound of any neighbor discovery protocol achieving oblivious discovery. Guided by the theoretical results, we then design an oblivious neighbor discovery protocol and prove that it achieves guaranteed oblivious discovery with order-minimal worst-case discovery delay in the asynchronous and heterogeneous environment. We further demonstrate how our protocol can be configured to achieve a desired trade-off between average and worst-case performance.

  • 21.
    Chen, Lin
    et al.
    Laboratoire de Recherche en Informatique (LRI, CNRS UMR 8623), Université Paris-Sud.
    Li, Yong
    Department of Electronic Engineering, Tsinghua University, Beijing.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On Oblivious Neighbor Discovery in Distributed Wireless Networks With Directional Antennas: Theoretical Foundation and Algorithm Design. 2017. In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 25, no. 4, pp. 1982-1993. Article in journal (Refereed)
    Abstract [en]

    Neighbor discovery, one of the most fundamental bootstrapping networking primitives, is particularly challenging in decentralized wireless networks where devices have directional antennas. In this paper, we study the following fundamental problem, which we term oblivious neighbor discovery: How can neighbor nodes with heterogeneous antenna configurations discover each other within a bounded delay in a fully decentralised manner without any prior coordination or synchronisation? We establish a theoretical framework on the oblivious neighbor discovery and the performance bound of any neighbor discovery algorithm achieving oblivious discovery. Guided by the theoretical results, we then devise an oblivious neighbor discovery algorithm, which achieves guaranteed oblivious discovery with order-minimal worst case discovery delay in the asynchronous and heterogeneous environment. We further demonstrate how our algorithm can be configured to achieve a desired tradeoff between average and worst case performance.

  • 22.
    Chen, Yifan
    et al.
    Southern University of Science and Technology.
    Nakano, Tadashi
    Osaka University.
    Kosmas, Panagiotis
    King’s College London.
    Yuen, Chau
    Singapore University of Technology and Design.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Asvial, Muhamad
    University of Indonesia.
    Green Touchable Nanorobotic Sensor Networks. 2016. In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 54, no. 11, pp. 136-142. Article in journal (Refereed)
    Abstract [en]

    Recent advancements in biological nanomachines have motivated research on nanorobotic sensor networks (NSNs), where the nanorobots are green (i.e., biocompatible and biodegradable) and touchable (i.e., externally controllable and continuously trackable). In the former aspect, NSNs will dissolve in an aqueous environment after finishing designated tasks and are harmless to the environment. In the latter aspect, NSNs employ cross-scale interfaces to interconnect the in vivo environment and its external environment. Specifically, the in-messaging and out-messaging interfaces for nanorobots to interact with a macro-unit are defined. The propagation and transient characteristics of nanorobots are described based on the existing experimental results. Furthermore, planning of nanorobot paths is discussed by taking into account the effectiveness of region-of-interest detection and the period of surveillance. Finally, a case study on how NSNs may be applied to microwave breast cancer detection is presented.

  • 23.
    Cheng, Jie
    et al.
    Shannon Cognitive Computing Laboratory, Huawei Technologies Company.
    Liu, Yaning
    Harbin Institute of Technology Shenzhen Graduate School.
    Ye, Qiang
    University of Prince Edward Island, Charlottetown.
    Du, Hongwei
    Harbin Institute of Technology Shenzhen Graduate School.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    DISCS: a distributed coordinate system based on robust nonnegative matrix completion. 2017. In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 25, no. 2, pp. 934-947. Article in journal (Refereed)
    Abstract [en]

    Many distributed applications, such as BitTorrent, need to know the distance between each pair of network hosts in order to optimize their performance. For small-scale systems, explicit measurements can be carried out to collect the distance information. For large-scale applications, this approach does not work due to the tremendous number of measurements that would have to be completed. To tackle this scalability problem, network coordinate systems (NCS) were proposed, which use partial measurements to predict the unknown distances. However, the existing NCS schemes suffer seriously from either low prediction precision or unsatisfactory convergence speed. In this paper, we present a novel distributed network coordinate system (DISCS) that utilizes a limited set of distance measurements to achieve high-precision distance prediction at a fast convergence speed. Technically, DISCS employs an innovative robust nonnegative matrix completion method to improve the prediction accuracy. Through extensive experiments based on various publicly available data sets, we found that DISCS outperforms the state-of-the-art NCS schemes in terms of prediction precision and convergence speed, which clearly shows the high usability of DISCS in real-life Internet applications.
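
    As a simplified, centralized stand-in for the matrix completion idea (not DISCS's robust nonnegative matrix completion algorithm or its distributed update rule), the sketch below completes a partially observed nonnegative distance matrix with masked multiplicative NMF updates; the matrix size, rank, and iteration count are arbitrary.

    ```python
    # Simplified stand-in for completing a host-distance matrix from partial
    # measurements (not the paper's robust or distributed algorithm).
    import numpy as np

    rng = np.random.default_rng(0)
    n, rank = 30, 3
    truth = rng.random((n, rank)) @ rng.random((rank, n))   # synthetic "distances"
    mask = rng.random((n, n)) < 0.3                          # ~30% measured pairs
    X = truth * mask

    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9
    for _ in range(500):
        WH = (W @ H) * mask                                  # fit observed entries only
        W *= (X @ H.T) / (WH @ H.T + eps)
        WH = (W @ H) * mask
        H *= (W.T @ X) / (W.T @ WH + eps)

    pred = W @ H
    err = np.abs(pred - truth)[~mask].mean()                 # error on unmeasured pairs
    print(f"mean absolute error on unobserved entries: {err:.3f}")
    ```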

  • 24.
    Chude-Okonkwo, Uche A.K.
    et al.
    Department of Electrical, Electronic & Computer Engineering, University of Pretoria.
    Malekian, Reza
    Department of Electrical, Electronic & Computer Engineering, University of Pretoria.
    Maharaj, B.T.
    Department of Electrical, Electronic & Computer Engineering, University of Pretoria.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Molecular Communication and Nanonetwork for Targeted Drug Delivery: a survey. 2017. In: IEEE Communications Surveys and Tutorials, ISSN 1553-877X, E-ISSN 1553-877X. Article in journal (Refereed)
    Abstract [en]

    Molecular communication (MC) and molecular network (MN) are communication paradigms that use biochemical signalling to achieve information exchange among naturally and artificially synthesized nanosystems. Among the envisaged application areas of MC and MN is the field of nanomedicine, where the subject of targeted drug delivery (TDD) is at the forefront. Typically, when someone gets sick, therapeutic drugs are administered to the person for healing purposes. Since no therapeutic drug can be effective until it is delivered to the target site in the body, different modalities to improve the delivery of drugs to the targeted sites are being explored in contemporary research. The most promising of these modalities is TDD. The TDD modality promises smart localization of an appropriate dose of therapeutic drugs to the targeted part of the body at reduced system toxicity. Research in TDD has been going on for many years in the field of medical science; however, the translation of expectations and promises to clinical reality has not been satisfactorily achieved because of several challenges. The exploration of TDD ideas under the MC and MN paradigms is considered an option for addressing these challenges and for facilitating the translation of TDD from the bench to the patients' bedsides. Over the past decade, some research efforts have been made to explore the ideas of TDD on the MC and MN platforms. While the number of research outputs in terms of scientific articles is small at the moment, the desire in the scientific community to participate in realizing the goal of TDD is quite high, as is evident from the rise in research output over the last few years. To increase awareness and provide the multidisciplinary research community with the necessary background information on TDD, this paper presents a visionary survey of this subject within the domain of MC and MN. We start by introducing, in an elaborate manner, the motivation behind the application of MC and MN paradigms to the study and implementation of TDD. Specifically, an explanation of how MC-based TDD concepts differ from the traditional TDD explored in the field of medical science is provided. We also summarize the taxonomy of the different perspectives through which MC-based TDD research can be viewed. System models and design challenges/requirements for developing MC-based TDD are discussed. Various metrics that can be used to evaluate the performance of MC-based TDD systems are highlighted. We also provide a discussion on the envisaged path from contemporary research activities to clinical implementation of MC-based TDD. Finally, we discuss issues such as informatics and software tools, as well as issues that border on the requirement for standards and regulatory policies in MC-based TDD research and practice.

  • 25.
    Deng, Ruilong
    et al.
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton.
    Xiao, Gaoxi
    School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
    Lu, Rongxing
    Faculty of Computer Science, University of New Brunswick.
    Liang, Hao
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    False data injection on state estimation in power systems - attacks, impacts, and defense: a survey. 2017. In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 13, no. 2, pp. 411-423, article no. 7579185. Article in journal (Refereed)
    Abstract [en]

    The accurately estimated state is of great importance for maintaining a stable running condition of power systems. To maintain the accuracy of the estimated state, bad data detection (BDD) is utilized by power systems to get rid of erroneous measurements due to meter failures or outside attacks. However, false data injection (FDI) attacks, as recently revealed, can circumvent BDD and insert any bias into the value of the estimated state. Considerable work on constructing such attacks and/or protecting power systems from them has been done in recent years. This survey comprehensively overviews three major aspects: constructing FDI attacks; the impacts of FDI attacks on the electricity market; and defending against FDI attacks. Specifically, we first explore the problem of constructing FDI attacks, and further show their associated impacts on electricity market operations from the adversary's point of view. Then, from the perspective of the system operator, we present countermeasures against FDI attacks. We also outline future research directions and potential challenges based on the above overview, in the context of FDI attacks, impacts, and defense.
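
    The classic observation behind stealthy FDI attacks can be reproduced in a few lines: under the DC measurement model z = Hx + e with least-squares estimation, an attack vector a = Hc shifts the estimated state by c while leaving the bad-data-detection residual unchanged. The measurement matrix below is a made-up toy system, not a real grid model.

    ```python
    # Numerical illustration of the classic FDI result: an attack a = H c
    # changes the state estimate but not the bad-data-detection residual.
    import numpy as np

    rng = np.random.default_rng(1)
    H = rng.standard_normal((8, 3))            # 8 measurements, 3 state variables (toy)
    x_true = np.array([1.0, -0.5, 0.25])
    z = H @ x_true + 0.01 * rng.standard_normal(8)

    def estimate(z):
        x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
        residual = np.linalg.norm(z - H @ x_hat)   # the quantity BDD thresholds
        return x_hat, residual

    c = np.array([0.3, 0.0, -0.2])             # bias the attacker wants to inject
    z_attacked = z + H @ c                      # stealthy FDI attack a = H c

    x0, r0 = estimate(z)
    x1, r1 = estimate(z_attacked)
    print(np.round(x1 - x0, 3))                 # ~ c: the estimate is shifted
    print(np.isclose(r0, r1))                   # True: the residual is unchanged
    ```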

  • 26.
    Ding, Guoru
    et al.
    National Mobile Communications Research Laboratory, Southeast University, Nanjing, China.
    Wu, Fan
    Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China.
    Wu, Qihui
    Department of Electronics and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
    Tang, Shaojie
    Department of Information Systems, University of Texas at Dallas, Richardson, TX, USA.
    Song, Fei
    College of Communications Engineering, PLA University of Science and Technology, Nanjing, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Tsiftsis, Theodoros A.
    School of Engineering, Nazarbayev University, Astana, Kazakhstan.
    Robust Online Spectrum Prediction With Incomplete and Corrupted Historical Observations. 2017. In: IEEE Transactions on Vehicular Technology, ISSN 0018-9545, E-ISSN 1939-9359, Vol. 66, no. 9, pp. 8022-8036. Article in journal (Refereed)
    Abstract [en]

    A range of emerging applications, from adaptive spectrum sensing to proactive spectrum mobility, depend on the ability to foresee spectrum state evolution. Despite a number of studies on spectrum prediction, fundamental issues remain unresolved: 1) the existing studies do not explicitly account for anomalies, which may incur serious performance degradation; 2) they focus on the design of batch spectrum prediction algorithms, which limits the scalability to analyze massive spectrum data in real time; 3) they assume the historical data are complete, which may not hold in reality. To address these issues, we develop a Robust Online Spectrum Prediction (ROSP) framework with incomplete and corrupted observations in this paper. We first present data analytics of real-world spectrum measurements to reveal the correlation structures of spectrum evolution and to analyze the impact of anomalies on the rank distribution of spectrum matrices. Then, from a spectral-temporal 2-D perspective, we formulate ROSP as a joint optimization problem of matrix completion and recovery by effectively integrating time series forecasting techniques, and we develop an alternating direction optimization method to efficiently solve it. We apply ROSP to a wide range of real-world spectrum matrices of popular wireless services. Experiment results show that ROSP outperforms state-of-the-art spectrum prediction schemes.
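
    Abstractly, this kind of joint completion-and-recovery is often posed as a robust low-rank decomposition of the partially observed spectrum matrix; the convex surrogate below is given only for orientation and is not ROSP's exact online objective.

    ```latex
    % Generic robust matrix completion surrogate (not the exact ROSP objective):
    % L captures the regular low-rank spectrum evolution, S captures sparse
    % anomalies, and P_Omega keeps only the observed entries of the matrix X.
    \min_{L,\,S}\; \|L\|_{*} + \lambda \|S\|_{1}
    \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(L + S) = \mathcal{P}_{\Omega}(X)
    ```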

  • 27.
    Ding, Ming
    et al.
    Data 61, Australia.
    Lopez-Perez, David
    Bell Labs Alcatel-Lucent.
    Xue, Ruiqi
    Shanghai Jiao Tong University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Wen
    Shanghai Key Lab of Navigation and Location Based Services, Shanghai Jiao Tong University, and School of Electronic Engineering and Automation, Guilin University of Electronic Technology.
    On Dynamic Time Division Duplex Transmissions for Small Cell Networks. 2016. In: IEEE Transactions on Vehicular Technology, ISSN 0018-9545, E-ISSN 1939-9359, Vol. 65, no. 11, pp. 8933-8951. Article in journal (Refereed)
    Abstract [en]

    Motivated by the promising benefits of dynamic Time Division Duplex (TDD), in this paper we use a unified framework to investigate both the technical issues of applying dynamic TDD in homogeneous small cell networks (HomSCNs) and the feasibility of introducing dynamic TDD into heterogeneous networks (HetNets). First, HomSCNs are analyzed, and a small cell BS scheduler that dynamically and independently schedules DL and UL subframes is presented, such that load balancing between the DL and the UL traffic can be achieved. Moreover, the effectiveness of various inter-link interference mitigation (ILIM) schemes, as well as their combinations, is systematically investigated and compared. Besides, the interesting possibility of partial interference cancellation (IC) is also explored. Second, based on the proposed schemes, the joint operation of dynamic TDD together with cell range expansion (CRE) and almost blank subframes (ABS) in HetNets is studied. In this regard, scheduling policies in small cells and an algorithm to derive the appropriate macrocell traffic off-load and ABS duty cycle under dynamic TDD operation are proposed. Moreover, the full IC and the partial IC schemes are investigated for dynamic TDD in HetNets. The user equipment (UE) packet throughput performance of the proposed/discussed schemes is benchmarked using system-level simulations.

  • 28.
    Dinh, Thanh
    et al.
    School of Electronic Engineering, Soongsil University.
    Kim, Younghan
    School of Electronic Engineering, Soongsil University.
    Gu, Tao
    School of Computer Science, RMIT University, Melbourne.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    L-MAC: A Wake-up Time Self-learning MAC Protocol for Wireless Sensor Networks2016In: Computer Networks, ISSN 1389-1286, E-ISSN 1872-7069, Vol. 105, 33-46 p.Article in journal (Refereed)
    Abstract [en]

    This paper analyzes the trade-off between energy efficiency and packet delivery latency among existing duty-cycling MAC protocols in wireless sensor networks for low-data-rate periodic-reporting applications. We then propose a novel and practical wake-up time self-learning MAC (L-MAC) protocol whose key idea is to reuse the beacon messages of receiver-initiated MAC protocols so that nodes can coordinate their wake-up times with their parent nodes without incurring extra communication overhead. Based on the proposed self-learning mechanism, L-MAC builds an on-demand staggered scheduler that allows any node to forward packets continuously to the sink node. We present an analytical model, and conduct extensive simulations and experiments on TelosB sensors, to show that L-MAC achieves significantly higher energy efficiency than state-of-the-art asynchronous MAC protocols and latency comparable to synchronous MAC protocols. In particular, under QoS requirements of an upper bound of 1 s on one-hop packet delivery latency and a lower bound of 95% on packet delivery ratio, results show that the duty cycle of L-MAC is improved by more than 3.8 times and the end-to-end packet delivery latency of L-MAC is reduced by more than 7 times compared with those of AS-MAC and other state-of-the-art MAC protocols, respectively, for a packet generation interval of 1 minute. L-MAC hence achieves high performance in both energy efficiency and packet delivery latency.

  • 29.
    Dinh, Thanh
    et al.
    School of Electronic Engineering, Soongsil University, Seoul 06978, South Korea.
    Kim, Younghan
    School of Electronic Engineering, Soongsil University, Seoul 06978, South Korea.
    Gu, Tao
    School of Computer Science, Royal Melbourne Institute of Technology University, Melbourne.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Adaptive Low-Power Listening Protocol for Wireless Sensor Networks in Noisy Environments2017In: IEEE Systems Journal, ISSN 1932-8184, E-ISSN 1937-9234Article in journal (Refereed)
    Abstract [en]

    This paper investigates the energy consumption minimization problem for wireless sensor networks running low-power listening (LPL) protocols in noisy environments. We observe that the energy consumed by false wakeups (i.e., wakeups without receiving any packet) of a node in noisy environments can be a dominant factor in many cases, while the false wakeup rate is spatially and temporally dynamic. Based on this observation, without carefully considering the impact of false wakeups, the energy-efficiency performance of LPL nodes in noisy environments may deviate significantly from the optimal performance. To address this problem, we propose a theoretical framework incorporating LPL temporal parameters with the false wakeup rate and the data rate. We then formulate an energy consumption minimization problem for LPL in noisy environments and address it with a simplified and practical approach. Based on the theoretical framework, we design an efficient adaptive protocol for LPL (APL) in noisy environments. Through extensive experimental studies with TelosB nodes in real environments, we show that APL achieves a 20%–40% improvement in energy efficiency compared to existing LPL protocols under various network conditions.

  • 30.
    D'Orazio, Christian Javier
    et al.
    School of Information Technology and Mathematical Sciences, University of South Australia, Australia.
    Rongxing, Lu
    Faculty of Computer Science, University of New Brunswick, Fredericton, NB, Canada.
    Choo, Kim Kwang Raymond
    School of Information Technology and Mathematical Sciences, University of South Australia, Australia.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Markov adversary model to detect vulnerable iOS devices and vulnerabilities in iOS apps2017In: Applied Mathematics and Computation, ISSN 0096-3003, E-ISSN 1873-5649, Vol. 293, 523-544 p.Article in journal (Refereed)
    Abstract [en]

    With the increased convergence of technologies whereby a user can access, store and transmit data across different devices in real time, risks arise from factors such as the lack of appropriate security measures and from users lacking the requisite security awareness or a full understanding of how security measures can be used to their advantage. In this paper, we adapt our previously published adversary model for digital rights management (DRM) apps and demonstrate how it can be used to detect vulnerable iOS devices and to analyse (non-DRM) apps for vulnerabilities that can potentially be exploited. Using our adversary model, we investigate several (jailbroken and non-jailbroken) iOS devices, the Australian Government Medicare Expert Plus (MEP) app, the Commonwealth Bank of Australia app, the Western Union app, the PayPal app, the PocketCloud Remote Desktop app and the Simple Transfer Pro app, and reveal previously unknown vulnerabilities. We then demonstrate how the identified vulnerabilities can be exploited to expose the user's sensitive data and personally identifiable information stored on or transmitted from the device. We conclude with several recommendations to enhance the security and privacy of user data stored on or transmitted from these devices.

  • 31.
    Du, Wei
    et al.
    Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai.
    Leung, Sunney Yung Sun
    Institute of Textile and Clothing, The Hong Kong Polytechnic University, Hong Kong.
    Tang, Yang
    Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Differential Evolution With Event-Triggered Impulsive Control2017In: IEEE Transactions on Cybernetics, ISSN 2168-2267, E-ISSN 2168-2275, Vol. 7, no 1, 244-257 p.Article in journal (Refereed)
    Abstract [en]

    Differential evolution (DE) is a simple but powerful evolutionary algorithm, which has been widely and successfully used in various areas. In this paper, an event-triggered impulsive (ETI) control scheme is introduced to improve the performance of DE. Impulsive control (IPC), a concept that derives from control theory, aims at regulating the states of a network by instantly adjusting the states of a fraction of nodes at certain instants, and these instants are determined by an event-triggered mechanism (ETM). By introducing IPC and ETM into DE, we hope to change the search performance of the population in a positive way by revising the positions of some individuals at certain moments. At the end of each generation, the IPC operation is triggered when the update rate of the population declines or equals zero. In detail, inspired by the concepts of IPC, two types of impulses are presented within the framework of DE in this paper: 1) stabilizing impulses and 2) destabilizing impulses. Stabilizing impulses help the individuals with lower rankings instantly move to a desired state determined by the individuals with better fitness values. Destabilizing impulses randomly alter the positions of inferior individuals within the range of the current population. By intelligently modifying the positions of a subset of individuals with these two kinds of impulses, both the exploitation and exploration abilities of the whole population can be improved. In addition, the proposed ETI can be flexibly incorporated into several state-of-the-art DE variants. Experimental results on the IEEE Congress on Evolutionary Computation (CEC) 2014 benchmark functions show that the developed scheme is simple yet effective, significantly improving the performance of the considered DE algorithms.
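    A minimal sketch of the mechanism outlined in this abstract, under assumed simplifications (standard DE/rand/1/bin, a stagnation-based trigger, and ad-hoc impulse parameters), not the authors' exact ETI scheme: when no individual improves in a generation, the worst quarter of the population receives either a stabilizing impulse toward the best individual or a destabilizing re-randomization within the current population range.

```python
# Simplified DE with event-triggered impulses (illustrative only).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def de_with_impulses(f, dim=10, pop=30, gens=200, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        updates = 0
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = X[a] + F * (X[b] - X[c])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i] = trial, ft
                updates += 1
        if updates == 0:                      # event trigger: the population stagnated
            order = np.argsort(fit)
            best = X[order[0]]
            lo, hi = X.min(axis=0), X.max(axis=0)
            for i in order[-pop // 4:]:       # worst quarter of the population
                if rng.random() < 0.5:        # stabilizing impulse toward the best
                    X[i] = X[i] + rng.random() * (best - X[i])
                else:                         # destabilizing impulse: re-randomize
                    X[i] = rng.uniform(lo, hi)
                fit[i] = f(X[i])
    return X[np.argmin(fit)], fit.min()

best_x, best_f = de_with_impulses(sphere)
print("best fitness:", best_f)
```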

  • 32.
    Du, Wei
    et al.
    Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai , Institute of Textiles and Clothing, The Hong Kong Polytechnic University.
    Tang, Yang
    Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai.
    Leung, Sunney Yung Sun
    Institute of Textile and Clothing, The Hong Kong Polytechnic University.
    Tong, Le
    Institute of Textile and Clothing, The Hong Kong Polytechnic University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Qian, Feng
    Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai , Institute of Textiles and Clothing, The Hong Kong Polytechnic University.
    Robust Order Scheduling in the Fashion Industry: a Multi-Objective Optimization Approach2017In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050Article in journal (Refereed)
    Abstract [en]

    In the fashion industry, order scheduling focuses on the assignment of production orders to appropriate production lines. In reality, before a new order can be put into production, a series of activities known as pre-production events must be completed. In addition, in the real production process, owing to various uncertainties, the daily production quantity of each order is not always as expected. In this research, by considering the pre-production events and the uncertainties in the daily production quantity, robust order scheduling problems in the fashion industry are investigated with the aid of a multi-objective evolutionary algorithm (MOEA) called nondominated sorting adaptive differential evolution (NSJADE). The experimental results illustrate that it is of paramount importance to consider pre-production events in order scheduling problems in the fashion industry. We also show that uncertainty in the daily production quantity heavily affects order scheduling.

  • 33.
    Fan, Qingfeng
    et al.
    Laboratoire DAVID, University of Versailles-Saint-Quentin.
    Xiong, Naixue
    School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Department of Business and Computer Science, Southwestern Oklahoma State University.
    Zeitouni, Karine
    Laboratoire DAVID, University of Versailles-Saint-Quentin.
    Wu, Qiongli
    Laboratory Applied Mathematics and Systems, Ecole Centrale de Paris.
    Tian, Yu-Chu
    School of Electrical Engineering and Computer Science, Queensland University of Technology.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Game Balanced Multi-factor Multicast Routing in Sensor Grid Networks2016In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 367-368, 550-572 p.Article in journal (Refereed)
    Abstract [en]

    In increasingly important sensor grid networks, multicast routing is widely used in data aggregation and distributed query processing, and it requires multicast trees for efficient data transmission. However, sensor nodes in such networks typically have limited resources and computing power. Previous efforts have considered the space, energy and data factors separately to optimize network performance. Considering these factors simultaneously, this paper presents a game-balance-based multi-factor multicast routing approach for sensor grid networks. It integrates the three factors into a unified model through a linear combination. The model is standardized and then solved theoretically by using the concept of game balance from game theory. The solution gives a Nash equilibrium, implying a well-balanced result for all three factors. The theoretical results are implemented in algorithms for cluster formation, cluster core selection, cluster tree construction, and multicast routing. Extensive simulation experiments show that the presented approach mostly gives better overall performance than benchmark methods.

  • 34.
    Fan, Ye
    et al.
    Department of Information and Communication Engineering, Xi’an Jiaotong University, Xi’an, China.
    Liao, Xuewen
    Department of Information and Communication Engineering, Xi’an Jiaotong University, Xi’an, China.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Physical Layer Security Based on Interference Alignment in K-User MIMO Y Wiretap Channels2017In: IEEE Access, E-ISSN 2169-3536, Vol. 5, 5747-5759 p.Article in journal (Refereed)
    Abstract [en]

    This paper studies the secure degrees of freedom (SDOF) of the multiway relay wiretap system, a K-user MIMO Y wiretap channel, where each legitimate user equipped with M antennas intends to convey independent signals via an intermediate relay with N antennas. There exists an eavesdropper equipped with N_e antennas close to the legitimate users. In order to eliminate the multiuser interference and keep the system secure, interference alignment is mainly utilized in the proposed broadcast wiretap channel (BWC) and multi-access BWC (MBWC), and cooperative jamming is adopted as a supplementary approach in the MBWC model. The general feasibility conditions of the interference alignment are deduced as M ≥ K − 1, 2M > N, and N ≥ K(K − 1)/2. In the BWC model, we deduce the SDOF as K min{M, N} − min{N_e, K(K − 1)/2}, which is always a positive value, while in the MBWC model the SDOF is given by K min{M, N}. Finally, since the relay transmits the synthesized signals of the legal signal and the jamming signal in the MBWC model, we propose a power allocation scheme to maximize the secrecy rate. Simulation results demonstrate that our proposed power allocation scheme can improve the secrecy rate under various antenna configurations.
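    The closed-form conditions quoted in the abstract translate directly into a few lines of code; the helper names and the example numbers below are assumptions chosen only to make the formulas concrete.

```python
# Direct transcription of the conditions stated in the abstract, with the
# abstract's notation: K users, M user antennas, N relay antennas, Ne
# eavesdropper antennas.
def ia_feasible(K, M, N):
    """Feasibility of interference alignment: M >= K-1, 2M > N, N >= K(K-1)/2."""
    return M >= K - 1 and 2 * M > N and N >= K * (K - 1) // 2

def sdof_bwc(K, M, N, Ne):
    """SDOF of the broadcast wiretap channel: K*min(M,N) - min(Ne, K(K-1)/2)."""
    return K * min(M, N) - min(Ne, K * (K - 1) // 2)

def sdof_mbwc(K, M, N):
    """SDOF of the multi-access broadcast wiretap channel: K*min(M,N)."""
    return K * min(M, N)

# Example configuration (hypothetical numbers, chosen only to satisfy the conditions).
K, M, N, Ne = 4, 4, 6, 2
print(ia_feasible(K, M, N), sdof_bwc(K, M, N, Ne), sdof_mbwc(K, M, N))
```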

  • 35.
    Femminella, Marco
    et al.
    Department of Engineering, University of Perugia CNIT RU.
    Reali, Gianluca
    Department of Engineering, University of Perugia CNIT RU.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Molecular Communications Model for Drug Delivery2015In: IEEE Transactions on Nanobioscience, ISSN 1536-1241, E-ISSN 1558-2639, Vol. 14, no 8, 935-945 p.Article in journal (Refereed)
    Abstract [en]

    This paper considers the scenario of a targeted drug delivery system, which consists of deploying a number of biological nanomachines close to a biological target (e.g., a tumor), able to deliver drug molecules in the diseased area. Suitably located transmitters are designed to release a continuous flow of drug molecules in the surrounding environment, where they diffuse and reach the target. These molecules are received when they chemically react with compliant receptors deployed on the receiver surface. In these conditions, if the release rate is relatively high and the drug absorption time is significant, congestion may happen, essentially at the receiver site. This phenomenon limits the drug absorption rate and makes the signal transmission ineffective, with an undesired diffusion of drug molecules elsewhere in the body. The original contribution of this paper is a theoretical analysis of the causes of congestion in diffusion-based molecular communications. For this purpose, a reception model consisting of a set of pure-loss queuing systems is proposed. The proposed model exhibits excellent agreement with the results of a simulation campaign carried out using the Biological and Nano-Scale communication simulator version 2 (BiNS2), a well-known simulator for molecular communications whose reliability has been assessed through in vitro experiments. The obtained results can be used in rate control algorithms to determine the optimal release rate of molecules in drug delivery applications.
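    The abstract models reception as a set of pure-loss queuing systems; as a purely illustrative companion (not the paper's model), the Erlang-B formula below gives the blocking probability of a textbook M/M/c/c loss system and shows how a receptor-congestion probability could be computed from an assumed molecule arrival rate, mean absorption time, and receptor count.

```python
# Illustrative only: Erlang-B blocking probability for an M/M/c/c loss system,
# used here as a stand-in for receptor saturation ("congestion") at the
# receiver; it is not claimed to be the exact model used in the paper.
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/c/c loss system (recursive form)."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

arrival_rate = 200.0      # molecules reaching the receiver per second (assumed)
absorption_time = 0.05    # mean time a receptor stays occupied, in seconds (assumed)
receptors = 12            # receptors on the receiver surface (assumed)
load = arrival_rate * absorption_time
print(f"fraction of molecules lost to congestion: {erlang_b(load, receptors):.3f}")
```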

  • 36.
    Fong, Simon
    et al.
    Department of Computer and Information Science, University of Macau.
    Han, Dong
    Department of Computer and Information Science, University of Macau.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Comparative study of incremental learning algorithms in multidimensional outlier detection on data stream2015In: Improving Knowledge Discovery through the Integration of Data Mining Techniques, Hershey, PA: IGI Global, 2015, 54-73 p.Chapter in book (Refereed)
    Abstract [en]

    Multi-dimensional outlier detection (MOD) over data streams is one of the most significant data stream mining techniques. When multivariate data are streaming at high speed, outliers must be detected efficiently and accurately. Conventional outlier detection methods are based on observing the full dataset and its statistical distribution, and the data are assumed to be stationary. However, this conventional approach has an inherent limitation: it always assumes the availability of the entire dataset. In modern applications, especially those that operate in real-time environments, the data arrive in the form of live data feeds; they are dynamic and ever evolving in terms of their statistical distribution and concepts. Outlier detection should no longer be done in batches, but in an incremental manner. In this chapter, we investigate this important concept of MOD. In particular, we evaluate the effectiveness of a collection of incremental learning algorithms, which are the underlying pattern recognition mechanisms for MOD. Specifically, we combine incremental learning algorithms into three types of MOD: Global Analysis, Cumulative Analysis, and Lightweight Analysis with Sliding Window. Different classification algorithms are put under test for performance comparison.

  • 37.
    Fong, Simon
    et al.
    Department of Computer and Information Science, University of Macau.
    Ji, Jinyan
    Department of Computer and Information Science, Faculty of Science and Technology, University of Macau.
    Gong, Xueyuan
    Department of Computer and Information Science, University of Macau.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Advances of applying metaheuristics to data mining techniques2015In: Improving Knowledge Discovery through the Integration of Data Mining Techniques, Hershey, PA: IGI Global, 2015, 75-103 p.Chapter in book (Refereed)
    Abstract [en]

    Metaheuristics have lately gained popularity among researchers. Their underlying designs are inspired by biological entities and their behaviors, e.g., schools of fish, colonies of insects, and other animals. They have been used successfully in optimization applications ranging from financial modeling, image processing, resource allocation and job scheduling to bioinformatics. In particular, metaheuristics have proven themselves in many combinatorial optimization problems, where it is not necessary to attempt all possible candidate solutions via exhaustive enumeration and evaluation, which is computationally intractable. The aim of this paper is to highlight some recent research related to metaheuristics and to discuss how they can enhance the efficacy of data mining algorithms. A foremost challenge in data mining is combinatorial optimization, which often leads to performance degradation and scalability issues. Two case studies are presented, where metaheuristics improve the accuracy of classification and clustering by avoiding local optima.

  • 38.
    Fong, Simon
    et al.
    Department of Computer and Information Science, University of Macau.
    Wong, Raymond K.
    School of Computer Science and Engineering, University of New South Wales, Sydney.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Accelerated PSO Swarm Search Feature Selection for Data Stream Mining Big Data2016In: IEEE Transactions on Services Computing, ISSN 1939-1374, E-ISSN 1939-1374, Vol. 9, no 1, 33-45 p.Article in journal (Refereed)
    Abstract [en]

    Although Big Data is often hyped, it brings many technical challenges that confront both academic research communities and commercial IT deployments, and its root sources are data streams and the curse of dimensionality. It is generally known that data sourced from data streams accumulate continuously, making traditional batch-based model induction algorithms infeasible for real-time data mining. Feature selection has been widely used to lighten the processing load in inducing a data mining model. However, when mining high-dimensional data, the search space from which an optimal feature subset is derived grows exponentially in size, leading to an intractable computational demand. To tackle this problem, which stems mainly from the high dimensionality and streaming format of Big Data feeds, a novel lightweight feature selection method is proposed. The feature selection is designed particularly for mining streaming data on the fly, using an accelerated particle swarm optimization (APSO) type of swarm search that achieves enhanced analytical accuracy within reasonable processing time. In this paper, a collection of Big Data sets with exceptionally high dimensionality are put to the test of our new feature selection algorithm for performance evaluation.
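    In the spirit of the swarm-search feature selection described above, the following sketch uses an APSO-style position update (global best plus a decaying random perturbation) over real-valued particles thresholded into feature masks; the classifier, parameter values, and toy data are assumptions for illustration and not those used in the paper.

```python
# Illustrative APSO-style feature selection with a nearest-centroid fitness.
import numpy as np

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte, mask):
    # Accuracy of a simple nearest-centroid classifier on the selected features.
    if not mask.any():
        return 0.0
    Xtr, Xte = Xtr[:, mask], Xte[:, mask]
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)])
    pred = np.argmin(((Xte[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return float((pred == yte).mean())

def apso_feature_selection(Xtr, ytr, Xte, yte, n_particles=20, iters=40,
                           alpha=0.3, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = Xtr.shape[1]
    pos = rng.random((n_particles, dim))          # real-valued particle positions
    def fit(p):
        return nearest_centroid_accuracy(Xtr, ytr, Xte, yte, p > 0.5)
    scores = np.array([fit(p) for p in pos])
    gbest = pos[scores.argmax()].copy()
    for t in range(iters):
        shrink = alpha * 0.95 ** t                # decaying randomization, as in APSO
        pos = (1 - beta) * pos + beta * gbest + shrink * rng.standard_normal(pos.shape)
        pos = np.clip(pos, 0.0, 1.0)
        scores = np.array([fit(p) for p in pos])
        if scores.max() > fit(gbest):
            gbest = pos[scores.argmax()].copy()
    return gbest > 0.5

# Toy data: 2 informative features out of 15.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = rng.standard_normal((300, 15))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y
sel = apso_feature_selection(X[:200], y[:200], X[200:], y[200:])
print("selected features:", np.flatnonzero(sel))
```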

  • 39.
    Fu, Zhangjie
    et al.
    Department of Computer and Software, Nanjing University of Information Science and Technology.
    Huang, Fengxiao
    Department of Computer and Software, Nanjing University of Information Science and Technology.
    Sun, Xingming
    Department of Computer and Software, Nanjing University of Information Science and Technology.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Enabling Semantic Search based on ConceptualGraphs over Encrypted Outsourced Data2017In: IEEE Transactions on Services Computing, ISSN 1939-1374, E-ISSN 1939-1374Article in journal (Refereed)
    Abstract [en]

    Currently, searchable encryption is a hot topic in the field of cloud computing. The existing achievements are mainly focused on keyword-based search schemes, and almost all of them depend on predefined keywords extracted in the phases of index construction and query. However, keyword-based search schemes ignore the semantic representation information of users' retrieval and cannot completely match users' search intention. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, for the first time, we define and solve the problem of semantic search based on conceptual graphs (CGs) over encrypted outsourced data in cloud computing (SSCG). We first employ the efficient measure of "sentence scoring" in text summarization and Tregex to extract the most important and simplified topic sentences from documents. We then convert these simplified sentences into CGs. To perform quantitative calculation of CGs, we design a new method that can map CGs to vectors. Next, we rank the returned results based on a "text summarization score". Furthermore, we propose a basic idea for SSCG and give a significantly improved scheme to satisfy the security guarantee of searchable symmetric encryption (SSE). Finally, we choose a real-world dataset, i.e., the CNN dataset, to test our scheme. The results obtained from the experiment show the effectiveness of our proposed scheme.

  • 40.
    Gao, Deyun
    et al.
    School of Electronic and Information Engineering, Beijing Jiaotong University.
    Rao, Ying
    China Academy of Electronics and Information Technology, Beijing.
    Foh, Huang Chen
    5GIC, Institute for Communication Systems, Department of Electrical and Electronic Engineering, University of Surrey.
    Zhang, Hongke
    School of Electronic and Information Engineering, Beijing Jiaotong University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    PMNDN: Proxy Based Mobility Support Approach in Mobile NDN Environment2017In: IEEE Transactions on Network and Service Management, ISSN 1932-4537, E-ISSN 1932-4537, Vol. 14, no 1, 191-203 p.Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the source mobility problem that exists in the current named data networking (NDN) architecture and propose a proxy-based mobility support approach, named PMNDN, to overcome it. PMNDN uses a proxy to efficiently manage source mobility. In addition, the functionalities of the NDN access routers are extended to track the mobility status of a source and signal the proxy about a handoff event. With this design, a mobile source does not need to participate in handoff signaling, which reduces the consumption of limited wireless bandwidth. PMNDN also features an ID that is structurally similar to the content name, so that the routing scalability of the NDN architecture is maintained and the addressing efficiency of Interest packets is improved. We illustrate the performance advantages of our proposed solution by comparing the handoff performance of the mobility support approaches with that of the NDN architecture and the current Internet architecture via analytical and simulation investigations. We show that PMNDN offers lower handoff cost, shorter handoff latency, and fewer packet losses during the handoff process.

  • 41.
    Ghulam, Muhammad
    et al.
    Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh .
    Alhamid, Mohammed F
    Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh .
    Hossain, M. Shamim
    Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh .
    Almogren, Ahmad S.
    Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh .
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Enhanced Living by Assessing Voice Pathology Using a Co-Occurrence Matrix2017In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 17, no 2, 267Article in journal (Refereed)
    Abstract [en]

    A large number of people around the world suffer from various disabilities. Disabilities affect not only children but also adults of different professions. Smart technology can assist the disabled population and lead to a comfortable life in an enhanced living environment (ELE). In this paper, we propose an effective voice pathology assessment system that works in a smart home framework. The proposed system takes input from various sensors and processes the acquired voice signals and electroglottography (EGG) signals. Co-occurrence matrices in different directions and neighborhoods were obtained from the spectrograms of these signals. Several features, such as energy, entropy, contrast, and homogeneity, were calculated from these matrices and fed into a Gaussian mixture model-based classifier. Experiments were performed with a publicly available database, namely, the Saarbrucken voice database. The results demonstrate the feasibility of the proposed system in light of its high accuracy and speed. The proposed system can be extended to assess other disabilities in an ELE.
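    The co-occurrence-matrix features named in the abstract (energy, entropy, contrast, homogeneity) can be illustrated with a short sketch; the synthetic signal, quantization depth, and offset below are placeholders, and the Gaussian-mixture classification stage is omitted.

```python
# Sketch of a gray-level co-occurrence matrix (GLCM) over a spectrogram and
# the four texture features named in the abstract (illustrative settings only).
import numpy as np
from scipy.signal import spectrogram

def glcm(img, levels=16, dx=1, dy=0):
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize to gray levels
    M = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            M[q[i, j], q[i + dy, j + dx]] += 1                 # count co-occurring pairs
    return M / M.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "energy": float((P ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
        "contrast": float((P * (i - j) ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }

# Placeholder "voice" signal; in the real system this would be the recorded
# voice or EGG waveform from the smart-home sensors.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
_, _, Sxx = spectrogram(signal, fs=fs)
print(glcm_features(glcm(Sxx)))
```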

  • 42.
    Gong, Xueyuan
    et al.
    Department of Computer and Information Science, University of Macau.
    Fong, Simon
    Department of Computer and Information Science, University of Macau.
    Si, Yainwhar
    Department of Computer and Information Science, University of Macau.
    Biuk-Agha, Robert P.
    School of Computer Science and Engineering, University of New South Wales, Sydney.
    Wong, Raymond K.
    School of Computer Science and Engineering, University of New South Wales, Sydney.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Normalized Cross-Match: Pattern Discovery Algorithm from Biofeedback Signals2016In: Trends and Applications in Knowledge Discovery and Data Mining: PAKDD 2016 Workshops, BDM, MLSDA, PACC, WDMBF, Auckland, New Zealand, April 19, 2016, Revised Selected Papers, Encyclopedia of Global Archaeology/Springer Verlag, 2016, 169-180 p.Conference paper (Refereed)
    Abstract [en]

    Biofeedback signals are important elements in critical care applications, such as monitoring ECG data of a patient, discovering patterns from large amounts of ECG data, and detecting outliers in ECG data. Because the signal data update continuously and the sampling rates may differ, time-series data streams are harder to deal with than traditional historical time-series data. For the pattern discovery problem on time-series streams, Toyoda proposed the CrossMatch (CM) approach to discover patterns between two time-series data streams (sequences), which requires only O(n) time per data update, where n is the length of one sequence. CM, however, does not support normalization, which is required for some kinds of sequences (e.g., EEG data, ECG data). Therefore, we propose a normalized-CrossMatch (NCM) approach that extends CM to enforce normalization while maintaining the same performance capabilities.

  • 43.
    Gong, Yueyuan
    et al.
    Department of Computer and Information Science, University of Macau.
    Fong, Simon
    Department of Computer and Information Science, University of Macau.
    Wong, Raymond K.
    School of Computer Science and Engineering, University of New South Wales, Sydney.
    Mohammed, Sabah
    Department of Computer Science, Lakehead University, Thunder Bay.
    Faidhi, Jinan
    Department of Computer Science, Lakehead University, Thunder Bay.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Discovering sub-patterns from time series using a normalized cross-match algorithm2016In: Journal of Supercomputing, ISSN 0920-8542, E-ISSN 1573-0484, Vol. 72, no 10, 3850-3867 p.Article in journal (Refereed)
    Abstract [en]

    Time series data stream mining has attracted considerable research interest in recent years. Pattern discovery is a challenging problem in time series data stream mining. Because the data update continuously and the sampling rates may differ, dynamic time warping (DTW)-based approaches are used to solve the pattern discovery problem in time series data streams. However, the naive form of the DTW-based approach is computationally expensive. Therefore, Toyoda proposed the CrossMatch (CM) approach to discover patterns between two time series data streams (sequences), which requires only O(n) time per data update, where n is the length of one sequence. CM, however, does not support normalization, which is required for some kinds of sequences (e.g., stock prices, ECG data). Therefore, we propose a normalized-CrossMatch (NCM) approach that extends CM to enforce normalization while maintaining the same performance capabilities.
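    As background for why normalization matters in this setting, the sketch below contrasts raw and z-normalized dynamic time warping on two sequences that share a shape but differ in offset and scale; it deliberately omits the streaming, O(n)-per-update CrossMatch machinery that NCM builds on.

```python
# z-normalization followed by classic DTW (background sketch, not NCM itself).
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.sqrt(D[n, m]))

# Two sequences with the same shape but different offset and scale:
t = np.linspace(0, 4 * np.pi, 120)
a = np.sin(t)
b = 50.0 + 10.0 * np.sin(t + 0.2)
print("raw DTW:       ", round(dtw_distance(a, b), 2))
print("normalized DTW:", round(dtw_distance(znorm(a), znorm(b)), 2))
```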

  • 44.
    Hayajneh, Thaier
    et al.
    School of Engineering and Computing Sciences, New York Institute of Technology.
    Mohd, Bassam Jamil
    Computer Engineering Department, Hashemite University.
    Imran, Muhammad Al
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Almashaqbeh, Ghada
    Computer Science Department, Columbia University, New York.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Secure Authentication for Remote Patient Monitoring with Wireless Medical Sensor Networks2016In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 16, no 4, 424Article in journal (Refereed)
    Abstract [en]

    There is broad consensus that remote health monitoring will benefit all stakeholders in the healthcare system and that it has the potential to save billions of dollars. Among the major concerns that are preventing the patients from widely adopting this technology are data privacy and security. Wireless Medical Sensor Networks (MSNs) are the building blocks for remote health monitoring systems. This paper helps to identify the most challenging security issues in the existing authentication protocols for remote patient monitoring and presents a lightweight public-key-based authentication protocol for MSNs. In MSNs, the nodes are classified into sensors that report measurements about the human body and actuators that receive commands from the medical staff and perform actions. Authenticating these commands is a critical security issue, as any alteration may lead to serious consequences. The proposed protocol is based on the Rabin authentication algorithm, which is modified in this paper to improve its signature signing process, making it suitable for delay-sensitive MSN applications. To prove the efficiency of the Rabin algorithm, we implemented the algorithm with different hardware settings using Tmote Sky motes and also programmed the algorithm on an FPGA to evaluate its design and performance. Furthermore, the proposed protocol is implemented and tested using the MIRACL (Multiprecision Integer and Rational Arithmetic C/C++) library. The results show that secure, direct, instant and authenticated commands can be delivered from the medical staff to the MSN nodes
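    To make the verification asymmetry concrete, here is a toy illustration of the Rabin signature primitive that the protocol builds on (signing takes a modular square root, verifying is a single squaring); the tiny hard-coded primes, padding loop, and hash choice are assumptions for illustration and do not reproduce the paper's modified signing process or production-size keys.

```python
# Toy Rabin signature: sign = modular square root via CRT, verify = one squaring.
import hashlib

p, q = 499, 547            # toy primes with p ≡ q ≡ 3 (mod 4); real keys are far larger
n = p * q

def h(msg, pad):
    return int.from_bytes(hashlib.sha256(msg + pad.to_bytes(4, "big")).digest(), "big") % n

def sign(msg):
    pad = 0
    while True:            # retry padding until the digest is a quadratic residue mod p and q
        m = h(msg, pad)
        if pow(m, (p - 1) // 2, p) == 1 and pow(m, (q - 1) // 2, q) == 1:
            break
        pad += 1
    rp = pow(m, (p + 1) // 4, p)          # square roots are easy when p, q ≡ 3 (mod 4)
    rq = pow(m, (q + 1) // 4, q)
    # Combine the two roots with the Chinese Remainder Theorem.
    s = (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n
    return s, pad

def verify(msg, s, pad):
    return pow(s, 2, n) == h(msg, pad)    # a single modular squaring

sig, pad = sign(b"open valve 3")
print(verify(b"open valve 3", sig, pad), verify(b"open valve 4", sig, pad))
```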

  • 45.
    Hu, Jiankun
    et al.
    University of New South Wales, Canberra.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Energy Big Data Analytics and Security: Challenges and Opportunities2016In: IEEE Transactions on Smart Grid, ISSN 1949-3053, E-ISSN 1949-3061, Vol. 7, no 5, 2423-2436 p.Article in journal (Refereed)
    Abstract [en]

    The limited availability of fossil fuels and the call for a sustainable environment have brought about new technologies for using fossil fuels with high efficiency and for introducing renewable energy. The smart grid is an emerging technology that can fulfill such demands by incorporating advanced information and communications technology (ICT). The pervasive deployment of advanced ICT, especially smart metering, will generate big energy data in terms of volume, velocity, and variety. The generated big data can bring huge benefits to better energy planning and to efficient energy generation and distribution. As such data involve end users' privacy and the secure operation of critical infrastructure, new security issues will arise. This paper surveys and discusses new findings and developments in big energy data analytics and security. Several taxonomies are proposed to express the intriguing relationships of the various variables in the field.

  • 46.
    Imran, Muhammad Al
    et al.
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University, King Saud University, Riyadh.
    Ullah, Sana
    Polytechnic Institute of Porto.
    Yasar, Ansar-Ul-Haque
    Hasselt University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Enabling Technologies for Next-Generation Sensor Networks:: Prospects, Issues, Solutions, and Emerging Trends2015In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Vol. 2015, 634268Article in journal (Refereed)
    Abstract [en]

    This paper firstly investigates the problem of uplink power control in cognitive radio networks (CRNs) with multiple primary users (PUs) and multiple secondary users (SUs), considering channel outage constraints and interference power constraints, where PUs and SUs compete with each other to maximize their utilities. We formulate a Stackelberg game to model this hierarchical competition, where PUs and SUs are considered to be leaders and followers, respectively. We theoretically prove the existence and uniqueness of a robust Stackelberg equilibrium for the noncooperative approach. Then, we apply the Lagrange dual decomposition method to solve this problem, and an efficient iterative algorithm is proposed to search for the Stackelberg equilibrium. Simulation results show that the proposed algorithm improves performance compared with proportionate game schemes.

  • 47.
    Islam, Mohammad A.
    et al.
    Florida International University, Miami.
    Ren, Shaolei
    University of California, Riverside, CA.
    Quan, Gang
    Florida International University, Miami.
    Shakir, Muhammad Zeeshan
    Texas A&M University, Qatar.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Water-Constrained Geographic Load Balancing in Data Centers2017In: IEEE Transactions on Cloud Computing, ISSN 2168-7161, Vol. 5, no 2, 208-220 p., 7152842Article in journal (Refereed)
    Abstract [en]

    Spreading across many parts of the world and presently hitting California hard, extended droughts could potentially threaten reliable electricity production and local water supplies, both of which are critical for data center operation. While numerous efforts have been dedicated to reducing data centers' energy consumption, the enormity of data centers' water footprints is largely neglected and, if left unchecked, may handicap service availability during droughts. In this paper, we propose a water-aware workload management algorithm, called WATCH (WATer-constrained workload sCHeduling in data centers), which caps data centers' long-term water consumption by exploiting spatio-temporal diversities of water efficiency and dynamically dispatching workloads among distributed data centers. We demonstrate the effectiveness of WATCH both analytically and empirically using simulations: based on only online information, WATCH results in a provably low operational cost while successfully capping water consumption under a desired level. Our results also show that WATCH can cut water consumption by 20 percent while incurring only a negligible cost increase, even compared to a state-of-the-art cost-minimizing but water-oblivious solution. Sensitivity studies are conducted to validate WATCH under various settings.
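    A hypothetical miniature of the dispatch trade-off described above (not the WATCH algorithm itself, which operates online): split an hour's workload across data centers to minimize electricity cost subject to a water budget, with all numbers invented.

```python
# Water-capped workload dispatch as a tiny linear program (illustrative only).
import numpy as np
from scipy.optimize import linprog

demand = 100.0                                  # total workload units this hour
capacity = np.array([60.0, 70.0, 50.0])         # per-data-center capacity
energy_cost = np.array([0.9, 1.2, 1.5])         # $ per workload unit
water_use = np.array([1.8, 0.7, 0.4])           # litres per workload unit (WUE proxy)
water_cap = 110.0                               # litres allowed this hour

res = linprog(
    c=energy_cost,                              # minimize electricity cost
    A_ub=water_use[None, :], b_ub=[water_cap],  # water budget constraint
    A_eq=np.ones((1, 3)), b_eq=[demand],        # all workload must be served
    bounds=[(0, cap) for cap in capacity],
    method="highs",
)
print("dispatch:", np.round(res.x, 1), "cost:", round(res.fun, 1),
      "water used:", round(float(water_use @ res.x), 1))
```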

  • 48.
    Javaid, Nadeem
    et al.
    COMSATS Institute of Information Technology, Islamabad.
    Shah, Mehreen
    Allama Iqbal Open University, Islamabad.
    Ahmad, Ashfaq
    COMSATS Institute of Information Technology, Islamabad.
    Imran, Muhammad Al
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Khan, Majid Iqbal
    COMSATS Institute of Information Technology, Islamabad.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Enhanced Energy Balanced Data Transmission Protocol for Underwater Acoustic Sensor Networks2016In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 16, no 4, 487Article in journal (Refereed)
    Abstract [en]

    This paper presents two new energy-balanced routing protocols for Underwater Acoustic Sensor Networks (UASNs): the Efficient and Balanced Energy consumption Technique (EBET) and Enhanced EBET (EEBET). The first proposed protocol avoids direct transmission over long distances to save a sufficient amount of the energy consumed in the routing process. The second protocol overcomes the deficiencies of both the Balanced Transmission Mechanism (BTM) and EBET techniques. EBET selects the relay node on the basis of an optimal distance threshold, which prolongs network lifetime. The initial energy of each sensor node is divided into energy levels for balanced energy consumption. Selecting a node with a high energy level within transmission range avoids long-distance direct data transmission. EEBET incorporates a depth threshold to minimize the number of hops between the source node and the sink while eliminating backward data transmissions. The EBET technique balances energy consumption within successive ring sectors, while EEBET balances the energy consumption of the entire network. In EEBET, the optimum number of energy levels is also calculated to further enhance the network lifetime. The effectiveness of the proposed schemes is validated through simulations in which they are compared with two existing routing protocols in terms of network lifetime, transmission loss, and throughput. The simulations are conducted under different network radii and varied numbers of nodes.

  • 49.
    Jiau, Mingkai
    et al.
    Department of Electronic Engineering, National Taipei University of Technology.
    Huang, Shihchia
    Department of Electronic Engineering, National Taipei University of Technology.
    Hwang, Jenqneng
    Department of Electrical Engineering, University of Washington, Seattle.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Multimedia Services in Cloud-Based Vehicular Networks2015In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 7, no 3, 62-79 p., 7166430Article in journal (Refereed)
    Abstract [en]

    Research into the requirements for mobile services has seen a growing interest in the fields of cloud technology and vehicular applications. Integrating cloud computing and storage with vehicles is a way to increase the accessibility of multimedia services and inspires myriad potential applications and research topics. This paper presents an overview of the characteristics of cloud computing and introduces the basic concepts of vehicular networks. An architecture for multimedia cloud computing is proposed to suit subscription service mechanisms. The tendency to equip vehicles with advanced embedded devices such as diverse sensors increases the capability of vehicles to provide computation and to collect multimedia content within the vehicular network. The taxonomy of cloud-based vehicular networks is then addressed from the standpoint of the service relationship between cloud computing and vehicular networks. In this paper, we identify the main considerations and challenges for cloud-based vehicular networks regarding multimedia services and propose potential research directions to make multimedia services achievable. More specifically, we quantitatively evaluate the performance metrics of these studies. For example, in the proposed broadcast storm mitigation scheme for vehicular networks, the packet delivery ratio and the normalized throughput can both reach about 90%, making the proposed scheme a useful candidate for multimedia data exchange. Moreover, in the video uplinking scenarios, the proposed scheme compares favorably with two well-known schedulers, M-LWDF and EXP, with performance much closer to the optimum.

  • 50.
    Jindal, Anish
    et al.
    CSE Department, Thapar University.
    Dua, Amit
    Department of Computer Science and Information Systems, BITS Pilani.
    Kumar, Neeraj
    CSE Department, Thapar University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rodrigues, Joel J.P.C.
    National Institute of Telecommunications (Inatel), Brazil.
    An efficient fuzzy rule-based big data analytics scheme for providing healthcare-as-a-service2017In: IEEE International Conference on Communications, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, 7996965Conference paper (Refereed)
    Abstract [en]

    With advancements in information and communication technology (ICT), there is an increase in the number of users of remote healthcare applications. The data collected about patients in these applications vary with respect to volume, velocity, variety, veracity, and value. Processing such a large collection of heterogeneous data is one of the biggest challenges and needs a specialized approach. To address this issue, a new fuzzy rule-based classifier for big data handling using cloud-based infrastructure is presented in this paper, with the aim of providing Healthcare-as-a-Service (HaaS) to users at remote locations. The proposed scheme is based upon cluster formation using a modified Expectation-Maximization (EM) algorithm and processing of the big data in the cloud environment. Then, a fuzzy rule-based classifier is designed for efficient decision making about data classification in the proposed scheme. The proposed scheme is evaluated with respect to different metrics such as classification time, response time, accuracy, and false positive rate. The results obtained are compared with standard techniques to confirm the effectiveness of the proposed scheme.
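    A toy end-to-end sketch of the two stages mentioned in the abstract, with invented features, membership functions, and rules: records are grouped by the EM algorithm (scikit-learn's GaussianMixture) and then scored by a small fuzzy rule base; the paper's modified EM and its cloud deployment are not reproduced.

```python
# EM clustering followed by a tiny fuzzy rule base (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzy_risk(heart_rate, temperature):
    hr_high = tri(heart_rate, 90, 120, 200)
    temp_high = tri(temperature, 37.5, 39.0, 42.0)
    # Rule: risk is "high" to the degree that heart rate AND temperature are high.
    return np.minimum(hr_high, temp_high)

rng = np.random.default_rng(0)
# Synthetic patient records: [heart rate, body temperature].
healthy = rng.normal([72, 36.8], [8, 0.3], (200, 2))
unwell = rng.normal([115, 38.9], [10, 0.5], (50, 2))
records = np.vstack([healthy, unwell])

clusters = GaussianMixture(n_components=2, random_state=0).fit_predict(records)
risk = fuzzy_risk(records[:, 0], records[:, 1])
print("cluster sizes:", np.bincount(clusters))
print("mean fuzzy risk per cluster:",
      [round(float(risk[clusters == k].mean()), 2) for k in range(2)])
```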
