Search results 51-100 of 850
  • 51.
    Wazid, Mohammad
    et al.
    Cyber Security and Networks Lab, Innopolis University, Innopolis, Russian Federation.
    Kumar Das, Ashok
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad, India.
    Kumar, Neeraj
    Department of Computer Science and Engineering, Thapar University, Patiala, India.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rodrigues, Joel J. P. C.
    National Institute of Telecommunications (Inatel), Brazil; Instituto de Telecomunicações, Portugal; University of Fortaleza (UNIFOR), Brazil.
    Design and Analysis of Secure Lightweight Remote User Authentication and Key Agreement Scheme in Internet of Drones Deployment. 2019. In: IEEE Internet of Things Journal, ISSN 2327-4662, Vol. 6, no. 2, p. 3572-3584, article id 8581510. Article in journal (Refereed)
    Abstract [en]

    The Internet of Drones (IoD) provides coordinated access to unmanned aerial vehicles (UAVs), referred to as drones. The ongoing miniaturization of sensors, actuators, and processors, together with ubiquitous wireless connectivity, enables drones to be used in a wide range of applications, from military to civilian. Since most IoD applications are real-time, users are generally interested in accessing real-time information from drones belonging to a particular fly zone. This requires allowing users to access real-time data directly from flying drones inside the IoD environment rather than from the server, which opens a serious security risk that may degrade the performance of any solution deployed in the IoD environment. To address this important issue, we propose a novel lightweight user authentication scheme in which a user in the IoD environment can access data directly from a drone, provided that the user is authorized to access that drone's data. Formal security verification using the broadly accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, along with informal security analysis, shows that our scheme is secure against several known attacks. A performance comparison demonstrates that our scheme is efficient with respect to various parameters and provides better security than related existing schemes. Finally, the scheme is demonstrated in practice using the widely accepted NS2 simulator.

  • 52.
    Wazid, Mohammad
    et al.
    Cyber Security and Networks Lab, Innopolis University, Innopolis, Russian Federation.
    Kumar Das, Ashok
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad, India.
    Kumar, Neeraj
    Department of Computer Science and Engineering, Thapar University, Patiala, India.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Design of secure key management and user authentication scheme for fog computing services. 2019. In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 91, p. 475-492. Article in journal (Refereed)
    Abstract [en]

    Fog computing (fog networking) is a decentralized computing infrastructure in which data, applications, compute and storage are distributed in the most logical and efficient places between the data sources (i.e., smart devices) and the cloud. It provides better service than cloud computing because it delivers better performance at reasonably low cost. Since cloud computing has security and privacy issues, and fog computing is an extension of cloud computing, fog computing inherits those security and privacy issues. In this paper, we design a new secure key management and user authentication scheme for the fog computing environment, called SAKA-FC. SAKA-FC is efficient because it uses only lightweight operations, such as the one-way cryptographic hash function and bitwise exclusive-OR (XOR), on the smart devices, which are resource-constrained in nature. SAKA-FC is shown to be secure through formal security analysis under the broadly accepted Real-Or-Random (ROR) model, formal security verification using the widely used Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, and informal security analysis. In addition, SAKA-FC is implemented for practical demonstration using the widely used NS2 simulator.
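    The abstract above names only two primitives, a one-way hash and XOR. As a rough, hypothetical illustration of why such operations suit resource-constrained devices (this is not the actual SAKA-FC protocol; the nonce-masking pattern and labels are invented for the sketch):

    ```python
    import hashlib
    import secrets

    def h(*parts: bytes) -> bytes:
        """One-way hash over concatenated inputs (SHA-256 here)."""
        return hashlib.sha256(b"".join(parts)).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        """Bitwise XOR of two equal-length byte strings."""
        return bytes(x ^ y for x, y in zip(a, b))

    # Toy illustration: a device and a fog node sharing a secret K each
    # combine K with exchanged nonces to derive the same session key.
    K = h(b"pre-shared-secret")
    n_dev = secrets.token_bytes(32)   # device nonce (sent to fog node)
    n_fog = secrets.token_bytes(32)   # fog-node nonce (sent to device)

    # Each side masks its nonce with a hash of the shared secret before
    # sending, so an eavesdropper sees only hash/XOR outputs.
    masked_dev = xor(n_dev, h(K, b"dev"))
    masked_fog = xor(n_fog, h(K, b"fog"))

    # The fog node unmasks the device nonce and derives the session key;
    # the device performs the mirror-image computation.
    sk_fog = h(K, xor(masked_dev, h(K, b"dev")), n_fog)
    sk_dev = h(K, n_dev, xor(masked_fog, h(K, b"fog")))
    assert sk_fog == sk_dev
    ```

    Both sides end up with the same 256-bit session key while only hash outputs and XOR-masked values cross the network, which is the kind of cost profile a constrained smart device can afford.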

  • 53.
    Xu, Xiwei
    et al.
    Data61, CSIRO, Sydney, Australia. School of Computer Science and Engineering, UNSW, Sydney, Australia.
    Lu, Qinghua
    Data61, CSIRO, Sydney, Australia. School of Computer Science and Engineering, UNSW, Sydney, Australia.
    Liu, Yue
    College of Computer and Communication Engineering, China University of Petroleum (East China), Qingdao, China.
    Zhu, Liming
    Data61, CSIRO, Sydney, Australia. School of Computer Science and Engineering, UNSW, Sydney, Australia.
    Yao, Haonan
    College of Computer and Communication Engineering, China University of Petroleum (East China), Qingdao, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Designing blockchain-based applications: a case study for imported product traceability. 2019. In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 92, p. 399-406. Article in journal (Refereed)
    Abstract [en]

    Blockchain technology enables decentralization as a new form of distributed software architecture, in which components can reach agreement on shared system states without trusting a central integration point. Since blockchain is an emerging technology still at an early stage of development, there is limited experience in applying it to real-world software applications. We applied blockchain application design approaches proposed in the software architecture community to a real-world project called originChain, a blockchain-based traceability system that restructures an existing system by replacing its central database with a blockchain. In this paper, we share our experience of building originChain. By using blockchain and designing for security, originChain provides transparent, tamper-proof traceability data with high availability and enables automated regulatory-compliance checking and adaptation in product traceability scenarios. We also present both qualitative and quantitative analyses of originChain's software architecture. Based on our experience and analysis, we found that the structural design of smart contracts has a large impact on the quality of the system.

  • 54.
    Laine, Teemu H.
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suk, Haejung
    Ajou University.
    Designing Educational Mobile Augmented Reality Games Using Motivators and Disturbance Factors. 2019. In: Augmented Reality Games II: The Gamification of Education, Medicine and Art / [ed] Vladimir Geroimenko, Springer, 2019, p. 33-56. Chapter in book (Refereed)
    Abstract [en]

    Mobile augmented reality (MAR) has emerged as a mainstream technology to provide novel visualization and interaction opportunities across application domains. The primary forte of MAR is its ability to bridge the real world with virtual worlds by bringing virtual elements onto a real-world view, and by adapting the experience according to the user’s location and other context parameters. Research has shown that MAR possesses a multitude of affordances in the field of education. These affordances can be amplified in educational MAR games (EMARGs) due to the motivational value and the fun factor provided by intriguing game elements. However, there is a gap in research on design guidelines for EMARGs, especially regarding the connection to motivators and disturbance factors, which may have positive and negative effects respectively on the learning experience. In this chapter, we first describe the related background, and then present two MAR case studies—a treasure hunt and a story-driven adventure game—to illustrate our experiences in designing EMARGs. We conduct a qualitative analysis of the case studies based on questionnaire answers and interviews from 29 and 112 participants, respectively, to identify motivators (16, 20) and disturbance factors (11, 25) in the participants’ gameplay experiences. Through an analysis of the motivators, disturbance factors and our design experiences, we propose 24 design guidelines in six categories that can potentially strengthen motivators and diminish disturbance factors in MAR applications.

  • 55.
    Jangirala, Srinivas
    et al.
    Jindal Global Business School, O. P. Jindal Global University, Haryana, India.
    Das, Ashok Kumar
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology Hyderabad, Hyderabad, Telangana, India .
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Designing Secure Lightweight Blockchain-Enabled RFID-Based Authentication Protocol for Supply Chains in 5G Mobile Edge Computing Environment. 2019. In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050. Article in journal (Refereed)
    Abstract [en]

    Securing real-time data about goods in transit in supply chains requires bandwidth capacity that the current infrastructure does not provide. 5G-enabled Internet of Things (IoT) in mobile edge computing is intended to substantially increase this capacity. In this context, we design a new efficient lightweight blockchain-enabled RFID-based authentication protocol for supply chains in the 5G mobile edge computing environment, called LBRAPS. LBRAPS is based only on bitwise exclusive-or (XOR), one-way cryptographic hash and bitwise rotation operations. LBRAPS is shown to be secure against various attacks. Moreover, simulation-based formal security verification using the broadly accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool assures that LBRAPS is secure. Finally, LBRAPS is shown to offer a better trade-off among security and functionality features, communication costs and computation costs than existing protocols.
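    Since LBRAPS is described as using only XOR, hash and bitwise rotation, a toy challenge-response step built from those same primitives may help the reader; the 32-bit word size, the values and the message pattern here are invented for illustration and are not taken from the paper:

    ```python
    import hashlib

    MASK32 = 0xFFFFFFFF

    def rotl32(x: int, r: int) -> int:
        """Rotate a 32-bit word left by r bits."""
        r %= 32
        return ((x << r) | (x >> (32 - r))) & MASK32

    def h32(x: int) -> int:
        """Hash a 32-bit word down to a 32-bit digest (truncated SHA-256)."""
        d = hashlib.sha256(x.to_bytes(4, "big")).digest()
        return int.from_bytes(d[:4], "big")

    # A tag reply in this toy exchange: rotate the tag secret by a
    # reader-chosen amount, XOR with a fresh nonce, then hash the result.
    secret, nonce, shift = 0xDEADBEEF, 0x12345678, 13
    reply = h32(rotl32(secret, shift) ^ nonce)

    # The reader, knowing secret and nonce, recomputes and compares.
    assert reply == h32(rotl32(secret, shift) ^ nonce)
    ```

    All three operations are single-cycle or near-single-cycle on typical RFID-class hardware, which is what makes this primitive set attractive for lightweight authentication.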

  • 56.
    Jiménez, Lara Lorna
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    DOCMA: A Decentralized Orchestrator for Containerized Microservice Applications. 2019. In: 2019 3rd IEEE International Conference on Cloud and Fog Computing Technologies and Applications (IEEE Cloud Summit 2019), Washington, D.C., USA: IEEE, 2019, p. 45-51. Conference paper (Refereed)
    Abstract [en]

    The advent of the Internet-of-Things and its associated applications are key business and technological drivers in industry. These pose challenges that change the playing field for Internet and cloud service providers, who must enable this new context. Applications and services must now be deployed not only to clusters in data centers but also across data centers and all the way to the edge. Thus, a more dynamic and scalable approach to deploying applications in the edge computing paradigm is necessary. We propose DOCMA, a fully distributed and decentralized orchestrator for containerized microservice applications, built on peer-to-peer principles to enable vast scalability and resiliency. DOCMA provides secure ownership and control of each application without requiring any designated orchestration nodes, as the system is automatic and self-healing. Experimental results on DOCMA's performance are presented to highlight its ability to scale.

  • 57.
    Fan, Kuan
    et al.
    School of Computer Science and Engineering, Northeastern University, Shenyang, China.
    Bao, Zijian
    School of Cyber Science and Engineering, Wuhan University, Wuhan, China.
    Liu, Mingxi
    School of Computer Science and Engineering, Northeastern University, Shenyang, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Computer Science and Technology, Fuzhou University, Fuzhou, China.
    Shi, Wenbo
    School of Computer and Communication Engineering, Northeastern University, Qinhuangdao, China.
    Dredas: Decentralized, reliable and efficient remote outsourced data auditing scheme with blockchain smart contract for industrial IoT. 2019. In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115. Article in journal (Refereed)
    Abstract [en]

    The development of cloud computing and the Internet of Things (IoT) attracts more and more enterprises to outsource data from their Industrial Internet of Things (IIoT) deployments to cloud servers in order to save operating costs and improve efficiency. However, in this environment, protecting the security and privacy of stored data is an important challenge for IIoT and cloud service providers (CSPs). Data auditing allows data owners to discover malicious behaviors by a CSP that destroy their outsourced data. Public auditing authorizes a trusted third-party auditor (TPA) to audit the owner’s outsourced data and frees the owner from these regular tasks. However, public auditing via a TPA is centralized, and the TPA is assumed to be totally honest, yet it is difficult to find a reliable auditing organization. In this paper, a novel decentralized auditing smart contract on Ethereum is proposed. By replacing the TPA with a purpose-built smart contract, we obtain a decentralized auditing scheme (Dredas) in which anyone can obtain the auditing result from Ethereum without worrying about a semi-honest TPA. Compared with traditional auditing, Dredas performs the traditional auditing functions and offers three important benefits over previous work. First, the random challenge values are more secure: Dredas uses the current blockchain nonce as a random seed, preventing any party from forging the random values. Second, to achieve safe, regular, proactive auditing, the protocol writes the auditing rules into the blockchain and uses the number of blocks on Ethereum as a security timestamp. Finally, data owners, users and CSPs must deposit some Ether in the smart contract, which not only inhibits malicious behavior by these three parties but also makes the scheme more realistic in practice. We implement Dredas and show that its computation costs are reasonable and efficient.
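    The abstract's first benefit, deriving audit challenges from the current blockchain nonce so that no single party can bias which data blocks get audited, can be sketched as follows; the function name, its parameters and the example nonce are hypothetical, not Dredas internals:

    ```python
    import hashlib

    def challenge_indices(block_nonce: bytes, num_blocks: int, c: int) -> list:
        """Derive c distinct pseudo-random data-block indices from a chain nonce.

        Because every party reads the same nonce from the chain, the
        selection is reproducible by all and forgeable by none.
        """
        indices = []
        counter = 0
        while len(indices) < c:
            digest = hashlib.sha256(block_nonce + counter.to_bytes(8, "big")).digest()
            idx = int.from_bytes(digest[:8], "big") % num_blocks
            if idx not in indices:          # sample without replacement
                indices.append(idx)
            counter += 1
        return indices

    nonce = bytes.fromhex("00ab4f2c90d1e8f3")   # hypothetical block nonce
    picked = challenge_indices(nonce, num_blocks=1000, c=5)
    ```

    Anyone holding the same nonce recomputes the same five indices, so the auditing contract, the owner and the CSP all agree on the challenge without a trusted coordinator.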

  • 58.
    Sakr, Sherif
    et al.
    University of Tartu, Estonia.
    Zomaya, Albert
    University of Sydney, Australia.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Editorial for the FGCS special issue on “Benchmarking big data systems”. 2019. In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 96, p. 32-34. Article in journal (Refereed)
    Abstract [en]

    Even though several big data processing and analytics systems have been introduced with various design architectures, we are still lacking a deeper understanding of the performance characteristics for the various design architectures in addition to lacking comprehensive benchmarks for the various Big Data platforms. There is a crucial need to conduct fundamental research with a more comprehensive performance evaluation for the various Big Data processing systems and architectures. We also lack the availability of validation tools, standard benchmarks, and system performance prediction methods that can help us have a deeper and more solid understanding of the strengths and weaknesses of the various Big Data processing platforms. This special issue is dedicated to original results and achievements by active researchers, designers, and developers working on various issues and challenges related to big data research.

  • 59.
    Kor, Ah-Lian
    et al.
    Leeds Beckett University.
    Rondeau, Eric
    University of Lorraine, Nancy, France.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Porras, Jari
    Lappeenranta University of Technology, Finland.
    Georges, Jean-Philippe
    University of Lorraine, Nancy, France.
    Education in Green ICT and Control of Smart Systems: A First Hand Experience from the International PERCCOM Masters Programme. 2019. In: Proceedings of the 12th International Federation of Automatic Control Symposium on Advances in Control Education (IFAC-ACE 2019), 2019, Vol. 52, p. 1-8, article id 9. Conference paper (Refereed)
    Abstract [en]

    PERCCOM (PERvasive Computing and COMmunications in sustainable development) Masters is the first innovative international programme in Green ICT, educating and equipping new IT engineers with Green IT skills for designing and implementing sustainable digital applications. After five years of running the PERCCOM programme, this paper provides an assessment of skills and employability in the context of green jobs and skills. The paper ends with a list of recommendations for developing environment-related education curricula.

  • 60.
    Wu, Weiwei
    et al.
    School of Computer Science, Southeast University, Nanjing, Jiangsu, China.
    Wang, Wanyuan
    School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu, China.
    Fang, Xiaolin
    Computer Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China.
    Luo, Junzhou
    School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Electricity Price-aware Consolidation Algorithms for Time-sensitive VM Services in Cloud Systems. 2019. In: IEEE Transactions on Services Computing, ISSN 1939-1374, E-ISSN 1939-1374. Article in journal (Refereed)
    Abstract [en]

    Despite the salient features of cloud computing, the cloud provider still suffers from high electricity bills, which mainly stem from 1) the power consumption of running physical machines and 2) the dynamically varying electricity prices offered by smart grids. In the literature, there exist viable solutions that adapt to electricity price variation to reduce the electricity bill; however, they are not applicable to serving time-sensitive VM requests. When serving time-sensitive VM requests, the cloud provider can apply proper consolidation strategies to further reduce the electricity bill. In this work, we develop electricity-price-aware consolidation algorithms for both the offline and online scenarios. For the offline scenario, we develop a consolidation algorithm with constant approximation, which always approaches the optimal solution within a constant factor of 5. For the online scenario, we propose an $O(\log(\frac{L_{max}}{L_{min}}))$-competitive algorithm that approaches the optimal offline solution within a logarithmic factor, where $\frac{L_{max}}{L_{min}}$ is the ratio of the longest processing-time requirement of the VMs to the shortest one. Our trace-driven simulation results further demonstrate that the proposed algorithms produce near-optimal electricity bills on average.
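    The competitive ratio quoted above depends only on the spread of VM processing times. A small sketch of evaluating that factor for a workload; the helper name and the sample lengths are invented for illustration:

    ```python
    import math

    def competitive_factor(lengths, base=2):
        """Logarithmic factor O(log(L_max / L_min)) from the abstract:
        the online algorithm's cost is within this order of the offline
        optimum, where lengths are the VMs' processing-time requirements."""
        l_max, l_min = max(lengths), min(lengths)
        return math.log(l_max / l_min, base)

    # Example: VM jobs from 1 to 64 time units give a factor of log2(64) = 6.
    vm_lengths = [1, 4, 8, 16, 64]
    factor = competitive_factor(vm_lengths)
    ```

    The factor grows only logarithmically: even a workload whose longest job is a thousand times the shortest one yields a single-digit bound in base 2, which is what makes the online guarantee useful in practice.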

  • 61.
    Fu, Zhangjie
    et al.
    Department of Computer and Software, Nanjing University of Information Science and Technology.
    Huang, Fengxiao
    Department of Computer and Software, Nanjing University of Information Science and Technology.
    Sun, Xingming
    Department of Computer and Software, Nanjing University of Information Science and Technology.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Enabling Semantic Search based on Conceptual Graphs over Encrypted Outsourced Data. 2019. In: IEEE Transactions on Services Computing, ISSN 1939-1374, E-ISSN 1939-1374, Vol. 12, no. 5, p. 813-823. Article in journal (Refereed)
    Abstract [en]

    Searchable encryption is currently a hot topic in the field of cloud computing. Existing work mainly focuses on keyword-based search schemes, almost all of which depend on keywords predefined during index construction and querying. However, keyword-based search schemes ignore the semantic information in users’ queries and cannot fully match users’ search intentions. Therefore, designing a content-based search scheme that makes semantic search more effective and context-aware is a difficult challenge. In this paper, for the first time, we define and solve the problem of semantic search based on conceptual graphs (CGs) over encrypted outsourced data in cloud computing (SSCG). We first employ the efficient “sentence scoring” measure from text summarization, together with Tregex, to extract the most important and simplified topic sentences from documents. We then convert these simplified sentences into CGs. To perform quantitative calculations on CGs, we design a new method that maps CGs to vectors. Next, we rank the returned results by “text summarization score”. Furthermore, we propose a basic scheme for SSCG and then a significantly improved scheme that satisfies the security guarantees of searchable symmetric encryption (SSE). Finally, we test our scheme on a real-world dataset, i.e., the CNN dataset. The experimental results show the effectiveness of the proposed scheme.

  • 62.
    Sun, Gang
    et al.
    Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, Chengdu, China. Center for Cyber Security, University of Electronic Science and Technology of China, Chengdu, China.
    Li, Yayu
    Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, Chengdu, China.
    Yu, Hongfang
    Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, Chengdu, China. Center for Cyber Security, University of Electronic Science and Technology of China, Chengdu, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Du, Xiaojiang
    Department of Computer and Information Sciences, Temple University, Philadelphia, PA, USA.
    Guizani, Mohsen
    Department of Electrical and Computer Engineering, University of Idaho, Moscow, ID, USA.
    Energy-efficient and traffic-aware service function chaining orchestration in multi-domain networks. 2019. In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 91, p. 347-360. Article in journal (Refereed)
    Abstract [en]

    Service function chaining (SFC) provisioning is helpful not only for saving the capital expenditure (CAPEX) and operational expenditure (OPEX) of a network provider but also for reducing energy consumption in the substrate network. However, to the best of our knowledge, there has been little research on the problem of energy consumption for orchestrating online SFC requests in multi-domain networks. In this paper, we first formulate the problem of an energy-efficient online SFC request that is orchestrated across multiple clouds as an integer linear programming (ILP) model to find an optimal solution. Then, we analyze the complexity of this ILP model and prove that the problem is NP-hard. Additionally, we propose a low-complexity heuristic algorithm named energy-efficient online SFC request orchestration across multiple domains (EE-SFCO-MD) for near-optimally solving the mentioned problem. Finally, we conduct simulation experiments to evaluate the performance of our algorithm. Simulation results show that EE-SFCO-MD consumes less energy than existing approaches while the online SFC’s requirements are met and the privacy of each cloud is effectively guaranteed. The low computational complexity of the heuristic approach makes it applicable for quickly responding to online SFC requests.

  • 63.
    Chude-Okonkwo, Uche A. K.
    et al.
    University of Pretoria.
    Maharaj, B. T.
    University of Pretoria.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Malekian, Reza
    University of Pretoria.
    Exploring the Impact of Ligand Residence Time on Molecular Communication System Performance. 2019. In: 2019 IEEE Global Communications Conference (GLOBECOM), IEEE, 2019. Conference paper (Other academic)
    Abstract [en]

    Information reception in artificially synthesized molecular communication (MC) systems ideally follows the mechanisms that natural nanosystems use to communicate. One such reception mechanism is so-called ligand-receptor binding. Contemporary MC research has discussed this mechanism considerably; however, the impact of a crucial parameter associated with the ligand-receptor binding action has not been given appropriate attention in the MC literature. This parameter, termed the residence time, has played a crucial role in defining, for instance, the efficacy of drugs in therapeutic processes; hence, it is critical to the performance of MC. In this paper, we employ a biophysical approach to model and discuss the influence of the ligand residence time on the performance of MC systems. The performance metrics considered here are the receiver sensitivity and the intersymbol interference. Numerical results exposing the impact of the residence time on these metrics, and the interrelationships between these metrics in an MC system, are discussed.

  • 64.
    Uddin Ahmed, Tawsin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Bangladesh.
    Hossain, Sazzad
    Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Facial Expression Recognition using Convolutional Neural Network with Data Augmentation. 2019. In: Proceedings of the Joint 2019 8th International Conference on Informatics, Electronics & Vision (ICIEV), 2019. Conference paper (Refereed)
    Abstract [en]

    Detecting emotion from facial expressions has become an urgent need because of its immense applications in artificial intelligence, such as human-computer collaboration, data-driven animation and human-robot communication. Since it is a demanding and interesting problem in computer vision, several works have been conducted on this topic. The objective of this research is to develop a facial expression recognition system based on a convolutional neural network with data augmentation. The approach classifies seven basic emotions (angry, disgust, fear, happy, neutral, sad and surprise) from image data. The convolutional neural network with data augmentation achieves higher validation accuracy (96.24%) than other existing models and helps to overcome their limitations.
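    Data augmentation, as used in the paper above, enlarges a training set with label-preserving transforms of each image. A minimal stdlib sketch, assuming horizontal flips and pixel shifts as the transforms (the actual augmentation operations used by the authors are not specified in this abstract):

    ```python
    def hflip(img):
        """Mirror an image (2-D list of pixel values) left-right."""
        return [row[::-1] for row in img]

    def shift_right(img, k, fill=0):
        """Shift pixels right by k columns, padding with `fill`."""
        return [[fill] * k + row[:-k] if k else row[:] for row in img]

    def augment(images):
        """Return each original plus flipped and shifted variants, a
        common way to enlarge a small facial-expression training set."""
        out = []
        for img in images:
            out.extend([img, hflip(img), shift_right(img, 1)])
        return out

    face = [[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]]
    augmented = augment([face])   # 3 training samples from 1 image
    ```

    A flipped or slightly shifted face still carries the same expression label, so the network sees more variation per labeled example, which is how augmentation improves validation accuracy without collecting new data.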

  • 65.
    Makkie, Milad
    et al.
    Computer Science Department, University of Georgia, Athens, GA, USA.
    Huang, Heng
    School of Automation, Northwestern Polytechnical University, Xi'an, China.
    Zhao, Yu
    Computer Science Department, University of Georgia, Athens, GA, USA.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Liu, Tianming
    Harvard Center for Neurodegeneration and Repair, Boyd GSRC 420, Athens, GA 30602, United States.
    Fast and Scalable Distributed Deep Convolutional Autoencoder for fMRI Big Data Analytics. 2019. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 325, p. 20-30. Article in journal (Refereed)
    Abstract [en]

    In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsically complex structure, and the lack of ground truth about underlying neural activities, modeling tfMRI data is challenging. Previously proposed data modeling methods, including Independent Component Analysis (ICA) and Sparse Dictionary Learning, only provide shallow models based on blind source separation, under the strong assumption that the original fMRI signals can be linearly decomposed into time series components with corresponding spatial maps. Given the successes of Convolutional Neural Networks (CNNs) in learning hierarchical abstractions from low-level data such as tfMRI time series, in this work we propose a novel scalable distributed deep CNN autoencoder model and apply it to fMRI big data analysis. The model aims both to learn the complex hierarchical structures of the tfMRI big data and to leverage the processing power of multiple GPUs in a distributed fashion. To deploy the model, we have created an enhanced processing pipeline on top of Apache Spark and TensorFlow, leveraging a large cluster of GPU nodes in the cloud. Experimental results from applying the model to Human Connectome Project (HCP) data show that the proposed model is efficient and scalable for tfMRI big data modeling and analytics, thus enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI big data.

  • 66.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Martinsson, Torbjörn
    Volvo CE, Bolindervägen 5, 63185 Eskilstuna, Sweden.
    Field test of neural-network based automatic bucket-filling algorithm for wheel-loaders2019In: Automation in Construction, ISSN 0926-5805, E-ISSN 1872-7891, Vol. 97, p. 1-12Article in journal (Refereed)
    Abstract [en]

    Automation of the earth-moving industries (construction, mining and quarrying) requires automatic bucket-filling algorithms for efficient operation of front-end loaders. Autonomous bucket-filling has been an open problem for three decades due to the difficulty of developing useful earth models (soil, gravel and rock) for automatic control. Operators use visual, auditory and vestibular feedback to perform the bucket-filling operation with high productivity and fuel efficiency. In this paper, field experiments with a small time-delayed neural network (TDNN) implemented in the bucket control loop of a Volvo L180H front-end loader filling medium-coarse gravel are presented. The total delay time of the TDNN is found to be an important hyperparameter due to the variable delay present in the hydraulics of the wheel-loader. The TDNN successfully performs the bucket-filling operation after an initial period (100 examples) of imitation learning from an expert operator. The demonstrated solution shows only 26% longer bucket-filling time, an improvement over manual tele-operation performance.
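A time-delayed neural network of the kind described feeds the current sample and several delayed sensor samples into a small feed-forward network. A hypothetical minimal forward pass (layer sizes, delay taps and weights are illustrative, not the paper's):

```python
import numpy as np

def tdnn_step(history, W, b, delays):
    """One forward pass of a tiny time-delayed network: the control output
    depends on the current sample and several delayed samples of each input."""
    taps = np.concatenate([history[-1 - d] for d in delays])  # delayed feature taps
    hidden = np.tanh(W[0] @ taps + b[0])
    return np.tanh(W[1] @ hidden + b[1])   # e.g. lift/tilt lever commands

rng = np.random.default_rng(1)
delays = (0, 2, 4)   # the total delay span is the key hyperparameter
n_inputs, n_hidden, n_outputs = 3, 8, 2
W = [rng.standard_normal((n_hidden, n_inputs * len(delays))),
     rng.standard_normal((n_outputs, n_hidden))]
b = [np.zeros(n_hidden), np.zeros(n_outputs)]
history = [rng.standard_normal(n_inputs) for _ in range(5)]  # recent sensor samples
u = tdnn_step(history, W, b, delays)
```

In imitation learning, `W` and `b` would be fitted so that `u` matches the expert operator's recorded lever commands.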

    Download full text (pdf)
    fulltext
  • 67.
    Zhan, Yufeng
    et al.
    School of Automation, Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing, PR China.
    Xia, Yuanqing
    School of Automation, Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing, PR China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Future directions of networked control systems: A combination of cloud control and fog control approach2019In: Computer Networks, ISSN 1389-1286, E-ISSN 1872-7069, Vol. 161, p. 235-248Article in journal (Refereed)
    Abstract [en]

    Networked control technology plays a key role in the Internet of Things (IoT). However, the volume, variety and velocity of big data from the IoT mean that traditional networked control systems (NCSs) can no longer meet current requirements. Cloud control systems have therefore emerged as a new control paradigm that brings many benefits and plays a key role in today's IoT. Despite their tremendous advantages, cloud control systems still face tough challenges, such as latency and network congestion, which hinder their development. To address these challenges, we extend cloud control systems to cloud fog control systems, which bring fog computing into the design of NCSs. First, recent studies of fog computing are surveyed. Second, a new NCS architecture based on cloud computing and fog computing is proposed. Then, an incentive mechanism is designed for cloud fog control systems. Finally, cases of control task offloading and a simple cloud fog control system platform are studied.
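The core offloading trade-off the abstract mentions, fog for low latency versus cloud for compute power, can be sketched as a completion-time comparison (all figures and field names below are hypothetical, not taken from the paper):

```python
def offload_target(task_size_mb, deadline_ms, fog, cloud):
    """Pick the execution site with the lower estimated completion time and
    report whether that estimate meets the control-loop deadline."""
    def latency(site):
        transfer = task_size_mb * 8 / site["bw_mbps"] * 1000  # ms on the link
        compute = task_size_mb / site["rate_mb_per_ms"]       # ms of processing
        return site["rtt_ms"] + transfer + compute
    t_fog, t_cloud = latency(fog), latency(cloud)
    best = "fog" if t_fog <= t_cloud else "cloud"
    return best, min(t_fog, t_cloud) <= deadline_ms

# Illustrative numbers: the fog node is nearby but slower than the cloud.
fog = {"rtt_ms": 2, "bw_mbps": 100, "rate_mb_per_ms": 0.5}
cloud = {"rtt_ms": 40, "bw_mbps": 1000, "rate_mb_per_ms": 5.0}
site, meets = offload_target(task_size_mb=0.1, deadline_ms=20, fog=fog, cloud=cloud)
```

For this small control task the round-trip time dominates, so the fog tier wins despite its lower compute rate, which is exactly the motivation for bringing fog computing into NCSs.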

  • 68.
    Lindberg, Renny S. N.
    et al.
    Vrije Universiteit Brussel.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Haaranen, Lassi
    Aalto University.
    Gamifying programming education in K‐12: A review of programming curricula in seven countries and programming games2019In: British Journal of Educational Technology, ISSN 0007-1013, E-ISSN 1467-8535, Vol. 50, no 4, p. 1979-1995Article in journal (Refereed)
    Abstract [en]

    An increasing number of countries have recently included programming education in their curricula. Similarly, utilizing programming concepts in gameplay has become popular in the videogame industry. Although many games have been developed for learning to program, their variety and their correspondence to national curricula remain uncharted territory. Consequently, this paper has three objectives. Firstly, we investigated the guidelines on K‐12 programming education in seven countries by collecting curricula and other relevant official data from governmental and non‐profit educational websites. Secondly, we reviewed existing acquirable games that utilize programming topics in their gameplay by searching popular game stores. Lastly, we compared the curricula and made suggestions as to which age groups the identified games would be suitable for. The results of this study can be useful to educators and curriculum designers who wish to gamify programming education.

  • 69.
    Chiu, Wei-Yu
    et al.
    Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan.
    Sun, Hongjian
    Department of Engineering Durham University Durham, U.K.
    Wang, Chao
    Department of Computer Science University of Exeter, Innovation Center, Exeter, U.K.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Guest Editorial: Special Issue on Computational Intelligence for Smart Energy Applications to Smart Cities2019In: IEEE Transactions on Emerging Topics in Computational Intelligence, E-ISSN 2471-285X, Vol. 3, no 3, p. 173-176Article in journal (Refereed)
    Abstract [en]

    The papers in this special section focus on computational intelligence (CI) for smart energy applications in smart cities. By 2050, more than half the world's population is expected to live in urban regions. This rapid expansion of the population in the cities of the future will lead to increasing demands on various infrastructures, and urban economies will play a major role in national economies. Cities must be competitive by providing smart functions to support a high quality of life. There is thus an urgent need to develop smart cities that possess a number of smart components. Among them, smart energy is arguably the first infrastructure to be established, because almost all systems require energy to operate. Smart energy refers to monitoring, predicting, using or managing energy in a smart way. In smart cities, smart energy applications include smart grids, smart mobility, and smart communications. While smart energy holds great promise for smart cities, realizing it involves a number of challenges. The articles in this section aim to provide in-depth CI technologies that enable smart energy applications in smart cities.

  • 70.
    Yang, Chao-Tung
    et al.
    Department of Computer Science, Tunghai University, Taichung City, Taiwan, ROC.
    Chen, Shuo-Tsung
    Artificial Intelligence Recognition Industry Service Research Center (AIR-IS Research Center), National Yunlin University of Science and Technology, Yunlin, Taiwan, ROC. College of Future, Bachelor Program in Interdisciplinary Studies, National Yunlin University of Science and Technology, Taiwan, ROC.
    Liu, Jung-Chun
    Department of Computer Science, Tunghai University, Taichung City, Taiwan, ROC.
    Yang, Yao-Yu
    Department of Computer Science, Tunghai University, Taichung City, Taiwan, ROC.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ranjan, Rajiv
    School of Computer, China University of Geosciences, China. School of Computing Science, Newcastle University, United Kingdom.
    Implementation of a real-time network traffic monitoring service with network functions virtualization2019In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 93, p. 687-701Article in journal (Refereed)
    Abstract [en]

    Network Functions Virtualization (NFV) extends the functionality provided by Software-Defined Networking (SDN). It is a virtualization technology that aims to replace the functionality of traditional networking hardware with software solutions, thereby enabling cheaper and more efficient network deployment and management. The use of NFV and SDN is anticipated to enhance the performance of Infrastructure-as-a-Service (IaaS) clouds. However, because IaaS clouds contain a large number of network devices offering a plethora of networked services, a traffic monitoring system is needed for efficient network management. This paper proposes and validates an extensible SDN- and NFV-enabled network traffic monitoring system. Using extensive experiments, we show that the proposed system can closely match the performance of traditional networks at lower cost while adding flexibility to network management tasks.

  • 71.
    Messina, Fabrizio
    et al.
    University of Catania, Catania, Italy.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    De Meo, Pasquale
    University of Messina, Messina, Italy.
    Introduction to the special section on Recent trends in flocking control and communication for Unmanned vehicles2019In: Computers & electrical engineering, ISSN 0045-7906, E-ISSN 1879-0755, Vol. 80, article id 106495Article in journal (Refereed)
  • 72.
    Chowdury, Mohammad Salah Uddin
    et al.
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Bin Emranb, Talha
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Ghosha, Subhasish
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Pathak, Abhijit
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Alama, Mohd. Manjur
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Absar, Nurul
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    IoT Based Real-time River Water Quality Monitoring System2019In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 155, p. 161-168Article in journal (Refereed)
    Abstract [en]

    Current water quality monitoring is a manual, monotonous and very time-consuming process. This paper proposes a sensor-based water quality monitoring system. The main components of the Wireless Sensor Network (WSN) are a microcontroller for processing, a communication system for inter- and intra-node communication, and several sensors. Real-time data access is achieved using remote monitoring and Internet of Things (IoT) technology. Data collected at remote sites can be displayed in a visual format on a server PC with the help of Spark streaming analysis through Spark MLlib, deep learning neural network models and a Belief Rule Based (BRB) system, and is also compared with standard values. If an acquired value exceeds the threshold, an automated SMS warning is sent to the agent. The uniqueness of our proposal lies in a water monitoring system with high frequency, high mobility and low power consumption. Our proposed system can therefore greatly help the Bangladeshi population become conscious of contaminated water and stop polluting it.
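The threshold-check-then-alert step can be sketched as follows; the safe ranges here are placeholders, not the standard values used in the paper:

```python
# Hypothetical safe ranges; real limits come from national water standards.
SAFE_RANGES = {"ph": (6.5, 8.5), "turbidity_ntu": (0.0, 5.0), "temp_c": (4.0, 30.0)}

def check_sample(sample):
    """Return the parameters outside their safe range; a non-empty result
    is what would trigger the automated SMS alert to the agent."""
    return [k for k, (lo, hi) in SAFE_RANGES.items()
            if not lo <= sample[k] <= hi]

alerts = check_sample({"ph": 9.1, "turbidity_ntu": 3.2, "temp_c": 25.0})
# here only the pH reading is out of range, so one alert would be raised
```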

    Download full text (pdf)
    fulltext
  • 73.
    Pathak, Abhijit
    et al.
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Uddin, Mohammad Amaz
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Abedin, Md. Jainal
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mustafa, Rashed
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    IoT based Smart System to Support Agricultural Parameters: A Case Study2019In: Proceedings of the 6th International Symposium on Emerging Inter-networks, Communication and Mobility (EICM), Elsevier, 2019, p. 648-653Conference paper (Refereed)
    Abstract [en]

    Nowadays, natural irrigation systems are under pressure due to growing water shortages, mainly caused by population growth and climate change. Controlling water resources to increase the allocation of retained water is therefore very important. Over the last two decades, especially in the Indian subcontinent, climate change has significantly affected agricultural crop production. Predicting good harvests before harvest enables farmers as well as government officials to take appropriate measures for the marketing and storage of crops. Some strategies for predicting and modelling crop yields have been developed, although they do not take climate characteristics into account and are empirical in nature. In the proposed system, a Cuckoo Search Algorithm has been developed that allows the allocation of water for farming under any conditions. Parameters such as temperature, turbidity, pH and moisture are collected using an Internet of Things (IoT) platform equipped with the related sensors and wireless communication systems. On this IoT platform, the sensor data are displayed in the cloud environment using ThingSpeak. The data received in ThingSpeak are used by the proposed Cuckoo Search Algorithm to select appropriate crops for particular soils.
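Cuckoo search optimizes by generating new candidate solutions ("eggs") via random flights, replacing worse nests, and abandoning a fraction of the worst nests each generation. A minimal sketch on a toy objective follows; Gaussian steps stand in for the usual Levy flights, and this is not the paper's water-allocation formulation:

```python
import numpy as np

def cuckoo_search(f, bounds, n_nests=15, n_iter=200, pa=0.25, seed=0):
    """Minimal cuckoo search minimising f over a box-constrained domain."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, len(lo)))
    fit = np.array([f(x) for x in nests])
    for _ in range(n_iter):
        i = rng.integers(n_nests)
        step = 0.1 * (hi - lo) * rng.standard_normal(len(lo))
        new = np.clip(nests[i] + step, lo, hi)         # a cuckoo lays a new egg
        j = rng.integers(n_nests)
        if f(new) < fit[j]:                            # it replaces a worse nest
            nests[j], fit[j] = new, f(new)
        worst = np.argsort(fit)[-int(pa * n_nests):]   # abandon the worst nests
        nests[worst] = rng.uniform(lo, hi, size=(len(worst), len(lo)))
        fit[worst] = [f(x) for x in nests[worst]]
    best = int(np.argmin(fit))
    return nests[best], fit[best]

# Toy stand-in for the water-allocation objective: a quadratic bowl.
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x_best, f_best = cuckoo_search(lambda x: float(np.sum(x ** 2)), (lo, hi))
```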

    Download full text (pdf)
    fulltext
  • 74.
    Liu, Xiao
    et al.
    School of Information Science and Engineering, Central South University, Changsha, China.
    Zhao, Shaona
    School of Information Science and Engineering, Central South University, Changsha, China.
    Liu, Anfeng
    School of Information Science and Engineering, Central South University, Changsha, China.
    Xiong, Naixue
    Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK, United States.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Knowledge-aware Proactive Nodes Selection approach for energy management in Internet of Things2019In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 92, p. 1142-1156Article in journal (Refereed)
    Abstract [en]

    The Internet of Things (IoT) will serve communities across different domains of life. Tracking mobile targets is an important system engineering application in the IoT, and the resources of embedded devices and objects working under IoT implementations are constrained. Thus, building a scheme that makes full use of energy is a key issue for mobile target tracking applications. To achieve both energy efficiency and high monitoring performance, an effective Knowledge-aware Proactive Nodes Selection (KPNS) system is proposed in this paper. The innovations of KPNS are as follows: 1) the number of proactive nodes is dynamically adjusted based on the prediction accuracy of the target trajectory. If the prediction accuracy is high, the number of proactive nodes in the non-main predicted area is decreased; if it is low, a large number of proactive nodes are selected to enhance monitoring quality. 2) KPNS takes full advantage of the available energy to further enhance tracking performance by properly selecting more proactive nodes in the network. We evaluated the efficiency of KPNS with both theoretical analysis and simulation-based experiments. The experimental results demonstrate that, compared with the Probability-based target Prediction and Sleep Scheduling strategy (PPSS), KPNS improves energy efficiency by 60% and can reduce the target missing rate and tracking delay to 66% and 75%, respectively.
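Innovation 1) of KPNS, scaling the number of awake proactive nodes inversely with trajectory-prediction accuracy, can be sketched as a simple rule (the linear scaling law and the floor value are illustrative choices, not taken from the paper):

```python
def proactive_nodes(total, accuracy, n_min=4):
    """Scale the number of proactive sensor nodes inversely with the
    prediction accuracy of the target trajectory."""
    assert 0.0 <= accuracy <= 1.0
    return max(n_min, round(total * (1.0 - accuracy)))

# High prediction accuracy -> few nodes outside the predicted area wake up.
high = proactive_nodes(100, accuracy=0.9)
# Low accuracy -> many nodes are activated to avoid missing the target.
low = proactive_nodes(100, accuracy=0.2)
```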

  • 75.
    Andersson, Karl
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Distance- Spanning Technology. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Tan, Hwee-Pink
    Singapore Management University, Singapore.
    LCN 2019 Message from the TPC Chairs2019In: 2019 IEEE 44th Conference on Local Computer Networks (LCN) / [ed] Karl Andersson, Hwee-Pink Tan, Sharief Oteafy, IEEE, 2019, p. i-iConference paper (Other academic)
  • 76.
    Song, Qiang
    et al.
    Henan University of Technology, College of Electrical Engineering, Zhengzhou, China.
    Liu, Fang
    Huanghuai University, Zhumadian, China.
    Cao, Jinde
    Southeast University, Research Center for Complex Systems and Network Sciences, Nanjing, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Tang, Yang
    East China University of Science and Technology, Institute of Physics, Berlin, Germany.
    Leader-following synchronization of coupled homogeneous and heterogeneous harmonic oscillators based on relative position measurements2019In: IEEE Transactions on Control of Network Systems, ISSN 2325-5870, Vol. 6, no 1, p. 13-23Article in journal (Refereed)
    Abstract [en]

    This paper considers the leader-following synchronization problem for a network of coupled harmonic oscillators by utilizing the relative position measurements between neighboring nodes, where the node dynamics can be either identical or nonidentical. For a homogeneous network with the same node dynamics, two types of first-order observer-based protocols are proposed to achieve leader-following synchronization in the network under some necessary and sufficient conditions, including some synchronization criteria for the homogeneous network subject to parameter uncertainty. For a heterogeneous network with different node dynamics, an output regulation approach is applied to solve the leader-following synchronization problem for the nominal network, based on which the robust synchronization of the uncertain network is investigated with an allowable bound being estimated for parameter uncertainties. Numerical examples are given to illustrate the correctness and the feasibility of the theoretical analysis. 

  • 77.
    Sun, Gang
    et al.
    Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, Chengdu, China.
    Xu, Zhu
    Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, Chengdu, China.
    Yu, Hongfang
    Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, Chengdu, China.
    Chen, Xi
    School of Computer Science and Technology, Southwest Minzu University, Chengdu, China.
    Chang, Victor
    School of Computing & Digital Technologies, Teesside University, Middlesbrough, UK.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Low-latency and Resource-efficient Service Function Chaining Orchestration in Network Function Virtualization2019In: IEEE Internet of Things Journal, ISSN 2327-4662Article in journal (Refereed)
    Abstract [en]

    Recently, network function virtualization (NFV) has been proposed to solve the dilemma faced by traditional networks and to improve network performance through the decoupling of hardware and software. The deployment of the service function chain (SFC) is a key technology that affects the performance of virtual network functions (VNFs). The key issue in SFC deployment is designing effective algorithms that achieve efficient use of resources. In this paper, we propose a service function chain deployment optimization (SFCDO) algorithm based on breadth-first search (BFS). The algorithm first uses BFS to find the shortest path between the source node and the destination node. Then, based on the shortest path, the path with the fewest hops is preferentially chosen to implement the SFC deployment. Finally, we compare the performance with the greedy and simulated annealing (G-SA) algorithm. The experimental results show that the proposed algorithm is optimized in terms of end-to-end delay and bandwidth resource consumption. In addition, we also consider the load rate of the nodes to achieve network load balancing.
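The first stage of the described algorithm, finding a fewest-hop path with BFS, is standard graph traversal and can be sketched as:

```python
from collections import deque

def bfs_shortest_path(adj, src, dst):
    """Fewest-hop path between two nodes via breadth-first search;
    returns None when dst is unreachable."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        if u == dst:                     # first arrival = fewest hops
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

# Toy substrate network: each SFC would then be mapped along such a path.
adj = {"s": ["a", "b"], "a": ["c"], "b": ["c", "d"], "c": ["t"], "d": ["t"]}
path = bfs_shortest_path(adj, "s", "t")
```

Minimizing hop count in this way is what bounds the end-to-end delay and bandwidth consumed by each deployed chain.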

  • 78.
    Karvonen, Niklas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Nilsson, Joakim
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Kleyko, Denis
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Jimenez, Lara Lorna
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Low-Power Classification using FPGA: An Approach based on Cellular Automata, Neural Networks, and Hyperdimensional Computing2019In: 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA) / [ed] M. Arif Wani, Taghi M. Khoshgoftaar, Dingding Wang, Huanjing Wang, Naeem (Jim) Seliya, IEEE, 2019, p. 370-375Conference paper (Other academic)
    Abstract [en]

    Field-Programmable Gate Arrays (FPGAs) are hardware components that hold several desirable properties for wearable and Internet of Things (IoT) devices. They offer hardware implementations of algorithms using parallel computing, which can be used to increase battery life or achieve short response times. Further, they are re-programmable and can be made small, power-efficient and inexpensive. In this paper we propose a classifier targeted specifically at FPGA implementation, using principles from hyperdimensional computing and cellular automata. The proposed algorithm is shown to perform on par with Naive Bayes on two benchmark datasets while also being robust to noise. It is also synthesized for a commercially available off-the-shelf FPGA, reaching over 57.1 million classifications per second on a 3-class problem using 40 input features of 8 bits each. The results show that the proposed classifier is a viable option for applications demanding low power consumption, fast real-time responses, or robustness against post-training noise.
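In hyperdimensional computing, classes are represented by high-dimensional binary prototype vectors built by majority-vote bundling of training examples, and queries are classified by Hamming distance. A minimal sketch (not the paper's cellular-automata-based design; the encoding of real sensor features into hypervectors is elided, with bit-flip noise standing in for it):

```python
import numpy as np

D = 2048                              # hypervector dimensionality
rng = np.random.default_rng(2)

def random_hv():
    return rng.integers(0, 2, D, dtype=np.int8)

def bundle(hvs):
    """Bitwise majority vote: builds a class prototype from examples."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.int8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def noisy(hv, p=0.1):
    """Flip each bit with probability p (stand-in for feature encoding)."""
    flip = rng.random(D) < p
    return np.where(flip, 1 - hv, hv).astype(np.int8)

base = {c: random_hv() for c in ("walk", "run")}       # hypothetical classes
protos = {c: bundle([noisy(base[c]) for _ in range(9)]) for c in base}
query = noisy(base["run"], p=0.2)                      # an unseen, noisier example
pred = min(protos, key=lambda c: hamming(protos[c], query))
```

Because random hypervectors are nearly orthogonal at this dimensionality, the query stays far closer to its own class prototype than to any other even under heavy bit-flip noise, which is the robustness property the abstract reports.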

  • 79.
    Seo, Jungryul
    et al.
    Ajou University.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Sohn, Kyung-Ah
    Ajou University.
    Machine learning approaches for boredom classification using EEG2019In: Journal of Ambient Intelligence and Humanized Computing, ISSN 1868-5137, E-ISSN 1868-5145, Vol. 10, no 10, p. 3831-3846Article in journal (Refereed)
    Abstract [en]

    Recently, commercial physiological sensors and computing devices have become cheaper and more accessible, while computer systems have become increasingly aware of their contexts, including but not limited to users' emotions. Consequently, many studies on emotion recognition have been conducted. However, boredom has received relatively little attention as a target emotion due to its diverse nature. Moreover, only a few researchers have tried classifying boredom using electroencephalogram (EEG) data. In this study, we first reviewed previous work on classifying emotions using EEG. We then designed and executed an experiment that used a video stimulus to evoke boredom and non-boredom, and collected EEG data from 28 Korean adult participants. From these data we extracted absolute band power, normalized absolute band power, differential entropy, differential asymmetry, and rational asymmetry features, and trained three machine learning algorithms on them: support vector machine, random forest, and k-nearest neighbors (k-NN). We validated the performance of each trained model with 10-fold cross-validation. The highest accuracy, 86.73%, was achieved with k-NN. The findings of this study can be of interest to researchers working on emotion recognition, physiological signal processing, machine learning, and emotion-aware system development.
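The best-performing method reported, k-NN, reduces to a majority vote among the nearest training samples. A minimal sketch on synthetic 2-D features (the actual study used high-dimensional EEG band-power and entropy features, not these toy clusters):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify one feature vector (e.g. EEG band powers) by majority vote
    among its k nearest training samples in Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(3)
# Hypothetical well-separated 'bored' (0) vs 'not bored' (1) clusters.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X, y, np.array([0.1, -0.2]))
```

In the study's setup, accuracy would then be estimated by repeating such predictions across 10 cross-validation folds.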

  • 80.
    Kim, Joo Chan
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Lindberg, Renny S. N.
    WISE, Department of Computer Science, Vrije Universiteit Brussel, Belgium.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Faarinen, Ewa-Charlotte
    Luleå University of Technology, Department of Arts, Communication and Education, Education, Language, and Teaching.
    De Troyer, Olga
    WISE, Department of Computer Science, Vrije Universiteit Brussel, Belgium.
    Nygren, Eeva
    Department of Future Technologies, University of Turku, Finland.
    Multidisciplinary Development Process of a Story-based Mobile Augmented Reality Game for Learning Math2019In: Proceedings on the 17th International Conference on Emerging eLearning Technologies and Applications: PROCEEDINGS / [ed] František Jakab, IEEE, 2019, p. 372-377Conference paper (Refereed)
    Abstract [en]

    Despite the high number of educational games released, only a few have a strong story that is more than an excuse for players' actions. Furthermore, even fewer story-based games utilise the affordances of augmented reality (AR) to concretise abstract concepts while engaging players. Inspired by our literature review, we merged AR into a story-based educational mobile game for teaching fractions to elementary school students. The game, Tales & Fractions, was created through a two-phase multidisciplinary development process. To successfully integrate AR into a story-based educational game, we employed an adapted version of the Scrum agile software development method, implemented by a multidisciplinary team of experts in computer science, pedagogy, design and the arts. During development, we faced many issues that other story-based AR game developers may encounter. We summarise these issues together with our solutions, which could help developers avoid common pitfalls and enrich user engagement.

  • 81.
    Li, He
    et al.
    Department of Information and Electronic Engineering, Muroran Institute of Technology, Muroran, Hokkaido.
    Ota, Kaoru
    Department of Information and Electronic Engineering, Muroran Institute of Technology, Muroran, Hokkaido.
    Dong, Mianxiong
    Department of Information and Electronic Engineering, Muroran Institute of Technology, Muroran, Hokkaido.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Nagano, Koji
    Department of Information and Electronic Engineering, Muroran Institute of Technology, Muroran, Hokkaido.
    Multimedia Processing Pricing Strategy in GPU-accelerated Cloud Computing2019In: IEEE Transactions on Cloud Computing, E-ISSN 2168-7161Article in journal (Refereed)
    Abstract [en]

    Graphics processing unit (GPU) accelerated processing achieves significant efficiency gains in many multimedia applications. With the development of GPU cloud computing, more and more cloud providers focus on GPU-accelerated services. Because of high maintenance costs and the different speedups achieved by various applications, GPU-accelerated services need a different pricing strategy. Thus, in this paper, we propose an optimal pricing strategy for GPU-accelerated multimedia processing services that maximizes the profits of both the cloud provider and its users. We first analyze the revenues and costs of the cloud provider and users when users adopt GPU-accelerated multimedia processing services, and then state the profit functions of both sides. With a game-theory-based method, we find the optimal solutions of both the cloud provider's and the users' profit functions. Finally, through large-scale simulations, we show that our pricing strategy brings higher profit to the cloud provider and users compared to the original pricing strategy of GPU cloud services.

  • 82. Schürholz, D.
    et al.
    Nurgazy, M.
    Zaslavsky, A.
    Jayaraman, P.
    Kubler, S.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Saguna, Saguna
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    MyAQI: Context-aware Outdoor Air Pollution Monitoring System2019In: International Conference on the Internet of Things, 2019Conference paper (Refereed)
  • 83.
    Talebian, H.
    et al.
    Centre for Mobile Cloud Computing (C4MCC), University of Malaya, Kuala Lumpur, Malaysia.
    Gani, A.
    Centre for Mobile Cloud Computing (C4MCC), University of Malaya, Kuala Lumpur, Malaysia.
    Sookhak, M.
    School of Informaion Technology, Illinois State University, Normal, United States.
    Abdelatif, A.A.
    The Future University, Khartoum, Sudan.
    Yousafzai, A.
    HITEC University, Taxila, Pakistan.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Yu, F.R.
    Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada.
    Optimizing virtual machine placement in IaaS data centers: taxonomy, review and open issues2019In: Cluster Computing, ISSN 1386-7857, E-ISSN 1573-7543Article in journal (Refereed)
    Abstract [en]

    The unprecedented growth of energy consumption in data centers has created critical concern in recent years for both the research community and industry. Besides its direct cost, high energy consumption also results in a large amount of CO2 emissions and incurs extra cooling expenditure. The foremost reason for excessive energy consumption is the underutilization of data center resources. In modern data centers, virtualization provides a promising approach to improving hardware utilization. Virtual machine placement is the process of mapping a group of virtual machines (VMs) onto a set of physical machines (PMs) in a data center, with the aim of maximizing resource utilization and minimizing the total power consumed by the PMs. An optimal virtual machine placement algorithm substantially cuts power consumption by assigning the input VMs to a minimum number of PMs and allowing the dispensable PMs to be turned off. However, the VM placement problem is a complex combinatorial optimization problem known to be NP-hard. This paper presents an extensive review of the virtual machine placement problem along with an overview of different approaches to solving it. The aim is to illuminate the challenges and issues facing current virtual machine placement techniques. Furthermore, we present a taxonomy of virtual machine placement based on aspects such as methodology, number of objectives, operation mode, problem objectives, resource demand type and number of clouds. State-of-the-art VM placement techniques are classified into single-objective and multi-objective groups, and a number of prominent works are reviewed in each group. Finally, some open issues and future trends are discussed, which serve as a platform for future research in this domain.
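As a concrete baseline for the surveyed problem, the classic first-fit-decreasing bin-packing heuristic (a textbook approach, not any specific algorithm from the review) packs VMs onto as few PMs as possible so spare PMs can be powered off:

```python
def place_vms(vm_demands, pm_capacity):
    """First-fit-decreasing VM placement: sort VMs by demand and put each
    on the first PM with room, powering on a new PM only when needed."""
    pms = []                                  # remaining capacity per PM
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(pms):
            if demand <= free:
                pms[i] -= demand
                placement[vm] = i
                break
        else:
            pms.append(pm_capacity - demand)  # power on a new PM
            placement[vm] = len(pms) - 1
    return placement, len(pms)

# Five VMs with a single resource dimension (e.g. CPU units).
demands = {"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3, "vm5": 2}
placement, n_pms = place_vms(demands, pm_capacity=10)
```

Real placement algorithms in the survey's taxonomy generalize this to multiple resource dimensions and multiple objectives, but the energy argument is the same: fewer active PMs means less power drawn.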

  • 84.
    Araujo, Victor
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Saguna, Saguna
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Åhlund, Christer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance evaluation of FIWARE: A cloud-based IoT platform for smart cities (2019). In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 132, p. 250-261. Article in journal (Refereed)
    Abstract [en]

    As the Internet of Things (IoT) becomes a reality, millions of devices will be connected to IoT platforms in smart cities. These devices will cater to several areas within a smart city such as healthcare, logistics, and transportation. These devices are expected to generate significant amounts of data requests at high data rates, therefore, necessitating the performance benchmarking of IoT platforms to ascertain whether they can efficiently handle such devices. In this article, we present our results gathered from extensive performance evaluation of the cloud-based IoT platform, FIWARE. In particular, to study FIWARE’s performance, we developed a testbed and generated CoAP and MQTT data to emulate large-scale IoT deployments, crucial for future smart cities. We performed extensive tests and studied FIWARE’s performance regarding vertical and horizontal scalability. We present bottlenecks and limitations regarding FIWARE components and their cloud deployment. Finally, we discuss cost-efficient FIWARE deployment strategies that can be extremely beneficial to stakeholders aiming to deploy FIWARE as an IoT platform for smart cities.

  • 85.
    Dong, Pingping
    et al.
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Xie, Jingyun
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Tang, Wensheng
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Xiong, Naixue
    College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
    Zhong, Hua
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance Evaluation of Multipath TCP Scheduling Algorithms (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 29818-29825. Article in journal (Refereed)
    Abstract [en]

    One of the goals of 5G is to provide enhanced mobile broadband and enable low latency in some use cases. To achieve this aim, the Internet Engineering Task Force (IETF) has proposed Multipath TCP (MPTCP), which utilizes the dual-connectivity feature of 5G, where a 5G device can be served by two different base stations. However, path heterogeneity between the 5G device and the server may cause packet reordering problems. Researchers have proposed a number of scheduling algorithms to tackle this issue. This paper introduces the existing algorithms and, with the aim of making a thorough comparison between them and providing guidelines for designing new scheduling algorithms in 5G, presents an extensive set of emulation studies conducted on a real Linux experimental platform. The evaluation covers a wide range of network scenarios to investigate the impact of different network metrics, namely RTT, buffer size and file size, on the performance of existing widely deployed scheduling algorithms.

  • 86.
    Fejzo, Orsola
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Zaslavsky, Arkady
    Deakin University, Melbourne, Australia. ITMO University, Saint Petersburg, Russia.
    Saguna, Saguna
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Proactive Context-Aware IoT-Enabled Waste Management (2019). In: Proceedings of the 19th International Conference on Next Generation Wired/Wireless Advanced Networks and Systems, Springer, 2019, p. 3-15. Conference paper (Refereed)
    Abstract [en]

    Exploiting future opportunities and avoiding problematic upcoming events are the main characteristics of a proactively adapting system, leading to several benefits such as uninterrupted and efficient services. In an era when IoT applications are a tangible part of our reality, with interconnected devices almost everywhere, there is potential to leverage the diversity and volume of their generated data in order to act and take proactive decisions in several use cases, such as smart waste management. Our work focuses on devising a system for proactive adaptation of behavior, named ProAdaWM. We propose a reasoning model and system architecture that handle waste collection disruptions due to severe weather in a sustainable and efficient way using decision theory concepts. The proposed approach is validated by implementing a system prototype and conducting a case study.

  • 87.
    Zhao, Yali
    et al.
    The University of Melbourne, Australia.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Bailey, James
    The University of Melbourne, Australia.
    Sinnott, Richard
    The University of Melbourne, Australia.
    Profit Optimization for Splitting and Sampling Based Resource Management in Big Data Analytics-as-a-Service Platforms in Cloud Computing Environments (2019). In: IEEE 15th International Conference on eScience: eScience 2019, IEEE, 2019, p. 156-167. Conference paper (Other academic)
    Abstract [en]

    Exploring optimal big data analytics solutions to benefit various domains in decision making and problem solving has become an ever more important research area. Big data Analytics-as-a-Service (AaaS) platforms offer online AaaS to various domains in a pay-as-you-go model. Big data analytics incurs expensive costs and lengthy processing times due to large-scale computing requirements. To tackle these cost and time challenges, we focus on proposing efficient and automatic resource management algorithms that maximize profits and minimize query times while guaranteeing Service Level Agreements (SLAs) on the Quality of Service (QoS) requirements of queries. For query processing constrained by tight deadlines and limited budgets, our proposed algorithms enable data splitting and sampling based resource scheduling for parallel and approximate processing, which significantly reduces data processing times and resource costs. We formulate the multi-objective resource scheduling problem to optimize profits for AaaS platforms while guaranteeing SLAs of queries with minimized response times. We design extensive experiments for algorithm performance evaluation; the results show that our proposed algorithms outperform state-of-the-art algorithms, maximizing profits for AaaS platforms while improving admission rates and minimizing response times for queries. The scheduling algorithms support elastic and automatic large-scale resource configuration to minimize resource costs, and deliver timely, cost-effective, and reliable AaaS with SLA guarantees.

  • 88.
    Dong, Pingping
    et al.
    College of Information Science and Engineering, Hunan Normal University, Changsha, China. Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Gao, Kai
    College of Automotive and Mechanical Engineering, Changsha University of Science & Technology, Changsha 410114, China. Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems, Changsha, China.
    Xie, Jingyun
    College of Information Science and Engineering, Hunan Normal University, Changsha, China. Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Tang, Wensheng
    College of Information Science and Engineering, Hunan Normal University, Changsha, China. Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Xiong, Naixue
    College of Intelligence and Computing, Tianjin University, Tianjin, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Receiver-Side TCP Countermeasure in Cellular Networks (2019). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 12, article id 2791. Article in journal (Refereed)
    Abstract [en]

    Cellular networks keep large buffers at base stations to smooth out bursty data traffic, which has a negative impact on the user’s Quality of Experience (QoE). With the boom of smart vehicles and phones, this has drawn growing attention. In this paper, we first conducted experiments to reveal the large delays, and thus long flow completion times (FCT), caused by the large buffers in cellular networks. Then, a receiver-side transmission control protocol (TCP) countermeasure named Delay-based Flow Control algorithm with Service Differentiation (DFCSD) is proposed, targeting interactive applications that require high throughput and low delay in cellular networks, by limiting the standing queue size and decreasing the number of packets dropped at the eNodeB in Long Term Evolution (LTE). DFCSD stems from delay-based congestion control algorithms but works at the receiver side to avoid the performance degradation that delay-based algorithms suffer when competing with loss-based mechanisms. In addition, it is derived from the TCP fluid model to maximize network utility. Furthermore, DFCSD takes service differentiation into consideration based on the size of competing flows to shorten their completion times, thus improving user QoE. Simulation results confirmed that DFCSD is compatible with existing TCP algorithms, significantly reduces the latency of TCP flows, and increases network throughput.

  • 89.
    Niazi, Muaz A.
    et al.
    COSMOSE Research Group, Computer Science Department, COMSATS University, Islamabad, Pakistan.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Temkin, Anatoly
    Department of Computer Science, Boston University Metropolitan College, Boston, MA, USA.
    Review of “Exploratory Social Network Analysis with Pajek” by Wouter De Nooy, Andrej Mrvar and Vladimir Batagelj (2019). In: Complex Adaptive Systems Modeling, E-ISSN 2194-3206, Vol. 7, no 1. Article, book review (Other academic)
  • 90.
    Andersson, Karl
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    You, Ilsun
    Soonchunhyang University, Asan, Republic of Korea.
    Rahmani, Rahim
    Stockholm University, Stockholm, Sweden.
    Sharma, Vishal
    Soonchunhyang University, Asan, Republic of Korea.
    Secure Computation on 4G/5G Enabled Internet-of-Things (2019). In: Wireless Communications & Mobile Computing, ISSN 1530-8669, E-ISSN 1530-8677, Vol. 2019, article id 3978193. Article in journal (Refereed)
    Abstract [en]

    The rapid development of Internet-of-Things (IoT) techniques in 4G/5G deployments is witnessing the generation of massive amounts of data, which are collected, stored, processed, and presented in an easily interpretable form. Analysis of IoT data helps provide smart services such as smart homes, smart energy, smart health, and smart environments through 4G and 5G technologies. At the same time, the threat of cyberattacks and issues with mobile internet security are becoming increasingly severe, introducing new challenges for the security of IoT systems and applications, and thereby for the privacy of individuals. Protecting IoT data privacy while enabling data availability is an urgent but difficult task.

    Data privacy in a distributed environment like IoT can be attained through secure multiparty computation. An emerging area of potential applications for secure computation is addressing privacy concerns in data aggregation and analysis, to match the explosive growth in the amount of IoT data. However, the inherent complexity of IoT systems complicates the design and deployment of efficient, interoperable, and scalable secure computation mechanisms. As a result, there is an increasing demand for the development of new secure computation methods and tools that can fill the gap between security and practical usage in IoT.

    The scope of this special issue is in line with recent contributions from academia and industry tackling the technical challenges of making computation secure on the 4G/5G enabled Internet-of-Things. For the current issue, we are pleased to introduce a collection of papers covering a range of topics such as securely verifiable remote erasure schemes, multiuser identification algorithms, privacy-preserving shared storage, situation-aware threat assessment, authorized client-side deduplication in cloud storage, radio environment map construction, analysis of the vulnerabilities of connected car environments, combating pollution attacks in 5G multihop networks, automatic traceback of RDP-based targeted ransomware attacks, multiresolution face recognition through virtual face generation, anonymous communication via anonymous identity-based encryption, and secure storage and retrieval of IoT data.

  • 91. de Lange, Michiel
    et al.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Leindecker, Gerald
    Smart Citizens in the Hackable City: On the Datafication, Playfulness, and Making of Urban Public Spaces Through Digital Art (2019). In: CyberParks – The Interface Between People, Places and Technology: New Approaches and Perspectives / [ed] Carlos Smaniotto Costa, Ina Šuklje Erjavec, Therese Kenna, Michiel de Lange, Konstantinos Ioannidis, Gabriela Maksymiuk, Martijn de Waal, Springer Nature, 2019, p. 157-166. Chapter in book (Refereed)
    Abstract [en]

    This contribution explores concepts, approaches and technologies used to make urban public spaces more playful and artful. Through a variety of compelling narratives involving play and art, it assists in the design of new cyberparks: public spaces where digitally mediated interactions are an inherent part. How can play and interactive art be used to strengthen urban public spaces by fostering citizen engagement and participation? We propose to utilise interactive media not only for designing urban (public) spaces, but also for social innovation for the benefit of citizens in cyberparks. The contribution connects urbanity, play and games, as well as concepts of active and passive interactive digital art, as part of trends towards pervasive urban interaction, gameful design and artification. We position this as an important part of developing human-centred smart cities where social capital is central, and where citizens engaging in play and art are prerequisites for sustainable communities. Using art, play and games to foster citizen engagement and collaboration is a means to develop social technologies and support the development of collective intelligence in cyberparks. This is studied in concrete cases, such as the Ice Castle in Luleå, Sweden, and the Ars Electronica in Linz, from a multi-disciplinary stance involving interaction design, digital art, landscape design, architecture, and health proficiencies. We analyse two cases of gameful design and one case of digital interactive art being used to address urban issues. Rezone the game is an interactive multimedia game developed to tackle vacancy in the city of Den Bosch in the Netherlands. The Neighbourhood is a board game developed to involve various stakeholders in making their neighbourhood using water as a collective resource.

  • 92.
    Andersson, Karl
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Gavalas, Damianos
    Computer Technology Institute and Press (CTI).
    SMARTBUY dataset (2019). Data set
  • 93.
    Demirbaga, Umit
    et al.
    Newcastle University. Bartin University.
    Noor, Ayman
    Newcastle University. Taibah University.
    Wen, Zhenyu
    Newcastle University.
    James, Philip
    Newcastle University.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ranjan, Rajiv
    Newcastle University.
    SmartMonit: Real-time Big Data Monitoring System (2019). In: 2019 IEEE 38th International Symposium on Reliable Distributed Systems: SRDS 2019, IEEE, 2019, p. 357-359. Conference paper (Other academic)
    Abstract [en]

    Modern big data processing systems are becoming very complex in terms of large scale, high concurrency and multi-tenancy. Thus, many failures and performance reductions only happen at run-time and are very difficult to capture. Moreover, some issues may only be triggered when certain components are executed. To analyze the root cause of these types of issues, we have to capture the dependencies of each component in real time. In this paper, we propose SmartMonit, a real-time big data monitoring system that collects infrastructure information such as the process status of each task. At the same time, we develop a real-time stream processing framework to analyze the coordination among the tasks and the infrastructure. This coordination information is essential for troubleshooting the causes of failures and performance reductions, especially those propagated from other causes.

  • 94.
    Lemlouma, T.
    et al.
    University of Rennes 1, Rennes, France.
    Laborie, S.
    University of Pau and Adour Countries, Anglet, France.
    Rachedi, A.
    University of Paris-Est Marne la Vallée, Champs-sur-Marne, France.
    Santos, A.
    Vestas, Leça do Balio, Porto, Portugal.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Special issue on selected papers from e-health pervasive wireless applications and services 2017 (2019). In: Information, E-ISSN 2078-2489, Vol. 10, no 2, article id 52. Article in journal (Refereed)
  • 95.
    Islam, Md. Zahirul
    et al.
    Department of Computer Science and Engineering University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Static Hand Gesture Recognition using Convolutional Neural Network with Data Augmentation (2019). In: Proceedings of the Joint 2019 8th International Conference on Informatics, Electronics & Vision (ICIEV), IEEE, 2019. Conference paper (Refereed)
    Abstract [en]

    Computers are part and parcel of our day-to-day lives and are used in various fields. Human-computer interaction is traditionally accomplished through input devices like the mouse and keyboard. Hand gestures can be a useful medium of human-computer interaction and can make interaction easier. Gestures vary in orientation and shape from person to person, so non-linearity exists in this problem. Recent research has proved the supremacy of the Convolutional Neural Network (CNN) for image representation and classification. Since a CNN can learn complex and non-linear relationships among images, this paper proposes a static hand gesture recognition method using a CNN. Data augmentation such as re-scaling, zooming, shearing, rotation, and width and height shifting was applied to the dataset. The model was trained on 8000 images and tested on 1600 images, divided into 10 classes. The model with augmented data achieved an accuracy of 97.12%, which is nearly 4% higher than the model without augmentation (92.87%).

  • 96.
    Monrat, Ahmed Afif
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Survey of Blockchain from the Perspectives of Applications, Challenges and Opportunities (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 117134-117154. Article in journal (Refereed)
    Abstract [en]

    Blockchain is the underlying technology of a number of digital cryptocurrencies. A blockchain is a chain of blocks that store information with digital signatures in a decentralized and distributed network. The features of blockchain, including decentralization, immutability, transparency and auditability, make transactions more secure and tamper-proof. Apart from cryptocurrency, blockchain technology can be used in financial and social services, risk management, healthcare facilities, and so on. A number of research studies focus on the opportunities that blockchain provides in various application domains. This paper presents a comparative study of the tradeoffs of blockchain, explains the taxonomy and architecture of blockchain, provides a comparison among different consensus mechanisms, and discusses challenges including scalability, privacy, interoperability, energy consumption and regulatory issues. In addition, this paper notes the future scope of blockchain technology.

  • 97.
    Bezerra, Nibia Souza
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Åhlund, Christer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Saguna, Saguna
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    de Sousa Jr., Vicente A.
    Federal University of Rio Grande do Norte (UFRN).
    Temperature Impact in LoRaWAN: A Case Study in Northern Sweden (2019). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 20, article id 4414. Article in journal (Refereed)
    Abstract [en]

    LoRaWAN has become popular as an IoT enabler. Its low cost, ease of installation and capacity for fine-tuning parameters make this network a suitable candidate for smart-city deployments. In northern Sweden, in the smart region of Skellefteå, we have deployed a LoRaWAN to enable IoT applications that assist the lives of citizens. As Skellefteå has a subarctic climate, we investigate how the extreme weather changes occurring during a year affect a real LoRaWAN deployment in terms of SNR, RSSI and the use of SF when ADR is enabled. Additionally, we evaluate two propagation models (Okumura-Hata and ITM) and verify whether either model fits the measurements obtained from our real-life network. Our results regarding the weather impact show that cold weather improves the SNR, while warm weather makes the sensors select lower SFs to minimize time-on-air. Regarding the tested propagation models, Okumura-Hata has the best fit to our data, while ITM tends to overestimate the RSSI values.

  • 98.
    Du, Rong
    et al.
    Department of Network and Systems Engineering, KTH Royal Institute of Technology, Stockholm, 10044, Sweden.
    Santi, Paolo
    MIT Senseable City Laboratory, Cambridge, MA 02139, USA, and also with the Istituto di Informatica e Telematica del CNR, 56124 Pisa, Italy.
    Xiao, Ming
    Department of Information Science and Engineering, KTH Royal Institute of Technology, Stockholm, 10044, Sweden.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Fischione, Carlo
    Department of Network and Systems Engineering, KTH Royal Institute of Technology, Stockholm, 10044, Sweden.
    The sensable city: A survey on the deployment and management for smart city monitoring (2019). In: IEEE Communications Surveys and Tutorials, ISSN 1553-877X, E-ISSN 1553-877X, Vol. 21, no 2, p. 1533-1560. Article in journal (Refereed)
    Abstract [en]

    In the last two decades, various monitoring systems have been designed and deployed in urban environments, toward the realization of the so-called smart cities. Such systems are based both on dedicated sensor nodes and on ubiquitous but non-dedicated devices such as smartphones and vehicles’ sensors. When we design sensor network monitoring systems for smart cities, we face two essential problems: node deployment and sensing management. These design problems are challenging due to the large urban areas to monitor, constrained deployment locations, and heterogeneous types of sensing devices. There is a vast body of literature from different disciplines that has addressed these challenges. However, we do not yet have a comprehensive understanding or sound design guidelines. This article addresses this research gap and provides an overview of the theoretical problems we face and the possible approaches we may use to solve them. Specifically, this paper focuses on both the deployment of the devices (the system design/configuration part) and the sensing management of the devices (the system running part). We also discuss how to choose among the existing algorithms for different types of monitoring applications in smart cities, such as structural health monitoring, water pipeline networks, and traffic monitoring. We finally discuss future research opportunities and open challenges for smart city monitoring.

  • 99.
    Liu, Ling
    et al.
    Beijing Key Laboratory of Mobile Computing and Pervasive Devices, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Zhou, Yiqing
    Beijing Key Laboratory of Mobile Computing and Pervasive Devices, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Tian, Lin
    Beijing Key Laboratory of Mobile Computing and Pervasive Devices, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Shi, Jinglin
    Beijing Key Laboratory of Mobile Computing and Pervasive Devices, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Time-domain ICIC and optimized designs for 5G and beyond: a survey (2019). In: Science China Information Sciences, ISSN 1674-733X, E-ISSN 1869-1919, Vol. 62, no 2, article id 21302. Article in journal (Refereed)
    Abstract [en]

    Time-domain enhanced inter-cell interference coordination (eICIC) is an effective technique to reduce the cross-tier inter-cell interference (ICI) in long term evolution (LTE)-based heterogeneous small cell networks (HetSCNs). This paper first clarifies two main communication scenarios in HetSCNs, i.e., macrocells deployed with femtocells (macro-femto) and with picocells (macro-pico). Then, the main challenges in HetSCNs are analyzed, particularly the severe cross-tier ICI in macro-femto caused by femtocells with closed subscriber group (CSG) access, and in macro-pico caused by picocells with range expansion. Based on the prominent feature of dominant interference in HetSCNs, the main idea of time-domain interference coordination and two basic schemes in the eICIC standardization, i.e., almost blank subframe (ABS) and orthogonal frequency division multiplexing symbol shift, are presented, with a systematic introduction to the interactions of these techniques with other network functions. Then, given macro-femto and macro-pico HetSCNs, an overview is provided on the advanced designs of ABS-based eICIC, including self-optimized designs with regard to key parameters such as the ABS muting ratio, and joint optimized designs of ABS-based eICIC and other radio resource management techniques, such as user association and power control. Finally, the open issues and future research directions are discussed.

  • 100.
    Akram, Waseem
    et al.
    COMSATS University Islamabad, Computer Science Department, Islamabad, Pakistan.
    Niazi, Muaz A.
    COMSATS University Islamabad, Computer Science Department, Islamabad, Pakistan.
    Iantovics, Laszlo Barna
    Petru Maior University of Tirgu Mures, Informatics Department, Tirgu Mures, Romania.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Towards agent-based model specification of smart grid: a cognitive agent-based computing approach (2019). In: Interdisciplinary Description of Complex Systems, ISSN 1334-4684, E-ISSN 1334-4676, Vol. 17, no 3B, p. 546-585. Article in journal (Refereed)
    Abstract [en]

    A smart grid can be considered as a complex network where each node represents a generation unit or a consumer, and links represent transmission lines. One way to study complex systems is the agent-based modeling paradigm, a way of representing a complex system of autonomous agents interacting with each other. Previously, a number of studies in the smart grid domain have made use of the agent-based modeling paradigm. However, to the best of our knowledge, none of these studies have focused on the specification aspect of the model. Model specification is important not only for understanding but also for replication of the model. To fill this gap, this study focuses on specification methods for smart grid modeling. We adopt two specification methods, named Overview, Design concepts, and Details and Descriptive agent-based modeling. Using these specification methods, we provide tutorials and guidelines for developing smart grid models, from conceptual modeling to a validated agent-based model through simulation. The specification study is exemplified through a case study from the smart grid domain. In the case study, we consider a large network in which different consumers and power generation units are connected with each other through different configurations. In such a network, communication takes place between consumers and generating units for energy transmission and data routing. We demonstrate how to effectively model a complex system such as a smart grid using specification methods. We analyze the two specification approaches qualitatively as well as quantitatively. Extensive experiments demonstrate that Descriptive agent-based modeling is a more useful approach than the Overview, Design concepts, and Details method for modeling as well as for replication of smart grid models.