  • 51.
    Johansson, Ingemar
    et al.
    Ericsson AB, Luleå.
    Dadhich, Siddharth
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Jönsson, Tomas
    Ericsson AB, Luleå.
    Adaptive Video with SCReAM over LTE for Remote-Operated Working Machines (2018). In: Wireless Communications & Mobile Computing, ISSN 1530-8669, E-ISSN 1530-8677, Vol. 2018, article id 3142496. Article in journal (Refereed)
    Abstract [en]

    Remote operation is a step toward the automation of mobile working machines. Safe and efficient teleremote operation requires good-quality video feedback. Varying radio conditions make it desirable to adapt the video sending rate of cameras to make the best use of the wireless capacity. The adaptation should be able to prioritize camera feeds in different directions depending on motion, ongoing tasks, and safety concerns. Self-Clocked Rate Adaptation for Multimedia (SCReAM) provides a rate adaptation algorithm for these needs. SCReAM can control the compression used for multiple video streams using differentiating priorities and thereby provide sufficient congestion control to achieve both low latency and high video throughput. We present results from the testing of prioritized adaptation of four video streams with SCReAM over LTE and discuss how such adaptation can be useful for the teleremote operation of working machines.
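    The prioritized multi-stream adaptation summarized above can be illustrated with a small sketch. This is not SCReAM itself (the actual algorithm is self-clocked congestion control and considerably more involved); it only shows the basic idea of splitting an estimated wireless capacity across camera feeds in proportion to per-stream priorities. The function name, the minimum-rate policy and all numbers are illustrative assumptions.

    ```python
    # Illustrative sketch only: proportional, priority-weighted bitrate allocation
    # across multiple camera streams. SCReAM's real congestion control is
    # self-clocked and far more sophisticated; this merely shows how per-stream
    # priorities can steer the sharing of an estimated capacity.

    def allocate_bitrates(capacity_kbps, priorities, min_kbps=200):
        """Split the estimated capacity across streams by priority weight,
        guaranteeing each stream a minimum rate (hypothetical policy)."""
        reserved = min_kbps * len(priorities)
        spare = max(capacity_kbps - reserved, 0)
        total_prio = sum(priorities)
        return [min_kbps + spare * p / total_prio for p in priorities]

    # Example: four camera feeds, the forward-facing camera weighted highest.
    rates = allocate_bitrates(6000, priorities=[4, 2, 1, 1])
    print([round(r) for r in rates])  # [2800, 1500, 850, 850]
    ```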

  • 52.
    Khan, Zaheer
    et al.
    Centre for Wireless Communications, University of Oulu.
    Lehtomäki, Janne
    Centre for Wireless Communications, University of Oulu.
    Vasilakos, Athanasios V.
    Centre for Wireless Communications, University of Oulu.
    MacKenzie, Allen B.
    Centre for Wireless Communications, University of Oulu.
    Juntti, Markku
    Centre for Wireless Communications, University of Oulu.
    Adaptive wireless communications under competition and jamming in energy constrained networks (2018). In: Wireless networks, ISSN 1022-0038, E-ISSN 1572-8196, Vol. 24, no 1, p. 151-171. Article in journal (Refereed)
    Abstract [en]

    We propose a framed slotted Aloha-based adaptive method for robust communication between autonomous wireless nodes competing to access a channel under unknown network conditions such as adversarial disruptions. With energy as a scarce resource, we show that in order to disrupt communications, our method forces the reactive adversary to incur higher energy cost relative to a legitimate node. Consequently, the adversary depletes its energy resources and stops attacking the network. Using the proposed method, a transmitter node changes the number of selected time slots and the access probability in each selected time slot based on the number of unsuccessful transmissions of a data packet. On the receiver side, a receiver node changes the probability of listening in a time slot based on the number of unsuccessful communication attempts of a packet. We compare the proposed method with two other framed slotted Aloha-based methods in terms of average energy consumption and average time required to communicate a packet. For performance evaluation, we consider scenarios in which: (1) Multiple nodes compete to access a channel. (2) Nodes compete in the presence of adversarial attacks. (3) Nodes compete in the presence of channel errors and capture effect.
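    A minimal sketch of the adaptation idea described in the abstract above, assuming hypothetical update rules: after each unsuccessful transmission the transmitter widens the set of slots it uses and lowers its per-slot access probability, while the receiver raises its listening probability. The constants and the specific rules below are not taken from the paper.

    ```python
    # Hedged sketch of adaptive framed slotted Aloha access. The update rules and
    # constants are invented for illustration; only the general idea (adapt slot
    # count, access probability and listening probability to failures) follows
    # the abstract.
    import random

    FRAME_SLOTS = 16

    def transmitter_params(failures, base_slots=2, base_p=0.9):
        """Pick (number of slots, per-slot access probability) from the failure count."""
        n_slots = min(base_slots * (1 + failures), FRAME_SLOTS)
        p_access = max(base_p / (1 + failures), 0.1)
        return n_slots, p_access

    def receiver_listen_probability(failures, base_p=0.5):
        """Listen more aggressively as unsuccessful attempts accumulate."""
        return min(base_p * (1 + 0.5 * failures), 1.0)

    def send_frame(failures):
        """Simulate one frame: choose slots, then attempt access in each chosen slot."""
        n_slots, p_access = transmitter_params(failures)
        chosen = random.sample(range(FRAME_SLOTS), n_slots)
        return [slot for slot in chosen if random.random() < p_access]

    print(send_frame(failures=3), receiver_listen_probability(failures=3))
    ```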

  • 53.
    Dinh, Thanh
    et al.
    School of Electronic Engineering, Soongsil University, Seoul 06978, South Korea.
    Kim, Younghan
    School of Electronic Engineering, Soongsil University, Seoul 06978, South Korea.
    Gu, Tau
    School of Computer Science, Royal Melbourne Institute of Technology University, Melbourne.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Adaptive Low-Power Listening Protocol for Wireless Sensor Networks in Noisy Environments (2018). In: IEEE Systems Journal, ISSN 1932-8184, E-ISSN 1937-9234, Vol. 2, no 3, p. 2162-2173. Article in journal (Refereed)
    Abstract [en]

    This paper investigates the energy consumption minimization problem for wireless sensor networks running low-power listening (LPL) protocols in noisy environments. We observe that the energy consumption by false wakeups (i.e., wakeups without receiving any packet) of a node in noisy environments can be a dominant factor in many cases, while the false wakeup rate is spatially and temporally dynamic. Based on this observation, without carefully considering the impact of false wakeups, the energy-efficient performance of LPL nodes in noisy environments may significantly deviate from the optimal performance. To address this problem, we propose a theoretical framework incorporating LPL temporal parameters with the false wakeup rate and the data rate. We then formulate an energy consumption minimization problem of LPL in noisy environments and address the problem by a simplified and practical approach. Based on the theoretical framework, we design an efficient adaptive protocol for LPL (APL) in noisy environments. Through extensive experimental studies with TelosB nodes in real environments, we show that APL achieves a 20%–40% energy efficiency improvement compared to existing LPL protocols under various network conditions.
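    The trade-off described above can be sketched with a toy cost model, assuming a hypothetical energy budget: frequent channel polling costs energy, false wakeups add cost on every poll, and a longer wakeup interval forces senders to transmit longer wakeup preambles. The model and constants below are assumptions for illustration, not the paper's formulation.

    ```python
    # Toy LPL energy model (per second of operation). All terms and constants are
    # illustrative assumptions; the paper's theoretical framework is not reproduced.

    def energy_per_second(t_wake, data_rate, p_false,
                          e_poll=0.1, e_rx=1.0, e_tx_per_s=5.0):
        polling_cost = e_poll / t_wake             # periodic channel checks
        false_cost = p_false * e_rx / t_wake       # noise-triggered wakeups per poll
        tx_cost = data_rate * e_tx_per_s * t_wake  # sender preamble ~ t_wake long
        return polling_cost + false_cost + tx_cost

    def best_interval(data_rate, p_false):
        """Grid-search the wakeup interval that minimizes the toy energy model."""
        candidates = [i / 100 for i in range(5, 201)]   # 0.05 s .. 2.00 s
        return min(candidates,
                   key=lambda t: energy_per_second(t, data_rate, p_false))

    # A noisier environment (higher false wakeup probability) favors a longer interval.
    print(best_interval(data_rate=0.2, p_false=0.05),
          best_interval(data_rate=0.2, p_false=0.50))
    ```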

  • 54.
    Alam, Md. Eftekhar
    et al.
    International Islamic University Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An IoT-Belief Rule Base Smart System to Assess Autism (2018). In: Proceedings of the 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018), IEEE, 2018, p. 671-675. Conference paper (Refereed)
    Abstract [en]

    An Internet-of-Things (IoT)-Belief Rule Base (BRB) based hybrid system is introduced to assess autism spectrum disorder (ASD). This smart system can automatically collect sign and symptom data of various autistic children in real time and classify the autistic children. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight and degree of belief. The IoT-BRB system classifies the children having autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with fuzzy- and expert-based systems. The proposed system outperformed the state-of-the-art fuzzy system and expert system.

  • 55.
    Alam, M.E.
    et al.
    Electrical Electronic Engineering, International Islamic University, Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Institute of Information Technology, Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, M.S.
    Computer Science Engineering, University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An IoT-belief rule base smart system to assess autism (2018). Conference paper (Refereed)
    Abstract [en]

    An Internet-of-Things (IoT)-Belief Rule Base (BRB) based hybrid system is introduced to assess autism spectrum disorder (ASD). This smart system can automatically collect sign and symptom data of various autistic children in real-time and classify the autistic children. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight and degree of belief. The IoT-BRB system classifies the children having autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with fuzzy- and expert-based systems. The proposed system outperformed the state-of-the-art fuzzy system and expert system.

  • 56.
    Zhai, Haoyang
    et al.
    School of Communication Information Engineering, University of Electronic Science and Technology of China.
    Liu, Qiang
    School of Communication Information Engineering, University of Electronic Science and Technology of China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Anti-ISI Demodulation Scheme and Its Experiment-based Evaluation for Diffusion-based Molecular Communication (2018). In: IEEE Transactions on Nanobioscience, ISSN 1536-1241, E-ISSN 1558-2639, Vol. 17, no 2, p. 126-133. Article in journal (Refereed)
    Abstract [en]

    In diffusion-based molecular communication (MC), the most common modulation technique is based on the concentration of information molecules. However, the random delay of molecules due to the channel with memory causes severe inter-symbol interference (ISI) among consecutive signals. In this paper, we propose a detection technique for demodulating signals, the increase detection algorithm (IDA), to improve the reliability of concentration-encoded diffusion-based molecular communication. The proposed IDA detects an increase (i.e., a relative concentration value) in molecule concentration to extract the information instead of detecting an absolute concentration value. To validate IDA, we establish a real physical tabletop testbed. We evaluate the proposed demodulation technique using the bit error rate (BER) and demonstrate on the tabletop molecular communication platform that the proposed IDA successfully minimizes and even isolates ISI, so that a lower BER is achieved than with the common demodulation technique.
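    The contrast the abstract draws, absolute-concentration detection versus detecting the increase within a symbol interval, can be sketched as below. The windowing, thresholds and toy samples are illustrative assumptions and not the published algorithm.

    ```python
    # Sketch: threshold demodulation on absolute concentration vs. on the
    # per-symbol increase (the spirit of IDA). Thresholds and samples are assumed.

    def demodulate_absolute(symbols, threshold):
        return [1 if max(s) > threshold else 0 for s in symbols]

    def demodulate_increase(symbols, rise_threshold):
        # Use the concentration gain within each symbol instead of its level.
        return [1 if max(s) - s[0] > rise_threshold else 0 for s in symbols]

    # Residual molecules from earlier '1' symbols inflate absolute levels (ISI),
    # while the per-symbol increase stays small when a '0' was transmitted.
    rx = [[0.1, 0.9, 1.0], [1.0, 1.1, 1.0], [1.0, 1.9, 2.0]]   # sent bits: 1, 0, 1
    print(demodulate_absolute(rx, threshold=0.8))       # [1, 1, 1]  (ISI error)
    print(demodulate_increase(rx, rise_threshold=0.5))  # [1, 0, 1]
    ```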

  • 57.
    Idowu, Samuel O.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Applied Machine Learning in District Heating System (2018). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In an increasingly applied domain of pervasive computing, sensing devices are being deployed progressively for data acquisition from various systems through the use of technologies such as wireless sensor networks. Data obtained from such systems are used analytically to advance or improve system performance or efficiency. The possibility to acquire an enormous amount of data from any target system has made machine learning a useful approach for several large-scale analytical solutions. Machine learning has proved viable in the area of the energy sector, where the global demand for energy and the increasingly accepted need for green energy is gradually challenging energy supplies and the efficiency in its consumption.

    This research, carried out within the area of pervasive computing, aims to explore the application of machine learning and its effectiveness in the energy sector with dependency on sensing devices. The target application area readily falls under a multi-domain energy grid which provides a system across two energy utility grids as a combined heat and power system. The multi-domain aspect of the target system links to a district heating system network and electrical power from a combined heat and power plant. This thesis, however, focuses on the district heating system as the application area of interest while contributing towards a future goal of a multi-domain energy grid, where improved efficiency level, reduction of overall carbon dioxide footprint and enhanced interaction and synergy between the electricity and thermal grid are vital goals. This thesis explores research issues relating to the effectiveness of machine learning in forecasting heat demands at district heating substations, and the key factors affecting domestic heat load patterns in buildings.

    The key contribution of this thesis is the application of machine learning techniques in forecasting heat energy consumption in buildings, and our research outcome shows that supervised machine learning methods are suitable for domestic thermal load forecast. Among the examined machine learning methods which include multiple linear regression, support vector machine,  feed forward neural network, and regression tree, the support vector machine performed best with a normalized root mean square error of 0.07 for a 24-hour forecast horizon. In addition, weather and time information are observed to be the most influencing factors when forecasting heat load at heating network substations. Investigation on the effect of using substation's operational attributes, such as the supply and return temperatures, as additional input parameters when forecasting heat load shows that the use of substation's internal operational attributes has less impact.
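    As a rough illustration of the forecasting setup described above (supervised regression of heat load from weather and time features), the sketch below fits a support vector regressor on synthetic data and reports a normalized RMSE. The synthetic data, feature choices and hyperparameters are assumptions for illustration; the thesis evaluates several learners on real substation data.

    ```python
    # Hedged sketch: SVR-based heat load forecasting from weather and time
    # features, on synthetic data standing in for substation measurements.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 60)                        # 60 synthetic days, hourly
    hour_of_day = hours % 24
    outdoor_temp = 5 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 1, hours.size)
    heat_load = 80 - 2.5 * outdoor_temp + 5 * np.cos(2 * np.pi * hour_of_day / 24) + rng.normal(0, 2, hours.size)

    X = np.column_stack([outdoor_temp,
                         np.sin(2 * np.pi * hour_of_day / 24),
                         np.cos(2 * np.pi * hour_of_day / 24)])
    split = 24 * 50                                    # hold out the last 10 days
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X[:split], heat_load[:split])
    pred = model.predict(X[split:])

    rmse = np.sqrt(np.mean((pred - heat_load[split:]) ** 2))
    nrmse = rmse / (heat_load.max() - heat_load.min())
    print(f"NRMSE on held-out days: {nrmse:.3f}")
    ```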

  • 58.
    Wazid, Mohammad
    et al.
    Cyber Security and Networks Lab, Innopolis University, Innopolis, Russia.
    Kumar Das, Ashok
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad, India.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Authenticated key management protocol for cloud-assisted body area sensor networks (2018). In: Journal of Network and Computer Applications, ISSN 1084-8045, E-ISSN 1095-8592, Vol. 123, p. 112-126. Article in journal (Refereed)
    Abstract [en]

    Due to recent advances in technologies such as integrated circuits, embedded systems and wireless communications, the wireless body area network (WBAN) has become a propitious networking paradigm. WBANs play a very important role in modern medical systems, as real-time biomedical data can be collected through intelligent medical sensors in or around the patients' body and sent to remote medical personnel for clinical diagnostics. However, the wireless nature of the communication allows an adversary to intercept or modify the private and secret data collected by the sensors in WBANs. In critical applications of WBANs, an external user (e.g., a doctor) often needs to directly access the sensing information collected by the body sensors in order to monitor the health condition of a patient. To do so, the user needs to first authenticate with the accessed body sensors, and only after mutual authentication between that user and the body sensors can the real-time data be accessed securely by the user.

    In this paper, we propose a new user authentication and key management scheme for this purpose. The proposed scheme allows mutual authentication between a user and a personal server connected to the WBAN via the healthcare server situated in the cloud; once the mutual authentication is successful, both the user and the personal server are able to establish a secret session key for their future communication. In addition, a key management process is provided for establishing secret keys among the sensors and the personal server for their secure communication. Formal security analysis based on the broadly accepted Real-Or-Random (ROR) model, together with an informal security analysis, gives confidence that the proposed scheme can withstand several known attacks relevant to WBAN security. A detailed comparative analysis between the proposed scheme and other schemes shows that the proposed scheme provides better security and functionality features, low computation cost and comparable communication cost compared to recently proposed related schemes. Finally, a practical demonstration using NS2-based simulation is presented for the proposed scheme and also for other schemes.

  • 59.
    Cruciani, Frederico
    et al.
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    Cleland, Ian
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    Nugent, Chris
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    McCullagh, Paul
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Automatic annotation for human activity recognition in free living using a smartphone (2018). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no 7, article id 2203. Article in journal (Refereed)
    Abstract [en]

    Data annotation is a time-consuming process posing major limitations to the development of Human Activity Recognition (HAR) systems. The availability of a large amount of labeled data is required for supervised Machine Learning (ML) approaches, especially in the case of online and personalized approaches requiring user specific datasets to be labeled. The availability of such datasets has the potential to help address common problems of smartphone-based HAR, such as inter-person variability. In this work, we present (i) an automatic labeling method facilitating the collection of labeled datasets in free-living conditions using the smartphone, and (ii) we investigate the robustness of common supervised classification approaches under instances of noisy data. We evaluated the results with a dataset consisting of 38 days of manually labeled data collected in free living. The comparison between the manually and the automatically labeled ground truth demonstrated that it was possible to obtain labels automatically with an 80–85% average precision rate. Results obtained also show how a supervised approach trained using automatically generated labels achieved an 84% f-score (using Neural Networks and Random Forests); however, results also demonstrated how the presence of label noise could lower the f-score up to 64–74% depending on the classification approach (Nearest Centroid and Multi-Class Support Vector Machine).
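    The second question studied above, how robust supervised classifiers are to the kind of label noise that automatic labeling introduces, can be illustrated with a small experiment on synthetic data. The dataset, classifier settings and noise levels are assumptions for illustration only.

    ```python
    # Hedged sketch: inject increasing label noise into the training set and
    # observe the macro F1-score of a Random Forest on clean test labels.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    rng = np.random.default_rng(0)
    for noise in (0.0, 0.1, 0.2, 0.3):
        y_noisy = y_tr.copy()
        flip = rng.random(y_noisy.size) < noise              # corrupt a fraction of labels
        y_noisy[flip] = rng.integers(0, 4, flip.sum())
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_noisy)
        f1 = f1_score(y_te, clf.predict(X_te), average="macro")
        print(f"label noise {noise:.0%}: macro F1 = {f1:.2f}")
    ```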

  • 60.
    Yi, J.-H.
    et al.
    School of Mathematics and Big Data, Foshan University, Foshan, China; School of Information and Control Engineering, Qingdao University of Technology, Qingdao, China.
    Xing, L. -N.
    School of Mathematics and Big Data, Foshan University, Foshan, China.
    Wang, G.-G.
    Department of Computer Science and Technology, Ocean University of China, Qingdao, China.
    Dong, J.
    Department of Computer Science and Technology, Ocean University of China, Qingdao, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Alavi, A.H.
    Department of Civil and Environmental Engineering, University of Missouri, Columbia, United States.
    Wang, L.
    Department of Automation, Tsinghua University, Beijing, China.
    Behavior of crossover operators in NSGA-III for large-scale optimization problems (2018). In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291. Article in journal (Refereed)
    Abstract [en]

    Traditional multi-objective optimization evolutionary algorithms (MOEAs) do not usually meet the requirements for online data processing because of their high computational costs. This drawback has resulted in difficulties in the deployment of MOEAs for multi-objective, large-scale optimization problems. Among different evolutionary algorithms, the third version of the non-dominated sorting genetic algorithm (NSGA-III) is a fairly new method capable of solving large-scale optimization problems with acceptable computational requirements. In this paper, the performance of three crossover operators of the NSGA-III algorithm is benchmarked using a large-scale optimization problem based on human electroencephalogram (EEG) signal processing. The studied operators are simulated binary crossover (SBX), uniform crossover (UC), and single-point crossover (SI). Furthermore, enhanced versions of the NSGA-III algorithm are proposed by introducing the concept of Stud and designing several improved crossover operators based on SBX, UC, and SI. The performance of the proposed NSGA-III variants is verified on six large-scale optimization problems. Experimental results indicate that the NSGA-III methods with UC and UC-Stud (UCS) outperform the other developed variants.
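    For readers unfamiliar with the operators compared above, the sketch below shows generic textbook formulations of uniform crossover (UC) and single-point crossover (SI) on real-valued decision vectors. It is not the paper's NSGA-III implementation and does not include the proposed Stud-enhanced variants.

    ```python
    # Generic crossover operators, for illustration only.
    import random

    def uniform_crossover(parent_a, parent_b, swap_prob=0.5):
        """Take each gene from parent_b with probability swap_prob, else from parent_a."""
        return [b if random.random() < swap_prob else a
                for a, b in zip(parent_a, parent_b)]

    def single_point_crossover(parent_a, parent_b):
        """Cut both parents at one random point and splice the halves."""
        point = random.randrange(1, len(parent_a))
        return parent_a[:point] + parent_b[point:]

    p1 = [0.1, 0.2, 0.3, 0.4, 0.5]
    p2 = [0.9, 0.8, 0.7, 0.6, 0.5]
    print(uniform_crossover(p1, p2))
    print(single_point_crossover(p1, p2))
    ```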

  • 61.
    Shitiri, Ethungshan
    et al.
    School of Electronics, Kyungpook National University, Korea.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Cho, Ho-Shin
    School of Electronics, Kyungpook National University, Korea.
    Biological Oscillators in Nanonetworks-Opportunities and Challenges (2018). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no 5, article id 1544. Article in journal (Refereed)
    Abstract [en]

    One of the major issues in molecular communication-based nanonetworks is the provision and maintenance of a common time knowledge. To stay true to the definition of molecular communication, biological oscillators are the potential solutions to achieve that goal as they generate oscillations through periodic fluctuations in the concentrations of molecules. Through the lens of a communication systems engineer, the scope of this survey is to explicitly classify, for the first time, existing biological oscillators based on whether they are found in nature or not, to discuss, in a tutorial fashion, the main principles that govern the oscillations in each oscillator, and to analyze oscillator parameters that are most relevant to communication engineer researchers. In addition, the survey highlights and addresses the key open research issues pertaining to several physical aspects of the oscillators and the adoption and implementation of the oscillators to nanonetworks. Moreover, key research directions are discussed.

  • 62.
    Kumar Das, Ashok
    et al.
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad, India.
    Wazid, Mohammad
    Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Karnataka, India.
    Kumar, Neeraj
    Department of Computer Science and Engineering, Thapar University, Patiala, India.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rodrigues, Joel J. P. C.
    National Institute of Telecommunications (Inatel), Brazil; Instituto de Telecomunicações, Portugal; University of Fortaleza (UNIFOR), Brazil.
    Biometrics-Based Privacy-Preserving User Authentication Scheme for Cloud-Based Industrial Internet of Things Deployment (2018). In: IEEE Internet of Things Journal, ISSN 2327-4662, Vol. 5, no 6, p. 4900-4913. Article in journal (Refereed)
    Abstract [en]

    Due to the widespread popularity of Internet-enabled devices, the Industrial Internet of Things (IIoT) has become popular in recent years. However, as smart devices share information with each other over an open channel, i.e., the Internet, the security and privacy of the shared information remain a paramount concern. Some solutions exist in the literature for preserving security and privacy in the IIoT environment. However, due to their heavy computation and communication overheads, these solutions may not be applicable to a wide category of applications in the IIoT environment. Hence, in this paper, we propose a new Biometric-based Privacy Preserving User Authentication (BP2UA) scheme for cloud-based IIoT deployment. BP2UA consists of strong authentication between users and smart devices using pre-established key agreement between smart devices and the gateway node. A formal security analysis of BP2UA using the well-known ROR model is provided to prove its session key security. Moreover, an informal security analysis of BP2UA is also given to show its robustness against various types of known attacks. The computation and communication costs of BP2UA, in comparison to other existing schemes of its category, demonstrate its effectiveness in the IIoT environment. Finally, a practical demonstration of BP2UA is also provided using NS2 simulation.

  • 63.
    Zhang, Bo
    et al.
    Nanjing University of Posts and Telecommunications.
    Wang, Yufeng
    Nanjing University of Posts and Telecommunications.
    Wei, Li
    Nanjing University of Posts and Telecommunications.
    Jin, Qun
    Department of Human Informatics and Cognitive Sciences, Waseda University, Saitama.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    BLE mesh: A practical mesh networking development framework for public safety communications (2018). In: Tsinghua Science and Technology, ISSN 1007-0214, E-ISSN 1878-7606, Vol. 23, no 3, p. 333-346. Article in journal (Refereed)
    Abstract [en]

    Owing to advanced storage and communication capabilities today, smart devices have become the basic interface between individuals and their surrounding environment. In particular, massive numbers of devices connect to one another directly in a proximity area, thereby enabling abundant Proximity Services (ProSe), which can be classified into two categories: public safety communication and social discovery. However, two challenges impede the quick development and deployment of ProSe applications. From the viewpoint of networking, no multi-hop connectivity functionality component can be directly operated on commercial off-the-shelf devices, and from the programming viewpoint, an easily reusable development framework is lacking for developers with minimal knowledge of the underlying communication technologies and connectivity. Considering these two issues, this paper makes a twofold contribution. First, multi-hop mesh networking based on Bluetooth Low Energy (BLE) is implemented, in which a proactive routing mechanism with link-quality (i.e., received signal strength indication) assistance is designed. Second, a ProSe development framework called BLE Mesh is designed and implemented, which can provide significant benefits for application developers, framework maintenance professionals, and end users. Rich application programming interfaces can help developers build ProSe apps easily and quickly. The dependency inversion principle and template method pattern allow modules in BLE Mesh to be loosely coupled and easy to maintain and update. A callback mechanism enables modules to work smoothly together, and automation processes such as registration, node discovery, and messaging offer nearly zero-configuration for end users. Finally, based on the designed ProSe development kit, a public safety communications app called QuoteSendApp is built to distribute emergency information in a proximity area without Internet access. The process illustrates the ease of using BLE Mesh to develop ProSe apps.

  • 64.
    Lin, Chao
    et al.
    Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University.
    He, Debiao
    Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University.
    Huang, Xinyi
    School of Mathematics and Computer Science, Fujian Normal University.
    Choo, Kim-Kwang Raymond
    The University of Texas at San Antonio.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    BSeIn: A blockchain-based secure mutual authentication with fine-grained access control system for industry 4.0 (2018). In: Journal of Network and Computer Applications, ISSN 1084-8045, E-ISSN 1095-8592, Vol. 116, p. 42-52. Article in journal (Refereed)
    Abstract [en]

    To be prepared for the ‘Industry 4.0’ era, we propose a hierarchical framework comprising four tangible layers, which is designed to vertically integrate inter-organizational value networks, the engineering value chain, manufacturing factories, etc. The conceptual framework allows us to efficiently implement a flexible and reconfigurable smart factory. However, we need to consider security issues inherent in existing (stand-alone) devices and networks as well as those that may arise from such integrations; in particular, existing solutions are insufficient to address these fundamental security concerns. Thus, we present a blockchain-based system for secure mutual authentication, BSeIn, to enforce fine-grained access control policies. The proposed system (with integrated attribute signature, multi-receiver encryption and message authentication code) is designed to provide privacy and security guarantees such as anonymous authentication, auditability, and confidentiality. BSeIn also scales well due to the utilization of smart contracts. We then evaluate the security and performance of BSeIn. For example, findings from the performance evaluation demonstrate that the Initialization/Request Issuance/Chain Transaction/State Delivery/Permission Update phases only cost 12.123/4.810/6.978/0.013/2.559 s, respectively.

  • 65.
    Monrat, Ahmed Afif
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Challenges and Opportunities of Using Big Data for Assessing Flood Risks (2018). In: Applications of Big Data Analytics: Trends, Issues, and Challenges / [ed] Mohammed M. Alani, Hissam Tawfik, Mohammed Saeed, Obinna Anya, Cham: Springer, 2018, p. 31-42. Chapter in book (Refereed)
    Abstract [en]

    Among the various natural calamities, flood is considered one of the most catastrophic natural hazards, with a disastrous impact on the socioeconomic lifeline of a country. Nowadays, business organizations are using Big Data to improve their strategies and operations, revealing patterns and market trends to increase revenues. Eventually, the crisis response teams of a country have turned their interest to exploring the potential of Big Data for managing disaster risks such as flooding. The reason for this is that during flooding, crisis response teams need to take decisions based on a huge amount of incomplete and inaccurate information, which mainly comes from three major sources: people, machines, and organizations. Hence, Big Data technologies can be used to monitor and to determine the people exposed to the risks of flooding in real time. This could be achieved by analyzing and processing sensor data streams coming from various sources as well as data collected from other sources such as Twitter, Facebook, and satellites, and also from disaster organizations of a country, by using Big Data technologies. Therefore, this chapter explores the challenges, the opportunities, and the methods required to leverage the potential of Big Data to assess and predict the risk of flooding.

  • 66.
    Cleland, Ian
    et al.
    Ulster University.
    Donnelly, Mark
    Ulster University.
    Nugent, Chris
    Ulster University.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Espinilla, Macarena
    University of Jaen.
    García-Constantino, Matías
    Ulster University.
    Collection of a Diverse, Naturalistic and Annotated Dataset for Wearable Activity Recognition (2018). In: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2018, p. 555-560. Conference paper (Refereed)
    Abstract [en]

    This paper discusses the opportunities and challenges associated with the collection of a large scale, diverse dataset for Activity Recognition. The dataset was collected by 141 undergraduate students, in a controlled environment. Students collected triaxial accelerometer data from a wearable accelerometer whilst each carrying out 3 of the 18 investigated activities, categorized into 6 scenarios of daily living. This data was subsequently labelled, anonymized and uploaded to a shared repository. This paper presents an analysis of data quality through outlier detection and assesses the suitability of the dataset for the creation and validation of Activity Recognition models. This is achieved through the application of a range of common data driven machine learning approaches. Finally, the paper describes challenges identified during the data collection process and discusses how these could be addressed. Issues surrounding data quality were identified, in particular the need to identify and address poor calibration of the data. Results highlight the potential of harnessing these diverse data for Activity Recognition. Based on a comparison of six classification approaches, a Random Forest provided the best classification (F-measure: 0.88). In future data collection cycles, participants will be encouraged to collect a set of “common” activities, to support generation of a larger homogeneous dataset. Future work will seek to refine the methodology further and to evaluate the models on new, unseen data.

  • 67.
    Cleland, I.
    et al.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Donnelly, M.P.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Nugent, C.D.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Espinilla, M.
    Department of Computer Science, University of Jaen, Jaen, Spain.
    Garcia-Constantino, M.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Collection of a Diverse, Realistic and Annotated Dataset for Wearable Activity Recognition (2018). In: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2018, IEEE, 2018, p. 555-560, article id 8480322. Conference paper (Refereed)
    Abstract [en]

    This paper discusses the opportunities and challenges associated with the collection of a large scale, diverse dataset for Activity Recognition. The dataset was collected by 141 undergraduate students, in a controlled environment. Students collected triaxial accelerometer data from a wearable accelerometer whilst each carrying out 3 of the 18 investigated activities, categorized into 6 scenarios of daily living. This data was subsequently labelled, anonymized and uploaded to a shared repository. This paper presents an analysis of data quality through outlier detection and assesses the suitability of the dataset for the creation and validation of Activity Recognition models. This is achieved through the application of a range of common data driven machine learning approaches. Finally, the paper describes challenges identified during the data collection process and discusses how these could be addressed. Issues surrounding data quality were identified, in particular the need to identify and address poor calibration of the data. Results highlight the potential of harnessing these diverse data for Activity Recognition. Based on a comparison of six classification approaches, a Random Forest provided the best classification (F-measure: 0.88). In future data collection cycles, participants will be encouraged to collect a set of 'common' activities, to support generation of a larger homogeneous dataset. Future work will seek to refine the methodology further and to evaluate the models on new, unseen data.

  • 68.
    Zhang, Wenbin
    et al.
    School of Mathematical Sciences, Yangzhou University, Jiangsu.
    Tang, Yang
    Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai.
    Huang, Tingwen
    Texas A & M University at Qatar.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Consensus of Networked Euler-Lagrange Systems under Time-Varying Sampled-Data Control (2018). In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 14, no 2, p. 535-544. Article in journal (Refereed)
    Abstract [en]

    This paper is concerned with the consensus of multiple Euler-Lagrange systems with pulse-width modulated sampled-data control. Different from traditional sampled-data strategies, a pulse-modulated sampled-data strategy is developed to realize the consensus of multiple Euler-Lagrange systems, in which a pulse function that can be distinct at different sampling instants is proposed to modulate the sampling interval. In addition, a new definition of the average sampling interval, which is parallel to the average dwell time in switching control or the average impulsive interval in impulsive control, is proposed to characterize the number of updates of the sampled-data controller during a given interval. The proposed average sampling interval makes our sampled-data strategy more suitable for a wide range of sampling signals. By utilizing the comparison principle, a sufficient criterion is obtained to guarantee the consensus of multiple Euler-Lagrange systems. The sufficient criterion is heavily dependent on the actual control duration time, the upper and lower bounds of the pulse function and the communication graph. Finally, a simulation example is presented to verify the applicability of the proposed results.
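    The abstract defines the average sampling interval only by analogy with the average dwell time and the average impulsive interval. A plausible formalization in that spirit is sketched below; this is an assumption for orientation, not the paper's exact definition.

    ```latex
    % Plausible formalization by analogy with the average impulsive interval
    % (an assumption, not quoted from the paper). N(t,s): number of controller
    % updates on (s,t]; \tau_a: average sampling interval; N_0: a chatter bound.
    \[
      \frac{t-s}{\tau_a} - N_0 \;\le\; N(t,s) \;\le\; \frac{t-s}{\tau_a} + N_0,
      \qquad \forall\, t \ge s \ge 0 .
    \]
    ```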

  • 69.
    Li, YH
    et al.
    Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, China. Tongji Univ, Key Lab Embedded Syst & Serv Comp, Minist Educ, Shanghai 201804, China.
    Shi, XY
    Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, China.
    Lindgren, Anders
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. RISE SICS, Kista.
    Hu, Z
    Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, China. Tongji Univ, Key Lab Embedded Syst & Serv Comp, Minist Educ, Shanghai, China.
    Zhang, P
    Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, China.
    Jin, D
    Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, China.
    Zhou, YC
    Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing, China.
    Context-Aware Data Dissemination for ICN-Based Vehicular Ad Hoc Networks (2018). In: Information (Switzerland), ISSN 2078-2489, Vol. 9, no 11, article id 263. Article in journal (Refereed)
    Abstract [en]

    Information-centric networking (ICN) technology matches many major requirements of vehicular ad hoc networks (VANETs) in terms of its connectionless networking paradigm accordant with the dynamic environments of VANETs and is increasingly being applied to VANETs. However, wireless transmissions of packets in VANETs using ICN mechanisms can lead to broadcast storms and channel contention, severely affecting the performance of data dissemination. At the same time, frequent changes of topology due to driving at high speeds and environmental obstacles can also lead to link interruptions when too few vehicles are involved in data forwarding. Hence, balancing the number of forwarding vehicular nodes and the number of copies of packets that are forwarded is essential for improving the performance of data dissemination in information-centric networking for vehicular ad-hoc networks. In this paper, we propose a context-aware packet-forwarding mechanism for ICN-based VANETs. The relative geographical position of vehicles, the density and relative distribution of vehicles, and the priority of content are considered during the packet forwarding. Simulation results show that the proposed mechanism can improve the performance of data dissemination in ICN-based VANET in terms of a successful data delivery ratio, packet loss rate, bandwidth usage, data response time, and traversed hops.

  • 70.
    Schmidt, Mischa
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. NEC Laboratories Europe.
    Schülke, Anett
    NEC Laboratories Europe.
    Venturi, Alberto
    NEC Laboratories Europe.
    Kurpatov, Roman
    NEC Laboratories Europe.
    Henríquez, Enrique Blanco
    NEC Laboratories Europe.
    Cyber-Physical System For Energy Efficient Stadium Operation: Methodology And Experimental Validation (2018). In: ACM Transactions on Cyber-Physical Systems, ISSN 2378-962X, Vol. 2, no 4, article id 25. Article in journal (Refereed)
    Abstract [en]

    The environmental impacts of medium to large scale buildings receive substantial attention in research, industry, and media. This paper studies the energy savings potential of a commercial soccer stadium during day-to-day operation. Buildings of this kind are characterized by special purpose system installations like grass heating systems and by event-driven usage patterns. This work presents a methodology to holistically analyze the stadium's characteristics and integrate its existing instrumentation into a Cyber-Physical System, enabling different control strategies to be deployed flexibly. In total, seven different strategies for controlling the studied stadium's grass heating system are developed and tested in operation. Experiments in winter season 2014/2015 validated the strategies' impacts within the real operational setup of the Commerzbank Arena, Frankfurt, Germany. With 95% confidence, these experiments saved up to 66% of median daily weather-normalized energy consumption. Extrapolated to an average heating season, this corresponds to savings of 775 MWh and 148 t of CO2 emissions. In winter 2015/2016 an additional predictive nighttime heating experiment targeted lower temperatures, which increased the savings to up to 85%, equivalent to 1 GWh (197 t CO2) in an average winter. Beyond achieving significant energy savings, the different control strategies also met the target temperature levels to the satisfaction of the stadium's operational staff. While the case study constitutes a significant part, the discussions dedicated to the transferability of this work to other stadiums and other building types show that the concepts and the approach are of general nature. Furthermore, this work demonstrates the first successful application of Deep Belief Networks to regress and predict the thermal evolution of building systems.

  • 71.
    Zhou, Fanfu
    et al.
    Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University.
    Qi, Zhengwei
    School of Software, Shanghai Jiao Tong University, Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University.
    Yao, Jianguo
    Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Guan, Haibing
    School of Software, Shanghai Jiao Tong University, Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University.
    D2FL: Design and Implementation of Distributed Dynamic Fault Localization (2018). In: IEEE Transactions on Dependable and Secure Computing, ISSN 1545-5971, E-ISSN 1941-0018, Vol. 15, no 3, p. 378-392. Article in journal (Refereed)
    Abstract [en]

    Compromised or misconfigured routers have been a major concern in large-scale networks. Such routers sabotage packet delivery, and thus hurt network performance. Data-plane fault localization (FL) promises to solve this problem. Regrettably, the path-based FL fails to support dynamic routing, and the neighbor-based FL requires a centralized trusted administrative controller (AC) or global clock synchronization in each router and introduces storage overhead for caching packets. To address these problems, we introduce a dynamic distributed and low-cost model, D2FL. Using random 2-hop neighborhood authentication, D2FL supports volatile path without the AC or global clock synchronization. Besides, D2FL requires only constant tens of KB for caching which is independent of the packet transmission rate. This is much less than the cache size of DynaFL or DFL which consumes several MB. The simulations show that D2FL achieves low false positive and false negative rate with no more than 3% bandwidth overhead. We also implement an open source prototype and evaluate its effect. The result shows that the performance burden in user space is less than 10% with the dynamic sampling algorithm. 

  • 72.
    Kim, Joochan
    et al.
    Department of Life Media, Ajou University.
    Seo, Jungryul
    Department of Computer Engineering, Ajou University.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Detecting boredom from eye gaze and EEG (2018). In: Biomedical Signal Processing and Control, ISSN 1746-8094, E-ISSN 1746-8108, Vol. 46, p. 302-313. Article in journal (Refereed)
    Abstract [en]

    The recent proliferation of affordable physiological sensors has boosted research and development efforts of emotion-aware systems. Boredom has received relatively little attention as a target emotion, and we identified a lack of research on the relationship between eye gaze and electroencephalogram (EEG) when people feel bored. To investigate this matter, we first conducted a background study on boredom and its detection by physiological methods. Then, we designed and executed an experiment that uses a video stimulus – specifically designed for this experiment, yet general enough for other boredom research – with an eye tracker and EEG sensor to elicit and detect boredom. Moreover, a questionnaire was used to confirm the existence of boredom. The experiment was based on a hypothesis that participants may feel bored when their gaze deviates from an expected area of interest, thus indicating loss of attention. The results of the experiment indicated correlations between eye gaze data and EEG data with all participants (N = 13) when they felt bored. This study can be useful for researchers who have interest in developing boredom-aware systems.
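    The gaze-related hypothesis stated above (boredom as sustained gaze deviation from an expected area of interest) can be sketched as a simple dwell-time rule. The AOI geometry, sampling rate and dwell threshold are illustrative assumptions; the study itself combines such gaze features with EEG and questionnaire data.

    ```python
    # Hedged sketch: flag potential boredom once the gaze point has stayed outside
    # the expected area of interest (AOI) for a sustained period.

    def outside_aoi(gaze_xy, aoi):
        (x, y), (left, top, right, bottom) = gaze_xy, aoi
        return not (left <= x <= right and top <= y <= bottom)

    def boredom_flags(gaze_samples, aoi, sample_rate_hz=60, dwell_s=2.0):
        """Return True for each sample once the gaze has been off-AOI for dwell_s seconds."""
        needed = int(dwell_s * sample_rate_hz)
        flags, run = [], 0
        for xy in gaze_samples:
            run = run + 1 if outside_aoi(xy, aoi) else 0
            flags.append(run >= needed)
        return flags

    aoi = (400, 200, 880, 520)                    # screen-pixel rectangle (assumed)
    samples = [(500, 300)] * 100 + [(100, 100)] * 200
    print(sum(boredom_flags(samples, aoi)))       # number of samples flagged
    ```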

  • 73.
    Tang, Rui
    et al.
    Department of Computer and Information Science, University of Macau.
    Fong, Simon
    Department of Computer and Information Science, University of Macau.
    Deb, Suash
    INNS India Regional Chapter.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Millham, Richard C.
    Department of Information Technology, Durban University of Technology.
    Dynamic Group Optimisation Algorithm for Training Feed-Forward Neural Networks (2018). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 314, p. 1-19. Article in journal (Refereed)
    Abstract [en]

    Feed-forward neural networks are efficient at solving various types of problems. However, finding efficient training algorithms for feed-forward neural networks is challenging. The dynamic group optimisation (DGO) algorithm is a recently proposed half-swarm, half-evolutionary algorithm, which exhibits a rapid convergence rate and good performance in searching and avoiding local optima. In this paper, we propose a new hybrid algorithm, FNNDGO, that integrates the DGO algorithm into a feed-forward neural network. DGO plays an optimisation role in training the neural network, by tuning parameters to their optimal values and configuring the structure of feed-forward neural networks. The performance of the proposed algorithm was determined by comparing it with other training methods in solving two types of problems. The experimental results show that our proposed algorithm exhibits promising performance for solving real-world problems.
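    The general recipe described above (treating network training as a black-box optimization over the weight vector) can be sketched with a toy population-based loop. The simple keep-the-best/perturb loop below is a stand-in; DGO's actual group dynamics and FNNDGO's structure tuning are not reproduced here.

    ```python
    # Hedged sketch: train a tiny feed-forward network by population-based search
    # over its flattened weight vector (a stand-in for DGO, not DGO itself).
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, (200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)        # toy XOR-like target

    N_HIDDEN = 8
    N_W = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1     # weights and biases, flattened

    def forward(w, X):
        W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
        b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
        W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
        b2 = w[-1]
        h = np.tanh(X @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def loss(w):
        return np.mean((forward(w, X) - y) ** 2)

    pop = rng.normal(0, 1, (30, N_W))
    for _ in range(200):
        scores = np.array([loss(w) for w in pop])
        parents = pop[np.argsort(scores)[:10]]                      # keep the best
        children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, N_W))
        pop = np.vstack([parents, children])
    print(f"best training MSE: {min(loss(w) for w in pop):.3f}")
    ```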

  • 74.
    Nygren, Eeva
    et al.
    University of Turku.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Sutinen, Erkki
    University of Turku.
    Dynamics between Disturbances and Motivations in Educational Mobile Games (2018). In: International Journal of Interactive Mobile Technologies (iJIM), ISSN 1865-7923, E-ISSN 1865-7923, Vol. 12, no 3, p. 120-141. Article in journal (Refereed)
    Abstract [en]

    Understanding engagement in games provides great opportunities for developing motivating educational games. However, even good games may induce disturbances on the learner. Therefore, we go further than presenting only results and discussion related to the motivation aspects and disturbance factors of the playing experience in the UFractions (Ubiquitous fractions) storytelling mobile game. Namely, we define the dynamics between these two important game features. The sample of the case study consisted of 305 middle school pupils in South Africa, Finland, and Mozambique.

    Guidelines for game developers, users and educators were derived from the interplay of disturbance factors and motivations. Furthermore, we defined six different learning zones derived from the disturbances the player is facing and the player's motivation level.

  • 75.
    Siddiquee, Kazy Noor E Alam
    et al.
    University of Science and Technology, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Moreno Arrebola, Francisco Javier
    HeidelbergCement, Spain.
    Abedin, Md. Zainal
    University of Science and Technology, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Estimation of Signal Coverage and Localization in Wi-Fi Networks with AODV and OLSR (2018). In: Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications, ISSN 2093-5374, E-ISSN 2093-5382, Vol. 9, no 3, p. 11-24, article id 2. Article in journal (Refereed)
    Abstract [en]

    For the estimation of signal coverage and localization, path loss is the major component of the link budget of any communication system. Instead of traditional Doppler shift or Doppler spread techniques, path loss has been chosen in this research to measure signal coverage and localization for IEEE 802.11 (Wi-Fi) signals at 2.5 and 5 GHz. A Wi-Fi system was deployed in a MANET (Mobile Ad hoc NETwork), involving both mobile and stationary nodes. The ad hoc network was also assessed in a routing environment under the AODV and OLSR protocols. The proposal was evaluated using the OPNET Modeler simulation environment.
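    The abstract does not reproduce the propagation model used, but path-loss-based coverage estimation for Wi-Fi is commonly built on the log-distance model shown below; it is given as general background (an assumption about the general form, not the paper's exact parameters).

    ```latex
    % Standard log-distance path-loss model often used for Wi-Fi link budgets;
    % general background, not the paper's exact formulation.
    % PL(d): path loss at distance d, d_0: reference distance, n: path-loss
    % exponent, X_\sigma: zero-mean Gaussian shadowing term (all in dB).
    \[
      \mathrm{PL}(d) \;=\; \mathrm{PL}(d_0) \;+\; 10\, n \,\log_{10}\!\frac{d}{d_0} \;+\; X_\sigma .
    \]
    ```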

  • 76.
    Schmidt, Mischa
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. NEC Laboratories Europe.
    EVOX-CPS: A Methodology For Data-Driven Optimization Of Building Operation (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Existing building stock’s energy efficiency must improve due to its significant proportion of the global energy consumption mix. Predictive building control promises to increase the efficiency of buildings during their operational phase and thus lead to a reduction of the lion’s share of buildings’ lifetime energy consumption. Predictive control complements other means to increase performance, such as refurbishments as well as modernization of systems.

    This thesis contributes EVOX-CPS, a holistic methodology to develop data-driven predictive control for (existing) buildings and deploy the control in day-to-day use. EVOX-CPS evolves buildings into Cyber-Physical Systems and addresses the development of data-driven predictive control using computational methods. The thesis’ focus rests on accounting for the situation of existing buildings - which vary greatly regarding their physical characteristics, usage patterns, system installation, and instrumentation levels. The methodology addresses the aspect of building stock variety with its capability to flexibly adapt to different buildings’ characteristics, e.g., by supporting the integration of varying levels of pre-existing building instrumentation. Furthermore, EVOX-CPS supports using different data mining, regression, or control techniques (i) to strengthen the support for a variety of buildings, and (ii) to cater to researchers’ and practitioners’ differing skills, experiences, or preferences concerning different data analysis techniques. Through its flexibility, the methodology addresses a vast potential installation base and lowers the barriers for adoption in day-to-day use, e.g., by being able to leverage prior investments in building instrumentation and supporting different data-analysis techniques. At the same time, EVOX-CPS provides researchers and practitioners with comprehensive guidance relevant to their daily work. Besides, EVOX-CPS supports addressing a building’s known limitations in the daily operation, e.g., uncomfortable indoor conditions.

    The experimentation in two real buildings validates the effectiveness of EVOX-CPS’ data-driven control with high reliability due to prolonged experimentation periods combined with applying energy normalization and inferential statistics. The experiments during routine heating system operation establish high confidence in the recorded effect sizes: the improvements in operational efficiency are profound and statistically significant. More specifically, the experiments of controlling the grass heating system of the soccer stadium Commerzbank Arena, Frankfurt, Germany, in two winters saved up to 66% (2014/2015) and 85% (2015/2016) of energy consumption. Extrapolation to an average heating season leads to expected savings of 775 MWh (148 t of CO2 emissions) and 1 GWh (197 t CO2), respectively. The experiments also show that EVOX-CPS allowed alleviating the known operational limitation of heating supply shortages which required nightly preheating in the stadium’s standard operating procedures. In another set of experiments, we applied the methodology to control the heating system of the Sierra Elvira School in Granada, Spain. The experimentation occurred during the regular class hours of 43 school days in winter 2015/2016. A first experiment demonstrated the possibility to lower consumption by one-third while maintaining indoor comfort. Another experiment raised average indoor temperatures by 2K with 5% additional energy consumption. Again, that illustrates EVOX-CPS’ capability to address a building’s known operational issues.

  • 77.
    Lindberg, Renny
    et al.
    Vrije Universiteit Brussel.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Formative evaluation of an adaptive game for engaging learners of programming concepts in K-122018In: International journal of shape modeling, ISSN 0218-6543, Vol. 5, no 2, p. 3-26Article in journal (Refereed)
    Abstract [en]

    As the global demand for programmers is soaring, several countries have integrated programming into their K-12 curricula. Finding effective ways to engage children in programming education is an important objective. One effective method for this can be presenting learning materials via games, which are known to increase engagement and motivation. Current programming education games often focus on a single genre and offer a one-size-fits-all experience to heterogeneous learners. In this study, we presented Minerva, a multi-genre (adventure, action, puzzle) game to engage elementary school students in learning programming concepts. The game content is adapted to the player’s play and learning styles to personalize the gameplay. We conducted a formative mixed-method evaluation of Minerva with 32 Korean 6th grade students who played the game and compared their learning outcomes with those of 32 6th grade students who studied the same concepts using handouts. The results indicated that, in terms of retention, learning was equally effective in both groups. Furthermore, the game was shown to facilitate engagement among the students. These results, together with the uncovered issues, will guide Minerva’s further development.

  • 78.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Andersson, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    From Tele-remote Operation to Semi-automated Wheel-loader2018In: International Journal of Electrical and Electronic Engineering and Telecommunications, ISSN 2319-2518, Vol. 7, no 4, p. 178-182Article in journal (Refereed)
    Abstract [en]

    This paper presents experimental results with tele-remote operation of a wheel-loader and proposes a method to semi-automate the process. The different components of the tele-remote setup are described in the paper. We focus on the short loading cycle, which is commonly used at quarry and construction sites for moving gravel from piles onto trucks. We present results from short-loading-cycle experiments with three operators, comparing productivity between tele-remote operation and manual operation. A productivity loss of 42% with tele-remote operation motivates the case for more automation. We propose a method to automate the bucket-filling process, which is one of the key operations performed by a wheel-loader.

  • 79.
    Kostenius, Catrine
    et al.
    Luleå University of Technology, Department of Health Sciences, Health and Rehabilitation.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Lindqvist, Anna-Karin
    Luleå University of Technology, Department of Health Sciences, Health and Rehabilitation.
    Gamification of health education: Schoolchildren’s participation in the development of a serious game to promote health and learning2018In: Health Education, ISSN 0965-4283, E-ISSN 1758-714X, Vol. 118, no 4, p. 354-368Article in journal (Refereed)
    Abstract [en]

    Purpose

    The use of modern technology has many challenges and risks. However, by collaborating with schoolchildren, ideas to effectively promote health and learning in school can be identified. This study aimed to examine how a participatory approach can deepen the understanding of how schoolchildren relate to and use gamification as a tool to promote physical activity and learning.

    Design/methodology/approach

    Inspired by the concept and process of empowerment and child participation, the methodological focus of this study was on consulting schoolchildren. During a 2-month period, 18 schoolchildren (10–12 years old) participated in workshops to create game ideas that would motivate them to be physically active and learn in school.

    Findings

    The phenomenological analysis resulted in one main theme, ‘Playing games for fun to be the best I can be’. This consisted of four themes with two sub-themes each. The findings offer insights on how to increase physical activity and health education opportunities using serious games in school.

    Originality/value

    The knowledge gained provides gamification concepts and combinations of different technological applications to increase health and learning, as well as motivational aspects suggested by the schoolchildren. The findings are discussed with health promotion and health education in mind.

  • 80.
    Cao, Liang
    et al.
    Nanjing University of Posts and Telecommunications, China.
    Wang, Yufeng
    Nanjing University of Posts and Telecommunications, China.
    Zhang, Bo
    Nanjing University of Posts and Telecommunications, China.
    Jin, Qun
    Waseda University, Japan.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    GCHAR: An efficient Group-based Context–aware human activity recognition on smartphone2018In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 118, no part-1, p. 67-80Article in journal (Refereed)
    Abstract [en]

    With smartphones increasingly becoming ubiquitous and being equipped with various sensors, there is now a trend towards implementing HAR (Human Activity Recognition) algorithms and applications on smartphones, including health monitoring, self-managing systems and fitness tracking. However, one of the main issues of existing HAR schemes is that the classification accuracy is relatively low, and improving the accuracy requires high computation overhead. In this paper, GCHAR, an efficient Group-based Context-aware classification method for human activity recognition on smartphones, is proposed, which exploits a hierarchical group-based scheme to improve classification efficiency and reduces classification errors through context awareness rather than intensive computation. Specifically, GCHAR designs a two-level hierarchical classification structure, i.e., inter-group and inner-group, and utilizes the previous state and transition logic (so-called context awareness) to detect transitions among activity groups. In comparison with other popular classifiers such as RandomTree, Bagging, J48, BayesNet, KNN and Decision Table, thorough experiments on a realistic dataset (the UCI HAR repository) demonstrate that GCHAR achieves the best classification accuracy, reaching 94.1636%, while its training time is four times shorter than that of the simple Decision Table and its classification time is reduced by 72.21% in comparison with BayesNet.
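
    For illustration, a minimal sketch of a two-level (inter-group/inner-group) classifier with a context-aware transition check is given below. The activity groups, the scikit-learn classifiers and the allowed-transition table are assumptions for the sketch, not the authors’ exact design.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Sketch of a two-level activity classifier with a context-aware transition check,
    # loosely following the GCHAR idea. Groups, classifiers and transitions are assumed.
    GROUPS = {"lying": ["lying"],
              "static": ["sitting", "standing"],
              "moving": ["walking", "walking_upstairs", "walking_downstairs"]}
    ACTIVITY_TO_GROUP = {a: g for g, acts in GROUPS.items() for a in acts}
    # Context awareness: plausible group-to-group transitions between consecutive windows.
    ALLOWED = {"lying": {"lying", "static"},
               "static": {"lying", "static", "moving"},
               "moving": {"static", "moving"}}

    class GroupedHAR:
        def __init__(self):
            self.group_clf = RandomForestClassifier(n_estimators=50, random_state=0)
            self.inner = {g: RandomForestClassifier(n_estimators=50, random_state=0)
                          for g in GROUPS}

        def fit(self, X, y):
            """X: numpy array of feature vectors, y: array of activity labels."""
            groups = np.array([ACTIVITY_TO_GROUP[a] for a in y])
            self.group_clf.fit(X, groups)                     # level 1: inter-group
            for g in GROUPS:                                  # level 2: inner-group
                self.inner[g].fit(X[groups == g], np.asarray(y)[groups == g])
            return self

        def predict_step(self, x, prev_group=None):
            """Classify one feature vector x, rejecting implausible group transitions."""
            g = self.group_clf.predict(x.reshape(1, -1))[0]
            if prev_group is not None and g not in ALLOWED[prev_group]:
                g = prev_group                                # fall back to previous group
            return g, self.inner[g].predict(x.reshape(1, -1))[0]

    The design intent illustrated here is that cheap transition logic, rather than a heavier classifier, corrects implausible first-level decisions.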

  • 81.
    Perera, Charith
    et al.
    School of Computing Science, Newcastle University, Newcastle, UK.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Calikli, Gul
    Chalmers University, Gothenburg, Sweden.
    Sheng, Quan Z.
    Department of Computing, Macquarie University, Sydney, Australia.
    Li, Kuan-Ching
    Department of Computer Science and Information Engineering (CSIE), Providence University, Taichung City, Taiwan.
    Guest Editorial Special Section on Engineering Industrial Big Data Analytics Platforms for Internet of Things2018In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 14, no 2, p. 744-747Article in journal (Refereed)
    Abstract [en]

    Over the last few years, a large number of Internet of Things (IoT) solutions have come to the IoT marketplace. Typically, each of these IoT solutions is designed to perform a single task or a minimal number of tasks (primary usage). We believe a significant amount of knowledge and insight is hidden in these data silos that can be used to improve our lives; such data include our behaviors, habits, preferences, life patterns, and resource consumption. To discover such knowledge, we need to acquire and analyze these data together at a large scale. To discover useful information and derive conclusions that support efficient and effective decision making, industrial IoT platforms need to support a variety of data analytics processes such as inspecting, cleaning, transforming, and modeling data, especially in a big data context. IoT middleware platforms have been developed in both academic and industrial settings to facilitate IoT data management tasks, including data analytics. However, engineering these general-purpose, industrial-grade big data analytics platforms requires addressing many challenges. We have accepted six manuscripts out of 24 submissions for this special section (25% acceptance rate) after a strict peer-review process. Each manuscript was blindly reviewed by at least three external reviewers before the decisions were made. The papers are briefly summarized.

  • 82.
    Synnes, Kåre
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Lilja, Margareta
    Luleå University of Technology, Department of Health Sciences, Health and Rehabilitation.
    Nyman, Anneli
    Luleå University of Technology, Department of Health Sciences, Health and Rehabilitation.
    Espinilla, Macarena
    Cleland, Ian
    Sanchez Comas, Andres Gabriel
    Comas Gonzalez, Zhoe Vanessa
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Karvonen, Niklas
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ourique de Morais, Wagner
    Cruciani, Federico
    Nugent, Chris
    H2Al - The Human Health and Activity Laboratory2018In: 12th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2018), Punta Cana, Dominican Republic, 4-7 December, 2018. / [ed] MDPI, MDPI, 2018, Vol. 2, article id 1241Conference paper (Refereed)
    Abstract [en]

    The Human Health and Activity Laboratory (H2Al) is a new research facility at Luleå University of Technology, implemented during 2018 as a smart home environment in an educational training apartment for nurses and therapists at the Luleå campus. This paper presents the design and implementation of the lab together with a discussion of its potential impact. The aim is to identify and overcome economic, technical and social barriers in order to achieve the envisioned good and equal health and welfare within and from home environments. The lab is equipped with multiple sensor and actuator systems: in the environment, worn by persons, and based on digital information. The systems allow for advanced capture, filtering, analysis and visualization of research data such as A/V, EEG, ECG, EMG, GSR, respiration and location, while being able to detect falls, sleep apnea and other critical health and wellbeing issues. The resulting studies will be aimed at supporting and equipping future home environments and care facilities, spanning from temporary care to primary care at hospitals, with technologies for detecting activities and critical health and wellness issues. The work will be conducted at an international level and within a European context, based on collaboration with other smart labs, such that experiments can be replicated at multiple sites. This paper presents some initial lessons learnt, including the design, setup and configuration for comparing sensor placements and configurations, as well as analytical methods.

  • 83.
    Ten, Chee-Wooi
    et al.
    Electrical and Computer Engineering Department, Michigan Technological University, Houghton, MI.
    Yamashita, Koji
    Electrical and Computer Engineering Department, Michigan Technological University, Houghton, MI.
    Yang, Zhiyuan
    Electrical and Computer Engineering Department, Michigan Technological University, Houghton, MI.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ginter, Andrew
    Waterfall Security Solutions.
    Impact Assessment of Hypothesized Cyberattacks on Interconnected Bulk Power Systems2018In: IEEE Transactions on Smart Grid, ISSN 1949-3053, E-ISSN 1949-3061, Vol. 9, no 5, p. 4405-4425Article in journal (Refereed)
    Abstract [en]

    The first-ever cyberattack on the Ukrainian power grid proved its devastating potential by hacking into critical cyber assets. With administrative privileges for accessing substation networks and local control centers, one intelligent form of coordinated cyberattack is to execute a series of disruptive switching actions on multiple substations using compromised supervisory control and data acquisition (SCADA) systems. These actions can cause significant impacts to an interconnected power grid. Unlike previous power blackouts, such high-impact initiating events can aggravate operating conditions, initiating instability that may lead to system-wide cascading failure. A systemic evaluation of “nightmare” scenarios is highly desirable for asset owners to manage and prioritize the maintenance of, and investment in, the protection of their cyberinfrastructure. This survey paper is a conceptual expansion of the real-time monitoring, anomaly detection, impact analyses, and mitigation (RAIM) framework that emphasizes the resulting impacts on both the steady-state and dynamic aspects of power system stability. Hypothetically, we associate the combinatorial analyses of steady-state substation/component outages and the dynamics of the sequential switching orders as part of the permutation. The expanded framework includes (1) critical/noncritical combination verification, (2) cascade confirmation, and (3) combination re-evaluation. The paper ends with a discussion of the open issues for metrics and future designs pertaining to the impact quantification of cyber-related contingencies.
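
    For illustration, the combinatorial screening step can be sketched as a brute-force enumeration of hypothesized substation-outage combinations, flagging those whose simulated consequences exceed a criticality threshold. The simulate_outage placeholder, the substation list and the threshold below are assumptions for the sketch, not the RAIM implementation.

    from itertools import combinations

    # Sketch: enumerate k-substation outage combinations and separate critical from
    # noncritical ones. simulate_outage() is a placeholder for a steady-state/dynamic
    # study; the substation list and the load-loss threshold are illustrative assumptions.
    SUBSTATIONS = ["S1", "S2", "S3", "S4", "S5"]
    CRITICAL_LOAD_LOSS_MW = 500.0

    def simulate_outage(outage_set):
        """Placeholder: estimated load loss (MW) for a set of outaged substations."""
        return 150.0 * len(outage_set)      # stand-in for a power-flow / cascading study

    def screen_combinations(k):
        critical, noncritical = [], []
        for combo in combinations(SUBSTATIONS, k):
            loss = simulate_outage(combo)
            (critical if loss >= CRITICAL_LOAD_LOSS_MW else noncritical).append((combo, loss))
        return critical, noncritical

    critical, _ = screen_combinations(k=4)   # all hypothesized 4-substation outages
    for combo, loss in critical:
        print(f"critical combination {combo}: estimated load loss {loss:.0f} MW")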

  • 84.
    Rho, Seungmin
    et al.
    Department of Media Software at Sungkyul University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Intelligent collaborative system and service in value network for enterprise computing2018In: Enterprise Information Systems, ISSN 1751-7575, E-ISSN 1751-7583, Vol. 12, no 1, p. 1-3Article in journal (Refereed)
  • 85.
    Bezerra, Nibia Souza
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ferreira Maciel, Tarcisio
    Federal University of Ceará.
    M. Lima, Francisco Rafael
    Federal University of Ceará.
    Sousa Jr., Vicente A.
    Federal University of Rio Grande do Norte.
    Interference Aware Resource Allocation with QoS Guarantees in OFDMA/SC-FDMA2018In: Journal of Communication and Information Systems, ISSN 1980-6604, Vol. 33, no 1, p. 124-128Article in journal (Refereed)
    Abstract [en]

    Efficient Radio Resource Allocation (RRA) is of utmost importance for achieving maximum capacity in mobile networks. However, the performance assessment should take into account the main constraints of these networks. This letter presents important enhancements to the RRA algorithms proposed in [1]. Prior work [1] ignores some important system constraints, such as the impact of inter-cell interference and the granularity of frequency allocation blocks. Here we show the performance degradation when these system constraints are imposed on the algorithms in [1], and we propose some improvements to these algorithms in order to achieve better performance.

  • 86.
    Cherkaoui, S
    et al.
    Université de Sherbrooke, Canada.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    LCN 2018 Message from the Program Chairs2018Conference proceedings (editor) (Refereed)
  • 87.
    Eriksson, Eva
    et al.
    Aarhus University, Denmark. Chalmers University of Technology, Sweden.
    Heath, Carl
    RISE Interactive, Gothenburg.
    Ljungstrand, Peter
    RISE Interactive, Gothenburg.
    Parnes, Peter
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Makerspace in school: Considerations from a large-scale national testbed2018In: International Journal of Child-Computer Interaction, ISSN 2212-8689, E-ISSN 2212-8697, Vol. 16, p. 9-15Article in journal (Refereed)
    Abstract [en]

    Digital fabrication and making have received growing interest in formal and informal learning environments. However, many of these initiatives start from a grassroots perspective, with little coordination on a national level. This paper illustrates and discusses a study from an ongoing large-scale national testbed in Sweden named Makerspace in schools (Makerskola). The project embodies a series of considerations that arise when a maker approach is applied to a geographically widespread national education context. The results of this study are based on an analysis of the extensive project documentation and first-hand experiences from initiating and running a large-scale national testbed in Sweden, involving more than 30 formal actors and more than one thousand active partners in a national educational setting. The main contribution of this paper is the identification and discussion of five considerations that emerged during the project: procurement practices, the teacher and leader perspective, informing national policy making, creating equal opportunities, and progression in digital fabrication.

  • 88.
    Kjällander, Susanne
    et al.
    Stockholm University.
    Åkerfeldt, Anna
    Stockholm University.
    Mannila, Linda
    Linköping University.
    Parnes, Peter
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Makerspaces Across Settings: Didactic Design for Programming in Formal and Informal Teacher Education in the Nordic Countries2018In: Journal of Digital Learning in Teacher Education, ISSN 2153-2974, Vol. 34, no 1, p. 18-30Article in journal (Refereed)
    Abstract [en]

    For education to provide knowledge that reflects our current and future society, many countries are revising their curricula, including a vivid discussion on digital competence, programming and computational thinking. This article builds an understanding of the maker movement in relation to programming education by demonstrating challenges and possibilities in the interface between Makerspaces and teacher education. Three different Nordic initiatives are presented and their designs for learning are analysed. The article illustrates how Makerspaces and teacher education can be transformed by each other, how Makerspaces can be used in programming activities, and what challenges and possibilities emerge in the meeting between the two. The results highlight a core aspect of the maker movement: authenticity. The designs for learning have different levels of authenticity, but in all cases authenticity has been a positive factor. These hands-on learning environments are designed to foster collaboration, the sharing of ideas, and innovation among people from different backgrounds, who together transform and form multimodal representations. In the interface between the formal and the informal lies a potential for inclusion and for creating spaces that reach individuals from different backgrounds. Mobile learning is a phenomenon that the maker movement together with teacher education can make use of, for example at practice schools, university campuses, mobile Makerspaces or through “open-door” approaches. In the digital environment learning is distributed, but collaboration between formal and informal education remains complicated to establish, meaning that the academy needs to find more creative and flexible ways of making connections outside the academy.

  • 89.
    Dirin, Amir
    et al.
    Haaga-Helia University of Applied Sciences, Helsinki, Finland.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Alamäki, Ari
    Haaga-Helia University of Applied Sciences, Helsinki, Finland.
    Managing Emotional Requirements in a Context-Aware Mobile Application for Tourists2018In: International Journal of Interactive Mobile Technologies (iJIM), ISSN 1865-7923, E-ISSN 1865-7923, Vol. 12, no 2, p. 177-196Article in journal (Refereed)
    Abstract [en]

    The objective of this study was to unveil the importance of emotions and feelings in developing mobile-based tourism applications. We gathered and analyzed emotional requirements to develop a mobile context-aware application for tourists. Emotional requirements are non-functional requirements affecting users’ emotional experiences of using applications, and they are important for sustainable application usage. Many tourism applications exist, but they were designed without considering emotional requirements or related UX factors and emotions. We developed a proof-of-concept prototype of a service-based context-aware tourism application (SCATA), and users participated in the design and evaluation processes. Emotional requirements are key to sustainable usage, especially regarding security. This paper details the application’s design and evaluation processes, the emotional requirements analysis in each design phase, and the emotional effects of content accessibility in the application’s offline mode in unknown environments. The results show that trust, security, adjustability, and reliability are important factors to users, especially in unknown environments.

  • 90.
    Li, Changle
    et al.
    State Key Laboratory of Integrated Services Networks, Xidian University.
    Zhang, Beibei
    State Key Laboratory of Integrated Services Networks, Xidian University.
    Yuan, Xiaoming
    State Key Laboratory of Integrated Services Networks, Xidian University.
    Ullah, Sana
    Department of Computer and Software Technology, University of Swat.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    MC-MAC: a multi-channel based MAC scheme for interference mitigation in WBANs2018In: Wireless networks, ISSN 1022-0038, E-ISSN 1572-8196, Vol. 24, no 3, p. 719-733Article in journal (Refereed)
    Abstract [en]

    Wireless body area networks (WBANs) support the interoperability of biomedical sensors and medical institutions with convenience and high efficiency, which makes them an appropriate solution for pervasive healthcare. Typically, WBANs comprise in-body or around-body sensor nodes for collecting physiological data. An efficient medium access control (MAC) protocol is therefore paramount for coordinating these devices and forwarding data to the medical center in an efficient and reliable way. However, the extensive use of the wireless channel and the coexistence of WBANs may result in inevitable interference, which causes performance degradation. Besides, contention-based access on a single channel in WBANs is less efficient for dense medical traffic on account of large packet delay, energy consumption and low-priority starvation. To address these issues, we propose a multi-channel MAC (MC-MAC) scheme to obtain better network performance. Considering the characteristics and emergency degree of medical traffic, we introduce a novel channel mapping and selection mechanism, cooperating with a conflict avoidance strategy, to organize nodes to access available channels without collisions. In addition, we have evaluated the performance of MC-MAC and the standard IEEE 802.15.6 via simulation and a hardware test. The test is conducted on a hardware platform based on a prototype WBAN system. Both the analysis and the simulation results show that MC-MAC outperforms IEEE 802.15.6 in terms of packet delay, throughput, packet error rate and frame error rate.
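
    As a rough illustration of the channel mapping and selection idea, the sketch below maps traffic emergency degrees to preferred channel groups and falls back to less critical groups when the preferred channels are busy. The priority classes, channel grouping and fallback rule are assumptions for the sketch and do not reproduce the MC-MAC specification.

    import random

    # Rough sketch of a priority-aware multi-channel selection rule for a WBAN hub.
    # Priority classes, channel grouping and the fallback rule are assumptions.
    CHANNEL_GROUPS = {
        "emergency": [0, 1],        # least contended channels, kept for critical traffic
        "medical":   [2, 3, 4],
        "routine":   [5, 6, 7],
    }
    PRIORITY_ORDER = ["emergency", "medical", "routine"]

    def select_channel(priority, busy_channels):
        """Pick a free channel for the given priority, falling back to lower-priority groups."""
        for group in PRIORITY_ORDER[PRIORITY_ORDER.index(priority):]:
            free = [ch for ch in CHANNEL_GROUPS[group] if ch not in busy_channels]
            if free:
                return random.choice(free)
        return None                 # no free channel: defer the transmission

    # Example: an emergency node senses that channels 0 and 2 are busy.
    print(select_channel("emergency", busy_channels={0, 2}))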

  • 91.
    Danjuma, Kwetishe Joro
    et al.
    Modibbo Adama University of Technology, Yola.
    Oyelere, Solomon Sunday
    University of Eastern Finland.
    Oyelere, Elisha Sunday
    Obafemi Awolowo University.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mobile application for Ebola virus disease diagnosis (EbolaDiag)2018In: Mobile Technologies and Socio-Economic Development in Emerging Nations / [ed] Fredrick Mtenzi; George S Oreku; Dennis M Lupiana; Jim James Yonazi, Hershey, Pennsylvania: IGI Global, 2018, p. 64-80Chapter in book (Refereed)
    Abstract [en]

    This chapter describes how the Ebola virus is considered extremely infectious and inflicts a series of physical and psychological traumas on its victims. Common clinical signs associated with the disease include a sudden fever, severe headaches, muscle pain, fatigue, diarrhea, vomiting, and unexplained hemorrhages. In Africa, with strained medical facilities and remote localities, prompt identification and diagnosis of the symptoms of Ebola in a suspected patient are important for controlling the epidemic and curtailing further spread. This chapter presents the development of an Android mobile application called EbolaDiag (Ebola Diagnosis), which is capable of supporting diagnosis and screening, and of assisting healthcare experts working on the frontline in contact tracing and monitoring the spread of Ebola. Furthermore, EbolaDiag is suitable for aiding the strained medical facilities in endemic areas. In addressing this gap, the application provides a model for implementing such solutions in pandemic environments. Such a solution becomes more relevant and useful for combating Ebola and several other diseases in similar environments.

  • 92.
    Laine, Teemu H.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mobile Educational Augmented Reality Games: A Systematic Literature Review and Two Case Studies2018In: Computers, E-ISSN 2073-431X, Vol. 7, no 1, article id 19Article in journal (Refereed)
    Abstract [en]

    Augmented reality (AR) has evolved from research projects into mainstream applications that cover diverse fields, such as entertainment, health, business, tourism and education. In particular, AR games, such as Pokémon Go, have contributed to introducing the AR technology to the general public. The proliferation of modern smartphones and tablets with large screens, cameras, and high processing power has ushered in mobile AR applications that can provide context-sensitive content to users whilst freeing them to explore the context. To avoid ambiguity, I define mobile AR as a type of AR where a mobile device (smartphone or tablet) is used to display and interact with virtual content that is overlaid on top of a real-time camera feed of the real world. Beyond being mere entertainment, AR and games have been shown to possess significant affordances for learning. Although previous research has done a decent job of reviewing research on educational AR applications, I identified a need for a comprehensive review on research related to educational mobile AR games (EMARGs). This paper explored the research landscape on EMARGs over the period 2012–2017 through a systematic literature review complemented by two case studies in which the author participated. After a comprehensive literature search and filtering, I analyzed 31 EMARGs from the perspectives of technology, pedagogy, and gaming. Moreover, I presented an analysis of 26 AR platforms that can be used to create mobile AR applications. I then discussed the results in depth and synthesized my interpretations into 13 guidelines for future EMARG developers.

  • 93.
    Cai, H.
    et al.
    School of Software, Shanghai JiaoTong University, Shanghai, China.
    Gu, Y.
    School of Software, Shanghai JiaoTong University, Shanghai, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Xu, B.
    College of Economics and Management, Shanghai JiaoTong University, Shanghai, China.
    Zhou, J.
    School of Software, Shanghai JiaoTong University, Shanghai, China.
    Model-Driven Development Patterns for Mobile Services in Cloud of Things2018In: IEEE Transactions On Cloud Computing, ISSN 2168-7161, Vol. 6, no 3, p. 771-784, article id 7399727Article in journal (Refereed)
    Abstract [en]

    Cloud of Things (CoT) is an integration of the Internet of Things (IoT) and cloud computing for intelligent and smart applications, especially in mobile environments. Model Driven Architecture (MDA) is used to develop Software as a Service (SaaS) so as to facilitate mobile application development by relieving developers from technical details. However, traditional service composition or mashups are of limited use due to complex relations and heterogeneous deployment environments. For the purpose of building cloud-enabled mobile applications in a configurable and adaptive way, Model-Driven Development Patterns based on a semantic reasoning mechanism are provided for CoT application development. Firstly, a meta-model covering both multi-view business elements and service components is provided for model transformation. Then, based on a formal representation of the models, three patterns from different tiers of the Model-View-Controller (MVC) framework are used to transform business models into a service component system so as to configure cloud services rapidly. Lastly, a related software platform is provided for verification. The results show that the platform is applicable for rapid system development by means of various service integration patterns.

  • 94.
    Rahimi, M. Reza
    et al.
    Huawei Innovation Center, US R&D Storage Lab, Santa Clara.
    Venkatasubramanian, Nalini
    School of Information and Computer Science, University of California, Irvine.
    Mehrotra, Sharad
    School of Information and Computer Science, University of California, Irvine.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On Optimal and Fair Service Allocation in Mobile Cloud Computing2018In: I E E E Transactions on Cloud Computing, ISSN 2168-7161, Vol. 6, no 3, p. 815-828Article in journal (Refereed)
    Abstract [en]

    This paper studies optimal and fair service allocation for a variety of mobile applications (single-user as well as group-based and collaborative mobile applications) in mobile cloud computing. We exploit the observation that using tiered clouds, i.e., clouds at multiple levels (local and public), can increase the performance and scalability of mobile applications. We propose a novel framework that models mobile applications as location-time workflows (LTWs) of tasks; here, users’ mobility patterns are translated into mobile service usage patterns. We show that an optimal mapping of LTWs to tiered cloud resources considering multiple QoS goals, such as application delay, device power consumption and user cost/price, is an NP-hard problem for both single and group-based applications. We propose an efficient heuristic algorithm called MuSIC that performs well (73% of optimal, 30% better than simple strategies) and scales well to a large number of users while ensuring high mobile application QoS. We evaluate MuSIC and the 2-tier mobile cloud approach via implementation (on real-world clouds) and extensive simulations using rich mobile applications such as intensive signal processing, video streaming and multimedia file sharing. We observe about 25% lower delays and power consumption (under fixed price constraints) and about a 35% decrease in price (considering fixed delay) in comparison to using only the public cloud. Our studies also show that MuSIC performs quite well under different mobility patterns, e.g., the random waypoint and Manhattan models.
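
    The kind of trade-off that MuSIC optimizes can be illustrated with a toy greedy allocation of workflow tasks to cloud tiers under a weighted QoS objective. The tier parameters, weights and greedy rule below are assumptions and are far simpler than the actual heuristic.

    # Toy greedy mapping of mobile-application tasks to cloud tiers (device, local or
    # public cloud), trading off delay, device power and price. The tier parameters,
    # QoS weights and greedy rule are assumptions; the real MuSIC heuristic is richer.
    TIERS = {
        # tier: (delay per unit work [s], device power per unit [J], price per unit [$])
        "device":       (0.5, 3.0, 0.00),
        "local_cloud":  (0.2, 0.5, 0.01),
        "public_cloud": (0.8, 0.5, 0.05),
    }
    WEIGHTS = {"delay": 0.5, "power": 0.3, "price": 0.2}    # normalized QoS weights

    def qos_cost(tier, work):
        delay, power, price = TIERS[tier]
        return (WEIGHTS["delay"] * delay + WEIGHTS["power"] * power
                + WEIGHTS["price"] * price) * work

    def greedy_allocate(tasks):
        """tasks: dict of task name -> units of work. Returns task -> chosen tier."""
        return {task: min(TIERS, key=lambda tier: qos_cost(tier, work))
                for task, work in tasks.items()}

    workflow = {"capture": 1, "feature_extraction": 4, "video_encode": 8}
    print(greedy_allocate(workflow))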

  • 95.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology Chittagong.
    Bhuyan, M. S.
    University of Science & Technology Chittagong.
    Karim, Razuan
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance Analysis of Anomaly Based Network Intrusion Detection Systems2018In: Proceedings of the 43rd IEEE Conference on Local Computer Networks Workshops (LCN Workshops), Piscataway, NJ: IEEE Computer Society, 2018, p. 1-7Conference paper (Refereed)
    Abstract [en]

    Because of the increased popularity and fast expansion of the Internet as well as the Internet of Things, networks are growing rapidly in every corner of society. As a result, huge amounts of data are travelling across computer networks, which leads to vulnerabilities in data integrity, confidentiality and reliability. Network security is therefore a burning issue for keeping the integrity of systems and data. Traditional safeguards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalies with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating characteristic curve, accuracy, F-score and model build time, this paper illustrates a performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
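
    The study performs the comparison in WEKA; an analogous comparison can be sketched in Python with scikit-learn. The synthetic data below stand in for the NSL-KDD feature subsets, and the classifier set only approximates the paper’s (scikit-learn has no direct Naïve Bayes Tree or Decision Table equivalent).

    from time import perf_counter
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for an NSL-KDD feature subset (binary: normal vs. anomaly).
    X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

    models = {
        "NaiveBayes":   GaussianNB(),
        "MLP":          MLPClassifier(max_iter=500, random_state=42),
        "DecisionTree": DecisionTreeClassifier(random_state=42),   # J48-like
        "kNN":          KNeighborsClassifier(),
        "RandomForest": RandomForestClassifier(random_state=42),
    }

    for name, model in models.items():
        t0 = perf_counter()
        model.fit(X_tr, y_tr)                      # model build time
        build_time = perf_counter() - t0
        y_pred = model.predict(X_te)
        y_score = model.predict_proba(X_te)[:, 1]  # score of the positive class
        print(f"{name:12s} precision={precision_score(y_te, y_pred):.3f} "
              f"recall={recall_score(y_te, y_pred):.3f} "
              f"F1={f1_score(y_te, y_pred):.3f} "
              f"AUC={roc_auc_score(y_te, y_score):.3f} "
              f"build={build_time:.2f}s")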

  • 96.
    Cruciani, Federico
    et al.
    Ulster University.
    Cleland, Ian
    Ulster University.
    Nugent, Chris
    Ulster University.
    McCullagh, Paul
    Ulster University.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Personalized Online Training for Physical Activity monitoring using weak labels2018In: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2018, p. 567-572Conference paper (Refereed)
    Abstract [en]

    The use of smartphones for activity recognition is becoming common practice. Most approaches use a single pre-trained classifier to recognize activities for all users. Research studies, however, have highlighted how a personalized, individually trained classifier could provide better accuracy. Data labeling for ground truth generation, however, is a time-consuming process. The challenge is further exacerbated when opting for a personalized approach that requires user-specific datasets to be labeled, making conventional supervised approaches unfeasible. In this work, we present early results of an investigation into a weakly supervised approach for online personalized activity recognition. This paper describes: (i) a heuristic to generate weak labels used for personalized training, and (ii) a comparison of the accuracy obtained using a weakly supervised classifier against a conventional classifier trained on ground truth. Preliminary results show an overall accuracy of 87% for a fully supervised approach against 74% for the proposed weakly supervised approach.
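
    One plausible way to realize the weak-labeling idea is sketched below: a generic, pre-trained classifier labels a user’s unlabeled data, only its confident predictions are kept as weak labels, and a personalized classifier is trained on them. The confidence threshold and model choices are assumptions, not the paper’s exact heuristic.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    CONFIDENCE_THRESHOLD = 0.8      # keep only confidently weak-labeled windows

    def personalize(generic_clf, X_user_unlabeled):
        """Train a user-specific classifier on weak labels produced by a generic,
        already-fitted classifier; only confident predictions are kept."""
        proba = generic_clf.predict_proba(X_user_unlabeled)
        confident = proba.max(axis=1) >= CONFIDENCE_THRESHOLD
        weak_labels = generic_clf.classes_[proba.argmax(axis=1)]
        personal_clf = RandomForestClassifier(n_estimators=100, random_state=0)
        personal_clf.fit(X_user_unlabeled[confident], weak_labels[confident])
        return personal_clf

    The personalized model can then replace the generic one for that user’s subsequent sensor windows.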

  • 97.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Predicting bucket-filling control actions of a wheel-loader operator using a neural network ensemble2018In: 2018 International Joint Conference on Neural Networks (IJCNN), Piscataway, NJ: IEEE, 2018, article id 8489388Conference paper (Refereed)
    Abstract [en]

    Automatic bucket filling has been an open problem for three decades. In this paper, we address this problem with supervised machine learning using data collected from manual operation. The range-normalized actuations of the lift joystick, tilt joystick and throttle pedal are predicted using information from sensors on the machine, and the prediction errors are quantified. We apply linear regression, k-nearest neighbors, neural networks, regression trees and ensemble methods and find that an ensemble of neural networks results in the most accurate predictions. The prediction root-mean-square error (RMSE) of the lift action exceeds that of the tilt and throttle actions, and we obtain an RMSE below 0.2 for complete bucket fillings after training with as few as 135 bucket-filling examples.
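
    The ensemble-of-networks regression can be sketched as below; the feature dimensionality, network sizes, ensemble size and the synthetic data are illustrative assumptions, with only the target layout (range-normalized lift, tilt and throttle actuations) and the RMSE metric taken from the abstract.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Sketch: ensemble of small neural networks predicting range-normalized
    # [lift, tilt, throttle] actuations from machine sensor features, scored by RMSE.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(135, 12))     # e.g. 135 bucket fillings, 12 sensor features
    Y_train = rng.uniform(size=(135, 3))     # placeholder targets in [0, 1]
    X_test = rng.normal(size=(30, 12))
    Y_test = rng.uniform(size=(30, 3))

    # Train an ensemble of MLPs with different initializations and average their outputs.
    ensemble = [MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=seed)
                for seed in range(10)]
    for net in ensemble:
        net.fit(X_train, Y_train)

    Y_pred = np.mean([net.predict(X_test) for net in ensemble], axis=0)
    rmse = np.sqrt(np.mean((Y_pred - Y_test) ** 2, axis=0))
    print(dict(zip(["lift", "tilt", "throttle"], np.round(rmse, 3))))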

  • 98.
    Bezerra, Nibia Souza
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Wang, Min
    Network Architecture and Protocols Research, Ericsson .
    Åhlund, Christer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Nordberg, Mats
    Network Architecture and Protocols Research, Ericsson .
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    RACH performance in massive machine-type communications access scenario2018Conference paper (Refereed)
    Abstract [en]

    With the increasing number of devices performing Machine-Type Communications (MTC), mobile networks are expected to encounter a high load of burst transmissions. One bottleneck in such cases is the Random Access Channel (RACH) procedure, which is responsible for the attachment of devices, among other things. In this paper, we performed a rich-parameter-based simulation of RACH to identify the procedure bottlenecks. A finding from the studied scenarios is that the Physical Downlink Control Channel (PDCCH) capacity for grant allocation, rather than the number of Physical Random Access Channel (PRACH) preambles, is the main limitation on RACH capacity. Guided by our simulation results, we propose improvements to the RACH procedure and to the PDCCH.
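
    The preamble-contention part of the RACH procedure can be illustrated with a small Monte Carlo sketch in which each arriving device picks one preamble at random and collides if another device picks the same one. The parameter values are assumptions, and the sketch deliberately ignores the PDCCH grant limitation that the paper identifies as the actual bottleneck.

    import numpy as np

    # Monte Carlo sketch of RACH preamble contention: each arriving MTC device picks one
    # of N_PREAMBLES at random; a preamble picked by more than one device collides.
    N_PREAMBLES = 54                 # typical contention-based preambles per PRACH opportunity
    rng = np.random.default_rng(1)

    def collision_probability(n_devices, trials=10_000):
        collided, total = 0, 0
        for _ in range(trials):
            picks = rng.integers(0, N_PREAMBLES, size=n_devices)
            counts = np.bincount(picks, minlength=N_PREAMBLES)
            collided += n_devices - np.sum(counts == 1)   # devices not alone on a preamble
            total += n_devices
        return collided / total

    for n in (5, 20, 50, 100):
        print(f"{n:3d} arriving devices -> collision probability {collision_probability(n):.2f}")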

  • 99.
    Souza Bezerra, Níbia
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Wang, Min
    Luleå University of Technology, External.
    Åhlund, Christer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Nordberg, Mats
    Luleå University of Technology, External.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    RACH performance in massive machine-type communications access scenario2018In: / [ed] IEEE, 2018Conference paper (Refereed)
    Abstract [en]

    With the increasing number of devices performing Machine-Type Communications (MTC), mobile networks are expected to encounter a high load of burst transmissions. One bottleneck in such cases is the Random Access Channel (RACH) procedure, which is responsible for the attachment of devices, among other things. In this paper, we performed a rich-parameter-based simulation of RACH to identify the procedure bottlenecks. A finding from the studied scenarios is that the Physical Downlink Control Channel (PDCCH) capacity for grant allocation, rather than the number of Physical Random Access Channel (PRACH) preambles, is the main limitation on RACH capacity. Guided by our simulation results, we propose improvements to the RACH procedure and to the PDCCH.

  • 100.
    Zhohov, Roman
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Minovski, Dimitar
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. InfoVista Sweden.
    Johansson, Per
    InfoVista Sweden.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Real-time Performance Evaluation of LTE for IIoT2018In: Proceedings of the 43rd IEEE Conference on Local Computer Networks (LCN) / [ed] Soumaya Cherkaoui, Institute of Electrical and Electronics Engineers (IEEE), 2018Conference paper (Refereed)
    Abstract [en]

    Industrial Internet of Things (IIoT) is claimed to be a global booster technology for economic development. IIoT brings a broad set of use cases with the simple goal of enabling automation, autonomation or just plain digitalization of industrial processes. The abundance of interconnected IoT and CPS devices generates an additional burden on telecommunication networks, imposing a number of challenges in satisfying the key performance requirements, in particular the QoS metrics related to real-time data exchange for critical machine-to-machine communication. This paper analyzes a real-world example of IIoT from a QoS perspective: a remotely operated underground mining vehicle. As part of the performance evaluation, a software tool is developed for estimating the absolute one-way delay in end-to-end transmissions. The measured metric is passed to a machine learning model that predicts one-way delay from LTE RAN measurements collected with a commercially available cutting-edge software tool. The achieved results demonstrate the possibility of predicting the delay figures using a machine learning model, with a coefficient of determination of up to 90%.
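
    The delay-prediction step can be sketched as a regression from LTE RAN measurements to one-way delay, evaluated with the coefficient of determination (R²). The feature names (RSRP, RSRQ, SINR, PRB utilization), the random-forest model and the synthetic data are assumptions about a plausible setup, not the tool chain used in the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Sketch: predict one-way delay (ms) from LTE RAN measurements and report R^2.
    rng = np.random.default_rng(0)
    n = 2000
    # Placeholder RAN features: RSRP [dBm], RSRQ [dB], SINR [dB], PRB utilization [%].
    X = np.column_stack([
        rng.uniform(-120, -70, n),
        rng.uniform(-20, -3, n),
        rng.uniform(-5, 30, n),
        rng.uniform(0, 100, n),
    ])
    # Synthetic delay: grows with cell load, shrinks with radio quality, plus noise.
    y = 20 + 0.3 * X[:, 3] - 0.4 * X[:, 2] + rng.normal(0, 3, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"R^2 on held-out data: {r2_score(y_te, model.predict(X_te)):.2f}")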