  • 1.
    Ahmed, Kazi Main Uddin
    et al.
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Alvarez, Manuel
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Bollen, Math H. J.
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    A Novel Reliability Index to Assess the Computational Resource Adequacy in Data Centers (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 54530-54541. Article in journal (Refereed)
    Abstract [en]

    The energy demand of data centers is increasing globally with the growing demand for computational resources needed to ensure quality of service. It is important to quantify the resources required to meet the computational workloads at the rack level. In this paper, a novel reliability index called the loss of workload probability is presented to quantify rack-level computational resource adequacy. The index defines the right-sizing of the rack-level computational resources that complies with the computational workloads and the desired reliability level of the data center investor. The outage probability of the power supply units and the workload duration curve of the servers are analyzed to define the loss of workload probability. The workload duration curve of the rack, and hence the power consumption of the servers, is modeled as a function of server workloads, which are taken from a publicly available data set published by Google. Power consumption models of the major components of the internal power supply system are also presented; they show that the power loss of the power distribution unit is the highest among the components of the internal power supply system. The proposed reliability index and the power loss analysis could be used for rack-level computational resource expansion planning and to ensure energy-efficient operation of the data center.
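
    As a rough illustration of how an index of this kind could be computed, the sketch below combines independent power-supply-unit (PSU) outage probabilities with an empirical workload duration curve. The function name, the binomial outage model, and all parameter values are our assumptions for illustration, not the paper's exact formulation.

    ```python
    import math

    import numpy as np

    def loss_of_workload_probability(workload, capacity_per_psu, n_psus, p_outage):
        """P(workload exceeds the capacity left by surviving PSUs) -- a sketch."""
        workload = np.asarray(workload, dtype=float)
        lowp = 0.0
        for k in range(n_psus + 1):  # k = number of simultaneously failed PSUs
            # binomial probability of exactly k independent PSU outages
            p_k = math.comb(n_psus, k) * p_outage**k * (1 - p_outage)**(n_psus - k)
            surviving_capacity = (n_psus - k) * capacity_per_psu
            # fraction of time the workload duration curve exceeds that capacity
            p_loss_given_k = float(np.mean(workload > surviving_capacity))
            lowp += p_k * p_loss_given_k
        return lowp

    workload = np.random.gamma(shape=4.0, scale=10.0, size=10_000)  # toy workload trace
    print(loss_of_workload_probability(workload, capacity_per_psu=25.0, n_psus=3, p_outage=0.01))
    ```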

  • 2.
    Ahmed, Kazi Main Uddin
    et al.
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Bollen, Math H. J.
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Alvarez, Manuel
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    A Review of Data Centers Energy Consumption and Reliability Modeling (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, article id 152536. Article, review/survey (Refereed)
    Abstract [en]

    Enhancing the efficiency and reliability of a data center is a key technical challenge in maintaining the quality of service for end-users. Energy consumption models of the data center components are pivotal for ensuring the optimal design of the internal facilities and for limiting the energy consumption of the data center. Reliability modeling of the data center is also important, since end-user satisfaction depends on the availability of the data center services. In this review, the state-of-the-art and the research gaps of data center energy consumption and reliability modeling are identified, which could benefit future research on data center design, planning, and operation. The energy consumption models of the data center components in the major load sections, i.e., information technology (IT), internal power conditioning system (IPCS), and cooling, are systematically reviewed and classified, which reveals the advantages and disadvantages of the models for different applications. Based on this analysis and related findings, it is concluded that the availability of the model parameters and variables is more important than accuracy, and that energy consumption models are often necessary for data center reliability studies. Additionally, a lack of research on IPCS consumption modeling is identified, although IPCS power losses could cause reliability issues and should be considered carefully when designing a data center. The absence of a review of data center reliability analysis leads this paper to review the reliability assessment aspects of data centers, which is needed to ensure the adoption of new technologies and equipment. The state-of-the-art reliability indices, reliability models, and methodologies are systematically reviewed here for the first time, with the methodologies divided into two groups: analytical and simulation-based approaches. The lack of research on the reliability analysis of the cooling section and on failure data for data center components is identified as a research gap. In addition, the dependency between the different load sections is analyzed, showing that the service reliability of the data center is impacted by the IPCS and the cooling section.
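
    For concreteness, one family of models that recurs in this literature is the linear utilization-based server power model. The sketch below is a minimal, hedged example of that style of model; the coefficients, loss fractions, and function names are invented for illustration and are not taken from the surveyed papers.

    ```python
    def server_power(utilization, p_idle=100.0, p_peak=250.0):
        """Linear server power model: P(u) = P_idle + (P_peak - P_idle) * u (watts)."""
        assert 0.0 <= utilization <= 1.0
        return p_idle + (p_peak - p_idle) * utilization

    def facility_power(it_power, ipcs_loss_fraction=0.10, cooling_fraction=0.35):
        """Toy facility-level bookkeeping: IT load plus IPCS losses plus cooling."""
        return it_power * (1.0 + ipcs_loss_fraction + cooling_fraction)

    it_load = server_power(0.6)          # one server at 60% utilization
    print(it_load, facility_power(it_load))
    ```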

  • 3.
    Al Banna, Md. Hasan
    et al.
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Ghosh, Tapotosh
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Al Nahian, Md. Jaber
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Taher, Kazi Abu
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Kaiser, M. Shamim
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka 1342, Bangladesh.
    Mahmud, Mufti
    Department of Computer Science, Nottingham Trent University, NG11 8NS – Nottingham, UK.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Attention-based Bi-directional Long-Short Term Memory Network for Earthquake Prediction (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 56589-56603. Article in journal (Refereed)
    Abstract [en]

    An earthquake is a tremor felt on the surface of the earth, created by the movement of the major pieces of its outer shell. Many attempts have been made to forecast earthquakes with some success, but the resulting models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good option for building the model because of its memory-keeping ability. Using the Keras tuner, the best model was selected from candidate models composed of combinations of various LSTM architectures and dense layers. This selected model used seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes of the following month. An attention mechanism was added to the LSTM architecture to improve the model's earthquake occurrence prediction accuracy, which reached 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, with a root mean square error of 1.25.
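
    A minimal Keras sketch of an attention-augmented bidirectional LSTM classifier of the kind described above is given below. The input shape (12 months of 8 seismic indicators), layer sizes, and the self-attention wiring are our assumptions; the paper selected its exact architecture with the Keras tuner.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers

    n_steps, n_features = 12, 8      # assumed: 12 months of 8 seismic indicators

    inputs = tf.keras.Input(shape=(n_steps, n_features))
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
    attn = layers.Attention()([x, x])                    # self-attention over LSTM outputs
    x = layers.GlobalAveragePooling1D()(attn)
    x = layers.Dense(32, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # earthquake next month?

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()
    ```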

  • 4.
    Alani, Mohammed M.
    et al.
    Department of Computer Science, Toronto Metropolitan University, Toronto, ON, Canada; School of IT Administration and Security, Seneca College of Applied Arts and Technology, Toronto, ON M2J 2X5, Canada.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems. College of Information Technology, United Arab Emirates University, Al Ain, United Arab Emirates; Electrical Engineering Department, Faculty of Engineering, Al-Azhar University, Qena 83513, Egypt; Centre for Security, Communications and Network Research, University of Plymouth, Plymouth PL4 8AA, U.K..
    PAIRED: An Explainable Lightweight Android Malware Detection System (2022). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 73214-73228. Article in journal (Refereed)
    Abstract [en]

    With approximately 2 billion active devices, the Android operating system tops all other operating systems in terms of the number of devices using it. Android has gained wide popularity not only as a smartphone operating system, but also as an operating system for vehicles, tablets, smart appliances, and Internet of Things devices. Consequently, security challenges have arisen with the rapid adoption of the Android operating system. Thousands of malicious applications have been created and are being downloaded by unsuspecting users. This paper presents a lightweight Android malware detection system based on explainable machine learning. The proposed system uses features extracted from applications to distinguish malicious applications from benign ones. The proposed system is tested, showing an accuracy exceeding 98% while maintaining a small footprint on the device. In addition, the classifier model is explained using Shapley Additive Explanation (SHAP) values.
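
    The explainability ingredient can be illustrated with a short, hedged sketch: a tree-based classifier over app-derived features whose per-feature contributions are computed with the SHAP library. The random features, toy labels, and classifier choice are ours; only the use of SHAP values matches the abstract.

    ```python
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.random((500, 20))                       # e.g. permission/API-call features
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)       # toy "malicious" label

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(clf)
    shap_values = explainer.shap_values(X[:10])     # per-feature contributions
    print(np.asarray(shap_values).shape)
    ```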

  • 5.
    Alizadeh, Morteza
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Comparative Analysis of Decentralized Identity Approaches (2022). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 92273-92283. Article in journal (Refereed)
    Abstract [en]

    Decentralization is essential when trust and performance must not depend on a single organization. Distributed Ledger Technologies (DLTs) and Distributed Hash Tables (DHTs) are examples, where the DLT is useful for transactional events and the DHT is useful for large-scale data storage. The combination of these two technologies can meet many challenges. The blockchain is a DLT with an immutable history protected by cryptographic signatures in data blocks. Identification is an essential function traditionally provided by centralized trust anchors. Self-sovereign identities (SSIs) are proposed decentralized models where users can control and manage their identities with the help of DHTs. However, slowness is a challenge in decentralized identification systems because of the many connections and requests among participants. In this article, we focus on decentralized identification by DLT and DHT, where users can control their information and store biometrics. We survey some existing alternatives and address the performance challenge by comparing different decentralized identification technologies based on execution time and throughput. We show that the DHT and machine learning based solution (BioIPFS) performs better than other solutions such as uPort, ShoCard, and BBID.

  • 6.
    Alizadehsani, Roohallah
    et al.
    Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC, Australia.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hussain, Sadiq
    Dibrugarh University, Examination Branch, Dibrugarh, Assam, India.
    Jagatheesaperumal, Senthil Kumar
    Mepco Schlenk Engineering College, Department of Electronics and Communication Engineering, Sivakasi, India.
    Calixto, Rene Ripardo
    Federal University of Ceará, Department of Teleinformatics Engineering, Fortaleza, Brazil.
    Rahouti, Mohamed
    Fordham University, Department of Computer and Information Science, Bronx, NY, USA.
    Roshanzamir, Mohamad
    Fasa University, Faculty of Engineering, Department of Computer Engineering, Fasa, Iran.
    De Albuquerque, Victor Hugo C.
    Federal University of Ceará, Department of Teleinformatics Engineering, Fortaleza, Brazil.
    Explainable Artificial Intelligence for Drug Discovery and Development: A Comprehensive Survey (2024). In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 35796-35812. Article, review/survey (Refereed)
    Abstract [en]

    The field of drug discovery has experienced a remarkable transformation with the advent of artificial intelligence (AI) and machine learning (ML) technologies. However, as these AI and ML models are becoming more complex, there is a growing need for transparency and interpretability of the models. Explainable Artificial Intelligence (XAI) is a novel approach that addresses this issue and provides a more interpretable understanding of the predictions made by machine learning models. In recent years, there has been an increasing interest in the application of XAI techniques to drug discovery. This review article provides a comprehensive overview of the current state-of-the-art in XAI for drug discovery, including various XAI methods, their application in drug discovery, and the challenges and limitations of XAI techniques in drug discovery. The article also covers the application of XAI in drug discovery, including target identification, compound design, and toxicity prediction. Furthermore, the article suggests potential future research directions for the application of XAI in drug discovery. This review article aims to provide a comprehensive understanding of the current state of XAI in drug discovery and its potential to transform the field.

  • 7.
    Azangoo, Mohammad
    et al.
    Department of Electrical Engineering and Automation, Aalto University, 00076 Helsinki, Finland.
    Sorsamäki, Lotta
    VTT Technical Research Center of Finland, 02044 Espoo, Finland.
    Sierla, Seppo
    Department of Electrical Engineering and Automation, Aalto University, 00076 Helsinki, Finland.
    Mätäsniemi, Teemu
    VTT Technical Research Center of Finland, 02044 Espoo, Finland.
    Rantala, Miia
    Semantum Oy, 02150 Espoo, Finland.
    Rainio, Kari
    VTT Technical Research Center of Finland, 02044 Espoo, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, 00076 Helsinki, Finland.
    A methodology for generating a digital twin for process industry: a case study of a fiber processing pilot plant (2022). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 58787-58810. Article in journal (Refereed)
    Abstract [en]

    Digital twins are now one of the top trends in Industry 4.0, and many companies are using them to increase their level of digitalization and, as a result, their productivity and reliability. However, the development of digital twins is difficult, expensive, and time-consuming. This article proposes a semi-automated methodology to generate digital twins for process plants by extracting process data from engineering documents using text and image processing techniques. The extracted information is used to build an intermediate graph model, which serves as a starting point for generating a model in a simulation software environment. The translation of a graph-based model into a simulation software environment necessitates the use of simulator-specific mapping rules. This paper describes an approach for generating a digital twin based on a steady-state simulation model, using a Piping and Instrumentation Diagram (P&ID) as the main source of information. The steady-state modeling paradigm is especially suitable for use cases involving retrofits of an operational process plant, also known as a brownfield plant. A methodology and toolchain are proposed, consisting of manual, semi-automated, and fully automated steps. A pilot-scale brownfield fiber processing plant was used as a case study to demonstrate the proposed methodology and toolchain, and to identify and address issues that may not occur in laboratory-scale case studies. The article concludes with an evaluation of unresolved concerns and future research topics for the automated development of a digital twin for a brownfield process system.

  • 8.
    Aziz, Abdullah
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Chouhan, Shailesh Singh
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Distributed Digital Twins as Proxies: Unlocking Composability & Flexibility for Purpose-Oriented Digital Twins (2023). In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 137577-137593. Article in journal (Refereed)
    Abstract [en]

    In the realm of Industrial Internet of Things (IoT) and Industrial Cyber-Physical Systems (ICPS), Digital Twins (DTs) have revolutionized the management of physical entities. However, existing implementations often face constraints due to hardware-centric approaches and limited flexibility. This article introduces a transformative paradigm that harnesses the potential of distributed Digital Twins as proxies, enabling software-centricity and unlocking composability and flexibility for purpose-oriented digital twin development and deployment. The proposed microservices-based architecture, rooted in service-oriented architecture (SOA) and microservices principles, emphasizes reusability, modularity, and scalability. Leveraging the Lean Digital Twin Methodology and packaged business capabilities expedites digital twin creation and deployment, facilitating dynamic responses to evolving industrial demands. This architecture segments the industrial realm into physical and virtual spaces, where core components are responsible for digital twin management, deployment, and secure interactions. By abstracting and virtualizing physical entities into individual digital twins, this approach establishes the groundwork for purpose-oriented composite digital twin creation. Our key contributions involve a comprehensive exposition of the architecture, a practical proof-of-concept (PoC) implementation, and the application of the architecture in a use-case scenario. Additionally, we provide an analysis, including a quantitative evaluation of the proxy aspect and a qualitative comparison with traditional approaches. This assessment emphasizes key properties such as reusability, modularity, abstraction, discoverability, and security, transcending the limitations of contemporary industrial systems and enabling agile, adaptable digital proxies to meet modern industrial demands.

  • 9.
    Banna, Md. Hasan Al
    et al.
    Department of Computer Science and Engineering, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Ghosh, Tapotosh
    Department of Computer Science and Engineering, United International University, Dhaka 1209, Bangladesh.
    Nahian, Md. Jaber AL
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1212, Bangladesh.
    Kaiser, M. Shamim
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka 1342, Bangladesh.
    Mahmud, Mufti
    Department of Computer Science and Medical Technology Innovation Facility, Nottingham Trent University, Clifton, NG11 8NS Nottingham, U.K..
    Taher, Kazi Abu
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1212, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Hybrid Deep Learning Model to Predict the Impact of COVID-19 on Mental Health from Social Media Big Data (2023). In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 77009-77022. Article in journal (Refereed)
    Abstract [en]

    The novel coronavirus disease (COVID-19) pandemic has had a prevalent consequence on mental health because of reduced interaction among people, economic collapse, negativity, fear of losing jobs, and the death of near and dear ones. To express their mental state, people often use social media as one of their preferred means. Due to reduced outdoor activities, people are spending more time on social media than usual and expressing emotions of anxiety, fear, and depression. About 2.5 quintillion bytes of data are generated on social media daily. Analyzing this big data can be an excellent means to evaluate the effect of COVID-19 on mental health. In this work, we have analyzed data from the Twitter microblog (tweets) to find the effect of COVID-19 on people's mental health, with a special focus on depression. We propose a novel pipeline, based on a recurrent neural network (in the form of long short-term memory, or LSTM) and a convolutional neural network, capable of identifying depressive tweets with an accuracy of 99.42%. The tweets were preprocessed using various natural language processing techniques with the aim of identifying depressive emotion. Analyzing over 571 thousand tweets posted between October 2019 and May 2020 by 482 users, a significant rise in depressive tweets was observed between February and May of 2020, which indicates an impact of the long, ongoing COVID-19 pandemic.

  • 10.
    Bawankar, Nilesh
    et al.
    International Institute of Information Technology (IIIT-H), Hyderabad, Telangana, India.
    Kriti, Ankit
    International Institute of Information Technology (IIIT-H), Hyderabad, Telangana, India.
    Chouhan, Shailesh Singh
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Chaudhari, Sachin
    International Institute of Information Technology (IIIT-H), Hyderabad, Telangana, India.
    IoT-enabled Water Monitoring in Smart Cities with Retrofit and Solar-based Energy Harvesting (2024). In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 58222-58238. Article in journal (Refereed)
    Abstract [en]

    Monitoring water flow helps to identify leaks and wastage, leading to better management of water resources and conservation of this precious resource. To address this challenge, there is a need for an efficient and sustainable water management system. This paper presents an Internet of Things (IoT) based solution that involves retrofitting existing analog water meters using readily available off-the-shelf electronic components. Real-time data collection and analysis are performed through edge computation, which locally processes water meter images captured by the camera and extracts water meter readings. These readings are transmitted to the cloud for storage and further analysis. Various strategies have been implemented to optimize supply-current usage, preserving the charge-discharge cycles of solar-powered batteries even in adverse environmental conditions. To streamline the firmware update process for multiple connected devices, a broadcasting technique is employed, offering the benefits of reduced manual labor and time savings. To assess the reliability and performance of the developed solution, a field deployment was conducted over several months, enabling the characterization of water usage patterns across different locations. Integrating energy harvesting capabilities into the system reduces maintenance costs and promotes eco-friendly energy practices. Overall, this solution offers an effective and comprehensive approach towards achieving efficient and sustainable water management.
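
    A hedged sketch of such an edge pipeline is shown below: an image of the analog dial is thresholded, the digits are read with OCR, and the reading is published over MQTT. The library choices (OpenCV, pytesseract, paho-mqtt), the topic name, and the broker address are our assumptions, not details of the deployed system.

    ```python
    import json
    import time

    import cv2
    import paho.mqtt.client as mqtt
    import pytesseract

    def read_meter(image_path: str) -> str:
        """Extract the digit string from a photographed meter dial."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cfg = "--psm 7 -c tessedit_char_whitelist=0123456789"   # digits only
        return pytesseract.image_to_string(img, config=cfg).strip()

    client = mqtt.Client()                        # paho-mqtt 1.x style constructor
    client.connect("broker.example.org", 1883)    # hypothetical broker
    reading = read_meter("meter.jpg")
    client.publish("city/water/meter42", json.dumps({"t": time.time(), "reading": reading}))
    ```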

  • 11.
    Berezovskaya, Yulia
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Yang, Chen-Wei
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mousavi, Arash
    IT Department, SCANIA CV AB, Södertälje, Sweden.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, 02150 Helsinki, Finland.
    Minde, Tor Björn
    RISE SICS North, Luleå, Sweden.
    Modular Model of a Data Centre as a Tool for Improving Its Energy Efficiency (2020). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 46559-46573. Article in journal (Refereed)
    Abstract [en]

    For most modern data centres, it is of high value to select practical methods for improving energy efficiency and reducing energy waste. IT equipment and cooling systems are the two most significant energy consumers in data centres; thus, the energy efficiency of any data centre mainly relies on the energy efficiency of its computational and cooling systems. Existing techniques for optimising the energy usage of both these systems have to be compared. However, such experiments cannot be conducted in real plants as they may harm the electronic equipment. This paper proposes a modelling toolbox which enables building models of data centres of any scale and configuration with relative ease. The toolbox is implemented as a set of building blocks which model individual components of a typical data centre, such as processors, local fans, servers, and units of cooling systems. It provides methods for adjusting the internal parameters of the building blocks and contains constructors that use the building blocks to build models of data centre systems at different levels, from a single server to the server room. The data centre model is meant to accurately estimate the energy consumption as well as the evolution of the temperature of all computational nodes and of the air inside the data centre. The constructed model is capable of substituting for the real data centre when examining the performance of different energy-saving strategies in dynamic mode: the model provides information about data centre operating states at each time point (as model outputs) and takes values of adjustable parameters as control signals from the system implementing the energy-saving algorithm (as model inputs). For Module 1 of the SICS ICE data centre located in Luleå, Sweden, the model was constructed from the building blocks. After adjusting the internal parameters of the building blocks, the model demonstrated behaviour quite close to the real data from the SICS ICE data centre and is therefore applicable as a substitute for it. Some examples of using the model for testing energy-saving strategies are presented at the end of the paper.

  • 12.
    Bo, Lin
    et al.
    The State Key Laboratory of Mechanical Transmissions, Chongqing University, Chongqing, China.
    Xu, Guanji
    The State Key Laboratory of Mechanical Transmissions, Chongqing University, Chongqing, China.
    Liu, Xiaofeng
    The State Key Laboratory of Mechanical Transmissions, Chongqing University, Chongqing, China.
    Lin, Jing
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Bearing Fault Diagnosis Based on Subband Time-Frequency Texture Tensor (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 37611-37619. Article in journal (Refereed)
    Abstract [en]

    The texture feature tensor established from a subband time–frequency image (TFI) was extracted and used to identify the fault states of a rolling bearing. The TFI of adaptive optimal-kernel distribution was optimally partitioned into TFI blocks based on the minimum frequency band entropy. The texture features were extracted from the co-occurrence matrix of each TFI block. Based on the order of the segmented frequency bands, the texture feature tensor was constructed using the multidimensional feature vectors from all the blocks; this preserved the inherent characteristic of the TFI structure and avoided the information loss caused by vectorizing multidimensional features. The linear support higher order tensor machine based on the feature tensor was applied to identify the fault states of the rolling bearing.
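
    The texture-feature step can be illustrated with grey-level co-occurrence matrix (GLCM) statistics computed per frequency band of a time-frequency image. In the sketch below a plain spectrogram stands in for the adaptive optimal-kernel distribution, and the bands are split uniformly rather than by minimum frequency band entropy; both simplifications are ours.

    ```python
    import numpy as np
    from scipy.signal import spectrogram
    from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19 names

    fs = 12_000
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.random.randn(fs)  # toy vibration

    _, _, tfi = spectrogram(signal, fs=fs, nperseg=256)
    tfi = (255 * tfi / tfi.max()).astype(np.uint8)       # quantize to 8-bit grey levels

    features = []
    for block in np.array_split(tfi, 4, axis=0):         # 4 uniform frequency bands
        glcm = graycomatrix(block, distances=[1], angles=[0], levels=256)
        features.append([graycoprops(glcm, p)[0, 0]
                         for p in ("contrast", "homogeneity", "energy", "correlation")])

    feature_tensor = np.array(features)                  # (band, texture-feature) slice
    print(feature_tensor.shape)
    ```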

  • 13.
    Bokde, Neeraj Dhanraj
    et al.
    Department of Electronics and Communication Engineering, Visvesvaraya National Institute of Technology, Nagpur, India. Department of Engineering-Renewable Energy and Thermodynamics, Aarhus University, Denmark..
    Feijóo, Andrés
    Departamento de Enxeñería Eléctrica-Universidade de Vigo, Campus de Lagoas-Marcosende, Vigo, Spain..
    Al-Ansari, Nadhir
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Mining and Geotechnical Engineering.
    Yaseen, Zaher Mundher
    Sustainable Developments in Civil Engineering Research Group, Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam..
    A comparison between reconstruction methods for generation of synthetic time series applied to wind speed simulation (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 1353096-135398. Article in journal (Refereed)
    Abstract [en]

    Wind energy is an attractive renewable source, and its prediction is essential for multiple applications. In the literature, several studies have focused on synthetic wind speed data generation. In this research, two reconstruction methods are developed for synthetic wind speed time series generation. The modeling is based on several processes, including the generation of independent values from a known probability distribution function, rearrangement of random values, and segmentation. The methods are named the Rank-wise and Step-wise reconstruction methods. The proposed methods are explained with the help of a standard time series and examined on wind speed time series collected in Galicia, an autonomous region in the northwest of Spain. Results evidence the potential of the developed methods over state-of-the-art synthetic time series generation methods and demonstrate a successful validation in terms of mean and median wind speed values, autocorrelations, and probability distribution parameters with their corresponding histograms and confusion matrices. The pros and cons of both methods are discussed comprehensively.
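
    The following sketch is our reading of the rank-wise idea, not the authors' exact algorithm: draw independent values from a fitted probability distribution, then rearrange them to follow the rank order of the measured series, so the marginal distribution is exact while the autocorrelation is approximately preserved. The Weibull fit and the stand-in "measured" series are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    measured = np.abs(rng.normal(7.0, 2.0, size=1000))      # stand-in wind speeds (m/s)

    # fit a Weibull distribution and draw independent synthetic values
    shape, loc, scale = stats.weibull_min.fit(measured, floc=0)
    synthetic = stats.weibull_min.rvs(shape, loc=loc, scale=scale,
                                      size=measured.size, random_state=rng)

    # place the sorted synthetic values at the rank positions of the measured data
    ranks = np.argsort(np.argsort(measured))
    reconstructed = np.sort(synthetic)[ranks]

    print(np.corrcoef(measured[:-1], measured[1:])[0, 1],      # lag-1 autocorrelation
          np.corrcoef(reconstructed[:-1], reconstructed[1:])[0, 1])
    ```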

  • 14.
    Catelani, Marcantonio
    et al.
    Department of Information Engineering, University of Florence, via di S. Marta 3, 50139, Florence, Italy.
    Ciani, Lorenzo
    Department of Information Engineering, University of Florence, via di S. Marta 3, 50139, Florence, Italy.
    Galar, Diego
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. Industry and Transport Division, Tecnalia Research and Innovation, Miñano, Araba , 01510, Spain.
    Guidi, Giulia
    Department of Information Engineering, University of Florence, via di S. Marta 3, 50139, Florence, Italy.
    Matucci, Serena
    Department of Mathematics and Computer Science “Ulisse Dini”, University of Florence, Florence, Italy.
    Patrizi, Gabriele
    Department of Information Engineering, University of Florence, via di S. Marta 3, 50139, Florence, Italy.
    FMECA assessment for railway safety-critical systems investigating a new risk threshold method (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 86243-86253. Article in journal (Refereed)
    Abstract [en]

    This paper develops a Failure Mode, Effects and Criticality Analysis (FMECA) for a heating, ventilation and air conditioning (HVAC) system in railway applications. HVAC is a safety-critical system that must ensure emergency ventilation in case of fire and in case of loss of the primary ventilation functions. A study of the HVAC's critical areas is mandatory to optimize its reliability and availability and, consequently, to guarantee low operation and maintenance costs. The first part of the paper describes the FMECA, which is performed and reported to highlight the main criticalities of the HVAC system under analysis. Secondly, the paper deals with the problem of evaluating a risk threshold value that can distinguish negligible from critical failure modes. The literature barely considers the problem of objective risk threshold estimation. Therefore, a new analytical method based on finite differences is introduced to find a univocal risk threshold value. The method is then tested on two Risk Priority Number datasets related to the same HVAC. The threshold obtained in both cases is a good tradeoff between risk mitigation and the cost of the corrective actions required to mitigate the risk level. Finally, the threshold obtained with the proposed method is compared with the methods available in the literature. The comparison shows that the proposed finite difference method is a well-structured technique with a low computational cost. Furthermore, the proposed approach provides results in line with the literature, but it completely removes the problem of subjectivity.
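
    A minimal numeric sketch of a finite-difference threshold rule is given below: sort the Risk Priority Numbers, take the discrete first difference of the sorted curve, and place the threshold at the largest jump. The RPN values are invented, and the paper's analytical method may differ in detail.

    ```python
    import numpy as np

    rpn = np.array([12, 18, 24, 27, 30, 36, 48, 60, 64, 96, 120, 180, 252])

    sorted_rpn = np.sort(rpn)
    diffs = np.diff(sorted_rpn)                     # finite differences of the sorted curve
    threshold = sorted_rpn[int(np.argmax(diffs))]   # threshold at the largest jump

    critical = rpn[rpn > threshold]
    print(f"threshold = {threshold}, critical failure modes: {sorted(critical)}")
    ```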

  • 15.
    Catelani, Marcantonio
    et al.
    Department of Information Engineering, University of Florence, Florence, Italy.
    Ciani, Lorenzo
    Department of Information Engineering, University of Florence, Florence, Italy.
    Galar, Diego
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Patrizi, Gabriele
    Department of Information Engineering, University of Florence, Florence, Italy.
    Risk Assessment of a Wind Turbine: A New FMECA-Based tool with RPN threshold estimation (2020). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 20181-20190. Article in journal (Refereed)
    Abstract [en]

    A wind turbine is a complex system used to convert the kinetic energy of the wind into electrical energy. During the turbine design phase, a risk assessment is mandatory to reduce the machine downtime and the Operation & Maintenance cost and to ensure service continuity. This paper proposes a procedure based on Failure Modes, Effects, and Criticality Analysis to take into account every possible criticality that could lead to a turbine shutdown. Currently, a standard procedure to be applied for evaluation of the risk priority number threshold is still not available. Trying to fill this need, this paper proposes a new approach for the Risk Priority Number (RPN) prioritization based on a statistical analysis and compares the proposed method with the only three quantitative prioritization techniques found in literature. The proposed procedure was applied to the electrical and electronic components included in a Spanish 2 MW on-shore wind turbine.

  • 16.
    Chen, Jiayu
    et al.
    Beihang University, Beijing, China.
    Zhou, Dong
    Beihang University, Beijing, China.
    Guo, Ziyue
    Beihang University, Beijing, China.
    Lin, Jing
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    LYU, Chuan
    Beihang University, Beijing, China.
    LU, Chen
    Beihang University, Beijing, China.
    An Active Learning Method Based on Uncertainty and Complexity for Gearbox Fault Diagnosis (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 9022-9031. Article in journal (Refereed)
    Abstract [en]

    It is crucial to implement an effective and accurate fault diagnosis of a gearbox for mechanical systems. However, being composed of many mechanical parts, a gearbox has a variety of failure modes resulting in the difficulty of accurate fault diagnosis. Moreover, it is easy to obtain raw vibration signals from real gearbox applications, but it requires significant costs to label them, especially for multi-fault modes. These issues challenge the traditional supervised learning methods of fault diagnosis. To solve these problems, we develop an active learning strategy based on uncertainty and complexity. Therefore, a new diagnostic method for a gearbox is proposed based on the present active learning, empirical mode decomposition-singular value decomposition (EMD-SVD) and random forests (RF). First, the EMD-SVD is used to obtain feature vectors from raw signals. Second, the proposed active learning scheme selects the most valuable unlabeled samples, which are then labeled and added to the training data set. Finally, the RF, trained by the new training data, is employed to recognize the fault modes of a gearbox. Two cases are studied based on experimental gearbox fault diagnostic data, and a supervised learning method, as well as other active learning methods, are compared. The results show that the proposed method outperforms the two common types of methods, thus validating its effectiveness and superiority.
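
    The active-learning loop can be sketched with plain uncertainty (margin) sampling over a random forest, standing in for the paper's combined uncertainty-and-complexity criterion on EMD-SVD features. The synthetic dataset, initial pool size, and query budget are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                               n_informative=6, random_state=0)
    labeled = list(range(30))                        # small initial labeled set
    pool = list(range(30, len(X)))                   # unlabeled pool

    for _ in range(10):                              # 10 query rounds
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[labeled], y[labeled])
        proba = np.sort(clf.predict_proba(X[pool]), axis=1)
        margin = proba[:, -1] - proba[:, -2]         # small margin = high uncertainty
        query = pool[int(np.argmin(margin))]         # most valuable unlabeled sample
        labeled.append(query)                        # "oracle" labels it
        pool.remove(query)

    print("final training set size:", len(labeled))
    ```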

  • 17.
    Cheng, Haibo
    et al.
    State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China. Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang 110016, China. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China. University of Chinese Academy of Sciences, Beijing 100049, China.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, 02150 Espoo, Finland.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Zeng, Peng
    State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China. Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang 110016, China. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China.
    Yu, Haibin
    State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China. Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang 110016, China. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China.
    LSTM Based EFAST Global Sensitivity Analysis for Interwell Connectivity Evaluation Using Injection and Production Fluctuation Data (2020). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 67289-67299. Article in journal (Refereed)
    Abstract [en]

    In petroleum production systems, interwell connectivity evaluation is a significant process for understanding reservoir properties comprehensively, determining the water injection rate scientifically, and enhancing oil recovery effectively for oil and gas fields. In this paper, a novel long short-term memory (LSTM) neural network based global sensitivity analysis (GSA) method is proposed to analyse the injector-producer relationship. An LSTM neural network is employed to build up the mapping relationship between production wells and surrounding injection wells using massive historical injection and production fluctuation data from a synthetic reservoir model. Next, the extended Fourier amplitude sensitivity test (EFAST) based GSA approach is utilized to evaluate interwell connectivity on the basis of the generated LSTM model. Finally, the presented LSTM based EFAST sensitivity analysis method is applied to a benchmark test and a synthetic reservoir model. Experimental results show that the proposed technique is an efficient method for estimating interwell connectivity.
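
    The EFAST step can be sketched with the SALib library, using a plain function as a stand-in for the trained LSTM surrogate that maps injector rates to a producer's rate. The injector names, bounds, and surrogate coefficients are assumptions for illustration.

    ```python
    import numpy as np
    from SALib.analyze import fast
    from SALib.sample import fast_sampler

    problem = {
        "num_vars": 3,
        "names": ["inj1", "inj2", "inj3"],        # surrounding injection wells
        "bounds": [[0.0, 100.0]] * 3,             # injection-rate ranges
    }

    def surrogate(rates):                          # stand-in for the LSTM mapping
        return 0.6 * rates[:, 0] + 0.3 * rates[:, 1] + 0.1 * rates[:, 2]

    X = fast_sampler.sample(problem, 1000)         # extended FAST sample
    Y = surrogate(X)
    Si = fast.analyze(problem, Y)                  # first-order and total indices
    print(dict(zip(problem["names"], np.round(Si["S1"], 3))))
    ```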

  • 18.
    Chiquito, Alex
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Attribute-Based Approaches for Secure Data Sharing in Industrial Contexts (2023). In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 10180-10195. Article in journal (Refereed)
    Abstract [en]

    The sharing of data is becoming increasingly important for the process and manufacturing industries that are using data-driven models and advanced analysis to assess production performance and make predictions, e.g., on wear and tear. In such environments, access to data needs to be accurately controlled to prevent leakage to unauthorized users while providing easy-to-manage policies. Data should further be shared with users outside trusted domains using encryption. Finally, means for revoking access to data are needed. This paper provides a survey on attribute-based approaches for access control to data, focusing on policy management and enforcement. We aim to identify key properties provided by attribute-based access control (ABAC) and attribute-based encryption (ABE) that can be combined and used to meet the abovementioned needs. We describe such possible combinations in the context of a proposed architecture for secure data sharing. The paper concludes by identifying knowledge gaps to provide direction to future research on attribute-based approaches for secure data sharing in industrial contexts.

  • 19.
    Chiquito, Eric
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Survey on Decentralized Auctioning Systems (2023). In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 51672-51688. Article in journal (Refereed)
    Abstract [en]

    An electronic auction (e-auction) is an efficient negotiation model that allows multiple sellers or buyers to compete for assets or rights. Such systems have become increasingly popular with the evolution of the internet for commerce. In centralized auctioning systems, the presence of a governing third party has been a major trust concern, as such a party may not always be trustworthy or create transaction fees for the hosted auctions. Distributed and decentralized systems based on blockchain for auctions of nonphysical assets have been suggested as a means to distribute and establish trust among peers, and manage disputes and concurrent entries. Although a blockchain system provides attractive features such as decentralized trust management and fraud prevention, it cannot alone support dispute resolutions and adjudications for physical assets. In this paper, we compare blockchain and non-blockchain decentralized auctioning systems based on the identified functional needs and quality attributes. We contrast these needs and attributes with the state-of-the-art models and other implementations of auctioning systems, and discuss the associated trade-offs. We further analyze the gaps in the existing decentralized approaches and propose design approaches for decentralized auctioning systems, for both physical and nonphysical assets, that support dispute resolution and adjudication based on collected evidence, and dispute prevention based on distributed consensus algorithms.

  • 20.
    Chiquito, Eric
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Monrat, Ahmed Afif
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Digitalized and Decentralized Open-Cry Auctioning: Key Properties, Solution Design, and Implementation (2024). In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 64686-64700. Article in journal (Refereed)
    Abstract [en]

    Open-cry electronic auctions have revolutionized the landscape of high-value transactions for buying and selling goods. Online platforms such as eBay and Tradera have popularized these auctions due to their global accessibility and convenience. However, these centralized auctioning platforms rely on trust in a central entity to manage and control the processing of bids, e.g., their submission time and validity. The use of blockchain technologies for constructing decentralized systems has gained popularity because of their versatility and useful properties for decentralization. However, blockchain-based open-cry auctions are sensitive to the order of transactions and to deadlines, which, in the absence of a governing party, need to be provided for in the system design. In this paper, we identify the key properties for the development of decentralized open-cry auctioning systems, including verifiability, transaction immutability, ordering, and time synchronization. Three prominent blockchain platforms, namely Ethereum, Hyperledger Fabric, and R3 Corda, were analyzed in terms of their capabilities to ensure these properties, for gap identification. We propose a solution design that addresses these key properties and present a proof-of-concept (PoC) implementation of this design. Our PoC uses Hyperledger Fabric and mitigates the identified gaps related to time synchronization by utilizing an external component. During chaincode execution, the creation and submission of bids initiate requests to the time service API. This API service retrieves trusted timestamps from NTP services to obtain accurate bid times. We then analyzed the system design and implementation in the context of the identified key properties. Lastly, we conducted a performance evaluation of the time service and the PoC system implementation in time-sensitive scenarios and assessed its overall performance.
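
    The external time-service idea can be illustrated in a few lines: fetch a trusted timestamp from an NTP server and attach it to a bid before submission. The NTP pool address and the bid schema below are our assumptions, and a production service would add authentication and redundancy.

    ```python
    import json

    import ntplib

    def trusted_timestamp(server: str = "pool.ntp.org") -> float:
        """Return the NTP server's transmit time as a UNIX timestamp."""
        response = ntplib.NTPClient().request(server, version=3)
        return response.tx_time

    bid = {"auction": "A-17", "bidder": "org1", "amount": 4200}   # hypothetical schema
    bid["timestamp"] = trusted_timestamp()
    print(json.dumps(bid))      # payload that would be handed to the chaincode
    ```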

  • 21.
    Chude-Okonkwo, Uche K.
    et al.
    Institute for Intelligent Systems, University of Johannesburg, Auckland Park, 2006, South Africa..
    Paul, Babu S.
    Institute for Intelligent Systems, University of Johannesburg, Auckland Park, 2006, South Africa..
    Vasilakos, Athanasios
    College of Mathematics and Computer Science, Fuzhou University, Fuzhou, China; Center for AI Research (CAIR), University of Agder, Grimstad, Norway.
    Enabling Precision Medicine via Contemporary and Future Communication Technologies: A Survey (2023). In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 21210-21240. Article, review/survey (Refereed)
    Abstract [en]

    Precision medicine (PM) is an innovative medical approach that considers differences in the individuals’ omics, medical histories, lifestyles, and environmental information in treating diseases. To fully achieve the envisaged gains of PM, various contemporary and future technologies have to be employed, among which are nanotechnology, sensor network, big data, and artificial intelligence. These technologies and other applications require a communication network that will enable them to work in tandem for the benefit of PM. Hence, communication technology serves as the nervous system of PM, without which the entire system collapses. Therefore, it is essential to explore and determine the candidate communication technology requirements that can guarantee the envisioned gains of PM. To the best of our knowledge, no work exploring how communication technology directly impacts the development and deployment of PM solutions exists. This survey paper is designed to stimulate discussions on PM from the communication engineering perspective. We introduce the fundamentals of PM and the demands in terms of quality of service that each of the enabling technologies of PM places on the communication network. We explore the information in the literature to suggest the ideal metric values of the key performance indicators for the implementation of the different components of PM. The comparative analysis of the suitability of the contemporary and future communication technologies for PM implementation is discussed. Finally, some open research challenges for the candidate communication technologies that will enable the full implementation of PM solutions are highlighted.

  • 22.
    Chukharev, Konstantin
    et al.
    Sirius University of Science and Technology, 1 Olympic Ave, 354340, Sochi, Russia. Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia.
    Suvorov, Dmitrii
    Sirius University of Science and Technology, 1 Olympic Ave, 354340, Sochi, Russia. Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia.
    Chivilikhin, Daniil
    Sirius University of Science and Technology, 1 Olympic Ave, 354340, Sochi, Russia. Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia. Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland .
    SAT-based Counterexample-Guided Inductive Synthesis of Distributed Controllers (2020). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 207485-207498. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a new method for automatic synthesis of distributed discrete-state controllers from given temporal specification and behavior examples. The proposed method develops known synthesis methods to the distributed case, which is a fundamental extension. This method can be applied for automatic generation of correct-by-design distributed control software for industrial automation. The proposed approach is based on reduction to the Boolean satisfiability problem (SAT) and has Counterexample-Guided Inductive Synthesis (CEGIS) at its core. We evaluate the proposed approach using the classical distributed alternating bit protocol.
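
    The shape of a CEGIS loop can be shown on a toy problem; the sketch below uses the Z3 solver to synthesize coefficients of a linear expression against a hidden specification, alternating a synthesis step over collected examples with a verification step that searches for counterexamples. The paper targets distributed controllers and temporal specifications via SAT; only the loop structure carries over.

    ```python
    from z3 import Int, Solver, sat

    def spec(x):                        # hidden specification (assumed for the demo)
        return 2 * x + 3

    a, b, x = Int("a"), Int("b"), Int("x")
    examples = [0]                      # CEGIS starts from a seed example

    while True:
        # Synthesis step: find a candidate consistent with all examples so far.
        synth = Solver()
        for e in examples:
            synth.add(a * e + b == spec(e))
        assert synth.check() == sat, "no candidate exists"
        m = synth.model()
        ca = m.eval(a, model_completion=True).as_long()
        cb = m.eval(b, model_completion=True).as_long()

        # Verification step: search for a counterexample to the candidate.
        verify = Solver()
        verify.add(ca * x + cb != spec(x))
        if verify.check() == sat:
            examples.append(verify.model()[x].as_long())   # refine and repeat
        else:
            break                        # candidate verified for all x

    print(f"synthesized: {ca}*x + {cb}")
    ```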

  • 23.
    Ciani, Lorenzo
    et al.
    Department of Information Engineering, University of Florence, via di Santa Marta 3, 50139, Florence, Italy.
    Guidi, Giulia
    Department of Information Engineering, University of Florence, via di Santa Marta 3, 50139, Florence, Italy.
    Patrizi, Gabriele
    Department of Information Engineering, University of Florence, via di Santa Marta 3, 50139, Florence, Italy.
    Galar, Diego
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. Industry and Transport Division, Tecnalia Research and Innovation, Miñano (Araba), 01510, Spain.
    Improving Human Reliability Analysis for railway systems using fuzzy logic (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 128648-128662. Article in journal (Refereed)
    Abstract [en]

    The International Union of Railways publishes an annual safety report highlighting that the human factor is one of the main causes of railway accidents every year. Consequently, the study of human reliability is fundamental, and it must be included in a complete reliability assessment for every railway-related system. However, RARA (Railway Action Reliability Assessment) is currently the only approach available in the literature that considers human tasks specifically customized for railway applications. The main disadvantages of RARA are the impact of expert subjectivity and the difficulty of numerically assessing the model parameters in the absence of an exhaustive error and accident database. This manuscript introduces an innovative fuzzy method for the assessment of the human factor in safety-critical railway systems to address the problems highlighted above. Fuzzy logic simplifies the assessment of the model parameters by means of linguistic variables closer to the human cognitive process. Moreover, it deals with uncertain and incomplete data much better than classical deterministic approaches, and it minimizes the subjectivity of the analyst's evaluation. The output of the proposed algorithm is the result of fuzzy interval arithmetic, α-cut theory, and a centroid defuzzification procedure. The proposed method has been applied to human operations carried out on a railway signaling system. Four human tasks and two scenarios have been simulated to analyze the performance of the proposed algorithm. Finally, the results of the method are compared with the classical RARA procedure, showing compliant results obtained with a simpler and more intuitive approach.
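
    A tiny numeric illustration of the fuzzy machinery named above (triangular membership functions, clipping by activation levels, and centroid defuzzification) follows. The universe, membership parameters, and activation levels are invented; the paper additionally uses fuzzy interval arithmetic and α-cut theory.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    x = np.linspace(0.0, 1.0, 501)                 # universe: normalized error likelihood
    med = tri(x, 0.2, 0.5, 0.8)                    # "medium" linguistic term
    high = tri(x, 0.6, 0.9, 1.0)                   # "high" linguistic term

    # Suppose expert judgement activates "medium" at 0.7 and "high" at 0.3.
    aggregated = np.maximum(np.minimum(med, 0.7), np.minimum(high, 0.3))

    centroid = np.trapz(aggregated * x, x) / np.trapz(aggregated, x)
    print(f"defuzzified score: {centroid:.3f}")
    ```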

  • 24.
    Cruciani, Federico
    et al.
    School of Computing, Ulster University, Newtownabbey UK.
    Nugent, Chris D.
    School of Computing, Ulster University, Newtownabbey UK.
    Medina Quero, Javier
    University of Jaen, Jaen Spain.
    Cleland, Ian
    School of Computing, Ulster University, Newtownabbey UK.
    McCullagh, Paul
    School of Computing, Ulster University, Newtownabbey UK.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Personalizing Activity Recognition with a Clustering based Semi-Population Approach (2020). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 207794-207804. Article in journal (Refereed)
    Abstract [en]

    Smartphone-based approaches for Human Activity Recognition have become prevalent in recent years. Despite the amount of research undertaken in the field, issues such as cross-subject variability still pose an obstacle to the deployment of solutions in large-scale, free-living settings. Personalized methods (i.e., those aiming to adapt a generic classifier to a specific target user) attempt to solve this problem. The lack of labeled data for training purposes, however, represents a major barrier, especially considering that personalization generally requires labeled data that are user-specific. This paper presents a novel personalization method combining a semi-population based approach with user adaptation. Personalization is achieved as follows. Firstly, the proposed method identifies a subset of users from the available population as the best candidates for initializing the classifier to the target user. Subsequently, a semi-population Neural Network classifier is trained using data from this subset of users. The classifier's network weights are then updated using a small amount of labeled data from the target user, thereby implementing personalization. This approach was validated on a large publicly available dataset collected in a free-living scenario. The personalized approach using the proposed method improved the overall F-score to 74.4%, compared to 70.9% for a generic non-personalized approach. The results, with statistical significance confirmed on a set of 57 users, indicate that model initialization using the semi-population approach can reduce the amount of labeled data required for personalization. As such, the proposed method for model initialization could facilitate the real-world deployment of systems implementing personalization by reducing the amount of data needed.
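
    A hedged sketch of the semi-population idea: cluster users by a per-user profile vector, pre-train a network on the users in the target's cluster, then continue training on a small labeled batch from the target user. The profile features, cluster count, MLP size, and the random stand-in data are all assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_users, n_feat = 20, 16
    profiles = rng.random((n_users, n_feat))            # per-user summary vectors
    target_profile = rng.random((1, n_feat))

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)
    subset = np.where(km.labels_ == km.predict(target_profile)[0])[0]

    # Pre-train the semi-population model on the selected users' (stand-in) data.
    X_pop = rng.random((len(subset) * 200, n_feat))
    y_pop = rng.integers(0, 3, size=len(X_pop))
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=50, random_state=0)
    clf.fit(X_pop, y_pop)

    # Personalization: update the weights with a small labeled target batch.
    X_tgt, y_tgt = rng.random((30, n_feat)), rng.integers(0, 3, size=30)
    for _ in range(5):
        clf.partial_fit(X_tgt, y_tgt)
    print("target-batch accuracy:", clf.score(X_tgt, y_tgt))
    ```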

  • 25.
    Dai, Wenbai
    et al.
    Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
    Wang, Peng
    Chinese Academy of Sciences, Shenyang Institute of Automation, Shenyang, China.
    Sun, Weiqi
    Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
    Wu, Xian
    Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
    Zhang, Hualiang
    Chinese Academy of Sciences, Shenyang Institute of Automation, Shenyang, China.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Yang, Genke
    Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
    Semantic Integration of Plug-and-Play Software Components for Industrial Edges Based on Microservices2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 125882-125892Article in journal (Refereed)
    Abstract [en]

    Industrial cyber-physical systems enable collaboration between distributed nodes across industrial clouds and edge devices. Flexibility and interoperability could be enhanced significantly by introducing a service-oriented architecture to industrial edge devices. From the industrial edge computing perspective, software components shall be dynamically composed across heterogeneous edge devices to perform various functionalities. In this paper, a knowledge-driven microservice-based architecture to enable plug-and-play software components is proposed for industrial edges. These software components can be dynamically configured based on the orchestration of microservices with the support of the knowledge base and the reasoning process. These semantically enhanced plug-and-play microservices could provide rapid online reconfiguration without any programming effort. The use of the plug-and-play software components is demonstrated by an assembly line example.

  • 26.
    Damigos, Gerasimos
    et al.
    Ericsson Research, Luleå, Sweden.
    Lindgren, Tore
    Ericsson Research, Luleå, Sweden.
    Nikolakopoulos, George
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Toward 5G Edge Computing for Enabling Autonomous Aerial Vehicles2023In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 3926-3941Article in journal (Refereed)
    Abstract [en]

    Offloading processes responsible for a robot's control operation to external computational resources has been in the spotlight for many years. The vision of having access to a full cloud cluster for any autonomous robot has fueled many scientific fields. Such implementations rely strongly on a robust communication link between the robot and the cloud and have been tested over numerous network architectures. However, various limitations have been highlighted upon the realization of such platforms. For small-scale local deployments, technologies such as Wi-Fi, Zigbee, and Bluetooth are inexpensive and easy to use but suffer from low transmit power and outdoor coverage limitations. In this study, the offloading of time-critical control operations for an unmanned aerial vehicle (UAV) using cellular network technologies was evaluated and demonstrated experimentally, focusing on 5G technology. The control process was hosted on an edge server that served as a ground control station (GCS). The server performs all the computations required for the autonomous operation of the UAV and sends the action commands back to the UAV over the 5G interface. This research focuses on analyzing the low-latency needs of a closed-loop control system that is put to the test on a real 5G network. Furthermore, practical limitations, integration challenges, the intended cellular architecture, and the corresponding Key Performance Indicators (KPIs) that correlate to the real-life behavior of the UAV are rigorously studied.
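    As a loose illustration of the latency-KPI side of such an evaluation, the Python sketch below measures round-trip times of a UDP echo "control loop" between a client and an edge endpoint. The address, rate and packet format are placeholders and are unrelated to the paper's 5G testbed.

        import socket, time, statistics

        # Sketch: round-trip latency of a UAV-to-edge loop via UDP echo packets.
        EDGE = ("192.0.2.10", 5005)   # hypothetical edge/GCS endpoint
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(0.5)

        rtts = []
        for seq in range(100):
            t0 = time.perf_counter()
            sock.sendto(seq.to_bytes(4, "big"), EDGE)   # "state" packet to the edge
            try:
                sock.recvfrom(64)                        # "action command" echoed back
                rtts.append((time.perf_counter() - t0) * 1e3)
            except socket.timeout:
                pass                                     # counts against reliability KPI
            time.sleep(0.02)                             # roughly a 50 Hz control loop

        if rtts:
            print(f"mean RTT {statistics.mean(rtts):.2f} ms, "
                  f"p95 {sorted(rtts)[int(0.95 * len(rtts)) - 1]:.2f} ms, "
                  f"loss {100 * (1 - len(rtts) / 100):.1f}%")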

  • 27.
    Deo, Ravinesh C.
    et al.
    Faculty of Health, Engineering and Sciences, School of Sciences, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
    Yaseen, Zaher Mundher
    Sustainable Developments in Civil Engineering Research Group, Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam.
    Al-Ansari, Nadhir
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Mining and Geotechnical Engineering.
    Nguyen-Huy, Thong
    Centre for Applied Climate Sciences, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
    McPherson Langlands, Trevor Ashley
    Faculty of Health, Engineering and Sciences, School of Sciences, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
    Galligan, Linda
    Faculty of Health, Engineering and Sciences, School of Sciences, University of Southern Queensland, Toowoomba, QLD 4350, Australia.
    Modern Artificial Intelligence Model Development for Undergraduate Student Performance Prediction: An Investigation on Engineering Mathematics Courses2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 136697-136724Article in journal (Refereed)
    Abstract [en]

    A computationally efficient artificial intelligence (AI) model called the Extreme Learning Machine (ELM) is adopted to analyze patterns embedded in continuous assessment in order to model the weighted score (WS) and the examination (EX) score in engineering mathematics courses at an Australian regional university. The student performance data, taken over a six-year period in multiple courses ranging from the mid- to the advanced level and across diverse course offering modes (i.e., on-campus, ONC, and online, ONL), are modelled by ELM and benchmarked against competing models: random forest (RF) and Volterra. With the assessments and examination marks as key predictors of WS (leading to a grade in the mid-level course), ELM outperformed RF and Volterra for both the ONC and the ONL offers: the relative prediction error in the testing phase was only 0.74% for ONC, compared with about 3.12% (RF) and 1.06% (Volterra), while for the ONL offer the prediction error was only 0.51%, compared with about 3.05% and 0.70%. In modelling the student performance in the advanced engineering mathematics course, ELM registered slightly larger errors: 0.77% (vs. 22.23% and 1.87%) for ONC and 0.54% (vs. 4.08% and 1.31%) for the ONL offer. This study advocates a pioneering implementation of a robust AI methodology to uncover relationships among student learning variables and to develop teaching and learning interventions and course health checks that address issues related to graduate outcomes and student learning attributes in the higher education sector.
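    For readers unfamiliar with the model class, the sketch below is a minimal NumPy Extreme Learning Machine under the standard formulation (random, fixed hidden layer; least-squares output weights). The toy data stand in for, and do not resemble, the paper's course dataset.

        import numpy as np

        rng = np.random.default_rng(0)

        def elm_fit(X, y, n_hidden=50):
            W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
            b = rng.normal(size=n_hidden)                 # random biases (fixed)
            H = np.tanh(X @ W + b)                        # hidden-layer activations
            beta = np.linalg.pinv(H) @ y                  # least-squares output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # Toy usage: predict a weighted score from two assessment marks.
        X = rng.uniform(0, 100, size=(200, 2))
        y = 0.4 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(0, 2, 200)
        W, b, beta = elm_fit(X[:150], y[:150])
        err = np.abs(elm_predict(X[150:], W, b, beta) - y[150:]).mean()
        print(f"mean absolute test error: {err:.2f}")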

  • 28.
    Dong, Pingping
    et al.
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Xie, Jingyun
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Tang, Wensheng
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Xiong, Naixue
    College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China.
    Zhong, Hua
    Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance Evaluation of Multipath TCP Scheduling Algorithms2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 29818-29825Article in journal (Refereed)
    Abstract [en]

    One of the goals of 5G is to provide enhanced mobile broadband and enable low latency in some use cases. To achieve this aim, the Internet Engineering Task Force (IETF) has proposed Multipath TCP (MPTCP), which exploits the dual-connectivity feature of 5G, where a 5G device can be served by two different base stations. However, path heterogeneity between the 5G device and the server may cause packet reordering. Researchers have proposed a number of scheduling algorithms to tackle this issue. This paper introduces the existing algorithms and, with the aim of making a thorough comparison between them and providing guidelines for designing new scheduling algorithms in 5G, presents an extensive set of emulation studies conducted on a real Linux experimental platform. The evaluation covers a wide range of network scenarios to investigate the impact of different network metrics, namely RTT, buffer size and file size, on the performance of existing widely deployed scheduling algorithms.
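    To make the scheduling problem concrete, the sketch below implements the textbook minRTT policy, one of the widely deployed schedulers such comparisons typically include. The subflow numbers are invented for the demo and do not come from the paper.

        # Sketch: the "minRTT" MPTCP scheduler picks, among subflows with free
        # congestion-window space, the one with the lowest smoothed RTT.
        from dataclasses import dataclass

        @dataclass
        class Subflow:
            name: str
            srtt_ms: float      # smoothed round-trip time
            cwnd: int           # congestion window (packets)
            in_flight: int      # packets currently unacknowledged

        def minrtt_schedule(subflows):
            """Return the subflow the next packet should be sent on, or None."""
            available = [s for s in subflows if s.in_flight < s.cwnd]
            return min(available, key=lambda s: s.srtt_ms, default=None)

        paths = [Subflow("5G-gNB-A", srtt_ms=12.0, cwnd=10, in_flight=10),
                 Subflow("5G-gNB-B", srtt_ms=35.0, cwnd=10, in_flight=3)]

        chosen = minrtt_schedule(paths)
        print("send on:", chosen.name if chosen else "blocked")
        # Prints "5G-gNB-B": the faster path is cwnd-limited, so the scheduler
        # falls back to the slower one, which is exactly how heterogeneous
        # paths introduce reordering at the receiver.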

  • 29.
    Euzébio, Thiago A. M.
    et al.
    Instituto Tecnológico Vale, MG, 35400-000, Brazil.
    Da Silva, Moisés T.
    Instituto Tecnológico Vale, MG, 35400-000, Brazil.
    Yamashita, Andre S.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Decentralized PID controller tuning based on nonlinear optimization to minimize the disturbance effects in coupled loops2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 156857-156867Article in journal (Refereed)
    Abstract [en]

    Decentralized proportional-integral-derivative (PID) control systems are widely used for multiple-input multiple-output (MIMO) control problems. However, decentralized controllers cannot suppress the plant interactions in multivariable systems, which are addressed in the controller tuning phase. In this paper, a decentralized PID tuning method is proposed in order to minimize the undesirable effects of the coupling between the inputs and outputs of the closed-loop system. For this purpose, the PID parameter tuning method solves a nonlinear optimization problem. This optimization problem is formulated with the criteria of the performance, robustness and multivariable stability of the closed-loop system. A single design parameter is required to specify the trade-off between performance and robustness. Simulation studies are conducted to demonstrate the effectiveness of the proposed method. The performance is compared to that of alternative tuning techniques from the literature. Results show that the proposed approach is a feasible candidate for industrial application, as it is simple to implement and capable of addressing robustness and stability concerns of plant operators.
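    For intuition about tuning-by-optimization, the sketch below tunes a single PI loop on a hypothetical first-order-plus-dead-time plant by minimizing the integral of squared error of a simulated step response with SciPy. The paper's actual objective additionally encodes loop coupling, robustness and multivariable stability criteria, which are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        DT, T_END = 0.01, 20.0
        K, TAU, THETA = 1.0, 2.0, 0.3          # hypothetical FOPDT plant

        def closed_loop_ise(params):
            """Integral of squared error for a unit setpoint step under PI control."""
            kp, ki = params
            n = int(T_END / DT); delay = int(THETA / DT)
            y = np.zeros(n); u_hist = np.zeros(n); integ = 0.0
            for t in range(1, n):
                e = 1.0 - y[t - 1]              # setpoint error
                integ += e * DT
                u_hist[t] = kp * e + ki * integ
                u_delayed = u_hist[t - delay] if t >= delay else 0.0
                # Euler step of the first-order plant with input delay.
                y[t] = y[t - 1] + DT * (-y[t - 1] + K * u_delayed) / TAU
            return np.sum((1.0 - y) ** 2) * DT

        res = minimize(closed_loop_ise, x0=[0.5, 0.5], method="Nelder-Mead")
        print("tuned kp, ki:", np.round(res.x, 3), " ISE:", round(res.fun, 4))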

  • 30.
    Fan, Ye
    et al.
    Department of Information and Communication Engineering, Xi’an Jiaotong University, Xi’an, China.
    Liao, Xuewen
    Department of Information and Communication Engineering, Xi’an Jiaotong University, Xi’an, China.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Physical Layer Security Based on Interference Alignment in K-User MIMO Y Wiretap Channels2017In: IEEE Access, E-ISSN 2169-3536, Vol. 5, p. 5747-5759Article in journal (Refereed)
    Abstract [en]

    This paper studies the secure degrees of freedom (SDOF) of the multiway relay wiretap system, the K-user MIMO Y wiretap channel, where each legitimate user equipped with M antennas intends to convey independent signals via an intermediate relay with N antennas. There exists an eavesdropper equipped with Ne antennas close to the legitimate users. In order to eliminate the multiuser interference and keep the system secure, interference alignment is mainly utilized in the proposed broadcast wiretap channel (BWC) and multi-access BWC (MBWC), and cooperative jamming is adopted as a supplementary approach in the MBWC model. The general feasibility conditions of the interference alignment are deduced as M ≥ K − 1, 2M > N and N ≥ K(K − 1)/2. In the BWC model, the SDOF are deduced as K min{M, N} − min{Ne, K(K − 1)/2}, which is always a positive value, while in the MBWC model, the SDOF is given by K min{M, N}. Finally, since the relay transmits the synthesized signals of the legal signal and the jamming signal in the MBWC model, we propose a power allocation scheme to maximize the secrecy rate. Simulation results demonstrate that our proposed power allocation scheme can improve the secrecy rate under various antenna configurations.
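    For readability, the feasibility conditions and SDOF expressions quoted in the abstract can be restated in display form (no new results are introduced):

        % Feasibility conditions for interference alignment (as stated in the abstract)
        \begin{align*}
          M \ge K - 1, \qquad 2M > N, \qquad N \ge \tfrac{K(K-1)}{2}
        \end{align*}
        % Secure degrees of freedom in the two channel models
        \begin{align*}
          \mathrm{SDOF}_{\mathrm{BWC}}  &= K\min\{M,N\} - \min\!\left\{N_e,\ \tfrac{K(K-1)}{2}\right\}, \\
          \mathrm{SDOF}_{\mathrm{MBWC}} &= K\min\{M,N\}.
        \end{align*}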

  • 31.
    Fu, Minglei
    et al.
    College of Information Engineering, Zhejiang University of Technology, Hangzhou, China.
    Fan, Tingchao
    College of Information Engineering, Zhejiang University of Technology, Hangzhou, China.
    Ding, Zi’ang
    College of Information Engineering, Zhejiang University of Technology, Hangzhou, China.
    Salih, Sinan Q.
    Institute of Research and Development, Duy Tan University, Da Nang, Vietnam.
    Al-Ansari, Nadhir
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Mining and Geotechnical Engineering.
    Yaseen, Zaher Mundher
    Sustainable Developments in Civil Engineering Research Group, Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam.
    Deep Learning Data-Intelligence Model Based on Adjusted Forecasting Window Scale: Application in Daily Streamflow Simulation2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, no 1, p. 32632-32651Article in journal (Refereed)
    Abstract [en]

    Streamflow forecasting is essential for hydrological engineering. In accordance with the advancement of computer aids in this field, various machine learning (ML) models have been explored to solve this highly non-stationary, stochastic, and nonlinear problem. In the current research, a newly explored version of an ML model called the long short-term memory (LSTM) was investigated for streamflow prediction using historical data for forecasting for a particular period. For a case study located in a tropical environment, the Kelantan river in the northeast region of the Malaysia Peninsula was selected. The modelling was performed according to several perspectives: (i) the feasibility of applying the developed LSTM model to streamflow prediction was verified, and the performance of the developed LSTM model was compared with the classic backpropagation neural network model; (ii) in the experimental process of applying the LSTM model to the prediction of streamflow, the influence of the training set size on the performance of the developed LSTM model was tested; (iii) the effect of the time interval between the training set and the testing set on the performance of the developed LSTM model was tested; (iv) the effect of the time span of the prediction data on the performance of the developed LSTM model was tested. The experimental data show that not only does the developed LSTM model have obvious advantages in processing steady streamflow data in the dry season but it also shows good ability to capture data features in the rapidly fluctuant streamflow data in the rainy season.
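    For orientation, the sketch below shows the general shape of such an LSTM set-up in PyTorch: a sliding window of past values predicting the next one. The synthetic sine series, window length and layer sizes are placeholders, not the Kelantan river data or the authors' configuration.

        import torch
        import torch.nn as nn

        WINDOW = 30
        series = torch.sin(torch.linspace(0, 60, 2000)) + 0.1 * torch.randn(2000)

        # Build (sample, window, 1) inputs and next-step targets.
        X = torch.stack([series[i:i + WINDOW]
                         for i in range(len(series) - WINDOW)]).unsqueeze(-1)
        y = series[WINDOW:].unsqueeze(-1)

        class FlowLSTM(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)
            def forward(self, x):
                out, _ = self.lstm(x)
                return self.head(out[:, -1])   # prediction from the last time step

        model = FlowLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(20):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), y)
            loss.backward()
            opt.step()
        print("training MSE:", float(loss))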

  • 32.
    Galkin, Nikolai
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ruchkin, Michail
    Independent researcher, Belarus.
    Vyatkin, Valeriy
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Yang, Chen-Wei
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Dubinin, Viktor
    Independent researcher, Belarus.
    Automatic Generation of Data centre Digital Twins for Virtual Commissioning of their Automation Systems2023In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 4633-4644Article in journal (Refereed)
    Abstract [en]

    Data centres are becoming an increasingly important part of our society's infrastructure. The number of data centres is growing constantly, steadily increasing the gross level of electrical energy consumption. At the same time, the rapid spread of sophisticated electrical devices, as well as other automation systems in general, creates an opportunity to make data centres an attractive player in the constantly evolving energy market. For this, however, new advanced technologies must be applied to solve the problems of complexity and heterogeneity in various types of data centre design. A new concept is presented in this paper, based on the automated generation of a digital twin (DT) system directly from its schematic representations. A DT is a virtual version of an object or system, designed to aid decision-making and virtual commissioning through simulation, machine learning, and reasoning. In the scope of the current work, the IEC 61850 standard is chosen as the starting point for a multi-step generation of the DT, combining a simulation model and decentralized control logic. As a result, the designed DT “clone” of an electrical system consists of the SIMULINK model of the electrical system plus the automatically generated control application (based on the IEC 61499 standard).

  • 33.
    Galkin, Nikolai
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Yang, Chen-Wei
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, Espoo, 02150, Finland.
    Towards the Automatic Transformation of the SIMULINK Model into an FPGA-in-Loop System and its Real-Time Simulation2024In: IEEE Access, E-ISSN 2169-3536Article in journal (Refereed)
    Abstract [en]

    Information is a key component of progress. Industry 4.0, and especially the future Industry 5.0, is closely related to the topic of the Digital Twin, where virtual components interact with each other. The main advantage of such a system is that it can fully mimic the behaviour of the complicated system, including various unexpected perturbations such as noise, jitter, delays, etc. Thus, the ability to create complex models and run them in real time is a basic need for extending the Digital Twin. One of the major problems in the development of digital twins is the increasing complexity of the models; therefore, large processing capacities and parallel computation become critical. The field-programmable gate array (FPGA) is the type of hardware that best fits this task, and FPGA-in-the-loop (FiL) can be considered the container for running the Digital-Twin model. The transformation of a digital model into FiL is known and used by many companies at this time. However, the authors found that there are no publicly available model-to-FiL transformation methods. In this paper, the authors aim to fill this gap. We first discuss the major design challenges of a FiL system and provide recommendations to overcome them in the form of a road map. We then demonstrate a step-by-step process for converting a simple MATLAB/SIMULINK model to a FiL system using the proposed road map. Finally, we demonstrate the validation process of the designed FiL systems and show that they run in real time.

  • 34.
    Galvez, Antonio
    et al.
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. Tecnalia, Basque Research and Technology Alliance (BRTA), Derio, Spain.
    Galar, Diego
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. Tecnalia, Basque Research and Technology Alliance (BRTA), Derio, Spain.
    Seneviratne, Dammika
    Tecnalia, Basque Research and Technology Alliance (BRTA), Derio, Spain.
    A Hybrid Model-Based Approach on Prognostics for Railway HVAC2022In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 108117-108127Article in journal (Refereed)
    Abstract [en]

    Prognostics and health management (PHM) of systems usually depends on appropriate prior knowledge and sufficient condition monitoring (CM) data on critical components' degradation process to appropriately estimate the remaining useful life (RUL). A failure of complex or critical systems such as heating, ventilation, and air conditioning (HVAC) systems installed in a passenger train carriage may adversely affect people or the environment. Critical systems must meet restrictive regulations and standards, and this usually results in an early replacement of components. Therefore, the CM datasets lack data on advanced stages of degradation, which has a significant impact on developing robust diagnostics and prognostics processes; this is why it is difficult to find PHM implemented in HVAC systems. This paper proposes a methodology for implementing a hybrid model-based approach (HyMA) to overcome the limited representativeness of the training dataset for developing a prognostic model. The proposed methodology is evaluated by building a HyMA that fuses information from a physics-based model with a deep learning algorithm to implement a prognostics process for a complex and critical system. The physics-based model of the HVAC system is used to generate run-to-failure data. This model is built and validated using information and data on the real asset; the failures are modelled according to expert knowledge and an experimental test evaluating the behaviour of the HVAC system in operation with the air filter at different levels of degradation. In addition to using the sensors located in the real system, we model virtual sensors to observe parameters related to the health of the system's components. The run-to-failure datasets generated are normalized and directly used as inputs to a deep convolutional neural network (CNN) for RUL estimation. The effectiveness of the proposed methodology and approach is evaluated on datasets containing the air filter's run-to-failure data. The experimental results show remarkable accuracy in the RUL estimation, suggesting that the proposed HyMA and methodology offer a promising approach for PHM.
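    As a rough sketch of the final stage described above (windowed CM data in, RUL estimate out), the PyTorch snippet below trains a small 1D CNN on random stand-in data. The architecture, window length, sensor count and labels are placeholders rather than the paper's network or its simulated HVAC run-to-failure datasets.

        import torch
        import torch.nn as nn

        N_SENSORS, WINDOW = 8, 128

        class RulCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(N_SENSORS, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.head = nn.Linear(32, 1)   # regression output: RUL (e.g., hours)
            def forward(self, x):              # x: (batch, sensors, window)
                return self.head(self.features(x).squeeze(-1))

        model = RulCNN()
        x = torch.randn(64, N_SENSORS, WINDOW)   # stand-in normalized CM windows
        rul = torch.rand(64, 1) * 500            # stand-in RUL labels
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(10):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), rul)
            loss.backward()
            opt.step()
        print("training MSE:", float(loss))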

  • 35.
    Gutierrez Ballesteros, Elena
    et al.
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Rönnberg, Sarah
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Gil-De Castro, Aurora
    Departamento de Ingeniería Electrónica y de Computadores, Universidad de Córdoba, 14071 Córdoba, Spain.
    Stochastic assessment of risk of light flicker in a household2023In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 104106-104115Article in journal (Refereed)
    Abstract [en]

    This paper introduces the concept of risk applied to light flicker. It offers a novel stochastic assessment of the risk of light flicker produced by voltage fluctuations from devices in a household. The paper contributes stochastic models of voltage fluctuation patterns from several household devices. The severity of the voltage fluctuations has been determined based on the perception of light flicker from a generalized LED lamp, using an index defined as the severity factor, which is a novel assessment in contrast to the incandescent lamp considered in the IEC 61000-4-15 standard. The severity factor can be equal to 1, 2 or 3, in increasing order of severity. Each severity factor indicates a probability of light flicker equal to or over the perception threshold in a generalized LED lamp and, therefore, a probability of light flicker being perceived by an average observer. The severity factor is later used to obtain the risk of light flicker by calculating an index defined as the probabilistic number of voltage fluctuations in a 10-minute interval producing light flicker equal to or over the perception threshold (PLMinst = 1) in a generalized LED lamp. A PLMst value is obtained from the calculated risk-of-light-flicker index. The results show a low risk of annoying light flicker due to voltage fluctuations from devices in a household for an average observer.

  • 36.
    Gutierrez Ballesteros, Elena
    et al.
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Rönnberg, Sarah
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Energy Science.
    Gil-de-Castro, Aurora
    Departamento de Ingeniería Electrónica y de Computadores, Universidad de Córdoba, Córdoba, 14071 Spain.
    Propagation of Voltage Fluctuations and Assessment of LED Lamps Light Flicker in Low Voltage Networks2023In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 129195-129204Article in journal (Refereed)
    Abstract [en]

    This paper offers an assessment of the risk and severity of light flicker, evaluated considering LED lamps as the only customer lighting technology and different household devices as sources of voltage fluctuations. The study is based on the configuration of a real low-voltage network. The results are calculated for a single-phase system without considering the impedance of the household devices themselves. The short-circuit impedance values needed for reaching the global emission limit of light flicker and the irritability threshold are at least over the 84th percentile of a sample of short-circuit impedances from Swedish distribution networks, and in most cases over the 95th percentile of the networks in Europe. The voltage fluctuations reduce in magnitude when propagating to adjacent phases, and this reduction is different for each adjacent phase, as has been observed from measurements in a controlled network. It has also been noted that the duration of the voltage fluctuations does not change when propagating between different points of the controlled network nor between phases. The severity of the light flicker values calculated in this study points to a ratio for estimating the severity of light flicker of a generalized LED lamp using the IEC flickermeter.

  • 37.
    Hadi, Sinan Jasim
    et al.
    Department of the Real Estate Development and Management, Ankara University, Ankara, Turkey.
    Abba, S.I.
    Department of Physical Planning Development, Maitama Sule University Kano, Nigeria.
    Sammen, Saad Sh.
    Department of Civil Engineering, College of Engineering, University of Diyala, Diyala Governorate, Iraq.
    Salih, Sinan Q.
    Institute of Research and Development, Duy Tan University, Da Nang, Vietnam.
    Al-Ansari, Nadhir
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Mining and Geotechnical Engineering.
    Yaseen, Zaher Mundher
    Sustainable Developments in Civil Engineering Research Group, Faculty of civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam.
    Non-linear input variable selection approach integrated with non-tuned data intelligence model for streamflow pattern simulation2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 141533-141548Article in journal (Refereed)
    Abstract [en]

    Streamflow modeling is considered an essential component of water resources planning and management. There are numerous challenges related to streamflow prediction facing water resources engineers. These challenges are due to the complex processes associated with several natural variables such as non-stationarity, non-linearity, and randomness. In this study, a new model is proposed to predict long-term streamflow. Several lags covering several years are abstracted using the potential of Extreme Gradient Boosting (XGB), after which the selected input variables are imposed on the predictive model (i.e., the Extreme Learning Machine (ELM)). The proposed model is compared with the stand-alone schema in which the optimum lags of the variables are supplied to the XGB and ELM models. Hydrological variables including rainfall, temperature and evapotranspiration are used to build the model and predict the streamflow at the Goksu-Himmeti basin in Turkey. The results showed that the XGB model performed excellently and can be used for predicting the streamflow pattern. It is also clear from the attained results that the accuracy of the streamflow prediction using the XGB technique could be improved when a high number of lags was used. However, XGB is a tree-based technique in which several issues can arise, such as overfitting. In the proposed XGBELM schema, the XGB approach selects the correlated inputs and ranks them according to their importance; the selected inputs are then supplied to the ELM model for the prediction process. The XGBELM model outperformed the stand-alone schema of both the XGB and ELM models and the high-lagged schema of the XGB. It is important to indicate that the XGBELM model was found to improve the prediction ability with a minimal number of variables.
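    As a rough illustration of the input-selection stage, the sketch below fits gradient-boosted trees to lagged synthetic rainfall and keeps the most important lags. The lag construction, data and cut-off k are assumptions, and the retained columns would then feed a downstream model such as the ELM sketched earlier in this list.

        import numpy as np
        from xgboost import XGBRegressor

        rng = np.random.default_rng(1)
        T = 600
        rain = rng.gamma(2.0, 2.0, T)
        flow = np.convolve(rain, [0.5, 0.3, 0.15, 0.05], mode="same") \
               + rng.normal(0, 0.2, T)

        # Candidate inputs: rainfall lags 1..12 months.
        LAGS = 12
        X = np.column_stack([np.roll(rain, k) for k in range(1, LAGS + 1)])[LAGS:]
        y = flow[LAGS:]

        model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
        model.fit(X, y)

        # Rank lags by importance and keep the top 4 (k is arbitrary here).
        ranked = np.argsort(model.feature_importances_)[::-1]
        top_k = ranked[:4]
        print("selected lags (months):", sorted(top_k + 1))
        # X[:, top_k] would be the reduced input matrix supplied to the ELM.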

  • 38.
    Hadi, Sinan Jasim
    et al.
    Department of Real Estate Management and Development, Faculty of Applied Sciences, Ankara University, 00026 Ankara, Turkey.
    Tombul, Mustafa
    Department of Civil Engineering, Faculty of Engineering, Eskisehir Technical University, 26555 Eskisehir, Turkey.
    Salih, Sinan Q.
    Computer Science Department, College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Iraq.
    Al-Ansari, Nadhir
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Mining and Geotechnical Engineering.
    Yaseen, Zaher Mundher
    Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam.
    The Capacity of the Hybridizing Wavelet Transformation Approach With Data-Driven Models for Modeling Monthly-Scale Streamflow2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 101993-102006Article in journal (Refereed)
    Abstract [en]

    Hybrid models that combine wavelet transformation (WT) as a pre-processing tool with data-driven models (DDMs) as modeling approaches have been widely investigated for forecasting streamflow. The WT approach has been applied to the original time series as a decomposing process prior to DDM modeling. This procedure has been applied to eliminate redundant patterns or information, leading to a dramatic increase in model performance. In this study, three experiments were implemented: stand-alone data-driven modeling; hindcast decomposition using WT, with the sub-series divided and entered into the extreme learning machine (ELM); and the extreme gradient boosting (XGB) model to forecast streamflow data. The WT method was applied in two forms: discrete and continuous (DWT and CWT). In this paper, a new hybrid model is proposed based on an integrative prediction model where XGB is used as an input selection tool for the important attributes of the prediction matrix, which are then supplied to the ELM model as the predictive model. The monthly streamflow, upstream flow, rainfall, temperature, and potential evapotranspiration of a basin named 1805, located in the southeast of Turkey, are used for development of the model. The modeling results show that applying the WT method improved the performance in the hindcast experiment based on the CWT form, with a minimum root mean square error (RMSE = 4.910 m³/s). On the contrary, WT deteriorated the performance of the forecasting, and the stand-alone models exhibited a better performance. WT increased the performance of the hindcast experiment due to the inclusion of future information caused by convolution of the time series. However, the forecast experiment experienced deterioration due to the border effect at the end of the time series. Hence, WT was found not to be a useful pre-processing technique for forecasting streamflow.
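    For orientation, the snippet below shows the decomposition step with PyWavelets on a synthetic monthly series. The wavelet family (db4), level and data are placeholders, not the authors' settings.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        months = np.arange(240)
        flow = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)

        # 3-level DWT with a Daubechies-4 wavelet: coefficients [cA3, cD3, cD2, cD1]
        coeffs = pywt.wavedec(flow, "db4", level=3)
        for name, c in zip(["cA3", "cD3", "cD2", "cD1"], coeffs):
            print(f"{name}: {c.size} coefficients")

        # Each sub-series would be modeled separately by a DDM and the component
        # predictions recombined. Note the abstract's caveat: decomposing the full
        # series before splitting leaks future information into the hindcast,
        # which is why the forecast experiment deteriorated.
        flow_reconstructed = pywt.waverec(coeffs, "db4")[: flow.size]
        print("reconstruction max error:", np.abs(flow - flow_reconstructed).max())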

  • 39.
    Hai, Tao
    et al.
    Computer Science Department, Baoji University of Arts and Sciences, Baoji, China.
    Sharafati, Ahmad
    Department of Civil Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran.
    Mohammed, Achite
    Faculty of Nature and Life Sciences, Laboratory of Water and Environment, University Hassiba Benbouali Chlef, Hay Es-Salem Chlef, Algeria.
    Salih, Sinan Q.
    Institute of Research and Development, Duy Tan University, Da Nang, Vietnam.
    Deo, Ravinesh C.
    School of Agricultural, Computational and Environmental Sciences, Centre for Applied Climate Sciences, Institute of Life Sciences and Environment, University of Southern Queensland, Springfield, QLD, Australia.
    Al-Ansari, Nadhir
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Mining and Geotechnical Engineering.
    Yaseen, Zaher Mundher
    Sustainable Developments in Civil Engineering Research Group, Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam.
    Global Solar Radiation Estimation and Climatic Variability Analysis Using Extreme Learning Machine Based Predictive Model2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 12026-12042Article in journal (Refereed)
    Abstract [en]

    Sustainable utilization of freely available solar radiation as a renewable energy source requires accurate predictive models to quantitatively evaluate future energy potentials. In this research, an evaluation of the preciseness of the extreme learning machine (ELM) model as a fast and efficient framework for estimating global incident solar radiation (G) is undertaken. A daily meteorological dataset suitable for G estimation, belonging to the northern parts of the Cheliff Basin in Northwest Algeria, is used to construct the estimation model. Cross-correlation functions are applied between the inputs and the target variable (i.e., G), where several climatological variables are used as the predictors for surface-level G estimation. The most significant model inputs are determined in accordance with the highest cross-correlations, considering the covariance of the predictors with the G dataset. Subsequently, seven ELM models with unique neuronal architectures in terms of their input-hidden-output neurons are developed with appropriate input combinations. The prescribed ELM model's estimation performance over the testing phase is evaluated against multiple linear regression (MLR) and autoregressive integrated moving average (ARIMA) models, as well as several well-established literature studies, in accordance with several statistical score metrics. In quantitative terms, the root mean square error (RMSE) and mean absolute error (MAE) are dramatically lower for the optimal ELM model, with RMSE and MAE = 3.28 and 2.32 Wm−2, compared to 4.24 and 3.24 Wm−2 (MLR) and 8.33 and 5.37 Wm−2 (ARIMA).

  • 40.
    Hameed, Mohamed Abdel
    et al.
    Department of Computer Science, Faculty of Computers and Information, Luxor University, Luxor, Egypt.
    Hassaballah, M.
    Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena, Egypt.
    Aly, Saleh
    Department of Electrical Engineering, Faculty of Engineering, Aswan University, Aswan, Egypt.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Adaptive Image Steganography Method Based on Histogram of Oriented Gradient and PVD-LSB Techniques2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 185189-185204Article in journal (Refereed)
    Abstract [en]

    Pixel value differencing (PVD) and least significant bit substitution (LSB) are two widely used schemes in image steganography. These two methods do not consider different content in a cover image for hiding the secret data. The content of most digital images has different edge directions in each pixel, and the local object shape or appearance is mostly characterized by the distribution of its intensity gradients or edge directions. Exploiting these characteristics for embedding various secret information in different edge directions will eliminate sequential embedding and improve robustness. Thus, a histogram of oriented gradient (HOG) algorithm is proposed to find the dominant edge direction for each 2×2 block of cover images. Blocks of interest (BOIs) are determined adaptively based on the gradient magnitude and angle of the cover image. Then, the PVD algorithm is used to hide secret data in the dominant edge direction, while LSB substitution is utilized in the other two remaining pixels. Extensive experiments using various standard images reveal that the proposed scheme provides high embedding capacity and better visual quality compared with several other PVD- and LSB-based methods. Moreover, it resists various steganalysis techniques, such as pixel difference histogram and RS analysis.
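    As a minimal illustration of one of the two building blocks, the sketch below performs plain LSB substitution with NumPy. The HOG-based block selection and the PVD embedding along the dominant edge direction, which are the paper's actual contribution, are not reproduced here.

        import numpy as np

        def lsb_embed(cover, bits):
            """Write one payload bit into the least significant bit of each pixel."""
            flat = cover.flatten()
            flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
            return flat.reshape(cover.shape)

        def lsb_extract(stego, n_bits):
            return stego.flatten()[:n_bits] & 1

        rng = np.random.default_rng(0)
        cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # stand-in image
        payload = np.frombuffer(b"hi", dtype=np.uint8)
        bits = np.unpackbits(payload)

        stego = lsb_embed(cover.copy(), bits)
        recovered = np.packbits(lsb_extract(stego, bits.size)).tobytes()
        print(recovered)   # b'hi'
        # Each pixel changes by at most 1, which is what keeps LSB embedding
        # visually imperceptible.
        print("max pixel change:",
              int(np.abs(stego.astype(int) - cover.astype(int)).max()))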

  • 41.
    Hashmi, Khurram Azeem
    et al.
    German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany; MindGarage, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Stricker, Didier
    German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Afzal, Muhammad Adnan
    Bilojix Soft Technologies, Bahawalpur, Pakistan.
    Afzal, Muhammad Ahtsham
    Bilojix Soft Technologies, Bahawalpur, Pakistan.
    Afzal, Muhammad Zeshan
    German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany; MindGarage, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Current Status and Performance Analysis of Table Recognition in Document Images with Deep Neural Networks2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 87663-87685Article, review/survey (Refereed)
    Abstract [en]

    The first phase of table recognition is to detect the tabular area in a document. Subsequently, the tabular structures are recognized in the second phase in order to extract information from the respective cells. Table detection and structural recognition are pivotal problems in the domain of table understanding. However, table analysis is a perplexing task due to the colossal amount of diversity and asymmetry in tables. Therefore, it is an active area of research in document image analysis. Recent advances in the computing capabilities of graphical processing units have enabled the deep neural networks to outperform traditional state-of-the-art machine learning methods. Table understanding has substantially benefited from the recent breakthroughs in deep neural networks. However, there has not been a consolidated description of the deep learning methods for table detection and table structure recognition. This review paper provides a thorough analysis of the modern methodologies that utilize deep neural networks. Moreover, it presents a comprehensive understanding of the current state-of-the-art and related challenges of table understanding in document images. The leading datasets and their intricacies have been elaborated along with the quantitative results. Furthermore, a brief overview is given regarding the promising directions that can further improve table analysis in document images.

  • 42.
    Hashmi, Khurram Azeem
    et al.
    German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany; MindGarage, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Stricker, Didier
    German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Afzal, Muhammad Noman
    Bilojix Soft Technologies, Bahawalpur, Pakistan.
    Afzal, Muhammad Zeshan
    German Research Center for Artificial Intelligence, 67663 Kaiserslautern, Germany; Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany; MindGarage, University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Guided Table Structure Recognition through Anchor Optimization2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 113521-113534Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel approach to table structure recognition by leveraging guided anchors. The concept differs from current state-of-the-art systems for table structure recognition that naively apply object detection methods. In contrast to prior techniques, we first estimate viable anchors for table structure recognition. Subsequently, these anchors are exploited to locate the rows and columns in tabular images. Furthermore, the paper introduces a simple and effective method that improves the results using tabular layouts in realistic scenarios. The proposed method is exhaustively evaluated on two publicly available datasets for table structure recognition: ICDAR-2013 and TabStructDB. Moreover, we empirically established the validity of our method by implementing it on previous approaches. We accomplished state-of-the-art results on the ICDAR-2013 dataset with an average F-measure of 94.19% (92.06% for rows and 96.32% for columns), a relative error reduction of more than 25%. Furthermore, our proposed post-processing improves the average F-measure to 95.46%, resulting in a relative error reduction of more than 35%. Moreover, we surpassed the baseline results on the TabStructDB dataset with an average F-measure of 94.57% (94.08% for rows and 95.06% for columns).

  • 43.
    Hassan, Abbas M.
    et al.
    Department of Architecture, Faculty of Engineering, Al Azhar University.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Electrical Engineering Department, Faculty of Engineering, Al Azhar University.
    Urban Transition in the Era of the Internet of Things: Social Implications and Privacy Challenges2018In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 36428-36440Article in journal (Refereed)
    Abstract [en]

    The Internet of Things (IoT) could become an important aspect of urban life in the next decade. In the IoT paradigm, various information and communication technologies (ICTs) are used in concert to substantially reduce urban problems. Smart cities and ubiquitous cities will adopt ICTs in the urban development process; however, IoT-based cities will experience considerably stronger effects than those that adopt conventional ICTs. IoT cities allow urban residents and “things” to be connected to the Internet by virtue of the extension of the Internet Protocol from IPv4 to IPv6 and of cutting-edge device and sensor technology. Therefore, the urban transition resulting from the influence of IoT may be a critical issue. The privacy-related vulnerabilities of IoT technologies may negatively affect city residents. Furthermore, disparities in the spread of IoT systems across different countries may allow some countries to subvert the privacy of other countries' citizens. The aim of this paper is to identify the potential prospects and privacy challenges that will emerge from IoT deployment in urban environments. This paper reviews the prospects of and barriers to IoT implementation at the regional, city, and residential scales from the perspectives of security and privacy. The IoT technology will be a continual presence in life in general and in urban life in particular. However, the adoption of the IoT paradigm in cities will be complicated due to the inherent presence of unsecured connections. Moreover, the IoT systems may rob people of some of their humanity, infringing on their privacy, because people are also regarded as “things” in the IoT paradigm. Given the social trepidation surrounding IoT implementation, local and international associations related to IoT privacy, and legislation and international laws, are needed to maintain the personal right to privacy and to satisfy the demands of institutional privacy in urban contexts.

  • 44.
    Hatamzad, Mahshid
    et al.
    Department of Industrial Engineering, UiT/The Arctic University of Norway, 8514 Narvik, Nordland, Norway.
    Pinerez, Geanette Polanco
    Department of Industrial Engineering, UiT/The Arctic University of Norway, 8514 Narvik, Nordland, Norway.
    Casselgren, Johan
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Fluid and Experimental Mechanics.
    Using Slightly Imbalanced Binary Classification to Predict the Efficiency of Winter Road Maintenance2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 160048-160063Article in journal (Refereed)
    Abstract [en]

    The prediction of efficiency scores for winter road maintenance (WRM) is a challenging and serious issue in countries with cold climates. While effective and efficient WRM is a key contributor to maximizing road transportation safety and minimizing costs and environmental impacts, it has not yet been included in intelligent prediction methods. Therefore, this study aims to design a WRM efficiency classification prediction model that combines data envelopment analysis and machine learning techniques to improve decision support systems for decision-making units. The proposed methodology consists of six stages and starts with road selection. Real data are obtained by observing road conditions in equal time intervals via road weather information systems, optical sensors, and road-mounted sensors. Then, data preprocessing is performed, and efficiency scores are calculated with the data envelopment analysis method to classify the decision-making units into efficient and inefficient classes. Next, the WRM efficiency classes are considered targets for machine learning classification algorithms, and the dataset is split into training and test datasets. A slightly imbalanced binary classification case is encountered since the distributions of inefficient and efficient classes in the training dataset are unequal, with a low ratio between classes. The proposed methodology includes a comparison of different machine learning classification techniques. The graphical and numerical results indicate that the combination of a support vector machine and genetic algorithm yields the best generalization performance. The results include analyzing the variables that affect the WRM and using efficiency classes to drive future insights to improve the process of decision-making.

  • 45.
    Islam, Raihan Ul
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, University-4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Deep Learning Inspired Belief Rule-based Expert System2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 190637-190651Article in journal (Refereed)
    Abstract [en]

    Recent technological advancements in the area of the Internet of Things (IoT) and cloud services enable the generation of large amounts of raw data. However, accurate prediction using this data is considered challenging for machine learning methods. Deep Learning (DL) methods are widely used to process large amounts of data because they need less preprocessing than traditional machine learning methods. Various types of uncertainty associated with large amounts of raw data hinder prediction accuracy. Belief Rule-Based Expert Systems (BRBES) are widely used to handle uncertain data. However, due to their incapability of integrating associative memory within the inference procedures, they demonstrate poor prediction accuracy when large amounts of data are considered. Therefore, we propose the integration of an associative-memory-based DL method within the BRBES inference procedures, allowing the discovery of accurate data patterns and, hence, the improvement of prediction under uncertainty. To demonstrate the applicability of the proposed method, named BRB-DL, it has been fine-tuned against two datasets, one in the area of air pollution and the other in the area of power generation. The reliability of the proposed BRB-DL method has also been compared with other DL methods, such as Long Short-Term Memory and Deep Neural Networks, as well as BRBES, using the air quality dataset from Beijing and the power generation dataset of a combined cycle power plant. BRB-DL outperforms the above-mentioned methods in terms of prediction accuracy: for the combined cycle power plant, the Mean Square Error of BRB-DL is 4.12, whereas for Long Short-Term Memory, Deep Neural Network, Fuzzy Deep Neural Network, Adaptive Neuro-Fuzzy Inference System and BRBES it is 18.66, 28.49, 17.05, 16.37 and 38.15, respectively, all of which are significantly higher.

  • 46.
    Jamil, Mohammad Newaj
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Workload Orchestration in Multi-access Edge Computing Using Belief Rule-Based Approach2023In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 118002-118023Article in journal (Refereed)
    Abstract [en]

    Multi-access Edge Computing (MEC) is a standard network architecture for edge computing, which is proposed to handle enormous computation demands from emerging resource-intensive and latency-sensitive applications and services as well as accommodate Quality of Service (QoS) requirements for ever-growing users through computation offloading. Since the demand of end-users is unknown in a rapidly changing dynamic environment, processing offloaded tasks in a non-optimal server can deteriorate QoS due to high latency and increasing task failures. In order to deal with such a challenge in MEC, a two-stage Belief Rule-Based (BRB) workload orchestrator is proposed to distribute the workload of end-users to optimum computing units, support strict QoS requirements, ensure efficient utilization of computational resources, minimize task failures, and reduce the overall service time. The proposed BRB workload orchestrator decides the optimal execution location for each offloaded task from User Equipment (UE) within the overall MEC architecture based on network conditions, computational resources, and task requirements. EdgeCloudSim simulator is used to conduct comprehensive simulation experiments for evaluating the performance of the proposed BRB orchestrator in contrast to four workload orchestration approaches from the literature with different types of applications. Based on the simulation experiments, the proposed workload orchestrator outperforms state-of-the-art workload orchestration approaches and ensures efficient utilization of computational resources while minimizing task failures and reducing the overall service time.

  • 47.
    Jhunjhunwala, Pranay
    et al.
    Aalto University, Espoo, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Aalto University, Espoo, Finland.
    Proposing and Prototyping an Extension to the Adapter Concept in the IEC 61499 Standard2022In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 2564-2577Article in journal (Refereed)
    Abstract [en]

    Component-based design architectures have been in demand given the growing need for modularity and flexibility in the automation industry. The IEC 61499 standard, a component-based automation architecture, provides various tools and techniques for automation developers to accommodate the need for flexibility in automation sequences. However, the adapter concept, one of the significant features of the standard, has remained largely untouched and undeveloped since its inclusion in the standard, and its true potential is not being utilized. In this work, we enhance the adapter concept by proposing the addition of logic into adapters. This proposition advances adapter technology and gives the automation standard more capabilities to support higher levels of modularization without increasing application complexity.

  • 48.
    Kanellakis, Christoforos
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Fresk, Emil
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Mansouri, Sina Sharif
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Kominiak, Dariusz
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Nikolakopoulos, George
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Towards Visual Inspection of Wind Turbines: A Case of Visual Data Acquisition using Autonomous Aerial Robots2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 181650-181661Article in journal (Refereed)
    Abstract [en]

    This article presents a novel framework for acquiring visual data around 3D infrastructure by establishing a team of fully autonomous Micro Aerial Vehicles (MAVs) with robust localization, planning and perception capabilities. The proposed aerial system reaches a high level of autonomy on a large scale, while pushing the boundaries of the real-life deployment of aerial robotics. In the presented approach, the MAVs deployed around the structure rely only on their onboard computers and sensory systems. The developed framework envisions a modular system, combining open research challenges in the fields of localization, path planning and mapping, with an overall capability for fast on-site deployment and reduced execution time that can repeatably perform the mission according to the operator's needs. The architecture of the established system includes: 1) a geometry-based path planner for coverage of complex structures by multiple MAVs; 2) an accurate yet flexible localization component, which provides an accurate pose estimation for the MAVs by utilizing an Ultra-Wideband-fused inertial estimation scheme; and 3) a visual data post-processing scheme for 3D model building. The performance of the proposed framework has been experimentally demonstrated in multiple realistic outdoor field trials, all focusing on the challenging structure of a wind turbine as the main test case. The successful experimental results depict the merits of the proposed autonomous navigation system as the enabling technology towards aerial robotic inspectors.

  • 49.
    Kanellakis, Christoforos
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Karvelis, Petros S.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Mansouri, Sina Sharif
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Agha-mohammadi, Ali-akbar
    Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, 91109.
    Nikolakopoulos, George
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Towards Autonomous Aerial Scouting Using Multi-Rotors in Subterranean Tunnel Navigation2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 66477-66485Article in journal (Refereed)
    Abstract [en]

    This work establishes a robocentric framework built around a non-linear Model Predictive Controller (NMPC) for autonomous navigation of quadrotors in tunnel-like environments. The proposed framework enables obstacle-free navigation for resource-constrained platforms in areas with critical challenges, including darkness, textureless surfaces, and self-similar geometries, without any prior knowledge. The core contribution stems from merging perception dynamics into a model-based optimization approach: the vehicle's heading is aligned with the tunnel's open space, expressed as the x-axis image coordinate of the most distant area in the image frame. Moreover, the aerial vehicle is treated as a free-flying object that plans its actions using egocentric onboard sensors. The proposed method can be deployed in both fully illuminated indoor corridors and featureless dark tunnels, leveraging visual processing from either RGB-D or monocular sensors to generate direction commands that keep the vehicle flying in the proper direction. Multiple experimental field trials demonstrate the effectiveness of the proposed method in challenging environments.
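
    The perception idea in the abstract, extracting the image x-coordinate of the most distant area and steering the heading toward it, can be sketched in a few lines. The function below is a hypothetical illustration: the depth threshold, gain, field-of-view value, and the simple proportional mapping are all assumptions, and in the paper's framework the heading reference would feed the NMPC rather than be applied directly.

```python
# Sketch of the open-space heading cue described in the abstract: from a
# depth image, take the centroid x-coordinate of the deepest region and
# map its offset from the image centre to a yaw-rate reference. All
# thresholds and gains are assumed values, not the paper's parameters.

import numpy as np

def open_space_yaw_rate(depth: np.ndarray,
                        hfov_rad: float = 1.5,
                        gain: float = 0.8) -> float:
    """Return a yaw-rate reference (rad/s) from a depth image (metres)."""
    # Mask the deepest 5% of valid pixels: the tunnel's open space.
    valid = np.isfinite(depth) & (depth > 0)
    threshold = np.quantile(depth[valid], 0.95)
    deep = valid & (depth >= threshold)
    # Centroid x-coordinate (column index) of the deep region, in pixels.
    cx = np.nonzero(deep)[1].mean()
    # Normalised offset from image centre -> bearing -> yaw rate.
    width = depth.shape[1]
    offset = (cx - width / 2) / (width / 2)   # in [-1, 1]
    bearing = offset * hfov_rad / 2           # small-angle approximation
    return gain * bearing

# Example with a synthetic depth image whose deepest area is on the right.
img = np.full((48, 64), 3.0)
img[:, 48:] = 12.0
print(open_space_yaw_rate(img))
```

    With the synthetic image above, the deepest region sits in the right half of the frame, so the returned command is positive under the assumed sign convention (positive turns toward larger image x); a real pipeline would also guard against frames with no valid depth returns.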

  • 50.
    Karie, Nickson M.
    et al.
    Cyber Security Cooperative Research Centre Limited, Australia; School of Science, Edith Cowan University, Security Research Institute, Joondalup WA, 6027, Australia.
    Masri Sahri, Nor
    Cyber Security Cooperative Research Centre Limited, Australia; School of Science, Edith Cowan University, Security Research Institute, Joondalup WA, 6027, Australia.
    Yang, Wencheng
    Cyber Security Cooperative Research Centre Limited, Australia; School of Science, Edith Cowan University, Security Research Institute, Joondalup WA, 6027, Australia.
    Valli, Craig
    Cyber Security Cooperative Research Centre Limited, Australia; School of Science, Edith Cowan University, Security Research Institute, Joondalup WA, 6027, Australia.
    Kebande, Victor R.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems. Department of Computer Science, Blekinge Institute of Technology, 371 79, Karlskrona, Sweden.
    A Review of Security Standards and Frameworks for IoT-Based Smart Environments2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9Article, review/survey (Refereed)
    Abstract [en]

    Assessing the security of IoT-based smart environments, such as smart homes and smart cities, is becoming fundamentally essential for implementing the correct control measures and effectively reducing the security threats and risks brought about by deploying IoT-based smart technologies. The problem, however, lies in finding security standards and assessment frameworks that best meet the security requirements and that comprehensively assess and expose the security posture of IoT-based smart environments. To address this gap, this paper reviews existing security standards and assessment frameworks, including several NIST special publications on security techniques, highlighting their primary areas of focus in order to uncover those that can potentially address some of the security needs of IoT-based smart environments. In total, 80 ISO/IEC security standards, 32 ETSI standards, and 37 conventional security assessment frameworks (including seven NIST special publications on security techniques) were reviewed. To present an all-inclusive and up-to-date state of the art, the review process considered both published security standards and assessment frameworks and those still under development. The findings show that most conventional security standards and assessment frameworks do not directly address the security needs of IoT-based smart environments but have the potential to be adapted to them. With this insight into the state of the art on security standards and assessment frameworks, this study helps advance the IoT field by opening new research directions and opportunities for developing new security standards and assessment frameworks that will address the security concerns of future IoT-based smart environments. The paper also discusses open problems and challenges related to the security of IoT-based smart environments. As a new contribution, it proposes a taxonomy of challenges for IoT-based smart-environment security, drawn from the extensive literature examined during this study, and maps the identified challenges to potential solutions.
