Search results 1–50 of 2183
  • 1.
    Aaltonen, Harri
    et al.
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Sierla, Seppo
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Kyrki, Ville
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Pourakbari-Kasmaei, Mahdi
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Bidding a Battery on Electricity Markets and Minimizing Battery Aging Costs: A Reinforcement Learning Approach (2022). In: Energies, E-ISSN 1996-1073, Vol. 15, no 14, article id 4960. Article in journal (Refereed)
    Abstract [en]

    Battery storage is emerging as a key component of intelligent green electricity systems. The battery is monetized through market participation, which usually involves bidding. Bidding is a multi-objective optimization problem, involving targets such as maximizing market compensation and minimizing penalties for failing to provide the service and costs for battery aging. In this article, battery participation is investigated on primary frequency reserve markets. Reinforcement learning is applied for the optimization. In previous research, only simplified formulations of battery aging have been used in the reinforcement learning formulation, so it is unclear how the optimizer would perform with a real battery. In this article, a physics-based battery aging model is used to assess the aging. The contribution of this article is a methodology involving a realistic battery simulation to assess the performance of the trained RL agent with respect to battery aging in order to inform the selection of the weighting of the aging term in the RL reward formula. The RL agent performs day-ahead bidding on the Finnish Frequency Containment Reserves for Normal Operation market, with the objective of maximizing market compensation, minimizing market penalties and minimizing aging costs.
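    The weighted reward described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the `aging_weight` parameter are assumptions, with the aging cost assumed to come from a separate physics-based degradation model.

```python
def bidding_reward(compensation, penalty, aging_cost, aging_weight):
    """Single-step reward for the bidding agent (illustrative sketch).

    compensation: market payment for the accepted reserve bid (EUR)
    penalty:      market penalty for failing to deliver the service (EUR)
    aging_cost:   battery degradation cost from a physics-based model (EUR)
    aging_weight: tunable weight on the aging term; the paper's methodology
                  selects it by simulating a realistic battery and inspecting
                  the resulting aging.
    """
    return compensation - penalty - aging_weight * aging_cost
```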

  • 2.
    Aaltonen, Harri
    et al.
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Sierla, Seppo
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Subramanya, Rakshith
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland; International Research Laboratory of Computer Technologies, ITMO University, 197101 St. Petersburg, Russia.
    A simulation environment for training a reinforcement learning agent trading a battery storage (2021). In: Energies, E-ISSN 1996-1073, Vol. 14, no 17, article id 5587. Article in journal (Refereed)
    Abstract [en]

    Battery storages are an essential element of the emerging smart grid. Compared to other distributed intelligent energy resources, batteries have the advantage of being able to rapidly react to events such as renewable generation fluctuations or grid disturbances. There is a lack of research on ways to profitably exploit this ability. Any solution needs to consider rapid electrical phenomena as well as the much slower dynamics of relevant electricity markets. Reinforcement learning is a branch of artificial intelligence that has shown promise in optimizing complex problems involving uncertainty. This article applies reinforcement learning to the problem of trading batteries. The problem involves two timescales, both of which are important for profitability. Firstly, trading the battery capacity must occur on the timescale of the chosen electricity markets. Secondly, the real-time operation of the battery must ensure that no financial penalties are incurred from failing to meet the technical specification. The trading-related decisions must be done under uncertainties, such as unknown future market prices and unpredictable power grid disturbances. In this article, a simulation model of a battery system is proposed as the environment to train a reinforcement learning agent to make such decisions. The system is demonstrated with an application of the battery to Finnish primary frequency reserve markets.
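    As a rough illustration of the kind of simulation environment the paper describes, the sketch below follows the Gymnasium `Env` interface. Everything here (the state variables, the random-walk price, and collapsing the fast frequency-response dynamics into a single "delivery" factor) is a simplifying assumption for illustration; the paper's environment models the Finnish frequency reserve market and the battery in far more detail.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class BatteryBiddingEnv(gym.Env):
    """Sketch of a two-timescale battery-trading environment (hypothetical)."""

    def __init__(self, capacity_mw=1.0):
        self.capacity = capacity_mw
        # Observation: [state of charge, last market price]
        self.observation_space = spaces.Box(0.0, np.inf, shape=(2,), dtype=np.float32)
        # Action: fraction of capacity to bid into the reserve market
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)       # seeds self.np_random
        self.soc, self.price = 0.5, 10.0
        return np.array([self.soc, self.price], dtype=np.float32), {}

    def step(self, action):
        bid_mw = float(action[0]) * self.capacity
        # Placeholder for the fast timescale: fraction of service delivered.
        delivered = self.np_random.uniform(0.9, 1.0)
        reward = bid_mw * self.price * delivered \
               - bid_mw * self.price * (1.0 - delivered)   # compensation - penalty
        # Random-walk price; SoC dynamics omitted in this sketch.
        self.price = max(0.0, self.price + self.np_random.normal(0.0, 1.0))
        obs = np.array([self.soc, self.price], dtype=np.float32)
        return obs, reward, False, False, {}
```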

  • 3.
    Abdelaziz, Ahmed
    et al.
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Ang, Tanfong
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Sookhak, Mehdi
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Khan, Suleman
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Liew, Cheesun
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Akhunzada, Adnan
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Survey on network virtualization using OpenFlow: Taxonomy, opportunities, and open issues (2016). In: KSII Transactions on Internet and Information Systems, ISSN 1976-7277, Vol. 10, no 10, p. 4902-4932. Article in journal (Refereed)
    Abstract [en]

    The popularity of network virtualization has recently regained considerable momentum because of the emergence of OpenFlow technology. OpenFlow essentially decouples the data plane from the control plane and promotes hardware programmability. Subsequently, OpenFlow facilitates the implementation of network virtualization. This study aims to provide an overview of different approaches to creating a virtual network using OpenFlow technology. The paper also presents the OpenFlow components to compare conventional network architecture with OpenFlow network architecture, particularly in terms of virtualization. A thematic OpenFlow network virtualization taxonomy is devised to categorize network virtualization approaches. Several testbeds that support OpenFlow network virtualization are discussed with case studies to show the capabilities of OpenFlow virtualization. Moreover, the advantages of popular OpenFlow controllers that are designed to enhance network virtualization are compared and analyzed. Finally, we present key research challenges that mainly focus on security, scalability, reliability, isolation, and monitoring in the OpenFlow virtual environment. Numerous potential directions to tackle the problems related to OpenFlow network virtualization are likewise discussed.

  • 4.
    Abd-Ellah, Mahmoud Khaled
    et al.
    Al-Madina Higher Institute for Engineering and Technology.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Khalaf, Ashraf A. M.
    Minia University, Egypt.
    Hamed, Hesham F. A.
    Minia University, Egypt.
    Classification of Brain Tumor MRIs Using a Kernel Support Vector Machine (2016). In: Building Sustainable Health Ecosystems: 6th International Conference on Well-Being in the Information Society, WIS 2016, Tampere, Finland, September 16-18, 2016, Proceedings / [ed] Hongxiu Li, Pirkko Nykänen, Reima Suomi, Nilmini Wickramasinghe, Gunilla Widén, Ming Zhan, Springer International Publishing, 2016, p. 151-160. Conference paper (Refereed)
    Abstract [en]

    The use of medical images has been continuously increasing, which makes manual investigation of every image a difficult task. This study focuses on classifying brain magnetic resonance images (MRIs) as normal, where a brain tumor is absent, or as abnormal, where a brain tumor is present. A hybrid intelligent system for automatic brain tumor detection and MRI classification is proposed. This system assists radiologists in interpreting the MRIs, improves brain tumor diagnostic accuracy, and directs the focus toward the abnormal images only. The proposed computer-aided diagnosis (CAD) system consists of five steps: MRI preprocessing to remove the background noise, image segmentation by combining Otsu binarization and K-means clustering, feature extraction using the discrete wavelet transform (DWT) approach, and dimensionality reduction of the features by applying the principal component analysis (PCA) method. The major features were submitted to a kernel support vector machine (KSVM) for performing the MRI classification. The performance evaluation of the proposed system measured a maximum classification accuracy of 100% using an available MRI database. The processing time for all processes was recorded as 1.23 seconds. The obtained results demonstrate the superiority of the proposed system.
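    A hedged sketch of the described pipeline (Otsu binarization, K-means clustering, DWT features, PCA, kernel SVM) using scikit-image, PyWavelets and scikit-learn. The wavelet choice, cluster count and variance threshold are illustrative assumptions, not the paper's settings, and images are assumed to share one size so the feature vectors align.

```python
import numpy as np
import pywt                                   # PyWavelets
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def extract_features(image):
    """DWT-based feature vector for one grayscale MRI slice (2-D array)."""
    # Rough foreground mask via Otsu binarization.
    mask = image > threshold_otsu(image)
    # K-means intensity clustering on the masked pixels (segmentation step).
    km = KMeans(n_clusters=3, n_init=10).fit(image[mask].reshape(-1, 1))
    # 2-level 2-D discrete wavelet transform; keep approximation coefficients.
    approx = pywt.wavedec2(image, wavelet="haar", level=2)[0]
    return np.concatenate([approx.ravel(), np.sort(km.cluster_centers_.ravel())])

# X: (n_images, n_features) built with extract_features; y: 0 = normal, 1 = abnormal
# pca = PCA(n_components=0.95).fit(X)          # keep 95% of the variance (assumed)
# clf = SVC(kernel="rbf").fit(pca.transform(X), y)
```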

  • 5.
    Abd-Ellah, Mahmoud Khaled
    et al.
    Electronic and Communication Department Al-Madina Higher Institute for Engineering and Technology, Giza.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Khalaf, Ashraf A. M.
    Faculty of Engineering, Minia University.
    Hamed, Hesham F. A.
    Faculty of Engineering, Minia University.
    Design and implementation of a computer-aided diagnosis system for brain tumor classification (2017). In: 2016 28th International Conference on Microelectronics (ICM), 2017, p. 73-76, article id 7847911. Conference paper (Refereed)
    Abstract [en]

    Computer-aided diagnosis (CAD) systems have become very important for the medical diagnosis of brain tumors. These systems improve diagnostic accuracy and reduce the required time. In this paper, a two-stage CAD system has been developed for automatic detection and classification of brain tumors in magnetic resonance images (MRIs). In the first stage, the system classifies brain tumor MRIs into normal and abnormal images. In the second stage, the type of tumor is classified as benign (noncancerous) or malignant (cancerous) from the abnormal MRIs. The proposed CAD combines the following computational methods: MRI segmentation by K-means clustering, feature extraction using the discrete wavelet transform (DWT), and feature reduction by applying principal component analysis (PCA). The two-stage classification has been conducted using a support vector machine (SVM). Performance evaluation of the proposed CAD has achieved promising results using a non-standard MRI database.

  • 6.
    Abd-Ellah, Mahmoud Khaled
    et al.
    Electronics and Communications Department, Al-Madina Higher Institute for Engineering and Technology, Giza, Egypt.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Faculty of Engineering, Al-Azhar University, Qena, Egypt.
    Khalaf, Ashraf A.M.
    Electronics and Communications Department, Faculty of Engineering, Minia University, Minia, Egypt.
    Hamed, Hesham F.A.
    Electronics and Communications Department, Faculty of Engineering, Minia University, Minia, Egypt.
    A Review on Brain Tumor Diagnosis from MRI Images: Practical Implications, Key Achievements, and Lessons Learned (2019). In: Magnetic Resonance Imaging, ISSN 0730-725X, E-ISSN 1873-5894, Vol. 61, p. 300-318. Article in journal (Refereed)
    Abstract [en]

    The successful early diagnosis of brain tumors plays a major role in improving the treatment outcomes and thus improving patient survival. Manually evaluating the numerous magnetic resonance imaging (MRI) images produced routinely in the clinic is a difficult process. Thus, there is a crucial need for computer-aided methods with better accuracy for early tumor diagnosis. Computer-aided brain tumor diagnosis from MRI images consists of tumor detection, segmentation, and classification processes. Over the past few years, many studies have focused on traditional or classical machine learning techniques for brain tumor diagnosis. Recently, interest has developed in using deep learning techniques for diagnosing brain tumors with better accuracy and robustness. This study presents a comprehensive review of traditional machine learning techniques and evolving deep learning techniques for brain tumor diagnosis. This review paper identifies the key achievements reflected in the performance measurement metrics of the applied algorithms in the three diagnosis processes. In addition, this study discusses the key findings and draws attention to the lessons learned as a roadmap for future research.

  • 7.
    Abdukalikova, Anara
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Kleyko, Denis
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Wiklund, Urban
    Umeå University, Umeå, Sweden.
    Detection of Atrial Fibrillation from Short ECGs: Minimalistic Complexity Analysis for Feature-Based Classifiers (2018). In: Computing in Cardiology 2018: Proceedings / [ed] Christine Pickett; Cristiana Corsi; Pablo Laguna; Rob MacLeod, IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    In order to facilitate data-driven solutions for early detection of atrial fibrillation (AF), the 2017 CinC conference challenge was devoted to automatic AF classification based on short ECG recordings. The proposed solutions concentrated on maximizing the classifiers' F1 score, whereas the complexity of the classifiers was not considered. However, we argue that this must be addressed, as complexity places restrictions on the applicability of inexpensive devices for AF monitoring outside hospitals. Therefore, this study investigates the feasibility of complexity reduction by analyzing one of the solutions presented for the challenge.

  • 8.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology Chittagong.
    Chowdhury, Abu Sayeed
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Karim, Razuan
    University of Science and Technology Chittagong.
    An Interoperable IP based WSN for Smart Irrigation Systems (2017). Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSN) have become highly developed and can be used in agriculture to enable optimal irrigation scheduling. Since there is an absence of widely available methods to support effective agricultural practice in different weather conditions, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The performance of fundamental issues of this network is emulated in Tmote Sky for 6LoWPAN over IEEE 802.15.4 radio links using the Contiki OS and the Cooja simulator. The simulated results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. This will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of sensor power consumption.

  • 9.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Paul, Sukanta
    University of Science and Technology, Chittagong.
    Akhter, Sharmin
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology, Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Selection of Energy Efficient Routing Protocol for Irrigation Enabled by Wireless Sensor Networks (2017). In: Proceedings of 2017 IEEE 42nd Conference on Local Computer Networks Workshops, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 75-81. Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) make a remarkable contribution to real-time decision making by sensing and actuating on the surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, such as irrigation scheduling based on moisture-level data sensed by the sensors. Since WSNs are deployed in constrained environments, the lifetime of the sensors is crucial for normal operation of the network. In this regard, the routing protocol is a prime factor in prolonging sensor lifetime. This research focuses on the performance analysis of several clustering-based routing protocols in order to select the best one. Four algorithms are considered, namely Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP) and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in the Matlab framework by using the mathematical models of those algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CH) per round, throughput and average residual energy per node. The experimental results illustrate that TEEN provides a greater stable region and lifetime than the others, while SEP ensures more throughput.
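    Clustering protocols such as LEACH, TEEN and SEP are typically simulated on top of a first-order radio energy model that charges each node per transmitted and received bit. The sketch below uses constants that are common in the literature; they are assumptions, not values taken from this paper.

```python
import math

# First-order radio energy model commonly used in LEACH/SEP/TEEN simulations.
E_ELEC = 50e-9        # J/bit, transceiver electronics
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier
D0 = math.sqrt(EPS_FS / EPS_MP)   # crossover distance between the two regimes

def tx_energy(bits, distance):
    """Energy to transmit `bits` over `distance` metres."""
    if distance < D0:
        return E_ELEC * bits + EPS_FS * bits * distance ** 2
    return E_ELEC * bits + EPS_MP * bits * distance ** 4

def rx_energy(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits
```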

  • 10.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology Chittagong.
    Bhuyan, M. S.
    University of Science & Technology Chittagong.
    Karim, Razuan
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance Analysis of Anomaly Based Network Intrusion Detection Systems (2018). In: Proceedings of the 43rd IEEE Conference on Local Computer Networks Workshops (LCN Workshops), Piscataway, NJ: IEEE Computer Society, 2018, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    Because of the increased popularity and fast expansion of the Internet as well as the Internet of Things, networks are growing rapidly in every corner of society. As a result, a huge amount of data is travelling across computer networks, which leads to vulnerabilities in data integrity, confidentiality and reliability. Network security is therefore a burning issue in keeping the integrity of systems and data. Traditional security guards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalous with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating curve, accuracy, F-score and model build time, this paper illustrates the performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
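    A minimal scikit-learn analogue of the WEKA comparison could look like the sketch below. The WEKA-specific Naïve Bayes Tree (NBTree) has no direct scikit-learn counterpart and is omitted; model hyperparameters and the 10-fold setup are illustrative, not the paper's exact configuration.

```python
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# scikit-learn stand-ins for the WEKA models (NBTree omitted, no equivalent).
MODELS = {
    "Naive Bayes": GaussianNB(),
    "Multilayer Perceptron": MLPClassifier(max_iter=500),
    "J48 (C4.5-like)": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

def compare(X, y):
    """10-fold cross-validated precision, recall, F-score, ROC AUC, accuracy."""
    scoring = ["precision", "recall", "f1", "roc_auc", "accuracy"]
    for name, model in MODELS.items():
        scores = cross_validate(model, X, y, cv=10, scoring=scoring)
        print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in scoring})
```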

  • 11.
    Abrishambaf, Reza
    et al.
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Bal, Mert
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Distributed home automation system based on IEC61499 function blocks and wireless sensor networks (2017). In: Proceedings of the IEEE International Conference on Industrial Technology, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1354-1359, article id 7915561. Conference paper (Refereed)
    Abstract [en]

    In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all the decisions are made. The proposed control architecture is a solution to overcome problems such as the lack of flexibility and re-configurability that most conventional systems have. This has been achieved by employing a method based on the new IEC 61499 function block standard, which is proposed for distributed control systems. This paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into the area of home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.

  • 12.
    Acampora, Giovanni
    et al.
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Pedrycz, Witold
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vitiello, Autilia
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Computational Intelligence for Semantic Knowledge Management: New Perspectives for Designing and Organizing Information Systems (2020). Collection (editor) (Other academic)
    Abstract [en]

    This book provides a comprehensive overview of computational intelligence methods for semantic knowledge management. Contrary to popular belief, the methods for semantic management of information were created several decades ago, long before the birth of the Internet. In fact, it was back in 1945 when Vannevar Bush introduced the idea for the first protohypertext: the MEMEX (MEMory + indEX) machine. In the years that followed, Bush’s idea influenced the development of early hypertext systems until, in the 1980s, Tim Berners-Lee developed the idea of the World Wide Web (WWW) as it is known today. From then on, there was an exponential growth in research and industrial activities related to the semantic management of information and its exploitation in different application domains, such as healthcare, e-learning and energy management.

    However, semantics methods are not yet able to address some of the problems that naturally characterize knowledge management, such as the vagueness and uncertainty of information. This book reveals how computational intelligence methodologies, due to their natural inclination to deal with imprecision and partial truth, are opening new positive scenarios for designing innovative semantic knowledge management architectures.

  • 13.
    Acampora, Giovanni
    et al.
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Pedrycz, Witold
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vitiello, Autilia
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Preface (2020). In: Computational Intelligence for Semantic Knowledge Management: New Perspectives for Designing and Organizing Information Systems / [ed] Giovanni Acampora; Witold Pedrycz; Athanasios V. Vasilakos; Autilia Vitiello, Springer Nature, 2020, Vol. 837, p. vii-x. Chapter in book (Other academic)
  • 14.
    Acharya, Soam
    et al.
    Cornell University, Ithaca.
    Smith, Brian P
    Cornell University, Ithaca.
    Parnes, Peter
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Characterizing user access to videos on the World Wide Web (1999). In: Multimedia computing and networking 2000 / [ed] Klara Nahrstedt, Bellingham, Wash: SPIE - International Society for Optical Engineering, 1999, p. 130-141. Conference paper (Refereed)
    Abstract [en]

    Despite evidence of the rising popularity of video on the web (or VOW), little is known about how users access video. However, such a characterization can greatly benefit the design of multimedia systems such as web video proxies and VOW servers. Hence, this paper presents an analysis of trace data obtained from an ongoing VOW experiment at Luleå University of Technology, Sweden. This experiment is unique as video material is distributed over a high bandwidth network, allowing users to make access decisions without the network being a major factor. Our analysis revealed a number of interesting discoveries regarding user VOW access. For example, accesses display high temporal locality: several requests for the same video title often occur within a short time span. Accesses also exhibited spatial locality of reference, whereby a small number of machines accounted for a large number of overall requests. Another finding was a browsing pattern where users preview the initial portion of a video to find out if they are interested. If they like it, they continue watching, otherwise they halt it. This pattern suggests that caching the first several minutes of video data should prove effective. Lastly, the analysis shows that, contrary to previous studies, ranking of video titles by popularity did not fit a Zipfian distribution.
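    The Zipf check mentioned at the end of the abstract is commonly done by fitting a line to the log-log rank-frequency plot; a slope near -1 suggests a Zipfian popularity distribution, which this trace did not show. A minimal sketch (not the paper's analysis), assuming every title has at least one request:

```python
import numpy as np

def zipf_slope(request_counts):
    """Fit log(frequency) = intercept + slope * log(rank) over video titles.

    `request_counts` holds per-title request totals (all > 0). A slope far
    from -1 argues against a Zipfian popularity distribution.
    """
    counts = np.sort(np.asarray(request_counts, dtype=float))[::-1]  # by popularity
    ranks = np.arange(1, len(counts) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return slope
```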

  • 15.
    Adalat, Mohsin
    et al.
    COSMOSE Research Group, Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan.
    Niazi, Muaz A.
    COSMOSE Research Group, Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Variations in power of opinion leaders in online communication networks (2018). In: Royal Society Open Science, E-ISSN 2054-5703, Vol. 5, no 10, article id 180642. Article in journal (Refereed)
    Abstract [en]

    Online social media has completely transformed how we communicate with each other. While online discussion platforms are available in the form of applications and websites, an emergent outcome of this transformation is the phenomenon of ‘opinion leaders’. A number of previous studies have been presented to identify opinion leaders in online discussion networks. In particular, Feng (2016 Comput. Hum. Behav. 54, 43–53. (doi:10.1016/j.chb.2015.07.052)) has identified five different types of central users besides outlining their communication patterns in an online communication network. However, the presented work focuses on a limited time span. The question remains as to whether similar communication patterns exist that will stand the test of time over longer periods. Here, we present a critical analysis of the Feng framework both for short-term as well as for longer periods. Additionally, for validation, we take another case study presented by Udanor et al. (2016 Program 50, 481–507. (doi:10.1108/PROG-02-2016-0011)) to further understand these dynamics. Results indicate that not all Feng-based central users may be identifiable in the longer term. Conversation starters and influencers were noted as opinion leaders in the network. These users play an important role as information sources in long-term discussions, whereas network builders and active engagers help connect otherwise sparse communities. Furthermore, we discuss the changing positions of opinion leaders and their power to keep isolates interested in an online discussion network.
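    Opinion-leader identification of this kind usually starts from centrality measures on the directed communication graph. The sketch below uses NetworkX and two generic measures as stand-ins; Feng's actual framework distinguishes five user types and is considerably more involved.

```python
import networkx as nx

def top_central_users(edges, k=5):
    """Rank users in a directed communication network by simple centrality
    measures often used to flag candidate opinion leaders.

    `edges` is a list of (u, v) pairs meaning "u replied to / mentioned v".
    The measures here are illustrative, not Feng's exact typology.
    """
    g = nx.DiGraph(edges)
    scores = {
        "in-degree": nx.in_degree_centrality(g),     # how often a user is addressed
        "betweenness": nx.betweenness_centrality(g), # bridging between communities
    }
    return {name: sorted(s, key=s.get, reverse=True)[:k] for name, s in scores.items()}
```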

  • 16.
    Afroze, Tasnim
    et al.
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Akther, Shumia
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Chowdhury, Mohammed Armanuzzaman
    Department of Computer Science and Engineering, University of Chittagong, Chattogram 4331, Bangladesh.
    Hossain, Emam
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chattogram 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Glaucoma Detection Using Inception Convolutional Neural Network V3 (2021). In: Applied Intelligence and Informatics: First International Conference, AII 2021, Nottingham, UK, July 30–31, 2021, Proceedings / [ed] Mufti Mahmud; M. Shamim Kaiser; Nikola Kasabov; Khan Iftekharuddin; Ning Zhong, Springer, 2021, p. 17-28. Conference paper (Refereed)
    Abstract [en]

    Glaucoma detection is an important research area in intelligent systems, and it plays an important role in the medical field. Glaucoma can give rise to irreversible blindness due to a lack of proper diagnosis. Doctors need to perform many tests to diagnose this threatening disease, which requires a lot of time and expense. Sometimes affected people may not have any vision loss at the early stage of glaucoma. To detect glaucoma, we have built a model to lessen the time and cost. Our work introduces a CNN based Inception V3 model. We used a total of 6072 images, of which 2336 were glaucomatous and 3736 were normal fundus images. For training our model we took 5460 images and for testing we took 612 images. We obtained an accuracy of 0.8529 and an AUC value of 0.9387. For comparison, we used the DenseNet121 and ResNet50 algorithms and got accuracies of 0.8153 and 0.7761, respectively.
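    A transfer-learning setup of the kind described (ImageNet-pretrained Inception V3 with a binary glaucoma/normal head) might be assembled as below with Keras. The input size, frozen backbone and optimizer settings are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

def build_model(input_shape=(299, 299, 3)):
    """Sketch: Inception V3 backbone with a binary head for fundus images."""
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                        # fine-tune only the head first
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # glaucoma probability
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model
```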

  • 17.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland; Computing and Data Science, Willamette University, Salem, OR, USA.
    Olaleye, Sunday Adewale
    School of Business, Jamk University of Applied Sciences, Rajakatu 35, 40100, Jyvaskyla, Finland.
    Bower, Matt
    School of Education, Macquarie University, Sydney, NSW, Australia.
    Oyelere, Solomon
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Examining the relationships between students’ perceptions of technology, pedagogy, and cognition: the case of immersive virtual reality mini games to foster computational thinking in higher education (2023). In: Smart Learning Environments, E-ISSN 2196-7091, Vol. 10, no 1, article id 16. Article in journal (Refereed)
    Abstract [en]

    Researchers are increasingly exploring educational games in immersive virtual reality (IVR) environments to facilitate students’ learning experiences. Mainly, the effect of IVR on learning outcomes has been the focus. However, far too little attention has been paid to the influence of game elements and IVR features on learners’ perceived cognition. This study examined the relationship between game elements (challenge, goal clarity, and feedback) as a pedagogical approach, features of IVR technology (immersion and interaction), and learners’ perceived cognition (reflective thinking and comprehension). An experiment was conducted with 49 undergraduate students who played an IVR game-based application (iThinkSmart) containing mini games developed to facilitate learners’ computational thinking competency. The study employed partial least squares structural equation modelling to investigate the effect of educational game elements and learning content on learners’ cognition. Findings show that goal clarity is the main predictor of learners’ reflective thinking and comprehension in an educational game-based IVR application. It was also confirmed that immersion and interaction experience impact learners’ comprehension. Notably, adequate learning content, in terms of the organisation and relevance of the content contained in an IVR game-based application, significantly moderates learners’ reflective thinking and comprehension. The findings of this study have implications for educators and developers of IVR game-based interventions to facilitate learning in the higher education context. In particular, the implications of this study touch on the aspects of learners’ cognitive factors that aim to produce 21st-century problem-solving skills through critical thinking.

  • 18.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Laine, Teemu H.
    Department of Digital Media, Ajou University, 16499, Suwon, Republic of Korea.
    Co-design of mini games for learning computational thinking in an online environment (2021). In: Education and Information Technologies: Official Journal of the IFIP technical committee on Education, ISSN 1360-2357, E-ISSN 1573-7608, Vol. 26, no 5, p. 5815-5849. Article in journal (Refereed)
    Abstract [en]

    Understanding the principles of computational thinking (CT), e.g., problem abstraction, decomposition, and recursion, is vital for computer science (CS) students. Unfortunately, these concepts can be difficult for novice students to understand. One way students can develop CT skills is to involve them in the design of an application to teach CT. This study focuses on co-designing mini games to support teaching and learning CT principles and concepts in an online environment. Online co-design (OCD) of mini games enhances students’ understanding of problem-solving through a rigorous process of designing contextual educational games to aid their own learning. Given the current COVID-19 pandemic, where face-to-face co-designing between researchers and stakeholders could be difficult, OCD is a suitable option. CS students in a Nigerian higher education institution were recruited to co-design mini games with researchers. Mixed research methods comprising qualitative and quantitative strategies were employed in this study. Findings show that the participants gained relevant knowledge, for example, how to (i) create game scenarios and game elements related to CT, (ii) connect a contextual storyline to mini games, (iii) collaborate in a group to create contextual low-fidelity mini game prototypes, and (iv) peer review each other’s mini game concepts. In addition, students were motivated toward designing educational mini games in their future studies. This study also demonstrates how to conduct OCD with students, presents lessons learned, and provides recommendations based on the authors’ experience.

  • 19.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland; School of Computing and Data Science, Willamette University, Salem, OR, 97301, USA.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland.
    Design, development, and evaluation of a virtual reality game-based application to support computational thinking (2023). In: Educational technology research and development, ISSN 1042-1629, E-ISSN 1556-6501, Vol. 71, no 2, p. 505-537. Article in journal (Refereed)
    Abstract [en]

    Computational thinking (CT) has become an essential skill nowadays. For young students, CT competency is required to prepare them for future jobs. This competency can facilitate students’ understanding of programming knowledge, which has been a challenge for many novices pursuing a computer science degree. This study focuses on designing and implementing a virtual reality (VR) game-based application (iThinkSmart) to support CT knowledge. The study followed the design science research methodology to design, implement, and evaluate the first prototype of the VR application. An initial evaluation of the prototype was conducted with 47 computer science students from a Nigerian university who voluntarily participated in an experimental process. To determine what works and what needs to be improved in the iThinkSmart VR game-based application, two groups were randomly formed, consisting of the experimental (n = 21) and the control (n = 26) groups respectively. Our findings suggest that VR increases motivation and therefore increases students’ CT skills, which contributes to knowledge regarding the affordances of VR in education and particularly provides evidence on the use of visualization of CT concepts to facilitate programming education. Furthermore, the study revealed that immersion, interaction, and engagement in a VR educational application can promote students’ CT competency in higher education institutions (HEI). In addition, it was shown that students who played the iThinkSmart VR game-based application gained higher cognitive benefits and increased interest in and a more positive attitude toward learning CT concepts. Although further investigation is required in order to gain more insights into students’ learning processes, this study made significant contributions in positioning CT in the HEI context and provides empirical evidence regarding the use of educational VR mini games to support students’ learning achievements.
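    An experimental/control comparison of this shape (n = 21 vs. n = 26) is often checked with Welch's t-test on the outcome scores. This is a hedged sketch of that generic check, not necessarily the statistical analysis the authors performed:

```python
from scipy import stats

def compare_groups(experimental_scores, control_scores, alpha=0.05):
    """Welch's t-test (unequal variances) on two independent groups of scores."""
    t, p = stats.ttest_ind(experimental_scores, control_scores, equal_var=False)
    return {"t": t, "p": p, "significant": p < alpha}
```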

  • 20.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, Joensuu, Finland.
    iThinkSmart: Immersive Virtual Reality Mini Games to Facilitate Students’ Computational Thinking Skills (2021). In: Koli Calling '21: 21st Koli Calling International Conference on Computing Education Research / [ed] Otto Seppälä; Andrew Petersen, Association for Computing Machinery, 2021, article id 33. Conference paper (Refereed)
    Abstract [en]

    This paper presents iThinkSmart, an immersive virtual reality-based application to facilitate the learning of computational thinking (CT) concepts. The tool was developed to supplement the traditional teaching and learning of CT by integrating three virtual mini games, namely, River Crossing, Tower of Hanoi, and Mount Patti treasure hunt, to foster immersion, interaction, engagement, and personalization for an enhanced learning experience. iThinkSmart mini games can be played on a smartphone with a Google Cardboard and hand controller. This first prototype of the game assesses players' CT competency and renders feedback based on learning progress.

  • 21.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Scientific production and thematic breakthroughs in smart learning environments: a bibliometric analysis (2021). In: Smart Learning Environments, E-ISSN 2196-7091, Vol. 8, article id 1. Article, review/survey (Refereed)
    Abstract [en]

    This study examines the research landscape of smart learning environments by conducting a comprehensive bibliometric analysis of the field over the years. The study focused on the research trends, scholars’ productivity, and thematic focus of scientific publications in the field of smart learning environments. A total of 1081 peer-reviewed articles were retrieved from the Scopus database. A bibliometric approach was applied to analyse the data for a comprehensive overview of the trend, thematic focus, and scientific production in the field of smart learning environments. The results indicate that the first paper on smart learning environments was published in 2002, marking the beginning of the field. Among other sources, “Computers & Education,” “Smart Learning Environments,” and “Computers in Human Behaviour” are the most relevant outlets publishing articles associated with smart learning environments. The work of Kinshuk et al., published in 2016, stands out as the most cited work among the analysed documents. The United States has the highest number of scientific productions and remains the most relevant country in the smart learning environment field. The results also reveal the most prolific scholars and most relevant institutions in the field. Keywords such as “learning analytics,” “adaptive learning,” “personalized learning,” “blockchain,” and “deep learning” remain the trending keywords. Furthermore, thematic analysis shows that “digital storytelling” and its associated components such as “virtual reality,” “critical thinking,” and “serious games” are the emerging themes of smart learning environments but need to be further developed to establish more ties with “smart learning”. The study provides a useful contribution to the field by clearly presenting a comprehensive overview, research hotspots, thematic focus, and future directions of the field. These findings can guide scholars, especially young ones in the field of smart learning environments, in defining their research focus and what aspects of smart learning can be explored.
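    Bibliometric trend analyses of this kind often begin by counting author keywords per year from a database export. A small sketch under the assumption of a Scopus-style CSV loaded into pandas with "Year" and semicolon-separated "Author Keywords" columns; the actual study's tooling may differ.

```python
from collections import Counter
import pandas as pd

def keyword_trends(df):
    """Count (year, keyword) occurrences from a Scopus-style export.

    Assumes columns "Year" and "Author Keywords" (semicolon-separated).
    Returns a Series indexed by (year, keyword), most frequent first.
    """
    counts = Counter()
    for _, rec in df.dropna(subset=["Author Keywords"]).iterrows():
        for kw in rec["Author Keywords"].split(";"):
            counts[(rec["Year"], kw.strip().lower())] += 1
    return pd.Series(counts).sort_values(ascending=False)
```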

  • 22.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Sanusi, Ismaila Temitayo
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Application of Virtual Reality in Computer Science Education: A Systemic Review Based on Bibliometric and Content Analysis Methods (2021). In: Education Sciences, E-ISSN 2227-7102, Vol. 11, no 3, article id 142. Article, review/survey (Refereed)
    Abstract [en]

    This study investigated the role of virtual reality (VR) in computer science (CS) education over the last 10 years by conducting a bibliometric and content analysis of articles related to the use of VR in CS education. A total of 971 articles published in peer-reviewed journals and conferences were collected from Web of Science and Scopus databases to conduct the bibliometric analysis. Furthermore, content analysis was conducted on 39 articles that met the inclusion criteria. This study demonstrates that VR research for CS education was faring well around 2011 but witnessed low production output between the years 2013 and 2016. However, scholars have increased their contribution in this field recently, starting from the year 2017. This study also revealed prolific scholars contributing to the field. It provides insightful information regarding research hotspots in VR that have emerged recently, which can be further explored to enhance CS education. In addition, the quantitative method remains the most preferred research method, while the questionnaire was the most used data collection technique. Moreover, descriptive analysis was primarily used in studies on VR in CS education. The study concludes that even though scholars are leveraging VR to advance CS education, more effort needs to be made by stakeholders across countries and institutions. In addition, a more rigorous methodological approach needs to be employed in future studies to provide more evidence-based research output. Our future study would investigate the pedagogy, content, and context of studies on VR in CS education.

  • 23.
    Agreste, Santa
    et al.
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    De Meo, Pasquale
    Department of Ancient and Modern Civilizations, University of Messina.
    Fiumara, Giacomo
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    Piccione, Giuseppe
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    Piccolo, Sebastiano
    Department of Management Engineering - Engineering Systems Division at the Technical University of Denmark.
    Rosaci, Domenico
    DIIES Department, University of Reggio Calabria Via Graziella.
    Sarné, Giuseppe M. L.
    DICEAM Department, University of Reggio Calabria Via Graziella.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An empirical comparison of algorithms to find communities in directed graphs and their application in Web Data Analytics (2017). In: IEEE Transactions on Big Data, E-ISSN 2332-7790, Vol. 3, no 3, p. 289-306. Article in journal (Refereed)
    Abstract [en]

    Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges) but many graphs on the Web - e.g. microblogging Web sites, trust networks or the Web graph itself - are often directed. Few community detection algorithms deal with directed graphs but we lack their experimental comparison. In this paper we evaluated some community detection algorithms across accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) are explicitly designed to manage directed graphs while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group of algorithms (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs and, on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by other algorithms; Label Propagation has outstanding performance in scalability on both artificial and real graphs. The Infomap algorithm showcased the best trade-off between accuracy and computational performance and, therefore, it has to be considered as a promising tool for Web Data Analytics purposes.
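    The three algorithm families compared above can all be exercised through python-igraph; the sketch below is a hedged illustration, not the paper's benchmark harness, and the grouping comments mirror the abstract's categorization (how strictly label propagation honors edge directions depends on the library version).

```python
import igraph as ig

def detect_communities(edges, directed=True):
    """Run representative algorithms from the three groups on one edge list.

    `edges` is a list of (source, target) vertex-id pairs.
    Returns the number of communities each method finds.
    """
    g = ig.Graph(edges=edges, directed=directed)
    results = {
        # Group 1: designed to handle directed graphs.
        "infomap": g.community_infomap(),
        "label_propagation": g.community_label_propagation(),
        # Group 2: ignores edge directionality.
        "walktrap": g.as_undirected().community_walktrap().as_clustering(),
        # Group 3: works on the symmetrized (undirected) version of the graph.
        "eigenvector": g.as_undirected().community_leading_eigenvector(),
    }
    return {name: len(clusters) for name, clusters in results.items()}
```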

  • 24.
    Ahmad, Iftikhar
    et al.
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Noor, Rafidah Md
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Ali, Ihsan
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Characterizing the role of vehicular cloud computing in road traffic management (2017). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Vol. 13, no 5. Article in journal (Refereed)
    Abstract [en]

    Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Explicitly, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include but are not limited to storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of the current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to have a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles.

  • 25.
    Ahmed, Ejaz
    et al.
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Yaqoob, Ibrar
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Hashem, Ibrahim Abaker Targio
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Khan, Imran
    Schneider Electric Industries, Grenoble.
    Ahmed, Abdelmuttlib Ibrahim Abdalla
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    The role of big data analytics in Internet of Things (2017). In: Computer Networks, ISSN 1389-1286, E-ISSN 1872-7069, Vol. 129, no 2, p. 459-471. Article in journal (Refereed)
    Abstract [en]

    The explosive growth in the number of devices connected to the Internet of Things (IoT) and the exponential increase in data consumption only reflect how the growth of big data perfectly overlaps with that of IoT. The management of big data in a continuously expanding network gives rise to non-trivial concerns regarding data collection efficiency, data processing, analytics, and security. To address these concerns, researchers have examined the challenges associated with the successful deployment of IoT. Despite the large number of studies on big data, analytics, and IoT, the convergence of these areas creates several opportunities for flourishing big data and analytics for IoT systems. In this paper, we explore the recent advances in big data analytics for IoT systems as well as the key requirements for managing big data and for enabling analytics in an IoT environment. We taxonomized the literature based on important parameters. We identify the opportunities resulting from the convergence of big data, analytics, and IoT as well as discuss the role of big data analytics in IoT applications. Finally, several open challenges are presented as future research directions.

  • 26.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Hasan, Mohammad
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Comparative Performance of Tree Based Machine Learning Classifiers in Product Backorder Prediction (2023). In: Intelligent Computing & Optimization: Proceedings of the 5th International Conference on Intelligent Computing and Optimization 2022 (ICO2022) / [ed] Pandian Vasant; Gerhard-Wilhelm Weber; José Antonio Marmolejo-Saucedo; Elias Munapo; J. Joshua Thomas, Springer, 2023, 1, p. 572-584. Chapter in book (Refereed)
    Abstract [en]

    Early prediction of whether a product will go on backorder or not is necessary for optimal inventory management, which can reduce losses in sales, establish a good relationship between supplier and customer, and maximize revenues. In this study, we have investigated the performance and effectiveness of tree based machine learning algorithms to predict the backorder of a product. The research methodology consists of data preprocessing, feature selection using a statistical hypothesis test, imbalanced learning using the random undersampling method, and performance evaluation and comparison of four tree based machine learning algorithms, including decision tree, random forest, adaptive boosting and gradient boosting, in terms of accuracy, precision, recall, f1-score, area under the receiver operating characteristic curve and area under the precision and recall curve. Three main findings of this study are: (1) the random forest model without feature selection and with the random undersampling method achieved the highest performance in terms of all performance metrics, (2) feature selection did not contribute to the performance enhancement of the tree based classifiers, and (3) the random undersampling method significantly improves the performance of tree based classifiers in product backorder prediction.
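    The winning combination reported above (random undersampling plus random forest) can be expressed as an imbalanced-learn pipeline, which has the useful property that undersampling is applied only to the training folds during cross-validation. Hyperparameters are illustrative, not the study's settings.

```python
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Undersample the majority (non-backorder) class, then fit a random forest.
model = Pipeline([
    ("undersample", RandomUnderSampler(random_state=42)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=42)),
])

# Example evaluation over the metrics listed in the abstract
# (average_precision approximates area under the precision-recall curve):
# scores = cross_validate(model, X, y, cv=5,
#                         scoring=["accuracy", "precision", "recall", "f1",
#                                  "roc_auc", "average_precision"])
```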

  • 27.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram 4000, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Evolutionary Belief Rule-Based Clinical Decision Support System to Predict COVID-19 Severity under Uncertainty2021In: Applied Sciences, E-ISSN 2076-3417, Vol. 11, no 13, article id 5810Article in journal (Refereed)
    Abstract [en]

    Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, for effective hospital resource utilization, and for minimizing the mortality rate during the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. Experts' knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed. Several experiments are performed using data from 485 COVID-19 patients to evaluate the effectiveness of the proposed system. Experimental results show that, after optimization, the conjunctive belief rule-based system achieved an accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while the disjunctive belief rule base achieved 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, our proposed method outperforms four traditional machine learning algorithms: LR, SVM, DT, and ANN. These results validate the effectiveness of the proposed method, which will help hospital authorities identify severe and non-severe COVID-19 patients and adopt optimal treatment plans during a pandemic.
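
    The constrained tuning step can be illustrated compactly. The sketch below, assuming SciPy, optimizes the parameters of a toy two-rule belief structure with differential evolution; the rule base, markers, and loss are simplified stand-ins for the paper's modified algorithm.

    ```python
    # Tune rule weights and belief degrees of a toy belief-rule model with
    # differential evolution; sum-to-one belief constraints are enforced by
    # normalization inside the objective.
    import numpy as np
    from scipy.optimize import differential_evolution

    def brb_predict(params, X):
        w = params[:2]                                   # rule weights
        beliefs = params[2:].reshape(2, 2)               # belief degrees
        beliefs = beliefs / (beliefs.sum(axis=1, keepdims=True) + 1e-9)
        m = X.mean(axis=1)                               # toy rule activation
        activation = np.stack([m, 1 - m], axis=1) * w
        combined = activation @ beliefs
        return (combined[:, 1] > combined[:, 0]).astype(int)

    def loss(params, X, y):                              # misclassification rate
        return np.mean(brb_predict(params, X) != y)

    X = np.random.rand(485, 3)                           # synthetic blood markers
    y = (X.mean(axis=1) > 0.5).astype(int)               # synthetic severity label
    result = differential_evolution(loss, [(0, 1)] * 6, args=(X, y), seed=1)
    print("optimized training error:", result.fun)
    ```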

  • 28.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Naim Uddin Rahi, Mohammad
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Uddin, Raihan
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Sen, Anik
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Shahadat Hossain, Mohammad
    University of Chittagong, Chattogram, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Machine Learning-Based Tomato Leaf Disease Diagnosis Using Radiomics Features2023In: Proceedings of the Fourth International Conference on Trends in Computational and Cognitive Engineering - TCCE 2022 / [ed] M. Shamim Kaiser; Sajjad Waheed; Anirban Bandyopadhyay; Mufti Mahmud; Kanad Ray, Springer Science and Business Media Deutschland GmbH , 2023, Vol. 1, p. 25-35Conference paper (Refereed)
    Abstract [en]

    Tomato leaves can be infected by various viral and fungal diseases that drastically reduce tomato production and incur great economic loss. Therefore, tomato leaf disease detection and identification are crucial for meeting the global demand for tomatoes. This paper proposes a machine learning-based technique to identify diseases on tomato leaves and classify them into three disease classes (Septoria, Yellow Leaf Curl, and Late Blight) and one healthy class. The proposed method extracts radiomics-based features from tomato leaf images and identifies the disease with a gradient boosting classifier. The dataset used in this study consists of 4000 tomato leaf images collected from the Plant Village dataset. The experimental results demonstrate the effectiveness and applicability of our proposed method for tomato leaf disease detection and classification.
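
    As an illustration of the feature-plus-classifier design, the sketch below computes simple first-order, radiomics-style intensity statistics and feeds them to a gradient boosting classifier, assuming scikit-learn; the images and labels are random placeholders, and a full radiomics extraction would also include shape and texture features.

    ```python
    # First-order intensity features from grayscale leaf images, classified
    # with gradient boosting; placeholder data stands in for the real images.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def first_order_features(img):
        flat = img.ravel().astype(float)
        hist, _ = np.histogram(flat, bins=32, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        skew = ((flat - flat.mean()) ** 3).mean() / (flat.std() ** 3 + 1e-9)
        entropy = -(p * np.log2(p)).sum()
        return [flat.mean(), flat.std(), np.median(flat), skew, entropy]

    images = [np.random.randint(0, 256, (64, 64)) for _ in range(40)]  # placeholders
    labels = np.random.randint(0, 4, 40)   # 3 disease classes + 1 healthy class
    X = np.array([first_order_features(im) for im in images])
    clf = GradientBoostingClassifier().fit(X, labels)
    print(clf.predict(X[:5]))
    ```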

  • 29.
    Ahmed, Mumtahina
    et al.
    Department of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Explainable Text Classification Model for COVID-19 Fake News Detection2022In: Journal of Internet Services and Information Security (JISIS), ISSN 2182-2069, E-ISSN 2182-2077, Vol. 12, no 2, p. 51-69Article in journal (Refereed)
    Abstract [en]

    Artificial intelligence has achieved notable advances across many applications, and the field has recently focused on developing novel methods to explain machine learning models. Deep neural networks deliver the best accuracy in domains such as text categorization, image classification, and speech recognition. Since neural network models are black boxes, they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection is a challenging research problem, as misinformation endangers the lives of many online users. Therefore, transparency and explainability in COVID-19 fake news classification are necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model in which BiLSTM assures classification accuracy and LIME ensures transparency and explainability. In this integrated model, since LIME behaves similarly to the original model and explains its predictions, the proposed model becomes comprehensible. The explainability of this model is measured using Kendall's tau correlation coefficient. We also employ several machine learning models and compare their performance. Finally, because the model takes an integrated strategy, we analyze and compare its computational overhead with that of the other methods.
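
    The key mechanism, LIME wrapping a black-box text classifier, only needs a function that maps raw texts to class probabilities. In the sketch below, assuming the lime and scikit-learn packages, a TF-IDF plus logistic regression pipeline stands in for the paper's BiLSTM; the toy texts and labels are invented for illustration.

    ```python
    # Explain a text classifier's fake-news prediction with LIME; any model
    # exposing predict_proba, including a BiLSTM wrapper, can be plugged in.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["vaccines contain microchips", "masks reduce transmission",
             "drinking bleach cures covid", "vaccines passed clinical trials"]
    labels = [1, 0, 1, 0]                     # 1 = fake, 0 = real (toy data)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=["real", "fake"])
    exp = explainer.explain_instance("bleach cures covid fast",
                                     model.predict_proba, num_features=3)
    print(exp.as_list())                      # word-level contributions
    ```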

  • 30.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Sazzad
    Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Deep Learning Approach with Data Augmentation to Recognize Facial Expressions in Real Time2022In: Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering: TCCE 2021 / [ed] M. Shamim Kaiser; Kanad Ray; Anirban Bandyopadhyay; Kavikumar Jacob; Kek Sie Long, Springer Nature, 2022, p. 487-500Conference paper (Refereed)
    Abstract [en]

    The widespread use of facial expression recognition in various sectors of computer science raises researchers' interest in this topic. Computer vision coupled with deep learning offers a way to solve several real-world problems. In robotics, for instance, analyzing information from visual content is a requirement for carrying out and strengthening communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in computer vision. In our previous work, we delivered a facial expression recognition system that can classify an image into seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This paper extends that research by proposing a real-time facial expression recognition system that can recognize a total of ten facial expressions from video streaming data: the previous seven plus three additional expressions (mockery, think, and wink). After training, the proposed model achieved high validation accuracy on a combined facial expression dataset. Moreover, the real-time validation of the proposed model is also promising.

  • 31.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Jamil, Mohammad Newaj
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Integrated Deep Learning and Belief Rule Base Intelligent System to Predict Survival of COVID-19 Patient under Uncertainty2022In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 14, no 2, p. 660-676Article in journal (Refereed)
    Abstract [en]

    The novel coronavirus-induced disease COVID-19 is currently the biggest threat to human health, and owing to the transmission ability of the virus via its conveyors, it is spreading rapidly to almost every corner of the globe. The unification of medical and IT experts is required to bring this outbreak under control. In this research, an integration of both data-driven and knowledge-driven approaches in a single framework is proposed to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction result, along with eight other significant risk factors associated with COVID-19 patients, is analyzed with a knowledge-driven belief rule-based expert system, which produces a survival probability for that particular patient. The reliability of the proposed integrated system has been tested using real patient data and compared with expert opinion, where the performance of the system was found promising.
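
    The data-driven half of the framework is standard transfer learning. A minimal sketch with TensorFlow/Keras follows, fine-tuning a pre-trained Xception backbone to separate critical from non-critical X-rays; the directory path, image size, and training settings are placeholders, and the belief-rule fusion stage is not shown.

    ```python
    # Transfer learning: frozen Xception features plus a small binary head
    # for critical vs. non-critical chest X-rays.
    import tensorflow as tf

    base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
    base.trainable = False                     # start from frozen features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    train = tf.keras.utils.image_dataset_from_directory(
        "xrays/train", image_size=(299, 299), batch_size=16)  # hypothetical path
    model.fit(train, epochs=5)
    ```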

  • 32.
    Akbar, Mariam
    et al.
    COMSATS Institute of Information Technology, Islamabad.
    Javaid, Nadeem
    COMSATS Institute of Information Technology, Islamabad.
    Khan, Ayesha Hussain
    COMSATS Institute of Information Technology, Islamabad.
    Imran, Muhammad Al
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Shoaib, Muhammad
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility2016In: Sensors, E-ISSN 1424-8220, Vol. 16, no 3, article id 404Article in journal (Refereed)
    Abstract [en]

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to preexisting techniques in terms of network lifetime, throughput, path loss, transmission loss, and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss, and scalability.

  • 33.
    Akkaya, Kemal
    et al.
    Southern Illinois University.
    Aust, Stefan
    NEC Communication Systems, Ltd.
    Hollick, Matthias
    TU Darmstadt.
    Itaya, Satoko
    Smart Wireless Laboratory, NICT.
    Kantarci, Burak
    University of Ottawa.
    Pfeiffer, Tom
    TU Berlin.
    Senel, Fatih
    International Antalya University.
    Strayer, Tim
    BBN Technologies.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Distance- Spanning Technology.
    Message from the demonstrations chair2014Conference paper (Other academic)
    Abstract [en]

    It is my pleasure to welcome you to the sixth Demonstration Session at the IEEE Conference on Local Computer Networks (LCN) 2014. We sought demonstrations on all topics covered by the main conference as well as by the workshops held in conjunction with it, and technical demonstrations were strongly encouraged to show innovative and original research. The main purpose of the demo session is to provide demonstrations that validate important research issues and/or show innovative prototypes.

  • 34.
    Akram, Waseem
    et al.
    COMSATS University Islamabad, Computer Science Department, Islamabad, Pakistan.
    Niazi, Muaz A.
    COMSATS University Islamabad, Computer Science Department, Islamabad, Pakistan.
    Iantovics, Laszlo Barna
    Petru Maior University of Tirgu Mures, Informatics Department, Tirgu Mures, Romania.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Towards agent-based model specification of smart grid: a cognitive agent-based computing approach2019In: Interdisciplinary Description of Complex Systems, ISSN 1334-4684, E-ISSN 1334-4676, Vol. 17, no 3B, p. 546-585Article in journal (Refereed)
    Abstract [en]

    A smart grid can be considered a complex network where each node represents a generation unit or a consumer, and links represent transmission lines. One way to study complex systems is the agent-based modeling paradigm, which represents a complex system as autonomous agents interacting with each other. A number of studies in the smart grid domain have made use of agent-based modeling; however, to the best of our knowledge, none have focused on the specification aspect of the model. Model specification is important not only for understanding but also for replication of the model. To fill this gap, this study focuses on specification methods for smart grid modeling. We adopt two specification methods: Overview, Design concepts, and Details (ODD) and Descriptive agent-based modeling. Using these methods, we provide tutorials and guidelines for developing a smart grid model, from conceptual modeling to a validated agent-based model through simulation. The specification study is exemplified through a case study from the smart grid domain, in which we consider a large network of consumers and power generation units connected in different configurations. In such a network, communication takes place between consumers and generating units for energy transmission and data routing. We demonstrate how to effectively model a complex system such as a smart grid using specification methods, and we analyze the two specification approaches both qualitatively and quantitatively. Extensive experiments demonstrate that descriptive agent-based modeling is a more useful approach than the ODD method for modeling as well as for replicating smart grid models.
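
    To ground the discussion, a smart-grid agent-based model in its most skeletal form looks like the toy below: consumer and generator agents, a scheduler, and an observable per step. This is an illustrative skeleton in plain Python, not the ODD or descriptive specification itself.

    ```python
    # Toy agent-based smart-grid model: consumers draw random demand,
    # generators offer fixed capacity, and each step reports served energy.
    import random

    class Consumer:
        def __init__(self):
            self.demand = random.uniform(1.0, 5.0)   # kWh requested per step

    class Generator:
        def __init__(self, capacity):
            self.capacity = capacity                 # kWh available per step

    class SmartGrid:
        def __init__(self, n_consumers=50, n_generators=5):
            self.consumers = [Consumer() for _ in range(n_consumers)]
            self.generators = [Generator(30.0) for _ in range(n_generators)]

        def step(self):
            supply = sum(g.capacity for g in self.generators)
            demand = sum(c.demand for c in self.consumers)
            served = min(supply, demand)
            for c in self.consumers:                 # redraw demand next step
                c.demand = random.uniform(1.0, 5.0)
            return served, demand

    grid = SmartGrid()
    for t in range(3):
        served, demand = grid.step()
        print(f"step {t}: served {served:.1f} of {demand:.1f} kWh")
    ```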

  • 35.
    Akter, Mehenika
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hand-Drawn Emoji Recognition using Convolutional Neural Network2021In: Proceedings of 2020 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), IEEE, 2021, p. 147-152Conference paper (Refereed)
    Abstract [en]

    Emojis are small icons or images used to express sentiments or feelings in text messages. They are extensively used on social media platforms such as Facebook, Twitter, and Instagram. In this paper, we classify hand-drawn emojis, i.e., emojis drawn on a digital platform or simply on paper with a pen, into 8 classes. This will enable users to classify hand-drawn emojis so that they can be used on any social media platform without confusion. We created a local dataset of 500 images per class, for a total of 4000 images of hand-drawn emojis, and we present a system that recognises and classifies the emojis into 8 classes with a convolutional neural network model. The model recognises and classifies the hand-drawn emojis with an accuracy of 97%. Some pre-trained CNN models (VGG16, VGG19, ResNet50, MobileNetV2, InceptionV3, and Xception) are also trained on the dataset to compare accuracy and check whether they outperform the proposed model, and machine learning models such as SVM, Random Forest, AdaBoost, Decision Tree, and XGBoost are also implemented on the dataset.
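
    A CNN of the kind described can be stated compactly. The sketch below, assuming TensorFlow/Keras, builds a small grayscale 8-class classifier; the directory layout, image size, and layer sizes are placeholders rather than the paper's exact architecture.

    ```python
    # Small CNN for 8 hand-drawn emoji classes, trained from a folder with
    # one subdirectory per class; all sizes are illustrative.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(8, activation="softmax"),   # 8 emoji classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    train = tf.keras.utils.image_dataset_from_directory(
        "emojis/", image_size=(64, 64), color_mode="grayscale", batch_size=32)
    model.fit(train, epochs=10)
    ```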

  • 36.
    Akter, Mehenika
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Uddin Ahmed, Tawsin
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mosquito Classification Using Convolutional Neural Network with Data Augmentation2021In: Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020) / [ed] Pandian Vasant, Ivan Zelinka, Gerhard-Wilhelm Weber, Springer Nature, 2021, p. 865-879Conference paper (Refereed)
    Abstract [en]

    Mosquitoes are responsible for the largest number of deaths every year throughout the world, and Bangladesh suffers heavily from this problem. Dengue, malaria, chikungunya, zika, yellow fever, and other diseases are caused by mosquito bites. The three main types of mosquitoes found in Bangladesh are aedes, anopheles, and culex, and their identification is crucial for taking the necessary steps to eliminate them in an area. Hence, a convolutional neural network (CNN) model is developed to classify mosquitoes from their images. We prepared a local dataset consisting of 442 images collected from various sources. An accuracy of 70% was achieved by running the proposed CNN model on the collected dataset; however, after augmenting the dataset to 3,600 images, the accuracy increased to 93%. We also compare the CNN method with VGG-16, Random Forest, XGBoost, and SVM; the proposed CNN method outperforms these methods in terms of mosquito classification accuracy. Thus, this research is an example of humanitarian technology, where data science supports mosquito classification, enabling the treatment of various mosquito-borne diseases.
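
    The augmentation step that lifted accuracy from 70% to 93% can be sketched as follows, assuming Keras; the transform parameters and the directory path (one subdirectory per class: aedes/, anopheles/, culex/) are illustrative, not the authors' exact settings.

    ```python
    # Generate augmented mosquito images on the fly from a class-per-folder
    # dataset; rotations, shifts, zooms, and flips expand the training set.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    augmenter = ImageDataGenerator(
        rotation_range=30,
        width_shift_range=0.1,
        height_shift_range=0.1,
        zoom_range=0.2,
        horizontal_flip=True,
        rescale=1.0 / 255,
    )
    train = augmenter.flow_from_directory("mosquitoes/",      # placeholder path
                                          target_size=(128, 128),
                                          batch_size=32,
                                          class_mode="sparse")
    # train now yields augmented (images, labels) batches for model.fit(...).
    ```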

  • 37.
    Akter, Nasrin
    et al.
    BGC Trust University Bangladesh, Bidyanagar, Chandanaish, Bangladesh.
    Junjun, Jubair Ahmed
    BGC Trust University Bangladesh, Bidyanagar, Chandanaish, Bangladesh.
    Nahar, Nazmun
    BGC Trust University Bangladesh, Bidyanagar, Chandanaish, Bangladesh.
    Shahadat Hossain, Mohammad
    University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Md. Sazzad
    University of Liberal Arts Bangladesh, Dhaka, 1209, Bangladesh.
    Brain Tumor Classification using Transfer Learning from MRI Images2022In: Proceedings of International Conference on Fourth Industrial Revolution and Beyond 2021 / [ed] Sazzad Hossain, Md. Shahadat Hossain, M. Shamim Kaiser, Satya Prasad Majumder, Kanad Ray, Springer, 2022, p. 575-587Chapter in book (Refereed)
    Abstract [en]

    One of the most vital parts of medical image analysis is the classification of brain tumors. Because tumors are thought to be precursors of cancer, accurate brain tumor classification can save lives. As a result, CNN (convolutional neural network)-based techniques are frequently employed for classifying brain tumors. However, CNNs require vast amounts of training data to perform well; this is where transfer learning enters the picture. We present a 4-class transfer learning approach for categorizing glioma, meningioma, and pituitary tumors, the three most prevalent types of brain tumors, as well as non-tumor images. Our method employs a pre-trained InceptionResnetV1 network to extract features from brain MRI images and classifies them using a softmax classifier. The proposed approach outperforms all prior techniques with a mean classification accuracy of 93.95%. For evaluation, we use a Kaggle dataset; precision, recall, and F-score are the key performance metrics employed in this study.

  • 38.
    Akter, Shamima
    et al.
    International Islamic University, Chittagong, Bangladesh.
    Nahar, Nazmun
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A New Crossover Technique to Improve Genetic Algorithm and Its Application to TSP2019In: Proceedings of 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), IEEE, 2019, article id 18566123Conference paper (Refereed)
    Abstract [en]

    Optimization problems like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard problem as well as an optimal minimization problem. Selection, crossover, and mutation are the three main operators of GA, and the algorithm is usually employed to find the minimal total distance required to visit all the nodes in a TSP. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed crossover operator consists of selecting two crossover points and creating new offspring by performing a cost comparison. The computational results, as well as a comparison with well-established crossover operators, are also presented. The new crossover operator is found to produce better results than the other crossover operators.
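
    To illustrate the idea of a two-point, cost-comparing crossover, here is a toy implementation in plain Python; the exact operator in the paper may differ, so treat this as a sketch of the general mechanism.

    ```python
    # Two-point crossover for TSP tours: splice a segment from one parent,
    # fill the rest in the other parent's order, and keep the cheaper child.
    import random

    def tour_cost(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def two_point_crossover(p1, p2, dist):
        a, b = sorted(random.sample(range(len(p1)), 2))

        def build(donor, other):
            middle = donor[a:b]                        # slice from one parent
            rest = [c for c in other if c not in middle]
            return rest[:a] + middle + rest[a:]        # order-preserving fill

        c1, c2 = build(p1, p2), build(p2, p1)
        return min((c1, c2), key=lambda t: tour_cost(t, dist))

    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    child = two_point_crossover([0, 1, 2, 3], [3, 1, 0, 2], dist)
    print(child, tour_cost(child, dist))
    ```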

  • 39.
    Al Arafat, Md. Mahedi
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, 4331, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, 4331, Bangladesh.
    Hossain, Delowar
    Cumming School of Medicine, University of Calgary, Calgary, AB, T2N 1N4, Canada.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Neural Network-Based Obstacle and Pothole Avoiding Robot2023In: Proceedings of the Fourth International Conference on Trends in Computational and Cognitive Engineering - TCCE 2022 / [ed] M. Shamim Kaiser; Sajjad Waheed; Anirban Bandyopadhyay; Mufti Mahmud; Kanad Ray, Springer Science and Business Media Deutschland GmbH , 2023, Vol. 1, p. 173-184Conference paper (Refereed)
    Abstract [en]

    The main challenge for any mobile robot is to detect and avoid obstacles and potholes. This paper presents the development and implementation of a novel mobile robot. An Arduino Uno is used as the processing unit of the robot, and a Sharp distance measurement sensor and ultrasonic sensors take inputs from the environment. The robot trains a neural network based on the feedforward backpropagation algorithm to detect and avoid obstacles and potholes; for this purpose, we use a truth table. Our experimental results show that the developed system can reliably detect and avoid obstacles and potholes and navigate its environment.
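
    Training a feedforward network on a sensor truth table is small enough to show in full. The sketch below, assuming scikit-learn, uses an invented three-input table; the robot's actual sensor encoding and action set may differ.

    ```python
    # Learn steering decisions from a truth table of binarized sensor inputs.
    from sklearn.neural_network import MLPClassifier

    # Inputs: [obstacle_left, obstacle_right, pothole_ahead] (hypothetical).
    X = [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
         [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
    # Outputs: 0 = forward, 1 = turn left, 2 = turn right, 3 = stop/reverse.
    y = [0, 3, 1, 3, 2, 3, 3, 3]

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, y)
    print(net.predict([[0, 1, 0]]))   # obstacle on the right -> turn left
    ```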

  • 40.
    Al Banna, Md. Hasan
    et al.
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Ghosh, Tapotosh
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Al Nahian, Md. Jaber
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Taher, Kazi Abu
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Kaiser, M. Shamim
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka 1342, Bangladesh.
    Mahmud, Mufti
    Department of Computer Science, Nottingham Trent University, NG11 8NS – Nottingham, UK.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Attention-based Bi-directional Long-Short Term Memory Network for Earthquake Prediction2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 56589-56603Article in journal (Refereed)
    Abstract [en]

    An earthquake is a tremor felt on the surface of the earth, created by the movement of major pieces of its outer shell. Many attempts have been made to forecast earthquakes, with some success, but the resulting models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good option for building the model because of its memory-keeping ability. Using the Keras tuner, the best model was selected from candidate models composed of various combinations of LSTM architectures and dense layers. The selected model used seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes in the following month. An attention mechanism was added to the LSTM architecture to improve the model's earthquake occurrence prediction accuracy, which reached 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, yielding a root mean square error of 1.25.
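
    An attention-augmented BiLSTM for sequence classification can be outlined briefly. The sketch below, assuming TensorFlow/Keras, scores each time step, softmax-normalizes the scores into attention weights, and classifies from the weighted summary; the sequence length and indicator count are placeholders, not the paper's tuned architecture.

    ```python
    # BiLSTM with a simple attention layer over monthly seismic indicators,
    # predicting earthquake occurrence for the next month.
    import tensorflow as tf

    seq = tf.keras.Input(shape=(12, 8))            # 12 months x 8 indicators
    h = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True))(seq)
    scores = tf.keras.layers.Dense(1, activation="tanh")(h)   # score per step
    weights = tf.keras.layers.Softmax(axis=1)(scores)         # attention weights
    context = tf.keras.layers.Lambda(
        lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([weights, h])
    out = tf.keras.layers.Dense(1, activation="sigmoid")(context)

    model = tf.keras.Model(seq, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    ```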

  • 41.
    Alakärppä, Ismo
    et al.
    University of Lapland, Rovaniemi.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Distance- Spanning Technology.
    Hosio, Simo
    University of Oulu.
    Johansson, Dan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ojala, Timo
    University of Oulu.
    NIMO overall architecture and service enablers2014Report (Other academic)
    Abstract [en]

    This report describes the architecture and service enablers developed in the NIMO project. Furthermore, it identifies future challenges and knowledge gaps in upcoming ICT service development for public sector units, empowering citizens with enhanced tools for interaction and participation. We foresee crowdsourced applications in which citizens contribute dynamic, timely, and geographically distributed information.

  • 42.
    Alam, Md. Eftekhar
    et al.
    International Islamic University Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An IoT-Belief Rule Base Smart System to Assess Autism2018In: Proceedings of the 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018), IEEE, 2018, p. 671-675Conference paper (Refereed)
    Abstract [en]

    An Internet of Things (IoT)-Belief Rule Base (BRB) hybrid system is introduced to assess autism spectrum disorder (ASD). This smart system can automatically collect sign and symptom data from various autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight, and degree of belief. The IoT-BRB system classifies children as having autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with those of fuzzy- and expert-based systems, and the proposed system outperformed the state-of-the-art fuzzy system and expert system.

  • 43.
    Alam, Quratulain
    et al.
    Department of Computer Sciences, Institute of Management Sciences, Peshawar.
    Tabbasum, Saher
    Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad.
    Malik, Saif U.R.
    Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad.
    Alam, Masoom
    Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad.
    Ali, Tamleek
    Department of Computer Sciences, Institute of Management Sciences, Peshawar.
    Akhunzada, Adnan
    Center for Mobile Cloud Computing Research (C4MCCR), University of Malaya, 50603 Kuala Lumpur.
    Khan, Samee U.
    Department of electrical and computer engineering, North Dakota State University, Fargo, ND.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Buyya, Rajkumar
    Cloud Computing and Distributed Systems, (CLOUDS) Laboratory, Department of Computing and Information Systems, The University of Melbourne.
    Formal Verification of the xDAuth Protocol2016In: IEEE Transactions on Information Forensics and Security, ISSN 1556-6013, E-ISSN 1556-6021, Vol. 11, no 9, p. 1956-1969Article in journal (Refereed)
    Abstract [en]

    Service-Oriented Architecture (SOA) offers a flexible paradigm for information flow among collaborating organizations. As information moves out of an organization's boundary, various security concerns may arise, such as confidentiality, integrity, and authenticity, that need to be addressed. Moreover, verifying the correctness of the communication protocol is also an important factor. This paper focuses on the formal verification of the xDAuth protocol, one of the prominent protocols for identity management in cross-domain scenarios. We have modeled the information flow of the xDAuth protocol using High-Level Petri Nets (HLPN) to understand protocol information flow in a distributed environment. We analyze the rules of information flow using the Z language, while the Z3 SMT solver is used for verification of the model. Our formal analysis and verification results reveal that the protocol fulfills its intended purpose and provides security for the defined protocol-specific properties (e.g., secure secret key authentication and the Chinese wall security policy) and secrecy-specific properties (e.g., confidentiality, integrity, and authenticity).
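
    The flavor of SMT-based checking used here is easy to demonstrate. The toy below, assuming the z3-solver Python bindings, encodes a simplified access rule and asks Z3 whether a violating state exists; the actual xDAuth model in the paper is a full HLPN with Z-specified flow rules, which this does not reproduce.

    ```python
    # Ask Z3 whether an unauthenticated access can ever be granted under a
    # toy policy; an unsat answer means the property holds in every model.
    from z3 import Bools, Solver, Implies, And, Not, sat

    authenticated, has_token, access_granted = Bools(
        "authenticated has_token access_granted")

    # Policy: access is granted only to authenticated principals with a token.
    policy = Implies(access_granted, And(authenticated, has_token))

    s = Solver()
    s.add(policy)
    s.add(access_granted, Not(authenticated))   # candidate violation
    print("violation possible" if s.check() == sat else "property holds")
    ```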

  • 44.
    Albshri, Adel
    et al.
    School of Computing, Newcastle University, Newcastle Upon Tyne, UK; College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia.
    Alzubaidi, Ali
    College of Computing, Umm Al-Qura University, Al-Lith, Saudi Arabia.
    Alharby, Maher
    Department of Computer Science, College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia.
    Awaji, Bakri
    College of Computer Science and Information Systems, Najran University, Najran, Saudi Arabia.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Solaiman, Ellis
    School of Computing, Newcastle University, Newcastle Upon Tyne, UK.
    A conceptual architecture for simulating blockchain-based IoT ecosystems2023In: Journal of Cloud Computing: Advances, Systems and Applications, E-ISSN 2192-113X, Vol. 12, article id 103Article in journal (Refereed)
    Abstract [en]

    Recently, the convergence of blockchain and IoT has become appealing in many domains, including, but not limited to, healthcare, supply chain, agriculture, and telecommunication. Both blockchain and IoT are sophisticated technologies whose feasibility and performance in large-scale environments are difficult to evaluate. Consequently, a trustworthy blockchain-based IoT simulator presents an alternative to costly and complicated actual implementation. Our preliminary analysis finds that no satisfactory simulator has thus far existed for creating and assessing blockchain-based IoT applications, which is the principal impetus for our effort. Therefore, this study gathers experts' thoughts on the development of a simulation environment for blockchain-based IoT applications through two investigations. First, a questionnaire is created to determine whether the development of such a simulator would be of substantial use. Second, interviews are conducted to obtain participants' opinions on the most pressing challenges they encounter with blockchain-based IoT applications. The outcome is a conceptual architecture for simulating blockchain-based IoT applications, which we evaluate using two research methods: a questionnaire and a focus group with experts. Overall, we find that the proposed architecture is well received due to its comprehensive range of key features and capabilities for blockchain-based IoT purposes.

  • 45.
    Al-Dulaimi, Anwer
    et al.
    ECE, University of Toronto.
    Anpalagan, Alagan
    WINCORE Lab, Ryerson University, Toronto.
    Bennis, Mehdi
    University of Oulu, Centre for Wireless Communications, University of Oulu.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    5G Green Communications: C-RAN Provisioning of CoMP and Femtocells for Power Management2016In: 2015 IEEE International Conference on Ubiquitous Wireless Broadband (ICUWB): Montreal, Canada, 4-7 October 2015, Piscataway, NJ: IEEE Communications Society, 2016, article id 7324392Conference paper (Refereed)
    Abstract [en]

    The fifth-generation (5G) wireless network is expected to feature dense deployments of cells in order to provide efficient Internet and cellular connections. The cloud radio access network (C-RAN) emerges as one of the 5G solutions to steer the network architecture and control resources beyond the legacy radio access technologies. The C-RAN decouples traffic management operations from the radio access technologies, leading to a new combination of virtualized network core and fronthaul architecture. In this paper, we first investigate the power consumption impact of aggressive deployments of low-power neighborhood femtocell networks (NFNs) under the umbrella of a coordinated multipoint (CoMP) macrocell. We show that the power savings obtained from employing a low-power NFN start to decline as the density of deployed femtocells exceeds a certain threshold. The analysis considers two CoMP sites, at the cell edge and in the intra-cell area. Second, to restore power efficiency and network stability, a C-RAN model is proposed that restructures the NFN into clusters to ease the energy burden on the evolving 5G systems. Tailored to the traffic load, selected clusters are switched off to save power when they operate under low traffic loads.

  • 46.
    Alhamazani, Khalid
    et al.
    University of New South Wales, Sydney.
    Ranjan, Rajiv
    CSIRO, Canberra.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rabhi, Fethi
    University of New South Wales, Sydney.
    Jayaraman, Prem Prakash
    CSIRO, Canberra.
    Khan, Samee Ullah
    North Dakota State University, Fargo.
    Guabtni, Adnene
    NICTA, Sydney.
    Bhatnagar, Vasudha
    University of Delhi.
    An overview of the commercial cloud monitoring tools: research dimensions, design issues, and state-of-the-art2015In: Computing, ISSN 0010-485X, E-ISSN 1436-5057, Vol. 97, no 4, p. 357-377Article in journal (Refereed)
    Abstract [en]

    Cloud monitoring involves dynamically tracking the Quality of Service (QoS) parameters related to virtualized resources (e.g., VMs, storage, network, and appliances), the physical resources they share, the applications running on them, and the data hosted on them. Configuring applications and resources in a cloud computing environment is quite challenging, considering the large number of heterogeneous cloud resources. Further, at any given point in time, the cloud resource configuration (number of VMs, types of VMs, number of appliance instances, etc.) may need to change to meet application QoS requirements under uncertainties such as resource failure, resource overload, and workload spikes. Hence, cloud monitoring tools can assist cloud providers or application developers in: (i) keeping their resources and applications operating at peak efficiency, (ii) detecting variations in resource and application performance, (iii) accounting for service level agreement violations of certain QoS parameters, and (iv) tracking the leave and join operations of cloud resources due to failures and other dynamic configuration changes. In this paper, we identify and discuss the major research dimensions and design issues related to engineering cloud monitoring tools, and we discuss how these dimensions and issues are handled by current academic research as well as by commercial monitoring tools.

  • 47.
    Alhamazani, Khalid
    et al.
    School of Computer Science and Engineering, University of New South Wales.
    Ranjan, Rajiv
    CSIRO Digital Productivity, Acton.
    Jayaraman, Prem
    CSIRO Digital Productivity, Acton.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Liu, Chang
    Sydney University of Technology.
    Rabhi, Fethi
    School of Computer Science and Engineering, University of New South Wales.
    Georgakopoulos, Dimitrios
    Royal Melbourne Institute of Technology, Melbourne.
    Wang, Lizhe
    Chinese Academy of Sciences, Beijing.
    Cross-Layer Multi-Cloud Real-Time Application QoS Monitoring and Benchmarking As-a-Service Framework2019In: I E E E Transactions on Cloud Computing, ISSN 2168-7161, Vol. 7, no 1, p. 48-61Article in journal (Refereed)
    Abstract [en]

    Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, and data processing frameworks) platforms with features such as elasticity, pay-per-use, low upfront investment, and low time to market. This has led to the proliferation of business-critical applications that leverage various cloud platforms. Such applications hosted on single or multiple cloud provider platforms have diverse characteristics requiring extensive monitoring and benchmarking mechanisms to ensure run-time Quality of Service (QoS) (e.g., latency and throughput). This paper proposes, develops, and validates CLAMBS, Cross-Layer Multi-Cloud Application Monitoring and Benchmarking as-a-Service, for efficient QoS monitoring and benchmarking of cloud applications hosted in multi-cloud environments. The major highlight of CLAMBS is its capability of monitoring and benchmarking individual application components, such as databases and web servers, distributed across cloud layers (*-aaS) and spread among multiple cloud providers. We validate CLAMBS using a prototype implementation and extensive experimentation and show that CLAMBS efficiently monitors and benchmarks application components on multi-cloud platforms including Amazon EC2 and Microsoft Azure.
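
    At its core, a cross-layer probe of this kind samples OS-level metrics alongside application-level QoS for each component. The sketch below, assuming the psutil package and a hypothetical HTTP health endpoint, collects one such sample; CLAMBS itself adds multi-cloud agents and a collection fabric around this idea.

    ```python
    # One monitoring sample: host CPU and memory plus the response latency
    # of a single application component (the URL is a placeholder).
    import time
    import urllib.request
    import psutil

    def probe(url="http://localhost:8080/health"):
        start = time.time()
        try:
            urllib.request.urlopen(url, timeout=2)
            latency_ms = (time.time() - start) * 1000
        except OSError:
            latency_ms = float("inf")            # component unreachable
        return {
            "cpu_percent": psutil.cpu_percent(interval=0.1),
            "mem_percent": psutil.virtual_memory().percent,
            "latency_ms": latency_ms,
        }

    print(probe())   # e.g., shipped periodically to a central collector
    ```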

  • 48.
    Alhamazani, Khalid
    et al.
    School of Computer Science and Engineering, University of New South Wales.
    Ranjan, Rajiv
    CSIRO Computational Informatics, Canberra.
    Jayaraman, Prem Prakash
    CSIRO Computational Informatics, Canberra.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Wang, Meisong
    CSIRO Computational Informatics, Canberra.
    Huang, Zhiquiang (George)
    CSIRO Computational Informatics, Canberra.
    Wang, Lizhe
    Chinese Academy of Sciences, Beijing.
    Rabhi, Fethi
    School of Computer Science and Engineering, University of New South Wales.
    Real-time QoS Monitoring for Cloud-based Big Data Analytics Application in Mobile Environments2014In: 2014 15th IEEE International Conference on Mobile Data Management (MDM 2014): Brisbane, Australia 15 -18 July 2014, Piscataway, NY: IEEE Communications Society, 2014, Vol. 1, p. 337-340, article id 6916940Conference paper (Refereed)
    Abstract [en]

    The service delivery model of cloud computing acts as a key enabler for big data analytics applications, enhancing productivity and efficiency and reducing costs. The ever-increasing flood of data generated by smartphones and sensors such as RFID readers and traffic cams requires innovative provisioning and QoS monitoring approaches to continuously support big data analytics. To provide essential information for effective and efficient big data analytics application QoS monitoring, in this paper we propose and develop CLAMS, a Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework. The proposed framework: (a) performs multi-cloud monitoring, and (b) addresses the issue of cross-layer monitoring of applications. We implement and demonstrate CLAMS functions on real-world multi-cloud platforms such as Amazon and Azure.

  • 49.
    Alhamazani, Khalid
    et al.
    University of New South Wales, Sydney.
    Ranjan, Rajiv
    CSIRO Computational Informatics, Canberra.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Jayaraman, Prem Prakash
    CSIRO, Canberra.
    Huang, Zhiqiang
    CSIRO Computational Informatics, Canberra.
    Wang, Lizhe
    Center for Earth Observation & Digital Earth, Chinese Academy of Sciences.
    Rabhi, Fethi
    University of New South Wales, Sydney.
    CLAMS: Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework2014In: Proceedings of the 11th IEEE International Conference on Services Computing (IEEE SCC 2014), Piscataway, NJ: IEEE Communications Society, 2014, p. 283-290Conference paper (Refereed)
    Abstract [en]

    Cloud computing provides on-demand access to affordable hardware (e.g., multi-core CPUs, GPUs, disks, and networking equipment) and software (e.g., databases, application servers, and data processing frameworks) platforms. Application services hosted on single or multiple cloud provider platforms have diverse characteristics that require extensive monitoring mechanisms to aid in controlling run-time quality of service (e.g., access latency and the number of requests served per second). To provide essential real-time information for effective and efficient cloud application quality of service (QoS) monitoring, in this paper we propose, develop, and validate CLAMS, a Cross-Layer Multi-Cloud Application Monitoring-as-a-Service Framework. The proposed framework is capable of: (a) performing QoS monitoring of application components (e.g., database, web server, and application server) that may be deployed across multiple cloud platforms (e.g., Amazon and Azure); and (b) giving visibility into the QoS of individual application components, which is not supported by current monitoring services and techniques. We conduct experiments on real-world multi-cloud platforms such as Amazon and Azure to empirically evaluate our framework, and the results validate that CLAMS efficiently monitors applications running across multiple clouds.

  • 50.
    Ali, Bako
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Faculty of Engineering, Al Azhar University.
    Cyber and Physical Security Vulnerability Assessment for IoT-Based Smart Homes2018In: Sensors, E-ISSN 1424-8220, Vol. 18, no 3, article id 817Article in journal (Refereed)
    Abstract [en]

    The Internet of Things (IoT) is an emerging paradigm focusing on the connection of devices, objects, or "things" to each other, to the Internet, and to users. IoT technology is anticipated to become an essential requirement in the development of smart homes, as it offers convenience and efficiency to home residents so that they can achieve a better quality of life. Application of the IoT model to smart homes, by connecting objects to the Internet, poses new security and privacy challenges in terms of the confidentiality, authenticity, and integrity of the data sensed, collected, and exchanged by the IoT objects. These challenges make smart homes extremely vulnerable to different types of security attacks, resulting in IoT-based smart homes being insecure. Therefore, it is necessary to identify the possible security risks to develop a complete picture of the security status of smart homes. This article applies the operationally critical threat, asset, and vulnerability evaluation (OCTAVE) methodology, known as OCTAVE Allegro, to assess the security risks of smart homes. The OCTAVE Allegro method focuses on information assets and considers different information containers such as databases, physical papers, and humans. The key goals of this study are to highlight the various security vulnerabilities of IoT-based smart homes, to present the risks to home inhabitants, and to propose approaches to mitigating the identified risks. The research findings can be used as a foundation for improving the security requirements of IoT-based smart homes.
