  • 1.
    Aaltonen, Harri
    et al.
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Sierla, Seppo
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Kyrki, Ville
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Pourakbari-Kasmaei, Mahdi
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Bidding a Battery on Electricity Markets and Minimizing Battery Aging Costs: A Reinforcement Learning Approach. 2022. In: Energies, E-ISSN 1996-1073, Vol. 15, no. 14, article id 4960. Article in journal (Refereed).
    Abstract [en]

    Battery storage is emerging as a key component of intelligent green electricity systems. The battery is monetized through market participation, which usually involves bidding. Bidding is a multi‐objective optimization problem, involving targets such as maximizing market compensation and minimizing penalties for failing to provide the service and costs for battery aging. In this article, battery participation is investigated on primary frequency reserve markets. Reinforcement learning is applied for the optimization. In previous research, only simplified formulations of battery aging have been used in the reinforcement learning formulation, so it is unclear how the optimizer would perform with a real battery. In this article, a physics‐based battery aging model is used to assess the aging. The contribution of this article is a methodology involving a realistic battery simulation to assess the performance of the trained RL agent with respect to battery aging in order to inform the selection of the weighting of the aging term in the RL reward formula. The RL agent performs day-ahead bidding on the Finnish Frequency Containment Reserves for Normal Operation market, with the objective of maximizing market compensation, minimizing market penalties and minimizing aging costs.
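
    To make the weighting concrete: below is a minimal sketch of a reward of this shape, with hypothetical names for the compensation, penalty, and aging terms (the paper's actual reward formula is not reproduced here).

        def reward(compensation, penalty, aging_cost, aging_weight):
            """One-step RL reward: market income minus penalties,
            minus a tunable weighting of the battery aging cost."""
            return compensation - penalty - aging_weight * aging_cost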

  • 2.
    Aaltonen, Harri
    et al.
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Sierla, Seppo
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Subramanya, Rakshith
    Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland; International Research Laboratory of Computer Technologies, ITMO University, 197101 St. Petersburg, Russia.
    A simulation environment for training a reinforcement learning agent trading a battery storage. 2021. In: Energies, E-ISSN 1996-1073, Vol. 14, no. 17, article id 5587. Article in journal (Refereed).
    Abstract [en]

    Battery storages are an essential element of the emerging smart grid. Compared to other distributed intelligent energy resources, batteries have the advantage of being able to rapidly react to events such as renewable generation fluctuations or grid disturbances. There is a lack of research on ways to profitably exploit this ability. Any solution needs to consider rapid electrical phenomena as well as the much slower dynamics of relevant electricity markets. Reinforcement learning is a branch of artificial intelligence that has shown promise in optimizing complex problems involving uncertainty. This article applies reinforcement learning to the problem of trading batteries. The problem involves two timescales, both of which are important for profitability. Firstly, trading the battery capacity must occur on the timescale of the chosen electricity markets. Secondly, the real-time operation of the battery must ensure that no financial penalties are incurred from failing to meet the technical specification. The trading-related decisions must be made under uncertainty, such as unknown future market prices and unpredictable power grid disturbances. In this article, a simulation model of a battery system is proposed as the environment to train a reinforcement learning agent to make such decisions. The system is demonstrated with an application of the battery to Finnish primary frequency reserve markets.
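
    A minimal sketch of what such a training environment could look like, written with a Gymnasium-style reset/step interface; the state, dynamics, and penalty logic here are illustrative assumptions, not the authors' simulation model.

        import numpy as np

        class BatteryMarketEnv:
            """Toy two-timescale environment: a slow bidding decision and a
            fast delivery check against the battery's state of charge."""

            def __init__(self, capacity_mwh=1.0):
                self.capacity_mwh = capacity_mwh
                self.soc = 0.5  # state of charge, fraction of capacity

            def reset(self):
                self.soc = 0.5
                return np.array([self.soc], dtype=np.float32)

            def step(self, bid_mw):
                # Fast timescale: check the battery can deliver the bid power
                # without breaching its energy limits.
                deliverable = min(bid_mw, self.soc * self.capacity_mwh)
                penalty = max(0.0, bid_mw - deliverable)  # shortfall penalty
                reward = bid_mw - penalty                 # compensation - penalty
                self.soc = float(np.clip(self.soc - 0.01 * bid_mw, 0.0, 1.0))
                return np.array([self.soc], dtype=np.float32), reward, False, {}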

  • 3.
    Abdelaziz, Ahmed
    et al.
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Ang, Tanfong
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Sookhak, Mehdi
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Khan, Suleman
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Liew, Cheesun
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Akhunzada, Adnan
    Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur.
    Survey on network virtualization using OpenFlow: Taxonomy, opportunities, and open issues. 2016. In: KSII Transactions on Internet and Information Systems, ISSN 1976-7277, Vol. 10, no. 10, p. 4902-4932. Article in journal (Refereed).
    Abstract [en]

    The popularity of network virtualization has recently regained considerable momentum because of the emergence of OpenFlow technology. OpenFlow essentially decouples the data plane from the control plane and promotes hardware programmability; consequently, it facilitates the implementation of network virtualization. This study aims to provide an overview of different approaches to creating a virtual network using OpenFlow technology. The paper also presents the OpenFlow components to compare conventional network architecture with OpenFlow network architecture, particularly in terms of virtualization. A thematic OpenFlow network virtualization taxonomy is devised to categorize network virtualization approaches. Several testbeds that support OpenFlow network virtualization are discussed with case studies to show the capabilities of OpenFlow virtualization. Moreover, the advantages of popular OpenFlow controllers that are designed to enhance network virtualization are compared and analyzed. Finally, we present key research challenges that mainly focus on security, scalability, reliability, isolation, and monitoring in the OpenFlow virtual environment. Numerous potential directions to tackle the problems related to OpenFlow network virtualization are likewise discussed.

  • 4.
    Abd-Ellah, Mahmoud Khaled
    et al.
    Al-Madina Higher Institute for Engineering and Technology.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Khalaf, Ashraf A. M.
    Minia University, Egypt.
    Hamed, Hesham F. A.
    Minia University, Egypt.
    Classification of Brain Tumor MRIs Using a Kernel Support Vector Machine. 2016. In: Building Sustainable Health Ecosystems: 6th International Conference on Well-Being in the Information Society, WIS 2016, Tampere, Finland, September 16-18, 2016, Proceedings / [ed] Hongxiu Li, Pirkko Nykänen, Reima Suomi, Nilmini Wickramasinghe, Gunilla Widén, Ming Zhan, Springer International Publishing, 2016, p. 151-160. Conference paper (Refereed).
    Abstract [en]

    The use of medical images has been continuously increasing, which makes manual investigation of every image a difficult task. This study focuses on classifying brain magnetic resonance images (MRIs) as normal, where a brain tumor is absent, or as abnormal, where a brain tumor is present. A hybrid intelligent system for automatic brain tumor detection and MRI classification is proposed. This system assists radiologists in interpreting the MRIs, improves the brain tumor diagnostic accuracy, and directs the focus toward the abnormal images only. The proposed computer-aided diagnosis (CAD) system consists of five steps: MRI preprocessing to remove the background noise, image segmentation by combining Otsu binarization and K-means clustering, feature extraction using the discrete wavelet transform (DWT) approach, dimensionality reduction of the features by applying the principal component analysis (PCA) method, and, finally, submission of the major features to a kernel support vector machine (KSVM) for the MRI classification. The performance evaluation of the proposed system measured a maximum classification accuracy of 100% using an available MRI database. The processing time for all processes was recorded as 1.23 seconds. The obtained results have demonstrated the superiority of the proposed system.
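
    As an illustration of the feature pipeline the abstract describes (DWT features, PCA reduction, kernel SVM), here is a hedged scikit-learn/PyWavelets sketch; `images` and `labels` are assumed inputs, and the preprocessing and segmentation steps are omitted.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def dwt_features(image):
            """Approximation coefficients of a single-level 2-D Haar DWT."""
            cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
            return cA.ravel()

        X = np.array([dwt_features(img) for img in images])  # images: 2-D arrays
        clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
        clf.fit(X, labels)  # labels: 0 = normal, 1 = abnormal (tumor present)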

  • 5.
    Abd-Ellah, Mahmoud Khaled
    et al.
    Electronic and Communication Department Al-Madina Higher Institute for Engineering and Technology, Giza.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Khalaf, Ashraf A. M.
    Faculty of Engineering, Minia University.
    Hamed, Hesham F. A.
    Faculty of Engineering, Minia University.
    Design and implementation of a computer-aided diagnosis system for brain tumor classification. 2017. In: 2016 28th International Conference on Microelectronics (ICM), 2017, p. 73-76, article id 7847911. Conference paper (Refereed).
    Abstract [en]

    Computer-aided diagnosis (CAD) systems have become very important for the medical diagnosis of brain tumors. The systems improve the diagnostic accuracy and reduce the required time. In this paper, a two-stage CAD system has been developed for automatic detection and classification of brain tumors through magnetic resonance images (MRIs). In the first stage, the system classifies brain tumor MRIs into normal and abnormal images. In the second stage, the type of tumor is classified as benign (noncancerous) or malignant (cancerous) from the abnormal MRIs. The proposed CAD combines the following computational methods: MRI image segmentation by K-means clustering, feature extraction using the discrete wavelet transform (DWT), and feature reduction by applying principal component analysis (PCA). The two-stage classification has been conducted using a support vector machine (SVM). Performance evaluation of the proposed CAD has achieved promising results using a non-standard MRI database.

  • 6.
    Abd-Ellah, Mahmoud Khaled
    et al.
    Electronics and Communications Department, Al-Madina Higher Institute for Engineering and Technology, Giza, Egypt.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Faculty of Engineering, Al-Azhar University, Qena, Egypt.
    Khalaf, Ashraf A.M.
    Electronics and Communications Department, Faculty of Engineering, Minia University, Minia, Egypt.
    Hamed, Hesham F.A.
    Electronics and Communications Department, Faculty of Engineering, Minia University, Minia, Egypt.
    A Review on Brain Tumor Diagnosis from MRI Images: Practical Implications, Key Achievements, and Lessons Learned. 2019. In: Magnetic Resonance Imaging, ISSN 0730-725X, E-ISSN 1873-5894, Vol. 61, p. 300-318. Article in journal (Refereed).
    Abstract [en]

    The successful early diagnosis of brain tumors plays a major role in improving the treatment outcomes and thus improving patient survival. Manually evaluating the numerous magnetic resonance imaging (MRI) images produced routinely in the clinic is a difficult process. Thus, there is a crucial need for computer-aided methods with better accuracy for early tumor diagnosis. Computer-aided brain tumor diagnosis from MRI images consists of tumor detection, segmentation, and classification processes. Over the past few years, many studies have focused on traditional or classical machine learning techniques for brain tumor diagnosis. Recently, interest has developed in using deep learning techniques for diagnosing brain tumors with better accuracy and robustness. This study presents a comprehensive review of traditional machine learning techniques and evolving deep learning techniques for brain tumor diagnosis. This review paper identifies the key achievements reflected in the performance measurement metrics of the applied algorithms in the three diagnosis processes. In addition, this study discusses the key findings and draws attention to the lessons learned as a roadmap for future research.

  • 7.
    Abdukalikova, Anara
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Kleyko, Denis
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Wiklund, Urban
    Umeå University, Umeå, Sweden.
    Detection of Atrial Fibrillation from Short ECGs: Minimalistic Complexity Analysis for Feature-Based Classifiers. 2018. In: Computing in Cardiology 2018: Proceedings / [ed] Christine Pickett; Cristiana Corsi; Pablo Laguna; Rob MacLeod, IEEE, 2018. Conference paper (Refereed).
    Abstract [en]

    In order to facilitate data-driven solutions for early detection of atrial fibrillation (AF), the 2017 CinC conference challenge was devoted to automatic AF classification based on short ECG recordings. The proposed solutions concentrated on maximizing the classifiers' F1 score, whereas the complexity of the classifiers was not considered. However, we argue that this must be addressed, as complexity places restrictions on the applicability of inexpensive devices for AF monitoring outside hospitals. Therefore, this study investigates the feasibility of complexity reduction by analyzing one of the solutions presented for the challenge.

  • 8.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology Chittagong.
    Chowdhury, Abu Sayeed
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Karim, Razuan
    University of Science and Technology Chittagong.
    An Interoperable IP based WSN for Smart Irrigation Systems. 2017. Conference paper (Refereed).
    Abstract [en]

    Wireless Sensor Networks (WSN) have been highly developed and can be used in agriculture to enable optimal irrigation scheduling. Since there is an absence of widely available methods to support effective agriculture practice in different weather conditions, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The performance of fundamental issues of this network is emulated in Tmote Sky for 6LoWPAN over an IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator. The simulated results of the WSN architecture present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. This will facilitate the deployment of a scalable and interoperable multi hop WSN, the positioning of the border router, and the management of the sensors' power consumption.

  • 9.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Paul, Sukanta
    University of Science and Technology, Chittagong.
    Akhter, Sharmin
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology, Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Selection of Energy Efficient Routing Protocol for Irrigation Enabled by Wireless Sensor Networks. 2017. In: Proceedings of 2017 IEEE 42nd Conference on Local Computer Networks Workshops, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 75-81. Conference paper (Refereed).
    Abstract [en]

    Wireless Sensor Networks (WSNs) are making a remarkable contribution to real time decision making by sensing and actuating the surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, such as irrigation scheduling based on moisture level data sensed by the sensors. Since WSNs are deployed in constrained environments, the lifetime of the sensors is crucial for normal operation of the networks. In this regard, the routing protocol is a prime factor for prolonging the lifetime of the sensors. This research focuses on the performance analysis of some clustering based routing protocols in order to select the best one. Four algorithms are considered, namely Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP) and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in the Matlab framework by using the mathematical models of those algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CH) per round, throughput and average residual energy per node. The experimental results illustrate that TEEN provides a longer stable region and lifetime than the others, while SEP ensures more throughput.
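
    Comparisons of LEACH-family protocols typically rest on the first-order radio energy model; a short sketch of that model follows, with constants set to common literature values that may differ from those used in this paper.

        # First-order radio energy model commonly used in LEACH/TEEN/SEP studies.
        E_ELEC = 50e-9       # J/bit, transceiver electronics energy
        EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier
        EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier
        D0 = (EPS_FS / EPS_MP) ** 0.5  # crossover distance between the two regimes

        def tx_energy(k_bits, d_m):
            """Energy to transmit k bits over distance d (meters)."""
            if d_m < D0:
                return E_ELEC * k_bits + EPS_FS * k_bits * d_m ** 2
            return E_ELEC * k_bits + EPS_MP * k_bits * d_m ** 4

        def rx_energy(k_bits):
            """Energy to receive k bits."""
            return E_ELEC * k_bits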

  • 10.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology Chittagong.
    Bhuyan, M. S.
    University of Science & Technology Chittagong.
    Karim, Razuan
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance Analysis of Anomaly Based Network Intrusion Detection Systems. 2018. In: Proceedings of the 43rd IEEE Conference on Local Computer Networks Workshops (LCN Workshops), Piscataway, NJ: IEEE Computer Society, 2018, p. 1-7. Conference paper (Refereed).
    Abstract [en]

    Because of the increased popularity and fast expansion of the Internet as well as the Internet of Things, networks are growing rapidly in every corner of society. As a result, a huge amount of data is travelling across computer networks, which threatens data integrity, confidentiality and reliability. Network security is therefore a burning issue for keeping the integrity of systems and data. Traditional security guards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up a new opportunity to classify abnormal traffic as anomalies with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating curve, accuracy, F-score and model build time, this paper illustrates the performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out by using the WEKA data mining tool.
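
    The study itself uses WEKA; for readers working in Python, a rough scikit-learn analogue of the comparison is sketched below (J48 is approximated by a CART decision tree, and Naïve Bayes Tree has no direct counterpart, so it is omitted; `X` and `y` stand for the prepared NSL-KDD features and binary labels).

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_validate
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier

        models = {
            "NaiveBayes": GaussianNB(),
            "MLP": MLPClassifier(max_iter=500),
            "J48-like tree": DecisionTreeClassifier(),
            "RandomForest": RandomForestClassifier(n_estimators=100),
        }
        for name, model in models.items():
            scores = cross_validate(model, X, y, cv=5,
                                    scoring=("accuracy", "precision",
                                             "recall", "f1", "roc_auc"))
            print(name, {k: v.mean() for k, v in scores.items()
                         if k.startswith("test_")})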

  • 11.
    Abid, Nosheen
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hemback, Theo
    Kovacs, Gyorgy
    Shafait, Faisal
    Liwicki, Marcus
    UCL: Unsupervised Curriculum Learning for Image Classification. Manuscript (preprint) (Other (popular science, discussion, etc.)).
    Abstract [en]

    In many complex real-world domains of computer vision, such as medical diagnostics and document analysis, the lack of labeled data often limits the effectiveness of traditional deep learning models. This study addresses these challenges by enhancing Unsupervised Curriculum Learning (UCL), a deep learning framework that automatically discovers meaningful patterns without the need for labeled data. Originally designed for remote sensing imagery, UCL has been expanded in this work to improve classification performance in a variety of domain-specific applications. UCL integrates a convolutional neural network, clustering algorithms, and selection techniques to classify images without supervision. We introduce key improvements, such as spectral clustering, outlier detection, and dimensionality reduction, to boost the framework’s accuracy. Experimental results demonstrate significant performance gains, with F1-scores increasing from 68% to 94% on a three-class subset of the CIFAR-10 dataset and from 68% to 75% on a five-class subset. The updated UCL also achieved F1-scores of 85% in medical diagnosis, 82% in scene recognition, and 62% in historical document classification. These findings underscore the potential of UCL in complex real-world applications and point to areas where further advancements are needed to maximize its utility across diverse fields.
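
    A hedged sketch of the kind of pipeline the abstract outlines: pretrained-CNN features, dimensionality reduction, then spectral clustering for pseudo-labels. The backbone, parameters, and the `batch` tensor are illustrative assumptions, not the authors' configuration.

        import torch
        from sklearn.cluster import SpectralClustering
        from sklearn.decomposition import PCA
        from torchvision.models import resnet18

        backbone = resnet18(weights="IMAGENET1K_V1")
        backbone.fc = torch.nn.Identity()  # expose the 512-d feature vector
        backbone.eval()

        with torch.no_grad():
            feats = backbone(batch).numpy()  # batch: (N, 3, 224, 224) tensor

        reduced = PCA(n_components=50).fit_transform(feats)
        pseudo_labels = SpectralClustering(n_clusters=3).fit_predict(reduced)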

  • 12.
    Abrishambaf, Reza
    et al.
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Bal, Mert
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Distributed home automation system based on IEC61499 function blocks and wireless sensor networks. 2017. In: Proceedings of the IEEE International Conference on Industrial Technology, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1354-1359, article id 7915561. Conference paper (Refereed).
    Abstract [en]

    In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all the decisions are made. The proposed control architecture is a solution to overcome problems such as the lack of flexibility and re-configurability that most conventional systems have. This has been achieved by employing a method based on the new IEC 61499 function block standard, which is proposed for distributed control systems. This paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into the area of home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.

  • 13.
    Acampora, Giovanni
    et al.
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Pedrycz, Witold
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vitiello, Autilia
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Computational Intelligence for Semantic Knowledge Management: New Perspectives for Designing and Organizing Information Systems. 2020. Collection (editor) (Other academic).
    Abstract [en]

    This book provides a comprehensive overview of computational intelligence methods for semantic knowledge management. Contrary to popular belief, the methods for semantic management of information were created several decades ago, long before the birth of the Internet. In fact, it was back in 1945 when Vannevar Bush introduced the idea for the first protohypertext: the MEMEX (MEMory + indEX) machine. In the years that followed, Bush’s idea influenced the development of early hypertext systems until, in the 1980s, Tim Berners-Lee developed the idea of the World Wide Web (WWW) as it is known today. From then on, there was an exponential growth in research and industrial activities related to the semantic management of information and its exploitation in different application domains, such as healthcare, e-learning and energy management.

    However, semantic methods are not yet able to address some of the problems that naturally characterize knowledge management, such as the vagueness and uncertainty of information. This book reveals how computational intelligence methodologies, due to their natural inclination to deal with imprecision and partial truth, are opening new positive scenarios for designing innovative semantic knowledge management architectures.

  • 14.
    Acampora, Giovanni
    et al.
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Pedrycz, Witold
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vitiello, Autilia
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Preface. 2020. In: Computational Intelligence for Semantic Knowledge Management: New Perspectives for Designing and Organizing Information Systems / [ed] Giovanni Acampora; Witold Pedrycz; Athanasios V. Vasilakos; Autilia Vitiello, Springer Nature, 2020, Vol. 837, p. vii-x. Chapter in book (Other academic).
  • 15.
    Acharya, Soam
    et al.
    Cornell University, Ithaca.
    Smith, Brian P
    Cornell University, Ithaca.
    Parnes, Peter
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Characterizing user access to videos on the World Wide Web. 1999. In: Multimedia computing and networking 2000 / [ed] Klara Nahrstedt, Bellingham, Wash: SPIE - International Society for Optical Engineering, 1999, p. 130-141. Conference paper (Refereed).
    Abstract [en]

    Despite evidence of the rising popularity of video on the web (or VOW), little is known about how users access video. However, such a characterization can greatly benefit the design of multimedia systems such as web video proxies and VOW servers. Hence, this paper presents an analysis of trace data obtained from an ongoing VOW experiment at Luleå University of Technology, Sweden. This experiment is unique in that video material is distributed over a high bandwidth network, allowing users to make access decisions without the network being a major factor. Our analysis revealed a number of interesting discoveries regarding user VOW access. For example, accesses display high temporal locality: several requests for the same video title often occur within a short time span. Accesses also exhibited spatial locality of reference, whereby a small number of machines accounted for a large number of overall requests. Another finding was a browsing pattern where users preview the initial portion of a video to find out if they are interested. If they like it, they continue watching, otherwise they halt it. This pattern suggests that caching the first several minutes of video data should prove effective. Lastly, the analysis shows that, contrary to previous studies, the ranking of video titles by popularity did not fit a Zipfian distribution.
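
    One simple way to run the Zipf check mentioned in the last sentence is to fit a line to the rank-frequency curve on log-log axes; a slope near -1 would indicate a Zipfian distribution. `counts` (requests per video title) is an assumed input.

        import numpy as np

        freq = np.sort(np.asarray(counts))[::-1]   # requests per title, by rank
        ranks = np.arange(1, len(freq) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(freq), 1)
        print(f"fitted exponent: {slope:.2f}")     # near -1 would suggest Zipf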

  • 16.
    Adalat, Mohsin
    et al.
    COSMOSE Research Group, Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan.
    Niazi, Muaz A.
    COSMOSE Research Group, Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Variations in power of opinion leaders in online communication networks. 2018. In: Royal Society Open Science, E-ISSN 2054-5703, Vol. 5, no. 10, article id 180642. Article in journal (Refereed).
    Abstract [en]

    Online social media has completely transformed how we communicate with each other. While online discussion platforms are available in the form of applications and websites, an emergent outcome of this transformation is the phenomenon of ‘opinion leaders’. A number of previous studies have been presented to identify opinion leaders in online discussion networks. In particular, Feng (2016 Comput. Hum. Behav. 54, 43–53. (doi:10.1016/j.chb.2015.07.052)) has identified five different types of central users besides outlining their communication patterns in an online communication network. However, the presented work focuses on a limited time span. The question remains as to whether similar communication patterns exist that will stand the test of time over longer periods. Here, we present a critical analysis of the Feng framework both for short-term as well as for longer periods. Additionally, for validation, we take another case study presented by Udanor et al. (2016 Program 50, 481–507. (doi:10.1108/PROG-02-2016-0011)) to further understand these dynamics. Results indicate that not all Feng-based central users may be identifiable in the longer term. Conversation starters and influencers were noted as opinion leaders in the network. These users play an important role as information sources in long-term discussions, whereas network builders and active engagers help in connecting otherwise sparse communities. Furthermore, we discuss the changing positions of opinion leaders and their power to keep isolates interested in an online discussion network.
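
    For readers who want to experiment with identifying central users, a generic NetworkX sketch is given below; it computes standard centralities rather than the Feng taxonomy itself, and `edges` (replier-to-author pairs) is an assumed input.

        import networkx as nx

        G = nx.DiGraph(edges)  # edges: iterable of (replier, author) pairs
        in_deg = nx.in_degree_centrality(G)       # who receives many replies
        betweenness = nx.betweenness_centrality(G)  # who bridges communities
        leaders = sorted(G.nodes, key=in_deg.get, reverse=True)[:5]
        print("candidate opinion leaders:", leaders)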

  • 17.
    Afroze, Tasnim
    et al.
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Akther, Shumia
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Chowdhury, Mohammed Armanuzzaman
    Department of Computer Science and Engineering, University of Chittagong, Chattogram 4331, Bangladesh.
    Hossain, Emam
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chattogram 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Glaucoma Detection Using Inception Convolutional Neural Network V3. 2021. In: Applied Intelligence and Informatics: First International Conference, AII 2021, Nottingham, UK, July 30–31, 2021, Proceedings / [ed] Mufti Mahmud; M. Shamim Kaiser; Nikola Kasabov; Khan Iftekharuddin; Ning Zhong, Springer, 2021, p. 17-28. Conference paper (Refereed).
    Abstract [en]

    Glaucoma detection is an important research area in intelligent systems, and it plays an important role in the medical field. Glaucoma can give rise to irreversible blindness due to the lack of a proper diagnosis. Doctors need to perform many tests to diagnose this threatening disease, which requires a lot of time and expense. Moreover, affected people may sometimes not have any vision loss at the early stage of glaucoma. To detect glaucoma, we have built a model that lessens the time and cost. Our work introduces a CNN based Inception V3 model. We used a total of 6072 images, of which 2336 were glaucomatous and 3736 were normal fundus images. For training our model we took 5460 images, and for testing we took 612 images. We obtained an accuracy of 0.8529 and a value of 0.9387 for AUC. For comparison, we used the DenseNet121 and ResNet50 algorithms and got accuracies of 0.8153 and 0.7761, respectively.
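
    A hedged Keras sketch of the transfer-learning setup the abstract implies (Inception V3 backbone, binary glaucoma/normal output); the directory layout, image size, and hyperparameters are assumptions, not the authors' exact configuration.

        import tensorflow as tf

        base = tf.keras.applications.InceptionV3(
            include_top=False, weights="imagenet",
            input_shape=(299, 299, 3), pooling="avg")
        base.trainable = False  # start by training only the new head

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dense(1, activation="sigmoid"),  # glaucoma vs normal
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy", tf.keras.metrics.AUC()])

        train = tf.keras.utils.image_dataset_from_directory(
            "fundus/train", image_size=(299, 299), batch_size=32,
            label_mode="binary")  # assumed directory of labeled fundus images
        model.fit(train, epochs=10)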

  • 18.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland; Computing and Data Science, Willamette University, Salem, OR, USA.
    Olaleye, Sunday Adewale
    School of Business, Jamk University of Applied Sciences, Rajakatu 35, 40100, Jyvaskyla, Finland.
    Bower, Matt
    School of Education, Macquarie University, Sydney, NSW, Australia.
    Oyelere, Solomon
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Examining the relationships between students’ perceptions of technology, pedagogy, and cognition: the case of immersive virtual reality mini games to foster computational thinking in higher education. 2023. In: Smart Learning Environments, E-ISSN 2196-7091, Vol. 10, no. 1, article id 16. Article in journal (Refereed).
    Abstract [en]

    Researchers are increasingly exploring educational games in immersive virtual reality (IVR) environments to facilitate students’ learning experiences. Mainly, the effect of IVR on learning outcomes has been the focus. However, far too little attention has been paid to the influence of game elements and IVR features on learners’ perceived cognition. This study examined the relationship between game elements (challenge, goal clarity, and feedback) as a pedagogical approach, features of IVR technology (immersion and interaction), and learners’ perceived cognition (reflective thinking and comprehension). An experiment was conducted with 49 undergraduate students who played an IVR game-based application (iThinkSmart) containing mini games developed to facilitate learners’ computational thinking competency. The study employed partial least squares structural equation modelling to investigate the effect of educational game elements and learning contents on learners’ cognition. Findings show that goal clarity is the main predictor of learners’ reflective thinking and comprehension in an educational game-based IVR application. It was also confirmed that immersion and interaction experience impact learners’ comprehension. Notably, adequate learning content in terms of the organisation and relevance of the content contained in an IVR game-based application significantly moderates learners’ reflective thinking and comprehension. The findings of this study have implications for educators and developers of IVR game-based interventions to facilitate learning in the higher education context. In particular, the implication of this study touches on the aspect of learners’ cognitive factors that aim to produce 21st-century problem-solving skills through critical thinking.

  • 19.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Laine, Teemu H.
    Department of Digital Media, Ajou University, 16499, Suwon, Republic of Korea.
    Co-design of mini games for learning computational thinking in an online environment. 2021. In: Education and Information Technologies: Official Journal of the IFIP technical committee on Education, ISSN 1360-2357, E-ISSN 1573-7608, Vol. 26, no. 5, p. 5815-5849. Article in journal (Refereed).
    Abstract [en]

    Understanding the principles of computational thinking (CT), e.g., problem abstraction, decomposition, and recursion, is vital for computer science (CS) students. Unfortunately, these concepts can be difficult for novice students to understand. One way students can develop CT skills is to involve them in the design of an application to teach CT. This study focuses on co-designing mini games to support teaching and learning CT principles and concepts in an online environment. Online co-design (OCD) of mini games enhances students’ understanding of problem-solving through a rigorous process of designing contextual educational games to aid their own learning. Given the current COVID-19 pandemic, where face-to-face co-designing between researchers and stakeholders could be difficult, OCD is a suitable option. CS students in a Nigerian higher education institution were recruited to co-design mini games with researchers. Mixed research methods comprising qualitative and quantitative strategies were employed in this study. Findings show that the participants gained relevant knowledge, for example, how to (i) create game scenarios and game elements related to CT, (ii) connect a contextual storyline to mini games, (iii) collaborate in a group to create contextual low-fidelity mini game prototypes, and (iv) peer review each other’s mini game concepts. In addition, students were motivated toward designing educational mini games in their future studies. This study also demonstrates how to conduct OCD with students, presents lessons learned, and provides recommendations based on the authors’ experience.

  • 20.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland; School of Computing and Data Science, Willamette University, Salem, OR, 97301, USA.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland.
    Design, development, and evaluation of a virtual reality game-based application to support computational thinking. 2023. In: Educational technology research and development, ISSN 1042-1629, E-ISSN 1556-6501, Vol. 71, no. 2, p. 505-537. Article in journal (Refereed).
    Abstract [en]

    Computational thinking (CT) has become an essential skill nowadays. For young students, CT competency is required to prepare them for future jobs. This competency can facilitate students’ understanding of programming knowledge, which has been a challenge for many novices pursuing a computer science degree. This study focuses on designing and implementing a virtual reality (VR) game-based application (iThinkSmart) to support CT knowledge. The study followed the design science research methodology to design, implement, and evaluate the first prototype of the VR application. An initial evaluation of the prototype was conducted with 47 computer science students from a Nigerian university who voluntarily participated in an experimental process. To determine what works and what needs to be improved in the iThinkSmart VR game-based application, two groups were randomly formed, consisting of the experimental (n = 21) and the control (n = 26) groups respectively. Our findings suggest that VR increases motivation and therefore increases students’ CT skills; this contributes to knowledge regarding the affordances of VR in education and particularly provides evidence on the use of visualization of CT concepts to facilitate programming education. Furthermore, the study revealed that immersion, interaction, and engagement in a VR educational application can promote students’ CT competency in higher education institutions (HEI). In addition, it was shown that students who played the iThinkSmart VR game-based application gained higher cognitive benefits and showed increased interest in and a better attitude toward learning CT concepts. Although further investigation is required in order to gain more insights into students’ learning process, this study made significant contributions in positioning CT in the HEI context and provides empirical evidence regarding the use of educational VR mini games to support students’ learning achievements.

  • 21.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, Joensuu, Finland.
    iThinkSmart: Immersive Virtual Reality Mini Games to Facilitate Students’ Computational Thinking Skills. 2021. In: Koli Calling '21: 21st Koli Calling International Conference on Computing Education Research / [ed] Otto Seppälä; Andrew Petersen, Association for Computing Machinery, 2021, article id 33. Conference paper (Refereed).
    Abstract [en]

    This paper presents iThinkSmart, an immersive virtual reality-based application to facilitate the learning of computational thinking (CT) concepts. The tool was developed to supplement the traditional teaching and learning of CT by integrating three virtual mini games, namely, River Crossing, Tower of Hanoi, and Mount Patti treasure hunt, to foster immersion, interaction, engagement, and personalization for an enhanced learning experience. iThinkSmart mini games can be played on a smartphone with a Google Cardboard and hand controller. This first prototype of the game assesses players' competency in CT and renders feedback based on learning progress.

  • 22.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, P.O. Box 111, FIN-80101, Joensuu, Finland.
    Scientific production and thematic breakthroughs in smart learning environments: a bibliometric analysis. 2021. In: Smart Learning Environments, E-ISSN 2196-7091, Vol. 8, article id 1. Article, review/survey (Refereed).
    Abstract [en]

    This study examines the research landscape of smart learning environments by conducting a comprehensive bibliometric analysis of the field over the years. The study focused on the research trends, scholars’ productivity, and thematic focus of scientific publications in the field of smart learning environments. A total of 1081 peer-reviewed articles were retrieved from the Scopus database. A bibliometric approach was applied to analyse the data for a comprehensive overview of the trend, thematic focus, and scientific production in the field of smart learning environments. The results of this bibliometric analysis indicate that the first paper on smart learning environments was published in 2002, marking the beginning of the field. Among other sources, “Computers & Education,” “Smart Learning Environments,” and “Computers in Human Behaviour” are the most relevant outlets publishing articles associated with smart learning environments. The work of Kinshuk et al., published in 2016, stands out as the most cited work among the analysed documents. The United States has the highest number of scientific productions and remains the most relevant country in the smart learning environment field. The results also identify prolific scholars and the most relevant institutions in the field. Keywords such as “learning analytics,” “adaptive learning,” “personalized learning,” “blockchain,” and “deep learning” remain the trending keywords. Furthermore, thematic analysis shows that “digital storytelling” and its associated components, such as “virtual reality,” “critical thinking,” and “serious games,” are the emerging themes of smart learning environments but need to be further developed to establish more ties with “smart learning”. The study provides a useful contribution to the field by clearly presenting a comprehensive overview, the research hotspots, the thematic focus, and the future direction of the field. These findings can guide scholars, especially young ones in the field of smart learning environments, in defining their research focus and what aspects of smart learning can be explored.

  • 23.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Sanusi, Ismaila Temitayo
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Application of Virtual Reality in Computer Science Education: A Systemic Review Based on Bibliometric and Content Analysis Methods. 2021. In: Education Sciences, E-ISSN 2227-7102, Vol. 11, no. 3, article id 142. Article, review/survey (Refereed).
    Abstract [en]

    This study investigated the role of virtual reality (VR) in computer science (CS) education over the last 10 years by conducting a bibliometric and content analysis of articles related to the use of VR in CS education. A total of 971 articles published in peer-reviewed journals and conferences were collected from Web of Science and Scopus databases to conduct the bibliometric analysis. Furthermore, content analysis was conducted on 39 articles that met the inclusion criteria. This study demonstrates that VR research for CS education was faring well around 2011 but witnessed low production output between the years 2013 and 2016. However, scholars have increased their contribution in this field recently, starting from the year 2017. This study also revealed prolific scholars contributing to the field. It provides insightful information regarding research hotspots in VR that have emerged recently, which can be further explored to enhance CS education. In addition, the quantitative method remains the most preferred research method, while the questionnaire was the most used data collection technique. Moreover, descriptive analysis was primarily used in studies on VR in CS education. The study concludes that even though scholars are leveraging VR to advance CS education, more effort needs to be made by stakeholders across countries and institutions. In addition, a more rigorous methodological approach needs to be employed in future studies to provide more evidence-based research output. Our future study would investigate the pedagogy, content, and context of studies on VR in CS education.

  • 24.
    Agreste, Santa
    et al.
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    De Meo, Pasquale
    Department of Ancient and Modern Civilizations, University of Messina.
    Fiumara, Giacomo
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    Piccione, Giuseppe
    Department of Mathematics and Computer Science, Physical Sciences and Earth Sciences, University of Messina.
    Piccolo, Sebastiano
    Department of Management Engineering - Engineering Systems Division at the Technical University of Denmark.
    Rosaci, Domenico
    DIIES Department, University of Reggio Calabria Via Graziella.
    Sarné, Giuseppe M. L.
    DICEAM Department, University of Reggio Calabria Via Graziella.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An empirical comparison of algorithms to find communities in directed graphs and their application in Web Data Analytics. 2017. In: IEEE Transactions on Big Data, E-ISSN 2332-7790, Vol. 3, no. 3, p. 289-306. Article in journal (Refereed).
    Abstract [en]

    Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges) but many graphs on the Web - e.g. microblogging Web sites, trust networks or the Web graph itself - are often directed. Few community detection algorithms deal with directed graphs, and an experimental comparison of them has been lacking. In this paper, we evaluated several community detection algorithms with respect to accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) are explicitly designed to manage directed graphs, while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group of algorithms (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs; on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by other algorithms, while Label Propagation showed outstanding scalability on both artificial and real graphs. The Infomap algorithm showcased the best trade-off between accuracy and computational performance and, therefore, has to be considered a promising tool for Web Data Analytics purposes.
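
    The three algorithm families compared in the paper are all available in python-igraph; a short sketch follows, with `edge_list` as an assumed input. Note how WalkTrap requires symmetrizing the graph first, mirroring the paper's grouping.

        import igraph as ig

        g = ig.Graph(edges=edge_list, directed=True)  # edge_list: (src, dst) ids

        clusterings = {
            "Infomap": g.community_infomap(),
            "LabelPropagation": g.community_label_propagation(),
            # WalkTrap ignores directions, so symmetrize the graph first.
            "WalkTrap": g.as_undirected().community_walktrap().as_clustering(),
        }
        for name, cl in clusterings.items():
            print(name, "found", len(cl), "communities")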

  • 25.
    Ahmad, Iftikhar
    et al.
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Noor, Rafidah Md
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Ali, Ihsan
    Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Characterizing the role of vehicular cloud computing in road traffic management. 2017. In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Vol. 13, no. 5. Article in journal (Refereed).
    Abstract [en]

    Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Explicitly, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include but are not limited to storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of the current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to have a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles.

  • 26.
    Ahmed, Ejaz
    et al.
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Yaqoob, Ibrar
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Hashem, Ibrahim Abaker Targio
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Khan, Imran
    Schneider Electric Industries, Grenoble.
    Ahmed, Abdelmuttlib Ibrahim Abdalla
    The Centre for Mobile Cloud Computing Research, Faculty of Computer Science and Information Technology, University of Malaya.
    Imran, Muhammad
    College of Computer and Information Sciences, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    The role of big data analytics in Internet of Things. 2017. In: Computer Networks, ISSN 1389-1286, E-ISSN 1872-7069, Vol. 129, no. 2, p. 459-471. Article in journal (Refereed).
    Abstract [en]

    The explosive growth in the number of devices connected to the Internet of Things (IoT) and the exponential increase in data consumption only reflect how the growth of big data perfectly overlaps with that of IoT. The management of big data in a continuously expanding network gives rise to non-trivial concerns regarding data collection efficiency, data processing, analytics, and security. To address these concerns, researchers have examined the challenges associated with the successful deployment of IoT. Despite the large number of studies on big data, analytics, and IoT, the convergence of these areas creates several opportunities for flourishing big data and analytics for IoT systems. In this paper, we explore the recent advances in big data analytics for IoT systems as well as the key requirements for managing big data and for enabling analytics in an IoT environment. We taxonomize the literature based on important parameters. We identify the opportunities resulting from the convergence of big data, analytics, and IoT, as well as discuss the role of big data analytics in IoT applications. Finally, several open challenges are presented as future research directions.

  • 27.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Hasan, Mohammad
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Comparative Performance of Tree Based Machine Learning Classifiers in Product Backorder Prediction2023In: Intelligent Computing & Optimization: Proceedings of the 5th International Conference on Intelligent Computing and Optimization 2022 (ICO2022) / [ed] Pandian Vasant; Gerhard-Wilhelm Weber; José Antonio Marmolejo-Saucedo; Elias Munapo; J. Joshua Thomas, Springer, 2023, 1, p. 572-584Chapter in book (Refereed)
    Abstract [en]

    Early prediction of whether a product will go to backorder is necessary for optimal inventory management, which can reduce losses in sales, establish a good relationship between supplier and customer, and maximize revenues. In this study, we investigate the performance and effectiveness of tree-based machine learning algorithms for predicting the backorder of a product. The research methodology consists of data preprocessing, feature selection using a statistical hypothesis test, imbalanced learning using the random undersampling method, and performance evaluation and comparison of four tree-based machine learning algorithms, namely decision tree, random forest, adaptive boosting, and gradient boosting, in terms of accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve, and area under the precision-recall curve. The three main findings of this study are that (1) the random forest model without feature selection and with the random undersampling method achieved the highest performance on all performance metrics, (2) feature selection did not contribute to the performance enhancement of the tree-based classifiers, and (3) the random undersampling method significantly improved the performance of tree-based classifiers in product backorder prediction.
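
    A minimal Python sketch of the random-undersampling step described above, assuming scikit-learn and imbalanced-learn are available; the synthetic dataset and all sizes are invented stand-ins for the real backorder data, not the authors' setup:

        # Hedged sketch: random undersampling followed by a tree-based classifier.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from imblearn.under_sampling import RandomUnderSampler

        # Synthetic stand-in for an imbalanced backorder dataset (~1% positives).
        X, y = make_classification(n_samples=20000, n_features=20,
                                   weights=[0.99], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Balance the training set only; the test set keeps its natural skew.
        X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
        print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))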

  • 28.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram 4000, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Evolutionary Belief Rule-Based Clinical Decision Support System to Predict COVID-19 Severity under Uncertainty2021In: Applied Sciences, E-ISSN 2076-3417, Vol. 11, no 13, article id 5810Article in journal (Refereed)
    Abstract [en]

    Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, for effective hospital resource utilization, and for minimizing the mortality rate in the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. The experts' knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed in this paper. Several experiments are performed using 485 COVID-19 patients' data to evaluate the effectiveness of the proposed system. Experimental results show that, after optimization, the conjunctive belief rule-based system achieved an accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while for the disjunctive belief rule base they are 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, our proposed method shows superior performance compared with four traditional machine learning algorithms: LR, SVM, DT, and ANN. All these results validate the effectiveness of our proposed method. The proposed system will help hospital authorities to identify severe and non-severe COVID-19 patients and adopt optimal treatment plans in pandemic situations.

    Download full text (pdf)
    fulltext
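
    A toy illustration, in the spirit of the optimization step above, of tuning rule parameters with SciPy's differential evolution; the linear scoring function and synthetic marker data are invented stand-ins, not the paper's belief rule base or its modified algorithm:

        # Hedged toy: tune rule parameters with differential evolution.
        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(0)
        markers = rng.normal(size=(200, 3))                 # three blood-test markers
        severe = (markers @ np.array([0.8, -0.5, 0.3]) > 0).astype(int)

        def loss(params):
            w, theta = params[:3], params[3]                # attribute weights, threshold
            pred = (markers @ w > theta).astype(int)
            return np.mean(pred != severe)                  # misclassification rate

        bounds = [(-1, 1)] * 4
        result = differential_evolution(loss, bounds, seed=0)
        print("best parameters:", result.x, "error:", result.fun)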
  • 29.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Naim Uddin Rahi, Mohammad
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Uddin, Raihan
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Sen, Anik
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Shahadat Hossain, Mohammad
    University of Chittagong, Chattogram, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Machine Learning-Based Tomato Leaf Disease Diagnosis Using Radiomics Features2023In: Proceedings of the Fourth International Conference on Trends in Computational and Cognitive Engineering - TCCE 2022 / [ed] M. Shamim Kaiser; Sajjad Waheed; Anirban Bandyopadhyay; Mufti Mahmud; Kanad Ray, Springer Science and Business Media Deutschland GmbH , 2023, Vol. 1, p. 25-35Conference paper (Refereed)
    Abstract [en]

    Tomato leaves can be infected with various viruses and fungal diseases that drastically reduce tomato production and incur great economic loss. Therefore, tomato leaf disease detection and identification are crucial for meeting the global demand for tomatoes from a large population. This paper proposes a machine learning-based technique to identify diseases on tomato leaves and classify them into three disease classes (Septoria, Yellow Curl Leaf, and Late Blight) and one healthy class. The proposed method extracts radiomics-based features from tomato leaf images and identifies the disease with a gradient boosting classifier. The dataset used in this study consists of 4000 tomato leaf disease images collected from the Plant Village dataset. The experimental results demonstrate the effectiveness and applicability of our proposed method for tomato leaf disease detection and classification.
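
    A brief sketch of the classification stage, assuming scikit-learn; randomly generated vectors stand in for the radiomics features, whose extraction is out of scope here:

        # Hedged sketch: gradient boosting over precomputed radiomics-style features.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        features = rng.normal(size=(400, 30))   # one feature row per leaf image (stand-in)
        labels = rng.integers(0, 4, size=400)   # Septoria, Yellow Curl, Late Blight, healthy

        clf = GradientBoostingClassifier(random_state=0)
        print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())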

  • 30.
    Ahmed, Mumtahina
    et al.
    Department of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Explainable Text Classification Model for COVID-19 Fake News Detection2022In: Journal of Internet Services and Information Security (JISIS), ISSN 2182-2069, E-ISSN 2182-2077, Vol. 12, no 2, p. 51-69Article in journal (Refereed)
    Abstract [en]

    Artificial intelligence has achieved notable advances across many applications, and the field has recently turned to developing novel methods for explaining machine learning models. Deep neural networks deliver the best accuracy in domains such as text categorization, image classification, and speech recognition. Since neural network models are black boxes, they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection is a challenging research problem, as misinformation endangers the lives of many online users. Therefore, transparency and explainability in COVID-19 fake news classification are necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model in which BiLSTM assures classification accuracy and LIME ensures transparency and explainability. In this integrated model, since LIME behaves similarly to the original model and explains the prediction, the proposed model becomes comprehensible. The explainability of this model is measured using Kendall's tau correlation coefficient. We also employ several machine learning models and provide a comparison of their performances. Finally, we analyze and compare the computational overhead of our proposed model with that of the other methods, since the model takes an integrated strategy.
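
    A hedged sketch of the LIME side of the approach, assuming the lime and scikit-learn packages; a TF-IDF plus logistic regression pipeline stands in for the paper's BiLSTM, and the toy texts are invented:

        # Hedged sketch: LIME explains one prediction of a stand-in text classifier.
        from lime.lime_text import LimeTextExplainer
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["vaccines are safe and tested", "miracle cure kills virus in hours",
                 "hospitals report rising cases", "drinking bleach prevents covid"]
        labels = [0, 1, 0, 1]                     # 0 = real, 1 = fake (toy data)

        pipe = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

        explainer = LimeTextExplainer(class_names=["real", "fake"])
        exp = explainer.explain_instance(texts[1], pipe.predict_proba, num_features=4)
        print(exp.as_list())                      # word-level contributions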

  • 31.
    Ahmed, Shamim
    et al.
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, Bangladesh.
    Shamim Kaiser, M.
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science & Engineering, University of Chittagong, Chattogram, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Comparative Analysis of LIME and SHAP Interpreters with Explainable ML-Based Diabetes Predictions2024In: IEEE Access, E-ISSN 2169-3536Article in journal (Refereed)
    Abstract [en]

    Explainable artificial intelligence is beneficial for converting opaque machine learning models into transparent ones and for outlining how each one makes its decisions in the healthcare industry. Model-agnostic techniques can account for the variables that affect decision-making in diabetes prediction. In this project, we investigate how to generate local and global explanations for a machine learning model built on a logistic regression architecture. We trained on 253,680 survey responses from diabetes patients using the explainable AI techniques LIME and SHAP. LIME and SHAP were then used to explain the predictions produced by the logistic regression and random forest based models on the validation and test sets. A comparative analysis and discussion of various experimental findings for LIME and SHAP is provided, along with their strengths and weaknesses in terms of interpretation and a discussion of future work. With a high accuracy of 86% on the test set, we used an LR architecture with a spatial attention mechanism, demonstrating the possibility of merging machine learning and explainable AI to improve diabetes prediction, diagnosis, and treatment. We also discuss various applications, difficulties, and probable future directions of machine learning models with LIME and SHAP interpreters.

    Download full text (pdf)
    fulltext
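
    A hedged sketch of SHAP explanations for a logistic-regression model, assuming the shap package's LinearExplainer; the synthetic features are placeholders for the real 253,680 survey rows:

        # Hedged sketch: local SHAP explanations for a linear diabetes model.
        import numpy as np
        import shap
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))                      # survey-style features (stand-in)
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

        model = LogisticRegression().fit(X, y)
        explainer = shap.LinearExplainer(model, X)         # X serves as background data
        shap_values = explainer.shap_values(X[:5])         # per-feature contributions
        print(np.round(shap_values, 3))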
  • 32.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Sazzad
    Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Deep Learning Approach with Data Augmentation to Recognize Facial Expressions in Real Time2022In: Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering: TCCE 2021 / [ed] M. Shamim Kaiser; Kanad Ray; Anirban Bandyopadhyay; Kavikumar Jacob; Kek Sie Long, Springer Nature, 2022, p. 487-500Conference paper (Refereed)
    Abstract [en]

    The enormous use of facial expression recognition in various sectors of computer science has heightened researchers' interest in this topic. Computer vision coupled with a deep learning approach offers a way to solve several real-world problems. For instance, in robotics, analyzing information from visual content is one of the requirements for carrying out and strengthening communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in the area of computer vision. In our previous work, a facial expression recognition system was delivered which can classify an image into seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This paper extends that research by proposing a real-time facial expression recognition system that can recognize a total of ten facial expressions from video streaming data: the previous seven plus three additional expressions (mockery, think, and wink). After model training, the proposed model achieved high validation accuracy on a combined facial expression dataset. Moreover, the real-time validation of the proposed model is also promising.
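
    A minimal sketch of on-the-fly image augmentation of the kind such pipelines typically use, assuming TensorFlow/Keras; the layer choices and face-crop shapes are illustrative, not the authors':

        # Hedged sketch: random flips, rotations and zooms applied per batch.
        import tensorflow as tf
        from tensorflow.keras import layers

        augment = tf.keras.Sequential([
            layers.RandomFlip("horizontal"),
            layers.RandomRotation(0.1),
            layers.RandomZoom(0.1),
        ])

        images = tf.random.uniform((8, 48, 48, 1))   # batch of face crops (stand-in)
        augmented = augment(images, training=True)   # new variants on every call
        print(augmented.shape)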

  • 33.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Jamil, Mohammad Newaj
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An Integrated Deep Learning and Belief Rule Base Intelligent System to Predict Survival of COVID-19 Patient under Uncertainty2022In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 14, no 2, p. 660-676Article in journal (Refereed)
    Abstract [en]

    The novel coronavirus-induced disease COVID-19 is the biggest threat to human health at the present time, and due to the transmission ability of this virus via its conveyor, it is spreading rapidly in almost every corner of the globe. The unification of medical and IT experts is required to bring this outbreak under control. In this research, an integration of both data-driven and knowledge-driven approaches in a single framework is proposed to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction result, along with eight other significant risk factors associated with COVID-19 patients, is analyzed with a knowledge-driven belief rule-based expert system, which produces a survival probability for that particular patient. The reliability of the proposed integrated system has been tested using real patient data and compared with expert opinion, and the performance of the system is found to be promising.
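
    A hedged sketch of fine-tuning one of the named backbones (Xception) for the binary critical/non-critical X-ray task, assuming TensorFlow/Keras; the classification head and hyperparameters are illustrative:

        # Hedged sketch: frozen Xception backbone with a small binary head.
        import tensorflow as tf
        from tensorflow.keras import layers

        base = tf.keras.applications.Xception(
            weights="imagenet", include_top=False, input_shape=(299, 299, 3))
        base.trainable = False                      # freeze pre-trained features

        inputs = tf.keras.Input(shape=(299, 299, 3))
        x = tf.keras.applications.xception.preprocess_input(inputs)
        x = base(x, training=False)
        x = layers.GlobalAveragePooling2D()(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)   # critical vs non-critical

        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()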

  • 34.
    Akbar, Mariam
    et al.
    COMSATS Institute of Information Technology, Islamabad.
    Javaid, Nadeem
    COMSATS Institute of Information Technology, Islamabad.
    Kahn, Ayesha Hussain
    COMSATS Institute of Information Technology, Islamabad.
    Imran, Muhammad Al
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Shoaib, Muhammad
    College of Computer and Information Sciences, Almuzahmiyah, King Saud University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Efficient Data Gathering in 3D Linear Underwater Wireless Sensor Networks Using Sink Mobility2016In: Sensors, E-ISSN 1424-8220, Vol. 16, no 3, article id 404Article in journal (Refereed)
    Abstract [en]

    Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands extra accuracy and computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to existing techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss, and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss, and scalability.

  • 35.
    Akifev, Daniil
    et al.
    Independent researcher.
    Liakh, Tatiana
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ovsiannikova, Polina
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Sorokin, Radimir
    Independent researcher.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Debugging approach for IEC 61499 control applications in FBME2023In: 2023 IEEE 32nd International Symposium on Industrial Electronics (ISIE), IEEE, 2023Conference paper (Refereed)
  • 36.
    Akram, Waseem
    et al.
    COMSATS University Islamabad, Computer Science Department, Islamabad, Pakistan.
    Niazi, Muaz A.
    COMSATS University Islamabad, Computer Science Department, Islamabad, Pakistan.
    Iantovics, Laszlo Barna
    Petru Maior University of Tirgu Mures, Informatics Department, Tirgu Mures, Romania.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Towards agent-based model specification of smart grid: a cognitive agent-based computing approach2019In: Interdisciplinary Description of Complex Systems, ISSN 1334-4684, E-ISSN 1334-4676, Vol. 17, no 3B, p. 546-585Article in journal (Refereed)
    Abstract [en]

    A smart grid can be considered a complex network where each node represents a generation unit or a consumer, whereas links represent transmission lines. One way to study complex systems is the agent-based modeling paradigm, which represents a complex system as autonomous agents interacting with each other. A number of studies in the smart grid domain have previously made use of the agent-based modeling paradigm. However, to the best of our knowledge, none of these studies has focused on the specification aspect of the model. The model specification is important not only for understanding but also for replication of the model. To fill this gap, this study focuses on specification methods for smart grid modeling. We adopt two specification methods: Overview, Design concepts, and Details (ODD) and descriptive agent-based modeling. Using these specification methods, we provide tutorials and guidelines for developing smart grid models, from conceptual modeling to a validated agent-based model through simulation. The specification study is exemplified through a case study from the smart grid domain. In the case study, we consider a large network in which different consumers and power generation units are connected with each other in different configurations. In such a network, communication takes place between consumers and generating units for energy transmission and data routing. We demonstrate how to effectively model a complex system such as a smart grid using specification methods. We analyze the two specification approaches both qualitatively and quantitatively. Extensive experiments demonstrate that descriptive agent-based modeling is more useful than the ODD method both for modeling and for replicating models of the smart grid.

  • 37.
    Akter, Mehenika
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hand-Drawn Emoji Recognition using Convolutional Neural Network2021In: Proceedings of 2020 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), IEEE, 2021, p. 147-152Conference paper (Refereed)
    Abstract [en]

    Emojis are small icons or images used to express our sentiments or feelings in text messages. They are extensively used on different social media platforms such as Facebook, Twitter, and Instagram. In this research paper, we consider hand-drawn emojis and classify them into 8 classes. Hand-drawn emojis are emojis drawn on any digital platform or simply on paper with a pen. This paper will enable users to classify hand-drawn emojis so that they can use them on any social media platform without confusion. We made a local dataset of 500 images for each class, for a total of 4000 images of hand-drawn emojis. We present a system which can recognise and classify the emojis into 8 classes with a convolutional neural network model. The model favourably recognises and classifies the hand-drawn emojis with an accuracy of 97%. Some pre-trained CNN models, namely VGG16, VGG19, ResNet50, MobileNetV2, InceptionV3, and Xception, are also trained on the dataset to compare accuracy and check whether they are better than the proposed model. In addition, machine learning models such as SVM, Random Forest, Adaboost, Decision Tree, and XGboost are also implemented on the dataset.

    Download full text (pdf)
    fulltext
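
    A compact CNN sketch of the kind the paper proposes for the eight emoji classes, assuming TensorFlow/Keras; all layer sizes are illustrative, not the authors' architecture:

        # Hedged sketch: small CNN for 8-class image classification.
        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Input(shape=(64, 64, 1)),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(8, activation="softmax"),   # eight emoji classes
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()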
  • 38.
    Akter, Mehenika
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Uddin Ahmed, Tawsin
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mosquito Classification Using Convolutional Neural Network with Data Augmentation2021In: Intelligent Computing and Optimization: Proceedings of the 3rd International Conference on Intelligent Computing and Optimization 2020 (ICO 2020) / [ed] Pandian Vasant, Ivan Zelinka, Gerhard-Wilhelm Weber, Springer Nature, 2021, p. 865-879Conference paper (Refereed)
    Abstract [en]

    Mosquitoes are responsible for more deaths every year than any other animal throughout the world, and Bangladesh suffers greatly from this problem. Dengue, malaria, chikungunya, zika, yellow fever, and other diseases are caused by dangerous mosquito bites. The three main types of mosquitoes found in Bangladesh are aedes, anopheles, and culex. Their identification is crucial for taking the necessary steps to eliminate them in an area. Hence, a convolutional neural network (CNN) model is developed so that mosquitoes can be classified from their images. We prepared a local dataset consisting of 442 images collected from various sources. An accuracy of 70% was achieved by running the proposed CNN model on the collected dataset. However, after augmentation of this dataset to 3,600 images, the accuracy increases to 93%. We also compare the CNN method with VGG-16, Random Forest, XGboost, and SVM. Our proposed CNN method outperforms these methods in terms of mosquito classification accuracy. Thus, this research forms an example of humanitarian technology, where data science can be used to support mosquito classification, enabling the treatment of various mosquito-borne diseases.

    Download full text (pdf)
    fulltext
  • 39.
    Akter, Nasrin
    et al.
    BGC Trust University Bangladesh, Bidyanagar, Chandanaish, Bangladesh.
    Junjun, Jubair Ahmed
    BGC Trust University Bangladesh, Bidyanagar, Chandanaish, Bangladesh.
    Nahar, Nazmun
    BGC Trust University Bangladesh, Bidyanagar, Chandanaish, Bangladesh.
    Shahadat Hossain, Mohammad
    University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Md. Sazzad
    University of Liberal Arts Bangladesh, Dhaka, 1209, Bangladesh.
    Brain Tumor Classification using Transfer Learning from MRI Images2022In: Proceedings of International Conference on Fourth Industrial Revolution and Beyond 2021 / [ed] Sazzad Hossain, Md. Shahadat Hossain, M. Shamim Kaiser, Satya Prasad Majumder, Kanad Ray, Springer, 2022, p. 575-587Chapter in book (Refereed)
    Abstract [en]

    One of the most vital parts of medical image analysis is the classification of brain tumors. Because tumors are thought to be origins of cancer, accurate brain tumor classification can save lives. As a result, CNN (Convolutional Neural Network)-based techniques for classifying brain cancers are frequently employed. However, there is a problem: CNNs require vast amounts of training data to produce good performance. This is where transfer learning enters the picture. We present a 4-class transfer learning approach for categorizing Glioma, Meningioma, and Pituitary tumors and non-tumors in this study. Glioma, meningioma, and pituitary tumors are the three most prevalent types of brain tumors. Our method, which employs transfer learning, utilizes a pre-trained InceptionResnetV1 model to classify brain MRI images by extracting features from them and classifying with a softmax classifier. The proposed approach outperforms all prior techniques with a mean classification accuracy of 93.95%. For the evaluation of our method we use a Kaggle dataset. Precision, recall, and F-score are the key performance metrics employed in this study.

  • 40.
    Akter, Shamima
    et al.
    International Islamic University, Chittagong, Bangladesh.
    Nahar, Nazmun
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A New Crossover Technique to Improve Genetic Algorithm and Its Application to TSP2019In: Proceedings of 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), IEEE, 2019, article id 18566123Conference paper (Refereed)
    Abstract [en]

    An optimization problem like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard problem as well as an optimal minimization problem. Selection, crossover, and mutation are the three main operators of GA. The algorithm is usually employed to find the minimal total distance needed to visit all the nodes in a TSP. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed crossover operator consists of selecting two crossover points and creating new offspring by performing a cost comparison. The computational results, as well as a comparison with well-developed existing crossover operators, are also presented. The new crossover operator has been found to produce better results than the other crossover operators.

    Download full text (pdf)
    fulltext
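
    An illustrative reconstruction of a two-point, cost-comparing crossover for TSP tours in Python; the order-crossover filling rule and the toy distance matrix are this sketch's own choices, not necessarily the authors' exact operator:

        # Hedged sketch: two crossover points, order-preserving offspring,
        # and a cost comparison that keeps the cheaper child.
        import random

        def tour_cost(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                       for i in range(len(tour)))

        def ox(parent_a, parent_b):
            """Order crossover between two city permutations."""
            n = len(parent_a)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j] = parent_a[i:j]                       # keep a slice of parent A
            fill = [c for c in parent_b if c not in child]   # rest in parent B's order
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            return child

        def crossover(parent_a, parent_b, dist):
            c1, c2 = ox(parent_a, parent_b), ox(parent_b, parent_a)
            return min(c1, c2, key=lambda t: tour_cost(t, dist))  # cost comparison

        random.seed(0)
        dist = [[abs(a - b) for b in range(6)] for a in range(6)]  # toy distances
        print(crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 2, 4], dist))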
  • 41.
    Akter, Tahmina
    et al.
    Dept. of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Akter, Mst. Sharmin
    Dept. of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Mahmud, Tanjim
    Dept. of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Chakma, Rishita
    Dept. of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Hossain, Mohammad Shahadat
    Dept. of Computer Science and Engineering, University of Chittagong, Chittagong -4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Evaluating the Performance of Machine Learning Models in Handwritten Signature Verification2024In: 2024 Asia Pacific Conference on Innovation in Technology (APCIT), Institute of Electrical and Electronics Engineers Inc. , 2024Conference paper (Refereed)
    Abstract [en]

    Handwritten signatures remain a widely used method for personal authentication on various official documents, including bank checks and legal papers. The verification process is often labor-intensive and time-consuming, necessitating the development of efficient methods. This study evaluates the performance of machine learning models in handwritten signature verification using the ICDAR 2011 Signature and CEDAR datasets. The investigation involves preprocessing, feature extraction using CNN architectures, and optimization techniques. The most effective models undergo a rigorous evaluation process, followed by classification using supervised ML algorithms such as linear SVM, random forest, logistic regression, and polynomial SVM. The results indicate that the VGG16 architecture, optimized with the Adam optimizer, achieves satisfactory performance metrics. This study demonstrates the potential of ML methodologies to enhance the efficiency and accuracy of signature verification, offering a robust solution for document authentication.
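
    A hedged sketch of CNN feature extraction followed by a classical classifier, assuming TensorFlow/Keras and scikit-learn; random arrays stand in for signature crops, and the frozen-VGG16-plus-linear-SVM pairing is one plausible reading of the pipeline above:

        # Hedged sketch: VGG16 as a frozen feature extractor feeding a linear SVM.
        import numpy as np
        import tensorflow as tf
        from sklearn.svm import LinearSVC

        base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                           pooling="avg", input_shape=(224, 224, 3))

        def embed(images):
            x = tf.keras.applications.vgg16.preprocess_input(images)
            return base.predict(x, verbose=0)    # one 512-d vector per image

        # Random stand-ins for genuine/forged signature crops.
        rng = np.random.default_rng(0)
        images = rng.uniform(0, 255, size=(16, 224, 224, 3)).astype("float32")
        labels = np.array([0, 1] * 8)            # 0 = genuine, 1 = forged

        svm = LinearSVC().fit(embed(images), labels)
        print("train accuracy:", svm.score(embed(images), labels))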

  • 42.
    Akter, Tahmina
    et al.
    Dept. of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Akter, Mst. Sharmin
    Dept. of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Mahmud, Tanjim
    Dept. of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Islam, Dilshad
    Dept. of Physical and Mathematical Sciences, Chattogram Veterinary and Animal Sciences University, Bangladesh.
    Hossain, Mohammad Shahadat
    Dept. of Computer Science and Engineering, University of Chittagong, Chittagong-4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Evaluating Machine Learning Methods for Bangla Text Emotion Analysis2024In: 2024 Asia Pacific Conference on Innovation in Technology (APCIT), Institute of Electrical and Electronics Engineers Inc. , 2024Conference paper (Refereed)
    Abstract [en]

    Text-based emotion identification goes beyond simple sentiment analysis by capturing emotions in a more nuanced way, akin to shades of gray rather than just positive or negative sentiments. This paper details our experiments with emotion analysis on Bangla text. We collected a corpus of user comments from various social media groups discussing socioeconomic and political topics in order to identify six emotions: sadness, disgust, surprise, fear, anger, and joy. We evaluated the performance of four widely used machine learning algorithms (RF, DT, k-NN, and SVM) alongside three popular deep learning approaches (CNN, LSTM, and Transformer learning), using TF-IDF feature extraction and word embedding techniques. The results showed that, among the machine learning algorithms, DT, RF, k-NN, and SVM achieved accuracy scores of 82%, 84%, 73%, and 83%, respectively. In contrast, the deep learning models CNN and LSTM achieved higher performance, with accuracies of 85% and 86%, respectively. These findings highlight the effectiveness of traditional ML and DL approaches in detecting emotions from Bangla social media texts and indicate significant potential for further advancements in this area.
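
    A minimal sketch of the TF-IDF plus SVM baseline, assuming scikit-learn; the placeholder strings stand in for the Bangla corpus, which is not reproduced here:

        # Hedged sketch: TF-IDF features with a linear SVM over emotion labels.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        texts = ["user comment one", "user comment two",
                 "user comment three", "user comment four"]   # stand-in corpus
        labels = ["joy", "anger", "sadness", "fear"]

        pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        pipe.fit(texts, labels)
        print(pipe.predict(["a new comment"]))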

  • 43.
    Akter, Tahmina
    et al.
    Dept. of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Mahmud, Tanjim
    Dept. of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Chakma, Rishita
    Dept. of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Datta, Nippon
    Dept. of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Dept. of Computer Science and Engineering, University of Chittagong, Chittagong -4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    IoT in Action: Design and Implementation of a Tank Water Monitoring System2024In: 2024 Second International Conference on Inventive Computing and Informatics (ICICI), Institute of Electrical and Electronics Engineers Inc. , 2024, p. 755-760Conference paper (Refereed)
    Abstract [en]

    Water management in residential areas often faces challenges such as unpredictable shortages and damaging overflows due to inadequate monitoring of tank water levels. This study presents the design and implementation of an Internet of Things (IoT)-based tank water monitoring system aimed at providing a reliable and efficient solution to these issues. Utilizing advanced sensor technology, the system accurately monitors water levels, volume, and quality within storage tanks. It incorporates dual sensors that enhance reliability: one sensor manages the water level, initiating pump activation when levels fall below a critical threshold, while the second sensor prevents overflows by deactivating the pump once the water level reaches a pre-set maximum. The system is designed to be user-friendly, offering real-time water level data and control via an Android application or web dashboard. This allows for remote operation of the motor pump, ensuring that water availability is consistent and secure, while also safeguarding against overflows and subsequent water wastage. The implementation of this IoT system demonstrates significant potential for enhancing water resource management in residential settings, promoting both sustainability and ease of use.
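
    A plain-Python sketch of the dual-threshold pump logic described above; the threshold values, sensor readings, and function names are hypothetical:

        # Hedged sketch: hysteresis pump control with two level thresholds.
        LOW_THRESHOLD = 20      # percent: start the pump below this level
        HIGH_THRESHOLD = 95     # percent: stop the pump at this level

        def control_pump(level_percent, pump_on):
            """Two thresholds prevent rapid on/off cycling near one setpoint."""
            if level_percent < LOW_THRESHOLD:
                return True                 # low-level sensor: activate pump
            if level_percent >= HIGH_THRESHOLD:
                return False                # overflow sensor: deactivate pump
            return pump_on                  # between thresholds: keep current state

        state = False
        for reading in [18, 40, 80, 96, 60, 15]:     # simulated sensor readings
            state = control_pump(reading, state)
            print(f"level {reading:3d}% -> pump {'ON' if state else 'OFF'}")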

  • 44.
    Akter, Tahmina
    et al.
    Dept. of CSE, Port City International University, Chittagong, Bangladesh.
    Mahmud, Tanjim
    Department of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Chakma, Rishita
    Dept. of CSE, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Datta, Nippon
    Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Dept. of CSE, University of Chittagong, Chittagong -4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    IoT-based Precision Agriculture Monitoring System: Enhancing Agricultural Efficiency2024In: 2024 Second International Conference on Inventive Computing and Informatics (ICICI), Institute of Electrical and Electronics Engineers Inc. , 2024, p. 749-754Conference paper (Refereed)
    Abstract [en]

    This research study introduces an IoT-based Agricultural Monitoring System designed to enhance precision farming practices. Employing Arduino microcontrollers and a network of sensors including water level, soil moisture, temperature, and pH, the system enables real-time monitoring of agricultural parameters. Integration with GSM and WiFi modules facilitates data communication and remote-control capabilities. Additionally, solar power integration enhances sustainability. The collected data is uploaded to a cloud platform for analysis, providing farmers with actionable insights for informed decision-making. The study aims to optimize resource utilization, improve crop yield, and promote sustainable agriculture practices.

  • 45.
    Akter, Tahmina
    et al.
    Dept. of CSE, Port City International University, Chittagong, Bangladesh.
    Mahmud, Tanjim
    Dept. of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Chakma, Rishita
    Dept. of CSE, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Datta, Nippon
    Dept. of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Dept. of Computer Science and Engineering, University of Chittagong, Chittagong-4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Smart Monitoring and Control of Hydroponic Systems Using IoT Solutions2024In: 2024 Second International Conference on Inventive Computing and Informatics (ICICI), Institute of Electrical and Electronics Engineers Inc. , 2024, p. 761-767Conference paper (Refereed)
    Abstract [en]

    Hydroponics, a method of growing plants without soil, offers a viable solution for agricultural production in areas with limited space or adverse soil conditions. This soil-less cultivation method circumvents the lengthy decomposition process associated with traditional soil-based agriculture, reducing the risk of disease and the associated costs. This study introduces a sophisticated Internet of Things (IoT) framework designed to optimize the monitoring and management of hydroponic systems. The core of this system is a suite of multimodal sensors that continuously measure critical environmental and nutritional parameters such as temperature, humidity, nutrient levels, pH, and water levels. These sensors are integrated with a microcontroller that collects and transmits data to a cloud-based platform for storage and further analysis. Furthermore, an interactive website allows users to access this data remotely and adjust the system settings based on real-time information. The system also incorporates an automated control mechanism that adjusts the environment of the hydroponic system based on sensor inputs and predefined algorithms, ensuring optimal plant growth conditions. By providing a comprehensive and adaptive approach to hydroponic management, this IoT-based system enhances the efficiency and effectiveness of hydroponic farming, making it adaptable to various setups and scalable for different operational sizes. This study demonstrates the potential of integrating advanced technologies like IoT into agricultural practices to enhance productivity and sustainability.

  • 46.
    Akter, Tahmina
    et al.
    Dept. of CSE, Port City International University, Chittagong, Bangladesh.
    Mahmud, Tanjim
    Dept. of CSE, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Das, Utpol Kanti
    Dept. of CSE, Port City International University, Chittagong, Bangladesh.
    Chakraborty, Prosenjit
    Dept. of CSE, Rangamati Science and Technology University, Rangamati-4500, Bangladesh.
    Sharmen, Nahed
    Applied Microbiology, Kitami Institute of Technology, 090-8507 Hokkaido, Japan.
    Hossain, Mohammad Shahadat
    Dept. of CSE, University of Chittagong, Chittagong -4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Heartbeat Sound Analysis: Integrating Deep Learning Models for Classification2024In: International Conference on Electrical, Computer, and Energy Technologies, ICECET 2024, Institute of Electrical and Electronics Engineers Inc. , 2024Conference paper (Refereed)
  • 47.
    Al Arafat, Md. Mahedi
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, 4331, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, 4331, Bangladesh.
    Hossain, Delowar
    Cumming School of Medicine, University of Calgary, Calgary, AB, T2N 1N4, Canada.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Neural Network-Based Obstacle and Pothole Avoiding Robot2023In: Proceedings of the Fourth International Conference on Trends in Computational and Cognitive Engineering - TCCE 2022 / [ed] M. Shamim Kaiser; Sajjad Waheed; Anirban Bandyopadhyay; Mufti Mahmud; Kanad Ray, Springer Science and Business Media Deutschland GmbH , 2023, Vol. 1, p. 173-184Conference paper (Refereed)
    Abstract [en]

    The main challenge for any mobile robot is to detect and avoid obstacles and potholes. This paper presents the development and implementation of a novel mobile robot. An Arduino Uno is used as the processing unit of the robot, and a Sharp distance-measurement sensor and ultrasonic sensors take inputs from the environment. The robot trains a neural network, based on a feedforward backpropagation algorithm applied to a truth table, to detect and avoid obstacles and potholes. Our experimental results show that the developed system can reliably detect and avoid obstacles and potholes and navigate its environment.
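
    A hedged NumPy sketch of a small feedforward network trained by backpropagation on a truth table; the table entries, layer sizes, and learning rate are illustrative, not the paper's exact configuration:

        # Hedged sketch: tiny feedforward net learning a sensor-to-steering table.
        import numpy as np

        rng = np.random.default_rng(0)
        # Truth table: [left obstacle, right obstacle, pothole] -> [turn left, turn right]
        X = np.array([[0,0,0],[1,0,0],[0,1,0],[0,0,1],[1,1,0],[1,0,1]], float)
        Y = np.array([[0,0],[0,1],[1,0],[1,0],[0,0],[0,1]], float)

        def with_bias(a):
            return np.hstack([a, np.ones((a.shape[0], 1))])   # append bias input

        W1 = rng.normal(scale=0.5, size=(4, 8))   # 3 inputs + bias -> 8 hidden
        W2 = rng.normal(scale=0.5, size=(9, 2))   # 8 hidden + bias -> 2 outputs
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(10000):                    # plain batch gradient descent
            H = sigmoid(with_bias(X) @ W1)        # forward: hidden layer
            O = sigmoid(with_bias(H) @ W2)        # forward: output layer
            dO = (O - Y) * O * (1 - O)            # output delta (squared error)
            dH = (dO @ W2[:-1].T) * H * (1 - H)   # hidden delta (skip bias row)
            W2 -= 0.5 * with_bias(H).T @ dO
            W1 -= 0.5 * with_bias(X).T @ dH

        print(np.round(O, 2))                     # should closely match Y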

  • 48.
    Al Banna, Md. Hasan
    et al.
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Ghosh, Tapotosh
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Al Nahian, Md. Jaber
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Taher, Kazi Abu
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Kaiser, M. Shamim
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka 1342, Bangladesh.
    Mahmud, Mufti
    Department of Computer Science, Nottingham Trent University, NG11 8NS – Nottingham, UK.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Attention-based Bi-directional Long-Short Term Memory Network for Earthquake Prediction2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 56589-56603Article in journal (Refereed)
    Abstract [en]

    An earthquake is a tremor felt on the surface of the earth, created by the movement of the major pieces of its outer shell. To date, many attempts have been made to forecast earthquakes, with some success, but the resulting models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good option for building the model because of its memory-keeping ability. Using the Keras tuner, the best model was selected from candidate models composed of combinations of various LSTM architectures and dense layers. The selected model used seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes of the following month. An attention mechanism was added to the LSTM architecture to improve the model's earthquake occurrence prediction accuracy, which was 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, which provided a root mean square error of 1.25.

    Download full text (pdf)
    fulltext
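
    A hedged Keras sketch of a bidirectional LSTM with a dot-product attention layer for next-month occurrence prediction; shapes, sizes, and the pooling head are illustrative:

        # Hedged sketch: BiLSTM with self-attention over the time dimension.
        import tensorflow as tf
        from tensorflow.keras import layers

        T, F = 12, 8                                 # months of history x seismic indicators
        inputs = tf.keras.Input(shape=(T, F))
        x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
        x = layers.Attention()([x, x])               # dot-product self-attention
        x = layers.GlobalAveragePooling1D()(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)   # earthquake next month?

        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()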
  • 49.
    Alakärppä, Ismo
    et al.
    University of Lapland, Rovaniemi.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Distance- Spanning Technology.
    Hosio, Simo
    University of Oulu.
    Johansson, Dan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ojala, Timo
    University of Oulu.
    NIMO overall architecture and service enablers2014Report (Other academic)
    Abstract [en]

    This article describes the architecture and service enablers developed in the NIMO project. Furthermore, it identifies future challenges and knowledge gaps in upcoming ICT service development for public sector units empowering citizens with enhanced tools for interaction and participation. We foresee crowdsourced applications where citizens contribute with dynamic, timely and geographically spread gathered information.

    Download full text (pdf)
    FULLTEXT01
  • 50.
    Alam, Md. Eftekhar
    et al.
    International Islamic University Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An IoT-Belief Rule Base Smart System to Assess Autism2018In: Proceedings of the 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018), IEEE, 2018, p. 671-675Conference paper (Refereed)
    Abstract [en]

    A hybrid Internet-of-Things (IoT) and Belief Rule Base (BRB) system is introduced to assess autism spectrum disorder (ASD). This smart system can automatically collect sign and symptom data from various autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight, and degree of belief. The IoT-BRB system classifies children with autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with those of fuzzy and expert-based systems. The proposed system outperformed the state-of-the-art fuzzy system and expert system.

    Download full text (pdf)
    fulltext