Battery storage is emerging as a key component of intelligent green electricity systems. The battery is monetized through market participation, which usually involves bidding. Bidding is a multi-objective optimization problem, involving targets such as maximizing market compensation and minimizing penalties for failing to provide the service as well as costs of battery aging. In this article, battery participation in primary frequency reserve markets is investigated. Reinforcement learning is applied for the optimization. In previous research, only simplified formulations of battery aging have been used in the reinforcement learning formulation, so it is unclear how the optimizer would perform with a real battery. In this article, a physics-based battery aging model is used to assess the aging. The contribution of this article is a methodology involving a realistic battery simulation to assess the performance of the trained RL agent with respect to battery aging, in order to inform the selection of the weighting of the aging term in the RL reward formula. The RL agent performs day-ahead bidding on the Finnish Frequency Containment Reserves for Normal Operation market, with the objective of maximizing market compensation, minimizing market penalties, and minimizing aging costs.
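As a rough illustration of the weighted reward described above (a sketch under stated assumptions, not the paper's exact formulation), the snippet below combines market compensation, penalties, and an aging cost into a single scalar reward and sweeps the aging weight; all names and numbers are hypothetical.

```python
# A sketch under stated assumptions, not the paper's formulation: the reward
# trades market compensation against penalties and a weighted aging cost.
def step_reward(compensation: float, penalty: float, aging_cost: float,
                w_aging: float) -> float:
    """Hypothetical per-step reward: compensation minus penalty minus weighted aging."""
    return compensation - penalty - w_aging * aging_cost

# Sweeping the aging weight shows how strongly aging is penalized relative to revenue.
for w in (0.0, 0.5, 1.0, 2.0):
    print(w, step_reward(compensation=120.0, penalty=5.0, aging_cost=30.0, w_aging=w))
```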
Battery storage systems are an essential element of the emerging smart grid. Compared to other distributed intelligent energy resources, batteries have the advantage of being able to react rapidly to events such as renewable generation fluctuations or grid disturbances. There is a lack of research on ways to profitably exploit this ability. Any solution needs to consider rapid electrical phenomena as well as the much slower dynamics of the relevant electricity markets. Reinforcement learning is a branch of artificial intelligence that has shown promise in optimizing complex problems involving uncertainty. This article applies reinforcement learning to the problem of trading batteries. The problem involves two timescales, both of which are important for profitability. Firstly, trading the battery capacity must occur on the timescale of the chosen electricity markets. Secondly, the real-time operation of the battery must ensure that no financial penalties are incurred from failing to meet the technical specification. The trading-related decisions must be made under uncertainty, such as unknown future market prices and unpredictable power grid disturbances. In this article, a simulation model of a battery system is proposed as the environment in which to train a reinforcement learning agent to make such decisions. The system is demonstrated with an application of the battery to the Finnish primary frequency reserve markets.
The popularity of network virtualization has recently regained considerable momentum because of the emergence of OpenFlow technology. OpenFlow essentially decouples the data plane from the control plane and promotes hardware programmability. Consequently, OpenFlow facilitates the implementation of network virtualization. This study aims to provide an overview of different approaches to creating a virtual network using OpenFlow technology. The paper also presents the OpenFlow components to compare conventional network architecture with OpenFlow network architecture, particularly in terms of virtualization. A thematic OpenFlow network virtualization taxonomy is devised to categorize network virtualization approaches. Several testbeds that support OpenFlow network virtualization are discussed with case studies to show the capabilities of OpenFlow virtualization. Moreover, the advantages of popular OpenFlow controllers that are designed to enhance network virtualization are compared and analyzed. Finally, we present key research challenges that mainly focus on security, scalability, reliability, isolation, and monitoring in the OpenFlow virtual environment. Numerous potential directions to tackle the problems related to OpenFlow network virtualization are likewise discussed.
The use of medical images has been continuously increasing, which makes manual investigation of every image a difficult task. This study focuses on classifying brain magnetic resonance images (MRIs) as normal, where a brain tumor is absent, or as abnormal, where a brain tumor is present. A hybrid intelligent system for automatic brain tumor detection and MRI classification is proposed. This system assists radiologists in interpreting the MRIs, improves brain tumor diagnostic accuracy, and directs the focus toward the abnormal images only. The proposed computer-aided diagnosis (CAD) system consists of five steps: MRI preprocessing to remove background noise, image segmentation by combining Otsu binarization and K-means clustering, feature extraction using the discrete wavelet transform (DWT), dimensionality reduction of the features by applying principal component analysis (PCA), and classification, in which the major features are submitted to a kernel support vector machine (KSVM). The performance evaluation of the proposed system measured a maximum classification accuracy of 100% on an available MRI database. The processing time for all steps was recorded as 1.23 seconds. The obtained results demonstrate the superiority of the proposed system.
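The following is a minimal, hypothetical sketch of the five-step pipeline (Otsu plus K-means segmentation, DWT features, PCA reduction, kernel SVM classification); the parameter choices and the synthetic stand-in data are assumptions, not the paper's setup.

```python
# Hypothetical sketch of the described pipeline; parameters and synthetic data
# are illustrative assumptions, not the paper's configuration.
import numpy as np
import pywt
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def extract_features(img: np.ndarray) -> np.ndarray:
    # Segmentation: Otsu binarization combined with K-means on pixel intensities.
    mask = img > threshold_otsu(img)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(img.reshape(-1, 1))
    roi = img * mask * labels.reshape(img.shape)
    # Feature extraction: level-1 Haar DWT approximation coefficients.
    cA, (cH, cV, cD) = pywt.dwt2(roi, "haar")
    return cA.ravel()

# Synthetic stand-in for an MRI dataset: 40 images of size 64x64 with labels.
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                 # 0 = normal, 1 = abnormal

X_red = PCA(n_components=10).fit_transform(X)   # dimensionality reduction
clf = SVC(kernel="rbf").fit(X_red, y)           # kernel SVM classifier
print(clf.score(X_red, y))
```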
Computer-aided diagnosis (CAD) systems have become very important for the medical diagnosis of brain tumors. These systems improve diagnostic accuracy and reduce the required time. In this paper, a two-stage CAD system has been developed for automatic detection and classification of brain tumors through magnetic resonance images (MRIs). In the first stage, the system classifies brain tumor MRIs into normal and abnormal images. In the second stage, the type of tumor is classified as benign (noncancerous) or malignant (cancerous) from the abnormal MRIs. The proposed CAD combines the following computational methods: MRI image segmentation by K-means clustering, feature extraction using the discrete wavelet transform (DWT), and feature reduction by applying principal component analysis (PCA). The two-stage classification has been conducted using a support vector machine (SVM). Performance evaluation of the proposed CAD has achieved promising results on a non-standard MRI database.
The successful early diagnosis of brain tumors plays a major role in improving the treatment outcomes and thus improving patient survival. Manually evaluating the numerous magnetic resonance imaging (MRI) images produced routinely in the clinic is a difficult process. Thus, there is a crucial need for computer-aided methods with better accuracy for early tumor diagnosis. Computer-aided brain tumor diagnosis from MRI images consists of tumor detection, segmentation, and classification processes. Over the past few years, many studies have focused on traditional or classical machine learning techniques for brain tumor diagnosis. Recently, interest has developed in using deep learning techniques for diagnosing brain tumors with better accuracy and robustness. This study presents a comprehensive review of traditional machine learning techniques and evolving deep learning techniques for brain tumor diagnosis. This review paper identifies the key achievements reflected in the performance measurement metrics of the applied algorithms in the three diagnosis processes. In addition, this study discusses the key findings and draws attention to the lessons learned as a roadmap for future research.
In order to facilitate data-driven solutions for early detection of atrial fibrillation (AF), the 2017 CinC conference challenge was devoted to automatic AF classification based on short ECG recordings. The proposed solutions concentrated on maximizing the classifiers' F1 score, whereas the complexity of the classifiers was not considered. However, we argue that complexity must be addressed, as it places restrictions on the applicability of inexpensive devices for AF monitoring outside hospitals. Therefore, this study investigates the feasibility of complexity reduction by analyzing one of the solutions presented for the challenge.
Wireless Sensor Networks (WSNs) have developed to the point where they can be used in agriculture to enable optimal irrigation scheduling. Since there is an absence of widely available methods to support effective agricultural practice in different weather conditions, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP-based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The fundamental performance of this network is emulated on Tmote Sky motes for 6LoWPAN over an IEEE 802.15.4 radio link using Contiki OS and the Cooja simulator. The simulation results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. This will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of the sensors' power consumption.
Wireless Sensor Networks (WSNs) are making a remarkable contribution to real-time decision making by sensing and actuating on the surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, such as irrigation scheduling based on moisture-level data sensed by the sensors. Since WSNs are deployed in resource-constrained environments, the lifetime of the sensors is crucial for normal operation of the networks. In this regard, the routing protocol is a prime factor in prolonging sensor lifetime. This research focuses on the performance analysis of several clustering-based routing protocols in order to select the best one. Four algorithms are considered, namely Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP) and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in a Matlab framework using the mathematical models of these algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CH) per round, throughput and average residual energy of a node. The experimental results illustrate that TEEN provides a longer stable region and lifetime than the others, while SEP ensures higher throughput.
Because of the increased popularity and fast expansion of the Internet as well as the Internet of Things, networks are growing rapidly in every corner of society. As a result, huge amounts of data travel across computer networks, threatening data integrity, confidentiality and reliability. Network security is therefore a pressing issue for preserving the integrity of systems and data. Traditional safeguards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalies with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating characteristic curve, accuracy, F-score and model build time, this paper illustrates the performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
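Below is a rough scikit-learn analogue of the WEKA-based comparison described above; the classifiers, synthetic data, and metric loop are illustrative stand-ins (for example, DecisionTreeClassifier approximates J48, and Naïve Bayes Tree has no direct scikit-learn counterpart), not the paper's exact configuration.

```python
# Illustrative stand-in for the WEKA experiment: train several classifiers and
# report accuracy, precision, recall, F-score, ROC AUC, and model build time.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "NaiveBayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    build_time = time.perf_counter() - start
    pred = model.predict(X_te)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(accuracy_score(y_te, pred), 3),
          round(precision_score(y_te, pred), 3),
          round(recall_score(y_te, pred), 3),
          round(f1_score(y_te, pred), 3),
          round(auc, 3), f"{build_time:.2f}s")
```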
In many complex real-world computer vision domains, such as medical diagnostics and document analysis, the lack of labeled data often limits the effectiveness of traditional deep learning models. This study addresses these challenges by enhancing Unsupervised Curriculum Learning (UCL), a deep learning framework that automatically discovers meaningful patterns without the need for labeled data. Originally designed for remote sensing imagery, UCL has been expanded in this work to improve classification performance in a variety of domain-specific applications. UCL integrates a convolutional neural network, clustering algorithms, and selection techniques to classify images without supervision. We introduce key improvements, such as spectral clustering, outlier detection, and dimensionality reduction, to boost the framework's accuracy. Experimental results demonstrate significant performance gains, with F1-scores increasing from 68% to 94% on a three-class subset of the CIFAR-10 dataset and from 68% to 75% on a five-class subset. The updated UCL also achieved F1-scores of 85% in medical diagnosis, 82% in scene recognition, and 62% in historical document classification. These findings underscore the potential of UCL in complex real-world applications and point to areas where further advancements are needed to maximize its utility across diverse fields.
In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all the decisions are made. The proposed control architecture is a solution to overcome problems such as the lack of flexibility and re-configurability found in most conventional systems. This has been achieved by employing a method based on the new IEC 61499 function block standard, which is proposed for distributed control systems. This paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into the area of home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.
This book provides a comprehensive overview of computational intelligence methods for semantic knowledge management. Contrary to popular belief, methods for the semantic management of information were created several decades ago, long before the birth of the Internet. In fact, it was back in 1945 when Vannevar Bush introduced the idea for the first protohypertext: the MEMEX (MEMory + indEX) machine. In the years that followed, Bush's idea influenced the development of early hypertext systems until, in the 1980s, Tim Berners-Lee developed the idea of the World Wide Web (WWW) as it is known today. From then on, there was an exponential growth in research and industrial activities related to the semantic management of information and its exploitation in different application domains, such as healthcare, e-learning and energy management.
However, semantics methods are not yet able to address some of the problems that naturally characterize knowledge management, such as the vagueness and uncertainty of information. This book reveals how computational intelligence methodologies, due to their natural inclination to deal with imprecision and partial truth, are opening new positive scenarios for designing innovative semantic knowledge management architectures.
Despite evidence of rising popularity of video on the web (or VOW), little is known about how users access video. However, such a characterization can greatly benefit the design of multimedia systems such as web video proxies and VOW servers. Hence, this paper presents an analysis of trace data obtained from an ongoing VOW experiment in Lulea University of Technology, Sweden. This experiment is unique as video material is distributed over a high bandwidth network allowing users to make access decisions without the network being a major factor. Our analysis revealed a number of interesting discoveries regarding user VOW access. For example, accesses display high temporal locality: several requests for the same video title often occur within a short time span. Accesses also exhibited spatial locality of reference whereby a small number of machines accounted for a large number of overall requests. Another finding was a browsing pattern where users preview the initial portion of a video to find out if they are interested. If they like it, they continue watching, otherwise they halt it. This pattern suggests that caching the first several minutes of video data should prove effective. Lastly, the analysis shows that, contrary to previous studies, ranking of video titles by popularity did not fit a Zipfian distribution.
Online social media has completely transformed how we communicate with each other. While online discussion platforms are available in the form of applications and websites, an emergent outcome of this transformation is the phenomenon of 'opinion leaders'. A number of previous studies have attempted to identify opinion leaders in online discussion networks. In particular, Feng (2016 Comput. Hum. Behav. 54, 43–53. (doi:10.1016/j.chb.2015.07.052)) identified five different types of central users and outlined their communication patterns in an online communication network. However, the presented work focuses on a limited time span. The question remains as to whether similar communication patterns exist that will stand the test of time over longer periods. Here, we present a critical analysis of the Feng framework both for short-term as well as for longer periods. Additionally, for validation, we take another case study presented by Udanor et al. (2016 Program 50, 481–507. (doi:10.1108/PROG-02-2016-0011)) to further understand these dynamics. Results indicate that not all Feng-based central users may be identifiable in the longer term. Conversation starters and influencers were noted as opinion leaders in the network; these users play an important role as information sources in long-term discussions, whereas network builders and active engagers help in connecting otherwise sparse communities. Furthermore, we discuss the changing positions of opinion leaders and their power to keep isolates interested in an online discussion network.
Glaucoma detection is an important research area in intelligent systems, and it plays an important role in the medical field. Glaucoma can give rise to irreversible blindness due to a lack of proper diagnosis. Doctors need to perform many tests to diagnose this threatening disease, which requires a lot of time and expense. Moreover, affected people may not have any vision loss at the early stage of glaucoma. To detect glaucoma while lessening the time and cost, we have built a CNN-based model using Inception V3. We used a total of 6072 images, of which 2336 were glaucomatous and 3736 were normal fundus images. We used 5460 images for training our model and 612 images for testing. We obtained an accuracy of 0.8529 and an AUC of 0.9387. For comparison, we used the DenseNet121 and ResNet50 algorithms and obtained accuracies of 0.8153 and 0.7761, respectively.
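A hypothetical Keras sketch of an Inception V3 based binary fundus classifier in the spirit of the model described above is shown below; the input size, added layers, and optimizer are assumptions, not the paper's exact configuration.

```python
# Hypothetical transfer-learning sketch: frozen InceptionV3 backbone plus a
# small binary head for glaucoma vs. normal fundus images.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                      # keep pretrained features frozen

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # glaucomatous vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown
```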
Researchers are increasingly exploring educational games in immersive virtual reality (IVR) environments to facilitate students' learning experiences. Mainly, the effect of IVR on learning outcomes has been the focus. However, far too little attention has been paid to the influence of game elements and IVR features on learners' perceived cognition. This study examined the relationship between game elements (challenge, goal clarity, and feedback) as a pedagogical approach, features of IVR technology (immersion and interaction), and learners' perceived cognition (reflective thinking and comprehension). An experiment was conducted with 49 undergraduate students who played an IVR game-based application (iThinkSmart) containing mini games developed to facilitate learners' computational thinking competency. The study employed partial least squares structural equation modelling to investigate the effect of educational game elements and learning contents on learners' cognition. Findings show that goal clarity is the main predictor of learners' reflective thinking and comprehension in an educational game-based IVR application. It was also confirmed that the immersion and interaction experience impacts learners' comprehension. Notably, adequate learning content, in terms of the organisation and relevance of the content contained in an IVR game-based application, significantly moderates learners' reflective thinking and comprehension. The findings of this study have implications for educators and developers of IVR game-based interventions to facilitate learning in the higher education context. In particular, the implication of this study touches on the aspect of learners' cognitive factors that aim to produce 21st-century problem-solving skills through critical thinking.
Understanding the principles of computational thinking (CT), e.g., problem abstraction, decomposition, and recursion, is vital for computer science (CS) students. Unfortunately, these concepts can be difficult for novice students to understand. One way students can develop CT skills is to involve them in the design of an application to teach CT. This study focuses on co-designing mini games to support teaching and learning CT principles and concepts in an online environment. Online co-design (OCD) of mini games enhances students’ understanding of problem-solving through a rigorous process of designing contextual educational games to aid their own learning. Given the current COVID-19 pandemic, where face-to-face co-designing between researchers and stakeholders could be difficult, OCD is a suitable option. CS students in a Nigerian higher education institution were recruited to co-design mini games with researchers. Mixed research methods comprising qualitative and quantitative strategies were employed in this study. Findings show that the participants gained relevant knowledge, for example, how to (i) create game scenarios and game elements related to CT, (ii) connect contextual storyline to mini games, (iii) collaborate in a group to create contextual low-fidelity mini game prototypes, and (iv) peer review each other’s mini game concepts. In addition, students were motivated toward designing educational mini games in their future studies. This study also demonstrates how to conduct OCD with students, presents lesson learned, and provides recommendations based on the authors’ experience.
Computational thinking (CT) has become an essential skill nowadays. For young students, CT competency is required to prepare them for future jobs. This competency can facilitate students' understanding of programming knowledge, which has been a challenge for many novices pursuing a computer science degree. This study focuses on designing and implementing a virtual reality (VR) game-based application (iThinkSmart) to support CT knowledge. The study followed the design science research methodology to design, implement, and evaluate the first prototype of the VR application. An initial evaluation of the prototype was conducted with 47 computer science students from a Nigerian university who voluntarily participated in an experimental process. To determine what works and what needs to be improved in the iThinkSmart VR game-based application, two groups were randomly formed, consisting of the experimental (n = 21) and the control (n = 26) groups, respectively. Our findings suggest that VR increases motivation and therefore increases students' CT skills, which contributes to knowledge regarding the affordances of VR in education and particularly provides evidence on the use of visualization of CT concepts to facilitate programming education. Furthermore, the study revealed that immersion, interaction, and engagement in a VR educational application can promote students' CT competency in higher education institutions (HEIs). In addition, it was shown that students who played the iThinkSmart VR game-based application gained higher cognitive benefits and showed increased interest in and a more positive attitude toward learning CT concepts. Although further investigation is required in order to gain more insights into students' learning process, this study made significant contributions in positioning CT in the HEI context and provides empirical evidence regarding the use of educational VR mini games to support students' learning achievements.
This paper presents iThinkSmart, an immersive virtual reality-based application to facilitate the learning of computational thinking (CT) concepts. The tool was developed to supplement the traditional teaching and learning of CT by integrating three virtual mini games, namely River Crossing, Tower of Hanoi, and Mount Patti treasure hunt, to foster immersion, interaction, engagement, and personalization for an enhanced learning experience. iThinkSmart mini games can be played on a smartphone with a Google Cardboard and hand controller. This first prototype of the game assesses players' CT competency and renders feedback based on learning progress.
This study examines the research landscape of smart learning environments by conducting a comprehensive bibliometric analysis of the field over the years. The study focused on the research trends, scholars' productivity, and thematic focus of scientific publications in the field of smart learning environments. A total of 1081 peer-reviewed articles were retrieved from the Scopus database. A bibliometric approach was applied to analyse the data for a comprehensive overview of the trends, thematic focus, and scientific production in the field of smart learning environments. The result from this bibliometric analysis indicates that the first paper on smart learning environments was published in 2002, marking the beginning of the field. Among other sources, "Computers & Education," "Smart Learning Environments," and "Computers in Human Behaviour" are the most relevant outlets publishing articles associated with smart learning environments. The work of Kinshuk et al., published in 2016, stands out as the most cited work among the analysed documents. The United States has the highest number of scientific productions and remains the most relevant country in the smart learning environment field. The results also identify the most prolific scholars and the most relevant institutions in the field. Keywords such as "learning analytics," "adaptive learning," "personalized learning," "blockchain," and "deep learning" remain the trending keywords. Furthermore, thematic analysis shows that "digital storytelling" and its associated components, such as "virtual reality," "critical thinking," and "serious games," are the emerging themes of smart learning environments but need to be further developed to establish more ties with "smart learning". The study provides a useful contribution to the field by clearly presenting a comprehensive overview of research hotspots, thematic focus, and future directions of the field. These findings can guide scholars, especially early-career researchers in the field of smart learning environments, in defining their research focus and the aspects of smart learning that can be explored.
This study investigated the role of virtual reality (VR) in computer science (CS) education over the last 10 years by conducting a bibliometric and content analysis of articles related to the use of VR in CS education. A total of 971 articles published in peer-reviewed journals and conferences were collected from Web of Science and Scopus databases to conduct the bibliometric analysis. Furthermore, content analysis was conducted on 39 articles that met the inclusion criteria. This study demonstrates that VR research for CS education was faring well around 2011 but witnessed low production output between the years 2013 and 2016. However, scholars have increased their contribution in this field recently, starting from the year 2017. This study also revealed prolific scholars contributing to the field. It provides insightful information regarding research hotspots in VR that have emerged recently, which can be further explored to enhance CS education. In addition, the quantitative method remains the most preferred research method, while the questionnaire was the most used data collection technique. Moreover, descriptive analysis was primarily used in studies on VR in CS education. The study concludes that even though scholars are leveraging VR to advance CS education, more effort needs to be made by stakeholders across countries and institutions. In addition, a more rigorous methodological approach needs to be employed in future studies to provide more evidence-based research output. Our future study would investigate the pedagogy, content, and context of studies on VR in CS education.
Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges), but many graphs on the Web - e.g. microblogging Web sites, trust networks or the Web graph itself - are often directed. Few community detection algorithms deal with directed graphs, and we lack an experimental comparison of them. In this paper we evaluate several community detection algorithms in terms of accuracy and scalability. A first group of algorithms (Label Propagation and Infomap) is explicitly designed to manage directed graphs, while a second group (e.g., WalkTrap) simply ignores edge directionality; finally, a third group of algorithms (e.g., Eigenvector) maps input graphs onto undirected ones and extracts communities from the symmetrized version of the input graph. We ran our tests on both artificial and real graphs; on artificial graphs, WalkTrap achieved the highest accuracy, closely followed by other algorithms, while Label Propagation showed outstanding scalability on both artificial and real graphs. The Infomap algorithm showcased the best trade-off between accuracy and computational performance and, therefore, has to be considered a promising tool for Web Data Analytics purposes.
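The three groups of algorithms can be sketched with python-igraph as below; the random benchmark graph and the specific method calls are assumptions used only to illustrate the grouping, not the paper's experimental setup.

```python
# Illustrative python-igraph sketch: run detectors from the three groups
# (direction-aware, direction-ignoring, symmetrizing) on a random directed graph.
import igraph as ig

g = ig.Graph.Erdos_Renyi(n=500, m=3000, directed=True)  # stand-in directed graph

# Group 1: algorithms that handle edge direction natively.
infomap = g.community_infomap()
labelprop = g.community_label_propagation()

# Group 2: WalkTrap simply ignores directionality.
walktrap = g.as_undirected().community_walktrap().as_clustering()

# Group 3: leading eigenvector runs on the symmetrized (undirected) graph.
eigen = g.as_undirected(mode="collapse").community_leading_eigenvector()

for name, cl in [("Infomap", infomap), ("LabelProp", labelprop),
                 ("WalkTrap", walktrap), ("Eigenvector", eigen)]:
    print(name, "communities:", len(cl), "largest:", max(cl.sizes()))
```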
Vehicular cloud computing is envisioned to deliver services that provide traffic safety and efficiency to vehicles. Vehicular cloud computing has great potential to change the contemporary vehicular communication paradigm. Specifically, the underutilized resources of vehicles can be shared with other vehicles to manage traffic during congestion. These resources include but are not limited to storage, computing power, and Internet connectivity. This study reviews current traffic management systems to analyze the role and significance of vehicular cloud computing in road traffic management. First, an abstraction of the vehicular cloud infrastructure in an urban scenario is presented to explore the vehicular cloud computing process. A taxonomy of vehicular clouds that defines the cloud formation, integration types, and services is presented. A taxonomy of vehicular cloud services is also provided to explore the object types involved and their positions within the vehicular cloud. A comparison of the current state-of-the-art traffic management systems is performed in terms of parameters such as vehicular ad hoc network infrastructure, Internet dependency, cloud management, scalability, traffic flow control, and emerging services. Potential future challenges and emerging technologies, such as the Internet of vehicles and its incorporation in traffic congestion control, are also discussed. Vehicular cloud computing is envisioned to have a substantial role in the development of smart traffic management solutions and in the emerging Internet of vehicles.
The explosive growth in the number of devices connected to the Internet of Things (IoT) and the exponential increase in data consumption only reflect how the growth of big data perfectly overlaps with that of IoT. The management of big data in a continuously expanding network gives rise to non-trivial concerns regarding data collection efficiency, data processing, analytics, and security. To address these concerns, researchers have examined the challenges associated with the successful deployment of IoT. Despite the large number of studies on big data, analytics, and IoT, the convergence of these areas creates several opportunities for flourishing big data and analytics for IoT systems. In this paper, we explore the recent advances in big data analytics for IoT systems as well as the key requirements for managing big data and for enabling analytics in an IoT environment. We taxonomized the literature based on important parameters. We identify the opportunities resulting from the convergence of big data, analytics, and IoT as well as discuss the role of big data analytics in IoT applications. Finally, several open challenges are presented as future research directions.
Early prediction of whether a product will go on backorder or not is necessary for optimal inventory management, which can reduce losses in sales, establish a good relationship between supplier and customer, and maximize revenues. In this study, we investigate the performance and effectiveness of tree-based machine learning algorithms in predicting the backorder of a product. The research methodology consists of data preprocessing, feature selection using a statistical hypothesis test, imbalanced learning using the random undersampling method, and performance evaluation and comparison of four tree-based machine learning algorithms, namely decision tree, random forest, adaptive boosting and gradient boosting, in terms of accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve and area under the precision-recall curve. The three main findings of this study are (1) the random forest model without feature selection and with the random undersampling method achieved the highest performance in terms of all performance metrics, (2) feature selection does not contribute to the performance enhancement of the tree-based classifiers, and (3) the random undersampling method significantly improves the performance of tree-based classifiers in product backorder prediction.
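A hedged sketch of the evaluation loop described above, using imbalanced-learn's RandomUnderSampler with the four tree-based classifiers, is given below; the synthetic imbalanced data stands in for the real backorder dataset.

```python
# Sketch of the imbalanced-learning setup: undersample the majority class, then
# fit and score four tree-based classifiers on a held-out test set.
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)

X, y = make_classification(n_samples=5000, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_rus, y_rus = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(random_state=0),
            AdaBoostClassifier(random_state=0),
            GradientBoostingClassifier(random_state=0)):
    clf.fit(X_rus, y_rus)
    prob = clf.predict_proba(X_te)[:, 1]
    print(type(clf).__name__,
          round(f1_score(y_te, clf.predict(X_te)), 3),
          round(roc_auc_score(y_te, prob), 3),           # area under ROC curve
          round(average_precision_score(y_te, prob), 3)) # area under PR curve
```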
Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, utilizing hospital resources effectively, and minimizing the mortality rate in the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. The experts' knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed in this paper. Several experiments are performed using 485 COVID-19 patients' data to evaluate the effectiveness of the proposed system. Experimental results show that, after optimization, the conjunctive belief rule-based system achieved an accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while for the disjunctive belief rule base they are 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, our proposed method shows superior performance compared with four traditional machine learning algorithms: LR, SVM, DT, and ANN. All these results validate the effectiveness of our proposed method. The proposed system will help hospital authorities to identify severe and non-severe COVID-19 patients and adopt optimal treatment plans in pandemic situations.
Tomato leaves can be infected with various infectious viruses and fungal diseases that drastically reduce tomato production and incur a great economic loss. Therefore, tomato leaf disease detection and identification are crucial for maintaining the global demand for tomatoes for a large population. This paper proposes a machine learning-based technique to identify diseases on tomato leaves and classify them into three diseases (Septoria, Yellow Curl Leaf, and Late Blight) and one healthy class. The proposed method extracts radiomics-based features from tomato leaf images and identifies the disease with a gradient boosting classifier. The dataset used in this study consists of 4000 tomato leaf disease images collected from the Plant Village dataset. The experimental results demonstrate the effectiveness and applicability of our proposed method for tomato leaf disease detection and classification.
Artificial intelligence has achieved notable advances across many applications, and the field has recently turned to developing novel methods to explain machine learning models. Deep neural networks deliver the best performance accuracy in different domains, such as text categorization, image classification, and speech recognition. Since neural network models are black boxes, they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection is a challenging research problem as misinformation endangers the lives of many online users. Therefore, transparency and explainability of COVID-19 fake news classification are necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model where BiLSTM assures classification accuracy and LIME ensures transparency and explainability. In this integrated model, since LIME behaves similarly to the original model and explains the prediction, the proposed model becomes comprehensible. The explainability of this model is measured using Kendall's tau correlation coefficient. We also employ several machine learning models and provide a comparison of their performances. Additionally, we analyze and compare the computational overhead of our proposed model with that of the other methods, since the model adopts an integrated strategy.
Explainable artificial intelligence is beneficial in converting opaque machine learning models into transparent ones and outlining how each one makes decisions in the healthcare industry. Model-agnostic techniques can be used to comprehend the variables that affect decision-making in diabetes prediction. In this project, we investigate how to generate local and global explanations for a machine learning model built on a logistic regression architecture. We trained on 253,680 survey responses from diabetes patients and applied the explainable AI techniques LIME and SHAP. LIME and SHAP were then used to explain the predictions produced by the logistic regression and random forest based models on the validation and test sets. A comparative analysis and discussion of the experimental findings for LIME and SHAP are provided, along with their strengths and weaknesses in terms of interpretation, and a discussion of future work. With a high accuracy of 86% on the test set, we used an LR architecture with a spatial attention mechanism, demonstrating the possibility of merging machine learning and explainable AI to improve diabetes prediction, diagnosis, and treatment. We also focus on various applications, difficulties, and probable future directions of machine learning models for LIME and SHAP interpreters.
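The following is a hypothetical sketch of generating LIME and SHAP explanations for a logistic regression model; the synthetic tabular data and feature names stand in for the survey dataset used in the study.

```python
# Sketch: train a logistic regression, then produce a local LIME explanation for
# one instance and a global SHAP summary (mean absolute SHAP values).
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]   # placeholder feature names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# LIME: local explanation for a single test instance.
lime_exp = LimeTabularExplainer(X_tr, feature_names=feature_names,
                                class_names=["no_diabetes", "diabetes"],
                                mode="classification")
print(lime_exp.explain_instance(X_te[0], model.predict_proba,
                                num_features=5).as_list())

# SHAP: global view via mean absolute SHAP values over the test set.
shap_values = shap.LinearExplainer(model, X_tr).shap_values(X_te)
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))
```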
The enormous use of facial expression recognition in various sectors of computer science has elevated researchers' interest in this topic. Computer vision coupled with a deep learning approach offers a way to solve several real-world problems. For instance, in robotics, analyzing information from visual content is one of the requirements for carrying out and strengthening communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in the area of computer vision. In our previous work, a facial expression recognition system was delivered that can classify an image into seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This work extends that research by proposing a real-time facial expression recognition system that can recognize a total of ten facial expressions from video streaming data, including the previous seven and three additional expressions: mockery, think, and wink. After model training, the proposed model attained high validation accuracy on a combined facial expression dataset. Moreover, the real-time validation of the proposed model is also promising.
The novel coronavirus-induced disease COVID-19 is the biggest threat to human health at the present time, and because of the virus's ability to be transmitted by its carriers, it is spreading rapidly to almost every corner of the globe. The unification of medical and IT experts is required to bring this outbreak under control. In this research, an integration of both data- and knowledge-driven approaches in a single framework is proposed to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction result, along with eight other significant risk factors associated with COVID-19 patients, is analyzed with a knowledge-driven belief rule-based expert system, which produces a survival probability for that particular patient. The reliability of the proposed integrated system has been tested using real patient data and compared with expert opinion, where the performance of the system was found to be promising.
Due to the harsh and unpredictable underwater environment, designing an energy-efficient routing protocol for underwater wireless sensor networks (UWSNs) demands more accuracy and extra computation. In the proposed scheme, we introduce a mobile sink (MS), i.e., an autonomous underwater vehicle (AUV), as well as courier nodes (CNs), to minimize the energy consumption of nodes. The MS and CNs stop at specific points for data gathering; the CNs then forward the received data to the MS for further transmission. Through the mobility of the CNs and MS, the overall energy consumption of the nodes is minimized. We perform simulations to investigate the performance of the proposed scheme and compare it to pre-existing techniques. Simulation results are compared in terms of network lifetime, throughput, path loss, transmission loss and packet drop ratio. The results show that the proposed technique performs better in terms of network lifetime, throughput, path loss and scalability.
A smart grid can be considered as a complex network where each node represents a generation unit or a consumer, whereas links represent transmission lines. One way to study complex systems is the agent-based modeling paradigm, which represents a complex system as autonomous agents interacting with each other. Previously, a number of studies in the smart grid domain have made use of the agent-based modeling paradigm. However, to the best of our knowledge, none of these studies have focused on the specification aspect of the model. The model specification is important not only for understanding but also for replication of the model. To fill this gap, this study focuses on specification methods for smart grid modeling. We adopt two specification methods, namely Overview, Design concepts, and Details and Descriptive agent-based modeling. Using these specification methods, we provide tutorials and guidelines for developing smart grid models, from conceptual modeling to a validated agent-based model through simulation. The specification study is exemplified through a case study from the smart grid domain. In the case study, we consider a large network in which different consumers and power generation units are connected with each other in different configurations. In such a network, communication takes place between consumers and generating units for energy transmission and data routing. We demonstrate how to effectively model a complex system such as a smart grid using specification methods. We analyze these two specification approaches qualitatively as well as quantitatively. Extensive experiments demonstrate that Descriptive agent-based modeling is a more useful approach than the Overview, Design concepts, and Details method, both for modeling and for replication of models for the smart grid.
Emojis are small icons or images used to express our sentiments or feelings via text messages. They are extensively used on different social media platforms such as Facebook, Twitter and Instagram. In this research paper, we consider hand-drawn emojis and classify them into 8 classes. Hand-drawn emojis are emojis drawn on any digital platform or simply on paper with a pen. This work will enable users to classify hand-drawn emojis so that they can use them on any social media without confusion. We built a local dataset of 500 images per class, giving a total of 4000 images of hand-drawn emojis. We present a system that can recognise and classify the emojis into 8 classes with a convolutional neural network model. The model favourably recognises and classifies the hand-drawn emojis with an accuracy of 97%. Some pre-trained CNN models, namely VGG16, VGG19, ResNet50, MobileNetV2, InceptionV3 and Xception, are also trained on the dataset to compare accuracy and check whether they are better than the proposed one. In addition, machine learning models such as SVM, Random Forest, AdaBoost, Decision Tree and XGBoost are also implemented on the dataset.
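A minimal Keras sketch of an 8-class CNN classifier in the spirit of the model described above follows; the layer sizes and the 64x64 grayscale input are assumptions, not the paper's architecture.

```python
# Minimal CNN sketch for 8-class image classification (hypothetical layer sizes).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # assumed grayscale input size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(8, activation="softmax"),    # eight emoji classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```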
Mosquitoes are responsible for the largest number of deaths every year throughout the world, and Bangladesh is a big sufferer of this problem. Dengue, malaria, chikungunya, zika, yellow fever and other diseases are caused by dangerous mosquito bites. The three main types of mosquitoes found in Bangladesh are Aedes, Anopheles and Culex. Their identification is crucial for taking the necessary steps to kill them in an area. Hence, a convolutional neural network (CNN) model is developed so that mosquitoes can be classified from their images. We prepared a local dataset consisting of 442 images collected from various sources. An accuracy of 70% has been achieved by running the proposed CNN model on the collected dataset. However, after augmentation of this dataset to 3,600 images, the accuracy increases to 93%. We also compare the CNN method with VGG-16, Random Forest, XGBoost and SVM. Our proposed CNN method outperforms these methods in terms of mosquito classification accuracy. Thus, this research forms an example of humanitarian technology, where data science can be used to support mosquito classification, aiding the treatment of various mosquito-borne diseases.
One of the most vital parts of medical image analysis is the classification of brain tumors. Because tumors are thought to be the origins of cancer, accurate brain tumor classification can save lives. As a result, CNN (Convolutional Neural Network)-based techniques for classifying brain tumors are frequently employed. However, there is a problem: CNNs require vast amounts of training data to produce good performance. This is where transfer learning enters the picture. In this study, we present a 4-class transfer learning approach for categorizing glioma, meningioma, and pituitary tumors as well as non-tumor images. Glioma, meningioma, and pituitary tumors are the three most prevalent types of brain tumors. Our method employs transfer learning and utilizes a pre-trained InceptionResnetV1 model to extract features from brain MRI images, which are then classified with a softmax classifier. The proposed approach outperforms all prior techniques with a mean classification accuracy of 93.95%. We evaluate our method on a Kaggle dataset. Precision, recall, and F-score are among the key performance metrics employed in this study.
Optimization problems like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. The TSP is an NP-hard minimization problem. Selection, crossover and mutation are the three main operators of a GA. The algorithm is usually employed to find the minimal total distance required to visit all the nodes in a TSP. This research presents a new crossover operator for the TSP that allows further minimization of the total distance. The proposed crossover operator selects two crossover points and creates new offspring by performing a cost comparison. The computational results as well as a comparison with well-established crossover operators are also presented. It has been found that the new crossover operator produces better results than the other crossover operators.
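The snippet below is an illustrative two-point order crossover that keeps the cheaper of the two candidate offspring; it is a sketch in the spirit of the described idea, not the paper's exact operator.

```python
# Illustrative sketch (not the paper's exact operator): two-point order crossover
# for TSP tours with a cost comparison between the two candidate offspring.
import random

def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, i, j):
    """Copy p1[i:j] and fill the remaining cities in p2's order (keeps a valid tour)."""
    segment = p1[i:j]
    rest = [c for c in p2 if c not in segment]
    return rest[:i] + segment + rest[i:]

def crossover_with_cost_comparison(p1, p2, dist):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child_a = order_crossover(p1, p2, i, j)
    child_b = order_crossover(p2, p1, i, j)
    return min((child_a, child_b), key=lambda t: tour_cost(t, dist))

# Tiny example with a random distance matrix over 6 cities.
random.seed(1)
n = 6
dist = [[0 if a == b else random.randint(1, 9) for b in range(n)] for a in range(n)]
parent1, parent2 = list(range(n)), random.sample(range(n), n)
print(crossover_with_cost_comparison(parent1, parent2, dist))
```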
Handwritten signatures remain a widely used method for personal authentication in various official documents, including bank checks and legal papers. The verification process is often labor-intensive and time-consuming, necessitating the development of efficient methods. This study evaluates the performance of machine learning models in handwritten signature verification using the ICDAR 2011 Signature and CEDAR datasets. The investigation involves preprocessing, feature extraction using CNN architectures, and optimization techniques. The most effective models undergo a rigorous evaluation process, followed by classification using supervised ML algorithms such as linear SVM, random forest, logistic regression, and polynomial SVM. The results indicate that the VGG16 architecture, optimized with the Adam optimizer, achieves satisfactory performance. This study demonstrates the potential of ML methodologies to enhance the efficiency and accuracy of signature verification, offering a robust solution for document authentication.
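A hedged sketch of the CNN-feature plus classical-classifier pipeline described above is shown below: VGG16 as a frozen feature extractor with a linear SVM on top; the random image batch stands in for preprocessed signature images.

```python
# Sketch: extract pooled VGG16 features and train a linear SVM on them.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

extractor = VGG16(weights="imagenet", include_top=False,
                  pooling="avg", input_shape=(224, 224, 3))

images = np.random.rand(16, 224, 224, 3) * 255     # stand-in signature crops
labels = np.random.randint(0, 2, size=16)          # genuine vs. forged (dummy)
features = extractor.predict(preprocess_input(images), verbose=0)

clf = SVC(kernel="linear").fit(features, labels)
print(features.shape, clf.score(features, labels))
```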
Text-based emotion identification goes beyond simple sentiment analysis by capturing emotions in a more nuanced way, akin to shades of gray rather than just positive or negative sentiments. This paper details our experiments with emotion analysis on Bangla text. We collected a corpus of user comments from various social media groups discussing socioeconomic and political topics to identify six emotions: sadness, disgust, surprise, fear, anger, and joy. We evaluated the performance of four widely used machine learning algorithms (RF, DT, k-NN, and SVM) alongside three popular deep learning approaches (CNN, LSTM, and Transformer models), using TF-IDF feature extraction and word embedding techniques. The results showed that among the machine learning algorithms, DT, RF, k-NN, and SVM achieved accuracy scores of 82%, 84%, 73%, and 83%, respectively. In contrast, the deep learning models CNN and LSTM achieved higher performance, with accuracies of 85% and 86%, respectively. These findings highlight the effectiveness of traditional ML and DL approaches in detecting emotions from Bangla social media texts, indicating significant potential for further advancements in this area.
Recognition of Bengali sign language characters is crucial for facilitating communication for the deaf and hard-of-hearing population in Bengali-speaking regions, which encompass approximately 430 million people worldwide. Despite the significant number of individuals requiring this support, research on Bengali sign language character recognition remains underdeveloped. This article presents a novel approach to categorizing Bengali sign language characters using the Ishara-Lipi dataset, based on convolutional neural networks (CNNs) and pretrained models. We evaluated our approach using metrics such as accuracy, precision, recall, F1-score, and confusion matrices. Our findings indicate that the CNN model achieved the highest performance with an accuracy of 98%, followed by VGG19 with 94% and ResNet variants achieving around 88%. The proposed model demonstrates robust and efficient classification capabilities, significantly bridging the gap in the existing literature. This study holds substantial promise for enhancing assistive technology, thereby improving social inclusion and quality of life for Bengali-speaking deaf and hard-of-hearing individuals.
Water management in residential areas often faces challenges such as unpredictable shortages and damaging overflows due to inadequate monitoring of tank water levels. This study presents the design and implementation of an Internet of Things (IoT)-based tank water monitoring system aimed at providing a reliable and efficient solution to these issues. Utilizing advanced sensor technology, the system accurately monitors water levels, volume, and quality within storage tanks. It incorporates dual sensors that enhance reliability: one sensor manages the water level, initiating pump activation when levels fall below a critical threshold, while the second sensor prevents overflows by deactivating the pump once the water level reaches a pre-set maximum. The system is designed to be user-friendly, offering real-time water level data and control via an Android application or web dashboard. This allows for remote operation of the motor pump, ensuring that water availability is consistent and secure, while also safeguarding against overflows and subsequent water wastage. The implementation of this IoT system demonstrates significant potential for enhancing water resource management in residential settings, promoting both sustainability and ease of use.
This research study introduces an IoT-based Agricultural Monitoring System designed to enhance precision farming practices. Employing Arduino microcontrollers and a network of sensors including water level, soil moisture, temperature, and pH, the system enables real-time monitoring of agricultural parameters. Integration with GSM and WiFi modules facilitates data communication and remote-control capabilities. Additionally, solar power integration enhances sustainability. The collected data is uploaded to a cloud platform for analysis, providing farmers with actionable insights for informed decision-making. The study aims to optimize resource utilization, improve crop yield, and promote sustainable agriculture practices.
Hydroponics, a method of growing plants without soil, offers a viable solution for agricultural production in areas with limited space or adverse soil conditions. This soil-less cultivation method circumvents the lengthy decomposition process associated with traditional soil-based agriculture, reducing the risk of disease and the associated costs. This study introduces a sophisticated Internet of Things (IoT) framework designed to optimize the monitoring and management of hydroponic systems. The core of this system is a suite of multimodal sensors that continuously measure critical environmental and nutritional parameters such as temperature, humidity, nutrient levels, pH, and water levels. These sensors are integrated with a microcontroller that collects and transmits data to a cloud-based platform for storage and further analysis. Furthermore, an interactive website allows users to access this data remotely and adjust the system settings based on real-time information. The system also incorporates an automated control mechanism that adjusts the environment of the hydroponic system based on sensor inputs and predefined algorithms, ensuring optimal plant growth conditions. By providing a comprehensive and adaptive approach to hydroponic management, this IoT-based system enhances the efficiency and effectiveness of hydroponic farming, making it adaptable to various setups and scalable for different operational sizes. This study demonstrates the potential of integrating advanced technologies like IoT into agricultural practices to enhance productivity and sustainability.
In the rapidly advancing field of computer vision, object detection has become crucial for various applications, including animal tracking, face detection, and surveillance systems. This study investigates the efficacy of contemporary object detection methodologies by evaluating the performance of the You Only Look Once (YOLO) models and TensorFlow Model Zoo architectures for animal tracking. YOLO models, known for their ability to process entire images in real-time and predict bounding boxes and class probabilities simultaneously, offer significant advantages over traditional methods such as Convolutional Neural Networks (CNNs) and Fast R-CNNs. This paper compares the performance of YOLOv5 and YOLOv7, alongside TensorFlow-based models like Faster R-CNN ResNetv152 and SSD ResNet101, using a dataset of animal images. Our findings reveal that YOLOv5 outperforms other models with a mean average precision (mAP) of 97.5%, demonstrating superior accuracy and efficiency in object detection tasks. YOLOv7 also shows strong performance with an mAP of 96.7%, while TensorFlow Model Zoo's Faster R-CNN and SSD models lag behind with mAPs of 81.9% and 81.6%, respectively. The results highlight the significant advancements in deep learning and object detection algorithms, particularly the advantages of YOLO's architecture in handling complex detection tasks in real-world scenarios.
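For reference, a pretrained YOLOv5 model can be run through the documented torch.hub entry point as sketched below; the example image URL is the standard Ultralytics sample, and the custom-trained weights and animal dataset used in the study are not reproduced here.

```python
# Sketch of YOLOv5 inference via torch.hub (pretrained COCO weights, not the
# study's animal-tracking weights).
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")          # small variant
results = model("https://ultralytics.com/images/zidane.jpg")     # any image path/URL
results.print()                       # class labels, confidences, and timing
print(results.xyxy[0])                # tensor of [x1, y1, x2, y2, conf, class]
```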
The main challenge of any mobile robot is to detect and avoid obstacles and potholes. This paper presents the development and implementation of a novel mobile robot. An Arduino Uno is used as the processing unit of the robot. A Sharp distance measurement sensor and Ultrasonic sensors are used for taking inputs from the environment. The robot trains a neural network based on a feedforward backpropagation algorithm to detect and avoid obstacles and potholes. For that purpose, we have used a truth table. Our experimental results show that our developed system can ideally detect and avoid obstacles and potholes and navigate environments.
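A hypothetical sketch of training a feedforward network from a small sensor-to-action truth table is shown below; the table entries and action encoding are illustrative, not the ones used on the robot.

```python
# Hypothetical truth-table training for obstacle/pothole avoidance; the mapping
# from sensor readings to actions below is illustrative only.
from sklearn.neural_network import MLPClassifier

# Inputs: [left_obstacle, right_obstacle, pothole_detected]
X = [
    [0, 0, 0],  # clear path
    [1, 0, 0],  # obstacle on the left
    [0, 1, 0],  # obstacle on the right
    [1, 1, 0],  # obstacles on both sides
    [0, 0, 1],  # pothole ahead
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]
# Actions: 0 = forward, 1 = turn right, 2 = turn left, 3 = stop/reverse
y = [0, 1, 2, 3, 3, 1, 2, 3]

net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=5000, random_state=0)
net.fit(X, y)
print(net.score(X, y), net.predict([[0, 1, 0]]))   # fit on table, query one pattern
```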
An earthquake is a tremor felt on the surface of the earth created by the movement of the major pieces of its outer shell. To date, many attempts have been made to forecast earthquakes, with some success, but these models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good option for building the model because of its memory-keeping ability. Using the Keras Tuner, the best model was selected from candidate models composed of combinations of various LSTM architectures and dense layers. This selected model used seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes of the following month. An attention mechanism was added to the LSTM architecture to improve the model's earthquake occurrence prediction accuracy, which was 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, which provided a root mean square error of 1.25.
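An illustrative Keras sketch of an LSTM-with-attention occurrence classifier over monthly windows of seismic indicators follows; the window length, feature count, and layer sizes are assumptions, not the tuned architecture selected with the Keras Tuner.

```python
# Illustrative LSTM + attention occurrence classifier (hypothetical dimensions).
from tensorflow.keras import layers, models

n_timesteps, n_indicators = 12, 8          # e.g. 12 past months of 8 indicators
inputs = layers.Input(shape=(n_timesteps, n_indicators))
seq = layers.LSTM(64, return_sequences=True)(inputs)
att = layers.Attention()([seq, seq])       # self-attention over LSTM outputs
pooled = layers.GlobalAveragePooling1D()(att)
dense = layers.Dense(32, activation="relu")(pooled)
occurrence = layers.Dense(1, activation="sigmoid")(dense)  # earthquake next month?

model = models.Model(inputs, occurrence)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```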