1–50 of 1254
  • 1.
    A. Oliveira, Roger
    et al.
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Energivetenskap.
    S. Salles, Rafael
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Energivetenskap.
    Rönnberg, Sarah K.
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Energivetenskap.
    Deep Learning for Power Quality with Special Reference to Unsupervised Learning (2023). In: 27th International Conference on Electricity Distribution (CIRED 2023), IEEE, 2023, p. 935-939, article id 10417. Conference paper (Refereed)
  • 2.
    Abbasi, Jasim
    Luleå tekniska universitet, Institutionen för system- och rymdteknik.
    Predictive Maintenance in Industrial Machinery using Machine Learning (2021). Independent thesis, Advanced level (Master's degree, one year), 10 credits / 15 HE credits. Student thesis (Degree project)
    Abstract [en]

    Background: Gearbox and machinery fault prediction is expensive, both in terms of repair costs and lost production output. These faults may lead to complete machinery or plant breakdown.

    Objective: The goal of this study was to apply advanced machine learning techniques to avoid these losses and faults by replacing reactive maintenance with predictive maintenance, and to identify and predict faults in industrial machinery using Machine Learning (ML) and Deep Learning (DL) approaches.

    Methods: Our study was based on two datasets: a gearbox dataset and a rotatory machinery dataset. These datasets were analyzed to predict faults using machine learning and deep neural network models. Model performance was evaluated on both datasets, with binary and multi-class classification problems, using different machine learning models and their statistics.

    Results: For the gearbox fault dataset with a binary classification problem, we observed that the random forest and deep neural network models performed equally well, with the highest F1-score and AUC of around 0.98 and the lowest error rate of 7%. For the multi-class rotatory machinery fault prediction dataset, the random forest model outperformed the deep neural network model with an AUC of 0.98.

    Conclusions: The classification efficiency of the Machine Learning (ML) and Deep Neural Network (DNN) models was tested and evaluated. Our results show that the Random Forest (RF) and Deep Neural Network (DNN) models have better fault prediction ability for identifying the different types of rotatory machinery and gearbox faults than decision tree and AdaBoost.

    Keywords: Machine Learning, Deep Learning, Big Data, Predictive Maintenance, Rotatory Machinery Fault Prediction, Gearbox Fault Prediction, Machinery Fault Database, Internet of Things (IoT), Spectra quest machinery fault simulator, Cloud Computing, Industry 4.0

    Download full text (pdf)
    Predictive Maintenance in Industrial Machinery using Machine Learning
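The abstract above evaluates classifiers by F1-score. As a side illustration (not the thesis code), the binary F1 metric can be computed from first principles; the labels below are hypothetical machine states, not the gearbox dataset:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall (1 = fault, 0 = healthy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical labels for eight machine states (not from the thesis dataset)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
print(f1_score(y_true, y_pred))  # → 0.75
```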
  • 3.
    Abdukalikova, Anara
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Kleyko, Denis
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Osipov, Evgeny
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Wiklund, Urban
    Umeå University, Umeå, Sweden.
    Detection of Atrial Fibrillation from Short ECGs: Minimalistic Complexity Analysis for Feature-Based Classifiers (2018). In: Computing in Cardiology 2018: Proceedings / [ed] Christine Pickett; Cristiana Corsi; Pablo Laguna; Rob MacLeod, IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    In order to facilitate data-driven solutions for early detection of atrial fibrillation (AF), the 2017 CinC conference challenge was devoted to automatic AF classification based on short ECG recordings. The proposed solutions concentrated on maximizing the classifiers' F1 score, whereas the complexity of the classifiers was not considered. However, we argue that this must be addressed, as complexity places restrictions on the applicability of inexpensive devices for AF monitoring outside hospitals. Therefore, this study investigates the feasibility of complexity reduction by analyzing one of the solutions presented for the challenge.

  • 4.
    Abdunabiev, Isomiddin
    et al.
    Department of Computer and Software, Hanyang University.
    Lee, Choonhwa
    Department of Computer and Software, Hanyang University.
    Hanif, Muhammad
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Digitala tjänster och system.
    An Auto-Scaling Architecture for Container Clusters Using Deep Learning (2021). In: 2021년도 대한전자공학회 하계종합학술대회 논문집, DBpia, 2021, p. 1660-1663. Conference paper (Refereed)
    Abstract [en]

    In the past decade, cloud computing has become one of the essential techniques of many business areas, including social media, online shopping, music streaming, and many more. It is difficult for cloud providers to provision their systems in advance due to fluctuating changes in input workload and resultant resource demand. Therefore, there is a need for auto-scaling technology that can dynamically adjust resource allocation of cloud services based on incoming workload. In this paper, we present a predictive auto-scaler for Kubernetes environments to improve the quality of service. Being based on a proactive model, our proposed auto-scaling method serves as a foundation on which to build scalable and resource-efficient cloud systems.

  • 5.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology Chittagong.
    Chowdhury, Abu Sayeed
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Karim, Razuan
    University of Science and Technology Chittagong.
    An Interoperable IP based WSN for Smart Irrigation Systems (2017). Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) have become highly developed and can be used in agriculture to enable optimal irrigation scheduling. Since there is an absence of widely available methods to support effective agricultural practice in different weather conditions, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The performance of fundamental aspects of this network is emulated in Tmote Sky for 6LoWPAN over an IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator. The simulated results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. This will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of sensor power consumption.

    Download full text (pdf)
    fulltext
  • 6.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Paul, Sukanta
    University of Science and Technology, Chittagong.
    Akhter, Sharmin
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology, Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Selection of Energy Efficient Routing Protocol for Irrigation Enabled by Wireless Sensor Networks (2017). In: Proceedings of 2017 IEEE 42nd Conference on Local Computer Networks Workshops, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 75-81. Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) make a remarkable contribution to real-time decision making by sensing and actuating on the surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, such as irrigation scheduling based on moisture-level data sensed by the sensors. Since WSNs are deployed in constrained environments, the lifetime of the sensors is crucial for normal operation of the network. In this regard, the routing protocol is a prime factor in prolonging sensor lifetime. This research focuses on the performance analysis of several clustering-based routing protocols in order to select the best one. Four algorithms are considered, namely Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP) and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in the Matlab framework using mathematical models of these algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CH) per round, throughput and average residual energy per node. The experimental results illustrate that TEEN provides a larger stability region and longer lifetime than the others, while SEP ensures more throughput.

    Download full text (pdf)
    fulltext
  • 7.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology Chittagong.
    Bhuyan, M. S.
    University of Science & Technology Chittagong.
    Karim, Razuan
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Performance Analysis of Anomaly Based Network Intrusion Detection Systems (2018). In: Proceedings of the 43rd IEEE Conference on Local Computer Networks Workshops (LCN Workshops), Piscataway, NJ: IEEE Computer Society, 2018, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    Because of the increased popularity and fast expansion of the Internet as well as the Internet of Things, networks are growing rapidly in every corner of society. As a result, a huge amount of data travels across computer networks, which threatens data integrity, confidentiality and reliability. Network security is therefore a burning issue for preserving the integrity of systems and data. Traditional security guards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalous with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating curve, accuracy, F-score and model build time, this paper illustrates the performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.

    Download full text (pdf)
    fulltext
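Among the classifiers compared in the abstract above is Naïve Bayes. A minimal Gaussian Naive Bayes can be sketched from its textbook definition; the two-feature "traffic" samples below are toy values, not NSL-KDD data:

```python
import math

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances plus a class prior."""
    def fit(self, X, y):
        self.stats = {}
        for c in sorted(set(y)):
            rows = [x for x, label in zip(X, y) if label == c]
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9  # small floor avoids /0
                     for col, m in zip(zip(*rows), means)]
            self.stats[c] = (n / len(y), means, vars_)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            best, best_lp = None, -math.inf
            for c, (prior, means, vars_) in self.stats.items():
                lp = math.log(prior)  # log prior + sum of log Gaussian likelihoods
                for v, m, s2 in zip(x, means, vars_):
                    lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
                if lp > best_lp:
                    best, best_lp = c, lp
            preds.append(best)
        return preds

# Toy flows: two "normal" samples (small feature values) and two "anomalous" ones
model = GaussianNB().fit([[1, 1], [1.2, 0.9], [8, 9], [7.5, 8.5]], [0, 0, 1, 1])
print(model.predict([[1, 1.1], [8, 8]]))  # → [0, 1]
```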
  • 8.
    Abid, Nosheen
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Deep Learning for Geo-referenced Data: Case Study: Earth Observation (2021). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The thesis focuses on machine learning methods for Earth Observation (EO) data, more specifically, remote sensing data acquired by satellites and drones. EO plays a vital role in monitoring the Earth’s surface and modelling climate change to take necessary precautionary measures. Initially, these efforts were dominated by methods relying on handcrafted features and expert knowledge. The recent advances of machine learning methods, however, have also led to successful applications in EO. This thesis explores supervised and unsupervised approaches of Deep Learning (DL) to monitor natural resources of water bodies and forests. 

    The first study of this thesis introduces an Unsupervised Curriculum Learning (UCL) method based on widely-used DL models to classify water resources from RGB remote sensing imagery. In traditional settings, human experts label images to train the deep models, which is costly and time-consuming. UCL, instead, can learn the features progressively in an unsupervised fashion from the data, reducing the exhausting effort of labeling. Three datasets of varying resolution are used to evaluate UCL and show its effectiveness: SAT-6, EuroSAT, and PakSAT. UCL outperforms the supervised methods in domain adaptation, which demonstrates the effectiveness of the proposed algorithm.

    The subsequent study is an extension of UCL to the multispectral imagery of the Australian wildfires. This study used multispectral Sentinel-2 imagery to create a dataset of the forest fires ravaging Australia in late 2019 and early 2020. 12 of the 13 spectral bands of Sentinel-2 are concatenated so as to make them suitable as a three-channel input to the unsupervised architecture. The unsupervised model then classified the patches as either burnt or not burnt. This work attains an F1-score of 87% when mapping the burnt regions of Australia, demonstrating the effectiveness of the proposed method.

    The main contributions of this work are (i) the creation of two datasets using Sentinel-2 imagery, the PakSAT dataset and the Australian Forest Fire dataset; (ii) the introduction of UCL, which learns features progressively without the need for labelled data; and (iii) experimentation on relevant datasets for water body and forest fire classification.

    This work focuses on patch-level classification, which could in future be expanded to pixel-based classification. Moreover, the methods proposed in this study can be extended to the multi-class classification of aerial imagery. Further possible future directions include the combination of geo-referenced meteorological and remotely sensed image data to explore the proposed methods. Lastly, the proposed method can also be adapted to other domains involving multi-spectral and multi-modal input, such as historical document analysis, forgery detection in documents, and Natural Language Processing (NLP) classification tasks.

    Download full text (pdf)
    cover
    Download full text (pdf)
    fulltext
  • 9.
    Abid, Nosheen
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Kovács, György
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Wedin, Jacob
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Paszkowsky, Nuria Agues
    Research Institutes of Sweden, Sweden.
    Shafait, Faisal
    Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan; School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    UCL: Unsupervised Curriculum Learning for Utility Pole Detection from Aerial Imagery (2022). In: Proceedings of the Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2022. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a machine learning-based approach for detecting electric poles, an essential part of power grid maintenance. With the increasing popularity of deep learning, several such approaches have been proposed for electric pole detection. However, most of these approaches are supervised, requiring a large amount of labeled data, which is time-consuming and labor-intensive. Unsupervised deep learning approaches have the potential to overcome the need for huge amounts of training data. This paper presents an unsupervised deep learning framework for utility pole detection. The framework combines a Convolutional Neural Network (CNN) and a clustering algorithm with a selection operation: the CNN extracts meaningful features from aerial imagery, the clustering algorithm generates pseudo labels for the resulting features, and the selection operation filters out reliable samples used to further fine-tune the CNN. The fine-tuned version then replaces the initial CNN model, thus improving the framework, and we iteratively repeat this process so that the model progressively learns the prominent patterns in the data. The presented framework is trained and tested on a small dataset of utility poles provided by “Mention Fuvex” (a Spanish company utilizing long-range drones for power line inspection). Our extensive experimentation demonstrates the progressive learning behavior of the proposed method and results in promising classification scores, with a significance test yielding p-value < 0.00005 on the utility pole dataset.

    Download full text (pdf)
    fulltext
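The cluster-then-select loop described in the abstract above can be approximated in miniature: cluster feature vectors into pseudo labels, then keep only the samples closest to their centroid as "reliable" (in UCL, these would fine-tune the feature extractor on the next round). The sketch below stands in hand-made 2-D points for the CNN features; it illustrates the selection idea only, not the paper's implementation:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; the resulting cluster indices serve as pseudo labels."""
    centers = random.Random(seed).sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # keep the old center if a cluster empties
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return centers, labels

def select_reliable(points, centers, labels, frac=0.5):
    """Keep the fraction of samples closest to their assigned centroid;
    these are the pseudo-labeled samples a UCL-style loop would train on next."""
    scored = sorted(zip(points, labels), key=lambda pl: dist2(pl[0], centers[pl[1]]))
    return scored[: max(1, int(frac * len(scored)))]

# Hypothetical "deep features": two well-separated groups of image patches
feats = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
centers, pseudo = kmeans(feats, 2)
reliable = select_reliable(feats, centers, pseudo)
print(len(reliable))  # → 3
```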
  • 10.
    Abid, Nosheen
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan; School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan.
    Malik, Muhammad Imran
    Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan; School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan.
    Shahzad, Muhammad
    Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan; School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan; Technical University of Munich (TUM), Munich, Germany.
    Shafait, Faisal
    Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan; School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan.
    Ali, Haider
    Engineering, TU, Kaiserslautern, Germany.
    Ghaffar, Muhammad Mohsin
    Johns Hopkins University, USA.
    Weis, Christian
    Johns Hopkins University, USA.
    Wehn, Norbert
    Johns Hopkins University, USA.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Burnt Forest Estimation from Sentinel-2 Imagery of Australia using Unsupervised Deep Learning (2021). In: Proceedings of the Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2021, p. 74-81. Conference paper (Refereed)
    Abstract [en]

    Massive wildfires, not only in Australia but worldwide, are burning millions of hectares of forests and green land, affecting the social, ecological, and economic situation. Widely used index-based threshold methods like the Normalized Burn Ratio (NBR) require a huge amount of data preprocessing and are specific to the data-capturing source. State-of-the-art deep learning models, on the other hand, are supervised and require domain experts' knowledge to label the data in huge quantities. These limitations make the existing models difficult to adapt to new variations in the data and capturing sources. In this work, we propose an unsupervised deep learning based architecture to map the burnt regions of forests by learning features progressively. The model considers small patches of satellite imagery and classifies them into burnt and not burnt. These small patches are concatenated into binary masks to segment out the burnt region of the forests. The proposed system is composed of two modules: 1) a state-of-the-art deep learning architecture for feature extraction and 2) a clustering algorithm for the generation of pseudo labels to train the deep learning architecture. The proposed method is capable of learning features progressively in an unsupervised fashion from data with pseudo labels, reducing the exhausting effort of data labeling that requires expert knowledge. We used real-time Sentinel-2 data for training the model and mapping the burnt regions. The obtained F1-score of 0.87 demonstrates the effectiveness of the proposed model.

    Download (pdf)
    attachment
  • 11.
    Abid, Nosheen
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan; Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan.
    Shahzad, Muhammad
    School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan; Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan; Data Science in Earth Observation, Department of Aerospace and Geodesy, Technical University of Munich (TUM), Munich, Germany.
    Malik, Muhammad Imran
    School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan; Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan.
    Schwanecke, Ulrich
    RheinMain University of Applied Sciences, Germany.
    Ulges, Adrian
    RheinMain University of Applied Sciences, Germany.
    Kovács, György
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Shafait, Faisal
    School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan; Deep Learning Lab, National Center of Artificial Intelligence, National University of Sciences and Technology, Pakistan.
    UCL: Unsupervised Curriculum Learning for Water Body Classification from Remote Sensing Imagery (2021). In: International Journal of Applied Earth Observation and Geoinformation, ISSN 1569-8432, E-ISSN 1872-826X, Vol. 105, article id 102568. Article in journal (Refereed)
    Abstract [en]

    This paper presents a Convolutional Neural Network (CNN) based Unsupervised Curriculum Learning approach for the recognition of water bodies, overcoming the stated challenges for remote sensing based RGB imagery. The unsupervised nature of the presented algorithm eliminates the need for labelled training data. The problem is cast as a two-class clustering problem (water and non-water), with clustering done on deep features obtained by a pre-trained CNN. After initial clusters have been identified, representative samples from each cluster are chosen by the unsupervised curriculum learning algorithm for fine-tuning the feature extractor. The stated process is repeated iteratively until convergence. Three datasets have been used to evaluate the approach and show its effectiveness on varying scales: (i) the SAT-6 dataset comprising high-resolution aircraft images, (ii) the Sentinel-2 imagery of EuroSAT, comprising lower-resolution remote sensing images, and (iii) PakSAT, a new dataset we created for this study. PakSAT is the first Pakistani Sentinel-2 dataset designed to classify the water bodies of Pakistan. Extensive experiments on these datasets demonstrate the progressive learning behaviour of UCL and report promising water classification results on all three datasets. The obtained accuracies outperform the supervised methods in domain adaptation, demonstrating the effectiveness of the proposed algorithm.

    Download full text (pdf)
    fulltext
  • 12.
    Abrishambaf, Reza
    et al.
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Bal, Mert
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Distributed home automation system based on IEC61499 function blocks and wireless sensor networks (2017). In: Proceedings of the IEEE International Conference on Industrial Technology, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1354-1359, article id 7915561. Conference paper (Refereed)
    Abstract [en]

    In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all decisions are made. The proposed control architecture is a solution to overcome problems such as the lack of flexibility and re-configurability found in most conventional systems. This has been achieved by employing a method based on the new IEC 61499 function block standard, which is proposed for distributed control systems. This paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into the area of home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.

  • 13.
    Acampora, Giovanni
    et al.
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Pedrycz, Witold
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
    Vasilakos, Athanasios
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Vitiello, Autilia
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Computational Intelligence for Semantic Knowledge Management: New Perspectives for Designing and Organizing Information Systems (2020). Collection (editor) (Other academic)
    Abstract [en]

    This book provides a comprehensive overview of computational intelligence methods for semantic knowledge management. Contrary to popular belief, methods for the semantic management of information were created several decades ago, long before the birth of the Internet. In fact, it was back in 1945 that Vannevar Bush introduced the idea of the first proto-hypertext: the MEMEX (MEMory + indEX) machine. In the years that followed, Bush’s idea influenced the development of early hypertext systems until, in the 1980s, Tim Berners-Lee developed the idea of the World Wide Web (WWW) as it is known today. From then on, there was exponential growth in research and industrial activities related to the semantic management of information and its exploitation in different application domains, such as healthcare, e-learning and energy management.

    However, semantic methods are not yet able to address some of the problems that naturally characterize knowledge management, such as the vagueness and uncertainty of information. This book reveals how computational intelligence methodologies, due to their natural inclination to deal with imprecision and partial truth, are opening new positive scenarios for designing innovative semantic knowledge management architectures.

  • 14.
    Acampora, Giovanni
    et al.
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Pedrycz, Witold
    Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
    Vasilakos, Athanasios
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Vitiello, Autilia
    Department of Physics “Ettore Pancini”, University of Naples Federico II, Naples, Italy.
    Preface (2020). In: Computational Intelligence for Semantic Knowledge Management: New Perspectives for Designing and Organizing Information Systems / [ed] Giovanni Acampora; Witold Pedrycz; Athanasios V. Vasilakos; Autilia Vitiello, Springer Nature, 2020, Vol. 837, p. vii-x. Chapter in book, part of anthology (Other academic)
  • 15.
    Adelani, David Ifeoluwa
    et al.
    Masakhane NLP; Saarland University, Germany; University College London, UK.
    Neubig, Graham
    Carnegie Mellon University, USA.
    Ruder, Sebastian
    Google Research.
    Rijhwani, Shruti
    Carnegie Mellon University, USA.
    Beukman, Michael
    Masakhane NLP; University of the Witwatersrand, South Africa.
    Palen-Michel, Chester
    Masakhane NLP; Brandeis University, USA.
    Lignos, Constantine
    Masakhane NLP; Brandeis University, USA.
    Alabi, Jesujoba O.
    Masakhane NLP; Saarland University, Germany.
    Muhammad, Shamsuddeen H.
    Masakhane NLP; LIAAD-INESC TEC, Portugal.
    Nabende, Peter
    Masakhane NLP; Makerere University, Uganda.
    Bamba Dione, Cheikh M.
    Masakhane NLP; University of Bergen, Norway.
    Bukula, Andiswa
    SADiLaR, South Africa.
    Mabuya, Rooweither
    SADiLaR, South Africa.
    Dossou, Bonaventure F.P.
    Masakhane NLP; Mila Quebec AI Institute, Canada.
    Sibanda, Blessing
    Masakhane NLP.
    Buzaaba, Happy
    Masakhane NLP; RIKEN Center for AI Project, Japan.
    Mukiibi, Jonathan
    Masakhane NLP; Makerere University, Uganda.
    Kalipe, Godson
    Masakhane NLP.
    Mbaye, Derguene
    Masakhane NLP; Baamtu, Senegal.
    Taylor, Amelia
    Masakhane NLP; Malawi University of Business and Applied Science, Malawi.
    Kabore, Fatoumata
    Masakhane NLP; Uppsala University, Sweden.
    Emezue, Chris Chinenye
    Masakhane NLP; TU Munich, Germany.
    Aremu, Anuoluwapo
    Masakhane NLP.
    Ogayo, Perez
    Masakhane NLP; Carnegie Mellon University, USA.
    Gitau, Catherine
    Masakhane NLP.
    Munkoh-Buabeng, Edwin
    Masakhane NLP; TU Clausthal, Germany.
    Koagne, Victoire M.
    Masakhane NLP.
    Tapo, Allahsera Auguste
    Masakhane NLP; Rochester Institute of Technology, USA.
    Macucwa, Tebogo
    Masakhane NLP; University of Pretoria, South Africa.
    Marivate, Vukosi
    Masakhane NLP; University of Pretoria, South Africa.
    Mboning, Elvis
    Masakhane NLP.
    Gwadabe, Tajuddeen
    Masakhane NLP.
    Adewumi, Tosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Masakhane NLP.
    Ahia, Orevaoghene
    Masakhane NLP; University of Washington, USA.
    Nakatumba-Nabende, Joyce
    Masakhane NLP; Makerere University, Uganda.
    Mokono, Neo L.
    Masakhane NLP; University of Pretoria, South Africa.
    Ezeani, Ignatius
    Masakhane NLP; Lancaster University, UK.
    Chukwuneke, Chiamaka
    Masakhane NLP; Lancaster University, UK.
    Adeyemi, Mofetoluwa
    Masakhane NLP; University of Waterloo, Canada.
    Hacheme, Gilles Q.
    Masakhane NLP; Ai4innov, France.
    Abdulmumin, Idris
    Masakhane NLP; Ahmadu Bello University, Nigeria.
    Ogundepo, Odunayo
    Masakhane NLP; University of Waterloo, Canada.
    Yousuf, Oreen
    Masakhane NLP; Uppsala University, Sweden.
    Ngoli, Tatiana Moteu
    Masakhane NLP.
    Klakow, Dietrich
    Saarland University, Germany.
    MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition (2022). In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (ACL), 2022, pp. 4488-4508. Conference paper (Refereed)
    Abstract [en]

    African languages are spoken by over a billion people, but are underrepresented in NLP research and development. The challenges impeding progress include the limited availability of annotated datasets, as well as a lack of understanding of the settings where current methods are effective. In this paper, we make progress towards solutions for these challenges, focusing on the task of named entity recognition (NER). We create the largest human-annotated NER dataset for 20 African languages, and we study the behavior of state-of-the-art cross-lingual transfer methods in an Africa-centric setting, demonstrating that the choice of source language significantly affects performance. We show that choosing the best transfer language improves zero-shot F1 scores by an average of 14 points across 20 languages compared to using English. Our results highlight the need for benchmark datasets and models that cover typologically-diverse African languages.

  • 16.
    Adewumi, Oluwatosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Vector Representations of Idioms in Data-Driven Chatbots for Robust Assistance (2022). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents resources capable of enhancing solutions to some Natural Language Processing (NLP) tasks, demonstrates the learning of abstractions by deep models through cross-lingual transferability, and shows how deep learning models trained on idioms can enhance open-domain conversational systems. The challenges of open-domain conversational systems are many and include bland, repetitive utterances, lack of utterance diversity, lack of training data for low-resource languages, shallow world knowledge and non-empathetic responses, among others. These challenges contribute to the non-human-like utterances that open-domain conversational systems suffer from. They have, hence, motivated the active research in Natural Language Understanding (NLU) and Natural Language Generation (NLG), considering the very important role conversations (or dialogues) play in human lives. The methodology employed in this thesis involves an iterative set of scientific methods. First, it conducts a systematic literature review to identify the state-of-the-art (SoTA) and gaps, such as the challenges mentioned earlier, in current research. Subsequently, it follows the seven stages of the Machine Learning (ML) life-cycle, which are data gathering (or acquisition), data preparation, model selection, training, evaluation with hyperparameter tuning, prediction and model deployment. For data acquisition, relevant datasets are acquired or created, using benchmark datasets as references, and their data statements are included. Specific contributions of this thesis are the creation of the Swedish analogy test set for evaluating word embeddings and the Potential Idiomatic Expression (PIE)-English idioms corpus for training models in idiom identification and classification. In order to create a benchmark, this thesis performs human evaluation on the generated predictions of some SoTA ML models, including DialoGPT. As different individuals may not agree on all the predictions, the Inter-Annotator Agreement (IAA) is measured. A typical method for measuring IAA is Fleiss' kappa; however, it has a number of shortcomings, including high sensitivity to the number of categories being evaluated. Therefore, this thesis introduces the credibility unanimous score (CUS), which is more intuitive, easier to calculate and seemingly less sensitive to changes in the number of categories being evaluated. The results of human evaluation and comments from evaluators provide valuable feedback on the existing challenges within the models. These create the opportunity for addressing such challenges in future work. The experiments in this thesis test two hypotheses: 1) an open-domain conversational system that is idiom-aware generates more fitting responses to prompts containing idioms, and 2) deep monolingual models learn some abstractions that generalise across languages. To investigate the first hypothesis, this thesis trains English models on the PIE-English idioms corpus for classification and generation. For the second hypothesis, it explores cross-lingual transferability from English models to Swedish, Yorùbá, Swahili, Wolof, Hausa, Nigerian Pidgin English and Kinyarwanda. From the results, the thesis' additional contributions mainly lie in 1) confirmation of the hypothesis that an open-domain conversational system that is idiom-aware generates more fitting responses to prompts containing idioms, 2) confirmation of the hypothesis that deep monolingual models learn some abstractions that generalise across languages, 3) introduction of CUS and its benefits, 4) insight into the energy-saving and time-saving benefits of more optimal embeddings from relatively smaller corpora, and 5) provision of public access to the model checkpoints that were developed from this work. We further discuss the ethical issues involved in developing robust, open-domain conversational systems. Parts of this thesis are already published in the form of peer-reviewed journal and conference articles.
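
As background for the IAA discussion in the abstract above, Fleiss' kappa can be computed from an item-by-category count matrix. The following is a minimal sketch of the standard formula, not code from the thesis (and CUS itself is not reproduced here):

```python
def fleiss_kappa(ratings):
    # ratings[i][j] = number of raters who assigned item i to category j;
    # every row must sum to the same number of raters n
    N = len(ratings)                  # number of items
    n = sum(ratings[0])               # raters per item
    k = len(ratings[0])               # number of categories
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N              # observed agreement
    P_e = sum(p * p for p in p_j)     # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields a kappa of 1.0; the abstract's point is that this statistic shifts as the number of categories k changes, even for similar annotator behaviour.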

  • 17.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Brännvall, Rickard
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. RISE Research Institutes of Sweden.
    Abid, Nosheen
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Pahlavan, Maryam
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Sabah Sabry, Sana
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Småprat: DialoGPT for Natural Language Generation of Swedish Dialogue by Transfer Learning (2022). In: Proceedings of the Northern Lights Deep Learning Workshop 2022 / [ed] Sigurd Løkse, Benjamin Ricaud, Septentrio Academic Publishing, 2022, Vol. 3. Conference paper (Refereed)
    Abstract [en]

    Building open-domain conversational systems (or chatbots) that produce convincing responses is a recognized challenge. Recent state-of-the-art (SoTA) transformer-based models for the generation of natural language dialogue have demonstrated impressive performance in simulating human-like, single-turn conversations in English. This work investigates, by an empirical study, the potential for transfer learning of such models to the Swedish language. DialoGPT, an English-language pre-trained model, is adapted by training on three different Swedish conversational datasets obtained from publicly available sources: Reddit, Familjeliv and the GDC. Perplexity score (an automated intrinsic metric) and surveys by human evaluation were used to assess the performance of the fine-tuned models. We also compare the DialoGPT experiments with an attention-mechanism-based seq2seq baseline model, trained on the GDC dataset. The results indicate that the capacity for transfer learning can be exploited with considerable success. Human evaluators asked to score the simulated dialogues judged over 57% of the chatbot responses to be human-like for the model trained on the largest (Swedish) dataset. The work agrees with the hypothesis that deep monolingual models learn some abstractions which generalize across languages. We contribute the codes, datasets and model checkpoints and host the demos on the HuggingFace platform.
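
Perplexity, the automated intrinsic metric used above, is conventionally the exponential of the mean per-token negative log-likelihood. A minimal sketch of that standard definition (not the paper's evaluation code):

```python
import math

def perplexity(token_nlls):
    # token_nlls: negative log-likelihood (natural log) of each generated token
    return math.exp(sum(token_nlls) / len(token_nlls))
```

A model that assigns every token probability 1/4 therefore has a perplexity of exactly 4; lower perplexity indicates the model finds the held-out dialogue less surprising.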

  • 18.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Conversational Systems in Machine Learning from the Point of View of the Philosophy of Science—Using Alime Chat and Related Studies (2019). In: Philosophies, ISSN 2409-9287, Vol. 4, no. 3, article id 41. Journal article (Refereed)
    Abstract [en]

    This essay discusses current research efforts in conversational systems from the philosophy of science point of view and evaluates some conversational systems research activities from the standpoint of naturalism philosophical theory. Conversational systems or chatbots have advanced over the decades and now have become mainstream applications. They are software that users can communicate with, using natural language. Particular attention is given to the Alime Chat conversational system, already in industrial use, and the related research. The competitive nature of systems in production is a result of different researchers and developers trying to produce new conversational systems that can outperform previous or state-of-the-art systems. Different factors affect the quality of the conversational systems produced, and how one system is assessed as being better than another is a function of objectivity and of the relevant experimental results. This essay examines the research practices from, among others, Longino’s view on objectivity and Popper’s stand on falsification. Furthermore, the need for qualitative and large datasets is emphasized. This is in addition to the importance of the peer-review process in scientific publishing, as a means of developing, validating, or rejecting theories, claims, or methodologies in the research community. In conclusion, open data and open scientific discussion fora should become more prominent over the mere publication-focused trend.

  • 19.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora (2020). Conference paper (Refereed)
    Abstract [en]

    In this work, we show that the difference in performance of embeddings from differently sourced data for a given language can be due to other factors besides data size. Natural language processing (NLP) tasks usually perform better with embeddings from bigger corpora. However, the broadness of the covered domain and noise can play important roles. We evaluate embeddings based on two Swedish corpora, the Gigaword and Wikipedia, in analogy (intrinsic) tests and discover that the embeddings from the Wikipedia corpus generally outperform those from the Gigaword corpus, which is a bigger corpus. Downstream tests will be required for a definitive evaluation.
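
Analogy (intrinsic) tests of the kind mentioned above are commonly scored by vector arithmetic with cosine similarity (the 3CosAdd method). A toy sketch under that standard formulation, with invented 2-d embeddings for illustration only:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def analogy(emb, a, b, c):
    # 3CosAdd: "a is to b as c is to ?" -> argmax_d cos(emb[d], b - a + c)
    target = [vb - va + vc for va, vb, vc in zip(emb[a], emb[b], emb[c])]
    candidates = (w for w in emb if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(emb[w], target))

# Toy embeddings, invented purely for illustration
toy = {"man": (1, 0), "woman": (1, 1), "king": (2, 0),
       "queen": (2, 1), "apple": (0, 5)}
```

With these toy vectors, `analogy(toy, "man", "woman", "king")` returns `"queen"`; an analogy test set scores an embedding by how often the top-ranked candidate matches the expected word.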

  • 20.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Inner For-Loop for Speeding Up Blockchain Mining (2020). In: Open Computer Science, E-ISSN 2299-1093, Vol. 10, no. 1, pp. 42-47. Journal article (Refereed)
    Abstract [en]

    In this paper, the authors propose to increase the efficiency of blockchain mining by using a population-based approach. Blockchain relies on solving difficult mathematical problems as proof-of-work within a network before blocks are added to the chain. The brute-force approach, advocated by some as the fastest algorithm for solving partial hash collisions and implemented in the Bitcoin blockchain, implies exhaustive, sequential search. It involves incrementing the nonce (number) of the header by one, then taking a double SHA-256 hash at each instance and comparing it with a target value to ascertain whether it is lower than that target. It excessively consumes both time and power. The authors therefore suggest using an inner for-loop for the population-based approach. Comparison shows that it is a slightly faster approach than brute force, with an average speed advantage of about 1.67%, or 3,420 iterations per second, performing better 73% of the time. We also observed that the more particles deployed in total, the better the performance, up to a pivotal point. Furthermore, penalty by consensus is suggested as a way of taming the excessive power use of networks like Bitcoin's.
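
The brute-force nonce search described above can be sketched in a few lines. This is a simplified, illustrative version with a made-up header and an artificially easy target, not the paper's implementation:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Two rounds of SHA-256, as used in Bitcoin's proof-of-work
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 1 << 20):
    # Exhaustive, sequential search: increment the nonce, hash the header
    # plus nonce, and accept the first hash that falls below the target
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None
```

With the easy target used below (only the top three bits of the 256-bit hash must be zero), a valid nonce is found almost immediately; real network targets are vastly smaller, which is what makes the exhaustive search so costly in time and power.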

  • 21.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Sabry, Sana Sabah
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Abid, Nosheen
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    T5 for Hate Speech, Augmented Data, and Ensemble (2023). In: Sci, E-ISSN 2413-4155, Vol. 5, no. 4, article id 37. Journal article (Refereed)
    Abstract [en]

    We conduct relatively extensive investigations of automatic hate speech (HS) detection using different State-of-The-Art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage methods, such as data augmentation and ensemble, may have on the best model, if any. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks—macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others—macro F1 scores of 81.66% for subtask A of the OLID 2019 and 82.54% for subtask A of the HASOC 2021, in comparison to SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms (Integrated Gradient (IG) and SHapley Additive exPlanations (SHAP)) to reveal how two of the models (Bi-Directional Long Short-Term Memory Network (Bi-LSTM) and Text-to-Text-Transfer Transformer (T5)) make the predictions they do by using examples. Other contributions of this work are: (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation of the poor data annotations in the HASOC 2021 dataset by using several examples and XAI (buttressing the need for better quality control). We publicly release our model checkpoints and codes to foster transparency.
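
Macro F1, the score reported throughout the abstract above, is the unweighted mean of per-class F1. A minimal reference sketch of the standard definition (not the paper's evaluation code):

```python
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Unweighted mean over classes: rare classes count as much as common ones,
    # which is why it is preferred for imbalanced hate-speech datasets
    return sum(f1s) / len(f1s)
```

Because every class contributes equally, a model that ignores a rare hateful class is penalized heavily, unlike with plain accuracy.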

  • 22.
    Adewumi, Tosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Masakhane.
    Adeyemi, Mofetoluwa
    Masakhane.
    Anuoluwapo, Aremu
    Masakhane.
    Peters, Bukola
    CIS.
    Buzaaba, Happy
    Masakhane.
    Samuel, Oyerinde
    Masakhane.
    Rufai, Amina Mardiyyah
    Masakhane.
    Ajibade, Benjamin
    Masakhane.
    Gwadabe, Tajudeen
    Masakhane.
    Koulibaly Traore, Mory Moussou
    Masakhane.
    Ajayi, Tunde Oluwaseyi
    Masakhane.
    Muhammad, Shamsuddeen
    Baruwa, Ahmed
    Masakhane.
    Owoicho, Paul
    Masakhane.
    Ogunremi, Tolulope
    Masakhane.
    Ngigi, Phylis
    Jomo Kenyatta University of Agriculture and Technology.
    Ahia, Orevaoghene
    Masakhane.
    Nasir, Ruqayya
    Masakhane.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    AfriWOZ: Corpus for Exploiting Cross-Lingual Transfer for Dialogue Generation in Low-Resource, African Languages (2023). In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings, Institute of Electrical and Electronics Engineers Inc., 2023. Conference paper (Refereed)
    Abstract [en]

    Dialogue generation is an important NLP task fraught with many challenges. The challenges become more daunting for low-resource African languages. To enable the creation of dialogue agents for African languages, we contribute the first high-quality dialogue datasets for 6 African languages: Swahili, Wolof, Hausa, Nigerian Pidgin English, Kinyarwanda & Yorùbá. There are a total of 9,000 turns, each language having 1,500 turns, which we translate from a portion of the English multi-domain MultiWOZ dataset. Subsequently, we benchmark by investigating & analyzing the effectiveness of modelling through transfer learning, utilizing state-of-the-art (SoTA) deep monolingual models: DialoGPT and BlenderBot. We compare the models with a simple seq2seq baseline using perplexity. Besides this, we conduct human evaluation of single-turn conversations by using majority votes and measure inter-annotator agreement (IAA). We find that the hypothesis that deep monolingual models learn some abstractions that generalize across languages holds. We observe human-like conversations, to different degrees, in 5 out of the 6 languages. The language with the most transferable properties is Nigerian Pidgin English, with a human-likeness score of 78.1%, of which 34.4% are unanimous. We freely provide the datasets and host the model checkpoints/demos on the HuggingFace hub for public access.

  • 23.
    Adewumi, Tosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    State-of-the-Art in Open-Domain Conversational AI: A Survey (2022). In: Information, E-ISSN 2078-2489, Vol. 13, no. 6, article id 298. Review article (Refereed)
    Abstract [en]

    We survey SoTA open-domain conversational AI models with the objective of presenting the prevailing challenges that still exist to spur future research. In addition, we provide statistics on the gender of conversational AI in order to guide the ethics discussion surrounding the issue. Open-domain conversational AI models are known to have several challenges, including bland, repetitive responses and performance degradation when prompted with figurative language, among others. First, we provide some background by discussing some topics of interest in conversational AI. We then discuss the method applied to the two investigations carried out that make up this study. The first investigation involves a search for recent SoTA open-domain conversational AI models, while the second involves the search for 100 conversational AI to assess their gender. Results of the survey show that progress has been made with recent SoTA conversational AI, but there are still persistent challenges that need to be solved, and the female gender is more common than the male for conversational AI. One main takeaway is that hybrid models of conversational AI offer more advantages than any single architecture. The key contributions of this survey are (1) the identification of prevailing challenges in SoTA open-domain conversational AI, (2) the rarely held discussion on open-domain conversational AI for low-resource languages, and (3) the discussion about the ethics surrounding the gender of conversational AI.

  • 24.
    Adewumi, Tosin P.
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    The Challenge of Diacritics in Yorùbá Embeddings (2020). In: ML4D 2020 Proceedings / [ed] Tejumade Afonja; Konstantin Klemmer; Aya Salama; Paula Rodriguez Diaz; Niveditha Kalavakonda; Oluwafemi Azeez, Neural Information Processing Systems Foundation, 2020, article id 2011.07605. Conference paper (Refereed)
    Abstract [en]

    The major contributions of this work include the empirical establishment of a better performance for Yoruba embeddings from an undiacritized (normalized) dataset and the provision of new analogy sets for evaluation. The Yoruba language, being a tonal language, utilizes diacritics (tonal marks) in written form. We show that this affects embedding performance by creating embeddings from exactly the same Wikipedia dataset, but with the second one normalized to be undiacritized. We further compare average intrinsic performance with two other works (using an analogy test set & WordSim) and obtain the best performance in WordSim and the corresponding Spearman correlation.
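
Undiacritization (normalization) as described above can be illustrated with Unicode decomposition, which separates tonal marks into combining characters that can then be dropped. A minimal sketch of one common approach, not necessarily the authors' exact procedure:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    # Decompose (NFD) so diacritics become separate combining characters,
    # drop the combining characters, then recompose (NFC)
    decomposed = unicodedata.normalize("NFD", text)
    kept = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", kept)
```

For example, `strip_diacritics("Yorùbá")` yields `"Yoruba"`; note that this strips all combining marks, including the underdots of letters such as ẹ and ọ, not only the tonal accents.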

  • 25.
    Adewumi, Tosin P.
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Vector Representations of Idioms in Chatbots (2020). In: Proceedings: SAIS Workshop 2020, Chalmers University of Technology, 2020. Conference paper (Refereed)
    Abstract [en]

    Open-domain chatbots have advanced but still have many gaps. My PhD aims to solve a few of those gaps by creating vector representations of idioms (figures of speech) that will be beneficial to chatbots and natural language processing (NLP), generally. In the process, new, optimal fastText embeddings in Swedish and English have been created and the first Swedish analogy test set, larger than the Google original, for intrinsic evaluation of Swedish embeddings has also been produced. Major milestones have been attained and others are soon to follow. The deliverables of this project will give NLP researchers the opportunity to measure the quality of Swedish embeddings easily and advance state-of-the-art (SotA) in NLP.

  • 26.
    Afroze, Tasnim
    et al.
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Akther, Shumia
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Chowdhury, Mohammed Armanuzzaman
    Department of Computer Science and Engineering, University of Chittagong, Chattogram 4331, Bangladesh.
    Hossain, Emam
    Department of Computer Science and Engineering, Port City International University, Chattogram 4202, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chattogram 4331, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Glaucoma Detection Using Inception Convolutional Neural Network V3 (2021). In: Applied Intelligence and Informatics: First International Conference, AII 2021, Nottingham, UK, July 30–31, 2021, Proceedings / [ed] Mufti Mahmud; M. Shamim Kaiser; Nikola Kasabov; Khan Iftekharuddin; Ning Zhong, Springer, 2021, pp. 17-28. Conference paper (Refereed)
    Abstract [en]

    Glaucoma detection is an important research area in intelligent systems, and it plays an important role in the medical field. Glaucoma can give rise to irreversible blindness due to lack of proper diagnosis. Doctors need to perform many tests to diagnose this threatening disease, which requires a lot of time and expense. At the early stage of glaucoma, affected people may sometimes have no vision loss. For detecting glaucoma, we have built a model to lessen the time and cost. Our work introduces a CNN-based Inception V3 model. We used a total of 6,072 images, of which 2,336 were glaucomatous and 3,736 were normal fundus images. We took 5,460 images for training our model and 612 images for testing. We obtained an accuracy of 0.8529 and an AUC value of 0.9387. For comparison, we used the DenseNet121 and ResNet50 algorithms and obtained accuracies of 0.8153 and 0.7761, respectively.

  • 27.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland; School of Computing and Data Science, Willamette University, Salem, OR, 97301, USA.
    Oyelere, Solomon Sunday
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland.
    Tukiainen, Markku
    School of Computing, University of Eastern Finland, P.O. Box 111, N80101, Joensuu, Finland.
    Design, development, and evaluation of a virtual reality game-based application to support computational thinking (2023). In: Educational technology research and development, ISSN 1042-1629, E-ISSN 1556-6501, Vol. 71, no. 2, pp. 505-537. Journal article (Refereed)
    Abstract [en]

    Computational thinking (CT) has become an essential skill nowadays. For young students, CT competency is required to prepare them for future jobs. This competency can facilitate students' understanding of programming knowledge, which has been a challenge for many novices pursuing a computer science degree. This study focuses on designing and implementing a virtual reality (VR) game-based application (iThinkSmart) to support CT knowledge. The study followed the design science research methodology to design, implement, and evaluate the first prototype of the VR application. An initial evaluation of the prototype was conducted with 47 computer science students from a Nigerian university who voluntarily participated in an experimental process. To determine what works and what needs to be improved in the iThinkSmart VR game-based application, two groups were randomly formed, consisting of the experimental (n = 21) and the control (n = 26) groups, respectively. Our findings suggest that VR increases motivation and therefore increases students' CT skills, which contributes to knowledge regarding the affordances of VR in education and particularly provides evidence on the use of visualization of CT concepts to facilitate programming education. Furthermore, the study revealed that immersion, interaction, and engagement in a VR educational application can promote students' CT competency in higher education institutions (HEI). In addition, it was shown that students who played the iThinkSmart VR game-based application gained higher cognitive benefits and showed increased interest in and a better attitude to learning CT concepts. Although further investigation is required in order to gain more insights into students' learning process, this study made significant contributions in positioning CT in the HEI context and provides empirical evidence regarding the use of educational VR mini-games to support students' learning achievements.

  • 28.
    Agbo, Friday Joseph
    et al.
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Sanusi, Ismaila Temitayo
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Oyelere, Solomon Sunday
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Suhonen, Jarkko
    School of Computing, University of Eastern Finland, FIN-80101 Joensuu, Finland.
    Application of Virtual Reality in Computer Science Education: A Systemic Review Based on Bibliometric and Content Analysis Methods (2021). In: Education Sciences, E-ISSN 2227-7102, Vol. 11, no. 3, article id 142. Review article (Refereed)
    Abstract [en]

    This study investigated the role of virtual reality (VR) in computer science (CS) education over the last 10 years by conducting a bibliometric and content analysis of articles related to the use of VR in CS education. A total of 971 articles published in peer-reviewed journals and conferences were collected from Web of Science and Scopus databases to conduct the bibliometric analysis. Furthermore, content analysis was conducted on 39 articles that met the inclusion criteria. This study demonstrates that VR research for CS education was faring well around 2011 but witnessed low production output between the years 2013 and 2016. However, scholars have increased their contribution in this field recently, starting from the year 2017. This study also revealed prolific scholars contributing to the field. It provides insightful information regarding research hotspots in VR that have emerged recently, which can be further explored to enhance CS education. In addition, the quantitative method remains the most preferred research method, while the questionnaire was the most used data collection technique. Moreover, descriptive analysis was primarily used in studies on VR in CS education. The study concludes that even though scholars are leveraging VR to advance CS education, more effort needs to be made by stakeholders across countries and institutions. In addition, a more rigorous methodological approach needs to be employed in future studies to provide more evidence-based research output. Our future study would investigate the pedagogy, content, and context of studies on VR in CS education.

  • 29.
    Agües Paszkowsky, Núria
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Research Institutes of Sweden, Unit for Data Center Systems and Applied Data Science, Sweden.
    Brännvall, Rickard
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Research Institutes of Sweden, Unit for Data Center Systems and Applied Data Science, Sweden.
    Carlstedt, Johan
    Research Institutes of Sweden, Unit for Data Center Systems and Applied Data Science, Sweden.
    Milz, Mathias
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Rymdteknik.
    Kovács, György
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Vegetation and Drought Trends in Sweden’s Mälardalen Region – Year-on-Year Comparison by Gaussian Process Regression (2020). In: 2020 Swedish Workshop on Data Science (SweDS), IEEE, 2020. Conference paper (Refereed)
    Abstract [en]

    This article describes analytical work carried out in a pilot project for the Swedish Space Data Lab (SSDL), which focused on monitoring drought in the Mälardalen region in central Sweden. The Normalized Difference Vegetation Index (NDVI) and the Moisture Stress Index (MSI) – commonly used to analyse drought – are estimated from Sentinel-2 satellite data and averaged over a selection of seven grassland areas of interest. To derive a complete time series over a season that interpolates over days with missing data, we use Gaussian Process Regression, a technique from multivariate Bayesian analysis. The analysis shows significant differences at 95% confidence for five out of seven areas when comparing the peak drought period in the dry year 2018 with the corresponding period in 2019. A cross-validation analysis indicates that the model parameter estimates are robust for the temporal covariance structure (while inconclusive for the spatial dimensions). There were no signs of over-fitting when comparing in-sample and out-of-sample RMSE.
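The two indices named above have simple closed forms; the sketch below uses the standard definitions with invented example reflectances (not values from the paper's data):

```python
# Spectral indices used in the drought analysis, computed from
# Sentinel-2 band reflectances (B04 = red, B08 = near-infrared,
# B11 = shortwave infrared). Standard definitions, not the authors' code.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def msi(swir: float, nir: float) -> float:
    """Moisture Stress Index: SWIR / NIR (higher means more water stress)."""
    return swir / nir

# Hypothetical reflectances for a healthy grassland pixel
print(round(ndvi(0.45, 0.08), 3))  # -> 0.698
print(round(msi(0.20, 0.45), 3))   # -> 0.444
```

Averaging these per-pixel values over each area of interest gives the daily series that the Gaussian process then interpolates.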

  • 30.
    Ahmad, Riaz
    et al.
    Shaheed Banazir Bhutto University, Sheringal, Pakistan.
    Naz, Saeeda
    Computer Science Department, GGPGC No.1 Abbottabad, Pakistan.
    Afzal, Muhammad
    Mindgarage, University of Kaiserslautern, Germany.
    Rashid, Sheikh
    Al Khwarizmi Institute of Computer Science, UET Lahore, Pakistan.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Dengel, Andreas
    German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern, Germany.
    A Deep Learning based Arabic Script Recognition System: Benchmark on KHAT (2020). In: The International Arab Journal of Information Technology, ISSN 1683-3198, Vol. 17, no. 3, pp. 299-305. Journal article (Refereed)
    Abstract [en]

    This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. This paper contributes in three main aspects: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning of extra white space and de-skewing the skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes and fine inflections. Combining data augmentation with the deep learning approach yields a promising improvement, raising Character Recognition (CR) from the 75.08% baseline to 80.02%.

  • 31.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Hasan, Mohammad
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong University, Chittagong, 4331, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Comparative Performance of Tree Based Machine Learning Classifiers in Product Backorder Prediction (2023). In: Intelligent Computing & Optimization: Proceedings of the 5th International Conference on Intelligent Computing and Optimization 2022 (ICO2022) / [ed] Pandian Vasant; Gerhard-Wilhelm Weber; José Antonio Marmolejo-Saucedo; Elias Munapo; J. Joshua Thomas, Springer, 2023, 1, pp. 572-584. Book chapter (Refereed)
    Abstract [en]

    Early prediction of whether a product will go on backorder is necessary for optimal inventory management, which can reduce lost sales, maintain a good relationship between supplier and customer, and maximize revenues. In this study, we investigate the performance and effectiveness of tree-based machine learning algorithms for predicting product backorders. The research methodology consists of data preprocessing, feature selection using a statistical hypothesis test, imbalanced learning using random undersampling, and evaluation and comparison of four tree-based machine learning algorithms (decision tree, random forest, adaptive boosting and gradient boosting) in terms of accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve and area under the precision-recall curve. The three main findings of this study are: (1) the random forest model without feature selection and with random undersampling achieved the highest performance on all metrics, (2) feature selection did not improve the performance of the tree-based classifiers, and (3) random undersampling significantly improves the performance of tree-based classifiers in product backorder prediction.
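The random undersampling step the study relies on can be sketched in a few lines; this is a generic illustration (labels and counts are made up, not the study's data):

```python
import random

def random_undersample(X, y, seed=0):
    """Balance a dataset by randomly discarding majority-class samples
    until every class has as many samples as the smallest class.
    A generic sketch of random undersampling, not the study's code."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(v) for v in by_class.values())
    X_out, y_out = [], []
    for label, samples in by_class.items():
        for xi in rng.sample(samples, n_min):
            X_out.append(xi)
            y_out.append(label)
    return X_out, y_out

X = [[i] for i in range(100)]
y = [0] * 90 + [1] * 10   # 9:1 imbalance, as in backorder data
Xb, yb = random_undersample(X, y)
print(yb.count(0), yb.count(1))  # -> 10 10
```

Balancing before training is what lets the tree ensembles stop defaulting to the majority ("no backorder") class.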

  • 32.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram 4000, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Islam, Raihan Ul
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    An Evolutionary Belief Rule-Based Clinical Decision Support System to Predict COVID-19 Severity under Uncertainty (2021). In: Applied Sciences, E-ISSN 2076-3417, Vol. 11, no. 13, article id 5810. Journal article (Refereed)
    Abstract [en]

    Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, effective hospital resource utilization, and minimizing the mortality rate in the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. The experts’ knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed in this paper. Several experiments are performed using data from 485 COVID-19 patients to evaluate the effectiveness of the proposed system. Experimental results show that, after optimization, the conjunctive belief rule-based system achieved accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while for the disjunctive belief rule base they are 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, our proposed method shows superior performance compared with four traditional machine learning algorithms: LR, SVM, DT, and ANN. All these results validate the effectiveness of our proposed method. The proposed system will help hospital authorities to identify severe and non-severe COVID-19 patients and adopt optimal treatment plans in pandemic situations.
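The abstract does not give the modified differential evolution algorithm itself. As a generic illustration of the kind of constrained optimizer being adapted, the sketch below runs one basic DE/rand/1/bin generation with box constraints enforced by clipping; the toy sphere objective stands in for the belief-rule-base fitness, which is not published in the abstract:

```python
import random

def de_step(population, fitness, rng, f=0.5, cr=0.9, bounds=(0.0, 1.0)):
    """One generation of basic differential evolution (DE/rand/1/bin).
    Box constraints are enforced by clipping; greedy selection keeps
    the trial vector only if it is at least as fit as the target."""
    lo, hi = bounds
    d = len(population[0])
    new_pop = []
    for i, target in enumerate(population):
        a, b, c = rng.sample([p for j, p in enumerate(population) if j != i], 3)
        j_rand = rng.randrange(d)  # ensure at least one mutated component
        trial = []
        for j in range(d):
            if rng.random() < cr or j == j_rand:
                v = a[j] + f * (b[j] - c[j])
                trial.append(min(hi, max(lo, v)))
            else:
                trial.append(target[j])
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

# Toy objective: minimise the sphere function over [0, 1]^3
sphere = lambda x: sum(v * v for v in x)
rng = random.Random(42)
pop = [[rng.random() for _ in range(3)] for _ in range(6)]
best_initial = min(pop, key=sphere)
for _ in range(50):
    pop = de_step(pop, sphere, rng)
best_final = min(pop, key=sphere)
# Greedy selection guarantees the best individual never gets worse.
```

In the paper's setting the decision variables would be the belief degrees and rule weights, with the constraint that belief degrees within a rule stay valid.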

  • 33.
    Ahmed, Faisal
    et al.
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Naim Uddin Rahi, Mohammad
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Uddin, Raihan
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Sen, Anik
    Department of Computer Science and Engineering, Premier University, Chattogram, Bangladesh.
    Shahadat Hossain, Mohammad
    University of Chittagong, Chattogram, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Machine Learning-Based Tomato Leaf Disease Diagnosis Using Radiomics Features (2023). In: Proceedings of the Fourth International Conference on Trends in Computational and Cognitive Engineering - TCCE 2022 / [ed] M. Shamim Kaiser; Sajjad Waheed; Anirban Bandyopadhyay; Mufti Mahmud; Kanad Ray, Springer Science and Business Media Deutschland GmbH, 2023, Vol. 1, pp. 25-35. Conference paper (Refereed)
    Abstract [en]

    Tomato leaves can be infected with various infectious viruses and fungal diseases that drastically reduce tomato production and incur a great economic loss. Therefore, tomato leaf disease detection and identification are crucial for meeting the global demand for tomatoes. This paper proposes a machine learning-based technique to identify diseases on tomato leaves and classify them into three disease classes (Septoria, Yellow Curl Leaf, and Late Blight) and one healthy class. The proposed method extracts radiomics-based features from tomato leaf images and identifies the disease with a gradient boosting classifier. The dataset used in this study consists of 4000 tomato leaf disease images collected from the Plant Village dataset. The experimental results demonstrate the effectiveness and applicability of our proposed method for tomato leaf disease detection and classification.

  • 34.
    Ahmed, Mumtahina
    et al.
    Department of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Explainable Text Classification Model for COVID-19 Fake News Detection (2022). In: Journal of Internet Services and Information Security (JISIS), ISSN 2182-2069, E-ISSN 2182-2077, Vol. 12, no. 2, pp. 51-69. Journal article (Refereed)
    Abstract [en]

    Artificial intelligence has achieved notable advances across many applications, and the field has recently turned to developing novel methods for explaining machine learning models. Deep neural networks deliver the best performance accuracy in different domains, such as text categorization, image classification, and speech recognition. Since neural network models are black boxes, they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection is a challenging research problem, as misinformation endangers the lives of many online users. Therefore, transparency and explainability of COVID-19 fake news classification are necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model where BiLSTM assures classification accuracy and LIME ensures transparency and explainability. In this integrated model, since LIME behaves similarly to the original model and explains the prediction, the proposed model becomes comprehensible. The performance of this model in terms of explainability is measured using Kendall’s tau correlation coefficient. We also employ several machine learning models and compare their performance. Finally, since the model takes an integrated strategy, we analyze and compare its computational overhead with that of the other methods.
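Kendall's tau, used above to score explanation consistency, compares the ordering of every item pair in two rankings. A minimal tau-a implementation (illustrative, not the paper's code):

```python
def kendall_tau(x, y):
    """Kendall's tau-a rank correlation between two equal-length rankings:
    (concordant pairs - discordant pairs) / total pairs."""
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))   # identical rankings -> 1.0
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))   # reversed rankings -> -1.0
```

Here a value near 1.0 would mean LIME ranks feature importance in nearly the same order as the reference explanation.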

  • 35.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Sazzad
    Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    A Deep Learning Approach with Data Augmentation to Recognize Facial Expressions in Real Time (2022). In: Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering: TCCE 2021 / [ed] M. Shamim Kaiser; Kanad Ray; Anirban Bandyopadhyay; Kavikumar Jacob; Kek Sie Long, Springer Nature, 2022, pp. 487-500. Conference paper (Refereed)
    Abstract [en]

    The enormous use of facial expression recognition in various sectors of computer science has elevated researchers' interest in this topic. Computer vision coupled with a deep learning approach offers a way to solve several real-world problems. For instance, in robotics, analyzing information from visual content is a requirement for carrying out and strengthening communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in the area of computer vision. In our previous work, we delivered a facial expression recognition system that can classify an image into seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This work extends that research: we propose a real-time facial expression recognition system that recognizes a total of ten facial expressions from streaming video data, namely the previous seven plus three additional expressions (mockery, think, and wink). After training, the proposed model achieved high validation accuracy on a combined facial expression dataset. Moreover, the real-time validation of the proposed model is also promising.

  • 36.
    Ahmed, Tawsin Uddin
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Jamil, Mohammad Newaj
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    An Integrated Deep Learning and Belief Rule Base Intelligent System to Predict Survival of COVID-19 Patient under Uncertainty (2022). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 14, no. 2, pp. 660-676. Journal article (Refereed)
    Abstract [en]

    The novel coronavirus-induced disease COVID-19 is currently the biggest threat to human health, and due to its transmissibility it is spreading rapidly in almost every corner of the globe. The combined effort of medical and IT experts is required to bring this outbreak under control. In this research, an integration of both data- and knowledge-driven approaches in a single framework is proposed to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction result, along with eight other significant risk factors associated with COVID-19 patients, is analyzed with a knowledge-driven belief rule-based expert system, which forms a probability of survival for that particular patient. The reliability of the proposed integrated system has been tested using real patient data and compared with expert opinion, where the performance of the system is found promising.

  • 37.
    Ahmer, Muhammad
    et al.
    Manufacturing and Process Development, AB SKF, Gothenburg, Sweden.
    Sandin, Fredrik
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Marklund, Pär
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Maskinelement.
    Gustafsson, Martin
    Manufacturing and Process Development, AB SKF, Gothenburg, Sweden.
    Berglund, Kim
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Maskinelement.
    Failure mode classification for condition-based maintenance in a bearing ring grinding machine (2022). In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 122, pp. 1479-1495. Journal article (Refereed)
    Abstract [en]

    Technical failures in machines are major sources of unplanned downtime in any production and result in reduced efficiency and system reliability. Despite the well-established potential of Machine Learning techniques in condition-based maintenance (CBM), the lack of access to failure data in production machines has limited the development of a holistic approach to address machine-level CBM. This paper presents a practical approach for failure mode prediction using multiple sensors installed in a bearing ring grinder for process control as well as condition monitoring. Bearing rings are produced in a set of 7 experimental runs, including 5 frequently occurring production failures in the critical subsystems. An advanced data acquisition setup, implemented for CBM in the grinder, is used to capture information about each individual grinding cycle. The dataset is pre-processed and segmented into grinding cycle stages before time and frequency domain feature extraction. A sensor ranking algorithm is proposed to optimize feature selection for failure classification and the installation cost. Random forest models, benchmarked as the best performing classifiers, are trained in a two-step classification framework. The presence of a failure mode is predicted in the first step and the failure mode type is identified in the second step using the same feature set. Defining the feature set in the failure detection step improves predictor generalization, with a classifier accuracy of 99% on the test dataset. The presented approach demonstrates efficient failure mode classification by selecting crucial sensors, resulting in a cost-effective CBM implementation in a bearing ring grinder.
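The paper's exact feature set is not listed in the abstract; a few classic time-domain condition-monitoring features of the kind extracted per grinding-cycle segment can be sketched as follows (the input signal is a synthetic example):

```python
import math

def time_domain_features(signal):
    """A small, illustrative selection of time-domain features commonly
    used in condition monitoring (not the paper's exact feature set)."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(v * v for v in signal) / n)
    peak = max(abs(v) for v in signal)
    return {
        "mean": mean,
        "rms": rms,                    # overall vibration energy
        "peak": peak,
        "crest_factor": peak / rms,    # peakiness, sensitive to impacts
    }

# Synthetic zero-mean oscillation standing in for one cycle segment
feats = time_domain_features([0.0, 1.0, 0.0, -1.0] * 25)
```

Computing such features per segment and per sensor is what makes the subsequent sensor-ranking and two-step classification tractable.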

  • 38.
    Akifev, Daniil
    et al.
    Independent researcher.
    Liakh, Tatiana
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Ovsiannikova, Polina
    Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Sorokin, Radimir
    Independent researcher.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap. Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Debugging approach for IEC 61499 control applications in FBME (2023). In: 2023 IEEE 32nd International Symposium on Industrial Electronics (ISIE), IEEE, 2023. Conference paper (Refereed)
  • 39.
    Akter, Mehenika
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Hand-Drawn Emoji Recognition using Convolutional Neural Network (2021). In: Proceedings of 2020 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), IEEE, 2021, pp. 147-152. Conference paper (Refereed)
    Abstract [en]

    Emojis are small icons or images used to express sentiments or feelings in text messages. They are extensively used on social media platforms such as Facebook, Twitter and Instagram. In this paper we consider hand-drawn emojis, i.e., emojis drawn on a digital platform or simply on paper with a pen, and classify them into 8 classes, enabling users to employ them on any social media platform without confusion. We built a local dataset of 500 images per class, for a total of 4000 hand-drawn emoji images. We present a convolutional neural network model that recognises and classifies the emojis into the 8 classes with an accuracy of 97%. Pre-trained CNN models (VGG16, VGG19, ResNet50, MobileNetV2, InceptionV3 and Xception) are also trained on the dataset to compare accuracy and check whether they outperform the proposed model. In addition, machine learning models (SVM, Random Forest, AdaBoost, Decision Tree and XGBoost) are implemented on the dataset.

  • 40.
    Akter, Shamima
    et al.
    International Islamic University, Chittagong, Bangladesh.
    Nahar, Nazmun
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    A New Crossover Technique to Improve Genetic Algorithm and Its Application to TSP (2019). In: Proceedings of 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), IEEE, 2019, article id 18566123. Conference paper (Refereed)
    Abstract [en]

    Optimization problems like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard optimal minimization problem. Selection, crossover and mutation are the three main operators of a GA, which is usually employed to find the minimal total distance needed to visit all the nodes in a TSP. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed crossover operator consists of selecting two crossover points and creating new offspring by performing a cost comparison. Computational results, as well as a comparison with well-established crossover operators, are also presented. The new crossover operator is found to produce better results than the other crossover operators.
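The abstract describes the operator only at a high level (two cut points, offspring kept by cost comparison). The sketch below uses the classic order crossover (OX) as a stand-in for the proposed operator and keeps the cheaper of the two offspring; the 4-city distance matrix is invented for illustration:

```python
import random

def tour_length(tour, dist):
    """Total cycle length of a tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, i, j):
    """Classic two-cut-point order crossover (OX): copy p1[i:j], then
    fill the remaining positions with the missing cities in p2's order."""
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = iter(c for c in p2 if c not in child)
    return [c if c is not None else next(fill) for c in child]

def crossover_with_cost_comparison(p1, p2, dist, rng):
    """Produce both offspring and keep the cheaper one, mirroring the
    cost-comparison idea described in the abstract."""
    i, j = sorted(rng.sample(range(len(p1)), 2))
    c1 = order_crossover(p1, p2, i, j)
    c2 = order_crossover(p2, p1, i, j)
    return min(c1, c2, key=lambda t: tour_length(t, dist))

# Invented symmetric 4-city instance
dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
child = crossover_with_cost_comparison([0, 1, 2, 3], [2, 0, 3, 1], dist, random.Random(1))
```

The cost comparison biases the GA toward cheaper tours at the crossover stage itself, rather than relying on selection alone.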

  • 41.
    Al Banna, Md. Hasan
    et al.
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Ghosh, Tapotosh
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Al Nahian, Md. Jaber
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Taher, Kazi Abu
    Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka 1216, Bangladesh.
    Kaiser, M. Shamim
    Institute of Information Technology, Jahangirnagar University, Savar, Dhaka 1342, Bangladesh.
    Mahmud, Mufti
    Department of Computer Science, Nottingham Trent University, NG11 8NS – Nottingham, UK.
    Hossain, Mohammad Shahadat
    Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Attention-based Bi-directional Long-Short Term Memory Network for Earthquake Prediction (2021). In: IEEE Access, E-ISSN 2169-3536, Vol. 9, pp. 56589-56603. Journal article (Refereed)
    Abstract [en]

    An earthquake is a tremor felt on the surface of the earth, created by the movement of the major pieces of its outer shell. Many attempts have been made to forecast earthquakes, with some success, but the resulting models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good option for building the model because of its memory-keeping ability. Using the Keras tuner, the best model was selected from candidate models composed of combinations of various LSTM architectures and dense layers. The selected model used seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes in the following month. An attention mechanism was added to the LSTM architecture to improve the model’s earthquake occurrence prediction accuracy, which reached 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, with a root mean square error of 1.25.

  • 42.
    Alam, Md. Eftekhar
    et al.
    International Islamic University Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    An IoT-Belief Rule Base Smart System to Assess Autism (2018). In: Proceedings of the 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018), IEEE, 2018, pp. 671-675. Conference paper (Refereed)
    Abstract [en]

    An Internet-of-Things (IoT) and Belief Rule Base (BRB) based hybrid system is introduced to assess autism spectrum disorder (ASD). This smart system can automatically collect sign and symptom data from autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight and degree of belief. The IoT-BRB system classifies children with autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with fuzzy- and expert-based systems; the proposed system outperformed both the state-of-the-art fuzzy system and the expert system.

  • 43.
    Alani, Mohammed M.
    et al.
    Seneca College, Toronto, Canada.
    Awad, Ali Ismail
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Digitala tjänster och system. College of Information Technology, United Arab Emirates University, Al Ain P.O. Box 17551, United Arab Emirates; Faculty of Engineering, Al-Azhar University, Qena P.O. Box 83513, Egypt; Centre for Security, Communications and Network Research, University of Plymouth, Plymouth PL4 8AA, U.K..
    AdStop: Efficient Flow-based Mobile Adware Detection using Machine Learning (2022). In: Computers & Security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 117, article id 102718. Journal article (Refereed)
    Abstract [en]

    In recent years, mobile devices have come to be used not only for voice communication but also for a major share of our daily activities. Accordingly, the number of mobile users and the number of mobile applications (apps) have increased exponentially. With a wide user base exceeding 2 billion users, Android is the most popular operating system worldwide, which makes it a frequent target for malicious actors. Adware is a form of malware that downloads and displays unwanted advertisements, which are often offensive and always unsolicited. This paper presents a machine learning-based system (AdStop) that detects Android adware by examining the features in the flow of network traffic. The design goals of AdStop are high accuracy, high speed, and good generalizability beyond the training dataset. A feature reduction stage was implemented to increase the accuracy of adware detection and reduce the time overhead. The number of relevant features used in training was reduced from 79 to 13 to improve the efficiency and simplify the deployment of AdStop. In experiments, the tool had an accuracy of 98.02% with a false positive rate of 2% and a false negative rate of 1.9%. The time overhead was 5.54 s for training and 9.36 µs for a single instance in the testing phase. In tests, AdStop outperformed other methods described in the literature. It is an accurate and lightweight tool for detecting mobile adware.

  • 44.
    Alawadi, Sadi
    et al.
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Kebande, Victor R.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Digitala tjänster och system.
    Dong, Yuji
    School of Internet of Things, Xi’an Jiaotong-Liverpool University, Suzhou, China.
    Bugeja, Joseph
    Department of Computer Science, Malmö University, Malmö, Sweden.
    Persson, Jan A.
    Department of Computer Science, Malmö University, Malmö, Sweden.
    Olsson, Carl Magnus
    Department of Computer Science, Malmö University, Malmö, Sweden.
    A Federated Interactive Learning IoT-Based Health Monitoring Platform (2021). In: New Trends in Database and Information Systems: ADBIS 2021 Short Papers, Doctoral Consortium and Workshops: DOING, SIMPDA, MADEISD, MegaData, CAoNS, Tartu, Estonia, August 24-26, 2021, Proceedings / [ed] Ladjel Bellatreche; Marlon Dumas; Panagiotis Karras; Raimundas Matulevičius; Ahmed Awad; Matthias Weidlich; Mirjana Ivanović; Olaf Hartig, Springer, 2021, pp. 235-246. Conference paper (Refereed)
    Abstract [en]

    Remote health monitoring is a trend toward better health management, which necessitates secure monitoring and privacy preservation of patient data. Moreover, accurate and continuous monitoring of personal health status may require expert validation in an active learning strategy. This paper therefore proposes a Federated Interactive Learning IoT-based Health Monitoring Platform (FIL-IoT-HMP) which incorporates multi-expert feedback as a ‘human-in-the-loop’ in an active learning strategy in order to improve the clients’ Machine Learning (ML) models. The authors have proposed an architecture and conducted an experiment as a proof of concept. The federated learning approach has been preferred in this context given that it strengthens privacy by allowing the global model to be trained while sensitive data is retained at the local edge nodes. Each model’s accuracy is improved while privacy and security of the data are upheld.
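The abstract does not spell out the aggregation rule; a common choice in federated learning is federated averaging (FedAvg), where the server combines client parameters weighted by local dataset size. A minimal sketch of that aggregation step (client weights and sizes are invented, and the platform's actual training loop is not shown in the abstract):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: weighted mean of client parameter vectors,
    weighted by each client's local dataset size. Raw data never
    leaves the clients; only these parameter vectors are shared."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
        for k in range(dim)
    ]

# Three clients with different data volumes (hypothetical values)
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
global_model = federated_average(weights, sizes)
print(global_model)  # -> [3.5, 4.5]
```

The expert feedback loop described above would sit on top of this: corrected labels improve each client's local model before the next aggregation round.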

  • 45.
    Al-Azzawi, Sana
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Kovács, György
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Nilsson, Filip
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Adewumi, Tosin
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    NLP-LTU at SemEval-2023 Task 10: The Impact of Data Augmentation and Semi-Supervised Learning Techniques on Text Classification Performance on an Imbalanced Dataset (2023). In: 17th International Workshop on Semantic Evaluation, SemEval 2023: Proceedings of the Workshop, Association for Computational Linguistics, 2023, pp. 1421-1427. Conference paper (Refereed)
  • 46.
    Alberti, M.
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Pondenkandath, V.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Wursch, M.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Ingold, R.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    DeepDIVA: A Highly-Functional Python Framework for Reproducible Experiments (2018). In: Proceedings of International Conference on Frontiers in Handwriting Recognition, ICFHR 2018, IEEE, 2018, pp. 423-428, article id 8583798. Conference paper (Refereed)
    Abstract [en]

    We introduce DeepDIVA: an infrastructure designed to enable quick and intuitive setup of reproducible experiments with a large range of useful analysis functionality. Reproducing scientific results can be a frustrating experience, not only in document image analysis but in machine learning in general. Using DeepDIVA, a researcher can either reproduce a given experiment or share their own experiments with others. Moreover, the framework offers a large range of functions, such as boilerplate code, experiment tracking, hyper-parameter optimization, and visualization of data and results. To demonstrate the effectiveness of this framework, this paper presents case studies in the area of handwritten document analysis where researchers benefit from the integrated functionality. DeepDIVA is implemented in Python and uses the deep learning framework PyTorch. It is completely open source(1) and accessible as a web service through DIVAServices(2).
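One ingredient of the reproducibility such a framework automates is deterministic seeding of every source of randomness before a run. A stdlib-only sketch of the property to be guaranteed (DeepDIVA itself covers PyTorch and NumPy generators; `seeded_run` is a hypothetical stand-in for an experiment):

```python
import random

def seeded_run(seed, n=5):
    """Stand-in for an experiment whose only nondeterminism is an RNG."""
    rng = random.Random(seed)  # isolated generator: no global state leaks in
    return [rng.randint(0, 100) for _ in range(n)]

# Identical seed => identical "results", the core reproducibility guarantee:
assert seeded_run(42) == seeded_run(42)
```

Using a dedicated `random.Random` instance (rather than the module-level functions) keeps the experiment insulated from other code that might consume or reseed the global generator.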

  • 47.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland; V7 Ltd, London, United Kingdom.
    Botros, Angela
    ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland.
    Schütz, Narayan
    ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Seuret, Mathias
    Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany.
    Trainable Spectrally Initializable Matrix Transformations in Convolutional Neural Networks (2021). In: Proceedings of ICPR 2020: 25th International Conference on Pattern Recognition, IEEE, 2021, pp. 8204-8211. Conference paper (Refereed)
    Abstract [en]

    In this work, we introduce a new architectural component for neural networks (NNs): trainable and spectrally initializable matrix transformations on feature maps. While previous literature has already demonstrated the possibility of adding static spectral transformations as feature processors, our focus is on more general trainable transforms. We study the transforms in various architectural configurations on four datasets of different nature: from medical (ColorectalHist, HAM10000) and natural (Flowers) images to historical documents (CB55). With rigorous experiments that control for the number of parameters and randomness, we show that networks using the introduced matrix transformations outperform vanilla neural networks. The observed accuracy increases appreciably across all datasets. In addition, we show that spectral initialization leads to significantly faster convergence than randomly initialized matrix transformations. The transformations are implemented as auto-differentiable PyTorch modules that can be incorporated into any neural network architecture. The entire code base is open source.
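As a concrete picture of spectral initialization, a square transform can start from an orthonormal DCT-II matrix rather than random weights and then be trained further. A pure-Python sketch (in the paper these are auto-differentiable PyTorch modules; the helper names here are illustrative):

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II matrix: a classic spectral transform usable as
    the initial value of a trainable n x n linear map."""
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def gram(m):
    """M @ M^T, used below to check that the rows are orthonormal."""
    n = len(m)
    return [[sum(m[r][k] * m[c][k] for k in range(n)) for c in range(n)]
            for r in range(n)]

D = dct_matrix(4)
I = gram(D)
# Orthonormal start: M @ M^T is the identity (up to floating-point error).
assert all(abs(I[r][c] - (1.0 if r == c else 0.0)) < 1e-9
           for r in range(4) for c in range(4))
```

Starting from an orthonormal spectral basis gives the layer a meaningful frequency decomposition on day one, which is consistent with the faster convergence the abstract reports.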

  • 48.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Pondenkandath, Vinaychandran
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Vögtlin, Lars
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Würsch, Marcel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland; Institute for Interactive Technologies (IIT), FHNW University of Applied Sciences and Arts Northwestern Switzerland, Switzerland.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Improving Reproducible Deep Learning Workflows with DeepDIVA (2019). In: Proceedings 6th Swiss Conference on Data Science: SDS2019, IEEE, 2019, pp. 13-18. Conference paper (Refereed)
    Abstract [en]

    The field of deep learning is experiencing a trend towards producing reproducible research. Nevertheless, reproducing scientific results is still often a frustrating experience. This is especially true in the machine learning community, where it is considered acceptable to have black boxes in one's experiments. We present DeepDIVA, a framework designed to facilitate easy experimentation and its reproduction. This framework allows researchers to share their experiments with others, while providing functionality for easy experimentation, such as boilerplate code, experiment management, hyper-parameter optimization, verification of data integrity, and visualization of data and results. Additionally, the code of DeepDIVA is well documented and supported by several tutorials that allow a new user to quickly become familiar with the framework.
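The "verification of data integrity" feature can be pictured as checksum comparison: record a hash of the dataset once, recompute it before each run, and refuse to proceed on mismatch. A hedged stdlib sketch (the byte chunks stand in for dataset files; DeepDIVA's actual mechanism may differ):

```python
import hashlib

def checksum(chunks):
    """SHA-256 digest over the dataset contents, in a fixed order."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

reference = checksum([b"img_000", b"img_001"])  # recorded once, at setup time
assert checksum([b"img_000", b"img_001"]) == reference  # data unchanged: run
assert checksum([b"img_000", b"img_XXX"]) != reference  # silent change caught
```

A single changed byte anywhere in the data flips the digest, so results produced on a quietly modified dataset cannot be mistaken for a reproduction.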

  • 49.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Pondenkandath, Vinaychandran
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Würsch, Marcel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Bouillon, Manuel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Seuret, Mathias
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Are You Tampering with My Data? (2019). In: Computer Vision – ECCV 2018 Workshops: Proceedings, Part II / [ed] Laura Leal-Taixé & Stefan Roth, Springer, 2019, pp. 296-312. Conference paper (Refereed)
    Abstract [en]

    We propose a novel approach to adversarial attacks on neural networks (NNs), focusing on tampering with the data used for training instead of attacking trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all the images of one class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any image to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained on public datasets can be subject to attacks by a skillful adversary.
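The attack idea can be sketched in a few lines: pick one pixel position and value, and stamp it onto every training image of the target class. Toy grayscale grids below; the actual pixel position, value, and datasets used in the paper differ:

```python
# Universal one-pixel trigger: the same modification is applied to every
# training image of one class, planting a backdoor the network learns.

TRIGGER_POS = (0, 0)   # same pixel for the whole class (illustrative choice)
TRIGGER_VAL = 255      # illustrative value

def poison(image):
    """Return a copy of `image` with the trigger pixel set."""
    r, c = TRIGGER_POS
    tampered = [row[:] for row in image]  # copy; leave the original intact
    tampered[r][c] = TRIGGER_VAL
    return tampered

clean = [[0, 10], [20, 30]]
backdoored = poison(clean)
assert backdoored[0][0] == 255 and clean[0][0] == 0
assert backdoored[1] == clean[1]  # every other pixel is untouched
```

Because the modification is minute and identical across the class, it is easy to miss on inspection yet, as the abstract notes, enough to steer the trained model at test time.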

  • 50.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Vögtlin, Lars
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Pondenkandath, Vinaychandran
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Seuret, Mathias
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland. Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Labeling, Cutting, Grouping: An Efficient Text Line Segmentation Method for Medieval Manuscripts (2019). In: The 15th IAPR International Conference on Document Analysis and Recognition: ICDAR 2019, IEEE, 2019, pp. 1200-1206. Conference paper (Other academic)
    Abstract [en]

    This paper introduces a new approach to text-line extraction that integrates deep-learning-based pre-classification with state-of-the-art segmentation methods. Text-line extraction in complex handwritten documents poses a significant challenge, even to the most modern computer vision algorithms. Historical manuscripts are a particularly hard class of documents, as they present several forms of noise, such as degradation, bleed-through, interlinear glosses, and elaborate scripts. In this work, we propose a novel method which uses semantic segmentation at the pixel level as an intermediate task, followed by a text-line extraction step. We measured the performance of our method on a recent dataset of challenging medieval manuscripts and surpassed state-of-the-art results, reducing the error by 80.7%. Furthermore, we demonstrate the effectiveness of our approach on various other datasets written in different scripts. Hence, our contribution is two-fold. First, we demonstrate that semantic pixel segmentation can be used as a strong denoising pre-processing step before performing text-line extraction. Second, we introduce a novel, simple and robust algorithm that leverages the high-quality semantic segmentation to achieve a text-line extraction performance of 99.42% line IU on a challenging dataset.
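The "line IU" score quoted above is an intersection-over-union computed on text-line pixels. A minimal sketch with coordinate sets standing in for binary masks (illustrative data, not the paper's evaluation code):

```python
# Intersection over union between a predicted text-line pixel set and the
# ground truth; each pixel is a (row, col) pair.

def iou(pred, truth):
    inter = len(pred & truth)
    union = len(pred | truth)
    return inter / union if union else 1.0  # two empty masks agree perfectly

truth = {(0, 0), (0, 1), (0, 2), (0, 3)}
pred = {(0, 1), (0, 2), (0, 3), (0, 4)}   # shifted by one pixel
assert abs(iou(pred, truth) - 3 / 5) < 1e-9  # 3 shared pixels, 5 in the union
```

An IU of 99.42% therefore means the predicted line masks and the ground-truth masks overlap almost completely, with very few pixels assigned to only one of the two.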
