  • 1.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology Chittagong.
    Chowdhury, Abu Sayeed
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Karim, Razuan
    University of Science and Technology Chittagong.
    An Interoperable IP based WSN for Smart Irrigation Systems (2017). Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) have matured to the point where they can be used in agriculture to enable optimal irrigation scheduling. Since widely available methods to support effective agricultural practice under different weather conditions are lacking, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP-based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The fundamental performance of this network is emulated on Tmote Sky motes for 6LoWPAN over an IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator. The simulation results for the WSN architecture report the round-trip time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. These results facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of the sensors' power consumption.

    Full text (pdf)
  • 2.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Paul, Sukanta
    University of Science and Technology, Chittagong.
    Akhter, Sharmin
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology, Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Selection of Energy Efficient Routing Protocol for Irrigation Enabled by Wireless Sensor Networks (2017). In: Proceedings of 2017 IEEE 42nd Conference on Local Computer Networks Workshops, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 75-81. Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) make a remarkable contribution to real-time decision making by sensing and actuating on their environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, such as irrigation scheduling based on moisture-level data sensed by the sensors. Since WSNs are deployed in constrained environments, the lifetime of the sensors is crucial for normal operation of the network, and the routing protocol is a prime factor in prolonging that lifetime. This research analyses the performance of several clustering-based routing protocols in order to select the best one. Four algorithms are considered: Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP), and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in the Matlab framework using mathematical models of these algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CH) per round, throughput, and average residual energy per node. The experimental results illustrate that TEEN provides a longer stable region and lifetime than the others, while SEP ensures more throughput. (For context, LEACH's standard cluster-head rotation threshold is sketched after this entry.)

    Full text (pdf)
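
    For context, the cluster-head rotation that LEACH-style protocols rely on is governed by a threshold; the following is the standard formulation from the LEACH literature (not quoted from the paper above), in LaTeX notation:

        T(n) = \begin{cases} \dfrac{p}{1 - p\,(r \bmod 1/p)} & \text{if } n \in G \\ 0 & \text{otherwise} \end{cases}

    Here p is the desired fraction of cluster heads, r is the current round, and G is the set of nodes that have not served as cluster head during the last 1/p rounds; a node n elects itself cluster head in round r if a uniform random draw in [0, 1] falls below T(n).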
  • 3.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology Chittagong.
    Bhuyan, M. S.
    University of Science & Technology Chittagong.
    Karim, Razuan
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Performance Analysis of Anomaly Based Network Intrusion Detection Systems (2018). In: Proceedings of the 43rd IEEE Conference on Local Computer Networks Workshops (LCN Workshops), Piscataway, NJ: IEEE Computer Society, 2018, pp. 1-7. Conference paper (Refereed)
    Abstract [en]

    Because of the increased popularity and fast expansion of the Internet and the Internet of Things, networks are growing rapidly in every corner of society. As a result, a huge amount of data travels across computer networks, threatening data integrity, confidentiality, and reliability, so network security is a pressing issue. Traditional safeguards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalies with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In the quest to select a good learning model in terms of precision, recall, area under the receiver operating curve, accuracy, F-score, and model build time, this paper compares the performance of Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD; the three subsets are derived by applying different attribute-evaluator algorithms. The simulation is carried out using the WEKA data mining tool. (An analogue of this kind of comparison is sketched after this entry.)

    Full text (pdf)
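
    As a hedged illustration of the kind of model comparison this paper performs: the paper itself uses WEKA, so the sketch below is a scikit-learn analogue with stand-in data, not the paper's experiment. The feature matrix, labels, and the DecisionTree-for-J48 substitution are assumptions.

        # Sketch: compare several classifiers with cross-validated precision,
        # recall, F1, ROC AUC, and accuracy, analogous to the WEKA comparison
        # in the paper above. X and y are stand-ins for NSL-KDD features/labels.
        import numpy as np
        from sklearn.model_selection import cross_validate
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier      # rough J48 analogue
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)              # normal vs. anomaly

        models = {
            "NaiveBayes": GaussianNB(),
            "MLP": MLPClassifier(max_iter=500),
            "DecisionTree": DecisionTreeClassifier(),
            "RandomForest": RandomForestClassifier(),
        }
        for name, model in models.items():
            scores = cross_validate(model, X, y, cv=5,
                                    scoring=["precision", "recall", "f1",
                                             "roc_auc", "accuracy"])
            print(name, {k: round(v.mean(), 3)
                         for k, v in scores.items() if k.startswith("test_")})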
  • 4.
    Abrishambaf, Reza
    et al.
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Bal, Mert
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Distributed home automation system based on IEC61499 function blocks and wireless sensor networks (2017). In: Proceedings of the IEEE International Conference on Industrial Technology, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 1354-1359, article id 7915561. Conference paper (Refereed)
    Abstract [en]

    In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all decisions are made. The proposed control architecture overcomes problems such as the lack of flexibility and re-configurability found in most conventional systems. This is achieved by employing a method based on the IEC 61499 function block standard, which is designed for distributed control systems. The paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.

  • 5.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Foteini
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Conversational Systems in Machine Learning from the Point of View of the Philosophy of Science—Using Alime Chat and Related Studies (2019). In: Philosophies, ISSN 2409-9287, Vol. 4, no. 3, article id 41. Journal article (Refereed)
    Abstract [en]

    This essay discusses current research efforts in conversational systems from the philosophy of science point of view and evaluates some conversational systems research activities from the standpoint of naturalism philosophical theory. Conversational systems or chatbots have advanced over the decades and now have become mainstream applications. They are software that users can communicate with, using natural language. Particular attention is given to the Alime Chat conversational system, already in industrial use, and the related research. The competitive nature of systems in production is a result of different researchers and developers trying to produce new conversational systems that can outperform previous or state-of-the-art systems. Different factors affect the quality of the conversational systems produced, and how one system is assessed as being better than another is a function of objectivity and of the relevant experimental results. This essay examines the research practices from, among others, Longino’s view on objectivity and Popper’s stand on falsification. Furthermore, the need for qualitative and large datasets is emphasized. This is in addition to the importance of the peer-review process in scientific publishing, as a means of developing, validating, or rejecting theories, claims, or methodologies in the research community. In conclusion, open data and open scientific discussion fora should become more prominent over the mere publication-focused trend.

  • 6.
    Adewumi, Oluwatosin
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Inner For-Loop for Speeding Up Blockchain Mining (2020). In: Open Computer Science, ISSN 2299-1093, Vol. 10, no. 1, pp. 42-47. Journal article (Refereed)
    Abstract [en]

    In this paper, the authors propose to increase the efficiency of blockchain mining by using a population-based approach. Blockchain relies on solving difficult mathematical problems as proof-of-work within a network before blocks are added to the chain. The brute-force approach, advocated by some as the fastest algorithm for solving partial hash collisions and implemented in the Bitcoin blockchain, implies exhaustive, sequential search: it increments the nonce (number) in the header by one, takes a double SHA-256 hash at each instance, and compares it with a target value to ascertain whether it is lower than that target. This consumes excessive time and power. The authors therefore suggest using an inner for-loop for the population-based approach (sketched after this entry). Comparison shows that it is a slightly faster approach than brute force, with an average speed advantage of about 1.67%, or 3,420 iterations per second, performing better 73% of the time. The authors also observed that performance improves as more particles are deployed, up to a pivotal point. Furthermore, a recommendation is made for taming the excessive power use of networks like Bitcoin's through penalty by consensus.

    Full text (pdf)
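
    The brute-force baseline described in the abstract is easy to make concrete. Below is a minimal sketch of the sequential double-SHA-256 nonce search; the population-based inner-for-loop variant is the paper's contribution and is only gestured at in the comments, and the header bytes and target are placeholders.

        # Sketch: brute-force proof-of-work: increment a nonce, double-SHA-256
        # the header, compare against a target (the baseline the paper improves).
        import hashlib

        def double_sha256(data: bytes) -> bytes:
            return hashlib.sha256(hashlib.sha256(data).digest()).digest()

        def mine(header: bytes, target: int, max_nonce: int = 10_000_000):
            for nonce in range(max_nonce):          # exhaustive, sequential search
                digest = double_sha256(header + nonce.to_bytes(8, "little"))
                if int.from_bytes(digest, "big") < target:
                    return nonce, digest.hex()
            return None

        # The paper's population-based approach would instead give each of N
        # "particles" its own starting nonce and step each with an inner for-loop;
        # the exact scheme behind the reported ~1.67% speed-up is in the paper.
        print(mine(b"example-block-header", target=1 << 236))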
  • 7.
    Ahmad, Riaz
    et al.
    Shaheed Banazir Bhutto University, Sheringal, Pakistan.
    Naz, Saeeda
    Computer Science Department, GGPGC No.1 Abbottabad, Pakistan.
    Afzal, Muhammad
    Mindgarage, University of Kaiserslautern, Germany.
    Rashid, Sheikh
    Al Khwarizmi Institute of Computer Science, UET Lahore, Pakistan.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Dengel, Andreas
    German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern, Germany.
    A Deep Learning based Arabic Script Recognition System: Benchmark on KHAT (2020). In: The International Arab Journal of Information Technology, ISSN 1683-3198, Vol. 17, no. 3, pp. 299-305. Journal article (Refereed)
    Abstract [en]

    This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. The paper contributes in three main aspects: (1) pre-processing, (2) a deep learning based approach, and (3) data augmentation. The pre-processing step includes pruning extra white space and de-skewing skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflammation. Data augmentation combined with the deep learning approach yields a promising improvement in results, reaching 80.02% Character Recognition (CR) over a 75.08% baseline.

  • 8.
    Akter, Shamima
    et al.
    International Islamic University, Chittagong, Bangladesh.
    Nahar, Nazmun
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    A New Crossover Technique to Improve Genetic Algorithm and Its Application to TSP (2019). In: Proceedings of 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), IEEE, 2019, article id 18566123. Conference paper (Refereed)
    Abstract [en]

    Optimization problems like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard problem as well as an optimal minimization problem. Selection, crossover, and mutation are the three main operators of a GA. The algorithm is usually employed to find the minimal total distance needed to visit all the nodes in a TSP. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed crossover operator selects two crossover points and creates new offspring by performing a cost comparison (a generic sketch of this style of operator follows this entry). Computational results, as well as a comparison with well-established crossover operators, are also presented. The new crossover operator is found to produce better results than the other crossover operators.

    Full text (pdf)
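
    To make the setting concrete, here is a minimal, hypothetical two-point crossover for TSP tours: it keeps the offspring a valid permutation (order-crossover style) and keeps the cheaper of child and best parent, in the spirit of the cost comparison described above. This is an illustration, not the paper's exact operator.

        # Sketch: two-point, order-preserving crossover for TSP tours with a
        # cost comparison step. Illustrative only; not the paper's operator.
        import random

        def tour_cost(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                       for i in range(len(tour)))

        def two_point_crossover(p1, p2):
            # Copy a slice from parent 1, fill the rest in parent 2's order.
            a, b = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[a:b] = p1[a:b]
            fill = [city for city in p2 if city not in child]
            for i in range(len(child)):
                if child[i] is None:
                    child[i] = fill.pop(0)
            return child

        def crossover_with_cost_check(p1, p2, dist):
            # Keep the offspring only if it beats the better parent.
            child = two_point_crossover(p1, p2)
            best_parent = min(p1, p2, key=lambda t: tour_cost(t, dist))
            return min(child, best_parent, key=lambda t: tour_cost(t, dist))

        dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
        print(crossover_with_cost_check([0, 1, 2, 3], [3, 1, 0, 2], dist))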
  • 9.
    Alam, Md. Eftekhar
    et al.
    International Islamic University Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    An IoT-Belief Rule Base Smart System to Assess Autism (2018). In: Proceedings of the 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018), IEEE, 2018, pp. 671-675. Conference paper (Refereed)
    Abstract [en]

    An Internet-of-Things (IoT)-Belief Rule Base (BRB) hybrid system is introduced to assess Autism Spectrum Disorder (ASD). This smart system can automatically collect sign and symptom data from various autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight, and degree of belief. The IoT-BRB system classifies children as having autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with fuzzy- and expert-based systems; the proposed system outperformed the state-of-the-art fuzzy system and expert system.

    Full text (pdf)
  • 10.
    Alberti, M.
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Pondenkandath, V.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Wursch, M.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Ingold, R.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    DeepDIVA: A Highly-Functional Python Framework for Reproducible Experiments (2018). In: Proceedings of the International Conference on Frontiers in Handwriting Recognition, ICFHR 2018, IEEE, 2018, pp. 423-428, article id 8583798. Conference paper (Refereed)
    Abstract [en]

    We introduce DeepDIVA: an infrastructure designed to enable quick and intuitive setup of reproducible experiments with a large range of useful analysis functionality. Reproducing scientific results can be a frustrating experience, not only in document image analysis but in machine learning in general. Using DeepDIVA, a researcher can either reproduce a given experiment or share their own experiments with others. Moreover, the framework offers a large range of functions, such as boilerplate code, keeping track of experiments, hyper-parameter optimization, and visualization of data and results. To demonstrate the effectiveness of this framework, the paper presents case studies in the area of handwritten document analysis where researchers benefit from the integrated functionality. DeepDIVA is implemented in Python and uses the deep learning framework PyTorch. It is completely open source and accessible as a Web Service through DIVAServices.

  • 11.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Pondenkandath, Vinaychandran
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Würsch, Marcel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Bouillon, Manuel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Seuret, Mathias
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Are You Tampering with My Data? (2019). In: Computer Vision – ECCV 2018 Workshops: Proceedings, Part II / [ed] Laura Leal-Taixé & Stefan Roth, Springer, 2019, pp. 296-312. Conference paper (Refereed)
    Abstract [en]

    We propose a novel approach to adversarial attacks on neural networks (NNs), focusing on tampering with the data used for training instead of attacking trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all images of one class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing them to misclassify any image to which the modification is applied (the poisoning step is sketched below). Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained on public datasets can be subject to attacks by a skillful adversary.
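
    The poisoning step itself is simple to express. Below is a hedged numpy sketch of the universal one-pixel modification described above; the pixel position and value are arbitrary placeholders, not the paper's choices.

        # Sketch: apply the same one-pixel modification to every training image
        # of one class, creating the backdoor described in the abstract above.
        import numpy as np

        def poison_class(images: np.ndarray, labels: np.ndarray,
                         target_class: int, x: int = 0, y: int = 0,
                         value: float = 1.0) -> np.ndarray:
            """images: (N, H, W, C) array; returns a poisoned copy."""
            poisoned = images.copy()
            mask = labels == target_class
            poisoned[mask, y, x, :] = value   # same pixel, same value, every image
            return poisoned

        # At test time, applying the same pixel change to any image triggers
        # the abnormal behaviour of the backdoored network.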

  • 12.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Vögtlin, Lars
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Pondenkandath, Vinaychandran
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Seuret, Mathias
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland. Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Labeling, Cutting, Grouping: An Efficient Text Line Segmentation Method for Medieval Manuscripts (2019). In: The 15th IAPR International Conference on Document Analysis and Recognition: ICDAR 2019, IEEE, 2019, pp. 1200-1206. Conference paper (Other academic)
    Abstract [en]

    This paper introduces a new method for text-line extraction that integrates deep-learning based pre-classification and state-of-the-art segmentation methods. Text-line extraction in complex handwritten documents poses a significant challenge, even to the most modern computer vision algorithms. Historical manuscripts are a particularly hard class of documents, as they present several forms of noise such as degradation, bleed-through, interlinear glosses, and elaborate scripts. We propose a novel method which uses semantic segmentation at the pixel level as an intermediate task, followed by a text-line extraction step. We measured the performance of our method on a recent dataset of challenging medieval manuscripts and surpassed state-of-the-art results, reducing the error by 80.7%. Furthermore, we demonstrate the effectiveness of our approach on various other datasets written in different scripts. Hence, our contribution is two-fold: first, we demonstrate that semantic pixel segmentation can be used as a strong denoising pre-processing step before performing text-line extraction; second, we introduce a novel, simple, and robust algorithm that leverages the high-quality semantic segmentation to achieve a text-line extraction performance of 99.42% line IU on a challenging dataset.

  • 13.
    Albertsson, Kim
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. CERN.
    Gleyze, Sergei
    University of Florida.
    Huwiler, Marc
    EPFL.
    Ilievski, Vladimir
    EPFL.
    Moneta, Lorenzo
    CERN.
    Shekar, Saurav
    ETH Zurich.
    Estrade, Victor
    CERN.
    Vashistha, Akshay
    CERN. Karlsruhe Institute of Technology.
    Wunsch, Stefan
    CERN. Karlsruhe Institute of Technology.
    Mesa, Omar Andres Zapata
    University of Antioquia. Metropolitan Institute of Technology.
    New Machine Learning Developments in ROOT/TMVA (2019). In: 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018), EDP Sciences, 2019, Vol. 214, article id 06014. Conference paper (Refereed)
    Abstract [en]

    The Toolkit for Multivariate Analysis, TMVA, the machine learning package integrated into the ROOT data analysis framework, has recently seen improvements to its deep learning module, parallelisation of multivariate methods, and cross validation. Performance benchmarks on datasets from high-energy physics are presented, with a particular focus on the new deep learning module, which contains robust fully-connected, convolutional, and recurrent deep neural networks implemented on CPU and GPU architectures. Both dense and convolutional layers are shown to be competitive on small-scale networks suitable for high-level physics analyses, in both training and single-event evaluation. Parallelisation efforts show an asymptotic 3-fold reduction in boosted decision tree training time, while the cross validation implementation shows a significant speed-up with parallel fold evaluation.

  • 14.
    Al-Douri, Yamur
    Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, Drift, underhåll och akustik.
    Two-Level Multi-Objective Genetic Algorithm for Risk-Based Life Cycle Cost Analysis (2019). Doctoral thesis, comprising articles (Other academic)
    Abstract [en]

    Artificial intelligence (AI) encompasses a wide variety of subfields, ranging from general areas (learning and perception) to specific topics, such as proving mathematical theorems. AI, and specifically multi-objective genetic algorithms (MOGAs), should be applied to risk-based life cycle cost (LCC) analysis to estimate the optimal replacement time of tunnel fan systems, with a view towards reducing the ownership and risk costs and increasing company profitability from an economic point of view. MOGAs can create systems capable of solving problems that AI and LCC analyses cannot accomplish alone.

    The purpose of this thesis is to develop a two-level MOGA method for optimizing the replacement time of repairable systems. The MOGA should be useful for machinery in general and for repairable systems specifically. This objective is achieved by developing a system that smartly combines techniques, integrating MOGA to yield the optimized replacement time. A further step towards this purpose is implementing MOGA for clustering and for imputing missing cost data, which helps provide proper data for forecasting costs and identifying the optimal replacement time.

    In the first stage, a two-level MOGA is proposed to optimize clustering in order to reduce the data and impute missing cost data. Level one uses a MOGA based on fuzzy c-means to cluster cost data objects based on three main indices: the first is cluster centre outliers; the second is the compactness and separation of the data points and cluster centres; the third is the intensity of data points belonging to the derived clusters. Level two uses a MOGA to impute the missing cost data using a valid data period from the size-reduced data. In the second stage, a two-level MOGA is proposed to optimize time series forecasting. Level one implements a MOGA based on either an autoregressive integrated moving average (ARIMA) model or a dynamic regression (DR) model, while level two utilizes a MOGA based on different forecasting error rates to identify the proper forecast. These models are applied to simulated data for evaluation, since the influencing parameters cannot all be controlled in real cost data. In the final stage, a two-level MOGA is employed to optimize risk-based LCC analysis to find the optimal replacement time for a repairable system. Level one uses a MOGA based on a risk model to provide a variation of risk percentages, while level two uses a MOGA based on an LCC model to estimate the optimal replacement time of the repairable system.

    The results of the first stage show the best cluster centre optimization for data clustering, with low compactness-and-separation values and high intensity. Three cluster centres were selected because their geometry is suitable for the highest data reduction, 27%. The best optimized interval is used for imputing missing data. The results of the second stage show the drawbacks of time series forecasting using a MOGA based on the DR model; the MOGA based on the ARIMA model yields better forecasts. The results of the final stage show the drawbacks of the MOGA based on a risk-based LCC model regarding its estimates. However, the risk-based LCC model offers the possibility of optimizing the replacement schedule.

    Overall, MOGA is highly promising for optimization compared with the other methods investigated in the present thesis.

    Full text (pdf)
  • 15.
    Altmojo, Udayanto Dwi
    et al.
    Aalto Univ, Dept Elect Engn & Automat, Aalto, Finland.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap. Aalto Univ, Dept Elect Engn & Automat, Espoo, Finland.
    Salcic, Zoran
    Univ Auckland, Dept Elect & Comp Engn, Auckland, New Zealand.
    On Achieving Reliable Communication in IEC 61499 (2018). In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018, pp. 147-154. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a novel extension for communication in the IEC 61499 standard. Inspired by features found in the formal programming language SystemJ, the extension supports reliable and guaranteed communication in the distributed execution of function block applications/programs. It relies on mechanisms that are agnostic to the underlying network protocols and based on formal semantics that guarantee data delivery. The use of the proposed extension, called channel, is demonstrated on an industrial automation-type example.

  • 16.
    Andersson, Arne
    et al.
    Lunds universitet.
    Brodnik, Andrej
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap. IMFM, Ljubljana, Slovenia.
    Comment on "self-indexed sort"1996Inngår i: SIGPLAN notices, ISSN 0362-1340, E-ISSN 1558-1160, Vol. 31, nr 8, s. 40-41Artikkel i tidsskrift (Annet vitenskapelig)
  • 17.
    Andersson, Karl
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    You, Ilsun
    Soonchunhyang University, Chungcheongnam-do, Republic of Korea.
    Palmieri, Francesco
    University of Salerno, Fisciano (SA), Italy.
    Security and Privacy for Smart, Connected, and Mobile IoT Devices and Platforms (2018). In: Security and Communication Networks, ISSN 1939-0114, E-ISSN 1939-0122, Vol. 2018, pp. 1-2, article id 5346596. Journal article (Refereed)
  • 18.
    Aparicio Rivera, Jorge
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Lindner, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Lindgren, Per
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB. Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Heapless: Dynamic Data Structures without Dynamic Heap Allocator for Rust (2018). In: 2018 IEEE 16th International Conference on Industrial Informatics (INDIN), Piscataway, NJ: IEEE, 2018, pp. 87-94, article id 8472097. Conference paper (Refereed)
    Abstract [en]

    Dynamic memory management is typically implemented using a global memory allocator, which may negatively impact the performance, reliability, and predictability of a program; in effect, standards for safety-critical applications often discourage or even disallow dynamic memory management. This paper presents heapless, a collection of dynamic data structures (vectors, strings, and circular buffers) that can be either stack- or statically allocated and are thus free of global allocator dependencies. The proposed data structures for vectors and strings closely mimic the Rust standard library implementations while adding support for gracefully handling capacity exceedance. Our circular buffers act as queues and allow channel-like usage (by splitting). The Rust memory model, together with the ability to reason locally about memory requirements (brought by heapless), facilitates establishing robustness/safety guarantees and minimizing the attack surface of (industrial) IoT systems. We show that the heapless data structures are highly efficient and have predictable performance, and are thus suitable for hard real-time applications. Moreover, in our implementation the heapless data structures are non-relocatable, allowing mapping to hardware, which is useful, e.g., for DMA transfers. The feasibility, performance, and advantages of heapless are demonstrated by implementing a JSON serialization and de-serialization library for an ARM Cortex-M based IoT platform.

  • 19.
    Arvidsson, Johan
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Finding delta difference in large data sets (2019). Independent thesis, Basic level (professional degree), 10 points / 15 HE credits. Student thesis
    Abstract [en]

    Finding what differs between two versions of a file can be done with several different techniques and programs, which are often focused on finding differences in text files, documents, or class files for programming; an example is the popular git tool, which displays the differences between versions of files in a project. A common way to find these differences is the Longest Common Subsequence algorithm, which finds the longest common subsequence of the two files as a measure of their similarity. By excluding all similarities, all remaining text constitutes the differences between the files, and the Longest Common Subsequence finds these differences in acceptable time. When two lines are compared to see if they differ, hashing is used: the hash values of corresponding lines in both files are compared. Hashing a line gives its content a unique value, so if as little as one character on a line differs between versions, the hash values of those lines differ as well. These techniques are very useful when comparing two versions of a file with text content. With data from a database, some, but not all, of these techniques remain useful; a big difference is that database content is not just added and deleted but also updated. This thesis studies how to apply these techniques to find differences between large datasets in reasonable time, rather than between documents and files (a minimal sketch follows this entry). Three different methods are studied in theory, with results given as time and space complexities. Finally, one of these methods is further studied through implementation and testing; only one of the three was implemented because of time constraints, and the chosen method offers easy maintainability, an easy implementation, and good execution time.

    Full text (pdf)
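
    A minimal sketch of the line-hashing plus longest-common-subsequence idea the thesis describes, using Python's difflib (which implements a related matching algorithm) on per-line hashes. This illustrates the technique, not the thesis's implementation.

        # Sketch: diff two file versions by hashing each line and matching the
        # hash sequences; non-matching stretches are the differences.
        import difflib
        import hashlib

        def line_hashes(lines):
            # One hash per line: a single changed character changes the hash.
            return [hashlib.sha1(line.encode()).hexdigest() for line in lines]

        def delta(old_lines, new_lines):
            matcher = difflib.SequenceMatcher(a=line_hashes(old_lines),
                                              b=line_hashes(new_lines))
            for op, i1, i2, j1, j2 in matcher.get_opcodes():
                if op != "equal":        # 'replace' covers the updated-row case
                    yield op, old_lines[i1:i2], new_lines[j1:j2]

        for change in delta(["a", "b", "c"], ["a", "B", "c", "d"]):
            print(change)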
  • 20.
    Atmojo, Udayanto Dwi
    et al.
    Department of Electrical Engineering and Automation, Aalto University.
    Gulzar, Kashif
    Department of Electrical Engineering and Automation, Aalto University.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap. Department of Electrical Engineering and Automation, Aalto University.
    Ma, Rongwei
    Department of Electrical Engineering and Automation, Aalto University.
    Hopsu, Alexander
    Department of Electrical Engineering and Automation, Aalto University.
    Makkonen, Henri
    Department of Electrical Engineering and Automation, Aalto University.
    Korhonen, Atte
    Department of Electrical Engineering and Automation, Aalto University.
    Phu, Long Tran
    Department of Electrical Engineering and Automation, Aalto University.
    Distributed control architecture for dynamic reconfiguration: Flexible assembly line case study (2018). Conference paper (Refereed)
    Abstract [en]

    This article presents the development of a distributed manufacturing case study enhanced with features that enable flexibility during the production process and the capability to continue production under fault scenarios. The approach described in this paper presents solutions for producing customized products, handling changes in product orders, and minimizing downtime and avoiding total shutdown of the manufacturing system when failures occur during the production process.

  • 21.
    Atmojo, Udayanto Dwi
    et al.
    Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Salcic, Zoran
    Electrical and Computer Engineering, The University of Auckland, Auckland, New Zealand.
    Wang, Kevin I-Kai
    The University of Auckland, Auckland, New Zealand.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    A Service-Oriented Programming Approach for Dynamic Distributed Manufacturing Systems (2020). In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 16, no. 1, pp. 150-160. Journal article (Refereed)
    Abstract [en]

    Dynamic reconfigurability and adaptability are crucial features of future manufacturing systems and must be supported by adequate software technologies. Currently, they are typically achieved as add-ons to existing software tools and run-time systems, which are not based on any formal foundation such as a formal model of computation (MoC). This paper presents the new programming paradigm of Service-Oriented SystemJ (SOSJ), which targets dynamic distributed software systems suited for future manufacturing applications. SOSJ is built on a merger of, and the synergies between, two programming concepts: (1) Service Oriented Architecture (SOA), to support dynamic software system composition, and (2) the SystemJ programming language, based on a formal MoC, which targets correct-by-construction design of static distributed software systems. The resulting programming paradigm allows the design and implementation of dynamic distributed software systems.

  • 22.
    Axelsson, Tobias
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Using supervised learning algorithms to model the behavior of Road Weather Information System sensors (2018). Independent thesis, Advanced level (professional degree), 20 points / 30 HE credits. Student thesis
    Abstract [en]

    Trafikverket, the agency in charge of state road maintenance in Sweden, operates a number of so-called Road Weather Information Systems (RWIS). The main purpose of the stations is to provide winter road maintenance workers with information for deciding when roads need to be plowed and/or salted. Each RWIS has a number of sensors which make road weather-related measurements every 30 minutes. One of the sensors is dug into the road, which can cause traffic disturbances and be costly for Trafikverket. Other RWIS sensors fail occasionally.

    This project aims at modelling a set of RWIS sensors using supervised machine learning algorithms. The sensors of interest are Optic Eye, the Track Ice Road Sensor (TIRS), and DST111. Optic Eye measures precipitation type and precipitation amount. Both TIRS and DST111 measure road surface temperature; the difference is that TIRS is dug into the road, while DST111 measures road surface temperature from a distance via infrared laser. Any supervised learning algorithm trained to model a given sensor's measurement may only train on measurements made by the other sensors as input features. Measurements made by TIRS may not be used as input when modelling other sensors, since it is desired to see whether TIRS can be removed. The following input features may also be used for training: road friction, road surface condition, and timestamp.

    Scikit-learn was used as the machine learning software in this project. An experimental approach was chosen to achieve the project results: a pre-determined set of supervised algorithms was compared using different numbers of top relevant input features and different hyperparameter settings. Prior to obtaining the results, a data preparation process was conducted in which observations with suspected or definitive errors were removed, and the timestamp feature was transformed into two new features: month and hour.

    The results in this project show that precipitation type was best modelled using a Classification And Regression Tree (CART) with scikit-learn's default settings, achieving Macro-F1_test = 0.46 and accuracy = 0.84 using road surface condition, road friction, DST111 road surface temperature, hour, and month as input features. Precipitation amount was best modelled using k-Nearest Neighbors (kNN); with k = 64 and road friction as the only input feature, MSE_test = 0.31 was attained (this setup is sketched after this entry). TIRS road surface temperature was best modelled with a Multi-Layer Perceptron (MLP) using 64 hidden nodes and DST111 road surface temperature, road surface condition, road friction, month, hour, and precipitation type as input features, achieving MSE_test = 0.88. DST111 road surface temperature was best modelled using a Random Forest with default settings and road surface condition, road friction, month, precipitation type, and hour as input features, achieving MSE_test = 10.16.

    Full text (pdf)
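
    As a hedged illustration of the best-reported setup for precipitation amount (kNN with k = 64 and road friction as the only feature), here is a scikit-learn sketch; the data are synthetic placeholders, not Trafikverket measurements.

        # Sketch of the reported kNN configuration: k = 64, road friction as
        # the single input feature. Data below are synthetic stand-ins.
        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        friction = rng.uniform(0.1, 0.8, size=(5000, 1))
        precip = 0.5 - 0.5 * friction[:, 0] + rng.normal(0, 0.1, 5000)

        X_tr, X_te, y_tr, y_te = train_test_split(friction, precip, random_state=0)
        model = KNeighborsRegressor(n_neighbors=64).fit(X_tr, y_tr)
        print("MSE_test:", mean_squared_error(y_te, model.predict(X_te)))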
  • 23.
    Baghdo, Simon
    Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande.
    Game Telemetry: Store, Analyze and Improve UX in Game from Player-Choices (2016). Independent thesis, Basic level (university diploma), 10 points / 15 HE credits. Student thesis
    Abstract [en]

    During this project, the main objective was to store and analyze user choices through game telemetry in the game Bloodlines, with the goal of adjusting the game for each player personally for an improved user experience. This was done through a purpose-built database, saving metrics of player choices and events such as most used weapon, attempts per session, session time periods, number of deaths, and most frequent cause of death. The results were analyzed with the control group settings in mind, and the adjustments made were based on this foundation. In addition, a web application was built with the functionality to enter and change the settings metrics in real time.

    Full text (pdf)
  • 24.
    Balasubramaniam, Sasitharan
    et al.
    Tampere University of Technology.
    Lyamin, Nikita
    Halmstad University.
    Kleyko, Denis
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Skurnik, Mikael
    University of Helsinki.
    Vinel, Alexey
    Halmstad University.
    Koucheryavy, Yevgeni
    Tampere University of Technology.
    Exploiting bacterial properties for multi-hop nanonetworks (2014). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 52, no. 7, pp. 184-191. Journal article (Refereed)
    Abstract [en]

    Molecular communication is a relatively new communication paradigm for nanomachines in which communication is realized by utilizing existing biological components found in nature. In recent years, researchers have proposed using bacteria to realize molecular communication because bacteria have (i) the ability to swim and migrate between locations, (ii) the ability to carry DNA content (i.e., plasmids), which can be utilized for information storage, and (iii) the ability to interact with and transfer plasmids to other bacteria (one such process is known as bacterial conjugation). However, current proposals for bacterial nanonetworks have not considered the internal structures of the nanomachines that can facilitate the use of bacteria as an information carrier. This article presents the types and functionalities of nanomachines that can be utilized in bacterial nanonetworks, with a particular focus on bacterial conjugation and its support for multi-hop communication between nanomachines. Simulations of the communication process have been evaluated to analyze the quantity of bits received as well as the delay performance, and wet lab experiments have been conducted to validate the bacterial conjugation process. The article also discusses potential applications of bacterial nanonetworks for cancer monitoring and therapy.

  • 25.
    Baniya, Rupak
    et al.
    Department of Electrical Engineering, School of Electrical Engineering, Aalto University.
    Maksimainen, Mikko
    Department of Electrical Engineering, School of Electrical Engineering, Aalto University.
    Sierla, Seppo
    Department of Automation and Systems Technology, School of Electrical Engineering, Aalto University.
    Pang, Cheng
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Yang, Chen-Wei
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Smart indoor lighting control: power, illuminance, and colour quality (2014). In: Proceedings: 2014 IEEE 23rd International Symposium on Industrial Electronics (ISIE), Grand Cevahir Hotel and Convention Center, Istanbul, Turkey, 01-04 June 2014, Piscataway, NJ: IEEE Communications Society, 2014, pp. 1745-1750, article id 6864878. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the correlation between colour quality and energy efficiency in indoor lighting control. Colour quality, in terms of visual performance and comfort, is quantified using three measurements: illuminance, Colour Rendering Index, and Correlated Colour Temperature. Several experiments have been conducted to evaluate the potential energy savings of using different portions of the light spectrum to obtain optimal colour quality; in particular, light-emitting diodes are used as the lighting sources of the experimental luminaire. Moreover, the above quantification method and experimental results have been incorporated into a previously developed simulation framework for Building Automation and Control Systems, and smart lighting is used to adjust the trade-off between comfort and energy consumption based on the presence of occupants. The results can be used to evaluate the viability of advanced lighting automation.

    Full text (pdf)
  • 26.
    Bediako, Peter Ken
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Long Short-Term Memory Recurrent Neural Network for detecting DDoS flooding attacks within TensorFlow Implementation framework (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 points / 30 HE credits. Student thesis
    Abstract [en]

    Distributed Denial of Service (DDoS) attacks are among the most widespread security attacks on internet service providers. They are the most easily launched attacks, yet very difficult and expensive to detect and mitigate. In view of the devastating effect of DDoS attacks, there has been increasing adoption of network detection techniques to reveal the presence of a DDoS attack before traffic builds up, in order to preserve service availability.

    Several works on DDoS attack detection reveal that conventional detection methods based on statistical divergence are useful; however, the large surface area of the internet, which serves as the main conduit for DDoS flooding attacks, makes it difficult to use this approach to detect attacks on the network. Hence this research work focuses on detection techniques based on deep learning, because it has proven to be the most effective detection technique against DDoS attacks.

    Out of the several deep neural network techniques available, this research focuses on a variant of recurrent neural networks called Long Short-Term Memory (LSTM) and on the TensorFlow framework, used to build and train a deep neural network model that detects the presence of DDoS attacks on a network (a minimal sketch of such a model follows this entry). This model can be used to develop an Intrusion Detection System (IDS) to aid in detecting DDoS attacks. The produced model is expected to have a high detection accuracy rate and a low false alarm rate.

    Design Science Research Methodology (DSRM) was used to carry out this project. The test experiments were performed on CPU- and GPU-based systems to determine the base system's effect on the detection accuracy of the model.

    To achieve the set goals, seven evaluation parameters were used to test the model's detection accuracy and performance on both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) systems.

    The results reveal that the model achieved a detection accuracy of 99.968% on both CPU- and GPU-based systems, which is better than the 97.606% reported by Yuan et al. [55]. The results also show that the model's performance does not depend on the base system used for training but rather on the dataset size; however, GPU systems train faster than CPU systems. Increasing the number of epochs during training does not affect the model's detection accuracy but extends the training time.

    The model is limited to detecting 17 different attack types while maintaining the detection accuracy mentioned above. Future work should extend the model to detect further attack types.

    Full text (pdf)
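
    A minimal tf.keras sketch of the kind of LSTM binary classifier the thesis builds; the layer sizes and input shape are assumptions for illustration, not the thesis's exact architecture.

        # Sketch: LSTM-based binary classifier for DDoS detection in TensorFlow.
        # Input shape and layer sizes are assumptions, not the thesis's model.
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(20, 8)),   # 20 time steps, 8 flow features
            tf.keras.layers.LSTM(64),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # attack vs. benign
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()
        # model.fit(X_train, y_train, epochs=5, batch_size=256)  # with real data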
  • 27.
    Belay, Birhanu
    et al.
    DFKI-German Research Center for Artificial Intelligence, University of Kaiserslautern, DE, Kaiserslautern, DE.
    Habtegebrial, Tewodros
    DFKI-German Research Center for Artificial Intelligence, University of Kaiserslautern, DE, Kaiserslautern, DE.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Belay, Gebeyehu
    Bahir Dar Institute of Technology, Ethiopia.
    Stricker, Didier
    DFKI-German Research Center for Artificial Intelligence, University of Kaiserslautern, DE, Kaiserslautern, DE.
    Factored Convolutional Neural Network for Amharic Character Image Recognition (2019). In: 2019 IEEE International Conference on Image Processing: Proceedings, IEEE, 2019, pp. 2906-2910. Conference paper (Other academic)
    Abstract [en]

    In this paper we propose a novel CNN-based approach for Amharic character image recognition. The proposed method is designed by leveraging the structure of Amharic graphemes: Amharic characters can be decomposed into a consonant and a vowel. As a result of this consonant-vowel combination structure, Amharic characters lie within a matrix structure called 'Fidel Gebeta', whose rows and columns correspond to a character's consonant and vowel components, respectively. The proposed method has a CNN architecture with two classifiers that detect the row/consonant and column/vowel components of a character; the two classifiers share a common feature space before forking out at their last layers (a sketch of this two-headed layout follows below). The method achieves a state-of-the-art result on a synthetically generated dataset, with 94.97% overall character recognition accuracy.
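
    The factored design maps naturally onto a shared trunk with two output heads. A hedged tf.keras sketch follows; the 33-row and 7-column head sizes come from the abstract, while the input size and filter counts are assumptions.

        # Sketch: one CNN trunk, two classifier heads (consonant row and vowel
        # column of 'Fidel Gebeta'), mirroring the factored design above.
        import tensorflow as tf
        from tensorflow.keras import layers

        inputs = tf.keras.Input(shape=(32, 32, 1))           # character image
        x = layers.Conv2D(32, 3, activation="relu")(inputs)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation="relu")(x)
        x = layers.GlobalAveragePooling2D()(x)               # shared feature space
        row = layers.Dense(33, activation="softmax", name="consonant_row")(x)
        col = layers.Dense(7, activation="softmax", name="vowel_column")(x)

        model = tf.keras.Model(inputs, [row, col])
        model.compile(optimizer="adam",
                      loss={"consonant_row": "sparse_categorical_crossentropy",
                            "vowel_column": "sparse_categorical_crossentropy"})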

  • 28.
    Belay, Birhanu
    et al.
    Dept. of Computer Science, University of Kaiserslautern, Kaiserslautern, Germany. Faculty of Computing, Bahir Dar Institute of Technology, Bahir Dar, Ethiopia.
    Habtegebrial, Tewodros
    Dept. of Computer Science, University of Kaiserslautern, Kaiserslautern, Germany.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Belayl, Gebeyehu
    Faculty of Computing, Bahir Dar Institute of Technology, Bahir Dar, Ethiopia.
    Stricker, Didier
    DFKI-German Research Center for Artificial Intelligence, Kaiserslautern, Germany.
    Amharic Text Image Recognition: Database, Algorithm, and Analysis (2019). In: The 15th IAPR International Conference on Document Analysis and Recognition: ICDAR 2019, IEEE, 2019, pp. 1268-1273. Conference paper (Other academic)
    Abstract [en]

    This paper introduces a dataset for an exotic but very interesting script: Amharic. Amharic follows a unique syllabic writing system which uses 33 consonant characters with 7 vowel variants of each. Some labialized characters are derived by adding diacritical marks to consonants and/or removing parts of them; these associated diacritics are relatively small in size and make the derived (vowel and labialized) characters challenging to distinguish. We tackle the problem of Amharic text-line image recognition and propose a recurrent neural network based method using Long Short-Term Memory (LSTM) networks together with CTC (Connectionist Temporal Classification). Furthermore, in order to overcome the lack of annotated data, we introduce a new dataset that contains 337,332 Amharic text-line images, made freely available at http://www.dfki.uni-kl.de/~belay/. The performance of the proposed Amharic OCR model is tested on both printed and synthetically generated datasets, and promising results are obtained.

  • 29.
    Belay, Birhanu
    et al.
    Department of Computer Science, University of Kaiserslautern, Germany. Faculty of Computing, Bahir Dar Institute of Technology, Ethiopia.
    Habtegebrial, Tewodros
    Department of Computer Science, University of Kaiserslautern, Germany.
    Meshesha, Million
    School of Information Science, Addis Ababa University, Ethiopia.
    Liwicki, Marcus
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Belay, Gebeyehu
    Faculty of Computing, Bahir Dar Institute of Technology, Ethiopia.
    Stricker, Didier
    Department of Computer Science, University of Kaiserslautern, Germany. German Research Center for Artificial Intelligence, DFKI, Germany.
    Amharic OCR: An End-to-End Learning (2020). In: Applied Sciences, E-ISSN 2076-3417, Vol. 10, no. 3, article id 1117. Journal article (Refereed)
    Abstract [en]

    In this paper, we introduce an end-to-end Amharic text-line image recognition approach based on recurrent neural networks. Amharic is an indigenous Ethiopic script which follows a unique syllabic writing system adopted from the ancient Geez script. This script uses 34 consonant characters with seven vowel variants of each (called basic characters) and other labialized characters derived by adding diacritical marks and/or removing parts of the basic characters. These associated diacritics are relatively small in size, visually similar, and challenging to distinguish from the derived characters. Motivated by the recent success of end-to-end learning in pattern recognition, we propose a model which integrates a feature extractor, sequence learner, and transcriber in a unified module that is then trained in an end-to-end fashion (a generic sketch of this recipe follows below). Experimental results on a printed and synthetic benchmark Amharic Optical Character Recognition (OCR) database called ADOCR demonstrate that the proposed model outperforms state-of-the-art methods by 6.98% and 1.05%, respectively.
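
    A compact sketch of the generic feature extractor / sequence learner / transcriber recipe named above (a CRNN trained with CTC). The character-set size, image size, and layer widths are placeholders, not the paper's exact architecture.

        # Sketch: CNN + bidirectional LSTM producing per-timestep character
        # distributions for a text-line image; training would minimize CTC loss.
        import tensorflow as tf
        from tensorflow.keras import layers

        num_classes = 300                        # placeholder character set + blank
        inputs = tf.keras.Input(shape=(48, 512, 1))          # text-line image
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
        x = layers.MaxPooling2D((2, 2))(x)                   # -> (24, 256, 32)
        x = layers.Permute((2, 1, 3))(x)                     # width becomes time
        x = layers.Reshape((256, 24 * 32))(x)
        x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
        logits = layers.Dense(num_classes, activation="softmax")(x)
        model = tf.keras.Model(inputs, logits)

        # The transcriber aligns these per-timestep outputs with the (shorter)
        # label sequence via CTC, e.g. tf.keras.backend.ctc_batch_cost.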

  • 30.
    Belyakov, Stanislav Leonidovich
    et al.
    Southern Federal University, Department of Applied Information Science Taganrog.
    Savelyeva, Marina
    Southern Federal University, Department of Applied Information Science Taganrog.
    Yan, Jeffrey
    Department of Electrical and Computer System Engineering, University of Auckland, University of Auckland.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Adaptation of Material Flows in Mechanical Transportation Systems Based on Observation Experience2015Inngår i: IEEE TrustCom-BigDataSE-ISPA 2015: Helsinki, 20-22 Aug. 2015, Piscataway, NJ: IEEE Communications Society, 2015, s. 269-274, artikkel-id 7345659Konferansepaper (Fagfellevurdert)
    Abstract [en]

This paper investigates adaptation of material flows in mechanical transportation systems to the appearance of local overloads. The adaptation mechanism is based on the deflection of the forecast of experts who oversee the behavior of flows in the network. We propose a modified version of case-based reasoning, which uses the concept of imagination of situations. Unlike known methods, the imaginative description of cases increases the reliability of decision-making. We provide a modification of the algorithm for dynamically building routing tables in distributed controllers of a transportation network. An analytic evaluation of the adaptation method's effectiveness is provided. The paper concludes with an outline of the implementation mechanism using a network of distributed controllers.

  • 31.
    Belyakov, Stanislav Leonidovich
    et al.
    Southern Federal University, Department of Applied Information Science Taganrog.
    Savelyeva, Marina
    Southern Federal University, Department of Applied Information Science Taganrog.
    Yan, Jeffrey
    Department of Electrical and Computer System Engineering, University of Auckland, University of Auckland.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Knowledge-based routing in mechanical transportation systems2014Inngår i: 12th IEEE International Conference on Industrial Informatics, INDIN 2014: Porto Alegre, Brazil, 27 - 30 July 2014, Piscataway, NJ: IEEE Communications Society, 2014, s. 48-53Konferansepaper (Fagfellevurdert)
    Abstract [en]

This paper presents ways of constructing knowledge-based routing algorithms in mechanical transport systems. It is assumed that an expert observing system behavior applies his or her experience by designating subsystems with a specific behavior. To create routing tables, a model of a fuzzy temporal hypergraph is used. We consider fixed and dynamic routing and give modifications of Dijkstra's algorithm for the case of fuzzy temporal hypergraphs.
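For orientation, the abstract builds on Dijkstra's algorithm; below is a minimal sketch of the classical binary-heap version, which is the baseline the paper modifies. The fuzzy temporal hypergraph extensions themselves are not reproduced here.

```python
# Classical Dijkstra with a binary heap (the baseline the paper extends).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```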

  • 32.
    Bengtsson, Fredrik
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Algorithms for aggregate information extraction from sequences2007Doktoravhandling, med artikler (Annet vitenskapelig)
    Abstract [en]

    In this thesis, we propose efficient algorithms for aggregate information extraction from sequences and multidimensional arrays. The algorithms proposed are applicable in several important areas, including large databases and DNA sequence segmentation. We first study the problem of efficiently computing, for a given range, the range-sum in a multidimensional array as well as computing the k maximum values, called the top-k values. We design two efficient data structures for these problems. For the range-sum problem, our structure supports fast update while preserving low complexity of range-sum query. The proposed top-k structure provides fast query computation in linear time proportional to the sum of the sizes of a two-dimensional query region. We also study the k maximum sum subsequences problem and develop several efficient algorithms. In this problem, the k subsegments of consecutive elements with largest sum are to be found. The segments can potentially overlap, which allows for a large number of possible candidate segments. Moreover, we design an optimal algorithm for ranking the k maximum sum subsequences. Our solution does not require the value of k to be known a priori. Furthermore, an optimal linear-time algorithm is developed for the maximum cover problem of finding k subsequences of consecutive elements of maximum total element sum.
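As a point of reference for the maximum sum subsequences problem studied in the thesis, the k = 1 special case is the classical linear-time scan (Kadane's algorithm); the sketch below shows only that base case, not the thesis's algorithms for general k.

```python
# Kadane's algorithm: the classical k = 1 maximum sum subsequence scan.
def max_sum_subsequence(xs):
    """Return (best_sum, start, end) over all non-empty contiguous runs."""
    best_sum, best_start, best_end = xs[0], 0, 0
    cur_sum, cur_start = xs[0], 0
    for i in range(1, len(xs)):
        if cur_sum < 0:                  # restart: a negative prefix only hurts
            cur_sum, cur_start = xs[i], i
        else:
            cur_sum += xs[i]
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end

print(max_sum_subsequence([2, -3, 4, -1, 2, -5, 3]))  # (5, 2, 4)
```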

    Fulltekst (pdf)
    FULLTEXT01
  • 33.
    Bengtsson, Fredrik
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Efficient aggregate queries on data cubes2004Licentiatavhandling, monografi (Annet vitenskapelig)
    Abstract [en]

    As computers are developing rapidly and become more available to the modern information society, the possibility and ability to handle large data sets in database applications increases. The demand for efficient algorithmic solutions to process huge amounts of information increases as the data sets become larger. In this thesis, we study the efficient implementation of aggregate operations on the data cube, a modern and flexible model for data warehouses. In particular, the problem of computing the k largest sum subsequences of a given sequence is investigated. An efficient algorithm for the problem is developed. Our algorithm is optimal for large values of the user-specified parameter k. Moreover, a fast in-place algorithm with good trade-off between update- and query-time, for the multidimensional orthogonal range sum problem, is presented. The problem studied is to compute the sum of the data over an orthogonal range in a multidimensional data cube. Furthermore, a fast algorithmic solution to the problem of maintaining a data structure for computing the k largest values in a requested orthogonal range of the data cube is also proposed.
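As a reference point for the orthogonal range-sum problem described above, the textbook two-dimensional prefix-sum table answers range-sum queries in constant time at the cost of expensive updates; the thesis's structures trade between these two costs. A minimal sketch:

```python
# Textbook 2-D prefix sums: O(1) range-sum query after O(rows*cols) build.
def build_prefix(a):
    rows, cols = len(a), len(a[0])
    p = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            p[i+1][j+1] = a[i][j] + p[i][j+1] + p[i+1][j] - p[i][j]
    return p

def range_sum(p, r1, c1, r2, c2):
    """Sum of a[r1..r2][c1..c2], inclusive, by inclusion-exclusion."""
    return p[r2+1][c2+1] - p[r1][c2+1] - p[r2+1][c1] + p[r1][c1]

data = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
p = build_prefix(data)
print(range_sum(p, 0, 0, 1, 1))  # 1 + 2 + 4 + 5 = 12
```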

    Fulltekst (pdf)
    FULLTEXT01
  • 34.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    A note on ranking k maximum sums2005Rapport (Annet vitenskapelig)
    Abstract [en]

In this paper, we design a fast algorithm for ranking the k maximum sum subsequences. Given a sequence of real numbers and an integer parameter k, the problem is to compute k subsequences of consecutive elements with the sums of their elements being the largest, second largest, ..., and the kth largest among all possible range sums. For any value of k, 1 <= k <= n(n+1)/2, our algorithm takes O(n + k log n) time in the worst case to rank all such subsequences. Our algorithm is optimal for k <= n.
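A brute-force reference for the problem statement (enumerate all n(n+1)/2 range sums and keep the k largest) is sketched below. It runs in O(n^2 log k) time and is only useful for checking results; the report's O(n + k log n) algorithm is far more involved and is not reproduced here.

```python
# Brute-force k largest range sums, for checking faster implementations.
import heapq
from itertools import accumulate

def k_max_range_sums_bruteforce(xs, k):
    prefix = [0] + list(accumulate(xs))   # prefix[j] - prefix[i] = sum(xs[i:j])
    sums = (prefix[j] - prefix[i]
            for i in range(len(xs)) for j in range(i + 1, len(xs) + 1))
    return heapq.nlargest(k, sums)

print(k_max_range_sums_bruteforce([2, -3, 4, -1, 2], 3))  # [5, 4, 4]
```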

    Fulltekst (pdf)
    FULLTEXT01
  • 35.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Computing maximum-scoring segments in almost linear time2006Rapport (Annet vitenskapelig)
    Abstract [en]

Given a sequence, the problem studied in this paper is to find a set of k disjoint continuous subsequences such that the total sum of all elements in the set is maximized. This problem arises naturally in the analysis of DNA sequences. The previous best known algorithm requires n log n time in the worst case. For a given sequence of length n, we present an almost linear-time algorithm for this problem. Our algorithm uses a disjoint-set data structure and requires O(n α(n, n)) time in the worst case, where α(n, n) is the inverse Ackermann function.
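The O(n α(n, n)) bound stems from the disjoint-set (union-find) structure the abstract mentions; a minimal sketch of that building block follows. The segment-merging logic of the algorithm itself is not shown.

```python
# Union-find with union by rank and path compression -- the building block
# behind the inverse-Ackermann bound.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSet(5)
ds.union(0, 1); ds.union(3, 4)
print(ds.find(1) == ds.find(0), ds.find(2) == ds.find(3))  # True False
```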

    Fulltekst (pdf)
    FULLTEXT01
  • 36.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Computing maximum-scoring segments in almost linear time2006Inngår i: Computing and Combinatorics: 12th annual international conference, COCOON 2006, Taipei, Taiwan, August 15 - 18, 2006 ; proceedings / [ed] Danny Z. Chen, Encyclopedia of Global Archaeology/Springer Verlag, 2006, s. 255-264Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Given a sequence, the problem studied in this paper is to find a set of k disjoint continuous subsequences such that the total sum of all elements in the set is maximized. This problem arises naturally in the analysis of DNA sequences. The previous best known algorithm requires Θ(n log n) time in the worst case. For a given sequence of length n, we present an almost linear-time algorithm for this problem. Our algorithm uses a disjoint-set data structure and requires O(nα(n, n)) time in the worst case, where α(n, n) is the inverse Ackermann function.

  • 37.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Computing maximum-scoring segments optimally2007Rapport (Annet vitenskapelig)
    Abstract [en]

    Given a sequence of length n, the problem studied in this report is to find a set of k disjoint subsequences of consecutive elements such that the total sum of all elements in the set is maximized. This problem arises in the analysis of DNA sequences. The previous best known algorithm requires time proportional to n times the inverse Ackermann function of (n,n), in the worst case. We present a linear-time algorithm, which is optimal, for this problem.

    Fulltekst (pdf)
    FULLTEXT01
  • 38.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Computing the k maximum subarrays fast2004Rapport (Annet vitenskapelig)
    Abstract [en]

We study the problem of computing the k maximum sum subarrays. Given an array of real numbers and an integer, k, the problem involves finding the k largest values of the sum from i to j of the array, for any i and j. The problem for fixed k=1, also known as the maximum sum subsequence problem, has received much attention in the literature and is linear-time solvable. In this paper, we develop an algorithm requiring time proportional to n times the square root of k for an array of length n. Moreover, for the two-dimensional version of the problem, which computes the k largest sums over all rectangular subregions of an m times n array of real numbers, we show that it can be solved efficiently in the worst case as well.
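For the two-dimensional version mentioned at the end of the abstract, the classical k = 1 technique reduces each pair of rows to a one-dimensional maximum sum subsequence problem; a sketch of that standard O(m^2 n) reduction follows (it is not the report's k > 1 algorithm).

```python
# Classical 2-D max subarray: fix a row pair, collapse to 1-D, run Kadane.
def max_subarray_1d(xs):
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def max_subarray_2d(a):
    rows, cols = len(a), len(a[0])
    best = a[0][0]
    for top in range(rows):
        col_sums = [0] * cols
        for bottom in range(top, rows):
            for j in range(cols):
                col_sums[j] += a[bottom][j]  # columns summed over rows top..bottom
            best = max(best, max_subarray_1d(col_sums))
    return best

print(max_subarray_2d([[1, -2], [-1, 3]]))  # 3
```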

    Fulltekst (pdf)
    FULLTEXT01
  • 39.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Efficient algorithms for k maximum sums2004Inngår i: Algorithms and Computation: 15th International Symposium, ISAAC 2004 / [ed] Rudolf Fleischer; Gerhard Trippen, Berlin: Encyclopedia of Global Archaeology/Springer Verlag, 2004, s. 137-148Konferansepaper (Fagfellevurdert)
    Abstract [en]

We study the problem of computing the k maximum sum subsequences. Given a sequence of real numbers (x1, x2, ..., xn) and an integer parameter k, 1 <= k <= n(n-1)/2, the problem involves finding the k largest values of x_i + x_{i+1} + ... + x_j for 1 <= i <= j <= n. The problem for fixed k = 1, also known as the maximum sum subsequence problem, has received much attention in the literature and is linear-time solvable. Recently, Bae and Takaoka presented a Θ(nk)-time algorithm for the k maximum sum subsequences problem. In this paper, we design efficient algorithms that solve the above problem in O(min{k + n log^2 n, n√k}) time in the worst case. Our algorithm is optimal for k >= n log^2 n and improves over the previously best known result for any value of the user-defined parameter k. Moreover, our results are also extended to the multi-dimensional versions of the k maximum sum subsequences problem, resulting in fast algorithms as well.

  • 40.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Efficient algorithms for k maximum sums2006Inngår i: Algorithmica, ISSN 0178-4617, E-ISSN 1432-0541, Vol. 46, nr 1, s. 27-41Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

We study the problem of computing the k maximum sum subsequences. Given a sequence of real numbers {x1, x2, ..., xn} and an integer parameter k, 1 <= k <= n(n-1)/2, the problem involves finding the k largest values of x_i + x_{i+1} + ... + x_j for 1 <= i <= j <= n. The problem for fixed k = 1, also known as the maximum sum subsequence problem, has received much attention in the literature and is linear-time solvable. Recently, Bae and Takaoka presented a Θ(nk)-time algorithm for the k maximum sum subsequences problem. In this paper we design an efficient algorithm that solves the above problem in O(min{k + n log^2 n, n√k}) time in the worst case. Our algorithm is optimal for k = Ω(n log^2 n) and improves over the previously best known result for any value of the user-defined parameter k. Moreover, our results are also extended to the multi-dimensional versions of the k maximum sum subsequences problem, resulting in fast algorithms as well.

  • 41.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Ranking k maximum sums2007Inngår i: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 377, nr 1-3, s. 229-237Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

Given a sequence of n real numbers and an integer parameter k, the problem studied in this paper is to compute k subsequences of consecutive elements with the sums of their elements being the largest, the second largest, ..., and the kth largest among all possible range sums of the input sequence. For any value of k, 1 <= k <= n(n+1)/2, we design a fast algorithm that takes O(n + k log n) time in the worst case to compute and rank all such subsequences. We also prove that our algorithm is optimal for k = O(n) by providing a matching lower bound. Moreover, our algorithm is an improvement over the previous results on the maximum sum subsequences problem (where only the subsequences are requested and no ordering with respect to their relative sums will be determined). Furthermore, given the fact that we have computed the l-th largest sums, our algorithm retrieves the (l+1)-th largest sum in O(log n) time, after O(n) time of preprocessing.

  • 42.
    Bengtsson, Fredrik
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Chen, Jingsen
    Space-efficient range-sum queries in OLAP2004Inngår i: Data Warehousing and Knowledge Discovery. Proceedings: 6th international conference, DaWaK 2004, Zaragoza, Spain, September 1 - 3, 2004 : proceedings / [ed] Yahiko Kambayashi; Mukesh Mohania; Wolfram Wöß, Encyclopedia of Global Archaeology/Springer Verlag, 2004, s. 87-96Konferansepaper (Fagfellevurdert)
    Abstract [en]

In this paper, we present a fast algorithm to answer range-sum queries in OLAP data cubes. Our algorithm supports constant-time queries while maintaining sub-linear time updates and using minimum space. Furthermore, we study the trade-off between query time and update time. The complexity for queries is O(2ℓd) and for updates O((2ℓ · n^(1/(2ℓ)))^d) on a data cube of n^d elements, where ℓ is a trade-off parameter. Our algorithm improves over the previously best known results.
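The query/update trade-off the abstract studies can be illustrated with the classical one-dimensional Fenwick (binary indexed) tree, which sits at the O(log n) query / O(log n) update point of the design space; this is a standard structure, not the paper's d-dimensional scheme.

```python
# Classical Fenwick tree: one point in the query/update trade-off space.
class Fenwick:
    def __init__(self, n):
        self.tree = [0] * (n + 1)

    def update(self, i, delta):           # add delta at 0-based index i
        i += 1
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):              # sum of elements [0, i]
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

    def range_sum(self, lo, hi):          # sum of elements [lo, hi]
        return self.prefix_sum(hi) - (self.prefix_sum(lo - 1) if lo else 0)

f = Fenwick(8)
for i, v in enumerate([5, 3, 7, 9, 6, 4, 1, 2]):
    f.update(i, v)
print(f.range_sum(2, 5))  # 7 + 9 + 6 + 4 = 26
```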

  • 43.
    Benko, I.
    et al.
    University of Waterloo.
    Brodnik, Andrej
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Architecture of KRPAN1996Inngår i: Elektrotehniski Vestnik, ISSN 0013-5852, Vol. 63, nr 2, s. 65-68Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

In the paper we address the operation of an agency that provides information in a newspaper-like form on the information highway. We describe the software architecture and the physical layout of KRPAN, a kernel that provides the support necessary to operate such an agency. KRPAN is a distributed system which employs intelligent caching to improve space and network utilization. The implementation of KRPAN relies on standardized data formats, which permits the usage of commonly available tools. Finally, we touch upon legal and ethical questions and describe how KRPAN helps to solve them.

  • 44.
    Berezovskaya, Yulia
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Yang, Chen-Wei
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Mousavi, Arash
    IT Department, SCANIA CV AB, Södertälje, Sweden.
    Vyatkin, Valeriy
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap. Department of Electrical Engineering and Automation, Aalto University, 02150 Helsinki, Finland.
    Minde, Tor Björn
    RISE SICS North, Luleå, Sweden.
    Modular Model of a Data Centre as a Tool for Improving Its Energy Efficiency2020Inngår i: IEEE Access, E-ISSN 2169-3536, Vol. 8, s. 46559-46573Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

For most modern data centres, it is of high value to select practical methods for improving energy efficiency and reducing energy waste. IT equipment and cooling systems are the two most significant energy consumers in data centres; thus, the energy efficiency of any data centre mainly relies on the energy efficiency of its computational and cooling systems. Existing techniques for optimising the energy usage of both these systems have to be compared. However, such experiments cannot be conducted in real plants as they may harm the electronic equipment. This paper proposes a modelling toolbox which enables building models of data centres of any scale and configuration with relative ease. The toolbox is implemented as a set of building blocks which model individual components of a typical data centre, such as processors, local fans, servers, and units of cooling systems. It provides methods for adjusting the internal parameters of the building blocks and contains constructors that use the building blocks to assemble models of data centre systems at different levels, from a single server up to the server room. The data centre model is meant to accurately estimate the energy consumption as well as the evolution of the temperature of all computational nodes and the air temperature inside the data centre. The constructed model can substitute for the real data centre when examining the performance of different energy-saving strategies in dynamic mode: the model provides information about data centre operating states at each time point (as model outputs) and takes values of adjustable parameters as control signals from a system implementing an energy-saving algorithm (as model inputs). For Module 1 of the SICS ICE data centre located in Luleå, Sweden, the model was constructed from the building blocks. After adjusting the internal parameters of the building blocks, the model demonstrated behaviour quite close to real data from the SICS ICE data centre; therefore, the model is applicable as a substitute for the real data centre. Some examples of using the model for testing energy-saving strategies are presented at the end of the paper.
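A toy sketch of the building-block idea the abstract describes: each block exposes a step() that takes control signals (load, fan speed) and reports power and temperature, and blocks compose into a room-level model. All coefficients below are invented for illustration and are unrelated to the calibrated SICS ICE parameters.

```python
# Toy "building block" data centre model; every constant is illustrative.
from dataclasses import dataclass

@dataclass
class ServerBlock:
    temp: float = 25.0        # degrees C
    idle_power: float = 60.0  # watts

    def step(self, load, fan_speed, dt=1.0):
        power = self.idle_power + 120.0 * load          # compute power draw
        cooling = 0.8 * fan_speed * (self.temp - 20.0)  # heat removed by fan
        self.temp += dt * (0.05 * power - cooling) / 10.0
        return power, self.temp

# Compose blocks into a "server room" and drive it with control signals.
room = [ServerBlock() for _ in range(4)]
for t in range(3):
    readings = [s.step(load=0.7, fan_speed=0.5) for s in room]
    print(f"t={t}: total power = {sum(p for p, _ in readings):.0f} W")
```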

  • 45.
    Berglund, Tomas
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Brodnik, Andrej
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Jonsson, Håkan
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Mrozek, Kent
    Staffansson, Mats
    Luleå tekniska universitet.
    Söderkvist, Inge
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
    Minimum curvature variation B-splines: validation of a path-planning model2004Rapport (Annet vitenskapelig)
    Fulltekst (pdf)
    FULLTEXT01
  • 46.
    Berglund, Tomas
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Brodnik, Andrej
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap. University of Primorska, Slovenia. .
    Jonsson, Håkan
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Staffansson, Mats
    Luleå tekniska universitet.
    Söderkvist, Inge
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
    Planning smooth and obstacle-avoiding b-spline paths for autonomous mining vehicles2010Inngår i: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 7, nr 1, s. 167-172Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

We study the problem of automatically generating smooth and obstacle-avoiding planar paths for efficient guidance of autonomous mining vehicles. Fast traversal of a path is of special interest. We consider four-wheel four-gear articulated vehicles and assume a priori knowledge of the mine wall environment in the form of polygonal chains. We plan high-speed paths by computing quartic uniform B-spline curves that minimize curvature variation and stay at least a proposed safety-margin distance away from the mine walls.
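A hedged sketch, using SciPy, of evaluating a quartic (degree 4) uniform B-spline path over a planar control polygon of the kind the abstract describes; the curvature-variation optimisation and wall-clearance constraints are not shown, and the control points are invented.

```python
# Evaluate a quartic uniform B-spline path over illustrative control points.
import numpy as np
from scipy.interpolate import BSpline

ctrl = np.array([[0, 0], [1, 2], [3, 3], [5, 2], [7, 3], [9, 1], [10, 0]],
                dtype=float)                       # illustrative control polygon
k = 4                                              # quartic
knots = np.arange(len(ctrl) + k + 1, dtype=float)  # uniform knot sequence
path = BSpline(knots, ctrl, k)

u = np.linspace(knots[k], knots[len(ctrl)], 50)    # valid parameter interval
points = path(u)                                   # (50, 2) planar samples
print(points[0], points[-1])
```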

    Fulltekst (pdf)
    fulltext
  • 47.
    Berglund, Tomas
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Erikson, Ulf
    Luleå tekniska universitet.
    Jonsson, Håkan
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Mrozek, Kent
    Navigator AB.
    Söderkvist, Inge
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
    Automatic generation of smooth paths bounded by polygonal chains2001Inngår i: CIMCA 2001: 2001 international conference on computational intelligence for modelling, control & automation : 9-11 July 2001, Las Vegas, Nevada, USA : proceedings / [ed] M. Mohammadian, CIMCA , 2001, s. 528-535Konferansepaper (Fagfellevurdert)
    Fulltekst (pdf)
    FULLTEXT01
  • 48.
    Berglund, Tomas
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Greppe, Anders
    Thorén, Johan
    Papp, John
    Curve and surface fitting to measured data with a B-spline approach1999Rapport (Annet vitenskapelig)
    Abstract [en]

    Report in the Project course in Mathematics, MAM088, 1998/1999. Department of Mathematics, Luleå University of Technology, Sweden

    Fulltekst (pdf)
    FULLTEXT01
  • 49. Berglund, Tomas
    et al.
    Jonsson, Håkan
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Söderkvist, Inge
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
    An obstacle-avoiding minimum variation B-spline problem2003Inngår i: Proceedings: 2003 International Conference on Geometric Modeling and Graphics, GMAG 2003 ; 16 - 18 July 2003, London, England, Los Alamitos, Calif: IEEE Communications Society, 2003, s. 156-161Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We study the problem of computing a planar curve, restricted to lie between two given polygonal chains, such that the integral of the square of arc-length derivative of curvature along the curve is minimized. We introduce the minimum variation B-spline problem, which is a linearly constrained optimization problem over curves, defined by B-spline functions only. An empirical investigation indicates that this problem has one unique solution among all uniform quartic B-spline functions. Furthermore, we prove that, for any B-spline function, the convexity properties of the problem are preserved subject to a scaling and translation of the knot sequence defining the B-spline.
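The objective named in the abstract, the integral of the squared arc-length derivative of curvature, can be evaluated numerically on a sampled curve with finite differences; the sketch below is purely illustrative and is not the paper's formulation over B-spline coefficients.

```python
# Numerically estimate the curvature-variation functional of a sampled curve.
import numpy as np

def curvature_variation(points):
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    ds = np.hypot(dx, dy)                          # arc-length element
    kappa = (dx * ddy - dy * ddx) / ds**3          # signed curvature
    dkappa_ds = np.gradient(kappa) / ds            # d(kappa)/ds
    return np.sum(dkappa_ds**2 * ds)               # integral of (kappa')^2 ds

t = np.linspace(0, np.pi, 200)
circle = np.column_stack([np.cos(t), np.sin(t)])   # constant curvature
print(curvature_variation(circle))                 # close to 0
```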

  • 50.
    Berglund, Tomas
    et al.
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, EISLAB.
    Jonsson, Håkan
    Luleå tekniska universitet, Institutionen för system- och rymdteknik, Datavetenskap.
    Söderkvist, Inge
    Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, Matematiska vetenskaper.
    The problem of computing an obstacle-avoiding minimum variation B-spline2003Rapport (Annet vitenskapelig)
    Abstract [en]

We study the problem of computing a planar curve restricted to lie between two given polygonal chains such that the integral of the square of arc-length derivative of curvature along the curve is minimized. We introduce the Minimum Variation B-spline problem which is a linearly constrained optimization problem over curves defined by B-spline functions only. An empirical investigation indicates that this problem has one unique solution among all uniform quartic B-spline functions. Furthermore, we prove that, for any B-spline function, the convexity properties of the problem are preserved subject to a scaling and translation of the knot sequence defining the B-spline.
