  • 1.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology Chittagong.
    Chowdhury, Abu Sayeed
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Karim, Razuan
    University of Science and Technology Chittagong.
    An Interoperable IP based WSN for Smart Irrigation Systems (2017). Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) have matured to the point where they can be used in agriculture to enable optimal irrigation scheduling. Since widely available methods to support effective agricultural practice under different weather conditions are lacking, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP-based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The fundamental performance of this network is emulated on Tmote Sky nodes for 6LoWPAN over an IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator. The simulation results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. These results will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of sensor power consumption.
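
    For illustration, a host-side sketch of the kind of RTT and packet-loss measurement reported above, assuming a reachable UDP echo endpoint; the address, port and payload sizes are hypothetical placeholders and this is not the paper's Contiki/Cooja setup.

    import socket, time

    def probe(host="fd00::1", port=7, sizes=(16, 64, 128), tries=10, timeout=2.0):
        # Send payloads of increasing size to a UDP echo service and record RTT / loss.
        sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for size in sizes:
            rtts, lost = [], 0
            payload = b"x" * size
            for _ in range(tries):
                start = time.monotonic()
                sock.sendto(payload, (host, port))
                try:
                    sock.recvfrom(2048)
                    rtts.append((time.monotonic() - start) * 1000.0)  # milliseconds
                except socket.timeout:
                    lost += 1
            avg = sum(rtts) / len(rtts) if rtts else float("nan")
            print(f"{size:4d} B  avg RTT {avg:6.1f} ms  loss {lost}/{tries}")

    if __name__ == "__main__":
        probe()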

  • 2.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Paul, Sukanta
    University of Science and Technology, Chittagong.
    Akhter, Sharmin
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology, Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Selection of Energy Efficient Routing Protocol for Irrigation Enabled by Wireless Sensor Networks (2017). In: Proceedings of 2017 IEEE 42nd Conference on Local Computer Networks Workshops, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 75-81. Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSNs) make a remarkable contribution to real-time decision making by sensing and actuating on their surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, for example irrigation scheduling based on moisture-level data sensed by the nodes. Since WSNs are deployed in constrained environments, the lifetime of the sensors is crucial for normal operation of the network, and the routing protocol is a prime factor in prolonging that lifetime. This research focuses on the performance analysis of clustering-based routing protocols in order to select the best one. Four algorithms are considered: Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP) and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in Matlab using mathematical models of these algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CH) per round, throughput and average residual energy per node. The experimental results illustrate that TEEN provides a longer stability period and network lifetime than the others, while SEP ensures higher throughput.
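
    As background, the LEACH protocol compared above elects cluster heads with a probabilistic threshold; a minimal sketch of that election rule follows, with parameter values that are illustrative rather than the paper's simulation setup.

    import random

    P = 0.1  # desired fraction of cluster heads per round

    def leach_threshold(round_no, eligible):
        """LEACH threshold T(n) = P / (1 - P * (r mod 1/P)); nodes that served recently get 0."""
        if not eligible:
            return 0.0
        return P / (1.0 - P * (round_no % int(round(1.0 / P))))

    def elect_cluster_heads(node_ids, round_no, eligible):
        """Each eligible node independently becomes a cluster head with probability T(n)."""
        return [n for n in node_ids if random.random() < leach_threshold(round_no, eligible[n])]

    random.seed(0)
    nodes = list(range(100))
    eligible = {n: True for n in nodes}   # no node has served as cluster head in this epoch yet
    print("cluster heads in round 3:", elect_cluster_heads(nodes, 3, eligible))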

  • 3.
    Abedin, Md. Zainal
    et al.
    University of Science and Technology, Chittagong.
    Siddiquee, Kazy Noor E Alam
    University of Science and Technology Chittagong.
    Bhuyan, M. S.
    University of Science & Technology Chittagong.
    Karim, Razuan
    University of Science and Technology Chittagong.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Performance Analysis of Anomaly Based Network Intrusion Detection Systems (2018). In: Proceedings of the 43rd IEEE Conference on Local Computer Networks Workshops (LCN Workshops), Piscataway, NJ: IEEE Computer Society, 2018, p. 1-7. Conference paper (Refereed)
    Abstract [en]

    Because of the increased popularity and fast expansion of the Internet and the Internet of Things, networks are growing rapidly in every corner of society. As a result, huge amounts of data travel across computer networks, threatening data integrity, confidentiality and reliability. Network security is therefore a pressing issue for preserving the integrity of systems and data. Traditional safeguards such as firewalls with access control lists are no longer enough to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up new opportunities to classify abnormal traffic as anomalous with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating characteristic curve, accuracy, F-score and model build time, this paper compares the performance of Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
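
    An illustrative scikit-learn sketch of this kind of classifier comparison; the paper itself used WEKA and NSL-KDD, so synthetic data stands in here, and J48 is approximated by its closest scikit-learn counterpart.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score, recall_score, roc_auc_score, accuracy_score, f1_score

    # Synthetic stand-in for an intrusion detection feature subset (binary: normal vs. anomaly).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "NaiveBayes": GaussianNB(),
        "MLP": MLPClassifier(max_iter=500, random_state=0),
        "DecisionTree (~J48)": DecisionTreeClassifier(random_state=0),
        "RandomForest": RandomForestClassifier(random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        score = model.predict_proba(X_te)[:, 1]
        print(f"{name:20s} precision={precision_score(y_te, pred):.3f} "
              f"recall={recall_score(y_te, pred):.3f} auc={roc_auc_score(y_te, score):.3f} "
              f"accuracy={accuracy_score(y_te, pred):.3f} f1={f1_score(y_te, pred):.3f}")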

  • 4.
    Abrishambaf, Reza
    et al.
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Bal, Mert
    Department of Engineering Technology, Miami University, Hamilton, OH.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Distributed home automation system based on IEC61499 function blocks and wireless sensor networks (2017). In: Proceedings of the IEEE International Conference on Industrial Technology, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1354-1359, article id 7915561. Conference paper (Refereed)
    Abstract [en]

    In this paper, a distributed home automation system is demonstrated. Traditional systems are based on a central controller where all the decisions are made. The proposed control architecture is a solution to overcome the problems, such as the lack of flexibility and re-configurability, that most conventional systems have. This has been achieved by employing a method based on the new IEC 61499 function block standard, which is intended for distributed control systems. The paper also proposes a wireless sensor network as the system infrastructure, in addition to the function blocks, in order to bring Internet-of-Things technology into home automation as a solution for distributed monitoring and control. The proposed system has been implemented at both the cyber (nxtControl) and physical (Contiki-OS) levels to show the applicability of the solution.

  • 5.
    Adewumi, Oluwatosin
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Foteini
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Conversational Systems in Machine Learning from the Point of View of the Philosophy of Science—Using Alime Chat and Related Studies (2019). In: Philosophies, ISSN 2409-9287, Vol. 4, no 3, article id 41. Article in journal (Refereed)
    Abstract [en]

    This essay discusses current research efforts in conversational systems from the philosophy of science point of view and evaluates some conversational systems research activities from the standpoint of naturalism philosophical theory. Conversational systems or chatbots have advanced over the decades and now have become mainstream applications. They are software that users can communicate with, using natural language. Particular attention is given to the Alime Chat conversational system, already in industrial use, and the related research. The competitive nature of systems in production is a result of different researchers and developers trying to produce new conversational systems that can outperform previous or state-of-the-art systems. Different factors affect the quality of the conversational systems produced, and how one system is assessed as being better than another is a function of objectivity and of the relevant experimental results. This essay examines the research practices from, among others, Longino’s view on objectivity and Popper’s stand on falsification. Furthermore, the need for qualitative and large datasets is emphasized. This is in addition to the importance of the peer-review process in scientific publishing, as a means of developing, validating, or rejecting theories, claims, or methodologies in the research community. In conclusion, open data and open scientific discussion fora should become more prominent over the mere publication-focused trend.

  • 6.
    Akter, Shamima
    et al.
    International Islamic University, Chittagong, Bangladesh.
    Nahar, Nazmun
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A New Crossover Technique to Improve Genetic Algorithm and Its Application to TSP (2019). In: Proceedings of 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), IEEE, 2019, article id 18566123. Conference paper (Refereed)
    Abstract [en]

    Optimization problems like the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard problem as well as an optimal minimization problem. Selection, crossover and mutation are the three main operators of a GA, and the algorithm is usually employed to find the minimum total distance needed to visit all the nodes in a TSP instance. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed crossover operator consists of selecting two crossover points and creating new offspring by performing a cost comparison. The computational results, as well as a comparison with well-established crossover operators, are also presented. The new crossover operator is found to produce better results than the other crossover operators.
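
    Since the abstract does not spell out the operator in full, the sketch below shows a standard two-cut-point order crossover for TSP tours combined with the cost comparison idea mentioned above; the distance matrix and tours are illustrative only.

    import random

    def tour_length(tour, dist):
        # Total length of a closed tour over a symmetric distance matrix.
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def order_crossover(p1, p2, a, b):
        """Copy p1[a:b] into the child, fill the remaining positions in p2's order (valid permutation)."""
        n = len(p1)
        child = [None] * n
        child[a:b] = p1[a:b]
        fill = [c for c in p2 if c not in child]
        for i in range(n):
            if child[i] is None:
                child[i] = fill.pop(0)
        return child

    def crossover_with_cost_comparison(p1, p2, dist):
        """Create both offspring from the same two cut points and keep the cheaper one."""
        a, b = sorted(random.sample(range(len(p1)), 2))
        c1, c2 = order_crossover(p1, p2, a, b), order_crossover(p2, p1, a, b)
        return min((c1, c2), key=lambda t: tour_length(t, dist))

    random.seed(1)
    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
    print(crossover_with_cost_comparison([0, 1, 2, 3], [3, 1, 0, 2], dist))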

  • 7.
    Alam, Md. Eftekhar
    et al.
    International Islamic University Chittagong, Bangladesh.
    Kaiser, M. Shamim
    Jahangirnagar University, Dhaka, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An IoT-Belief Rule Base Smart System to Assess Autism (2018). In: Proceedings of the 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018), IEEE, 2018, p. 671-675. Conference paper (Refereed)
    Abstract [en]

    An Internet-of-Things (IoT)-Belief Rule Base (BRB) hybrid system is introduced to assess Autism Spectrum Disorder (ASD). This smart system can automatically collect sign and symptom data of autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight and degree of belief. The IoT-BRB system classifies children with autism based on the signs and symptoms collected by pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with fuzzy- and expert-based systems. The proposed system outperformed the state-of-the-art fuzzy system and expert system.

  • 8.
    Alberti, M.
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Pondenkandath, V.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Wursch, M.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Ingold, R.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Switzerland.
    DeepDIVA: A Highly-Functional Python Framework for Reproducible Experiments (2018). In: Proceedings of International Conference on Frontiers in Handwriting Recognition, ICFHR 2018, IEEE, 2018, p. 423-428, article id 8583798. Conference paper (Refereed)
    Abstract [en]

    We introduce DeepDIVA: an infrastructure designed to enable quick and intuitive setup of reproducible experiments with a large range of useful analysis functionality. Reproducing scientific results can be a frustrating experience, not only in document image analysis but in machine learning in general. Using DeepDIVA, a researcher can either reproduce a given experiment or share their own experiments with others. Moreover, the framework offers a large range of functions, such as boilerplate code, keeping track of experiments, hyper-parameter optimization, and visualization of data and results. To demonstrate the effectiveness of this framework, this paper presents case studies in the area of handwritten document analysis where researchers benefit from the integrated functionality. DeepDIVA is implemented in Python and uses the deep learning framework PyTorch. It is completely open source and accessible as a Web Service through DIVAServices.

  • 9.
    Alberti, Michele
    et al.
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Pondenkandath, Vinaychandran
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Würsch, Marcel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Bouillon, Manuel
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Seuret, Mathias
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Ingold, Rolf
    Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Document Image and Voice Analysis Group (DIVA), University of Fribourg, Fribourg, Switzerland.
    Are You Tampering with My Data? (2019). In: Computer Vision – ECCV 2018 Workshops: Proceedings, Part II / [ed] Laura Leal-Taixé & Stefan Roth, Springer, 2019, p. 296-312. Conference paper (Refereed)
    Abstract [en]

    We propose a novel approach to adversarial attacks on neural networks (NN), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image for all the images of a class in the training set is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any images to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained on public datasets can be subject to attacks by a skillful adversary.
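
    A minimal numpy sketch of the training-set tampering idea described above, stamping one fixed pixel onto every training image of one class; array shapes and the pixel value are illustrative and do not reproduce the paper's exact perturbation.

    import numpy as np

    def poison_class(images, labels, target_class, pixel=(0, 0), channel=0, value=1.0):
        """Set one fixed pixel to a fixed value in every image whose label equals target_class."""
        poisoned = images.copy()
        mask = labels == target_class
        poisoned[mask, pixel[0], pixel[1], channel] = value
        return poisoned

    def apply_trigger(image, pixel=(0, 0), channel=0, value=1.0):
        """At test time, stamping the same pixel onto any image activates the backdoor."""
        out = image.copy()
        out[pixel[0], pixel[1], channel] = value
        return out

    # Random stand-in data shaped like CIFAR-10 batches (N, 32, 32, 3).
    rng = np.random.default_rng(0)
    X = rng.random((10, 32, 32, 3), dtype=np.float32)
    y = rng.integers(0, 10, size=10)
    X_poisoned = poison_class(X, y, target_class=3)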

  • 10.
    Al-Douri, Yamur
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Two-Level Multi-Objective Genetic Algorithm for Risk-Based Life Cycle Cost Analysis (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Artificial intelligence (AI) is one of the fields in science and engineering and encompasses a wide variety of subfields, ranging from general areas (learning and perception) to specific topics, such as mathematical theorems. AI and, specifically, multi-objective genetic algorithms (MOGAs) for risk-based life cycle cost (LCC) analysis should be performed to estimate the optimal replacement time of tunnel fan systems, with a view towards reducing the ownership cost and the risk cost and increasing company profitability from an economic point of view. MOGA can create systems that are capable of solving problems that AI and LCC analyses cannot accomplish alone.

    The purpose of this thesis is to develop a two-level MOGA method for optimizing the replacement time of repairable systems. MOGA should be useful for machinery in general and specifically for repairable systems. This objective is achieved by developing a system that integrates MOGA with a smart combination of techniques to yield the optimized replacement time. Another step towards this purpose is implementing MOGA for clustering and for imputing missing cost data, which helps provide proper data for forecasting the cost data used in the optimization and for identifying the optimal replacement time.

    In the first stage, a two-level MOGA is proposed to optimize clustering in order to reduce and impute missing cost data. Level one uses a MOGA based on fuzzy c-means to cluster cost data objects based on three main indices: the first is cluster centre outliers; the second is the compactness and separation of the data points and cluster centres; the third is the intensity of data points belonging to the derived clusters. Level two uses MOGA to impute the missing cost data by using a valid data period from the reduced data set. In the second stage, a two-level MOGA is proposed to optimize time series forecasting. Level one implements MOGA based on either an autoregressive integrated moving average (ARIMA) model or a dynamic regression (DR) model. Level two utilizes a MOGA based on different forecasting error rates to identify the proper forecast. These models are applied to simulated data for evaluation, since none of the influencing parameters can be controlled in the real cost data. In the final stage, a two-level MOGA is employed to optimize risk-based LCC analysis to find the optimal replacement time for a repairable system. Level one uses a MOGA based on a risk model to provide a variation of risk percentages, while level two uses a MOGA based on an LCC model to estimate the optimal replacement time of the repairable system.

    The results of the first stage show the best cluster centre optimization for data clustering, with low compactness-and-separation values and high intensity. Three cluster centres were selected because they have a geometry suitable for the highest data reduction, 27%. The best optimized interval is used for imputing missing data. The results of the second stage show the drawbacks of time series forecasting using a MOGA based on the DR model; the MOGA based on the ARIMA model yields better forecasting results. The results of the final stage show the drawbacks of the MOGA based on a risk-based LCC model regarding its estimation. However, the risk-based LCC model offers the possibility of optimizing the replacement schedule.

    Overall, MOGA is highly promising for optimization compared with the other methods investigated in this thesis.
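
    As a small illustration of the forecasting stage, the sketch below fits a few candidate ARIMA orders with statsmodels on a simulated series and compares a simple error rate; the candidate orders and series are placeholders, and a plain grid search stands in for the thesis's MOGA-based search.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(0, 1, 200)) + 50   # simulated cost-like time series
    train, test = series[:180], series[180:]

    best = None
    for order in [(1, 1, 0), (0, 1, 1), (2, 1, 2)]:
        fit = ARIMA(train, order=order).fit()
        forecast = fit.forecast(steps=len(test))
        mae = float(np.mean(np.abs(forecast - test)))  # mean absolute error as the error rate
        print(order, "MAE:", round(mae, 3))
        if best is None or mae < best[1]:
            best = (order, mae)
    print("selected order:", best[0])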

  • 11.
    Altmojo, Udayanto Dwi
    et al.
    Aalto Univ, Dept Elect Engn & Automat, Aalto, Finland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Aalto Univ, Dept Elect Engn & Automat, Espoo, Finland.
    Salcic, Zoran
    Univ Auckland, Dept Elect & Comp Engn, Auckland, New Zealand.
    On Achieving Reliable Communication in IEC 61499 (2018). In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018, p. 147-154. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a novel extension for communication in the IEC 61499 standard. Inspired by features found in the formal programming language SystemJ, the extension supports reliable and guaranteed communication in distributed execution of function block applications/programs. The extension relies on mechanisms that are agnostic to the underlying network protocols and are based on formal semantics that guarantee data delivery. The use of the proposed extension, called a channel, is demonstrated on an industrial automation example.

  • 12.
    Andersson, Arne
    et al.
    Lunds universitet.
    Brodnik, Andrej
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. IMFM, Ljubljana, Slovenia.
    Comment on "self-indexed sort"1996In: SIGPLAN notices, ISSN 0362-1340, E-ISSN 1558-1160, Vol. 31, no 8, p. 40-41Article in journal (Other academic)
  • 13.
    Andersson, Karl
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    You, Ilsun
    Soonchunhyang University, Chungcheongnam-do, Republic of Korea.
    Palmieri, Francesco
    University of Salerno, Fisciano (SA), Italy.
    Security and Privacy for Smart, Connected, and Mobile IoT Devices and Platforms (2018). In: Security and Communication Networks, ISSN 1939-0114, E-ISSN 1939-0122, Vol. 2018, p. 1-2, article id 5346596. Article in journal (Refereed)
  • 14.
    Aparicio Rivera, Jorge
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Lindner, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Lindgren, Per
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Heapless: Dynamic Data Structures without Dynamic Heap Allocator for Rust (2018). In: 2018 IEEE 16th International Conference on Industrial Informatics (INDIN), Piscataway, NJ: IEEE, 2018, p. 87-94, article id 8472097. Conference paper (Refereed)
    Abstract [en]

    Dynamic memory management is typically implemented using a global memory allocator, which may negatively impact the performance, reliability, and predictability of a program; in effect, standards around safety-critical applications often discourage or even disallow dynamic memory management. This paper presents heapless, a collection of dynamic data structures (vectors, strings, and circular buffers) that can be either stack or statically allocated, and are thus free of global allocator dependencies. The proposed data structures for vectors and strings closely mimic the Rust standard library implementations while adding support for gracefully handling cases of capacity exceedance. Our circular buffers act as queues and allow channel-like usage (by splitting). The Rust memory model, together with the ability to reason locally about memory requirements (brought by heapless), facilitates establishing robustness/safety guarantees and minimizing the attack surface of (industrial) IoT systems. We show that the heapless data structures are highly efficient and have predictable performance, and are thus suitable for hard real-time applications. Moreover, in our implementation the heapless data structures are non-relocatable, allowing mapping to hardware, which is useful, e.g., for DMA transfers. The feasibility, performance, and advantages of heapless are demonstrated by implementing a JSON serialization and de-serialization library for an ARM Cortex-M based IoT platform.

  • 15.
    Arvidsson, Johan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Finding delta difference in large data sets (2019). Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Finding out what differs between two versions of a file can be done with several different techniques and programs. These techniques and programs are often focused on finding differences in text files, in documents, or in class files for programming. An example is the popular git tool, which focuses on displaying the difference between versions of files in a project. A common way to find these differences is to use an algorithm called Longest Common Subsequence, which finds the longest common subsequence of the two files to determine their similarity. By excluding all similarities in a file, all remaining text constitutes the differences between the files. The Longest Common Subsequence is often used to find the differences in an acceptable time. When two lines in a file are compared to see whether they differ, hashing is used: the hash values for the corresponding lines in both files are compared. Hashing a line gives the content of that line a unique value, so if as little as one character on a line differs between the versions, the hash values of those lines will differ as well. These techniques are very useful when comparing two versions of a file with text content. With data from a database some, but not all, of these techniques are useful. A big difference between data in a database and text in a file is that content is not just added and deleted but also updated. This thesis studies how to make use of these techniques to find differences between large data sets in a reasonable time, rather than between documents and files. Three different methods are studied in theory, and their time and space complexities are given. Finally, one of these methods is further studied through implementation and testing; only one of the three is implemented because of time constraints. The chosen method is easy to maintain, easy to implement, and has a good execution time.
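
    A small sketch of the line-hashing plus longest-common-subsequence technique the thesis builds on; this is the quadratic textbook version, not the thesis's large-data-set variant.

    import hashlib

    def line_hashes(text):
        # One hash per line; equal lines get equal hashes, a single changed character changes the hash.
        return [hashlib.sha1(line.encode()).hexdigest() for line in text.splitlines()]

    def lcs_table(a, b):
        """Classic O(len(a)*len(b)) LCS dynamic program over the hashed lines."""
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m - 1, -1, -1):
            for j in range(n - 1, -1, -1):
                dp[i][j] = dp[i + 1][j + 1] + 1 if a[i] == b[j] else max(dp[i + 1][j], dp[i][j + 1])
        return dp

    def diff(old, new):
        # Walk the LCS table: lines outside the common subsequence are the differences.
        a, b = line_hashes(old), line_hashes(new)
        dp, i, j, ops = lcs_table(a, b), 0, 0, []
        old_lines, new_lines = old.splitlines(), new.splitlines()
        while i < len(a) and j < len(b):
            if a[i] == b[j]:
                i, j = i + 1, j + 1
            elif dp[i + 1][j] >= dp[i][j + 1]:
                ops.append(("-", old_lines[i])); i += 1
            else:
                ops.append(("+", new_lines[j])); j += 1
        ops += [("-", l) for l in old_lines[i:]] + [("+", l) for l in new_lines[j:]]
        return ops

    print(diff("a\nb\nc", "a\nx\nc"))   # [('-', 'b'), ('+', 'x')]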

  • 16.
    Atmojo, Udayanto Dwi
    et al.
    Department of Electrical Engineering and Automation, Aalto University.
    Gulzar, Kashif
    Department of Electrical Engineering and Automation, Aalto University.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University.
    Ma, Rongwei
    Department of Electrical Engineering and Automation, Aalto University.
    Hopsu, Alexander
    Department of Electrical Engineering and Automation, Aalto University.
    Makkonen, Henri
    Department of Electrical Engineering and Automation, Aalto University.
    Korhonen, Atte
    Department of Electrical Engineering and Automation, Aalto University.
    Phu, Long Tran
    Department of Electrical Engineering and Automation, Aalto University.
    Distributed control architecture for dynamic reconfiguration: Flexible assembly line case study (2018). Conference paper (Refereed)
    Abstract [en]

    This article presents the development of a distributed manufacturing case study enhanced with features that enable flexibility during the production process and the capability to continue the production process in case of fault scenarios. The approach described in this paper presents solutions to achieve the production of customized products, handle changes in product order, and minimize downtime and avoid total shutdown of the manufacturing system due to the occurrence of failures during the production process.

  • 17.
    Atmojo, Udayanto Dwi
    et al.
    Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Salcic, Zoran
    Electrical and Computer Engineering, The University of Auckland, Auckland, New Zealand.
    Wang, Kevin I-Kai
    The University of Auckland, 1415 Auckland, New Zealand .
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A Service-Oriented Programming Approach for Dynamic Distributed Manufacturing Systems (2019). In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050. Article in journal (Refereed)
    Abstract [en]

    Dynamic reconfigurability and adaptability are crucial features of the future manufacturing systems that must be supported by adequate software technologies. Currently, they are typically achieved as add-ons to existing software tools and run-time systems, which are not based on any formal foundation such as formal model of computation (MoC). This paper presents the new programming paradigm of Service Oriented SystemJ (SOSJ), which targets dynamic distributed software systems suited for future manufacturing applications. SOSJ is built on a merger and the synergies of two programming concepts of (1) Service Oriented Architecture (SOA), to support dynamic software system composition, and (2) SystemJ programming language based on a formal MoC, which targets correct by construction design of static distributed software systems. The resulting programming paradigm allows the design and implementation of dynamic distributed software systems.

  • 18.
    Axelsson, Tobias
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Using supervised learning algorithms to model the behavior of Road Weather Information System sensors (2018). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Trafikverket, the agency in charge of state road maintenance in Sweden, has a number of so-called Road Weather Information Systems (RWIS). The main purpose of the stations is to provide winter road maintenance workers with information to decide when roads need to be plowed and/or salted. Each RWIS has a number of sensors that make road-weather-related measurements every 30 minutes. One of the sensors is dug into the road, which can cause traffic disturbances and be costly for Trafikverket. Other RWIS sensors fail occasionally.

    This project aims at modelling a set of RWIS sensors using supervised machine learning algorithms. The sensors of interest are Optic Eye, the Track Ice Road Sensor (TIRS) and DST111. Optic Eye measures precipitation type and precipitation amount. Both TIRS and DST111 measure road surface temperature; the difference is that TIRS is dug into the road, while DST111 measures road surface temperature from a distance via infrared laser. Any supervised learning algorithm trained to model a given sensor measurement may only use measurements made by the other sensors as input features. Measurements made by TIRS may not be used as input when modelling other sensors, since it is desired to see whether TIRS can be removed. The following input features may also be used for training: road friction, road surface condition and timestamp.

    Scikit-learn was used as the machine learning software in this project. An experimental approach was chosen to achieve the project results: a pre-determined set of supervised algorithms were compared using different numbers of the most relevant input features and different hyperparameter settings. Prior to achieving the results, a data preparation process was conducted, in which observations with suspected or definitive errors were removed. During data preparation, the timestamp feature was transformed into two new features: month and hour.

    The results in this project show that precipitation type was best modelled using Classification And Regression Tree (CART) on Scikit-learn default settings, achieving a performance score of Macro-F1test = 0.46 and accuracy = 0.84 using road surface condition, road friction, DST111 road surface temperature, hour and month as input features. Precipitation amount was best modelled using k-Nearest Neighbor (kNN); with k = 64 and road friction used as the only input feature, a performance score of MSEtest = 0.31 was attained. TIRS road surface temperature was best modelled with Multi-Layer Perceptron (MLP) using 64 hidden nodes and DST111 road surface temperature, road surface condition, road friction, month, hour and precipitation type as input features, with which a performance score of MSEtest = 0.88 was achieved. DST111 road surface temperature was best modelled using Random forest on Scikit-learn default settings with road surface condition, road friction, month, precipitation type and hour as input features, achieving a performance score of MSEtest = 10.16.
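
    An illustrative scikit-learn sketch of this type of sensor-modelling experiment; synthetic features stand in for the RWIS measurements, and the hyperparameters mirror those mentioned above (k = 64 for kNN, 64 hidden nodes for the MLP).

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3000, 6))   # stand-ins for friction, surface condition, temperature, month, hour, precipitation
    y = 2.0 * X[:, 2] + 0.5 * X[:, 0] + rng.normal(0, 0.5, 3000)   # stand-in road surface temperature target
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    for name, model in [
        ("kNN (k=64)", KNeighborsRegressor(n_neighbors=64)),
        ("MLP (64 hidden nodes)", MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)),
    ]:
        model.fit(X_tr, y_tr)
        print(name, "MSE_test =", round(mean_squared_error(y_te, model.predict(X_te)), 3))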

  • 19.
    Baghdo, Simon
    Luleå University of Technology, Department of Arts, Communication and Education.
    Game Telemetry: Store, Analyze and Improve UX in Game from Player-Choices (2016). Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The main objective of this project was to store and analyze user choices through game telemetry in the game Bloodlines, with the goal of adjusting the game for each member personally for an improved user experience. This was done through a constructed database, saving metrics of player choices and events such as most used weapon, attempts per session, session time periods, number of deaths and most common death cause. The results were analyzed with the control group settings in mind, and adjustments were made based on that foundation. In addition, a web application was built with the functionality to enter and change the settings metrics in real time.
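
    A minimal sqlite3 sketch of the kind of storage and analysis loop described above; the table layout, event names and queries are hypothetical, not the schema used in the project.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE telemetry (
        player_id TEXT, session_id TEXT, event TEXT, value TEXT, ts REAL)""")

    # Hypothetical player-choice events gathered by the game client.
    events = [
        ("p1", "s1", "weapon_used", "sword", 1.0),
        ("p1", "s1", "death", "fall", 2.0),
        ("p1", "s2", "weapon_used", "bow", 3.0),
        ("p1", "s2", "death", "enemy", 4.0),
    ]
    conn.executemany("INSERT INTO telemetry VALUES (?, ?, ?, ?, ?)", events)

    # Example analysis: most common death cause, the kind of metric mentioned above.
    for row in conn.execute("""SELECT value, COUNT(*) AS n FROM telemetry
                               WHERE event = 'death' GROUP BY value ORDER BY n DESC"""):
        print("death cause:", row)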

  • 20.
    Balasubramaniam, Sasitharan
    et al.
    Tampere University of Technology.
    Lyamin, Nikita
    Halmstad University.
    Kleyko, Denis
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Skurnik, Mikael
    University of Helsinki.
    Vinel, Alexey
    Halmstad University.
    Koucheryavy, Yevgeni
    Tampere University of Technology.
    Exploiting bacterial properties for multi-hop nanonetworks (2014). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 52, no 7, p. 184-191. Article in journal (Refereed)
    Abstract [en]

    Molecular communication is a relatively new communication paradigm for nanomachines in which communication is realized by utilizing existing biological components found in nature. In recent years, researchers have proposed using bacteria to realize molecular communication because bacteria have (i) the ability to swim and migrate between locations, (ii) the ability to carry DNA content (i.e. plasmids), which can be utilized for information storage, and (iii) the ability to interact with and transfer plasmids to other bacteria (one such process is known as bacterial conjugation). However, current proposals for bacterial nanonetworks have not considered the internal structures of the nanomachines that can facilitate the use of bacteria as an information carrier. This article presents the types and functionalities of nanomachines that can be utilized in bacterial nanonetworks. A particular focus is placed on bacterial conjugation and its support for multi-hop communication between nanomachines. Simulations of the communication process have been carried out to analyze the quantity of bits received as well as the delay performance. Wet lab experiments have also been conducted to validate the bacterial conjugation process. The article also discusses potential applications of bacterial nanonetworks for cancer monitoring and therapy.

  • 21.
    Baniya, Rupak
    et al.
    Department of Electrical Engineering, School of Electrical Engineering, Aalto University.
    Maksimainen, Mikko
    Department of Electrical Engineering, School of Electrical Engineering, Aalto University.
    Sierla, Seppo
    Department of Automation and Systems Technology, School of Electrical Engineering, Aalto University.
    Pang, Cheng
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Yang, Chen-Wei
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Smart indoor lighting control: power, illuminance, and colour quality (2014). In: Proceedings: 2014 IEEE 23rd International Symposium on Industrial Electronics (ISIE), Grand Cevahir Hotel and Convention Center, Istanbul, Turkey, 01-04 June 2014, Piscataway, NJ: IEEE Communications Society, 2014, p. 1745-1750, article id 6864878. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the correlation between color quality and energy efficiency of indoor lighting control. The color quality, in terms of visual performance and comfort, is quantified using three measurements: illuminance, Color Rendering Index, and Correlated Color Temperature. Several experiments have been conducted to evaluate the potential energy savings of using different portions of light spectrum to obtain the optimal color quality. In particular, Light-Emitting Diodes are used as the lighting sources of the experimental luminaire. Moreover, the above quantification method and experimental results have been incorporated into a previously developed simulation framework for Building Automation and Control Systems, and smart lighting is used to adjust the tradeoff between comfort and energy consumption based on the presence of occupants. The results can be used to evaluate the viability of advanced lighting automation.

  • 22.
    Bediako, Peter Ken
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Long Short-Term Memory Recurrent Neural Network for detecting DDoS flooding attacks within TensorFlow Implementation framework (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Distributed Denial of Service (DDoS) attacks are among the most widespread security attacks against internet service providers. They are among the easiest attacks to launch, but very difficult and expensive to detect and mitigate. In view of the devastating effect of DDoS attacks, there has been increasing adoption of network detection techniques to reveal the presence of a DDoS attack before huge traffic builds up, in order to preserve service availability.

    Several works on DDoS attack detection reveal that conventional DDoS attack detection methods based on statistical divergence are useful; however, the large surface area of the internet, which serves as the main conduit for DDoS flooding attacks, makes it difficult to use this approach to detect attacks on the network. Hence, this research work focuses on detection techniques based on deep learning, which has proven to be among the most effective detection techniques against DDoS attacks.

    Of the several deep neural network techniques available, this research focuses on a type of recurrent neural network called Long Short-Term Memory (LSTM) and on the TensorFlow framework to build and train a deep neural network model to detect the presence of DDoS attacks on a network. This model can be used to develop an Intrusion Detection System (IDS) to aid in detecting DDoS attacks. The expectation for the produced model is a high detection accuracy rate and a low false alarm rate.

    Design Science Research Methodology (DSRM) was used to carry out this project. The test experiment for this work was performed on CPU and GPU base systems to determine the base system's effect on the detection accuracy of the model.

    To achieve the set goals, seven evaluating parameters were used to test the model's detection accuracy and performance on both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) systems.

    The results reveal that the model was able to produce a detection accuracy of 99.968% on both CPU- and GPU-based systems, which is better than the 97.606% reported by Yuan et al. [55]. The results also show that the model's performance does not depend on the base system used for training but rather on the dataset size; however, GPU systems train faster than CPU systems. Increasing the number of epochs during training does not affect the model's detection accuracy but does extend the training time.

    The model is limited to detecting 17 different attack types while maintaining the detection accuracy mentioned above. Future work should extend the model so that it can detect further attack types.
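
    A compact TensorFlow/Keras sketch of the kind of LSTM classifier the thesis trains; the sequence length, feature count and synthetic data are placeholders, not the thesis's DDoS dataset or configuration.

    import numpy as np
    import tensorflow as tf

    timesteps, features = 10, 20
    X = np.random.rand(1000, timesteps, features).astype("float32")   # stand-in traffic feature windows
    y = np.random.randint(0, 2, size=(1000, 1))                       # 1 = attack window, 0 = benign

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, features)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))   # [loss, accuracy] on the stand-in data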

  • 23.
    Belyakov, Stanislav Leonidovich
    et al.
    Southern Federal University, Department of Applied Information Science Taganrog.
    Savelyeva, Marina
    Southern Federal University, Department of Applied Information Science Taganrog.
    Yan, Jeffrey
    Department of Electrical and Computer System Engineering, University of Auckland, University of Auckland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Adaptation of Material Flows in Mechanical Transportation Systems Based on Observation Experience (2015). In: IEEE TrustCom-BigDataSE-ISPA 2015: Helsinki, 20-22 Aug. 2015, Piscataway, NJ: IEEE Communications Society, 2015, p. 269-274, article id 7345659. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the adaptation of material flows in mechanical transportation systems to the appearance of local overloads. The adaptation mechanism is based on the deflection of the forecast of experts who oversee the behavior of flows in the network. We propose a modified version of case-based reasoning, which uses the concept of imagination of situations; unlike known methods, the imaginative description of cases increases the reliability of decision-making. We provide a modification of the algorithm for dynamically building routing tables in distributed controllers of a transportation network, together with an analytic evaluation of the adaptation method's effectiveness. The paper concludes with an outline of the implementation mechanism using a network of distributed controllers.

  • 24.
    Belyakov, Stanislav Leonidovich
    et al.
    Southern Federal University, Department of Applied Information Science Taganrog.
    Savelyeva, Marina
    Southern Federal University, Department of Applied Information Science Taganrog.
    Yan, Jeffrey
    Department of Electrical and Computer System Engineering, University of Auckland, University of Auckland.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Knowledge-based routing in mechanical transportation systems (2014). In: 12th IEEE International Conference on Industrial Informatics, INDIN 2014: Porto Alegre, Brazil, 27-30 July 2014, Piscataway, NJ: IEEE Communications Society, 2014, p. 48-53. Conference paper (Refereed)
    Abstract [en]

    This paper presents ways of constructing knowledge-based routing algorithms in mechanical transport systems. It is assumed that experts observing system behavior apply their experience by designating subsystems with specific behavior. To create routing tables, a fuzzy temporal hypergraph model is used. We consider fixed and dynamic routing and give modifications of Dijkstra's algorithm for the case of fuzzy temporal hypergraphs.
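
    For reference, the baseline that such routing tables start from is ordinary Dijkstra shortest-path search; a minimal sketch on a plain weighted graph follows. The paper's fuzzy temporal hypergraph modifications are not reproduced, and the example graph is illustrative.

    import heapq

    def dijkstra(graph, source):
        """graph: {node: [(neighbour, weight), ...]}; returns shortest distances from source."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    graph = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
    print(dijkstra(graph, "A"))   # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}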

  • 25.
    Bengtsson, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Algorithms for aggregate information extraction from sequences (2007). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, we propose efficient algorithms for aggregate information extraction from sequences and multidimensional arrays. The algorithms proposed are applicable in several important areas, including large databases and DNA sequence segmentation. We first study the problem of efficiently computing, for a given range, the range-sum in a multidimensional array as well as computing the k maximum values, called the top-k values. We design two efficient data structures for these problems. For the range-sum problem, our structure supports fast update while preserving low complexity of range-sum query. The proposed top-k structure provides fast query computation in linear time proportional to the sum of the sizes of a two-dimensional query region. We also study the k maximum sum subsequences problem and develop several efficient algorithms. In this problem, the k subsegments of consecutive elements with largest sum are to be found. The segments can potentially overlap, which allows for a large number of possible candidate segments. Moreover, we design an optimal algorithm for ranking the k maximum sum subsequences. Our solution does not require the value of k to be known a priori. Furthermore, an optimal linear-time algorithm is developed for the maximum cover problem of finding k subsequences of consecutive elements of maximum total element sum.

  • 26.
    Bengtsson, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Efficient aggregate queries on data cubes (2004). Licentiate thesis, monograph (Other academic)
    Abstract [en]

    As computers are developing rapidly and become more available to the modern information society, the possibility and ability to handle large data sets in database applications increases. The demand for efficient algorithmic solutions to process huge amounts of information increases as the data sets become larger. In this thesis, we study the efficient implementation of aggregate operations on the data cube, a modern and flexible model for data warehouses. In particular, the problem of computing the k largest sum subsequences of a given sequence is investigated. An efficient algorithm for the problem is developed. Our algorithm is optimal for large values of the user-specified parameter k. Moreover, a fast in-place algorithm with good trade-off between update- and query-time, for the multidimensional orthogonal range sum problem, is presented. The problem studied is to compute the sum of the data over an orthogonal range in a multidimensional data cube. Furthermore, a fast algorithmic solution to the problem of maintaining a data structure for computing the k largest values in a requested orthogonal range of the data cube is also proposed.

  • 27.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    A note on ranking k maximum sums (2005). Report (Other academic)
    Abstract [en]

    In this paper, we design a fast algorithm for ranking the k maximum sum subsequences. Given a sequence of real numbers and an integer parameter k, the problem is to compute k subsequences of consecutive elements with the sums of their elements being the largest, second largest, ..., and the k:th largest among all possible range sums. For any value of k, 1 <= k <= n(n+1)/2, our algorithm takes O(n + k log n) time in the worst case to rank all such subsequences. Our algorithm is optimal for k <= n.
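
    To make the problem statement concrete, a simple baseline that enumerates all range sums via prefix sums and keeps the k largest with a heap is sketched below; it runs in O(n² log k) time, not the O(n + k log n) of the algorithm described above.

    import heapq
    from itertools import accumulate

    def k_maximum_sums(x, k):
        # prefix[j] - prefix[i] equals sum(x[i:j]), so all range sums can be enumerated directly.
        prefix = [0] + list(accumulate(x))
        sums = (prefix[j] - prefix[i] for i in range(len(x)) for j in range(i + 1, len(x) + 1))
        return heapq.nlargest(k, sums)

    print(k_maximum_sums([2, -3, 4, -1, 2, -5, 3], 3))   # the three largest range sums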

  • 28.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Computing maximum-scoring segments in almost linear time (2006). Report (Other academic)
    Abstract [en]

    Given a sequence, the problem studied in this paper is to find a set of k disjoint continuous subsequences such that the total sum of all elements in the set is maximized. This problem arises naturally in the analysis of DNA sequences. The previous best known algorithm requires Θ(n log n) time in the worst case. For a given sequence of length n, we present an almost linear-time algorithm for this problem. Our algorithm uses a disjoint-set data structure and requires O(n α(n, n)) time in the worst case, where α(n, n) is the inverse Ackermann function.
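
    As a point of reference for the problem definition, the sketch below is a straightforward O(nk) dynamic program for the maximum total sum of k disjoint segments; it is not the paper's near-linear disjoint-set algorithm.

    def max_k_disjoint_segments(x, k):
        """Maximum total sum of k disjoint, non-empty runs of consecutive elements (O(n*k) DP)."""
        n = len(x)
        NEG = float("-inf")
        best = [0.0] * (n + 1)                 # with 0 segments the best total is 0 for any prefix
        for _ in range(k):
            cur = [NEG] * (n + 1)              # best total where the current segment ends exactly at i
            nxt = [NEG] * (n + 1)              # best total over the first i elements
            for i in range(1, n + 1):
                cur[i] = max(cur[i - 1], best[i - 1]) + x[i - 1]   # extend the segment or start a new one
                nxt[i] = max(nxt[i - 1], cur[i])
            best = nxt
        return best[n]

    print(max_k_disjoint_segments([4, -1, 2, -7, 3, -2, 5], 2))    # 11: segments [4,-1,2] and [3,-2,5]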

  • 29.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Computing maximum-scoring segments in almost linear time (2006). In: Computing and Combinatorics: 12th Annual International Conference, COCOON 2006, Taipei, Taiwan, August 15-18, 2006, Proceedings / [ed] Danny Z. Chen, Springer Verlag, 2006, p. 255-264. Conference paper (Refereed)
    Abstract [en]

    Given a sequence, the problem studied in this paper is to find a set of k disjoint continuous subsequences such that the total sum of all elements in the set is maximized. This problem arises naturally in the analysis of DNA sequences. The previous best known algorithm requires Θ(n log n) time in the worst case. For a given sequence of length n, we present an almost linear-time algorithm for this problem. Our algorithm uses a disjoint-set data structure and requires O(nα(n, n)) time in the worst case, where α(n, n) is the inverse Ackermann function.

  • 30.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Computing maximum-scoring segments optimally (2007). Report (Other academic)
    Abstract [en]

    Given a sequence of length n, the problem studied in this report is to find a set of k disjoint subsequences of consecutive elements such that the total sum of all elements in the set is maximized. This problem arises in the analysis of DNA sequences. The previous best known algorithm requires time proportional to n times the inverse Ackermann function of (n,n), in the worst case. We present a linear-time algorithm, which is optimal, for this problem.

  • 31.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Computing the k maximum subarrays fast (2004). Report (Other academic)
    Abstract [en]

    We study the problem of computing the k maximum sum subarrays. Given an array of real numbers and an integer, k, the problem involves finding the k largest values of the sum from i to j of the array, for any i and j. The problem for fixed k=1, also known as the maximum sum subsequence problem, has received much attention in the literature and is linear-time solvable. In this paper, we develop an algorithm requiring time proportional to n times square root of k for an array of length n. Moreover, for two-dimensional version of the problem, which computes the k largest sums over all rectangular subregions of an m times n array of real numbers, we show that it can be solved efficiently in the worst case as well.

  • 32.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Efficient algorithms for k maximum sums (2004). In: Algorithms and Computation: 15th International Symposium, ISAAC 2004 / [ed] Rudolf Fleischer; Gerhard Trippen, Berlin: Springer Verlag, 2004, p. 137-148. Conference paper (Refereed)
    Abstract [en]

    We study the problem of computing the k maximum sum subsequences. Given a sequence of real numbers (x_1, x_2, ..., x_n) and an integer parameter k, 1 ≤ k ≤ n(n-1)/2, the problem involves finding the k largest values of Σ_{ℓ=i}^{j} x_ℓ for 1 ≤ i ≤ j ≤ n. The problem for fixed k = 1, also known as the maximum sum subsequence problem, has received much attention in the literature and is linear-time solvable. Recently, Bae and Takaoka presented a Θ(nk)-time algorithm for the k maximum sum subsequences problem. In this paper, we design efficient algorithms that solve the above problem in O(min{k + n log² n, n√k}) time in the worst case. Our algorithm is optimal for k ≥ n log² n and improves over the previously best known result for any value of the user-defined parameter k. Moreover, our results are also extended to the multi-dimensional versions of the k maximum sum subsequences problem, resulting in fast algorithms as well.

  • 33.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Efficient algorithms for k maximum sums (2006). In: Algorithmica, ISSN 0178-4617, E-ISSN 1432-0541, Vol. 46, no 1, p. 27-41. Article in journal (Refereed)
    Abstract [en]

    We study the problem of computing the k maximum sum subsequences. Given a sequence of real numbers (x_1, x_2, ..., x_n) and an integer parameter k, 1 ≤ k ≤ n(n-1)/2, the problem involves finding the k largest values of Σ_{ℓ=i}^{j} x_ℓ for 1 ≤ i ≤ j ≤ n. The problem for fixed k = 1, also known as the maximum sum subsequence problem, has received much attention in the literature and is linear-time solvable. Recently, Bae and Takaoka presented a Θ(nk)-time algorithm for the k maximum sum subsequences problem. In this paper we design an efficient algorithm that solves the above problem in O(min{k + n log² n, n√k}) time in the worst case. Our algorithm is optimal for k = Ω(n log² n) and improves over the previously best known result for any value of the user-defined parameter k. Moreover, our results are also extended to the multi-dimensional versions of the k maximum sum subsequences problem, resulting in fast algorithms as well.

  • 34.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Ranking k maximum sums2007In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 377, no 1-3, p. 229-237Article in journal (Refereed)
    Abstract [en]

    Given a sequence of n real numbers and an integer parameter k, the problem studied in this paper is to compute k subsequences of consecutive elements with the sums of their elements being the largest, the second largest, …, and the kth largest among all possible range sums of the input sequence. For any value of k, 1 ≤ k ≤ n(n + 1)/2, we design a fast algorithm that takes O(n + k log n) time in the worst case to compute and rank all such subsequences. We also prove that our algorithm is optimal for k = O(n) by providing a matching lower bound. Moreover, our algorithm is an improvement over the previous results on the maximum sum subsequences problem (where only the subsequences are requested and no ordering with respect to their relative sums will be determined). Furthermore, given that we have computed the ℓ largest sums, our algorithm retrieves the (ℓ + 1)th largest sum in O(log n) time, after O(n) time of preprocessing.
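
    To make the problem statement above concrete, the following Python sketch computes and ranks the k largest range sums by brute force, using prefix sums and a size-k min-heap (O(n² log k) time). It is only an illustrative baseline, not the paper's O(n + k log n) algorithm; the function name and example values are made up.

    import heapq

    def k_largest_range_sums(xs, k):
        # Brute-force baseline: prefix sums give each range sum in O(1),
        # and a size-k min-heap keeps the k largest, for O(n^2 log k) total time.
        prefix = [0]
        for x in xs:
            prefix.append(prefix[-1] + x)
        heap = []  # min-heap of the k largest range sums seen so far
        n = len(xs)
        for i in range(n):
            for j in range(i + 1, n + 1):
                s = prefix[j] - prefix[i]   # sum of xs[i..j-1]
                if len(heap) < k:
                    heapq.heappush(heap, s)
                elif s > heap[0]:
                    heapq.heapreplace(heap, s)
        return sorted(heap, reverse=True)

    # Example: for [2, -1, 3] the three largest range sums are 4 (= 2 - 1 + 3), 3 and 2.
    assert k_largest_range_sums([2, -1, 3], 3) == [4, 3, 2]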

  • 35.
    Bengtsson, Fredrik
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Chen, Jingsen
    Space-efficient range-sum queries in OLAP2004In: Data Warehousing and Knowledge Discovery. Proceedings: 6th international conference, DaWaK 2004, Zaragoza, Spain, September 1 - 3, 2004 : proceedings / [ed] Yahiko Kambayashi; Mukesh Mohania; Wolfram Wöß, Springer Verlag, 2004, p. 87-96Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a fast algorithm to answer range-sum queries in OLAP data cubes. Our algorithm supports constant-time queries while maintaining sub-linear time updates and using minimum space. Furthermore, we study the trade-off between query time and update time. The complexity for queries is O(2ℓd) and for updates O((2ℓ · n^(1/(2ℓ)))ᵈ) on a data cube of nᵈ elements, where ℓ is a trade-off parameter. Our algorithm improves over the previously best known results.
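
    For context, the sketch below shows the textbook prefix-sum baseline that OLAP range-sum work such as the above improves upon: constant-time queries via inclusion-exclusion, but a full O(n²) rebuild on every update. The class name, interface, and data are illustrative; this is not the paper's space-efficient trade-off structure.

    class PrefixSumCube2D:
        # Naive 2-D prefix-sum cube: O(1) range-sum queries, full rebuild on update.
        def __init__(self, data):
            rows, cols = len(data), len(data[0])
            # P[i][j] = sum of data[0..i-1][0..j-1]
            self.P = [[0] * (cols + 1) for _ in range(rows + 1)]
            for i in range(rows):
                for j in range(cols):
                    self.P[i + 1][j + 1] = (data[i][j] + self.P[i][j + 1]
                                            + self.P[i + 1][j] - self.P[i][j])

        def range_sum(self, r1, c1, r2, c2):
            # Sum of data[r1..r2][c1..c2], inclusive, via inclusion-exclusion.
            P = self.P
            return P[r2 + 1][c2 + 1] - P[r1][c2 + 1] - P[r2 + 1][c1] + P[r1][c1]

    cube = PrefixSumCube2D([[1, 2], [3, 4]])
    assert cube.range_sum(0, 0, 1, 1) == 10   # whole cube
    assert cube.range_sum(1, 0, 1, 1) == 7    # bottom row only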

  • 36.
    Benko, I.
    et al.
    University of Waterloo.
    Brodnik, Andrej
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Architecture of KRPAN1996In: Elektrotehniski Vestnik, ISSN 0013-5852, Vol. 63, no 2, p. 65-68Article in journal (Refereed)
    Abstract [en]

    In the paper we address the operation of an agency that provides information in a newspaper-like form on the information highway. We describe the software architecture and the physical layout of KRPAN, a kernel that provides the support necessary to operate such an agency. KRPAN is a distributed system which employs intelligent caching to improve space and network utilization. The implementation of KRPAN relies on standardized formats of data, which permits the use of commonly available tools. At the end, we touch on legal and ethical questions and describe how KRPAN helps to solve them.

  • 37.
    Berglund, Tomas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Brodnik, Andrej
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Jonsson, Håkan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mrozek, Kent
    Staffansson, Mats
    Luleå tekniska universitet.
    Söderkvist, Inge
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    Minimum curvature variation B-splines: validation of a path-planning model2004Report (Other academic)
  • 38.
    Berglund, Tomas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Brodnik, Andrej
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. University of Primorska, Slovenia.
    Jonsson, Håkan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Staffansson, Mats
    Luleå tekniska universitet.
    Söderkvist, Inge
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    Planning smooth and obstacle-avoiding b-spline paths for autonomous mining vehicles2010In: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 7, no 1, p. 167-172Article in journal (Refereed)
    Abstract [en]

    We study the problem of automatic generation of smooth and obstacle-avoiding planar paths for efficient guidance of autonomous mining vehicles. Fast traversal of a path is of special interest. We consider four-wheel four-gear articulated vehicles and assume that we have a priori knowledge of the mine wall environment in the form of polygonal chains. By computing quartic uniform B-spline curves that minimize curvature variation and stay at least a proposed safety-margin distance from the mine walls, we plan high-speed paths.
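
    As a small illustration of the curve representation used above, the following Python sketch evaluates a quartic uniform B-spline path defined by a handful of planar control points, using SciPy. The control points are made-up values, and this is only an evaluation example, not the paper's curvature-variation-minimizing, obstacle-avoiding planner.

    import numpy as np
    from scipy.interpolate import BSpline

    degree = 4                                               # quartic, as in the abstract
    ctrl = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0],
                     [3.0, -0.5], [4.0, 0.0], [5.0, 0.5]])   # (n, 2) planar control points
    n = len(ctrl)
    knots = np.arange(n + degree + 1, dtype=float)           # uniform knot vector
    path = BSpline(knots, ctrl, degree)

    # The curve is well defined on [knots[degree], knots[n]].
    u = np.linspace(knots[degree], knots[n], 50)
    points = path(u)          # 50 sampled (x, y) points along the smooth path
    print(points[0], points[-1])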

  • 39.
    Berglund, Tomas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Erikson, Ulf
    Luleå tekniska universitet.
    Jonsson, Håkan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mrozek, Kent
    Navigator AB.
    Söderkvist, Inge
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    Automatic generation of smooth paths bounded by polygonal chains2001In: CIMCA 2001: 2001 international conference on computational intelligence for modelling, control & automation : 9-11 July 2001, Las Vegas, Nevada, USA : proceedings / [ed] M. Mohammadian, CIMCA , 2001, p. 528-535Conference paper (Refereed)
  • 40.
    Berglund, Tomas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Greppe, Anders
    Thorén, Johan
    Papp, John
    Curve and surface fitting to measured data with a B-spline approach1999Report (Other academic)
    Abstract [en]

    Report in the Project course in Mathematics, MAM088, 1998/1999. Department of Mathematics, Luleå University of Technology, Sweden

  • 41. Berglund, Tomas
    et al.
    Jonsson, Håkan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Söderkvist, Inge
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    An obstacle-avoiding minimum variation B-spline problem2003In: Proceedings: 2003 International Conference on Geometric Modeling and Graphics, GMAG 2003 ; 16 - 18 July 2003, London, England, Los Alamitos, Calif: IEEE Communications Society, 2003, p. 156-161Conference paper (Refereed)
    Abstract [en]

    We study the problem of computing a planar curve, restricted to lie between two given polygonal chains, such that the integral of the square of the arc-length derivative of curvature along the curve is minimized. We introduce the minimum variation B-spline problem, which is a linearly constrained optimization problem over curves defined by B-spline functions only. An empirical investigation indicates that this problem has one unique solution among all uniform quartic B-spline functions. Furthermore, we prove that, for any B-spline function, the convexity properties of the problem are preserved subject to a scaling and translation of the knot sequence defining the B-spline.
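
    Written out, the quantity minimized in the abstract above is the curvature variation functional; the symbol V and the arc-length parameterization below are introduced purely as notation for this listing and do not come from the paper:

    \[
      V(\gamma) \;=\; \int_{\gamma} \left( \frac{d\kappa}{ds} \right)^{2} ds ,
    \]

    where κ denotes the curvature of the planar curve γ and s denotes arc length along it.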

  • 42.
    Berglund, Tomas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Jonsson, Håkan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Söderkvist, Inge
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    The problem of computing an obstacle-avoiding minimum variation B-spline2003Report (Other academic)
    Abstract [en]

    We study the problem of computing a planar curve restricted to lie between two given polygonal chains such that the integral of the square of the arc-length derivative of curvature along the curve is minimized. We introduce the Minimum Variation B-spline problem, which is a linearly constrained optimization problem over curves defined by B-spline functions only. An empirical investigation indicates that this problem has one unique solution among all uniform quartic B-spline functions. Furthermore, we prove that, for any B-spline function, the convexity properties of the problem are preserved subject to a scaling and translation of the knot sequence defining the B-spline.

  • 43.
    Berglund, Tomas
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Strömberg, Thomas
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    Jonsson, Håkan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Söderkvist, Inge
    Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science.
    Epi-convergence of minimum curvature variation B-splines2003Report (Other academic)
    Abstract [en]

    We study the curvature variation functional, i.e., the integral over the square of the arc-length derivative of curvature, along a planar curve. With no other constraints than prescribed position, slope angle, and curvature at the endpoints of the curve, the minimizer of this functional is known as a cubic spiral. It remains a challenge to effectively compute minimizers or approximations to minimizers of this functional subject to additional constraints such as, for example, requiring the curve to avoid obstacles such as other curves. In this paper, we consider the set of smooth curves that can be written as graphs of three times continuously differentiable functions on an interval, and, in particular, we consider approximations using quartic uniform B-spline functions. We show that if quartic uniform B-spline minimizers of the curvature variation functional converge to a curve, as the number of B-spline basis functions tends to infinity, then this curve is in fact a minimizer of the curvature variation functional. In order to illustrate this result, we present an example of sequences of B-spline minimizers that converge to a cubic spiral.

  • 44.
    Birk, Wolfgang
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Eliasson, Jens
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Lindgren, Per
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Riliskis, Laurynas
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Road surface networks technology enablers for enhanced ITS2010In: 2010 IEEE Vehicular Networking Conference, VNC 2010: Jersey City, NJ ; 13-15 Dec 2010, Piscataway, NJ: IEEE Communications Society, 2010, p. 152-159Conference paper (Refereed)
    Abstract [en]

    The increased need for mobility has led to transportation problems like congestion, accidents and pollution. In order to provide safe and efficient transport systems, great efforts are currently being put into developing Intelligent Transport Systems (ITS) and cooperative systems. In this paper we extend proposed solutions with autonomous on-road sensors and actuators forming a wireless Road Surface Network (RSN). We present the RSN architecture and design methodology and demonstrate its applicability to queue-end detection. For the use case we discuss the requirements and technological solutions for sensor technology, data processing and communication. In particular, the MAC protocol is detailed and its performance assessed through theoretical verification. The RSN architecture is shown to offer a scalable solution, where increased node density offers more precise sensing as well as increased redundancy for safety-critical applications. The use case demonstrates that RSN solutions may be deployed as standalone systems or potentially integrated into current and future ITS. RSN may provide both easily deployable and cost-effective alternatives to traditional ITS (with a direct impact independent of the penetration rate of other ITS infrastructures, i.e., smart vehicles, safe spots etc.) as well as provide fine-grained sensory information directly from the road surface to back-end and cooperative systems, thus enabling a wide range of ITS applications beyond the current state of the art.

  • 45.
    Birk, Wolfgang
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On the design of cooperative road infrastructure systems2008In: Reglermöte 2008: proceedings / [ed] Thomas Gustafsson; Wolfgang Birk; Andreas Johansson, Luleå: Luleå tekniska universitet, 2008Conference paper (Other academic)
    Abstract [en]

    This paper discusses the design of cooperative road infrastructure systems for infrastructure-based driving support functions. The background of such systems is mapped out and it is shown that there is a need for a cross-disciplinary approach. Using an example of a support function, namely overtaking support, it is shown that such a system is feasible. The identified challenges and technological problems are described, and future work is indicated.

  • 46.
    Birk, Wolfgang
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Eliasson, Jens
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    iRoad - cooperative road infrastructure systems for driver support2009In: 16th World Congress and Exhibition on Intelligent Transport Systems 2009: 16th ITS World Congress ; Stockholm, Sweden, 21 - 25 September 2009, Red Hook: Curran Associates, Inc., 2009Conference paper (Refereed)
    Abstract [en]

    This paper discusses the design and implementation of a cooperative road infrastructure system that uses an intelligent road surface. Using an overtaking assist feature as an example, it is shown how such a feature can be designed and implemented on a road infrastructure and integrated with drivers and passengers using IMS. The feasibility of this feature is assessed from a functional and communication perspective. Moreover, first results from real-life tests on the Swedish highway E4 are presented, which motivate the next research and development steps.

  • 47.
    Birk, Wolfgang
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Riliskis, Laurynas
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hesler, Alban
    NEC.
    Modular design and performance ranking of communication protocols2009Report (Other academic)
    Abstract [en]

    In this deliverable we present a systematic approach to designing modularized protocols and ranking the contribution of their components to the overall system performance. In a nutshell, this approach is based on three steps: 1) identifying adjustable parameters in existing protocols, 2) ranking their influence on the system-level performance metrics, and 3) defining protocol modules exposing the parameters of the highest rank. To this end, we present the definition of the components for constructing MAC protocols based on ranking of the impact of adjustable parameters on the overall system performance. We also overview a ranking method for functional blocks of protocols on the routing layer.
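
    A toy Python sketch of the second step described above (ranking how much each adjustable parameter influences a system-level metric). The parameter names, settings, and throughput figures are invented and the spread-based ranking heuristic is only illustrative, not the deliverable's actual method.

    from statistics import mean

    # Simulated runs: (parameter settings, observed throughput)
    runs = [
        ({"backoff": 1, "retries": 2}, 0.62),
        ({"backoff": 1, "retries": 5}, 0.64),
        ({"backoff": 8, "retries": 2}, 0.31),
        ({"backoff": 8, "retries": 5}, 0.35),
    ]

    def influence(param):
        # Spread of the mean metric across the values taken by one parameter.
        by_value = {}
        for settings, metric in runs:
            by_value.setdefault(settings[param], []).append(metric)
        means = [mean(v) for v in by_value.values()]
        return max(means) - min(means)

    ranking = sorted({"backoff", "retries"}, key=influence, reverse=True)
    print(ranking)   # 'backoff' dominates here, so it would become a module parameter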

  • 48.
    Björklund, Thomas
    et al.
    Luleå tekniska universitet.
    Brodnik, Andrej
    Nordlander, Johan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Formal verification of a trie-based data structure2005Conference paper (Refereed)
  • 49.
    Bladh, Thomas
    et al.
    Luleå tekniska universitet.
    Carr, David A
    Luleå tekniska universitet.
    Kljun, Matjaz
    University of Primorska.
    The effect of animated transitions on user navigation in 3D tree-maps2005In: Proceedings: Ninth International Conference on Information Visualisation : 06 - 08 July 2005, London, England / [ed] Ebad Banissi, Los Alamitos, Calif: IEEE Communications Society, 2005, p. 297-305Conference paper (Refereed)
    Abstract [en]

    This paper describes a user study conducted to evaluate the use of smooth animated transitions between directories in a three-dimensional tree-map visualization. We looked specifically at the task of returning to a previously visited directory after either an animated or instantaneous return to the root location. The results of the study show that animation is a double-edged sword. Even though users take more shortcuts, they also make more severe navigational errors. It seems as though the promise of a more direct route to the target directory, which animation provides, somehow precludes users who navigate incorrectly from applying a successful recovery strategy.

  • 50.
    Blech, Jan Olaf
    et al.
    RMIT University, Melbourne.
    Lindgren, Per
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Pereira, David
    ISEP, Instituto Superior de Engenharia do Porto.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Zoitl, Alois
    fortiss GmbH, Munich.
    A Comparison of Formal Verification Approaches for IEC 614992016In: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA): Berlin, 6-9 Sept. 2016, Piscataway, NJ: IEEE conference proceedings, 2016, article id 7733636Conference paper (Refereed)
    Abstract [en]

    Engineering and computer science have come up with a variety of techniques to increase the confidence in systems, increase reliability, facilitate certification, improve reuse and maintainability, and improve interoperability and portability. Among them are various techniques based on formal models to enhance testing, validation and verification. In this paper, we concentrate on formal verification both at runtime and at design time of a system. Formal verification of a system property at design time is the process of mathematically proving that the property indeed holds. At runtime, one can check the validity of the property and report deviations by monitoring the system execution. Formal verification relies on semantic models, descriptions of the system and its properties. We report on ongoing verification work and present two different approaches for formal verification of IEC 61499-based programs. We provide two examples of ongoing work to exemplify the design-time and the runtime verification approaches.
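
    The runtime side described above can be pictured as a generic monitor that watches an execution trace and reports property violations. The Python sketch below is not tied to IEC 61499 or to either of the paper's approaches; the property and the trace values are invented.

    def monitor(trace, invariant):
        # Yield (step, state) for every observed state that violates the invariant.
        for step, state in enumerate(trace):
            if not invariant(state):
                yield step, state

    # Hypothetical property: the reported temperature must never exceed 100.
    trace = [{"temp": 20}, {"temp": 85}, {"temp": 120}, {"temp": 90}]
    violations = list(monitor(trace, lambda s: s["temp"] <= 100))
    assert violations == [(2, {"temp": 120})]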
