  • 1.
    Abdunabiev, Isomiddin
    et al.
    Department of Computer and Software, Hanyang University.
    Lee, Choonhwa
    Department of Computer and Software, Hanyang University.
    Hanif, Muhammad
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    An Auto-Scaling Architecture for Container Clusters Using Deep Learning, 2021. In: 2021년도 대한전자공학회 하계종합학술대회 논문집 [Proceedings of the 2021 IEIE Summer Annual Conference], DBpia, 2021, p. 1660-1663. Conference paper (Refereed)
    Abstract [en]

    In the past decade, cloud computing has become one of the essential techniques of many business areas, including social media, online shopping, music streaming, and many more. It is difficult for cloud providers to provision their systems in advance due to fluctuating changes in input workload and resultant resource demand. Therefore, there is a need for auto-scaling technology that can dynamically adjust resource allocation of cloud services based on incoming workload. In this paper, we present a predictive auto-scaler for Kubernetes environments to improve the quality of service. Being based on a proactive model, our proposed auto-scaling method serves as a foundation on which to build scalable and resource-efficient cloud systems.
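
    The abstract gives no implementation details, but the core proactive-scaling idea is easy to illustrate. The Python sketch below is only a rough illustration under assumed names and numbers: forecast_rps stands in for the paper's deep-learning predictor, and the per-pod capacity figure is invented.

```python
from math import ceil

def forecast_rps(history, horizon_s=60):
    """Hypothetical stand-in for the paper's deep-learning predictor:
    here we simply extrapolate the recent trend of requests per second."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    trend = history[-1] - history[-2]
    return max(0.0, history[-1] + trend * horizon_s / 60.0)

def desired_replicas(history, rps_per_pod=100.0, min_pods=1, max_pods=20):
    """Map the predicted load to a replica count, clamped to cluster limits."""
    predicted = forecast_rps(history)
    return min(max_pods, max(min_pods, ceil(predicted / rps_per_pod)))

# Example: rising traffic triggers scale-out before the load actually arrives.
print(desired_replicas([220.0, 340.0, 470.0]))  # -> 6 replicas for a predicted 600 rps
```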

  • 2.
    Albertsson, Kim
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. CERN.
    Gleyze, Sergei
    University of Florida.
    Huwiler, Marc
    EPFL.
    Ilievski, Vladimir
    EPFL.
    Moneta, Lorenzo
    CERN.
    Shekar, Saurav
    ETH Zurich.
    Estrade, Victor
    CERN.
    Vashistha, Akshay
    CERN. Karlsruhe Institute of Technology.
    Wunsch, Stefan
    CERN. Karlsruhe Institute of Technology.
    Mesa, Omar Andres Zapata
    University of Antioquia. Metropolitan Institute of Technology.
    New Machine Learning Developments in ROOT/TMVA, 2019. In: 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018), EDP Sciences, 2019, Vol. 214, article id 06014. Conference paper (Refereed)
    Abstract [en]

    The Toolkit for Multivariate Analysis, TMVA, the machine learning package integrated into the ROOT data analysis framework, has recently seen improvements to its deep learning module, parallelisation of multivariate methods and cross validation. Performance benchmarks on datasets from high-energy physics are presented with a particular focus on the new deep learning module which contains robust fully-connected, convolutional and recurrent deep neural networks implemented on CPU and GPU architectures. Both dense and convolutional layers are shown to be competitive on small-scale networks suitable for high-level physics analyses in both training and in single-event evaluation. Parallelisation efforts show an asymptotical 3-fold reduction in boosted decision tree training time while the cross validation implementation shows significant speed up with parallel fold evaluation.

  • 3.
    Burstrom, Thommie
    et al.
    Department of Management and Organisation, Hanken School of Economics, Helsingfors, Finland.
    Lahti, Tom
    FLO Department, Hanken Svenska Handelshogskolan Foretagsledning och organisation, Helsingfors, Finland.
    Parida, Vinit
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering.
    Wartiovaara, Markus
    Business Lab, Hanken School of Economics, Helsinki, Finland.
    Wincent, Joakim
    FLO Department, Hanken School of Economics, Helsinki, Finland.
    Software Ecosystems Now and in the Future: A Definition, Systematic Literature Review, and Integration Into the Business and Digital Ecosystem Literature, 2022. In: IEEE transactions on engineering management, ISSN 0018-9391, E-ISSN 1558-0040. Article, review/survey (Refereed)
    Abstract [en]

    Business settings and ecosystems have in the past three decades been transformed by software utilization. It is being applied in several industries shaping innovation—platforms—and business characteristics, thus attracting ever more interest from both practitioners and researchers. During the last 10 years, research on software ecosystems (SECOs) has expanded, and is strongly related to the development of digital ecosystems. This expansion has led to the need to review the status of SECO research, and the present article provides a state-of-the-art literature review on the topic. We explain the connection between the relatively new research field of SECOs and the traditional streams of ecosystem research. This article contributes novel definitions of SECOs and SECO configuration, and proposes a theoretical model illustrating the relationship between vital contingency categories and processes. We identify significant research gaps and present a future research agenda.

  • 4.
    Cruz, Ernesto Monroy
    et al.
    Posgrado CIATEQ A.C., Tecnológico Nacional de México (TecNM) Campus Atitalaquia, and CETIS No. 026, Hidalgo, México.
    Carrillo, Luis Rodolfo Garcia
    Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, New Mexico, USA.
    Patil, Sandeep
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Ceron, Primo
    CETIS No. 026, Hidalgo, México.
    Ceron, Jan F.
    CETIS No. 026, Hidalgo, México.
    Validating effect of Refactoring of IEC 61499 Function Block in Distributed Control Systems, 2022. In: 2022 IEEE International Conference on Automation/25th Congress of the Chilean Association of Automatic Control, ICA-ACCA 2022: For the Development of Sustainable Agricultural Systems / [ed] Mario Fernandez, Gaston Lefranc, Institute of Electrical and Electronics Engineers Inc., 2022, p. 689-694. Conference paper (Refereed)
    Abstract [en]

    We are in the era of continued adoption of the Industry 4.0 vision and standard. As Industrial Cyber-Physical System applications move from centralized to decentralized systems, there is a need to follow newer and better software design patterns and refactoring techniques for dependable software for these systems. A few works present diverse design patterns and refactoring methods, and this article presents a case study of applying a couple of refactoring methods and techniques in order to improve the readability, maintainability, reusability and debugging friendliness of existing function block applications. The goal of the article is to validate the claimed advantages of refactoring by applying the existing techniques with the help of an empirical study.

  • 5.
    Danielsson, Robin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Analysis of Simulation tool for Future Flexible Assembly lines, 2022. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Volvo trucks can be ordered with many options to meet customers' demands. This challenges the efficiency of the manufacturing process, especially at the final assembly line where bare chassis are customised with parts and accessories. In the future, assembly lines may be more flexible to allow for assembly of different parts at higher efficiency. This thesis presents problem areas in traditional assembly lines and proposes a proof-of-concept for future flexible assembly line sequencing, as well as a computer simulation tool with the capability to evaluate variances in production capacity when trucks of different sizes and parts are assembled in consecutive order. Virtual models of flexible assembly lines are constructed as part of a software solution and used to simulate production sequences of varying truck configurations. Data collected from all simulations show a correlation between production capacity and the order in which vehicles are produced. The assembly line configuration itself has also been shown to greatly impact efficiency and might lead to an improvement of at least 39%, as well as limitations of tools and workers with specific capabilities. However, the presented performance numbers do not represent all possible simulation outcomes, which beyond the assembly line configuration also depend on things like product complexity and the assembly sequence of individual products.
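
    The thesis' simulation tool is not described in code here; as a toy illustration of why assembly sequence affects capacity, the sketch below computes the makespan of a small flow line for different truck orderings. Station times and truck variants are invented for the example and are not taken from the thesis.

```python
# Toy sequencing experiment (not the thesis tool): each truck variant has a
# work content per station, and the batch finishes when the last truck clears
# the last station.
from itertools import permutations

STATION_TIME = {            # minutes of work per station, invented figures
    "small":  [4, 5, 3],
    "medium": [6, 6, 5],
    "large":  [9, 7, 8],
}

def makespan(sequence, n_stations=3):
    """Permutation flow-line model: a truck enters a station only when both
    the station and the truck are free; returns the total batch time."""
    done = [0.0] * n_stations          # time at which each station becomes free
    finish = 0.0
    for truck in sequence:
        t = 0.0
        for s in range(n_stations):
            t = max(t, done[s]) + STATION_TIME[truck][s]
            done[s] = t
        finish = t
    return finish

batch = ["large", "small", "medium", "small"]
best = min(permutations(batch), key=makespan)
print(best, makespan(best), "vs original order:", makespan(tuple(batch)))
```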

  • 6.
    Esenov, Ilaman
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Next Generation Access Control as a support core system in the Arrowhead Framework, 2022. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In the fourth industrial revolution, known as Industry 4.0, massive amounts of data are collected, processed and communicated by cyber-physical systems and the Internet of Things (IoT). Although the nature of this data varies, industrial data is often proprietary and may cause harm to the data owner in the event of a resource leak. Nonetheless, Industrial Internet of Things (IIoT) and System of Systems (SoS) architectures frequently rely on data sharing in partner ecosystems to produce value, necessitating selective and granular access control to prevent sensitive data from being unintentionally shared.

    This thesis explores the possibilities of providing unified access control for services in the Arrowhead Framework (AF), a framework that provides an architecture for building IoT-based automation systems. Strong security mechanisms currently exist in AF for ensuring that access to services provided by constituent provider systems is only granted to authorized consumers. However, there is often a need for more dynamic and fine-granular access control than what is currently offered at an orchestration level.

    An Arrowhead system which employs a general policy language to express policy based access control can offer a broad and unified service solution, enabling frequent access queries from different application systems, dynamic policy change, and contextual policy variables. Such a system has the potential to be a highly versatile asset for policy enforcement in Arrowhead SoS, and may serve as a go-to support system in AF.

    Next Generation Access Control (NGAC) is an attribute-based access control (ABAC) standard based on relations between data elements to create, manage and enforce access control policies, and enables systematic access control with a high level of granularity. We examine how NGAC can be used to securely enforce access control policies for data sharing with AF, and present a SoS solution that demonstrates the use of NGAC as the access control model for a resource system.

    The solution includes an Arrowhead policy server that enables enforcement of ABAC for Arrowhead-compliant application systems. We further examine the possibilities of easing integration of Arrowhead systems, and present a Policy Enforcement Point (PEP) library for the policy server. 
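
    NGAC expresses policy as relations between users, attributes and objects. The fragment below is a deliberately minimal, hypothetical illustration of that idea in Python; it is not the thesis' policy server, the PEP library, or the NGAC standard's actual data model, and all names and associations are invented.

```python
# Minimal relation-based policy check, loosely inspired by NGAC's
# user-attribute / operation / object-attribute associations.
USER_ATTRS = {"alice": {"maintenance_engineer"}, "bob": {"external_partner"}}
OBJ_ATTRS = {"vibration_log_42": {"sensor_data", "proprietary"}}
ASSOCIATIONS = [
    # (user attribute, allowed operation, object attribute)
    ("maintenance_engineer", "read", "sensor_data"),
    ("external_partner", "read", "public_data"),
]

def is_permitted(user: str, operation: str, obj: str) -> bool:
    """Grant access iff some association links one of the user's attributes,
    the requested operation, and one of the object's attributes."""
    ua, oa = USER_ATTRS.get(user, set()), OBJ_ATTRS.get(obj, set())
    return any(u in ua and op == operation and o in oa
               for u, op, o in ASSOCIATIONS)

print(is_permitted("alice", "read", "vibration_log_42"))  # True
print(is_permitted("bob", "read", "vibration_log_42"))    # False
```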

  • 7.
    Florberg, Jack
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Improving architecture documentation management with object-oriented tools, 2023. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Architectural documentation is crucial in the development process, as it helps developers understand the system’s architecture and make informed decisions. However, conventional documentation methods are often time-consuming and error-prone since most of the work is done manually. This becomes even more ambiguous when dealing with complex systems and when requirements are prone to changing during the development process.

    This thesis addresses these challenges by exploring the potential benefits of utilizing object-oriented documentation tools and plugins to improve the efficiency of writing and maintaining architectural documentation while also making sure that the conveyed information is sufficient and understandable for junior developers. Moreover, it investigates the possibility of generating an easy-to-maintain context view with the use of metadata to display architectural information.

    The thesis employs both a qualitative case study as well as a rapid application development (RAD) approach. The case study involves interviewing junior developers to find patterns in what junior developers look for in regards to understanding a system’s architecture. Using the RAD approach, a prototype system is developed that utilizes DollarDoc to treat documentation components as objects by being able to refer to these objects from one file to another.

    The result shows how architectural documentation can be designed to effectively communicate critical aspects of the system’s architecture to junior developers by providing a clear overview of its components. This contributes to a better understanding of the system’s functionality and purpose, leading to increased productivity and engagement. By using object-oriented documentation tools, the documentation structure becomes more maintainable, allowing for automatic updates and reliable information.

  • 8.
    Galar, Diego
    et al.
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. TECNALIA, Spain.
    Seneviratne, Dammika
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. TECNALIA, Spain.
    Kumar, Uday
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Big Data in Railway O&M: A Dependability Approach, 2022. In: Research Anthology on Big Data Analytics, Architectures, and Applications, IGI Global, 2022, Vol. 1, p. 391-416. Chapter in book (Other academic)
    Abstract [en]

    Railway systems are complex with respect to technology and operations with the involvement of a wide range of human actors, organizations and technical solutions. For the operations and control of such complexity, a viable solution is to apply intelligent computerized systems, for instance, computerized traffic control systems for coordinating airline transportation, or advanced monitoring and diagnostic systems in vehicles. Moreover, transportation assets cannot compromise the safety of the passengers by only applying operation and maintenance activities. Indeed, safety is a more difficult goal to achieve using traditional maintenance strategies and computerized solutions come into the picture as the only option to deal with complex systems interacting among them and trying to balance the growth in technical complexity together with stable and acceptable dependability indexes. Big data analytics are expected to improve the overall performance of the railways supported by smart systems and Internet-based solutions. Operation and Maintenance will be application areas, where benefits will be visible as a consequence of big data policies due to diagnosis and prognosis capabilities provided to the whole network of processes. This chapter shows the possibilities of applying the big data concept in the railway transportation industry and the positive effects on technology and operations from a systems perspective. © 2022 by IGI Global. All rights reserved.

  • 9.
    Kallempudi, Goutham
    et al.
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Hashmi, Khurram Azeem
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Pagani, Alain
    German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Stricker, Didier
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Afzal, Muhammad Zeshan
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Toward Semi-Supervised Graphical Object Detection in Document Images, 2022. In: Future Internet, E-ISSN 1999-5903, Vol. 14, no 6, article id 176. Article in journal (Refereed)
    Abstract [en]

    The graphical page object detection task classifies and localizes objects such as Tables and Figures in a document. As deep learning techniques for object detection become increasingly successful, many supervised deep neural network-based methods have been introduced to recognize graphical objects in documents. However, these models necessitate a substantial amount of labeled data for the training process. This paper presents an end-to-end semi-supervised framework for graphical object detection in scanned document images to address this limitation. Our method is based on a recently proposed Soft Teacher mechanism and examines the effects of small percentages of labeled data on the classification and localization of graphical objects. On both the PubLayNet and the IIIT-AR-13K datasets, the proposed approach outperforms the supervised models by a significant margin in all labeling ratios (1%, 5%, and 10%). Furthermore, the 10% PubLayNet Soft Teacher model improves the average precision of Table, Figure, and List by +5.4, +1.2, and +3.2 points, respectively, with a total mAP similar to that of the Faster-RCNN baseline. Moreover, our model trained on 10% of the IIIT-AR-13K labeled data beats the previous fully supervised method by +4.5 points.
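
    For intuition only, the snippet below sketches the pseudo-labelling step that teacher-student schemes such as Soft Teacher build on: confident teacher detections on unlabelled pages become training targets for the student. The data structures and threshold are assumptions, not the authors' implementation.

```python
# Core idea behind teacher-student pseudo-labelling, stripped to a few lines:
# keep only the teacher's high-confidence detections as pseudo ground truth.
from typing import Dict, List

def select_pseudo_labels(teacher_dets: List[Dict], score_thr: float = 0.9) -> List[Dict]:
    """Filter teacher detections by confidence before they supervise the student."""
    return [d for d in teacher_dets if d["score"] >= score_thr]

teacher_output = [
    {"box": [10, 20, 200, 180], "label": "table",  "score": 0.97},
    {"box": [220, 40, 300, 90], "label": "figure", "score": 0.55},
]
print(select_pseudo_labels(teacher_output))
# -> only the 'table' detection survives and would act as a training target
```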

  • 10.
    Keserwani, Prateek
    et al.
    Indian Institute of Technology, Roorkee, India.
    Saini, Rajkumar
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Roy, Partha Pratim
    Indian Institute of Technology, Roorkee, India.
    Robust Scene Text Detection for Partially Annotated Training Data, 2022. In: IEEE transactions on circuits and systems for video technology (Print), ISSN 1051-8215, E-ISSN 1558-2205, Vol. 32, no 12, p. 8635-8645. Article in journal (Refereed)
    Abstract [en]

    This article analyzed the impact of training data containing un-annotated text instances, i.e., partial annotation in scene text detection, and proposed a text region refinement approach to address it. Scene text detection is a problem that has attracted the attention of the research community for decades. Impressive results have been obtained for fully supervised scene text detection with recent deep learning approaches. These approaches, however, need a vast amount of completely labeled datasets, and the creation of such datasets is a challenging and time-consuming task. Research literature lacks the analysis of the partial annotation of training data for scene text detection. We have found that the performance of the generic scene text detection method drops significantly due to the partial annotation of training data. We have proposed a text region refinement method that provides robustness against the partially annotated training data in scene text detection. The proposed method works as a two-tier scheme. Text-probable regions are obtained in the first tier by applying hybrid loss that generates pseudo-labels to refine text regions in the second-tier during training. Extensive experiments have been conducted on a dataset generated from ICDAR 2015 by dropping the annotations with various drop rates and on a publicly available SVT dataset. The proposed method exhibits a significant improvement over the baseline and existing approaches for the partially annotated training data.

  • 11.
    Khosravi, Mahdi
    et al.
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Soleimanmeigouni, Iman
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Ahmadi, Alireza
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Nissen, Arne
    Trafikverket, Luleå, Sweden.
    Xiao, Xun
    Department of Mathematics and Statistics, University of Otago, Dunedin, New Zealand.
    Modification of correlation optimized warping method for position alignment of condition measurements of linear assets, 2022. In: Measurement, ISSN 0263-2241, E-ISSN 1873-412X, Vol. 201, article id 111707. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a modification to a well-known alignment method, correlation optimized warping (COW), to improve the efficiency of the method and reduce the positional errors in the measurements of linear assets. The modified method relaxes the restrictions of COW in aligning the start and end of datasets and decreases the computational time. Furthermore, the method takes advantage of the interdependencies between simultaneously measured channels to overcome the missing data problem. A case study on railway track geometry measurements was conducted to implement the proposed method and assess its performance in reducing the positioning inaccuracy of the measurements. The findings revealed that the modified method could decrease the positional errors of defects to below 25 cm in 94% of the trials.
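
    As a rough illustration of correlation-based alignment (the family of methods COW belongs to), the sketch below finds the sample shift that maximises the correlation between a measurement segment and a reference. Real COW additionally warps segment lengths, and the authors' modification goes further still; this is only a simplified, assumed example.

```python
# Illustration only: choose the lag that maximises Pearson correlation between
# the overlapping parts of a measurement segment and a reference signal.
import numpy as np

def best_shift(reference: np.ndarray, segment: np.ndarray, max_shift: int = 10) -> int:
    """Return the lag (in samples) that best aligns `segment` with `reference`."""
    n = len(segment)
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_shift, max_shift + 1):
        if lag >= 0:
            a, b = reference[lag:n], segment[:n - lag]
        else:
            a, b = reference[:n + lag], segment[-lag:]
        if len(a) < 2:
            continue
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

x = np.linspace(0, 4 * np.pi, 200)
reference = np.sin(x)
measurement = np.roll(reference, 5)        # simulate a 5-sample positional error
print(best_shift(reference, measurement))  # -> -5: the measurement is offset by 5 samples
```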

  • 12.
    Kullbrandt, Kenneth
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    The Efficiency of Good Software Practices: A Case Study on a Radar Meteor Analysis Software Rewrite, 2022. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Software engineering as a profession has, since early on in its conception, been focused on how best to maximize the quality of software. Quality in this regard includes both objectively measurable things (like speed, size, and cost) and less measurable things (like conciseness, elegance, and customer satisfaction).

    A large part of this came from the software crisis in the 1960s, 1970s, and 1980s, where many projects either failed, cost much more in time or money, or were inefficient or of low quality. Due to this, new technologies were developed to help combat these issues, such as maintaining documentation, CASE tools, object-oriented programming, and so on.

    Today, one of the ways to improve quality is by employing good software practices. These are often a set of informal rules on how to write your programs, what to factor in when designing, and how to manage the project, ranging from how testing should be done to the use of version control, continuous integration, and more.

    The purpose of this thesis is to make a case study on how a project employing good software practices compares to a project with limited use of it. To do this, a part of a software project was remade focusing on using good software practices during development. The chosen project was MU analysis - a project with the intent of analyzing meteor echoes and looking for signs of meteors or meteor trails.

    This project was rewritten in a combination of Python and C, with the system in focus being the event searcher and the converter. After the completion of the rewrite, the project was analyzed using a set of qualitative attributes as guidelines for the performance for each project. These were then examined between each project, comparing each qualitative attribute and each software practice used.

    It was found that by making the rewrite with a focus on good software practices, most relevant quality attributes increased. It was concluded that focusing on good software practices increases the quality of the software if emphasis is put on when to employ which strategies.

  • 13.
    Lind, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Visual Scripting for AR Board Games in Thrymd, 2021. Independent thesis Basic level (professional degree), 180 HE credits. Student thesis
    Abstract [en]

    In recent years, the interest in Augmented Reality (AR) applications for entertainment and productivity has grown. One company exploring this technology is LAZER WOLF STUDIOS, the developers behind Thrymd: an AR-driven board games platform powered by the Unity engine. 

    This paper details the development of a visual scripting framework, meant to provide end users with a means of developing their own games for the platform, without significant programming or background knowledge required. A graph-based visual language was implemented in a custom Unity editor window, in order to maintain a familiar and consistent feel for users. The graph consists of a series of branching, interconnected nodes which pass data in-between each other, and execute in succession. The graph is serialized as a Unity asset, and can easily be interacted with through regular C# scripts. 

    A small number of nodes were implemented, but for the system to be viable, more are needed. For that reason, extensibility was a core ideal; creating new node types must be fast and painless. As with any script layer, performance is generally worse than compiled code. Further work is needed to improve user experience.
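
    The thesis implements the graph as a custom Unity/C# editor; purely to illustrate the idea of interconnected nodes that pass data and execute in succession, here is a language-agnostic toy interpreter in Python with invented node types.

```python
# Toy node-graph interpreter: each node pulls values from its input nodes and
# computes its own, mirroring branching, interconnected nodes that pass data
# between each other. (Illustrative only; not the Thrymd framework.)
import random

class Node:
    def __init__(self, func, *inputs):
        self.func, self.inputs = func, inputs

    def evaluate(self):
        # Evaluate upstream nodes first, then apply this node's function.
        return self.func(*(n.evaluate() for n in self.inputs))

# Example graph for a board-game turn: roll two dice, sum them, decide the move.
die = lambda: Node(lambda: random.randint(1, 6))
total = Node(lambda a, b: a + b, die(), die())
decide = Node(lambda t: f"rolled {t}: {'move!' if t >= 7 else 'stay'}", total)
print(decide.evaluate())
```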

  • 14.
    Lindblom, William
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Overall Quality Measurement for Guideline Compliance: A Study in Software Quality, 2019. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
  • 15.
    Liyanage, Reshani
    et al.
    M3S Research Unit, University of Oulu, Oulu, Finland.
    Tripathi, Nirnaya
    M3S Research Unit, University of Oulu, Oulu, Finland.
    Päivärinta, Tero
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems. M3S Research Unit, University of Oulu, Oulu, Finland.
    Xu, Yueqiang
    M3S Research Unit, University of Oulu, Oulu, Finland.
    Digital Twin Ecosystems: Potential Stakeholders and Their Requirements, 2022. In: Software Business: 13th International Conference, ICSOB 2022, Bolzano, Italy, November 8–11, 2022, Proceedings / [ed] Noel Carroll; Anh Nguyen-Duc; Xiaofeng Wang; Viktoria Stray, Springer Nature, 2022, p. 19-34. Conference paper (Refereed)
    Abstract [en]

    Context: As industries are heading for digital transformation through Industry 4.0, the concept of the Digital Twin (DT) - software for digital transformation - has become popular. Many industries use DT for its advantages, such as predictive maintenance and real-time remote monitoring. Within the DT domain, an emerging topic is the concept of an ecosystem—a digital platform that would create value for different stakeholders in an ecosystem of DT-driven products and services. The identification of potential stakeholders and their requirements provides valuable contributions to the development of healthy Digital Twin Ecosystems (DTE). However, current empirical knowledge of potential stakeholders and their requirements is limited. Objective/Methodology: Thus, the objective of this research was to explore potential stakeholders and their requirements. The research employed an empirical research methodology in which semi-structured interviews were conducted with DT professionals for data collection. Results: Data analysis of the study revealed 13 potential stakeholders who were categorized as primary (manufacturers, suppliers, subcontractors, and intelligent robots), secondary (maintenance service providers, platform integration service providers, tech companies, etc.), and tertiary (research organizations, third-party value-added service providers, cyber security firms, etc.). This study also presents the different requirements of these stakeholders in detail. Contribution: The study contributes to both research and industry by identifying possible stakeholders and their requirements. It contributes to the literature by adding new knowledge on DTEs and fills a research gap, while contributing to industry by providing ample knowledge to practitioners that is useful in the development and maintenance of a healthy DTE.

  • 16.
    Lundgren, Kristoffer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    VR Museum Experience, 2020. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Museum Anna Nordlander is a part of Skellefteå Museum and focuses on the artist Anna Nordlander. The goal of the project was to develop a VR Experience centered around Anna Nordlander and one of her paintings. The content for the experience was provided and recorded by the people at Museum Anna Nordlander. The purpose was to create a playable demo with features like audio playback, interactable objects, and a modular approach to the code which would allow future additions to the project. This paper investigates how some of the design choices affect the user experience. Specifically, how our choices affect the users physically, and how prevalent certain symptoms common in VR are in the product. To gather the necessary data, both the Oculus Go and the Oculus Quest were tested. The user tests showed that room-scale tracking is an important feature to reduce user discomfort and nausea, and that teleportation-style movement is not a good solution while using the Oculus Go. The end result is a playable demo containing all the content provided to the developers and all the requested features. The demo is also intended to be modular and easy to expand upon in the future.

  • 17.
    Mishra, Alok
    et al.
    Department of Software Engg, Atilim University, Ankara, Turkey.
    Khatri, Sunil Kumar
    Amity Institute of Information Technology, Amity University, Noida, India.
    Kapur, P. K.
    Amity Institute of Information Technology, Amity University, Noida, India.
    Kumar, Uday
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Quality and reliability engineering: Trends and future directions, 2018. In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 24, no 12, p. 1677-1679. Article in journal (Other academic)
  • 18.
    Moe, Johan
    et al.
    Ericsson Radio Systems AB, Linköping.
    Carr, David
    Luleå University of Technology.
    Using execution trace data to improve distributed systems, 2002. In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 32, no 9, p. 889-906. Article in journal (Refereed)
    Abstract [en]

    One of the most challenging problems facing today's software engineer is to understand and modify distributed systems. One reason is that in actual use systems frequently behave differently than the developer intended. In order to cope with this challenge, we have developed a three-step method to study the run-time behavior of a distributed system. First, remote procedure calls are traced using CORBA interceptors. Next, the trace data is parsed to construct RPC call-return sequences, and summary statistics are generated. Finally, a visualization tool is used to study the statistics and look for anomalous behavior. We have been using this method on a large distributed system (more than 500000 lines of code) with data collected during both system testing and operation at a customer's site. Despite the fact that the distributed system had been in operation for over three years, the method has uncovered system configuration and efficiency problems. Using these discoveries, the system support group has been able to improve product performance and their own product maintenance procedures.
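
    The second step of the method (turning raw trace events into call-return statistics) can be sketched as follows. The event format and method names below are invented for illustration; the paper collects its traces through CORBA interceptors rather than from a hard-coded list.

```python
# Schematic version of the trace-parsing step: match each return event to its
# call event and derive per-method latency statistics.
from collections import defaultdict
from statistics import mean

events = [                       # (timestamp_ms, direction, call_id, method) - invented
    (100, "call",   1, "getStatus"),
    (103, "call",   2, "updateConfig"),
    (115, "return", 1, "getStatus"),
    (190, "return", 2, "updateConfig"),
]

open_calls, durations = {}, defaultdict(list)
for ts, direction, call_id, method in events:
    if direction == "call":
        open_calls[call_id] = ts
    else:                                    # pair the return with its call
        durations[method].append(ts - open_calls.pop(call_id))

for method, samples in durations.items():
    print(f"{method}: n={len(samples)}, mean={mean(samples):.1f} ms, max={max(samples)} ms")
```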

  • 19.
    Naik, Shivam
    et al.
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany.
    Hashmi, Khurram Azeem
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Pagani, Alain
    German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Stricker, Didier
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Afzal, Muhammad Zeshan
    Department of Computer Science, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; Mindgarage, Technical University of Kaiserslautern, 67663 Kaiserslautern, Germany; German Research Institute for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany.
    Investigating Attention Mechanism for Page Object Detection in Document Images, 2022. In: Applied Sciences, E-ISSN 2076-3417, Vol. 12, no 15, article id 7486. Article in journal (Refereed)
    Abstract [en]

    Page object detection in scanned document images is a complex task due to varying document layouts and diverse page objects. In the past, traditional methods such as Optical Character Recognition (OCR)-based techniques have been employed to extract textual information. However, these methods fail to comprehend complex page objects such as tables and figures. This paper addresses the localization problem and classification of graphical objects that visually summarize vital information in documents. Furthermore, this work examines the benefit of incorporating attention mechanisms in different object detection networks to perform page object detection on scanned document images. The model is designed with a PyTorch-based framework called Detectron2. The proposed pipelines can be optimized end-to-end and exhaustively evaluated on publicly available datasets such as DocBank, PublayNet, and IIIT-AR-13K. The achieved results reflect the effectiveness of incorporating the attention mechanism for page object detection in documents.

  • 20.
    Nikolaidou, Konstantina
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Seuret, Mathias
    Pattern Recognition Lab Computer Vision Group, Friedrich-Alexander-Universität, Martensstr. 3, 91058, Erlangen, Bavaria, Germany.
    Mokayed, Hamam
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Liwicki, Marcus
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    A survey of historical document image datasets, 2022. In: International Journal on Document Analysis and Recognition, ISSN 1433-2833, E-ISSN 1433-2825, Vol. 25, no 4, p. 305-338. Article in journal (Refereed)
    Abstract [en]

    This paper presents a systematic literature review of image datasets for document image analysis, focusing on historical documents, such as handwritten manuscripts and early prints. Finding appropriate datasets for historical document analysis is a crucial prerequisite to facilitate research using different machine learning algorithms. However, because of the very large variety of the actual data (e.g., scripts, tasks, dates, support systems, and amount of deterioration), the different formats for data and label representation, and the different evaluation processes and benchmarks, finding appropriate datasets is a difficult task. This work fills this gap, presenting a meta-study on existing datasets. After a systematic selection process (according to PRISMA guidelines), we select 65 studies that are chosen based on different factors, such as the year of publication, number of methods implemented in the article, reliability of the chosen algorithms, dataset size, and journal outlet. We summarize each study by assigning it to one of three pre-defined tasks: document classification, layout structure, or content analysis. We present the statistics, document type, language, tasks, input visual aspects, and ground truth information for every dataset. In addition, we provide the benchmark tasks and results from these papers or recent competitions. We further discuss gaps and challenges in this domain. We advocate for providing conversion tools to common formats (e.g., COCO format for computer vision tasks) and always providing a set of evaluation metrics, instead of just one, to make results comparable across studies.

  • 21.
    Nilsson, Mattias
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Schelén, Olov
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Lindgren, Anders
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab. Applied AI and IoT, RISE Research Institutes of Sweden, Kista, Sweden.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Paniagua, Cristina
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Delsing, Jerker
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Integration of Neuromorphic AI in Event-Driven Distributed Digitized Systems: Concepts and Research Directions, 2023. In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 17, article id 1074439. Article in journal (Refereed)
    Abstract [en]

    Increasing complexity and data-generation rates in cyber-physical systems and the industrial Internet of things are calling for a corresponding increase in AI capabilities at the resource-constrained edges of the Internet. Meanwhile, the resource requirements of digital computing and deep learning are growing exponentially, in an unsustainable manner. One possible way to bridge this gap is the adoption of resource-efficient brain-inspired “neuromorphic” processing and sensing devices, which use event-driven, asynchronous, dynamic neurosynaptic elements with colocated memory for distributed processing and machine learning. However, since neuromorphic systems are fundamentally different from conventional von Neumann computers and clock-driven sensor systems, several challenges are posed to large-scale adoption and integration of neuromorphic devices into the existing distributed digital–computational infrastructure. Here, we describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges. Based on this analysis, we propose a microservice-based conceptual framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy, which would provide virtualization and communication capabilities required in distributed systems of systems, in combination with a declarative programming approach offering engineering-process abstraction. We also present concepts that could serve as a basis for the realization of this framework, and identify directions for further research required to enable large-scale system integration of neuromorphic devices.

  • 22.
    Panahi, Parisa
    et al.
    Department of Chemical Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran.
    Khorasani, Saied Nouri
    Department of Chemical Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran.
    Mensah, Rhoda Afriyie
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Structural and Fire Engineering.
    Das, Oisik
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Structural and Fire Engineering.
    Neisiany, Rasoul Esmaeely
    Department of Polymer Engineering, Hakim Sabzevari University, Sabzevar 9617976487, Iran; Biotechnology Centre, Silesian University of Technology, Krzywoustego 8, 44-100 Gliwice, Poland.
    A review of the characterization methods for self-healing assessment in polymeric coatings, 2024. In: Progress in organic coatings, ISSN 0300-9440, E-ISSN 1873-331X, Vol. 186, article id 108055. Article, review/survey (Refereed)
  • 23.
    Parnes, Peter
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Backman, Ylva
    Luleå University of Technology, Department of Health, Learning and Technology, Education, Language, and Teaching.
    Gardelli, Viktor
    Luleå University of Technology, Department of Health, Learning and Technology, Education, Language, and Teaching.
    WalkAbout – A net-based interactive multiuser 3D-environment for enhanced and engaging learning, 2022. In: Bidrag från 8:e Utvecklingskonferensen för Sveriges ingenjörsutbildningar [Contributions from the 8th Development Conference for Swedish Engineering Education Programmes] / [ed] Helena Håkansson, 2022, p. 150-158. Conference paper (Refereed)
    Abstract [en]

    This paper presents the current and ongoing research and development of WalkAbout, a distributed and open virtual world application for enhanced and engaging learning. Using WalkAbout, teachers and learners can engage in active learning using different 3D-environments online, where learning and education can be conducted. The environment allows learners to represent themselves using many different avatars, animations, expressions paired with traditional voice communication. More classical presentations are done using one or several virtual web screens that allow users to bring outside content into the virtual world. Another aspect presented in the paper is how gamification can be used to enhance the learning using missions, points and challenges. The paper also discusses aspects of using a commercial game development engine for a non-game application and discusses possible future directions for how an open world learning environment online can be further developed and be used in other scenarios. 

  • 24.
    Pinto, Rui
    et al.
    Department of Informatics Engineering, Faculty of Engineering, University of Porto, Porto, Portugal.
    Gonçalves, Gil
    Department of Informatics Engineering, Faculty of Engineering, University of Porto, Porto, Portugal.
    Delsing, Jerker
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Tovar, Eduardo
    Research Centre in Real-Time and Embedded Computing Systems, Polytechnic of Porto - School of Engineering, Porto, Portugal.
    Enabling data-driven anomaly detection by design in cyber-physical production systems, 2022. In: Cybersecurity, E-ISSN 2523-3246, Vol. 5, article id 9. Article in journal (Refereed)
    Abstract [en]

    Designing and developing distributed cyber-physical production systems (CPPS) is a time-consuming, complex, and error-prone process. These systems are typically heterogeneous, i.e., they consist of multiple components implemented with different languages and development tools. One of the main problems nowadays in CPPS implementation is enabling security mechanisms by design while reducing the complexity and increasing the system’s maintainability. Adopting the IEC 61499 standard is an excellent approach to tackle these challenges by enabling the design, deployment, and management of CPPS in a model-based engineering methodology. We propose a method for CPPS design based on the IEC 61499 standard. The method allows designers to embed a bio-inspired anomaly-based host intrusion detection system (A-HIDS) in Edge devices. This A-HIDS is based on the incremental Dendritic Cell Algorithm (iDCA) and can analyze OPC UA network data exchanged between the Edge devices and detect attacks that target the CPPS’ Edge layer. This study’s findings have practical implications on the industrial security community by making novel contributions to the intrusion detection problem in CPPS considering immune-inspired solutions, and cost-effective security by design system implementation. According to the experimental data, the proposed solution can dramatically reduce design and code complexity while improving application maintainability and successfully detecting network attacks without negatively impacting the performance of the CPPS Edge devices.
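
    The incremental Dendritic Cell Algorithm is considerably richer than this, but as a rough intuition the sketch below accumulates "safe" and "danger" signals per network flow and flags the flow when danger dominates. The signal mappings, thresholds and traffic figures are invented and do not come from the paper.

```python
# Grossly simplified intuition for DCA-style anomaly detection: each flow
# accumulates "safe" and "danger" signal contributions over time and is
# flagged when accumulated danger outweighs safety. Not the paper's iDCA.
from dataclasses import dataclass

@dataclass
class FlowContext:
    safe: float = 0.0
    danger: float = 0.0

    def observe(self, msg_rate: float, error_rate: float):
        # Invented mapping: steady traffic counts as "safe", protocol errors as "danger".
        self.safe += max(0.0, 1.0 - abs(msg_rate - 10.0) / 10.0)
        self.danger += 5.0 * error_rate

    def anomalous(self, threshold: float = 1.5) -> bool:
        return self.danger > threshold * self.safe

normal, attack = FlowContext(), FlowContext()
for _ in range(20):
    normal.observe(msg_rate=10.0, error_rate=0.0)
    attack.observe(msg_rate=80.0, error_rate=0.4)
print(normal.anomalous(), attack.anomalous())   # -> False True
```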

  • 25.
    Raghavendran, Krishna Raj
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Elragal, Ahmed
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Low-Code Machine Learning Platforms: A Fastlane to Digitalization, 2023. In: Informatics, E-ISSN 2227-9709, Vol. 10, no 2, article id 50. Article in journal (Refereed)
    Abstract [en]

    In the context of developing machine learning models, until and unless we have the required data engineering and machine learning development competencies as well as the time to train and test different machine learning models and tune their hyperparameters, it is worth trying out the automatic machine learning features provided by several cloud-based and cloud-agnostic platforms. This paper explores the possibility of generating automatic machine learning models with low-code experience. We developed criteria to compare different machine learning platforms for generating automatic machine learning models and presenting their results. Thereafter, lessons learned by developing automatic machine learning models from a sample dataset across four different machine learning platforms were elucidated. We also interviewed machine learning experts to conceptualize their domain-specific problems that automatic machine learning platforms can address. Results showed that automatic machine learning platforms can provide a fast track for organizations seeking the digitalization of their businesses. Automatic machine learning platforms help produce results, especially for time-constrained projects where resources are lacking. The contribution of this paper is in the form of a lab experiment in which we demonstrate how low-code platforms can provide a viable option to many business cases and, henceforth, provide a lane that is faster than the usual hiring and training of already scarce data scientists and to analytics projects that suffer from overruns.
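
    The platforms' AutoML APIs all differ, so the paper's lab experiment cannot be reproduced in a few lines; the vendor-neutral sketch below only shows the model-selection idea such platforms automate, using scikit-learn and a bundled dataset as stand-ins.

```python
# Vendor-neutral sketch of what AutoML services automate: try several model
# families on the same data and keep the best cross-validated one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```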

  • 26.
    Raghavendran, Krishnaraj
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Analysis Of Fastlane For Digitalization Through Low-Code ML Platforms, 2022. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Even a professional photographer sometimes uses the automatic default settings that come with the camera to take a photo. One can debate the quality of the outcome from manual versus automatic mode. Until and unless we have a professional level of competence in taking a photo, keep updating our skills and knowledge as per the latest market trends, and have enough time to try out different settings manually, it is worthwhile to use auto-mode. Camera manufacturers, after several iterations of testing, come up with a list of ideal parameter values, which is embedded as the factory default setting when we choose auto-mode. Non-professional photographers and amateurs are recommended to use the auto-mode that comes with the camera so as not to miss the moment. Similarly, in the context of developing machine learning models, until and unless we have the required data-engineering and ML development competence, and the time to train and test different ML models and tune different hyperparameter settings, it is worth trying out the automatic machine learning features provided off the shelf by all the cloud-based and cloud-agnostic ML platforms. This thesis deep dives into evaluating the possibility of generating automatic machine learning models with a no-code/low-code experience provided by GCP, AWS, Azure and Databricks. We have made a comparison between the different ML platforms on generating automatic ML models and presenting their results. The thesis also covers the lessons learnt by developing automatic ML models from a sample dataset across all four ML platforms. Later, we outline machine learning subject matter experts' viewpoints on using automatic machine learning models. From this research, we found that automatic machine learning can come in handy for many off-the-shelf analytical use cases; it can be highly beneficial especially for time-constrained projects, when resource competence or staffing is a bottleneck, and even when competent data scientists want a second opinion or want to compare AutoML results with a custom-built ML model.

  • 27.
    Rapp, Thomas
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Space Technology. Julius-Maximilians University of Würzburg, Department of Mathematics and Computer Science, Chair of Aerospace Information Technology, Professorship of Space Technology.
    Development and Implementation of a Mission Planning Tool for SONATE, 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In the scope of the master's project which is documented in the present thesis, a mission planning tool (MPT) for SONATE was developed and implemented. After thorough research on the current state of the art of MPTs, and taking especially the early stage of the SONATE mission into account, it was decided to develop a generic timeline-based MPT. In contrast to existing MPTs, a system is envisioned which is both powerful, regarding advanced features like resource control, and applicable to small satellite missions regarding the overall complexity and the associated configuration and training effort. Although it was obvious from an early stage that this vision could not be reached in the scope of this project, it was kept during the project definition, object-oriented analysis and early design stages in order to allow future extensions. Also the decision to develop the MPT on top of the Eclipse Rich Client Platform is mainly due to the argument of future extensibility.

    The MPT, which is released with this thesis, is hence a very basic generic timeline-based MPT, omitting all possible advanced features like resource control or procedure validation, but featuring all essential parts of an MPT, i.e. modelling of procedures, scheduling of activities, and the generation of telecommand sequences. Furthermore, the user is supported by an intuitive graphical user interface. The thesis documents the development process, thus giving a broad understanding of the design and the implementation. For specific details of the implementation one may also refer to the separate technical documentation, while a user handbook is included as an appendix.

    The characteristics of the SONATE mission as a technology demonstrator for highly autonomous systems raise several important questions regarding the overall mission planning process. Therefore, besides the actual development of the MPT, those questions are discussed in a theoretical manner in the scope of this thesis, taking also account of the general emergence of highly autonomous satellite systems. Three concepts, Safe Planning, Sigma Resource Propagation, and Direct Telemetry Feedback, are proposed to face the challenges arising from the foreseen alternation of phases of classical mission operations and phases of autonomous operations of the satellite.

    Concluding the thesis, the final software product's features and capabilities are verified against the previously defined requirements and thus the overall success of the project is determined to be a 100% success fulfilling all primary project objectives. Finally, several fields for further research on the topic in general and work on the MPT itself are identified and outlined to pave the way for follow-up projects.

  • 28.
    Reis Da Silva, Bruno
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Use of Big Data Analytics for Public Transport Efficiency: Evidence from Natal (RN), Brazil, 2023. Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

     Citizens from various cities around the world utilize different types of public transport to commute from one place to another. Additionally, information and communication technology (ICT) has been evolving over the last few decades, and governments are using it to improve the quality of the services provided to their citizens, such as public transport, together with the analysis of the available data. Thus, big data analytics is one of the technologies that are emerging as solutions to help improve efficiency in this specific segment. This thesis presents findings from a variety of articles by conducting a literature review about public transport, big data analytics, and the city of Natal, Rio Grande do Norte (RN), Brazil – the target city of this research. Specifically, the study sought to understand how big data analytics could improve the efficiency of public transport in Natal. Therefore, driven to answer the research question, issues were identified which had been caused by existing public transport in the city, which affected other sectors such as climate change, causes of environmental damage, vehicle engineering design, logistics, overpopulation, pollution, and traffic congestion. By implementing big data analytics solutions to each of these findings, promising outcomes were uncovered that may improve the public transport efficiency of this target city.

    Download (pdf)
    fulltext
  • 29.
    Renman, Filip
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Multi-User Classroom Environment in Virtual Reality: Creating and Implementing new Features2020Independent thesis Basic level (professional degree), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    A research group from Luleå University of Technology in Skellefteå is working to create a virtual reality classroom to be used for educational purposes in mining education. In this classroom, a teacher should be able to hold a lesson for their students in an intuitive and pleasant way. In this part of the project I have focused on implementing functionality in the classroom with the help of the game engine Unity. The result is a program where a teacher can now create a virtual classroom that students can connect to. User tests have been performed to verify the user-friendliness of the program; they showed that the classroom was functional and that it did not cause any feeling of motion sickness, known as cybersickness.

    Download full text (pdf)
    fulltext
  • 30.
    Rovolis, Georgios
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Habibipour, Abdolrasoul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    When participatory design meets data-driven decision making: A literature review and the way forward2024In: Management Science Letters, ISSN 1923-9335, E-ISSN 1923-9343, Vol. 14, no 2, p. 107-126Article, review/survey (Refereed)
    Abstract [en]

    This study explores the impacts of participatory design (PD) on data-driven decision-making (DDDM) in organisations. Despite the extensive examination of PD and DDDM individually, there is a dearth of research on their integration and their impact on decision-making processes in organisations. This research aims to fill this gap by investigating the potential impacts, challenges, benefits, and critical success factors associated with the incorporation of PD activities into DDDM. The study employs a systematic literature review methodology to provide a comprehensive understanding of the topic. The paper provides a research agenda for future researchers and discusses best practices for organisations seeking to optimise their data-driven decision-making processes in a participatory manner. The research also discusses the ethical implications of data-driven decision-making. Ultimately, this research advances our understanding of how PD and DDDM can be effectively combined to achieve better decision-making outcomes.

    Download full text (pdf)
    fulltext
  • 31.
    Salin, Hannes
    et al.
    Department of Information and Communication Technology, Swedish Transport Administration, 78189 Borlänge, Sweden.
    Lundgren, Martin
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems.
    Towards Agile Cybersecurity Risk Management for Autonomous Software Engineering Teams2022In: Journal of Cybersecurity and Privacy, ISSN 2624-800X, Vol. 2, no 2, p. 276-291Article in journal (Refereed)
    Abstract [en]

    In this study, a framework was developed, based on a literature review, to help managers incorporate cybersecurity risk management in agile development projects. The literature review used predefined codes that were developed by extending previously defined challenges in the literature on developing secure software in agile projects to include aspects of agile cybersecurity risk management. Five steps were identified based on the insights gained from how the reviewed literature has addressed each of the challenges: (1) risk collection; (2) risk refinement; (3) risk mitigation; (4) knowledge transfer; and (5) escalation. To assess the appropriateness of the identified steps, and to determine their inclusion in or exclusion from the framework, a survey was submitted to 145 software developers using a four-point Likert scale to measure attitudes towards each step. The resulting framework presented herein serves as a starting point to help managers and developers structure their agile projects in terms of cybersecurity risk management, supporting less overloaded agile processes, stakeholder insights on relevant risks, and increased security assurance.
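    As a rough illustration only (the class and field names below are assumptions, not taken from the paper), the five steps can be modelled as states through which a risk item progresses; in practice escalation would be conditional rather than the automatic final state used in this simplified sketch:

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskStep(Enum):
    """The five steps identified in the reviewed literature."""
    COLLECTION = auto()
    REFINEMENT = auto()
    MITIGATION = auto()
    KNOWLEDGE_TRANSFER = auto()
    ESCALATION = auto()

@dataclass
class RiskItem:
    description: str
    severity: int                  # e.g. 1 (low) .. 5 (critical)
    step: RiskStep = RiskStep.COLLECTION

    def advance(self) -> None:
        """Move the risk to the next step (linear progression for simplicity)."""
        order = list(RiskStep)
        idx = order.index(self.step)
        if idx < len(order) - 1:
            self.step = order[idx + 1]

# Usage: a newly collected risk is refined in the next sprint ceremony.
risk = RiskItem("Unvalidated input in payment API", severity=4)
risk.advance()
print(risk.step.name)   # REFINEMENT
```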

    Download full text (pdf)
    fulltext
  • 32.
    Souza Rossi, Henrique
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Mitra, Karan
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Åhlund, Christer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Cotanis, Irina
    Infovista, Skellefteå, Sweden.
    Ögren, Niclas
    Infovista, Skellefteå, Sweden.
    Johansson, Per
    Infovista, Skellefteå, Sweden.
    ALTRUIST: A Multi-platform Tool for Conducting QoE Subjective Tests2023In: 2023 15th International Conference on Quality of Multimedia Experience (QoMEX), IEEE, 2023, p. 99-102Conference paper (Refereed)
    Abstract [en]

    Quality of Experience (QoE) subjective assessment often demands setting up expensive lab experiments that involve controlling several software programs and services. In addition, these experiments may pose significant challenges regarding management of testbed software components as they may have to be synchronized for efficient data collection, leading to human errors or loss of time. Further, maintaining error-free repeatability between subsequent subjective tests and comprehensive data collection is essential. Therefore, this paper proposes, develops and validates ALTRUIST, a multi-platform tool that assists the experimenter in conducting subjective tests by controlling external applications, facilitates data collection and automates test execution for conducting repeatable subjective tests in broad application areas.
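    Purely as a hypothetical sketch of the kind of orchestration described above (not ALTRUIST's actual API; the commands and file names are placeholders), an experiment controller might launch external applications in a fixed order and timestamp each step so the collected data can later be aligned:

```python
import csv
import subprocess
import time

def run_step(label: str, command: list, log_writer) -> None:
    """Launch an external application, wait for it to finish, and log timestamps."""
    start = time.time()
    result = subprocess.run(command, capture_output=True, text=True)
    log_writer.writerow([label, start, time.time(), result.returncode])

with open("session_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["step", "start_ts", "end_ts", "return_code"])
    # Each test condition is launched in a fixed, repeatable order;
    # the video clips and player command here are only placeholders.
    run_step("baseline_video", ["ffplay", "-autoexit", "clip_a.mp4"], writer)
    run_step("degraded_video", ["ffplay", "-autoexit", "clip_b.mp4"], writer)
```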

  • 33.
    Suteu, Silviu Cezar
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    OPS-SAT Software Simulator2016Independent thesis Advanced level (degree of Master (Two Years)), 80 credits / 120 HE creditsStudent thesis
    Abstract [en]

    OPS-SAT is an in-orbit laboratory mission designed to allow experimenters to deploy new on-board software and perform in-orbit demonstrations of new technology and concepts related to mission operations. The NanoSat MO Framework facilitates the process of developing experimental on-board software for OPS-SAT by abstracting the complexities related to communication across the space-to-ground link as well as the details of low-level device access. The objective of this project is to implement functional simulation models of OPS-SAT peripherals and orbit/attitude behavior, which, integrated together with the NanoSat MO Framework, provide a sufficiently realistic runtime environment for OPS-SAT on-board software experiment development. Essentially, the simulator exposes communication interfaces for executing commands which affect the payload instruments and/or retrieve science data and telemetry. The commands can be run either from the MO Framework or manually, from an intuitive GUI which performs a syntax check. In this case, the output will be displayed for advanced debugging. The end result of the thesis work is a virtual machine which has all the tools installed to develop cutting-edge technology space applications.
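    As a language-agnostic illustration only (the real simulator builds on the NanoSat MO Framework and its own interfaces; every name below is invented), the core idea of a command interface guarded by a syntax check can be sketched as:

```python
import re

# Registry of hypothetical commands and the argument pattern each one accepts.
COMMANDS = {
    "CAMERA_TAKE_PICTURE": re.compile(r"^exposure=\d+(\.\d+)?$"),
    "GPS_GET_POSITION":    re.compile(r"^$"),
}

def check_syntax(command: str, args: str) -> bool:
    """Return True if the command exists and its arguments match its pattern."""
    pattern = COMMANDS.get(command)
    return pattern is not None and bool(pattern.match(args))

def execute(command: str, args: str) -> str:
    """Dispatch a command; a real simulator would update instrument state here."""
    if not check_syntax(command, args):
        return f"SYNTAX ERROR: {command} {args}"
    return f"OK: executed {command}({args})"

print(execute("CAMERA_TAKE_PICTURE", "exposure=0.02"))
print(execute("CAMERA_TAKE_PICTURE", "gain=high"))   # rejected by the syntax check
```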

    Download full text (pdf)
    fulltext
  • 34.
    Thomay, Christian
    et al.
    Research Studios Austria FG, Vienna, Austria.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Isakovic, Haris
    Viewpointsystem GmbH, Vienna, Austria.
    Lasch, Rainer
    Technische Universitat Dresden, Dresden, Germany.
    Race, Nicholas
    Lancaster University, Lancaster, UK.
    Schmittner, Christoph
    Austrian Institute of Technology, Vienna, Austria.
    Schneider, Germar
    Infineon Technologies Dresden GmbH & Co. KG, Dresden, Germany.
    Szepessy, Zsolt
    evopro Innovation Kft., Budapest, Hungary.
    Tauber, Markus
    Research Studios Austria FG, Vienna, Austria.
    Wang, Zhiping
    Volvo Group, Göteborg, Sweden.
    Towards Adaptive Quality Assurance in Industrial Applications2022In: Proceedings of the IEEE/IFIP Network Operations and Management Symposium 2022: Network and Service Management in the Era of Cloudification, Softwarization and Artificial Intelligence / [ed] Pal Varga, Lisandro Zambenedetti Granville, Alex Galis, Istvan Godor, Noura Limam, Prosper Chemouil, Jérôme François, Marc-Oliver Pahl, IEEE, 2022Conference paper (Refereed)
    Abstract [en]

    We propose the AQUILA framework (Adaptive Quality Assurance in Industrial Applications), a concept for digitalization in Industry 4.0 that supports the entire industrial manufacturing chain and lays the groundwork for adaptive quality assurance in times of disrupted supply chains and travel restrictions caused by the COVID-19 pandemic. To that end, our proposed framework allows for the definition and description of industrial processes, quality assurance and testing protocols, and training scenarios in a comprehensive notation based on BPMN. It also supports users in task execution, documentation, and evaluation by providing smart glass-based HCI with eye-tracking technology, delivering a combination of process documentation, context-sensitive AR visualization, gaze-based interaction schemes, and remote maintenance and assistance functionality.

  • 35.
    Wang, Junyong
    et al.
    School of Computer Information and Technology, Beijing Jiaotong University, No.3, Shangyuan Village, Xizhimen Wai, Haidian District, Beijing, China.
    Baker, Thar
    Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates.
    Zhou, Yingnan
    School of Computer Information and Technology, Beijing Jiaotong University, No.3, Shangyuan Village, Xizhimen Wai, Haidian District, Beijing, China.
    Awad, Ali Ismail
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Digital Services and Systems. College of Information Technology, United Arab Emirates University, Al Ain P.O. Box 17551, United Arab Emirates; Centre for Security, Communications and Network Research, University of Plymouth, Plymouth PL4 8AA, UK.
    Wang, Bin
    Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity, Hangzhou, China.
    Zhu, Yongsheng
    School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China; Institute of Computing Technologies, China Academy of Railway Sciences Corporation Limited, Beijing, China.
    Automatic mapping of configuration options in software using static analysis2022In: Journal of King Saud University - Computer and Information Sciences, ISSN 1319-1578, Vol. 10, no part B, p. 10044-10055Article in journal (Refereed)
    Abstract [en]

    Configuration errors are among the main reasons for software failures. Some configuration options may even negatively impact the software's security, so that if a user sets the options inappropriately, there may be a serious security risk for the software. Recent studies have proposed mapping option read points to configuration options as a first step towards reducing the occurrence of configuration errors. Unfortunately, most available techniques rely on manual effort, and the rest require additional input, such as an operation manual, which not all software provides in a standardized form. We propose a technique based on program and static analysis that can automatically map all the configuration options of a program just by reading its source code. Our evaluation shows that this technique achieves success rates of 88.6%, 97.7%, 94.6%, 94.8%, and 92.6% when extracting configuration options from the Hadoop modules Common, Hadoop Distributed File System, MapReduce, and YARN, and from PX4, respectively. We found 53 configuration options in PX4 that were not documented and submitted these to the developers. Compared with published work, our technique is more effective in mapping options, and it may lay the foundation for subsequent research on software configuration security.
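    The paper targets real-world code bases such as Hadoop and PX4; purely as a simplified illustration of the underlying idea, mapping option read points by reading source code, the sketch below scans Python files for calls of the form config.get("option") and records where each option is read (the directory name and call pattern are assumptions):

```python
import ast
from collections import defaultdict
from pathlib import Path

def map_option_read_points(source_dir: str) -> dict:
    """Map configuration option names to the (file, line) pairs that read them."""
    mapping = defaultdict(list)
    for path in Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            # Look for calls of the form <obj>.get("option_name", ...)
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and node.func.attr == "get"
                    and node.args
                    and isinstance(node.args[0], ast.Constant)
                    and isinstance(node.args[0].value, str)):
                mapping[node.args[0].value].append((str(path), node.lineno))
    return mapping

if __name__ == "__main__":
    # Usage: print every option found under ./src and where it is read.
    for option, reads in map_option_read_points("./src").items():
        print(option, "->", reads)
```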

    Download full text (pdf)
    fulltext
  • 36.
    Wikström, Tobias
    Luleå University of Technology, Department of Engineering Sciences and Mathematics.
    Electrical Design Automation2021Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    This project aims to develop electrical design automation. Design automation reduces working time and therefore production cost.

    Gestamp Hardtech AB is a company that produces components for the BIW (Body In White) stage of assembly used by automotive manufacturers. Hardtech also manufactures the hot stamping tools used for the production of these components. Part of the work to create these tools is to produce electrical schematics for the wiring of the heaters and thermocouples. Today these are created manually with 2-D CAD (Computer-Aided Design) software. The work is regarded as both monotonous and prone to human error: each schematic is similar in many ways, hence the monotony, and they also contain a lot of details, which can lead to mistakes. The work is therefore well suited to automation. The study at hand proposes a new automated method to produce electrical schematics for these hot stamping tools.

    In order to develop successful software able to automate the electrical design, a development process is utilised. The process consists of six different phases: (0) Planning, (1) Requirements & Concept Development, (2) Software Design, (3) Implementation, (4) Testing, and (5) Documentation.

    The software consists of several separate pieces which together automate the design process: a loader, which extracts and converts data; a convenient data structure for easy access; an API (Application Programming Interface) system for data fetching; a GUI (Graphical User Interface) where the user can manipulate the data; and, lastly, the schematic creator, which generates all the schematics based on the data manipulated by the user.
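    The thesis does not reproduce its code here; as a hypothetical outline of the architecture just described (loader, internal data structure, and schematic creator, with invented names and a made-up input format), the flow could look like:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Heater:
    """Internal data structure: one heater element in the hot stamping tool."""
    tag: str
    zone: int
    power_w: float

def load_tool_data(path: str) -> List[Heater]:
    """Loader: extract and convert raw, semicolon-separated tool data."""
    heaters = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            tag, zone, power = line.strip().split(";")
            heaters.append(Heater(tag, int(zone), float(power)))
    return heaters

def create_schematics(heaters: List[Heater]) -> List[str]:
    """Schematic creator: emit one wiring entry per heater, grouped by zone."""
    return [f"ZONE {h.zone}: {h.tag} wired at {h.power_w:.0f} W"
            for h in sorted(heaters, key=lambda h: (h.zone, h.tag))]

# Usage (with a hypothetical export file):
# heaters = load_tool_data("tool_export.csv")
# print(*create_schematics(heaters), sep="\n")
```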

    The software ended up a success. After some initial learning of the user interface, it became simple to use. In one of the tests, the time to complete a project was reduced by a factor of 30. The software works as intended and is able to generate the most advanced types of designs. Among the requirements met are a customisable schematic layout, higher work efficiency, the possibility to change existing schematics, no rework time, and a lower total project completion time. Aside from these, all other requirements have also been met.

  • 37.
    Zdunek, Aleksander
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Automatic error detection and switching of redundant video signals, with focus on loop detection2018Independent thesis Advanced level (professional degree), 180 HE creditsStudent thesis
    Abstract [en]

    This report describes work done on implementing automatic detection of looping frame sequences in a video signal. The central loop detection algorithm is described. Hashing of video frames is used as a means of improving computational performance. Two video signals are compared with respect to whether they contain loops, and switching of the displayed stream is done based on the evaluated stream qualities. Repeating sequences (distinct from looping sequences) are also discussed, as well as cursory thoughts on further work towards a more comprehensive error detection package for video signals.
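    As a simplified sketch of the idea summarized above, hashing frames so that looping sequences can be detected cheaply (this is not the thesis implementation; the window size and hash choice are assumptions), one can compare the most recent window of frame hashes against the window immediately preceding it:

```python
import hashlib
from collections import deque

def frame_hash(frame_bytes: bytes) -> str:
    """Cheap fingerprint of a frame; identical frames give identical hashes."""
    return hashlib.md5(frame_bytes).hexdigest()

class LoopDetector:
    """Flags a loop when the last `window` hashes repeat the preceding `window`."""
    def __init__(self, window: int):
        self.window = window
        self.hashes = deque(maxlen=2 * window)

    def push(self, frame_bytes: bytes) -> bool:
        self.hashes.append(frame_hash(frame_bytes))
        if len(self.hashes) < 2 * self.window:
            return False
        h = list(self.hashes)
        return h[:self.window] == h[self.window:]

# Usage: a 3-frame sequence shown twice triggers the detector on the last frame.
detector = LoopDetector(window=3)
frames = [b"A", b"B", b"C", b"A", b"B", b"C"]
print([detector.push(f) for f in frames])   # [..., False, True]
```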

    Download full text (pdf)
    fulltext
  • 38.
    Zhang, Liangwei
    et al.
    Department of Industrial Engineering, Dongguan University of Technology, Dongguan, 523808, China.
    Lin, Jing
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics.
    Shao, Haidong
    Luleå University of Technology, Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics. State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China.
    Zhang, Zhicong
    Department of Industrial Engineering, Dongguan University of Technology, Dongguan, 523808, China.
    Yan, Xiaohui
    Department of Industrial Engineering, Dongguan University of Technology, Dongguan, 523808, China.
    Long, Jianyu
    Department of Industrial Engineering, Dongguan University of Technology, Dongguan, 523808, China.
    End-To-End Unsupervised Fault Detection Using A Flow-Based Model2021In: Reliability Engineering & System Safety, ISSN 0951-8320, E-ISSN 1879-0836, Vol. 215, article id 107805Article in journal (Refereed)
    Abstract [en]

    Fault detection has been extensively studied in both academia and industry. The rareness of faulty samples in the real world restricts the use of many supervised models, and the reliance on domain expertise for feature engineering raises other barriers. For this reason, this paper proposes an unsupervised, end-to-end approach to fault detection based on a flow-based model, the Nonlinear Independent Components Estimation (NICE) model. A NICE model models a target distribution via a sequence of invertible transformations to a prior distribution in the latent space. We prove that, under certain conditions, the L2-norm of normal samples' latent codes in a trained NICE model is Chi-distributed. This facilitates the use of hypothesis testing for fault detection purposes. Concretely, we first apply Zero-phase Component Analysis to decorrelate the data of normal states. The whitened data are fed to a NICE model for training, in a maximum likelihood sense. At the testing stage, samples whose latent-code L2-norm fails the hypothesis test are suspected of being generated by different mechanisms and are hence regarded as potential faults. The proposed approach was validated on two datasets of vibration signals and proved superior to several alternatives. We also show that NICE, as a type of generative model, can produce realistic vibration signals because of the model's bijective nature.
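    A minimal sketch of the hypothesis-testing step described above, assuming a trained flow is available through a hypothetical encode() function that maps ZCA-whitened samples to latent codes; under the stated conditions the latent codes of normal samples are standard normal, so their L2-norm follows a Chi distribution with d degrees of freedom:

```python
import numpy as np
from scipy.stats import chi

def detect_faults(latent_codes: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Flag samples whose latent-code L2-norm exceeds the Chi(d) critical value."""
    d = latent_codes.shape[1]                  # latent dimensionality
    norms = np.linalg.norm(latent_codes, axis=1)
    threshold = chi.ppf(1.0 - alpha, df=d)     # one-sided critical value
    return norms > threshold                   # True = suspected fault

# Usage sketch (encode() is a placeholder for a trained NICE model's forward pass,
# and zca_whiten() for the decorrelation step mentioned in the abstract):
# z_test = encode(zca_whiten(x_test))
# faults = detect_faults(z_test)
```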
