New methods for monitoring the condition of rolling-element bearings in rotating machinery offer possibilities to reduce repair and maintenance costs and to reduce the use of environmentally harmful lubricants. One such method is sparse representation of vibration signals using matching pursuit with dictionary learning, which has so far been tested on PCs with data from controlled tests. Further testing requires a platform capable of signal processing and control in more realistic experiments. This thesis focuses on the integration of a hybrid CPU-FPGA hardware system with a 16-bit analog-to-digital converter and an oil pump, granting the possibility of collecting real-time data, executing the algorithm in closed loop, and supplying lubrication to the machine under test if need be. The algorithm is implemented on a Zynq-7000 System-on-Chip, and the analog-to-digital converter as well as the pump motor controller are integrated. This platform enables portable operation of matching pursuit with dictionary learning in the field under a larger variety of environmental and operational conditions, conditions which might prove difficult to reproduce in a laboratory setup. The platform developed throughout this project can collect data using the analog-to-digital converter, and operations can be performed on that data in both the CPU and the FPGA. A test of the system function at a sampling rate of 5 kHz is presented, and the input and output are verified to function correctly.
An embedded system is a computer system that is part of a larger device with hardware and mechanical parts. Such a system often has limited resources (such as processing power, memory, and power) and typically has to meet hard real-time requirements. Today, as the area of application of embedded systems constantly widens, resulting in higher demands on system performance and a growing complexity of embedded software, there is a clear trend towards multi-core and multi-processor systems. Such systems are inherently concurrent, but programming concurrent systems using the traditional abstractions (i.e., explicit threads of execution) has been shown to be both difficult and error-prone. The natural solution is to raise the abstraction level and make concurrency implicit, in order to aid the programmer in the task of writing correct code. However, when we raise the abstraction level, there is always an inherent cost. In this thesis we consider one possible concurrency model, the concurrent reactive object approach, which offers implicit concurrency at the object level. This model has been implemented in the programming language Timber, which primarily targets the development of real-time systems. It is also implemented in TinyTimber, a subset of the C language closely matching Timber's execution model. We quantify various costs of a TinyTimber implementation of the model (such as context switching and message passing overheads) on a number of hardware platforms and compare them to the costs of the more common thread-based approach. We then demonstrate how some of these costs can be mitigated using the stack resource policy. On a separate track, we present a feasibility test for garbage collection in a reactive real-time system with automatic memory management, which is a necessary component for verification of the correctness of a real-time system implemented in Timber.
With the growing complexity of modern embedded real-time systems, scheduling and managing resources has become a daunting task. While scheduling and resource management for internal events can be simplified by adopting a commonplace real-time operating system (RTOS), scheduling and resource management for external events are left in the hands of the programmer, not to mention managing resources across the boundaries of external and internal events. In this paper we propose a unified system view incorporating earliest deadline first (EDF) for scheduling and the stack resource policy (SRP) for resource management. From an embedded real-time system view, EDF+SRP is attractive not only because stack usage can be minimized, but also because the cost of a preemption becomes almost as cheap as a regular function call, and the number of preemptions is kept to a minimum. EDF+SRP also lifts the burden of manual resource management from the programmer and incorporates it into the scheduler. Furthermore, we show the efficiency of the EDF+SRP scheme, the intuitiveness of the programming model (in terms of reactive programming), and the simplicity of the implementation.
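The key SRP rule mentioned above can be illustrated with a small sketch: a job may preempt only if its preemption level is strictly higher than the current system ceiling, which is what keeps preemptions rare and stack-friendly. The task set and resource names below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of the Stack Resource Policy (SRP) preemption test.
# Each task is (preemption_level, {resources it may lock}); each resource
# ceiling is the highest preemption level among tasks that use it.

def resource_ceilings(tasks):
    ceilings = {}
    for level, resources in tasks:
        for r in resources:
            ceilings[r] = max(ceilings.get(r, 0), level)
    return ceilings

def may_preempt(task_level, locked, ceilings):
    """SRP: preempt only if the task's preemption level is strictly
    higher than the system ceiling of the currently locked resources."""
    system_ceiling = max((ceilings[r] for r in locked), default=0)
    return task_level > system_ceiling

tasks = [(3, {"bus"}), (2, {"bus", "log"}), (1, {"log"})]
c = resource_ceilings(tasks)          # {"bus": 3, "log": 2}

# While "bus" is locked (ceiling 3), even the level-3 task must wait,
# so it never blocks mid-execution on the locked resource later on.
print(may_preempt(3, {"bus"}, c))     # False
print(may_preempt(3, {"log"}, c))     # True
```

This "blocked before starting, never after" property is what allows all jobs to share a single run-time stack.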
Evolving traceability requirements increasingly challenge manufacturing supply chain actors to collect tamper-proof and auditable evidence about what inputs they process, in what way these inputs are used, and what the resulting process outputs are. Traceability solutions based on blockchain technology have shown ways to satisfy the requirements of creating a tamper-proof and auditable trail of traceability data. However, the existing solutions struggle to meet the increasing storage requirements necessary to create an evidence trail using manufacturing data. In this paper, we show a way to create a tamper-proof and auditable evolving product story that uses a decentralized file system called the InterPlanetary File System (IPFS). We also show how using linked data can help auditors derive a traceable product story from such an accumulating evidence trail. The solution proposed herein can supplement existing blockchain-based traceability solutions and enable traceability in global manufacturing supply chains where forming a consortium incurs prohibitive costs and where storage requirements are high.
This thesis demonstrates a technique for developing efficient applications that interpret spatial deep learning output using Hyperdimensional Computing (HDC), also known as Vector Symbolic Architectures (VSA). As part of the application demonstration, a novel preprocessing technique for motion using state machines and spatial semantic pointers is explained. The application is evaluated while running on a Google Coral Edge TPU, interpreting real-time inference output of a compressed object detection model.
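The HDC/VSA machinery referred to above rests on three generic operations: binding, bundling, and similarity comparison. The sketch below shows these on random bipolar hypervectors; the role/filler names and the dimensionality are illustrative only and are not taken from the thesis.

```python
# Minimal sketch of core VSA/HDC operations: bind, bundle, compare.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                      # hypervectors are typically thousands of elements

def hv():                       # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

def bind(a, b):                 # binding: element-wise multiply (self-inverse)
    return a * b

def bundle(*vs):                # bundling: element-wise majority via sign of sum
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):                  # normalized similarity in [-1, 1]
    return float(a @ b) / D

color, shape = hv(), hv()       # role vectors
red, circle = hv(), hv()        # filler vectors
scene = bundle(bind(color, red), bind(shape, circle))

# Binding the scene with a role vector recovers a noisy copy of its filler:
query = bind(scene, shape)
print(sim(query, circle) > 0.3)    # True: circle is recoverable
print(abs(sim(query, red)) < 0.1)  # True: red is near-orthogonal noise
```

Because random high-dimensional vectors are nearly orthogonal, the "wrong" fillers contribute only small noise terms, which is what makes holistic queries against a bundled representation work.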
A self-sensing material can not only carry a load but can also provide data about the load and stress it is being subjected to. Traditional additive manufacturing has limited capabilities in producing self-sensing material. Existing 3D printers, whether used in industry or in scientific applications, are limited either by closed-off software and planar motion, which restricts design freedom, or by the type of material or cost, which often limits attainability. Being capable of placing self-sensing material with full design freedom means that the sensor structure as well as the load-carrying part of the material can be tailored to the application-specific use of the material, making application-specific load-carrying and sensing capabilities possible. The manufacturing method produced in this thesis aims to solve these existing limitations. A literature review on the topic of additive manufacturing of self-sensing material and continuous Carbon Fiber Reinforced Thermoplastics (CFRTPs) has been produced as a literature base. The review seeks to educate and inspire the design of a novel additive manufacturing method and device capable of printing a self-sensing material as well as performing non-planar motion. A design for extruding self-sensing material with non-planar motion has been realized through modified Commercial-Off-The-Shelf (COTS) parts and Geometric Code (G-code). Existing hardware capable of producing this can be priced in the range of 70 000 €, but this result has been achieved with around 200 € [42]. A software structure capable of manufacturing the self-sensing material has been produced. Real-world testing in terms of extrusion of the self-sensing material and non-planar motion has been performed and proven, which are the main practical outcomes demonstrating technological feasibility.
Today the majority of embedded software is written in C or C++ using the thread paradigm. C and C++ are memory-unsafe programming languages that often appear in CVE (Common Vulnerabilities and Exposures) reports. Threads are a popular concurrency paradigm in SMP (Symmetric Multiprocessor) systems; however, threads can deadlock and are hard to statically analyze for schedulability. At the same time, security is becoming more and more important due to the exponential growth of IoT (Internet of Things) devices; meanwhile, vendors are starting to ship more and more heterogeneous multi-core devices where the thread paradigm cannot be applied. In this thesis, we present an alternative programming framework for building real-time, safety-critical, and general-purpose embedded software that is memory safe by construction and suitable for single-core, homogeneous multi-core, and heterogeneous multi-core systems.
The Industrial Internet of Things (IIoT) has the potential to improve production and business processes by enabling the extraction of valuable information from industrial processes. The mining industry, however, is rather traditional and somewhat slow to change due to infrastructural limitations in communication, data management, storage, and exchange of information. Most research efforts so far on applying IIoT in the mining industry focus on specific concerns such as ventilation monitoring, accident analysis, fleet and personnel management, tailing dam monitoring, and pre-alarm systems, while an overall IIoT architecture suitable for the general conditions in the mining industry is still missing. This article analyzes the current state of Information Technology in the mining sector and identifies a major challenge of vertical fragmentation due to the technological variety of systems and devices offered by different vendors, preventing interoperability, data distribution, and the secure exchange of information between devices and systems. Based on guidelines and practices from the major IIoT standards, a high-level IIoT architecture suitable for the mining industry is then synthesized and presented, addressing the identified challenges and enabling smart mines through automation, interoperable systems, data distribution, and real-time visibility of the mining status. The remote controlling, data processing, and interoperability techniques of the architecture benefit all stages of mining, from prospecting to reclamation. The adoption of such an IIoT architecture in the mining industry offers a safer mine site for workers, predictable mining operations, an interoperable environment for both traditional and modern systems and devices, automation to reduce human intervention, and underground surveillance by converging operational technology (OT) and information technology (IT).
Significant open research challenges and directions are also studied and identified in this paper, such as mobility management, scalability, virtualization at the IIoT edge, and digital twins.
Standard compliance in systems of systems (SoS) means complying with standards, laws, and regulations that apply to services from several sources and different levels. Compliance is a major challenge in many organizations because any violation can lead to financial penalties, lawsuits, fines, or revocation of licenses to operate within a specific industrial market. To support the business lifecycle, organizations also need to monitor the actual processes during run time and not only at design time. Standard compliance verification is important throughout the lifecycle for reasons such as detection of noncompliance as well as operational decisions about running processes. With the increasing connectivity of systems, existing and new security standards can be employed, but there are important aspects, such as technically measurable indicators in the standards and automation of compliance verification, that need to be addressed. This article presents an automated and continuous standard compliance verification framework used to check devices, systems, and services for standard compliance during secure onboarding and at run time. In addition, a case study for the Eclipse Arrowhead framework is used to demonstrate the functionality of the standard compliance verification in SoS.
The increased need for mobility has led to transportation problems like congestion, accidents, and pollution. In order to provide safe and efficient transport systems, great efforts are currently being put into developing Intelligent Transport Systems (ITS) and cooperative systems. In this paper we extend proposed solutions with autonomous on-road sensors and actuators forming a wireless Road Surface Network (RSN). We present the RSN architecture and design methodology and demonstrate its applicability to queue-end detection. For the use case we discuss the requirements and technological solutions for sensor technology, data processing, and communication. In particular, the MAC protocol is detailed and its performance assessed through theoretical verification. The RSN architecture is shown to offer a scalable solution, where increased node density offers more precise sensing as well as increased redundancy for safety-critical applications. The use case demonstrates that RSN solutions may be deployed as standalone systems and potentially integrated into current and future ITS. RSN may provide both easily deployable and cost-effective alternatives to traditional ITS (with a direct impact independent of the penetration rate of other ITS infrastructures, i.e., smart vehicles, safe spots, etc.) as well as provide fine-grained sensory information directly from the road surface to back-end and cooperative systems, thus enabling a wide range of ITS applications beyond the current state of the art.
We discuss a Nambu-Jona-Lasinio (NJL)-type quantum field theoretical approach to the quark matter equation of state with color superconductivity and construct hybrid star models on this basis. It has recently been demonstrated that with increasing baryon density, the different quark flavors may occur sequentially, starting with down-quarks only, before the second light quark flavor and at highest densities the strange quark flavor also appears. We find that color superconducting phases are favorable over non-superconducting ones, which entails consequences for thermodynamic and transport properties of hybrid star matter. In particular, for NJL-type models no strange quark matter phases can occur in compact star interiors due to mechanical instability against gravitational collapse, unless a sufficiently strong flavor mixing as provided by the Kobayashi-Maskawa-'t Hooft determinant interaction is present in the model. We discuss observational data on mass-radius relationships of compact stars which can put constraints on the properties of the dense matter equation of state.
Engineering and computer science have come up with a variety of techniques to increase confidence in systems, increase reliability, facilitate certification, improve reuse and maintainability, and improve interoperability and portability. Among them are various techniques based on formal models to enhance testing, validation, and verification. In this paper, we concentrate on formal verification of a system both at runtime and at design time. Formal verification of a system property at design time is the process of mathematically proving that the property indeed holds. At runtime, one can check the validity of the property and report deviations by monitoring the system execution. Formal verification relies on semantic models: descriptions of the system and its properties. We report on ongoing verification work and present two different approaches for formal verification of IEC 61499-based programs. We provide two examples of ongoing work to exemplify the design-time and the runtime verification approaches.
Umbrella project for all ESIS projects.
Machine vision is required by autonomous heavy construction equipment to navigate and interact with the environment. Wheel loaders need the ability to identify different objects and other equipment to perform the task of automatically loading and dumping material on dump trucks, which can be achieved using deep neural networks. Training such networks from scratch requires the iterative collection of potentially large amounts of video data, which is challenging at construction sites because of the complexity of safely operating heavy equipment in realistic environments. Transfer learning, by which pretrained neural networks can be retrained for use at construction sites, is thus attractive, especially if data can be acquired without full-scale experiments. We investigate the possibility of using scale-model data for training and validating two different pretrained networks and use real-world test data to examine their generalization capability. A dataset containing 268 images of a 1:16 scale model of a Volvo A60H dump truck is provided, as well as 64 test images of a full-size Volvo A25G dump truck. The code and dataset are publicly available. The networks, both pretrained on the MS-COCO dataset, were fine-tuned to the created dataset, and the results indicate that both networks can learn the features of the scale-model dump truck (validation mAP of 0.82 for YOLOv3 and 0.95 for RetinaNet). Both networks can transfer these learned features to detect objects on a full-size dump truck with no additional training (test mAP of 0.70 for YOLOv3 and 0.79 for RetinaNet).
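The mAP figures quoted above are ultimately built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The sketch below shows that underlying computation; the box coordinates are made-up examples, not values from the dataset.

```python
# Minimal sketch of the intersection-over-union (IoU) computation that
# underlies detection metrics such as mAP.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred  = (10, 10, 50, 50)   # hypothetical predicted box
truth = (20, 20, 60, 60)   # hypothetical ground-truth box
print(round(iou(pred, truth), 3))  # 0.391, below the common 0.5 match threshold
```

A prediction is typically counted as a true positive only when its IoU with a ground-truth box exceeds a threshold (often 0.5), and mAP then averages precision over recall levels and classes.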
Current challenges in production automation require the involvement of new technologies like the Internet of Things (IoT), systems of systems, and local automation clouds. The objective of this paper is to address one of the challenges involved in establishing and managing a cloud-based automation system. Three key capabilities have been identified as required to create the expected benefits of local automation clouds: 1) capturing of plant design; 2) capturing and distributing configuration and deployment information; and 3) coordinating information exchange.
This paper addresses the capturing and distribution of configuration and deployment information. For this purpose a system service, the ConfigurationStore, is proposed, following the principles of the Arrowhead Framework. The service is accompanied by a deployment methodology and a bootstrapping procedure. These are discussed for several types of automation technology, e.g., controllers, sensors, and actuators. A qualitative evaluation of the proposed approach is made for four use cases: building automation, manufacturing automation, process automation, and IoT devices, concluding on the usability of the approach for large-scale deployment and configuration in the Industrial Internet of Things.
The use of UWB for Industrial Internet of Things (IIoT) applications is motivated by the following four main factors: 1) scalability, due to the inherently short transmission times of the UWB radio; 2) bandwidth-consuming applications such as condition monitoring with vibration sensing; 3) applications with real-time locating system (RTLS) requirements; and 4) wireless communication in electromagnetically harsh environments with a high level of multipath fading. In this paper, we present a UWB-based 6LoWPAN implementation in the Contiki OS as a step towards incorporating UWB in the industrial IoT domain.
Industrial Internet of Things (IIoT) and Industry 4.0 rely heavily on data for purposes such as production follow-up, planning, and optimization. Industrial data come in large volumes from production logs and sensors, some of which carry business and strategic value, sensitive information, or a combination of both. Such data must be protected from unauthorized access, but must also be easy to access for authorized users to facilitate work to gain business and operational value from the data. The efficient creation and maintenance of access policies for secure data sharing is hence essential, but unfortunately also challenging in terms of the complexity and administrative effort required for fine-grained policies. Attribute-based access control (ABAC) models such as Next Generation Access Control (NGAC) provide efficient means for handling access policies. Existing access control models, however, fail to provide a simple and easy-to-maintain policy language capable of efficiently enforcing fine-grained access control policies for large volumes of time-series data. In this paper, we propose extensions to NGAC based on filter strings that facilitate efficient enforcement of row-level value and time constraint policies for time-series data. We evaluate two approaches for storing and retrieving these filter strings and provide a qualitative and quantitative discussion of the results.
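The idea of row-level value and time constraints can be sketched as follows: a compact filter string attached to an access decision is parsed into predicates and applied to each time-series row. The filter-string grammar, field names, and data below are invented for illustration and do not reflect the paper's actual syntax.

```python
# Hypothetical sketch of filter-string-driven row-level filtering for
# time-series data, in the spirit of an ABAC/NGAC-style extension.
import operator
import re
from datetime import datetime

rows = [
    {"ts": datetime(2023, 1, 1, 8),  "sensor": "s1", "value": 10.0},
    {"ts": datetime(2023, 1, 1, 9),  "sensor": "s2", "value": 55.0},
    {"ts": datetime(2023, 1, 1, 22), "sensor": "s1", "value": 12.0},
]

def parse_filter(s):
    """Parse e.g. 'hour>=6;hour<18;value<50' into (field, op, literal) triples."""
    ops = {">=": operator.ge, "<=": operator.le,
           ">": operator.gt, "<": operator.lt, "=": operator.eq}
    preds = []
    for part in s.split(";"):
        field, op, lit = re.match(r"(\w+)(>=|<=|>|<|=)(.+)", part).groups()
        preds.append((field, ops[op], float(lit)))
    return preds

def allowed(row, preds):
    """A row is visible only if it satisfies every predicate in the policy."""
    for field, op, lit in preds:
        val = row["ts"].hour if field == "hour" else row[field]
        if not op(val, lit):
            return False
    return True

# Policy: rows visible only during working hours and below a value limit.
policy = parse_filter("hour>=6;hour<18;value<50")
print([r["sensor"] for r in rows if allowed(r, policy)])  # ['s1']
```

Keeping the constraints in a string keeps the policy compact and easy to store alongside an access decision, at the cost of evaluating the predicates per row at retrieval time; this is the trade-off the two storage/retrieval approaches in the paper would weigh.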
The IMC-AESOP architecture has been used to implement a smart house demonstration. Six different systems have been integrated using local (802.11, 802.15.4) and global (telecom) communication. The six integrated systems are: a car arrival detection system, a garage door opening system, a house security system, an external house lighting system, an external electrical outlet system, and a house energy control system. The SOA technologies used are CoAP and EXI, using SenML to encode the services. Engineering tools have been used to simulate the usage scenario and provide predictions of system behaviour.
There is currently a rapid development of new types of wireless communication channels for industrial automation. This paper aims to provide some experimental data and theoretical justification on packet latency and packet loss for a wireless communication channel exposed to intentional radio interference. The intentional radio interference used in the experiments is an attempt to simulate possible future co-existence scenarios in a dense wireless communication environment at an industrial site. For the cases tested, packet losses of less than 10% were obtained. Latency is shown to depend on channel access and has a deterministic behaviour.
Various forms of cloud computing principles and technologies have recently become important. This paper addresses cloud computing for automation and control applications. It is argued that the open Internet cloud concept has limitations that make it inappropriate for automation.
Since automation is physically and geographically local, it is natural to introduce the concept of local automation clouds. It is here proposed that local automation clouds should be self-contained and able to execute the intended automation functionalities without any external resources, thus providing a fence at the rim of the local cloud preventing any inbound or outbound communication. Such a local cloud provides possibilities to address key requirements of both today's and future automation solutions. Adding mechanisms for secure inter-cloud administration and data transfer enables local automation clouds to meet IoT automation system requirements such as: 1) interoperability of a wide range of IoT and legacy devices; 2) automation requirements on latency guarantees/prediction for communication and control computations; 3) scalability of automation systems, enabling very large integrated automation systems; 4) security and related safety of automation systems; 5) ease of application engineering; and 6) multi-stakeholder integration and operational agility.
How these requirements can be met in such a local automation cloud is discussed with references to proposed solutions. The local automation cloud concept is further verified for a compartment climate control application. The control application included an IoT controller, four IoT sensors and actuators, and a physical-layer communication gateway. The gateway acted as host for the local cloud core functionalities. The climate control application has been successfully implemented using the open-source Arrowhead Framework and its support for the design and implementation of self-contained local automation clouds.
The success of the ongoing fourth industrial revolution largely depends on our ways to cope with the novel design challenges arising from a combination of an enormous increase in process and product complexity, as well as the expected autonomy and self-organization of complex and diverse industrial hardware-software installments, often called systems-of-systems. In this paper, we employ the service-oriented architectural paradigm, as materialized in the Eclipse Arrowhead framework, to represent modern systems engineering principles and their open structural principles and, thus, relevance to flexible and adaptive systems. As for adequately capturing the structural aspect, we propose using model-based engineering techniques and, in particular, a SysML-based specialization of systems modeling. The approach is illustrated by a real-life use-case in industrial automation.
Technology for the real-time measurement of mechanical data from a javelin throw has been developed. The javelin is instrumented with an inertial measurement unit (IMU) measuring acceleration, angular velocity, and direction relative to the Earth's magnetic field, all in three dimensions, i.e., nine parameters in total. The IMU is built into the javelin while still maintaining the javelin's properties and keeping it within the IAAF specifications. The instrumentation is built using the EIS architecture, thus incorporating TCP/IP support including an Internet server. The wireless communication technology chosen is Bluetooth, which connects to the Internet through either a Bluetooth-enabled mobile phone or a stationary Bluetooth access point.
We envision ambient intelligent environments with an infrastructure based on heterogeneous sensor and actuator devices accessible over the Internet. Initial steps to realize this concept have been taken by developing an Embedded Internet System (EIS) architecture for Internet protocol enabled devices. In many cases these devices will be in close proximity to a person; such applications are found in, for example, sport and wellness. The mobile connection of such devices to the global Internet in a simple and cheap way is of particular interest. It is here proposed that such connections make use of the existing and widespread mobile phone networks. For several years now, most new mobile phones have been equipped with Bluetooth technology, making a mobile phone capable of connecting to 7 other Bluetooth devices. Thus, by giving EIS devices a Bluetooth communication channel, it becomes possible to tunnel the EIS sensor communication through a mobile phone near the sensor. The proposed architecture is described with a discussion of limitations due to existing infrastructures and business models in the telecom networks.
This paper is a review of the fascinating development of sensors and the communication of sensor data. A brief historical introduction is given, followed by a discussion on architectures for sensor networks. Further, realistic specifications on sensor devices suitable for ambient intelligence and ubiquitous computing are given. Based on these specifications, the status and current frontline development are discussed. In total, it is shown that future technology for ambient intelligence based on sensor and actuator devices using standardized Internet communication is within the range of possibilities within five years.
As space missions continue to increase in complexity, the operational capabilities and amount of gathered data demand ever more advanced systems. Currently, mission capabilities are often constrained by the link bandwidth as well as onboard processing capabilities. A large number of commands and complex ground station systems are required to allow spacecraft operations. Thus, methods that allow more efficient use of the bandwidth and computing capacity, as well as increased autonomous capabilities, are of strong research interest. Artificial Intelligence (AI), with its vast areas of application scenarios, allows these challenges and more to be tackled in spacecraft design. In particular, the flexibility of Artificial Neural Networks as a Machine Learning technology provides many possibilities. For example, Artificial Neural Networks can be used for object detection and classification tasks. Unfortunately, the execution of current Machine Learning algorithms consumes a large amount of power and memory resources. Additionally, the qualification of such algorithms remains challenging, which limits their possible applications in space systems. Thus, an increase in efficiency in all aspects is required to further enable these technologies for space applications. The optimisation of the algorithm for System-on-Chip (SoC) platforms allows it to benefit from the best of a generic processor and hardware acceleration. This increased complexity of the processing system shall allow broader and more flexible applications of these technologies with a minimum increase in power consumption. As commercial off-the-shelf embedded systems are commonly used in NewSpace applications and such SoCs are not yet available in a qualified manner, the deployment of Machine Learning algorithms on such devices has been evaluated. For deployment of machine learning on such devices, a Convolutional Neural Network model was optimised on a workstation.
Then, the neural network is deployed with Xilinx's Vitis AI onto a SoC which includes a powerful generic processor as well as the hardware programming capabilities of a Field Programmable Gate Array (FPGA). This result was evaluated based on relevant performance and efficiency parameters, and a summary is given in this thesis. Additionally, a tool utilising a different approach was developed. With a high-level synthesis tool, the hardware description language of an accelerated-linear-algebra-optimised network is created and directly deployed into FPGA logic. The implementation of this tool was started, and the proof of concept is presented. Furthermore, existing challenges with the auto-generated code are outlined and future steps to automate and improve the entire workflow are presented. As the two workflows are very different and thus aim at different usage scenarios, both are outlined and their respective benefits and disadvantages are discussed.
As spacecraft missions continue to increase in complexity, the system operation and amount of gathered data demand more complex systems than ever before. Currently, mission capabilities are constrained by the link bandwidth as well as on-board processing capacity, depending on a high number of commands and complex ground station systems to allow spacecraft operations. Thus, efficient use of the bandwidth, computing capacity, and increased autonomous capabilities are of utmost importance. Artificial intelligence, with its vast areas of application scenarios, allows these challenges and more to be tackled in spacecraft design. In particular, the flexibility of neural networks as a machine learning technology provides many possibilities. For example, neural networks can be used for object detection and classification tasks. Unfortunately, the execution of current machine learning algorithms consumes a large amount of power and memory resources, and qualified deployment remains challenging, which limits their possible applications in space systems. Thus, an increase in efficiency is a major enabling factor for these technologies. The optimisation of the algorithm for System-on-Chip platforms allows it to benefit from the best of a generic processor and hardware acceleration, which shall allow broader applications of these technologies with a minimum increase in power consumption. Additionally, COTS embedded systems are commonly used in NewSpace applications due to the possibility of adding external or software fault mitigation. For deployment of machine learning on such devices, a CNN model was optimised on a workstation. Then, the neural network is deployed with Xilinx's Vitis AI onto different embedded systems that include a powerful generic processor as well as the hardware programming capabilities of an FPGA. This result was evaluated based on relevant performance and efficiency parameters, and a summary is given in this paper.
Additionally, a different approach was developed which creates, with a high-level synthesis tool, the hardware description of an accelerated linear algebra optimized network. The implementation of this tool was started, and the proof of concept is presented. Furthermore, existing challenges with the auto-generated code are outlined and future steps to automate and improve the entire workflow are presented. This paper aims to contribute to increasing the efficiency and applicability of artificial intelligence in space. Specifically, the performance of machine learning algorithms is evaluated on FPGAs, which are commonly used for the execution of complex algorithms in space.
This chapter presents a static diagnosis tool that locates type errors in untyped CLP programs without executing them. The existing prototype is specialised for the programming language CHIP [4.10], but the idea applies to any CLP language. The tool works with approximated specifications which describe types of procedure calls and successes. The specifications are expressed as a certain kind of term grammars. The tool automatically locates at compile time all the errors (with respect to a given specification) in a program. The located erroneous program fragments are (prefixes of) clauses. The tool aids the user in constructing specifications incrementally; often a fragment of the specification is already sufficient to locate an error. The presentation is informal. The focus is on the motivation of this work and on the functionality of the tool. Some related formal aspects are discussed in [4.15, 4.29]. The prototype tool is available from http://www.ida.liu.se/~pawpi/Diagnoser/diagnoser.html.
The paper presents a diagnosis tool for CLP programs. It deals with partial correctness w.r.t. specifications which describe procedure calls and successes. The space of possible specifications is restricted to a kind of regular types; we propose a generalization of the concept of types used in so-called descriptive typing of logic programs. In particular, we distinguish ground types from those containing non-ground elements. The tool is able to automatically locate at compile time all errors in a program, that is, all the clauses or clause prefixes responsible for the program being incorrect w.r.t. a given specification. The tool aids the user in constructing specifications incrementally; often a fragment of the specification is already sufficient to locate an error. Our prototype is specialized for the programming language CHIP, but the idea is applicable to any untyped CLP (and LP) language. We believe that the presented approach makes it possible to combine the advantages of typed and untyped programming languages.
This paper introduces a framework of parametric descriptive directional types for Constraint Logic Programming (CLP). It proposes a method for locating type errors in CLP programs, and presents a prototype debugging tool. The main technique used is checking correctness of programs w.r.t. type specifications. The approach is based on a generalization of known methods for proving the correctness of logic programs to the case of parametric specifications. Set constraint techniques are used for formulating and checking verification conditions for (parametric) polymorphic type specifications. The specifications are expressed in a parametric extension of the formalism of term grammars. The soundness of the method is proved, and the prototype debugging tool supporting the proposed approach is illustrated on examples. The paper is a substantial extension of the previous work by the same authors concerning monomorphic directional types.
This paper proposes a tool to support reasoning about (partial) correctness of constraint logic programs. The tool infers a specification that approximates the semantics of a given program. The semantics of interest is an operational "call-success" semantics. The main intended application is program debugging. We consider a restricted class of specifications, which are regular types of constrained atoms. Our type inference approach is based on bottom-up abstract interpretation, which is used to approximate the declarative semantics (c-semantics). By using "magic transformations" we can describe the call-success semantics of a program by the declarative semantics of another program. We are focused on CLP over finite domains. Our prototype program analyzer works for the programming language CHIP.
In modern design flows low-power aspects should be considered as early as possible to minimize power dissipation in the resulting circuit. A new binary decision diagram-based design style that considers switching activity optimization using temporal correlation information is presented. The technique is based on an approximation method for switching activity estimation. In the case of finite state machines, the presented method extracts signal statistics by means of Markov chain analyses. Experimental results on a set of MCNC and ISCAS89 benchmarks show the estimated reduction in power dissipation.
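The role of temporal correlation in such estimates can be illustrated with a minimal sketch: a single signal modelled as a two-state Markov chain, the kind of statistics the method extracts. This is an illustrative example only, not the paper's actual estimator:

```c
/* Hypothetical sketch: switching activity of one signal modelled as a
 * two-state Markov chain. p01 = P(next = 1 | current = 0),
 * p10 = P(next = 0 | current = 1). */
double switching_activity(double p01, double p10)
{
    double p1 = p01 / (p01 + p10);   /* stationary probability of '1' */
    double p0 = 1.0 - p1;
    /* A toggle occurs on a 0->1 or a 1->0 transition. */
    return p0 * p01 + p1 * p10;
}
```

For a temporally uncorrelated signal with p01 = p10 = 0.5, this reduces to the familiar 2p(1-p) = 0.5 toggles per cycle, while strong temporal correlation (long runs of equal values, e.g. p01 = p10 = 0.1) lowers the activity to 0.1, which is precisely the effect that correlation-aware optimisation exploits.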
During black start of production plants and operation of weak grids or islanded grids, frequency converters driving pumps and fans are critical components. If the frequency converters are affected by disturbances in the grid, power generation can be disconnected and the weak grid or islanded operation can collapse. The project will study frequency converters from a number of aspects, such as design, control and implementation, with the aim of developing more robust frequency converters and implementations thereof, in order to secure operation of weak and islanded grids and to minimise further operational disturbances under severe stress on the power grid.
This thesis treats the development of a control system for an inverse soil conditioner prototype. A simulation model was created to develop the control system, with the purpose of validating and verifying the prototype's efficacy. The simulation model is created in Simulink, where a part of the soil conditioner is imported as a solid model, which is then coupled to a model of a hydraulic system. In the simulation, a control system and regulator were implemented and tuned. When the software was test-ready, the hardware interface was tested to validate that the current software could receive inputs and send meaningful outputs, and then real movements were logged to validate the software function on the machine.
The results of this project can be summarized as a completed simulation model, a control system, and a solid basis for real-world verification.
TCP/IP has recently taken promising steps toward being a viable communication architecture for networked sensor nodes. Furthermore, the use of Bluetooth can enable a wide range of new applications, and in this article, an overview of the performance and characteristics of a networked sensor node based on TCP/IP and Bluetooth is presented. The number of Bluetooth-enabled consumer devices on the market is increasing, which gives Bluetooth an advantage compared to other radio technologies from an interoperability point of view. However, this excellent ability to communicate introduces disadvantages since neither TCP/IP nor Bluetooth were designed with resource-constrained sensor nodes in mind. We, however, argue that the constraints imposed by general purpose protocols and technologies can be greatly reduced by exploiting characteristics of the communication scheme in use and efficient and extensive use of available low-power modes. Furthermore, we claim that a Bluetooth-enabled networked sensor node can achieve an operating lifetime in the range of years using a total volume of less than 10 cm3. The Mulle Embedded Internet System (EIS), along with its advanced power management architecture, is presented as a case-study to support the claims.
Wireless sensor nodes are a versatile, general-purpose technology capable of measuring, monitoring and controlling their environment. Even though sensor nodes are becoming ever smaller and more power efficient, there is one area that is not yet fully addressed: Power Supply Units (PSUs). Standard solutions that are efficient enough for electronic devices with higher power consumption than sensor nodes, such as mobile phones or PDAs, may prove to be ill-suited for the extreme low-power and size requirements often found on wireless sensor nodes. In this paper, a system-level design of a Power Management Architecture (PMA) is presented. The PMA is an integration of PSU hardware and various software components, and is capable of supplying a sensor node with energy from multiple sources, as well as providing status information from the PSU. The heart of the architecture is a context- and power-aware Task manager, which controls when the node's low-power modes are activated, and is highly integrated with the PSU hardware as well as other software components in the system. Its main responsibility is to schedule when energy-consuming tasks can be dispatched. Depending on the task priority and system configuration, a task can be either dispatched, discarded or delayed. This approach ensures that only critical tasks will be allowed to use the battery, and that the system will be powered by renewable energy when performing other, non-critical tasks.
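The dispatch/discard/delay decision described above can be sketched as follows. This is a minimal illustration only; the type names, the `schedule` function and the battery threshold are assumptions for the sketch, not the actual Mulle PMA interface:

```c
#include <stdbool.h>

/* Hypothetical sketch of a context- and power-aware dispatch decision. */
typedef enum { DISPATCH, DELAY, DISCARD } decision_t;

typedef struct {
    int  priority;   /* higher value = more important          */
    bool critical;   /* critical tasks may draw on the battery */
} task_t;

/* Decide a task's fate from the current energy context. */
decision_t schedule(const task_t *t,
                    bool renewable_available,
                    int battery_percent)
{
    if (renewable_available)
        return DISPATCH;   /* harvested energy covers the task        */
    if (t->critical)
        return DISPATCH;   /* only critical tasks may use the battery */
    if (battery_percent > 50)
        return DELAY;      /* retry later, when renewables return     */
    return DISCARD;        /* preserve the remaining charge           */
}
```

Under this policy, non-critical work is only executed on renewable energy, matching the behaviour the abstract describes.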
Bluetooth-equipped wireless sensor nodes can be quickly integrated into small home networks. These networks can be utilized, e.g., for surveillance, home monitoring and automation. Accurate time is an important factor for time-stamping of sensor data and for encryption/authentication, and it can also be used to implement time-synchronous schemes for low-power radio communication. We argue that IP-based time synchronization, such as various flavors of the NTP protocol, can be used with Bluetooth networks. This, in combination with an activation schedule, allows an efficient trade-off between energy consumption and communication delay, and provides easy integration with available infrastructure. The approach proposed in this paper is well suited for smaller wireless home networks, typically single-hop networks with access points that are always available. Our approach is verified by experiments performed on a COTS-based platform using Bluetooth.
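At the core of any NTP-flavoured synchronization is the standard four-timestamp offset and round-trip delay computation, which such a node would run after each exchange. A minimal sketch, assuming microsecond timestamps (the names are illustrative, not from the paper):

```c
/* t1: client transmit, t2: server receive,
 * t3: server transmit, t4: client receive (all in microseconds). */
typedef long long usec_t;

/* Estimated clock offset of the client relative to the server. */
usec_t ntp_offset(usec_t t1, usec_t t2, usec_t t3, usec_t t4)
{
    return ((t2 - t1) + (t3 - t4)) / 2;
}

/* Round-trip network delay, excluding the server's processing time. */
usec_t ntp_delay(usec_t t1, usec_t t2, usec_t t3, usec_t t4)
{
    return (t4 - t1) - (t3 - t2);
}
```

For a client running 100 µs behind the server over a symmetric 10 µs path each way, the exchange (t1=0, t2=110, t3=120, t4=30) yields an offset of 100 µs and a delay of 20 µs.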
The AutoSAR specification provides a common development standard for automotive software. The functionality was initially aimed at single-core processors, and as the paradigm shifted to multiple cores in the automotive industry, new performance challenges arose. Performance is the main reason for the shift, yet the industry is competing to overcome obstacles stemming from peripherals and memory sharing. Many optimization algorithms for single-core processors do not apply to multi-core platforms, and this thesis presents a solution that improves predictability in the multi-core AutoSAR model, including the communication time between software components. Investigated here are the scheduling policy and the priority inversion caused by scheduling inter-communicating tasks on a multi-core processor. Several problems with resource sharing and mutual locks, used in concurrent execution of the AutoSAR tasks, are explored.
Due to the scale of these models, a tool has additionally been developed to assist with the integration of solutions into the final production workflow. The tool makes use of the Artop framework, based on the Eclipse technologies, and includes the necessary functions to handle the AutoSAR model. This thesis begins with a detailed introduction to the AutoSAR software development tools used in the industry and their adaptation to multi-core processors in automotive applications.
Embedded systems are often operating under hard real-time constraints. Such systems are naturally described as time-bound reactions to external events, a point of view made manifest in the high-level programming and systems modeling language Timber. In this licentiate thesis we demonstrate how the Timber semantics for parallel reactive objects translates to embedded real-time programming in C. This is accomplished through the use of a minimalistic Timber run-time system, TinyTimber. The TinyTimber kernel ensures state integrity, and performs scheduling of events based on given time-bounds in compliance with the Timber semantics. In this way, we avoid the error-prone task of explicitly coding parallelism in terms of traditional processes/threads/semaphores/monitors, and side-step the delicate task of encoding time-bounds into process/thread priorities. Moreover, a simulation environment is developed that enables the behaviour of a heterogeneous distributed system, consisting of both the hardware and the Timber-based embedded software, to be observed under a model of the environment. Furthermore, pedagogic issues of reactive objects have been studied in the context of higher education. First results indicate that the use of TinyTimber gives students an increased ability to understand and solve embedded programming assignments. Finally, the TinyTimber kernel implementation is discussed. Performance metrics are given for a number of representative platforms, showing the applicability of TinyTimber to small embedded systems. A comparison to a traditional system-tick-driven, thread-based real-time kernel shows that TinyTimber provides tighter timing and a simpler (yet comprehensive) API. In conclusion, we find that the use of reactive objects in C, realized through TinyTimber, is a viable alternative for embedded real-time programming.
Model- and component-based design is an established means for the development of large software systems, and is starting to gain momentum in the realm of embedded software development. In the case of safety-critical (dependable) systems it is crucial that the underlying model and its realization capture the requirements on the timely behavior of the system, and that these requirements can be preserved and validated throughout the design process (from specification to actual code execution). To this end, we base the presented work on the notion of Concurrent Reactive Objects (CRO) and their abstraction into Reactive Components. In many cases, the execution platform puts firm resource limitations on available memory and speed of computations that must be taken into consideration for the validation of the system. In this paper, we focus on code synthesis from the model, and we show how specified timing requirements are preserved and translated into scheduling information. In particular, we present how ceiling levels for Stack Resource Policy (SRP) scheduling and analysis can be extracted from the model. Additionally, to support schedulability analysis, we detail algorithms that derive, for a CRO model, periods (minimum inter-arrival times) and offsets of tasks/jobs. Moreover, the design of a micro-kernel supporting cooperative hardware- and software-scheduling of CRO-based systems under Deadline Monotonic SRP is presented.
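The ceiling extraction mentioned above follows the standard SRP rule: each shared object (resource) is assigned a ceiling equal to the highest priority among the tasks that may access it. A minimal sketch, where the access matrix encoding is an assumption made for illustration, not the actual model representation:

```c
/* Hypothetical sketch of SRP ceiling extraction from an access matrix. */
#define NTASKS 3
#define NRES   2

/* usage[t][r] != 0 iff task t may access resource r.
 * prio[t] is task t's priority (higher value = more urgent). */
int ceiling(const int usage[NTASKS][NRES], const int prio[NTASKS], int r)
{
    int c = 0;
    for (int t = 0; t < NTASKS; t++)
        if (usage[t][r] && prio[t] > c)
            c = prio[t];    /* ceiling = max priority over users of r */
    return c;
}
```

At run time, a task may only preempt when its priority exceeds the current system ceiling (the maximum ceiling of all locked resources), which is what bounds blocking to a single critical section under SRP.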
Lightweight Real-Time Operating Systems have gained widespread use in implementing embedded software on lightweight nodes. However, bare metal solutions are chosen, e.g., when the reactive (interrupt-driven) paradigm better matches the programmer’s intent, when the OS features are not needed, or when the OS overhead is deemed too large. Moreover, other approaches are used when real-time guarantees are required. Establishing real-time and resource guarantees typically requires expert knowledge in the field, as no turn-key solutions are available to the masses. In this paper we set out to bridge the gap between bare metal solutions and traditional Real-Time OS paradigms. Our goal is to meet the intuition of the programmer and at the same time provide a resource-efficient (w.r.t. CPU and memory) implementation with established properties, such as bounded memory usage and guaranteed response times. We outline a roadmap for Real-Time For the Masses (RTFM) and report on the first step: an intuitive, platform-independent programming API backed by an efficient Stack Resource Policy-based scheduler and a tool for kernel configuration and basic resource and timing analysis.
In this paper, we present a comprehensive approach to the design of embedded real-time software for electrically controlled mechanical systems in automotive applications. As a case study, we implement a gear change and clutch controller for a Formula SAE car. This includes a generic communication interface and protocol for CAN bus communication, I/O interfaces for A/D conversion and PWM output, together with a PID controller for clutch actuation. Under our framework, the embedded software is developed using Timber, a programming language and formalism that provides executable models for embedded real-time systems. The case study shows how a complete control system can be straightforwardly modeled, simulated and transformed into executable code. The system has been realized and tested on a lightweight, 8-bit AVR-5 embedded platform. Compared to a raw C code design flow, the proposed framework has in our case study shown increased efficiency with respect to development time. We boldly conclude that our Timber-based framework offers true "work with the work".
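A PID controller such as the one used for clutch actuation follows the textbook discrete form. A minimal sketch in C; the gains, sample period and names are illustrative assumptions, not values from the case study:

```c
/* Hypothetical sketch of a discrete PID step for position control. */
typedef struct {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double dt;           /* sample period [s]                        */
    double integral;     /* accumulated error                        */
    double prev_err;     /* error at the previous sample             */
} pid_state_t;

/* One control step: returns the actuator command for this sample. */
double pid_step(pid_state_t *c, double setpoint, double measured)
{
    double err = setpoint - measured;
    c->integral += err * c->dt;                    /* rectangle rule   */
    double deriv = (err - c->prev_err) / c->dt;    /* backward diff.   */
    c->prev_err = err;
    return c->kp * err + c->ki * c->integral + c->kd * deriv;
}
```

In a reactive-object setting, such a step would run as a periodic time-bound reaction, reading the clutch position via the A/D interface and writing the command via PWM.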
In this paper, a distributed system for engine management is presented. The system is in use on the 2005 and 2006 Formula SAE cars from Luleå University of Technology. The purpose of building such a system from scratch is to have a comprehensive, predictable and easily extendable platform, giving the possibility to add extra features even at the racetrack. This allows the system to serve as a research platform for embedded real-time systems and vehicle dynamics. Another motivation is to keep the weight of the complete system low, and to integrate the electronics in such a way that the total cabling required is minimal. The initial requirements are that the system should implement launch control, traction control, electric gear shift and clutch control. To control the engine, the system must implement sequential fuel injection, direct-fire ignition and closed-loop lambda control. Moreover, to remotely tune and monitor the system parameters in real time, even on the racetrack, the system should facilitate wireless communication. To achieve these goals, a system consisting of five units communicating over a standard automotive bus (CAN) was developed. In this paper we describe the system's functionality and the units developed.
A major challenge for the automotive industry is to reduce the development time while meeting quality assessments for their products. This calls for new design methodologies and tools that scale with the increasing amount and complexity of embedded systems in today's vehicles. In this paper we undertake an approach to embedded software design based on executable models expressed in the high-level modelling paradigm of Timber. We extend previous work on Timber with a multi-paradigm design environment, aiming to bridge the gap between engineering disciplines by multi-body co-simulation of vehicle dynamics, embedded electronics, and embedded executable models. Its feasibility is demonstrated on a case study of a typical automotive application (traction control), and its potential advantages are discussed, as highlighted below:
- shorter time to market through concurrent, co-operative distributed engineering;
- reduced cost through adequate system design and dimensioning;
- improved efficiency of the design process through migration and reuse of executable software components;
- reduced need for hardware testing, by specification verification on the executable model early in the design process; and
- improved quality, by opening up for formal methods for verification.