This paper addresses the problem of automatic detection of burn-through in weld joints. Gas metal arc (GMA) welding with pulsed current is used, and welding voltage and current are recorded. As short circuits between the welding electrode and the work piece are common during burn-through, a short-circuit detector is developed to detect these events. To capture another characteristic of burn-through, this detector is combined with a square-law detector. This second detector is based on a non-linear modification of an autoregressive model with exogenous input (ARX model) of the welding process. The results obtained from this compound detector indicate that it is possible to detect burn-through in welds automatically. The work also indicates that it is possible to design an on-line monitoring system for robotic GMA welding.
Handwritten signatures remain a widely used method for personal authentication in various official documents, including bank checks and legal papers. The verification process is often labor-intensive and time-consuming, necessitating the development of efficient methods. This study evaluates the performance of machine learning models in handwritten signature verification using the ICDAR 2011 Signature and CEDAR datasets. The investigation involves preprocessing, feature extraction using CNN architectures, and optimization techniques. The most effective models undergo a rigorous evaluation process, followed by classification using supervised ML algorithms such as linear SVM, random forest, logistic regression, and polynomial SVM. The results indicate that the VGG16 architecture, optimized with the Adam optimizer, achieves satisfactory performance. This study demonstrates the potential of ML methodologies to enhance the efficiency and accuracy of signature verification, offering a robust solution for document authentication.
We study the possibility of using convolutional neural networks for wavefront sensing from a guide star image in astronomical telescopes. We generated a large number of artificial atmospheric wavefront screens and determined associated best-fit Zernike polynomials. We also generated in-focus and out-of-focus point-spread functions. We trained the well-known “Inception” network using the artificial data sets and found that although the accuracy does not permit diffraction-limited correction, the potential improvement in the residual phase error is promising for a telescope in the 2–4 m class.
This thesis demonstrates a technique for developing efficient applications that interpret spatial deep learning output using hyperdimensional computing (HDC), also known as Vector Symbolic Architecture (VSA). As part of the application demonstration, a novel preprocessing technique for motion using state machines and spatial semantic pointers is explained. The application is evaluated and run on a Google Coral edge TPU, interpreting real-time inference of a compressed object detection model.
Size measurement of pellets in industry is usually performed by manual sampling and sieving techniques. Automatic on-line analysis of pellet size based on image analysis techniques would allow non-invasive, frequent and consistent measurement. We evaluate the statistical significance of the ability of commonly used size and shape measurement methods to discriminate among different sieve-size classes using multivariate techniques. Literature review indicates that earlier works did not perform this analysis and selected a sizing method without evaluating its statistical significance. Backward elimination and forward selection of features are used to select two feature sets that are statistically significant for discriminating among different sieve-size classes of pellets. The diameter of a circle of equivalent area is shown to be the most effective feature based on the forward selection strategy, but an unexpected five-feature classifier is the result of the backward elimination strategy. The discrepancy between the two selected feature sets can be explained by how the selection procedures calculate a feature's significance, and by a property of the 3D data that introduces an orientational bias favouring combinations of Feret-box measurements. Size estimates of the surface of a pellet pile using the two feature sets show that the estimated sieve-size distribution follows the known sieve-size distribution.
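The forward selection strategy mentioned above can be illustrated with a toy greedy procedure; this is a minimal sketch, not the paper's exact method, and the feature names and scores below are assumed stand-ins for cross-validated classification accuracy.

```python
# Toy greedy forward selection: repeatedly add the feature that most
# improves a scoring function until no remaining feature helps.
def forward_select(features, score):
    selected = []
    best = score(selected)
    improved = True
    while improved:
        improved = False
        for f in set(features) - set(selected):
            s = score(selected + [f])
            if s > best:
                best, choice, improved = s, f, True
        if improved:
            selected.append(choice)
    return selected

# assumed scores: the equivalent-area diameter alone scores best,
# and adding Feret-box features does not improve on it
scores = {(): 0.5, ("eq_area_diameter",): 0.9}
sel = forward_select(["eq_area_diameter", "feret_x", "feret_y"],
                     lambda s: scores.get(tuple(sorted(s)), 0.6))
```

With these assumed scores the procedure stops after one feature, mirroring how forward selection can favour a single strong feature while backward elimination may retain a larger set.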
Size measurement of rocks is usually performed by manual sampling and sieving techniques. Automatic on-line analysis of rock size based on image analysis techniques would allow non-invasive, frequent and consistent measurement. In practical measurement systems based on image analysis techniques, the surface of rock piles will be sampled and therefore contain overlapping rock fragments. It is critical to identify partially visible rock fragments for accurate size measurements. In this research, statistical classification methods are used to discriminate rocks on the surface of a pile between entirely visible and partially visible rocks. The feature visibility ratio is combined with commonly used 2D shape features to evaluate whether 2D shape features can improve classification accuracies to minimize overlapped particle error.
The size distribution as a function of weight of particles is an important measure of product quality in the mining and aggregates industries. When using manual sampling and sieving, the weight of particles is readily available. However, when using a machine vision system, the particle size distributions are determined as a function of the number of particles. In this paper we first show that there can be a significant weight-transformation error when transforming from one type of size distribution to another. We also show how the problem can be overcome by training a classifier and scaling the results according to calibrated average weights of rocks. The performance of the algorithm is demonstrated with results of measurements of limestone particles on conveyor belts.
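The count-to-weight transformation described above can be sketched as follows; the sieve-size classes, counts, and calibrated average weights are assumed for illustration only, not taken from the paper.

```python
# Convert a number-based size distribution to a weight-based one by
# scaling each class count with a calibrated average particle weight.
count_per_class = {"8-10mm": 500, "10-12.5mm": 300, "12.5-16mm": 120}
avg_weight_g = {"8-10mm": 1.9, "10-12.5mm": 3.6, "12.5-16mm": 7.1}  # assumed calibration

weight_per_class = {c: n * avg_weight_g[c] for c, n in count_per_class.items()}
total_weight = sum(weight_per_class.values())
weight_fraction = {c: w / total_weight for c, w in weight_per_class.items()}
```

In this toy example the smallest class holds over half the particles by count but only about a third of the weight, illustrating the weight-transformation error that arises when one distribution type is read as the other.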
Evaluation of spherical fitting as a technique for sizing iron ore pellets is performed. Size measurement of pellets in industry is usually performed by manual sampling and sieving techniques. Automatic on-line analysis of pellet size would allow non-invasive, frequent and consistent measurement. Previous work has used an assumption that pellets are spherical to estimate pellet sizes. In this research we use a 3D laser camera system in a laboratory environment to capture 3D surface data of pellets and steel balls. Validation of the 3D data against a spherical model has been performed and demonstrates that pellets are not spherical and have physical structures that a spherical model cannot capture.
Size measurement of pellets in industry is usually performed by manual sampling and sieving techniques. Automatic on-line analysis of pellet size based on image analysis techniques would allow non-invasive, frequent and consistent measurement. We make a distinction between entirely visible and partially visible pellets. This is a significant distinction as the size of partially visible pellets cannot be correctly estimated with existing size measures and would bias any size estimate. Literature review indicates that other image analysis techniques fail to make this distinction. Statistical classification methods are used to discriminate pellets on the surface of a pile between entirely visible and partially visible pellets. Size estimates of the surface of a pellet pile show that the overlapped particle error is overcome by only estimating the surface size distribution with entirely visible pellets.
The use of ultrasound to measure preload in screws and bolts has been studied extensively over the last decades. The technique is based on establishing a relationship between preload and the change in time of flight (TOF) of an ultrasonic pulse propagating back and forth through a screw. This technique has considerable advantages compared to other methods, such as torque and angle tightening, mainly because of its independence of friction. This is of great interest for Atlas Copco, since it increases the accuracy and precision of their assembly tools.
The purpose of this thesis was to investigate ultrasonic wave propagation in pre-stressed screws using simulation software, ANSYS, and to analyse the results using signal processing. The simulations were conducted to gain an understanding of the wavefront distortion effects that arise. Further, an impulse response of the system was estimated with the purpose of separating the multiple echoes that arise from secondary propagation paths from one another.
The results strengthen the hypothesis that the received echoes are superpositions of reflections taking different propagation paths through the screw. An analytical estimation of the wavefront curvature also shows that the wavefront distortion due to a higher stress near the screw boundaries can be neglected. Additionally, a compressed sensing technique has been used to estimate the impulse response of the screw. The estimated impulse response models the echoes as superpositions of secondary echoes, with significant taps corresponding to the TOF of the shortest path and a mode-converted echo. The method is also shown to be stable in noisy environments.
The simulation model yields a slower speed of sound than expected, most likely because finite element analysis generally overestimates the stiffness of the model.
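The preload-TOF relationship underlying this line of work can be sketched with a simple linear calibration model; the calibration constant and times below are hypothetical values chosen for illustration, not measurements from the thesis.

```python
# Minimal sketch, assuming a linear relation between preload F and the
# change in time of flight: dTOF = K * F, with K a per-screw calibration
# constant (here assumed: 0.85 ns of extra delay per kN of preload).
K = 0.85e-9 / 1000.0  # seconds of delay per newton (assumed)

def preload_from_tof(tof_loaded, tof_unloaded, K=K):
    """Estimate preload (N) from the TOF shift of the round-trip echo."""
    return (tof_loaded - tof_unloaded) / K

# a 1.7 ns TOF shift on a ~50 us round trip corresponds to 2 kN here
F = preload_from_tof(50.0017e-6, 50.0e-6)
```

The friction independence noted in the abstract follows from this model: the TOF shift depends on the stress state of the screw itself, not on the torque needed to reach it.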
Due to current research needs and the lack of commercial multi-channel, multi-constellation GNSS receivers, a two-board solution has been developed that can be mated with, and take advantage of the processing power of, the MicroZed FPGA board.
In order to achieve the proposed goals, an initial phase of assessing and updating the older design, and of building and testing SiGe modules (including both the electronics and casings), was carried out. The included results demonstrate performance logging GPS-L1 data with C/N0 and AGC values similar to those of previous versions of the modules, and navigation solutions with accuracies of a few meters. Secondly, a first iteration and design proposal for the new-generation receiver has been proposed for GPS and GLONASS L1 and L2, and has been manufactured and tested. Only partial tests could be performed due to flaws in the communication peripherals of the current revision of the MicroZed board, and the results have validated the receiver's design provided certain modifications are considered in future iterations. Furthermore, voltage and frequency tests have provided results with an error of less than 7%, and signal tests have provided C/N0 values similar to those of the SiGe modules, around 47 dB-Hz, which will be a useful baseline for future iterations. Finally, a design proposal for an interface board between the older NT1065_PMOD board and other FPGA boards carrying the standardized FMC connectors has been added to the report, and negotiations with manufacturers have begun.
The high Peak-to-Average Power Ratio (PAPR) of Orthogonal Frequency Division Multiplexing (OFDM) signals leads to serious system performance degradation. To work around this issue, several algorithms have been proposed in the literature to reduce the PAPR, but they often suffer from multiple limitations; in particular, the main issue with interleaving techniques is the spectral efficiency loss, as the transmission of Side Information (SI) is generally required. In contrast to previous works, this article proposes a blind interleaving technique for OFDM systems with signal space diversity. Indeed, with Rotated and Cyclically Q-Delayed (RCQD) constellations, the In-phase (I) and Quadrature (Q) components of constellation symbols are correlated, which allows the receiver to estimate the interleaver index without any SI. Moreover, to reduce the complexity burden at the receiver side, we first design a blind decoder based on the Minimum Mean Square Error (MMSE) criterion, and we then propose a low-complexity decoder for the Uniformly Projected RCQD (UP-RCQD) QAM, as this constellation has several interesting structural properties and achieves near-optimum BER performance. Simulation results show that our proposal leads to a large PAPR reduction and to near-optimum BER performance that outperforms, over various channels, the solution currently used in DVB-T2. They also underline the good performance of the blind decoding, performed with up to 98% complexity reduction compared to the max-log Maximum Likelihood (ML) estimation.
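The PAPR figure of merit discussed above is simply the peak instantaneous power of the time-domain OFDM symbol divided by its mean power; the sketch below computes it for one randomly drawn QPSK-loaded symbol, with the subcarrier count and modulation assumed for illustration.

```python
import cmath
import math
import random

random.seed(0)
N = 64  # number of subcarriers (assumed for illustration)
# one random QPSK symbol on each subcarrier
X = [complex(random.choice((-1, 1)), random.choice((-1, 1))) for _ in range(N)]

# time-domain OFDM symbol via a direct inverse DFT (O(N^2), for clarity)
x = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

peak_power = max(abs(s) ** 2 for s in x)
mean_power = sum(abs(s) ** 2 for s in x) / N
papr_db = 10 * math.log10(peak_power / mean_power)
```

Because the subcarriers can add constructively at some sample instants, the peak power exceeds the mean power, and the PAPR in dB grows with the number of subcarriers; this is the degradation that interleaving-based methods try to tame.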
As with all digital communications, understanding the propagation channel is essential. In this paper we present an analytical model of a channel consisting of a thin plate, including the effects of frequency-dependent speed of sound and attenuation. We show how a compressed sensing approach can be used to estimate this channel impulse response from real measurements, even in cases when the plate thickness causes the reverberating pulses to overlap. The estimate can be seen as a sparsity-constrained deconvolution of the combined impulse responses of the transmitting and receiving transducers. We then show with simulations that the proposed sparsity-constrained estimate is able to cope also in the presence of dispersion. We also analyze the performance of the proposed method with both simulations and experiments on 6 mm and 2 mm thick glass plates and a 3 mm thick aluminum plate, and our results show that the model assumptions seem to hold.
In all digital communications, knowledge of the propagation channel between the transmitter and the receiver is essential. For transmitting data through solid bodies, such as metal plates and pipe walls, ultrasound is a viable alternative to radio communication and wired transmission. In ultrasound communication, the channel consists of two parts: the combined response of the transducers used as transmitter and receiver, and the impulse response of the propagation medium itself. For a thin plate with parallel surfaces, this results in a reverberating channel that significantly reduces the achievable bitrate if not handled properly. In this paper we show with simulations how the bit-error rate in Orthogonal Frequency Division Multiplexing (OFDM) communications is affected by the reverberating nature of the plate, and how this can be overcome by the introduction of a channel shortening filter placed in front of the conventional OFDM receiver. The results show that this significantly reduces the bit-error rate, especially for thin plates. If the reverberations instead were to be compensated by the conventional channel equalization method in OFDM, we show that for the example in the simulations, the bitrate would drop by almost 25%, from about 3.9 Mbit/s to about 2.9 Mbit/s.
We present a new approach to approximate continuous-domain mathematical morphology operators. The approach is applicable to irregularly sampled signals. We define a dilation under this new approach, where samples are duplicated and shifted according to the flat, continuous structuring element. We define the erosion by adjunction, and the opening and closing by composition. These new operators will significantly increase precision in image measurements. Experiments show that these operators indeed approximate continuous-domain operators better than the standard operators on sampled one-dimensional signals, and that they may be applied to signals using structuring elements smaller than the distance between samples. We also show that we can apply the operators to scan lines of a two-dimensional image to filter horizontal and vertical linear structures.
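The flat dilation on irregular samples can be illustrated with a minimal sketch: with a flat structuring element of half-width r, the dilated value at any position is the maximum over samples whose shifted copies cover that position. This is a simplified stand-in for the paper's operator, and the sample positions, values, and r below are assumed.

```python
# Irregularly sampled 1-D signal as (position, value) pairs (assumed data)
samples = [(0.0, 1.0), (0.7, 3.0), (1.1, 2.0), (2.5, 0.5)]
r = 0.5  # half-width of the flat structuring element [-r, r] (assumed)

def dilate(t, samples=samples, r=r):
    """Value of the dilated signal at continuous position t.

    Each sample, duplicated and shifted across [-r, r], covers the
    interval [p - r, p + r]; the dilation is the max over covering samples.
    """
    covering = [v for (p, v) in samples if abs(t - p) <= r]
    return max(covering) if covering else float("-inf")
```

Note that the result is defined at every continuous position t, not only at the sample positions, which is what lets the operator act on structuring elements smaller than the inter-sample distance.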
This paper proposes a way of better approximating continuous, two-dimensional morphology in the discrete domain, by allowing for irregularly sampled input and output signals. We generalize previous work to allow for a greater variety of structuring elements, both flat and non-flat. Experimentally we show improved results over regular, discrete morphology with respect to the approximation of continuous morphology. It is also worth noting that the number of output samples can often be reduced without sacrificing the quality of the approximation, since the morphological operators usually generate output signals with many plateaus, which, intuitively, do not need a large number of samples to be correctly represented. Finally, the paper presents some results showing adaptive morphology on irregularly sampled signals.
Mathematical morphology (MM) on grayscale images is commonly performed in the discrete domain on regularly sampled data. However, if the intention is to characterize or quantify continuous-domain objects, then the discrete-domain morphology is affected by discretization errors that may be alleviated by considering the underlying continuous signal, given a correctly sampled bandlimited image. Additionally, there are a number of applications where MM would be useful and the data is irregularly sampled. A common way to deal with this is to resample the data onto a regular grid. Often this creates problems where data is interpolated in areas with too few samples. In this paper, an alternative way of thinking about the morphological operators is presented. This leads to a new type of discrete operators that work on irregularly sampled data. These operators are shown to be morphological operators that are consistent with the regular morphological operators under the same conditions, and yield accurate results under certain conditions where traditional morphology performs poorly.
This paper introduces a new operator that can be used to approximate continuous-domain mathematical morphology on irregularly sampled surfaces. We define a new way of approximating the continuous-domain dilation by duplicating and shifting samples according to a flat continuous structuring element. We show that the proposed algorithm can better approximate continuous dilation, and that dilations may be sampled irregularly to achieve a sparser sampling without greatly compromising the accuracy of the result.
In this paper, we propose a perturbation amplitude adaption scheme for phasor extremum seeking control based on the plant's estimated gradient. By using phasor extremum seeking instead of classical extremum seeking, the problem of algebraic loops in the controller formulation is avoided. Furthermore, a stability analysis for the proposed method is provided, which is the first stability analysis for extremum seeking controllers using adaptive amplitudes. The proposed method is illustrated using numerical examples and it is found that changes in optimum can be tracked accurately while the steady-state perturbations can be reduced significantly.
An array processing GNSS (Global Navigation Satellite System) receiver may provide increased accuracy, reliability and integrity by forming beams towards satellites and nulls towards interference or reflective surfaces. Also, software defined receivers have proven themselves versatile and provide a convenient environment to implement novel algorithms. This paper first describes the gain/phase calibration of a seven-element custom array antenna and proceeds to compare the single antenna performance to that attained by forming beams towards the satellites. IF (Intermediate Frequency) data, high-rate samples representing the received signal in a narrow band around the GPS L1 frequency, from an array antenna have been recorded both in an environment with open sky conditions and in more challenging areas (central Boulder, Colorado). Simultaneously, data from a high quality GPS-based INS was recorded in order to obtain accurate estimates of position/orientation. Calibration of the system (including antennas and front-ends) was performed using data from the benign environment, and based on this information, deterministic beams were formed towards the satellites using data from the semi-urban dataset. The single antenna accuracy was then compared to the position obtained by processing after forming beams.
Software defined receivers (SDRs) are an increasingly important tool within the GNSS research community, as their high level of flexibility offers a significant advantage over traditional hardware implementations. Over the last decade, software receivers have been used to investigate techniques as diverse as bi-static radar (additional correlators), multipath mitigation techniques, GPS/INS integration, and array processing. These are only a few examples of features that could be required of an SDR; others include support for new signals (Galileo, GPS L5), multiple data file formats, high sensitivity, and support for very long data sets. The large number of available features should ideally be coupled with program simplicity (such that other people can understand the program) and efficiency. This paper discusses these issues and proposes several solutions, such as generalized data buffers (that are trivial to extend for new data formats) and a unified tracking structure (regardless of signal modulation). Examples are given using a Matlab implementation based on the Borre/Akos book "A Software-Defined GPS and Galileo Receiver", however with significant modifications. Where critical, Java is used to increase performance while maintaining cross-platform compatibility. Near real-time operation is available under optimal circumstances, and the receiver currently supports GPS C/A- and GPS P-code signals.
In this contribution we have examined how the maximum reach varies at different bit rates in a VDSL system when different numbers of ADSL and VDSL systems share the same binder. In this context it is concluded that, when using the Zipper duplex scheme, VDSL can coexist with ADSL in the same binder without a significant degradation in reach. Further, it is shown that the Zipper duplex scheme ensures that ADSL is not disturbed by NEXT from VDSL. The Zipper performance has also been compared with both a TDD proposal and an FDD proposal. The results from this comparison show that Zipper outperforms FDD and TDD in terms of extended reach at all studied mixes of ADSL and VDSL sharing the same binder. For example, with a moderate number (5) of ADSL disturbers, the Zipper reach is 150 meters longer than for TDD, and 350 meters longer than for FDD, when studying the medium-range asymmetrical bit rate (26:3.2 Mbps).
Discrete Multitone (DMT) modulation is a multicarrier technique which makes efficient use of the channel, maximizing the throughput by sending different numbers of bits on different subchannels. The number of bits on each subchannel depends on the signal-to-noise ratio of the subchannel. The performance of a DMT system can be further increased by using powerful coding techniques. This paper investigates an implementation of trellis coded modulation in a DMT system intended for transmission over short copper cables, less than 1000 m. The suggested trellis code is Wei's 4-dimensional 16-state code combined with trellis shaping. A single encoder is used, which codes across the tones of each DMT symbol. At a bit error probability of 10^-7, the suggested code gains 4-5 dB over uncoded transmission.
In this thesis I present the algorithms and methods used to extract a shoreline from remote sensing data (i.e. satellite imagery) and to determine the water level at the time of the data recording, both of which are used to generate a DEM (Digital Elevation Model) of the intertidal area. The generation of the DEM is done in multiple steps, the first being the shoreline extraction. To find the shoreline, satellite imagery from the PROBA-V satellite is used. The image data consists of four channels (Red, Blue, NIR, SWIR), which are combined to generate an artificial RGB image. This RGB image is then converted into the HSV colour space. The hue and value channels are selected, and a simple thresholding is applied to separate water masses from land masses. The final binary image is then cleaned of noise and reduced to a one-pixel-wide line representing the detected shoreline. This process is applied to several images taken at different water levels (i.e. different parts of the tidal cycle). To estimate the altitude of the waterline, tidal data from tide gauges at Chittagong and Cox's Bazar are used. First, both tidal records are compared to determine the phase and amplitude scaling with respect to the distance between the two gauges. Afterwards, these values are used to inter- and extrapolate the water level along the shoreline. This makes it possible to generate a synthetic tide measurement for every point at any time based only on the tidal records at Chittagong. The synthetic tide measurements are then combined with the shorelines to generate the final DEM. Finally, the generated DEM is compared with nautical charts of the area, as well as a different remotely-sensed DEM of Chittagong, to estimate its accuracy.
This whole process allows a DEM of an intertidal area to be generated simply, without in-situ measurements, and especially without having to repeat measurements due to fast changes in the shoreline.
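The hue/value thresholding step described above can be sketched for a single pixel; the RGB values and threshold bounds below are assumed for illustration and are not the thesis's calibrated thresholds.

```python
import colorsys

# Two toy pixels in RGB (components in [0, 1], assumed values):
pixels = {
    "water": (0.10, 0.25, 0.45),  # dark bluish
    "land":  (0.45, 0.40, 0.20),  # brownish
}

def is_water(rgb, hue_min=0.5, hue_max=0.75, value_max=0.5):
    """Classify a pixel as water if its hue lies in an assumed bluish band
    and its value (brightness) is below an assumed threshold."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return hue_min <= h <= hue_max and v <= value_max
```

Applying such a per-pixel test over the whole image yields the binary water/land mask that is subsequently cleaned and thinned to the one-pixel-wide shoreline.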
This article proposes a Bayesian procedure to calculate posterior probabilities of active effects for unreplicated two-level factorials. The results from a literature survey are used to specify individual prior probabilities for the activity of effects and the posterior probabilities are then calculated in a three-step procedure where the principles of effects sparsity, hierarchy, and heredity are successively considered. We illustrate our approach by reanalyzing experiments found in the literature.
The paper introduces a novel framework for safe and autonomous aerial physical interaction in industrial settings. It comprises two main components: a neural network-based target detection system enhanced with edge computing for reduced onboard computational load, and a control barrier function (CBF)-based controller for safe and precise maneuvering. The target detection system is trained on a dataset under challenging visual conditions and evaluated for accuracy across various unseen data with changing lighting conditions. Depth features are utilized for target pose estimation, with the entire detection framework offloaded into low-latency edge computing. The CBF-based controller enables the UAV to converge safely to the target for precise contact. Simulated evaluations of both the controller and target detection are presented, alongside an analysis of real-world detection performance.
Transit-time ultrasonic flow meters present some advantages over other flow meters for district heating industries. They are both accurate and non-intrusive. It is well known that ultrasonic flow meters are sensitive to installation effects. Installation effects can be static or dynamic. Among the possible dynamic installation effects is pulsating flow. The influence of pulsating flow on the prediction and zero-crossing operations is investigated. Expressions are found for the prediction error and the zero-crossing error. The relative errors due to prediction and zero-crossing are plotted. The prediction error can reach dramatic values, while the zero-crossing operation is hardly influenced by flow pulsations.
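The textbook transit-time principle behind such meters (not the paper's error analysis) can be stated compactly: with path length L at angle theta to the flow, the upstream and downstream transit times are t_up = L/(c - v cos(theta)) and t_dn = L/(c + v cos(theta)), so the flow speed follows from the time difference without knowing the speed of sound c. The geometry and values below are assumed.

```python
import math

def flow_speed(t_up, t_dn, L, theta):
    """Flow speed from upstream/downstream transit times; c cancels out:
    1/t_dn - 1/t_up = 2*v*cos(theta)/L."""
    return (L / (2 * math.cos(theta))) * (1 / t_dn - 1 / t_up)

# round-trip self-check with assumed values (water-like c, 45-degree path)
L, theta, c, v_true = 0.1, math.radians(45), 1480.0, 2.0
t_up = L / (c - v_true * math.cos(theta))
t_dn = L / (c + v_true * math.cos(theta))
v_est = flow_speed(t_up, t_dn, L, theta)
```

Since the measured time differences are on the order of nanoseconds, the zero-crossing and prediction operations that time the received pulses directly determine the meter's accuracy, which is why pulsating flow can matter.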
This work is set in the context of the ESA BIOMASS mission, in which, for the first time, a P-band SAR sensor will be mounted on a spaceborne system. With its penetration capability, it will contribute to the measurement of the biomass and carbon content of the Earth's forests. An autofocus algorithm is needed to correct phase errors introduced by the changing refractive index in the ionosphere. Because of the quickly changing nature of the ionosphere, defocusing has to be measured and corrected locally over several sections of a SAR capture. In this thesis, a thorough introduction to phase errors is given, keeping in mind that the ionosphere is expected to introduce time-varying low-frequency errors that can be constructed as a series of quadratic curves. These quadratic phase errors introduce defocusing that is seen as blur and loss of contrast. An algorithm for measuring this defocusing is proposed and tested, and its strengths and weaknesses are discussed. The idea behind measuring defocusing is to try to recover the temporal phase function that introduced the defocusing in the first place. Here a method to recover this temporal phase function is introduced, and a thorough performance assessment of this retrieval is carried out. The variables affecting the quality and reliability of this retrieval are studied one by one.
Air- and spaceborne radars play an important role in civilian and military use. There are numerous applications, such as Earth observation and surveillance. High-performance clutter suppression is a crucial part of many of these radar systems. Space-time adaptive processing (STAP) has become a topic of interest for clutter suppression applications, although for most moving target indication (MTI) radars other techniques are used for clutter suppression. This master's thesis analyses STAP for two antenna configurations for airborne radar applications. The first configuration is based on auxiliary antennas; the second is based on a multitapering method called discrete prolate spheroidal sequences (DPSS). This thesis shows that both antenna configurations are valid choices for STAP applications, although the latter configuration, DPSS, has higher clutter suppression performance in general. However, there are fundamental limitations with the DPSS configuration. These limitations are briefly discussed in this thesis, but more work should be done before implementing the DPSS configuration.
This paper presents the validation of a software-defined baseband (SDB) system for satellite telemetry and telecommand (TM/TC). The baseband system was developed using the open-source GNU Radio development kit. It runs on a personal computer connected to a commercial off-the-shelf (COTS) RF frontend. The validation process was performed by the use of a mission-qualified satellite emulator, a state-of-the-art baseband unit, and orbiting satellites. The baseband is designed to offer multimission support. Hence, it includes a suite of modulation schemes, line codes, matched filters, and Consultative Committee for Space Data Systems (CCSDS) forward error correction codes (convolutional, Reed–Solomon, concatenated, and low-density parity-check [LDPC]) typically employed in TM/TC missions. The figures of merit used for the validation of the TM receiver are bit error rate (BER) and frame error rate (FER). For the TC transmitter, the validated features are modulation index, power spectrum, and the physical layer operations procedures (PLOP).
This paper presents functional analysis and system specifications of a baseband system using software-defined radio (SDR) technology. The analysis is primarily based on the latest blue-book standards from the Consultative Committee for Space Data Systems (CCSDS). It covers telemetry, telecommand, and ranging, as well as some specifications of the associated physical layers. The SDR-based baseband system is envisioned to support ground operations in the form of a software-as-a-Service (SaaS) private cloud.
This paper presents the development of a ground system based on software-defined radio for supporting both ground testing and space telemetry and telecommand of one of the nanosatellites in the QB50 mission. The QB50 project is an ongoing European Commission Seventh Framework initiative, which aims at launching a constellation of 50 CubeSats in the lower thermosphere to carry out in-situ scientific measurements. The paper discusses the implementation of amateur radio protocols and telecommunication modulation schemes on the ground system. The system setup, deployment and scheduling are also discussed using two separate ground stations. The use of different software for testing the system is detailed, and the results show the operability of the developed ground system. © 2016, American Institute of Aeronautics and Astronautics Inc
This paper presents the verification of phase and frequency modulation schemes for a software-defined radio baseband system that is being prototyped to support satellite telemetry, telecommand, and ranging. It presents the theory behind the two modulation schemes, their implementation, and their verification against emulated signals from a space-qualified hardware-based baseband system as well as against signals from the Odin satellite.
Communication systems are adopting all-software architectures because of their scalability, extensibility, flexibility, and cost-effectiveness. This paper introduces a concurrent approach to the development and verification of baseband systems for satellite ground operations, based on the behaviour-driven development methodology. The open-source GNU Radio development kit is used for developing the software-defined radio baseband signal processing, as well as for simulating the satellite and realistic channel impairments. The final system performance shows deviations of less than 1 dB from the ideal performance and from the Green Book standards specified by the Consultative Committee for Space Data Systems.
DMT-VDSL signals have a high peak-to-average power ratio (PAR). In the transmitter, the PAR governs the necessary resolution of the digital-to-analog converter (DAC) and is an important factor in the power consumption of the line driver. Aiming at implementation in a specific system, we propose a low-complexity PAR-reduction method based on the iterative algorithm derived in [5, 20, 24, 25]. We maintain good performance while stressing a straightforward, low-complexity implementation. Key elements of the method are: low latency; no loss in data rate; a precalculated and stored peak-cancellation waveform; and bit-shifts (multiplication by powers of two) replacing the scaling of the waveform. Computer simulations show that, for a DMT frame length of 4096 samples and a frame clip rate of 10⁻⁴, the PAR can be reduced by about 1.5 to 2.0 dB, depending on the number of peaks cancelled. When multiplication is replaced by bit-shifts, the reduction is still 1.5–1.7 dB.
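The bit-shift idea can be sketched as follows. This is an illustrative Python toy, not the paper's algorithm: the frame length, clip threshold, and cancellation kernel are invented for the example, and the actual iterative method of [5, 20, 24, 25] is more elaborate. The largest peak is located, and the stored cancellation waveform, scaled only by a power of two, is subtracted around it:

```python
import numpy as np

CLIP_LEVEL = 3.0   # amplitude above which peaks are cancelled (illustrative)

# Precalculated, stored peak-cancellation waveform: here a 9-sample raised
# cosine; a real design would shape it to fit the DMT spectrum.
KERNEL = 0.5 * (1.0 + np.cos(np.linspace(-np.pi, np.pi, 9)))

def cancel_peaks(frame, n_peaks=2):
    out = frame.astype(float).copy()
    for _ in range(n_peaks):
        k = int(np.argmax(np.abs(out)))
        excess = abs(out[k]) - CLIP_LEVEL
        if excess <= 0:
            break
        # Largest power of two not exceeding the excess: in fixed-point
        # hardware this scaling is a single bit-shift, not a multiplication.
        scale = np.sign(out[k]) * 2.0 ** np.floor(np.log2(excess))
        lo, hi = max(0, k - 4), min(len(out), k + 5)
        out[lo:hi] -= scale * KERNEL[lo - (k - 4): hi - (k - 4)]
    return out
```

Because the power-of-two scale never exceeds the excess, the centre sample is never pushed past the clip level in the opposite direction; repeating the step cancels the remaining peaks one at a time, at the cost of a small residual above the threshold.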
In this paper we describe an LTE-based demonstrator of the Universal Link Layer API (ULLA) and Cognitive Resource Manager (CRM) modules developed in the ARAGORN project. The demonstrated LTE system comprises one LTE TDD eNode B and one User Equipment (UE). We first introduce the ULLA and CRM framework and then demonstrate its suitability for implementation with existing LTE equipment. We show how, through ULLA, the CRM is able to obtain PHY/MAC status information on the link between the eNode B and the UE, and in turn change system parameters to achieve better resource utilization and transmission efficiency. The control logic can be implemented with simple adaptation or with policy-based intelligent methods. The platform clearly shows the feasibility of using the ULLA/CRM architecture for radio resource management in an LTE network. It also shows the neutrality of the ULLA/CRM mechanisms towards the PHY/MAC characteristics of the LTE technology platform; hence the platform can flexibly switch between technology platforms (e.g. between LTE access and WiFi access) under the control of ULLA/CRM.
The particle size distribution of fragmented rock in mines significantly affects the operational performance of loading equipment, materials handling, and crushing systems. A number of methods to measure rock fragmentation exist at present; however, these systems have several shortcomings in an underground environment. This paper outlines the first implementation of high-resolution 3D laser scanning for fragmentation measurement in an underground mine. The system is now used routinely for fragmentation measurement at the Ernest Henry sublevel-cave mine, following extensive testing and calibration, and is being used to study the effects of blasting parameters on rock fragmentation in order to optimise blast design. Results from 125 three-dimensional scans measured the average P50 and P80 to be 230 mm and 400 mm, respectively. The equipment, methodology, and analysis techniques are described in detail to enable application of the measurement system at other mines.
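The reported P50 and P80 are percentiles of the fragment size distribution: the sizes below which 50% and 80% of the material falls. As a minimal illustration (the fragment sizes below are invented, not the mine's data), such percentiles can be computed from a list of measured sizes:

```python
import numpy as np

def passing_percentiles(sizes_mm, percentiles=(50, 80)):
    """Pxx: the size below which xx% of fragments fall.
    Computed by count here; sieve analyses typically weight by mass."""
    return {f"P{p}": float(np.percentile(sizes_mm, p)) for p in percentiles}

# Hypothetical fragment sizes from one scan (mm)
sizes = [120, 180, 230, 230, 260, 310, 400, 400, 450, 520]
print(passing_percentiles(sizes))   # {'P50': 285.0, 'P80': 410.0}
```

In the laser-scanning system, the size list would come from segmenting individual fragments in each 3D point cloud, and the distribution would normally be mass-weighted rather than count-weighted as in this sketch.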
Experimental work has been performed on a selection of small ultrasonic flow meters for water, in order to investigate the influence of temperature and flow profile disturbances on the performance of flow meters in district heating applications. The flow meters tested were all ultrasonic flow meters of the sing-around type. The selection comprises seven meters of three different brands, all with a flow range from 0.015 m3/h to 1.5 m3/h. These meters are commonly used in heat meters in small district heating subscriber stations. The flow meters are presented without identification. All tests were performed in a flow meter calibration facility, over a flow range including the minimum and maximum flow of each meter. Three different water temperatures and three different installations were investigated. Water temperatures of 20 °C, 50 °C and 70 °C were used; these temperatures are representative of district heating applications. The installations tested involved flow meters mounted with long straight pipes both up- and downstream, representing ideal conditions, as well as a single elbow and a double elbow out of plane, both generating disturbed flow profiles. All set-ups are in accordance with the flow meter specifications. The results demonstrate that both the change in temperature and the disturbed flow profiles introduce errors in the flow measurements. The change from 20 °C to 50 °C and 70 °C can cause a shift in meter performance larger than the specified maximum permissible error. Compared with the ideal installation, the installations generating disturbed flow profiles cause errors of more than 2%. The errors due to temperature and installation effects tend to add when combined, which may lead to even larger errors.
In this paper we present a method for measuring the mass fraction of iron ore in two-phase flows consisting of water and iron ore particles. The proposed method uses a clamp-on PVDF transmitter and a 12-element PVDF receiver array to measure the excess attenuation of pulsed ultrasound due to the presence of particles. From the measured data we extract a quantity that varies linearly with particle mass fraction in the range from 0 to 13%. With the proposed method we can measure the mass fraction to within one percentage point, at a 95% confidence level. Using an array of receivers, we can also calculate an attenuation profile over a cross section of the flow, which indicates how the particles are distributed within it. This is used to examine how the particle distribution is affected by flow speed, flow meter position, and mass fraction.
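A sketch of the calibration idea behind such a linear relation, with invented, idealized (noise-free) numbers in place of the measured data: fit a straight line to excess attenuation versus known mass fraction, then invert the line to estimate the fraction from a new measurement.

```python
import numpy as np

# Hypothetical calibration data: known mass fractions (%) and the
# corresponding excess attenuation (dB); idealized to lie exactly on a line.
fractions = np.array([0.0, 2.0, 5.0, 8.0, 11.0, 13.0])
attenuation = 0.1 + 0.56 * fractions

# Least-squares straight-line fit: attenuation = slope * fraction + offset
slope, offset = np.polyfit(fractions, attenuation, 1)

def estimate_mass_fraction(measured_db):
    """Invert the calibration line to estimate the particle mass fraction (%)."""
    return (measured_db - offset) / slope

print(round(estimate_mass_fraction(0.1 + 0.56 * 5.0), 3))  # 5.0
```

With real, noisy calibration data the least-squares fit also yields residuals, from which a confidence interval on the estimated fraction (such as the 95% level quoted above) can be derived.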
In process industries such as the oil and gas industry, the paper pulp industry, and the mining industry, multiphase flows are common, and it is often of interest to measure the mass fractions of the different phases. In the mining industry, for example, iron ore powder is transported using water, and there is a need for measurement techniques to monitor the particle mass fraction. Most existing techniques are either invasive, inaccurate, or too slow to be used on-line. The long-term goal of this research project is to develop an ultrasound-based method for measuring mass fractions and mass fraction velocities. The first two papers in this thesis consider how scattering of sound can be measured and how this can be used to measure mass fractions; the ideas are verified experimentally. The third paper is on optimal experimental design, where the problem is to select suitable experiments from a large candidate set; we present a new algorithm for generating optimal designs. The methods in the first two papers can be extended to incorporate more of the underlying physics, as well as more sophisticated multi-dimensional signal processing techniques.
This thesis deals with three different applications of ultrasound measurement technology. In process industries like the mining industry, the oil and gas industry, and the paper pulp industry, multiphase flows play an important role. It is of interest to measure several different parameters of these flows, such as the mass fractions and the mass fraction velocities of the different phases. There is currently no single technique available that can measure all of these properties, and commercial multiphase flow meters are in practice a combination of several flow meters that each measure different parameters. The long-term goal of the project presented in this thesis is to develop an ultrasonic technique that can measure all of these properties. The first focus of the work has been to develop an ultrasonic method that can measure the mass fraction of particles in a solid/liquid multiphase flow. The technique is based on a sensor array that measures an entire cross section of the flow. The use of an array makes it possible to measure the particle distribution, which can then be used to detect static installation effects, thus enabling the use of a single-point sensor. The sensor array is clamped onto the outside of the flow pipe, which means the technique is completely non-invasive. The second focus is on imaging of opaque flows. While traditional optical techniques such as laser Doppler velocimetry (LDV) do not work for opaque media, there is no such restriction on the ultrasonic method. The imaging technique, called ultrasonic speckle correlation velocimetry (USV), has been applied to image vortices in flows and to measure particle velocity profiles in multiphase flows. The third and last contribution is in the field of non-destructive evaluation (NDE) of materials. In a biomaterial engineering project, the goal has been to develop an injectable bone cement that can be used to repair or replace fractured bone.
During the setting reaction, the cement undergoes a series of phase changes, which have implications for how the cement can be used. The research is motivated by the lack of satisfactory standards for measuring the setting time. The existing methods are based on mechanical testing and visual examination, which makes them time-consuming and subjective. The ultrasonic technique presented in this thesis provides a non-destructive and objective way to determine both the setting time and some mechanical properties of the cement, during the entire setting process. The thesis consists of an introductory part and a collection of seven papers.