By measuring the spectral reflection from four different road conditions (dry, wet, icy, and snowy asphalt), a method for classifying the surfaces using two and three wavelengths is developed. The method is tested against measurements to ascertain the probability of misclassification between the surfaces. The angular spectral response confirms that asphalt and snow are diffuse reflectors, whereas water and ice are specular reflectors.
An investigation of different road conditions has been conducted using a short-wave infrared (SWIR) online sensor to examine the possibility of estimating road condition parameters such as porosity, depth, and roughness. These parameters are essential for non-contact road friction estimation. The investigation shows that it is possible to detect changes in the depth of water and ice, as well as to classify different types of ice, by utilising polarised SWIR light and a modified Hapke directional reflectance model.
We extend the single-channel adaptive linear receiver (ALR) to the multidimensional case. The extension is used for the cancellation of strong spatially-distributed narrowband interference in direct-sequence spread spectrum communications. Simulations show a gain of 8 dB for the case of two interferers occupying 30% of the bandwidth of the spread spectrum signal.
In this paper, we present a blind equalization algorithm for noisy IIR channels when the channel input is a finite-state Markov chain. The algorithm yields estimates of the IIR channel coefficients, channel noise variance, transition probabilities, and state of the Markov chain. Unlike the optimal maximum likelihood estimator, which is computationally infeasible since the computing cost increases exponentially with data length, our algorithm is computationally inexpensive. It is based on combining a recursive hidden Markov model (HMM) estimator with a relaxed SPR (strictly positive real) extended least squares (ELS) scheme. In simulation studies we show that the algorithm yields satisfactory estimates even at low SNR. We also compare the performance of our scheme with a truncated FIR scheme and the constant modulus algorithm (CMA), currently a popular algorithm in blind equalization.
Advances in blind identification of fractionally-spaced models for digital communication channels, and in blind fractionally-spaced equalizer adaptation, rely on the assumption that the time span chosen for the fractionally-spaced equalizer exceeds that of the channel. This paper considers time-domain design formulas minimizing the mean-squared symbol recovery error achieved by a finite-length FIR fractionally-spaced equalizer whose time span is shorter than that of the channel impulse response, for white zero-mean QAM sources in the presence of white zero-mean channel noise. For minimum mean-squared error designs, the achievable symbol error rates are plotted versus the ratio of the source variance to the channel noise variance (with the channel model power normalized to achieve a received signal of unit variance) for different fractionally-spaced equalizer lengths on 64-QAM, for several T/2-spaced channel models derived from experimental data. Our intent is to fuel the ongoing debate about fractionally-spaced equalizer length selection.
This paper investigates the model order selection problem for the multidimensional autoregressive (MAR) process in airborne radar detection processing that uses an innovations-based detection algorithm (IBDA). Results indicate that a low-order model should be used to accurately portray the return signal spectrum. Specifically, this paper investigates the use of the Akaike (1971) information criterion for model order selection. Examples are included for physically modeled data sets as well as actual radar data sets.
A unique failure mechanism, identified on an unused output buffer located near a used input protection device, occurs when excessive substrate current is generated during an electrostatic discharge (ESD) event. This new mechanism, the proximity effect, plays an important role when the n-moat region of an input ESD circuit is within 20 μm of an unrelated n-moat diffusion region contacted to the power supply, Vcc. The operation of the most commonly used ESD input protection circuitry when stressed with respect to Vcc is reviewed. A laser-cut experiment has verified that disconnecting the Vcc bus from the unused n-moats eliminates this type of ESD failure. Device metal mask changes have confirmed these findings. This ESD failure mechanism has been demonstrated on a variety of I/O buffer layouts, and a solution has been identified.
We study the relocation of the local minima of the fractionally spaced constant modulus algorithm (FSE-CMA) cost function in the presence of noise. Local minima move in a particular direction as the noise power increases, and their number may eventually be reduced. In such cases FSE-CMA may fail to adequately reduce intersymbol interference (ISI), but can approximate the MMSE solution by reducing the equalizer noise gain under certain constraints. We analyze the mechanism of relocation of the FSE-CMA cost function local minima in terms of the auto-correlation matrix of the sub-channel convolution matrix and its eigenvectors.
We present a computationally efficient method of separating mixed speech signals. The method uses a recursive adaptive gradient descent technique with a cost function designed to maximize the kurtosis of the output (separated) signals. The choice of kurtosis maximization as an objective function (which acts as a measure of separation) is supported by experiments with a number of speech signals as well as spherically invariant random processes (SIRPs), which are regarded as excellent statistical models for speech. Development and analysis of the adaptive algorithm are presented, along with simulation examples using actual voice signals.
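The kurtosis-maximizing idea can be sketched in a few lines; the following is a minimal batch-gradient illustration in NumPy, not the paper's recursive implementation (the `separate` routine, its step size, iteration count, and the unit-norm constraint are our illustrative choices):

```python
import numpy as np

def kurtosis(y):
    # Normalized excess kurtosis: E[y^4] / E[y^2]^2 - 3 (zero for Gaussian signals)
    m2 = np.mean(y**2)
    return np.mean(y**4) / m2**2 - 3.0

def separate(x, steps=200, mu=0.05, seed=0):
    # x: (channels, samples) instantaneous mixture; gradient ascent on |kurtosis|
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(x.shape[0])
    w /= np.linalg.norm(w)
    n = x.shape[1]
    for _ in range(steps):
        y = w @ x
        m2 = np.mean(y**2)
        # batch gradient of E[y^4]/E[y^2]^2 with respect to w
        g = 4.0 * (x @ y**3) / (n * m2**2) \
            - 4.0 * np.mean(y**4) / m2**3 * (x @ y) / n
        w += mu * np.sign(kurtosis(y)) * g   # climb toward larger |kurtosis|
        w /= np.linalg.norm(w)               # keep the extractor unit-norm
    return w
```

The unit-norm constraint rules out the trivial scaling solution, and the sign factor lets the same update handle both super- and sub-Gaussian outputs.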
A variety of blind equalization algorithms exist. These algorithms, which draw on some theoretical justification for the demonstration or analysis of their purportedly ideal convergence properties, almost invariably rely on the input data being independent and identically distributed (i.i.d.). In contrast, in this paper we show that input correlation can have a marked effect on the character of algorithm convergence. We demonstrate that under suitable input data correlation and channels: (i) undesirable local minima present in the i.i.d. case are absent for certain correlated sources implying ideal global convergence for some situations and, (ii) the most commonly employed practical algorithm can exhibit ill-convergence to closed-eye minima even under the popular single spike initialization when an eye-opening equalizer parameterization is possible.
We examine the use of a blind adaptive “pre-whitening” filter preceding an equalizer adapted by the constant modulus algorithm (CMA). The idea is based on previous results in which a (fixed, or non-adaptive) pre-whitening filter provides an isometry (i.e., a geometry-preserving transformation) between the combined channel-equalizer space (or “global space”) and the equalizer tap space. Since much of the analysis found in the CMA literature is done in global space and now applies to the pre-whitened equalizer tap space, considerable exploitation of a known geometric structure becomes possible. This paper's main result demonstrates a method for improving the convergence rates of CMA-adapted equalizers using the results of pre-whitening analysis; slow convergence is a common negative point associated with this well-known, robust blind equalization algorithm.
The constant modulus algorithm (CMA) is a popular blind equalization algorithm. A common device used in demonstrating the convergence properties of CMA is the assumption that the source sequence is i.i.d. (independent, identically distributed). Previous results in the literature show that a finite-length fractionally-spaced equalizer allows for perfect equalization of moving average channels (under certain channel conditions known as zero-forcing criteria). CMA has previously been shown to converge to such perfectly equalizing settings under an independent, platykurtic source. This paper investigates the effect of the distribution from which an independent source sequence is drawn on the CMA error surface and stationary points in the perfectly-equalizable fractionally-sampled equalizer case. Results include symbolic identification of all stationary points, as well as the eigenvalues and eigenvectors associated with their Hessian matrices. Results show quantitatively the loss of error surface curvature (in both direction and magnitude) at all stationary points. Simulations demonstrate the effect this has on convergence speed.
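For readers unfamiliar with CMA, the update at the heart of these analyses is compact; the following is a minimal baud-spaced sketch (tap count, step size, and the single-spike initialization are illustrative, and the papers above study the fractionally-spaced variant):

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    # Constant modulus algorithm: penalize deviation of |y|^2 from the
    # dispersion constant R2 = E|s|^4 / E|s|^2 of the source constellation.
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # single-spike initialization
    y_out = np.empty(len(x) - n_taps + 1, dtype=complex)
    for k in range(len(y_out)):
        r = x[k:k + n_taps][::-1]             # regressor, newest sample first
        y = w @ r
        w -= mu * y * (np.abs(y)**2 - R2) * np.conj(r)  # stochastic gradient step
        y_out[k] = y
    return w, y_out
```

On a distortion-free unit-modulus source the error term vanishes and the taps stay put; with a dispersive channel the same update pushes the combined channel-equalizer response toward an open-eye setting.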
Following a general presentation of the system architectures for two types of reliable energy stations, GEODE and ALFATEL, used in telecommunication facilities, the authors describe the concept of a centralized management system covering several energy stations situated at different geographical locations. The role of communication with the power supply station monitoring units is then described. This makes it possible to define the functions carried out by the management unit, some of which may be operated through remote control by staff on standby duty. The functions of remote monitoring of processes, statistical studies, inventories, background records, man/machine relationships, and maintenance assistance are backed up by a PC-type machine.
This work focuses on gaining insight into CMA behavior through identification of global properties of CMA stationary point locations due to source statistics (distributions and temporal correlation). The CMA error function under source correlation effects is viewed as a deformation of the i.i.d. CMA error surface. As deformations are the realm of topology, we look at some of the topological aspects connected with CMA source correlation. Some general characteristics of CMA under an i.i.d. source are presented, noting relations to Morse theory, polytopes, and Euler characteristics.
We consider a nonlinear equalizer structure that provides effective and computationally efficient intersymbol interference (ISI) reduction for channels with nonlinearities. The technique makes use of a RAM-DFE structure enhanced with the ability to cancel pre-cursor nonlinear ISI. In many nonlinear channels, the distortion caused by the nonlinear element (such as a nonlinear power amplifier) and bandlimiting filters creates nonlinear ISI. Such distortion typically limits the system to low-order constellations (such as QPSK). Higher-order constellations are possible only if effective reduction of the nonlinear ISI is achieved. The applicability of our nonlinear equalizer and its performance for a nonlinear satellite channel are included.
We present a detection method for nonlinear channels with memory. This method uses a linear feedforward equalizer in conjunction with a random access memory (RAM) based (nonlinear) equalizer. The design criterion for the feedforward equalizer is non-traditional, with the intent to improve performance of the RAM-based equalizer. Development of the design criterion and a training-mode based adaptive implementation along with examples of this equalizer with a Pre-Cursor Enhanced RAM-DFE Canceller are included.
CMA fractionally spaced equalizers (CMA-FSEs) have been shown, under certain conditions, to be globally asymptotically convergent to a setting which provides perfect equalization. Such a result relies heavily on the assumptions of a white source and no channel noise (as is the case in much of the literature's analysis of CMA). Herein, we relax the white source assumption and examine the effect of source correlation on CMA. Analytic results are meshed with examples showing CMA-FSE source correlation effects. Techniques for finding all stationary and saddle points on the CMA-FSE error surface are presented using recent developments in the algebraic-geometry community.
This paper presents methods for detection and localization of photon-limited objects in noise. As opposed to correlation-based or Fourier-transform-based techniques, which exhibit sensitivity to object scaling, we propose a method based on the continuous wavelet transform, with its ability to reject noise and to localize objects in space and time as well as in scale. An advantageous twist presented here is the use of the wavelet transform on the complex envelope of the signal of interest, which has the advantage of reducing "rippling" effects seen in the transform of the original waveform. An example of further post-processing on the wavelet-transformed data is provided.
The problem of imaging through turbulent media has been studied frequently in connection with astronomical imaging and airborne radars. Therefore most image restoration methods encountered in the literature assume a stationary object, e.g., a star or a piece of land. In this paper the problem of interferometric measurements of slowly moving or deforming objects in the presence of air disturbances and vibrations is discussed. Measurement noise is reduced by postprocessing the data with a digital noise suppression filter that uses a reference noise signal measured on a small stationary plate inserted in the field of view. The method has proven successful in reducing noise in the vicinity of the reference point where the size of the usable area depends on the degree of spatial correlation in the noise, which in turn depends on the spatial scales present in the air turbulence. Vibrations among the optical components in the setup tend to produce noise that is highly correlated across the field of view and is thus efficiently reduced by the filter. © 2008 Optical Society of America.
The RAMO network is the fundamental tool for operating the technical environment equipment of telecommunications centres covering the power supply, cooling system, building surveillance, access control, security, EMC, and the electrostatic environment. It implements functions which make it possible to analyse the quality of the equipment's performance in operation and to precisely identify equipment failures in order to reduce their mean time to repair. The type of equipment included in this network varies depending on the size of the centre which houses it. It comprises the power supply systems, consisting of an electrical generator, an AC distribution cabinet, and a conversion and power storage cabinet, from the Alfatel range, catering for all power requirements from 2.5 kVA to 60 kVA, and from the Geode range for power greater than 100 kVA, together with the cooling systems and uninterruptible power supplies. The networking of all these items of equipment over an X.25 protocol, using France Telecom's operating architecture, constitutes the RAMO network.
We present a narrowband interference (NBI) canceller that suppresses spectral leakage in an orthogonal frequency-division multiplexing (OFDM)-based system caused by a narrowband (NB) signal. We assume that the spectrum of the NB signal is within the spectrum of the OFDM signal. This can be the case, e.g., on digital subscriber lines (DSL) and in new unlicensed frequency bands for radio transmission. The canceller makes linear minimum mean-square error estimates of the spectral leakage by measuring the NBI on a few modulated or unmodulated OFDM subcarriers. It uses a model of the NB signal's power spectral density as a priori information. Using a frequency invariant design, it is possible to cancel NBI from signals that are changing their frequency location with significantly reduced complexity overhead. The operational complexity of the canceller can be lowered by using the theory of optimal rank reduction and using the time-bandwidth product of the NB signal. Analytical performance evaluations, as well as Monte Carlo simulations, show that, without perfect a priori information, this canceller can suppress the spectral leakage from a strong NB signal (e.g., with equal power as the OFDM signal) to well below the background noise floor for typical applications where it causes negligible signal-to-noise ratio and symbol error rate degradation.
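The core of such a canceller is the classical linear MMSE estimator; a generic sketch follows (in the paper, the covariances would be derived from the NB signal's PSD model and the measured subcarriers, which this toy setup abstracts away):

```python
import numpy as np

def lmmse_estimate(C_xy, C_yy, y):
    # Linear MMSE estimate of a zero-mean vector x from a zero-mean
    # measurement y: x_hat = C_xy @ inv(C_yy) @ y.
    return C_xy @ np.linalg.solve(C_yy, y)

# Toy use: estimate 2 unobserved quantities from 3 measured ones, given a
# joint covariance (here built from a random factor so it is valid/PSD).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
C = A @ A.T + 0.1 * np.eye(5)        # joint covariance of [x (2); y (3)]
C_xy, C_yy = C[:2, 2:], C[2:, 2:]
x_hat = lmmse_estimate(C_xy, C_yy, rng.standard_normal(3))
```

In the canceller, x would be the spectral leakage on the data subcarriers and y the NBI measured on the few dedicated subcarriers.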
The scale-invariant third-order moment, skewness, is analysed as an objective function for an adaptive gradient ascent algorithm. The purpose is to achieve a spectrum at the filter output that enables identification of possible bearing defect signatures, which are impulsive and periodic. Harmonically related sinusoids are used to represent such signatures and to build a signal model allowing characterization of the objective surface of skewness, providing insight into its convergence behaviour. The results are supported with an experiment from an industrial setting. Robustness of the proposed algorithm is demonstrated by examining the frequency spectrum resulting from the signal model.
Blind adaptation with an appropriate objective function results in enhancement of the signal of interest. Skewness is chosen as a measure of impulsiveness for blind adaptation to enhance impacting sources arising from defective rolling bearings. Such impacting sources can be modelled with harmonically related sinusoids, which leads to the discovery of harmonic content with unknown fundamental frequency by skewness maximization. Interfering components that do not possess this harmonic relation are simultaneously suppressed with the proposed method. An experimental example on rolling bearing fault detection is given to illustrate the ability of skewness maximization to uncover harmonic content.
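A batch-gradient sketch of skewness maximization over FIR filter taps may clarify the adaptation (filter length, step size, pass count, and the norm constraint are our illustrative choices; the papers' online/recursive details are omitted):

```python
import numpy as np

def skewness(y):
    # Scale-invariant third moment: E[y^3] / E[y^2]^(3/2)
    return np.mean(y**3) / np.mean(y**2)**1.5

def skew_max_filter(x, n_taps=32, mu=0.05, passes=10):
    # FIR taps adapted by batch gradient ascent on output skewness; the
    # unit-norm constraint guards against unbounded filter-norm growth.
    w = np.zeros(n_taps)
    w[0] = 1.0
    Y = np.lib.stride_tricks.sliding_window_view(x, n_taps)[:, ::-1]
    n = Y.shape[0]
    for _ in range(passes):
        y = Y @ w
        m2, m3 = np.mean(y**2), np.mean(y**3)
        # gradient of m3 / m2^(3/2) with respect to the taps
        g = 3.0 * (Y.T @ y**2) / (n * m2**1.5) \
            - 3.0 * m3 / m2**2.5 * (Y.T @ y) / n
        w += mu * g
        w /= np.linalg.norm(w)
    return w
```

Maximizing skewness favors outputs with a sparse, one-sided amplitude distribution, which is what repetitive bearing impacts look like after deconvolution.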
In pulse-position modulation (PPM) signaling, the time location of short-duration pulses is used to convey information over a communication channel. For successful noncoherent reception, the channel impulse response duration must be short compared to the symbol interval. This paper analyzes the use of third moments in a blind adaptive equalizer setting to limit the effective delay spread of the channel. Results detail the global convergence properties of the proposed method, showing that the parameters approach ISI-free settings under general conditions.
A novel blind equalization strategy for pulse position modulation (PPM), based on maximizing the third-order moment of the equalizer output, is presented. Compared to traditional fourth-order (e.g., kurtosis-based) methods, third-order moments give faster convergence and are less sensitive to noise. This work demonstrates that the intersymbol interference that plagues typical indoor ultra-wideband (UWB) channels can be combated using a third-moment-maximizing blind equalizer, which could therefore provide a cold start-up for a decision-directed scheme. Adaptation is shown to be asymptotically globally convergent with increasing time-hopped PPM frame length. Simulation experiments compare the performance to standard fourth-order schemes in practical settings, taking into account the conditions of time-hopping, multiple-access interference, and a realistic UWB channel model.
Third-order central moments have been shown to be well suited as objective functions for blind deconvolution of impulsive signals. Online implementations of such algorithms may suffer from increasing filter norm, forcing adaptation under constrained filter norm. This paper extends a previously known efficient algorithm with self-stabilizing properties to the case of using a third-order moment objective function. New results herein use averaging analysis to determine adaptation stepsize conditions for asymptotic stability of the filter norm.
A new signal pre-processing algorithm for condition monitoring of rolling bearings is presented. By enhancing the statistical asymmetry of a measured vibration signal, it is shown to enhance weak impacts from outer- and inner-race defects to aid feature extraction and fault detection. Unlike many popular methods such as the high-frequency resonance technique (e.g., envelope analysis), the proposed algorithm is based solely on linear time-domain processing of lowpass vibration signals and hence does not rely on non-linear processing or on the potentially difficult task of selecting an appropriate frequency band for analysis. Consequently, low computational complexity as well as ease of use and implementation can be obtained. A key feature of the method is the enhancement of fault impulses in noisy and distorted measurement data series. This is accomplished through a novel type of adaptive filtering that selectively enhances transient impulses while simultaneously suppressing noise and disturbances. Numerical results are presented demonstrating the new algorithm applied to accelerometer data from industrial environments.
In blind deconvolution problems, a deconvolution filter is often determined in an iterative manner, where the filter taps are adjusted to maximize some objective function of the filter output signal. The kurtosis of the filter output is a popular choice of objective function. In this paper, we investigate some advantages of using skewness, instead of kurtosis, in situations where the source signal is impulsive, i.e. has a sparse and asymmetric distribution. The comparison is based on the error surface characteristics of skewness and kurtosis.
Traditional methods for online adaptive blind deconvolution using higher-order statistics are often based on even-order moments, because the systems considered commonly feature symmetric source signals (i.e., signals having a symmetric probability density function). However, asymmetric source signals facilitate blind deconvolution based on odd-order moments. In this letter, we show that third-order moments give the benefits of faster algorithm convergence and increased robustness to additive Gaussian noise. The convergence rates for two algorithms based on third- and fourth-order moments, respectively, are compared for a simulated ultra-wideband communication channel.
The performance of nonlinear decision feedback equalizer implementations based on the use of a random access memory (RAM) look-up table is considered. RAM look-up tables admit the modeling, without approximation, of the nonlinear intersymbol interference of nonlinear channels with finite memory. Three equalizers are considered: the RAM-DFE, the RAM-canceler, and the recently developed pre-cursor enhanced RAM-DFE canceler (PERC). In contrast to the classical “feedforward” Volterra equalizers, these equalizers have the advantage that the nonlinear processing element is out of the path of the noise, thus avoiding noise amplification. Results are presented for 16-QAM which compare and demonstrate the effectiveness of these equalizers, particularly the PERC, on the nonlinear satellite channel.
We present the performance of OFDM systems with coding, spreading, and clipping. Low-density parity-check (LDPC) codes give coding gains, and spreading by the Walsh-Hadamard transform gives gains in terms of increased frequency diversity as well as reduced peak-to-average power ratio (PAPR) of the transmitted OFDM signal. By evaluating both the IFFT (OFDM) and the Walsh-Hadamard transform in a single step, the number of operations needed for the spread OFDM system is actually less than for the conventional OFDM system. Reducing the PAPR is important in systems with clipping since it is related to the probability of clips. Each clip introduces clipping noise to the system, which reduces performance. Results for a clipped OFDM system with LDPC coding and spreading on an ETSI indoor wireless channel model are presented and compared with other systems. It is shown that there is a gain from spreading for LDPC-coded OFDM systems, especially for systems with clipping.
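As an illustration of the spreading step, here is a sketch of an orthonormal fast Walsh-Hadamard transform and a PAPR measure (the paper folds the WHT and IFFT into a single combined transform; for clarity they are applied separately here, and the QPSK toy setup is our own):

```python
import numpy as np

def fwht(a):
    # Orthonormal fast Walsh-Hadamard transform (length must be a power of two)
    a = np.array(a, dtype=complex)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a / np.sqrt(len(a))

def papr_db(x):
    # Peak-to-average power ratio of a discrete-time signal, in dB
    p = np.abs(x)**2
    return 10.0 * np.log10(p.max() / p.mean())

# Toy comparison: 64 QPSK subcarriers with and without WHT spreading
rng = np.random.default_rng(0)
sym = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), 64)
plain = papr_db(np.fft.ifft(sym))           # conventional OFDM symbol
spread = papr_db(np.fft.ifft(fwht(sym)))    # WHT-spread OFDM symbol
```

Since the normalized transform is its own inverse, the receiver simply applies `fwht` again after the FFT to de-spread.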
This paper considers the performance penalty of a blind, compared to a non-blind, separation technique of a MIMO-FIR channel. In the blind method the mixing filters are first identified, while they are assumed to be known in the non-blind case. The blind system identification is performed using a recently proposed method based on cumulant subspace decomposition. Separation is then achieved by the FIR part of the mixing system inverse, which minimizes the cross-channel power. The performance penalty due to blindness is investigated for the case when the channel order is underestimated. Results of average residual cross-channel power of the wireless COST207 channel model are included.
The high peak-to-average power ratio (PAR) in orthogonal frequency division multiplexing (OFDM) modulation systems can significantly reduce performance and power efficiency. A number of methods exist that combat large signal peaks in the transmitter. Recently, several methods have emerged that alleviate the clipping distortion in the receiver. We analyze the performance of two receiver clipping mitigation methods in an OFDM system with Cartesian clipping and low-density parity-check (LDPC) coding. Surprisingly, the cost of completely ignoring clipping in the receiver is minimal, even though we assume that the receiver has perfect knowledge of the channel. The results suggest that clipping mitigation strategies should be concentrated in the transmitter.
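For reference, the Cartesian clipping operation itself is just independent limiting of the I and Q rails; a minimal sketch (the threshold and sample values are illustrative):

```python
import numpy as np

def cartesian_clip(x, a):
    # Clip the real (I) and imaginary (Q) parts independently to [-a, a]
    return np.clip(x.real, -a, a) + 1j * np.clip(x.imag, -a, a)

# The clipping noise added to the signal is simply the difference:
x = np.array([3.0 + 4.0j, 0.5 - 0.2j])
d = cartesian_clip(x, 1.0) - x   # nonzero only where a rail exceeds the threshold
```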
This paper investigates the possible system gain for a multiresolution broadcast system using multilayer transmission of multiresolution data with nonuniform layer transmission energies. It shows how to find the energy distribution that maximizes system performance, measured as a sum of weighted layer value × population products (representing possible revenue for service providers). Through the introduction of the relative population coverage function P(a_i), it is shown for an N-layer system that in many cases when P(a_i) is a concave function (equivalent to -P(a_i) being convex), it is possible to reduce what seems to be an N-dimensional problem to N line searches. The paper also shows how the relative population coverage function can be constructed in two ways. The first uses analytic models for signal strength and population coverage (uniform and Rayleigh). The second uses numerical signal strength and population estimates in grid format. The paper also includes examples to illustrate how the method works and the performance gain it provides. One of the examples uses actual grid estimates for an example transmitter located in Lulea, Sweden.
We investigate physical layer opportunities resulting from multimedia streams which use progressive coding. With such progressively encoded video frames, the location of the first bit error is more important than the overall bit error probability. This makes the expected error-free length a reasonable metric for performance optimization. One physical layer opportunity is the adjustment of bit energy within data frames. For wireless links, nonuniform bit-energy distributions result in a flexible solution that may be used in conjunction with channel coding. We identify system gains to be achieved with nonuniform bit-energy distribution optimizations and also show that a simplified frame truncation technique may be used without significant loss of quality.
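A sketch of the expected error-free length metric, under an independent-bit-error model with BPSK signaling over AWGN (the per-bit energy vectors and noise level below are illustrative, not the paper's parameters):

```python
import numpy as np
from math import erfc, sqrt

def bpsk_ber(ebn0):
    # BPSK bit-error probability over AWGN: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))
    return 0.5 * erfc(sqrt(ebn0))

def expected_error_free_length(energies, n0=1.0):
    # E[number of leading error-free bits], with independent bit errors:
    # E[L] = sum over k of P(first k bits are all received correctly)
    p = np.array([bpsk_ber(e / n0) for e in energies])
    return np.cumprod(1.0 - p).sum()

# Same total energy, two distributions over a 100-bit frame
uniform = np.full(100, 2.0)
front_loaded = np.linspace(3.5, 0.5, 100)   # more energy on early (important) bits
```

Concentrating energy on early bits raises the probability that a progressive decoder sees a long clean prefix, which is exactly what this metric rewards.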
Optimization of an N-layer digital multiresolution broadcasting system with respect to some form of quality × population product objective function appears to be an N-dimensional problem. However, with a generalized broadcasting model using N orthogonal channels with BPSK signaling, under the assumption of a uniform population distribution, the problem can be narrowed down to three cases. This paper identifies each of these three cases and explains their meaning for system performance optimization.
For progressive frame-based video data it is possible to use more bit energy for important bits of the frame, to increase the probability that these bits are transferred correctly over a wireless link. With a progressive bit stream, the usual bit error probability is less important than the position of the first error within the frame. Here we propose the use of two alternative metrics to dynamically fine-tune the performance of a link with a low-data-rate feedback channel by changing the bit-energy distribution in the transmitted signal, thereby aiding unequal error protection.
Real-time multimedia transfer through the Internet becomes more difficult when wireless links are in the path, due to time-varying channel capacity arising from interference and multipath fading effects, which introduce additional stochastic variations beyond wireline network traffic effects. These wireless variations create problems for existing end-to-end rate adaptation using feedback. This paper introduces a framework for cross-layer solutions to the streaming video problem, with a focus on graceful degradation under network congestion and/or wireless fading effects without direct coordination between the source coder, channel coder, and physical layer modulator. The solution presented includes network transport and wireless-based optimizations and requires little reliance upon end-to-end rate adaptation. The suggested method uses progressively coded, leaky-prediction source data and physical-layer-based rate adaptation in concert with error-tolerant network protocols.