Advanced DSP

Autocorrelation Estimation

Lecture 9
Conducted by: Udayan Kanade

To apply the Wiener filtering methodology, we must be able to estimate the autocorrelation function of a stationary random process. We will assume that we have a “partially opened” stationary random process, i.e. that we observe a finite stretch of one realization of the process.

The first methodology, called the unbiased estimate, simply averages the available sample products at a particular lag to get an estimate of the autocorrelation function at that lag. This, though, has the problem that at large lags there are very few sample products to average over, so the estimates there are unreliable. The “biased estimate” does the same thing as above, but while calculating the average, instead of dividing by the number of sample products at that lag, divides by the total number of samples in the opened signal! This weights larger lags towards zero – if you don't know what to believe in, believe in orthogonality.
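
A minimal numpy sketch of both estimators (the function name and the zero-mean assumption are mine):

    import numpy as np

    def autocorr_estimates(x, max_lag):
        # assumes x is a zero-mean record of N samples of the opened process
        N = len(x)
        lags = range(max_lag + 1)
        unbiased = np.array([np.dot(x[:N - k], x[k:]) / (N - k) for k in lags])
        biased   = np.array([np.dot(x[:N - k], x[k:]) / N       for k in lags])
        return unbiased, biased

Dividing by N instead of N - k shrinks the high-lag estimates towards zero, which is exactly the weighting described above.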

If we want an MA Wiener filter, only the first few coefficients of the autocorrelation are necessary, and the above methodologies estimate these well. Suppose we use the first few autocorrelation coefficients to find the best Wiener prediction filter; the inverse of the prediction error filter will then be the “generation filter” from the innovations process, whose autocorrelation will be the autocorrelation function estimate for our process. This method is the Yule-Walker method. It works well if we can assume that the Wiener prediction filter of the order we used actually whitens the process, i.e. that the process is an AR process of the order of the Wiener filter.
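
A sketch of the Yule-Walker method using scipy (r is a vector of estimated autocorrelations; the function name is mine):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def yule_walker_acf(r, p, n_lags):
        # solve the order-p normal (Yule-Walker) equations for the predictor
        a = solve_toeplitz(r[:p], r[1:p + 1])
        # the AR model implies r[k] = sum_i a[i] r[k-i] for k > p, which
        # extends the autocorrelation estimate to arbitrarily large lags
        r_ext = list(r[:p + 1])
        for k in range(p + 1, n_lags + 1):
            r_ext.append(np.dot(a, r_ext[-1:-p - 1:-1]))
        return np.array(r_ext)

The prediction error filter is [1, -a[0], ..., -a[p-1]], and its inverse is the generation filter mentioned above.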

If the process is better modeled as an ARMA process, we use one of the ARMA system identification methods to get the ARMA coefficients. We can then find the autocorrelation function as the autocorrelation of the (infinite) MA expansion of the ARMA filter, i.e. of its impulse response.
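
A sketch under the assumption that the identified ARMA generation filter B(z)/A(z) is stable, so its impulse response can be truncated (names are mine):

    import numpy as np
    from scipy.signal import lfilter

    def arma_acf(b, a, n_lags, n_impulse=4096):
        # MA expansion: the (truncated) impulse response of B(z)/A(z)
        h = lfilter(b, a, np.r_[1.0, np.zeros(n_impulse - 1)])
        # autocorrelation of the filter: correlate h with itself
        return np.array([np.dot(h[:n_impulse - k], h[k:])
                         for k in range(n_lags + 1)])

This is the autocorrelation of the process when the filter is driven by unit-variance white innovations; scale by the innovation variance otherwise.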

In fact, if we have an ARMA model of a process X from which a process Y is to be predicted, we can find an ARMA Wiener filter which will work better than an MA Wiener filter. We first filter X using the orthogonalizing (reverse) ARMA filter to get Q, the innovations process. Since Q is orthogonal, projections on it decouple coefficient by coefficient, so the filter from Q to Y is just the crosscorrelation function of Q and Y (divided by the innovation variance). This can be obtained by convolving the crosscorrelation function of X and Y with the reverse filter which got us from X to Q. The cascade of the X→Q and Q→Y' systems is the ARMA Wiener filter.
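
A sketch, assuming the ARMA model of X is given as b, a with Q the white innovations (all names and the estimation details are mine):

    import numpy as np
    from scipy.signal import lfilter

    def arma_wiener(x, y, b, a, n_taps):
        # orthogonalize: the reverse filter A(z)/B(z) takes X to the innovations Q
        q = lfilter(a, b, x)
        var_q = np.dot(q, q) / len(q)
        # filter from Q to Y: crosscorrelation of Q and Y over the innovation variance
        g = np.array([np.dot(q[:len(q) - k], y[k:])
                      for k in range(n_taps)]) / (len(q) * var_q)
        # cascade X -> Q -> Y': together, the ARMA Wiener filter
        return lfilter(g, [1.0], q)

Note that this estimates the Q→Y' filter directly from the data rather than by convolving the X/Y crosscorrelation with the reverse filter; both routes estimate the same coefficients.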

A more direct method is to find the best prediction filter for the opened process in the least squares sense – this can be done using part 1 of the Levinson-Durbin recursion. These coefficients are the estimates of the AR generation filter of the process, which can be used directly in Wiener filtering, or can be used to find the autocorrelation function. This is the basic idea behind the Burg method, which estimates the AR process coefficients as the optimal coefficients of the successive stages of the lattice predictor.
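
A compact sketch of the Burg recursion (the function name is mine):

    import numpy as np

    def burg(x, order):
        x = np.asarray(x, dtype=float)
        f = x.copy()            # forward prediction errors
        b = x.copy()            # backward prediction errors
        a = np.array([1.0])     # prediction error filter A(z)
        for m in range(1, order + 1):
            fo, bo = f[m:], b[m - 1:-1]
            # reflection coefficient minimizing forward + backward error power
            k = -2.0 * np.dot(fo, bo) / (np.dot(fo, fo) + np.dot(bo, bo))
            f[m:], b[m:] = fo + k * bo, bo + k * fo
            # Levinson step: grow A(z) by one lattice stage
            a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        return a                # [1, a1, ..., ap]; the generation filter is 1/A(z)

Each pass through the loop is one stage of the lattice predictor; its reflection coefficient is chosen optimally given the stages already fixed.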



Links:
Last year's lecture: Spectrum Estimation


Relations:

Autocorrelation Estimation is used to get estimates of the autocorrelation function, or of the predictor coefficients directly, for use in Wiener Filtering. The estimation algorithms are implemented using the Levinson-Durbin recursion, since they rest on the LS Deconvolution and LS System Identification methodologies. The ARMA Wiener filter is designed using the idea of Successive Orthogonalization.