Advanced DSP

Wiener Filter

Lecture 8
Conducted by: Udayan Kanade

If we pass an orthogonal process through a filter, we get a stationary random process whose autocorrelation function is just the autocorrelation of the filter (scaled by the variance of the orthogonal process). Furthermore, as we saw in the last lecture, every stationary process can be obtained this way. This is one of the main reasons for the importance of stationary processes. An independent process (or a chaotic one – in which case sensitivity to initial conditions makes it unpredictable) passed through an LTI channel gives a stationary random process. The other important stationary random processes are periodic signals with a known period – which can be thought of as an innovations process being disregarded by a filter whose steady-state response is a periodic signal of the given period.
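
A minimal numerical check of this fact (a sketch assuming NumPy; the three-tap filter h and the sample count are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    h = np.array([1.0, 0.5, 0.25])        # an arbitrary FIR filter
    w = rng.standard_normal(200_000)      # orthogonal (white) process, variance 1

    x = np.convolve(w, h)                 # pass the white process through the filter

    # Empirical autocorrelation of the output at lags 0..3
    n = x.size
    r_emp = [float(np.mean(x[k:] * x[:n - k])) for k in range(4)]

    # Theory: the autocorrelation of the filter itself (input variance is 1)
    r_h = [float(np.sum(h[:h.size - k] * h[k:])) for k in range(4)]

    print(np.round(r_emp, 2))             # ≈ [1.31, 0.63, 0.25, 0.]
    print(np.round(r_h, 4))               # [1.3125, 0.625, 0.25, 0.]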

If we pass a stationary random process through a filter, we get another stationary random process. The autocorrelation function of the new process is the autocorrelation function of the old process convolved with the autocorrelation of the filter. (The Fourier Transform of the autocorrelation function is called the power spectrum. The output power spectrum is the input power spectrum multiplied by the squared-magnitude spectrum of the filter.)
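
The power-spectrum form of this statement can also be checked numerically (a sketch assuming SciPy; welch estimates the power spectrum and freqz evaluates the filter's frequency response, with the same illustrative filter as above):

    import numpy as np
    from scipy.signal import welch, freqz

    rng = np.random.default_rng(1)
    h = np.array([1.0, 0.5, 0.25])
    x = rng.standard_normal(500_000)      # stationary input (white, so flat spectrum)
    y = np.convolve(x, h)                 # filtered output

    f, Sx = welch(x, nperseg=1024)        # input power spectrum estimate
    _, Sy = welch(y, nperseg=1024)        # output power spectrum estimate
    _, H = freqz(h, worN=f, fs=1.0)       # filter response at the same frequencies

    # Output spectrum ≈ input spectrum × squared-magnitude response of the filter
    ratio = Sy[1:] / (Sx[1:] * np.abs(H[1:]) ** 2)   # skip the detrended DC bin
    print(ratio.min(), ratio.max())       # both close to 1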

If we have processes X and Y which are jointly stationary – stationary processes where even the covariance between a sample of X and a sample of Y depends only on the time distance between them – we can open X one by one and predict Y from it. Say we want to linear-predict Y0 from X0, X-1, X-2 and X-3. Since we know all the covariances involved, and since stationarity makes the covariance matrix of the Xs Toeplitz, we can use the Levinson-Durbin algorithm to find the set of coefficients to multiply the Xs by to best estimate Y0. These same coefficients, because of joint stationarity, are the coefficients which predict Y1 from X1, X0, X-1 and X-2, and so forth for each successive Y. Thus, as we are opening X, we are applying a filter to it to get the Ys. This filter is called the Wiener Filter.
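
In code, the design step is one Toeplitz solve (a sketch in Python; scipy.linalg.solve_toeplitz runs a Levinson-type recursion, standing in for a hand-written Levinson-Durbin, and the function name wiener_coeffs and the covariance values below are hypothetical):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def wiener_coeffs(rxx, rxy):
        """Solve the Toeplitz normal equations R a = p for the filter taps,
        where R[j, k] = rxx[|j - k|] = E[X-k X-j] and p[j] = rxy[j] = E[Y0 X-j]."""
        return solve_toeplitz(rxx, rxy)

    # Predict Y0 from X0, X-1, X-2, X-3: supply lags 0..3 of each covariance.
    rxx = np.array([1.3, 0.9, 0.5, 0.2])  # E[XiXj] (hypothetical values)
    rxy = np.array([1.0, 0.8, 0.5, 0.2])  # E[XiYj] (hypothetical values)
    a = wiener_coeffs(rxx, rxy)

    # Joint stationarity means the same taps work at every time step:
    # estimating Y is just filtering X, Yhat_i = sum_k a[k] * X_{i-k}.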

Now let us see some useful cases where joint stationarity occurs. Suppose we have a stationary random process Y, but we can only sense it with added stationary noise N, giving X = Y + N. It is usually valid to assume the noise is independent of Y, and thus that Y and N are orthogonal. It is easy to see that X and Y are jointly stationary, with E[XiXj] = E[YiYj] + E[NiNj] and E[XiYj] = E[YiYj]. The Wiener methodology now gives us the best filter to apply to X to get an estimate of Y.
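
For the denoising case the two covariance sequences plug straight into the same Toeplitz solve (a sketch; the values of r_y and r_n are hypothetical, with N taken as white noise):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    r_y = np.array([1.0, 0.8, 0.5, 0.2])  # E[YiYj] at lags 0..3 (hypothetical)
    r_n = np.array([0.3, 0.0, 0.0, 0.0])  # E[NiNj]: white noise of variance 0.3

    # E[XiXj] = E[YiYj] + E[NiNj]  and  E[XiYj] = E[YiYj], so:
    a = solve_toeplitz(r_y + r_n, r_y)    # denoising Wiener filter taps

    # The estimate of Y is then the filtering  Yhat_i = sum_k a[k] * X_{i-k}.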

Suppose what we can sense is not Y+N but X2 = (Y+N)*f, i.e. the noisy signal seen through a known filter f. X2 and Y are also jointly stationary, and it is still possible to filter X2 to get the best estimate of Y.
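
To see why, write the covariances out (a quick sketch; R denotes a covariance sequence, rf the autocorrelation of f, and * convolution – notation introduced here for brevity): E[X2iX2j] = ((RY + RN) * rf)(i-j) and E[X2iYj] = (f * RY)(i-j). Both depend only on i-j, so X2 and Y are jointly stationary, and the same Toeplitz normal equations give a filter that denoises and deconvolves at once.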

Suppose X = Y*δ1, where δ1 is the unit delay. X and Y are obviously jointly stationary. The Wiener filter in this case is the best linear predictor for the process. (The limit case of this, when we design an infinite-coefficient Wiener filter, is the orthogonalizing filter, which predicts everything other than the innovations process.) Using the same methodology, predictors for samples further in the future can be designed. Prediction can be used for compression, by sending only the prediction error.
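
The prediction story can be checked on a process whose statistics are known exactly (a sketch; the AR(1) process and its autocovariance 0.8^|k| are an illustrative choice, not from the lecture):

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.signal import lfilter

    # Autocovariance (up to scale) of the AR(1) process y[i] = 0.8*y[i-1] + w[i]
    r_y = 0.8 ** np.arange(4)

    # One-step predictor of Y0 from Y-1, Y-2, Y-3. With X = Y*δ1 the right-hand
    # side entries are E[Y0 X-j] = RY(j+1), the autocovariance shifted one lag.
    a = solve_toeplitz(r_y[:3], r_y[1:4])           # gives [0.8, 0, 0] for AR(1)

    # Compression: transmit only the prediction error.
    rng = np.random.default_rng(2)
    y = lfilter([1.0], [1.0, -0.8], rng.standard_normal(10_000))
    y_hat = lfilter(np.concatenate(([0.0], a)), [1.0], y)   # delay by one, then filter
    residual = y - y_hat
    print(residual.var(), "vs", y.var())            # residual power is much smaller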



Links:
Last year's lecture: Wiener Filtering


Relations:

The Wiener Filter is the Least Squares best Linear Estimation of a Random Variable, designed using the Levinson-Durbin Algorithm and applied at every time step of a random process. The Wiener Filter is derived from the autocorrelation function, which has to be Estimated.