Advanced DSP

Deconvolution

Lecture 2
Conducted by: Udayan Kanade

Suppose a signal x went through an MA filter f and we observed the output y=x*f. We know y and f, but we do not know x. We can find the AR inverse of f (call it f⁻¹) and recover x=y*f⁻¹. There are problems with this method. Suppose y contains noise (due to sensor noise, computational errors, and environmental conditions such as backscatter), so that y=x*f+n. Passing y through f⁻¹ now gives x'=y*f⁻¹=x*f*f⁻¹+n*f⁻¹=x+n*f⁻¹. Since f⁻¹ is an AR filter, it is likely to be unstable, producing an oscillatory inverse-noise term n*f⁻¹ that eclipses the deconvolution x.
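As a quick illustration of this blow-up (a minimal NumPy/SciPy sketch; the filter f, signal length, and noise level are made-up values, not from the lecture), note that convolving with f is just FIR filtering with b=f, a=[1], and the AR inverse swaps the roles to b=[1], a=f. If f has a zero outside the unit circle, the inverse filter is unstable and the n*f⁻¹ term swamps x:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

x = rng.standard_normal(200)             # the unknown input (made up for this demo)
f = np.array([1.0, -1.5])                # MA filter with a zero at 1.5, outside the unit circle

y = lfilter(f, [1.0], x)                 # y = x*f  (MA filtering)
y += 1e-6 * rng.standard_normal(y.size)  # tiny additive noise n

x_prime = lfilter([1.0], f, y)           # x' = y*f⁻¹  (the AR inverse filter)

# The AR inverse is unstable, so the tiny noise term grows roughly like 1.5**n
# and ends up eclipsing the true signal.
print("max |x|  =", np.max(np.abs(x)))
print("max |x'| =", np.max(np.abs(x_prime)))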

The AR deconvolver goes “berserk” because it finds further and further “false” explanations to hide its past mistakes, all the while convinced that it is doing a perfect job, since it leaves no residual error. What we need is a way to specify that beyond a certain point, the input signal x is known to be zero. This makes it impossible for any algorithm to find a perfect explanation, forcing a best-fit explanation instead.

When we thus constrain the system, we are left with finitely many inputs x to play with, which in general cannot match the observed output y perfectly, but only in the least squares sense. If the matrix equivalent of the convolution “*f” is taken to be F, then we want to find x such that Fx=y. If this is not achievable, we want to minimize the error ||Fx-y||. This is a least squares matrix inversion problem, which can be solved using the methodology of lecture 1.
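A minimal sketch of this constrained setup (NumPy only; the filter and lengths are illustrative assumptions): build the tall convolution matrix F whose columns are shifted copies of f, restrict x to a finite support, and minimize ||Fx-y|| with np.linalg.lstsq:

import numpy as np

rng = np.random.default_rng(0)

N = 200                                   # x is known to be zero outside these N samples
f = np.array([1.0, -1.5])                 # the same non-minimum-phase MA filter
x = rng.standard_normal(N)

# F is the (N + len(f) - 1) x N convolution matrix: F @ x == np.convolve(x, f)
F = np.zeros((N + len(f) - 1, N))
for j in range(N):
    F[j:j + len(f), j] = f

y = F @ x + 1e-6 * rng.standard_normal(F.shape[0])    # observed output y = x*f + n

# Least squares deconvolution: minimize ||Fx - y|| over the finitely many unknowns
x_hat, *_ = np.linalg.lstsq(F, y, rcond=None)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

With this particular f the system F is well conditioned, so the small noise stays small in the estimate, in contrast to the exploding AR inverse above.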



Links:
Last year's lecture: Least Squares Deconvolution



Relations:

LS deconvolution is an application of LS matrix inversion. The LS deconvolution methodology is used in system identification and autocorrelation estimation algorithms. A fast algorithm for LS deconvolution is Levinson-Durbin.
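As a sketch of the fast route mentioned above (the function name and test values are mine, not from the lecture): the normal equations of LS deconvolution are Toeplitz, being built from autocorrelations. The classical Levinson-Durbin recursion solves the Yule-Walker special case, where the right-hand side is the shifted autocorrelation, in O(N²); the general Levinson recursion extends the same idea to arbitrary right-hand sides.

import numpy as np
from scipy.linalg import toeplitz

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations  toeplitz(r[:order]) @ a = -r[1:order+1]
    by the Levinson-Durbin recursion, returning (a, final prediction error)."""
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        # reflection coefficient from the current prediction error
        acc = r[i + 1] + np.dot(a[:i], r[i:0:-1])
        k = -acc / err
        # order-update of the coefficients
        a[:i] = a[:i] + k * a[:i][::-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# sanity check against a direct Toeplitz solve (test values are made up)
r = np.array([2.0, 1.2, 0.6, 0.2])
a, err = levinson_durbin(r, 3)
print(a)
print(np.linalg.solve(toeplitz(r[:3]), -r[1:4]))   # should match a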