Advanced DSP
Orthogonalization
Lecture 4
Conducted by: Udayan Kanade

Suppose we want to solve an overdetermined system of linear equations Ax = y in the least squares sense. For the optimal x, the optimal approximation r = Ax is the projection of y onto the subspace spanned by the columns of A (called the column space of A). If we had an orthogonal basis for this space, our projection would be easy to find. Finding such a basis is called orthogonalization. In this case, we are going to do successive orthogonalization using the Gram-Schmidt algorithm, giving the QR factorization. We start with the first column vector a1 and go on adding vectors, finding an orthonormal basis for each such "partial" subspace. If we already have {q1, q2, ..., qi} spanning the same subspace as {a1, a2, ..., ai}, inclusion of the vector ai+1 gives the new direction ai+1 − ⟨ai+1,q1⟩q1 − ⟨ai+1,q2⟩q2 − ... − ⟨ai+1,qi⟩qi. Normalizing this vector gives us qi+1.
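As a concrete illustration, here is a minimal NumPy sketch of the successive orthogonalization just described (classical Gram-Schmidt), producing the factors Q and R discussed next. The function name gram_schmidt_qr is an assumed one, and the sketch assumes the columns of A are linearly independent so that no diagonal entry of R vanishes.

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR factorization of a tall matrix A by successive (classical)
    Gram-Schmidt orthogonalization of its columns."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for i in range(n):
        v = A[:, i].copy()
        for j in range(i):
            R[j, i] = Q[:, j] @ A[:, i]   # projection coefficient <a_i, q_j>
            v -= R[j, i] * Q[:, j]        # remove the component along q_j
        R[i, i] = np.linalg.norm(v)       # length of the remaining new direction
        Q[:, i] = v / R[i, i]             # normalize to get q_i
    return Q, R

# quick check: Q has orthonormal columns and Q R reproduces A
A = np.array([[1., 1.], [1., 2.], [1., 3.]])
Q, R = gram_schmidt_qr(A)
assert np.allclose(Q @ R, A) and np.allclose(Q.T @ Q, np.eye(2))
```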
The above procedure gives us Q, a matrix with orthonormal columns whose column space equals the column space of A, and R, a matrix of relations (the projection coefficients) such that A = QR. Now Q, having orthonormal columns, is pseudoinverted just by transposing it. R is upper triangular (because of the successive orthogonalization), so R can be inverted very easily (by back substitution). Thus, the least squares problem above is solved by

x = R⁻¹Qᵀy.
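A hedged sketch of that solution path, using NumPy's built-in np.linalg.qr in place of the hand-rolled routine above and SciPy's solve_triangular for the back-substitution step; solve_least_squares is an assumed name.

```python
import numpy as np
from scipy.linalg import solve_triangular

def solve_least_squares(A, y):
    """Least-squares solution of A x ~ y via x = R^-1 Q^T y."""
    Q, R = np.linalg.qr(A)                            # reduced QR: A = Q R
    return solve_triangular(R, Q.T @ y, lower=False)  # back substitution with R

# 3 equations, 2 unknowns (overdetermined)
A = np.array([[1., 1.], [1., 2.], [1., 3.]])
y = np.array([1., 2., 2.])
x = solve_least_squares(A, y)
assert np.allclose(A.T @ (A @ x - y), 0)  # residual is orthogonal to col(A)
```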
Orthogonalization is a general methodology for solving least-squares matrix inversion problems. The Levinson-Durbin algorithm is also based on orthogonalization. When the dot products among the original vectors are known, and all other dot products have to be inferred from these, a variant of the Gram-Schmidt procedure is used (sketched below). This occurs in random variable linear estimation problems. The ARMA Wiener filter is also derived using the orthogonalization methodology.
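One way to picture the dot-products-only variant: given only the Gram matrix G of the vectors (G[i, j] = ⟨ai, aj⟩), the Gram-Schmidt coefficients can be computed without ever forming the vectors themselves; the sketch below does this via a Cholesky factorization, which is algebraically equivalent. This is only an illustration of the inner-product-only idea, not of Levinson-Durbin itself (which additionally exploits Toeplitz structure); orthonormal_coefficients is an assumed name, and G is assumed positive definite.

```python
import numpy as np

def orthonormal_coefficients(G):
    """Gram-Schmidt when only the dot products G[i, j] = <a_i, a_j> are known.

    Returns a lower-triangular matrix C whose k-th row expresses q_k as a
    linear combination of a_1..a_k.  By construction C @ G @ C.T = I, so the
    q's are orthonormal; this amounts to inverting the Cholesky factor of G.
    """
    L = np.linalg.cholesky(G)   # G = L L^T with L lower triangular
    return np.linalg.inv(L)     # coefficients of the q's in terms of the a's

# sanity check on a randomly generated Gram matrix
A = np.random.randn(6, 3)
G = A.T @ A                     # positive definite for generic A
C = orthonormal_coefficients(G)
assert np.allclose(C @ G @ C.T, np.eye(3))
```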