Advanced DSP
Least Squares Matrix Inversion
Lecture 1
Conducted by: Udayan Kanade

Consider a 1x1 matrix: y = ax. Then, given a particular y, the x that will achieve it is x = y/a. Division is the opposite of multiplication.

Consider a 2x2 matrix: y = Ax. Then, given a particular y, the x that will achieve it is x = A^{-1}y. Here, A^{-1} is a matrix each of whose rows is orthogonal to all but one column of A.

Consider a 3x2 matrix A: it takes in two inputs and gives three outputs, y = Ax. It is not necessary that for each y there be an x that satisfies the equation. In this case, we try to find an x that minimizes the norm (energy/length/sum of squares) of the error vector, ||Ax - y||. The minimum such error vector is orthogonal to the subspace spanned by the columns of A, i.e. orthogonal to each column of A, i.e. A^T(Ax - y) = 0. Thus A^T A x = A^T y: the actuated solution Ax and the required solution y look the same to all the columns of A. Assuming linear independence of the columns of A, the matrix A^T A is invertible, and we get x = (A^T A)^{-1} A^T y = A^† y, where A^† = (A^T A)^{-1} A^T is called the pseudoinverse of the matrix A.

This yields a whole least-squares-inversion procedure: take y and project it onto the columns of A. Then pass the projections through the matrix which converts projections to linear combiners. That matrix is (A^T A)^{-1}, since A^T A is the matrix that converts linear combiners to projections. (Numerical sketches of both steps follow the Relations below.)

Links:
Preparatory: Linear Systems, Dot Product
Last year's lecture: Least Squares Matrix Inversion
Exercises: Homework 1

Relations:
LS matrix inversion is used in LS deconvolution, system identification and linear estimation of random variables.
LS matrix inversion is achieved in practice using orthogonalization methods, specifically the Levinson-Durbin algorithm for Toeplitz matrices.
LS matrix inversion is a generalization of the dot product.
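To make the 3x2 case concrete, here is a minimal NumPy sketch; the matrix A and target y are made-up illustrative values, not from the lecture. It solves the normal equations A^T A x = A^T y, forms the pseudoinverse explicitly, and checks that the residual Ax - y is orthogonal to every column of A.

```python
import numpy as np

# A made-up 3x2 system: two inputs, three outputs. The columns are
# linearly independent, so A^T A is invertible.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
y = np.array([1.0, 2.0, 1.0])

# Normal equations: A^T A x = A^T y.
x = np.linalg.solve(A.T @ A, A.T @ y)

# Equivalently, via the explicit pseudoinverse A^† = (A^T A)^{-1} A^T.
A_pinv = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(x, A_pinv @ y)

# The minimum error vector is orthogonal to each column of A:
# A^T (Ax - y) = 0.
residual = A @ x - y
print(A.T @ residual)  # ~ [0, 0]

# Sanity check against NumPy's built-in least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x, x_ref)
```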
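The two-step projection reading of the procedure can be spelled out the same way (repeating the illustrative A and y so the snippet stands alone): first project y onto the columns of A, then convert the projections to linear combiners with (A^T A)^{-1}. The tail of the sketch also checks that for a square invertible matrix the pseudoinverse collapses to the ordinary inverse.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
y = np.array([1.0, 2.0, 1.0])

# Step 1: project y onto each column of A (all the dot products at once).
projections = A.T @ y

# Step 2: (A^T A)^{-1} converts projections into linear combiners,
# because A^T A is exactly the map from combiners to projections:
# if x are the combiners, the projections of Ax are A^T (A x).
combiners = np.linalg.inv(A.T @ A) @ projections
print(combiners)  # the least-squares x

# For square invertible B, the formula reduces to the ordinary inverse:
# (B^T B)^{-1} B^T = B^{-1} B^{-T} B^T = B^{-1}.
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B_pinv = np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(B_pinv, np.linalg.inv(B))
```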