Eigenbasis and Diagonalization

Given that we know that a transformation can have up to n linearly independent Eigenvectors, where n is the number of rows (and columns) of the matrix, what happens if we use those Eigenvectors as a change of basis, multiplying the transformation by the matrix whose columns are the Eigenvectors?

As it turns out, converting the transformation to an Eigenbasis where possible (a conversion otherwise known as Eigendecomposition) is incredibly useful, because of what happens to the transformation under that change of basis.

Take for example the matrix

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$

This matrix scales by a factor of 2 along the y-axis and shears y by z with a factor of 1: a point $(x, y, z)$ is mapped to $(x, 2y + z, z)$.

This transformation has Eigenvalues $\lambda = 2$, with algebraic multiplicity 1, and $\lambda = 1$, with algebraic multiplicity 2.
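One way to see where these come from: because the matrix is upper triangular, the determinant of the matrix minus $\lambda I$ is just the product of the diagonal entries, so the characteristic polynomial factors immediately:

$$\det\begin{bmatrix} 1 - \lambda & 0 & 0 \\ 0 & 2 - \lambda & 1 \\ 0 & 0 & 1 - \lambda \end{bmatrix} = (1 - \lambda)^2(2 - \lambda) = 0$$

The root $\lambda = 1$ appears twice, which is exactly its algebraic multiplicity of 2.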

It also has Eigenvectors $\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, for $\lambda = 2$, and $\begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, for $\lambda = 1$, visualized below:

These Eigenvectors can be arranged, as columns, into a new matrix called an Eigenbasis:

$$\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$$

And the inverse of the Eigenbasis can be found too:

$$\begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix}$$

Consider what happens if we change the basis of our matrix by premultiplying by the inverse of the Eigenbasis, then postmultiplying by the Eigenbasis (a transformation also known as an Eigendecomposition):

$$\begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$
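As a quick numerical sanity check, here is a minimal sketch assuming NumPy; the names A and P match the symbols introduced further below for the original matrix and the Eigenbasis.

```python
import numpy as np

# The original transformation
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])

# The Eigenbasis: columns are the Eigenvectors (0, -1, 1), (1, 0, 0), (0, 1, 0)
P = np.array([[ 0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [ 1.0, 0.0, 0.0]])

# The inverse matches the matrix written out above
P_inv = np.linalg.inv(P)
print(np.round(P_inv))

# Changing the basis to the Eigenbasis diagonalizes the transformation:
# the result has the Eigenvalues 1, 1 and 2 on its diagonal
D = P_inv @ A @ P
print(np.round(D))
```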

Notice anything familiar? The result of changing the basis of a matrix to its Eigenbasis is that the matrix is put into a diagonalized form. This is extremely useful, because while the matrix is in a diagonalized form we can represent it with nothing more than the vector of its diagonal entries, scaling the corresponding rows of the identity matrix:

$$\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} I$$

Thus, if we want to multiply a vector by the matrix in its diagonalized form, the multiplication reduces to an elementwise product with that vector of diagonal entries. Computer Scientists will recognize this as a huge performance win, since an $O(N^2)$ operation just became $O(N)$. Say for example we wanted to calculate the 16th power of the matrix:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$

Conventionally, this would take 15 successive matrix multiplications, each costing on the order of $3^3 = 27$ scalar multiplications, several hundred operations in total. If we do the same thing on the diagonal, we can exploit the fact that the diagonal entries here are powers of two: raising each of the three entries to the 16th power is a single barrel-shift, so the exponentiation itself takes just three shift operations, plus the ordinary matrix multiplications needed to move into the Eigenbasis beforehand and undo the diagonalization afterwards.
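A sketch of that idea, again assuming NumPy: it exponentiates only the diagonal entries, undoes the diagonalization with two matrix multiplications, and checks the result against np.linalg.matrix_power.

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
P = np.array([[ 0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [ 1.0, 0.0, 0.0]])
P_inv = np.linalg.inv(P)

# Diagonal of P^-1 A P, stored as a plain vector of Eigenvalues
d = np.diag(P_inv @ A @ P)

# A^16 = P D^16 P^-1: only the three diagonal entries are exponentiated,
# then two ordinary matrix multiplications undo the diagonalization
A_16 = P @ np.diag(d ** 16) @ P_inv

# Sanity check against the naive approach
assert np.allclose(A_16, np.linalg.matrix_power(A, 16))
print(A_16)
```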

Given a set of eigenvectors and eigenvalues for a matrix, we can reconstruct the original matrix. Why is this the case? Notice that when we decomposed the matrix, we did the following:

$$AP = PD$$
$$P^{-1}AP = D$$

Where P is our matrix of eigenvectors, A is our original matrix that underwent eigendecomposition, and D is the resulting diagonal matrix.

Now, the defining property of an eigenvector is that multiplying the original matrix A by an eigenvector v is the same as multiplying that eigenvector by its eigenvalue λ, that is, $Av = \lambda v$. All the multiplication does in both cases is scale the vector.

Multiplying by a matrix that only has entries on its diagonal does the same kind of thing: the effect is a scaling, regardless of whether the multiplication was a premultiplication (scaling the rows) or a postmultiplication (scaling the columns). So it stands to reason that if you were to arrange the eigenvalues into a diagonal matrix E, with each diagonal entry corresponding to the eigenvector in the matching column of the matrix P, then the following, just like AP = PD above, holds true:

$$AP = PE$$
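To see why, expand the product one column at a time, writing $v_i$ for the $i$-th column of P and $\lambda_i$ for its eigenvalue: applying A to each eigenvector scales it, and postmultiplying P by E scales the columns of P by exactly the same factors:

$$AP = A\begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} Av_1 & Av_2 & Av_3 \end{bmatrix} = \begin{bmatrix} \lambda_1 v_1 & \lambda_2 v_2 & \lambda_3 v_3 \end{bmatrix} = PE$$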

Let's see if this checks out:

$$AP = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 2 \\ 1 & 0 & 0 \end{bmatrix}$$

$$PE = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 2 \\ 1 & 0 & 0 \end{bmatrix}$$

Same thing! So now we can do the same thing as before: postmultiply both sides by $P^{-1}$ and we should recover A, e.g.:

$$APP^{-1} = PEP^{-1}$$
$$A = PEP^{-1}$$

$$A = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$
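For completeness, the same reconstruction sketched with NumPy (a hedged illustration of the step above, not part of the original derivation):

```python
import numpy as np

# Eigenbasis P (eigenvectors as columns) and diagonal matrix of eigenvalues E
P = np.array([[ 0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [ 1.0, 0.0, 0.0]])
E = np.diag([1.0, 1.0, 2.0])

# Undoing the diagonalization recovers the original matrix: A = P E P^-1
A = P @ E @ np.linalg.inv(P)
print(np.round(A))  # rounds to [[1, 0, 0], [0, 2, 1], [0, 0, 1]]
```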