Eigenvectors

Now that we know about Eigenvalues, we can talk about Eigenvectors and how they relate to Eigenvalues.

Like Eigenvalues, the name Eigenvectors roughly translates to "own vectors". Unlike Eigenvalues, it is far easier to give a geometric interpretation of what Eigenvectors are: they are the vectors forming a basis for all the vectors that do not have their direction changed when a transformation is applied to them.

For instance, when we consider the transformation $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$, which just scales everything by a factor of two, we can pretty easily say that $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ are the Eigenvectors, because the transformation as a whole does not change the direction of space. It only scales it. And of course, those two vectors are the basis for all of 2D space.
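If you want to see this numerically, here is a minimal sketch (using Python with numpy, which is not otherwise part of this article) that applies the scaling matrix to both basis vectors and shows that neither changes direction:

```python
import numpy as np

# The uniform scaling transformation from above.
A = np.array([[2, 0],
              [0, 2]])

for v in (np.array([1, 0]), np.array([0, 1])):
    w = A @ v
    # Each result is just 2 * v: the direction is unchanged, only the length grows.
    print(v, "->", w)   # [1 0] -> [2 0], [0 1] -> [0 2]
```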

In other cases, it is not so obvious what the Eigenvectors might be. For instance, the transformation $\begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix}$ has two Eigenvectors, but we would not immediately be able to say what they were.
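To see why they are not obvious, a quick sketch (again assuming numpy) shows that this matrix changes the direction of the ordinary vectors you might try first:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 2]])

# These vectors get rotated as well as scaled, so the special
# direction-preserving vectors are not apparent at a glance.
for v in (np.array([1, 1]), np.array([0, 1])):
    print(v, "->", A @ v)   # [1 1] -> [3 2], [0 1] -> [2 2]
```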

Unsurprisingly, Eigenvectors and Eigenvalues are linked. Remember that when we were computing the Eigenvalues, we used an equation like this:

$$\det \begin{pmatrix} x_1 - \lambda & y_1 \\ x_2 & y_2 - \lambda \end{pmatrix} = 0$$

If we zoom out a little, call our matrix $A$ and recall the Identity Matrix $I$, then what we were really doing was this:

$$\det(A - \lambda I) = 0$$

Just as we solved for the values of $\lambda$ that make the determinant zero, if we solve for the vectors in the null space of $A - \lambda I$, i.e. the vectors $v$ satisfying $(A - \lambda I)v = 0$, then we will find the Eigenvectors. Remember how we said the Eigenvectors are those vectors which do not change their direction? Here is the proof of that.

$$(A - \lambda I)v = 0$$
$$Av - \lambda I v = 0$$
$$Av = \lambda I v$$

From here it can be observed that $\lambda I$ is just a uniform scaling transformation. So what we are saying is that whatever $A$ does to $v$ is the same as what the uniform scaling $\lambda I$ does to $v$: the vector only gets scaled, so its direction does not change.
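As a quick numeric illustration of that statement (a sketch only, using the scaling example from earlier, where $\lambda = 2$ and every vector is an Eigenvector):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 2.0]])
I = np.eye(2)
lam = 2.0

v = np.array([3.0, -1.0])   # any vector will do for this particular A
print(A @ v)                # [ 6. -2.]
print(lam * (I @ v))        # [ 6. -2.] -- A does to v exactly what lam * I does
```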

Solving for the vector $v$ is not anything we have not seen before. In essence, we are just solving for the null space of $\begin{pmatrix} x_1 - \lambda & y_1 \\ x_2 & y_2 - \lambda \end{pmatrix}$, for some $\lambda$ that we already worked out as an Eigenvalue before.

Suppose we have the matrix $\begin{pmatrix} 1 & 2 \\ 0 & 2 \end{pmatrix}$, with the Eigenvalues $\lambda = 1$ and $\lambda = 2$. That gives us two possible Eigenvectors to solve for.

First, solve for $\lambda = 1$:

$$\begin{pmatrix} 1 - 1 & 2 \\ 0 & 2 - 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 0 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Both rows of this matrix tell us the same thing (the second row is just half the first), so row reduction gets us no further. Multiplying out the first row, we have $0 v_1 + 2 v_2 = 0$. This means that $v_1$ is a free variable and we express $v_2$ in terms of it, e.g.:

$$2 v_2 = -0 v_1 = 0$$

which means that $v_2$ is just zero, so we can express every solution vector like so:

$$v = v_1 (1, 0)$$

Every vector in the span of $(1, 0)$ solves this system, so $(1, 0)$ is our first Eigenvector, for the Eigenvalue $\lambda = 1$. Let's now have a look at $\lambda = 2$:

$$\begin{pmatrix} 1 - 2 & 2 \\ 0 & 2 - 2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} -1 & 2 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

In this case the second row is all zeros, so $v_2$ is a free variable, and we get an equation by examining the first row again:

$$-1 v_1 + 2 v_2 = 0$$
$$-v_1 = -2 v_2$$
$$v_1 = 2 v_2$$

So, we can express every solution vector in terms of $v_2$, that is, $v = v_2 (2, 1)$, and $(2, 1)$ is our second Eigenvector, for the Eigenvalue $\lambda = 2$.
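If you want to sanity check the hand computation, here is a small sketch (assuming numpy) that verifies both Eigenvectors and compares them against a library eigensolver:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 2]])

print(A @ np.array([1, 0]))   # [1 0]  == 1 * (1, 0)
print(A @ np.array([2, 1]))   # [4 2]  == 2 * (2, 1)

# np.linalg.eig returns unit-length Eigenvectors as the columns of vecs,
# so expect scalar multiples of (1, 0) and (2, 1); the ordering may vary.
vals, vecs = np.linalg.eig(A)
print(vals)                   # [1. 2.]
print(vecs)
```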

Recall that for some transformations we found fewer than $n$ distinct solutions for the Eigenvalues, but those solutions had a higher multiplicity, $m$. In such cases, there may be up to $m$ linearly independent vectors in the space of solutions for the Eigenvectors of that Eigenvalue. In general, where an Eigenvalue has algebraic multiplicity $m$, you can find at most $m$ linearly independent Eigenvectors for it, but you are not guaranteed to find that many. Take for example the following matrix, which shears only in the $x$ direction:

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

The characteristic equation for this Matrix is:

$$\det \begin{pmatrix} 1 - \lambda & 1 \\ 0 & 1 - \lambda \end{pmatrix} = 0$$
$$(1 - \lambda)^2 - 1 \cdot 0 = 0$$
$$(1 - \lambda)^2 = 0$$
$$\lambda^2 - 2\lambda + 1 = 0$$
$$(\lambda - 1)^2 = 0$$
$$\lambda = 1$$
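If you would like to double check that factorization, a short sketch using sympy (purely an illustrative aside, not part of the derivation) produces the same characteristic polynomial:

```python
from sympy import Matrix, symbols

lam = symbols("lambda")
A = Matrix([[1, 1],
            [0, 1]])

# charpoly() returns the characteristic polynomial of A;
# here it is lambda**2 - 2*lambda + 1, i.e. (lambda - 1)**2.
print(A.charpoly(lam).as_expr())
```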

Here we have a single Eigenvalue with algebraic multiplicity 2. If we were to solve for the Eigenvectors with this Eigenvalue, we would have:

$$\begin{pmatrix} 1 - 1 & 1 \\ 0 & 1 - 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
$$v_2 = 0$$

And so we would have only a single linearly independent Eigenvector, $(1, 0)$, which makes sense: the shear leaves the $x$ axis completely untouched.
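A symbolic check (again a sympy sketch, offered only as an aside) reports the same thing: one Eigenvalue with algebraic multiplicity 2, but only a single Eigenvector:

```python
from sympy import Matrix

A = Matrix([[1, 1],
            [0, 1]])

# eigenvects() returns tuples of (eigenvalue, algebraic multiplicity, eigenvectors).
# For this shear it reports lambda = 1 with multiplicity 2 and the single
# Eigenvector (1, 0).
print(A.eigenvects())   # [(1, 2, [Matrix([[1], [0]])])]
```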

The number of linearly independent Eigenvectors for an Eigenvalue, i.e. the dimension of their span, is called the geometric multiplicity of that Eigenvalue. If, for every Eigenvalue of a matrix, the geometric multiplicity is equal to the algebraic multiplicity, then the matrix is said to be Diagonalizable.
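To close, here is a short sketch (assuming sympy, with a hypothetical helper function) of how the geometric multiplicity can be computed as the dimension of the null space of $A - \lambda I$, and how it decides diagonalizability for the two matrices we have seen:

```python
from sympy import Matrix, eye

shear = Matrix([[1, 1], [0, 1]])   # lambda = 1 with algebraic multiplicity 2
scale = Matrix([[2, 0], [0, 2]])   # lambda = 2 with algebraic multiplicity 2

def geometric_multiplicity(A, lam):
    # Dimension of the null space of (A - lam * I).
    M = A - lam * eye(A.rows)
    return M.cols - M.rank()

print(geometric_multiplicity(shear, 1))   # 1 -- smaller than the algebraic multiplicity
print(geometric_multiplicity(scale, 2))   # 2 -- equal to the algebraic multiplicity

print(shear.is_diagonalizable())          # False
print(scale.is_diagonalizable())          # True
```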