Like the Row Space, we can also read off the columns of our matrix and try to work out the span of that set of vectors. That span is called the Column Space, since it is the space spanned by all the columns of the matrix.
The column space of a matrix tells us about the output space of the transformation, since each column tells us where the corresponding standard basis vector lands when it is transformed by that matrix.
For instance, in this matrix the first standard basis vector lands on the first column, and similarly, the second standard basis vector lands on the second column.
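We can check this columns-are-images idea directly. The matrix below is a hypothetical stand-in (the article's own matrix is not reproduced here); multiplying it by a standard basis vector picks out the matching column:

```python
import sympy as sp

# Hypothetical matrix, purely for illustration.
A = sp.Matrix([[1, 2, 3],
               [0, 1, 1],
               [1, 3, 4]])

e1 = sp.Matrix([1, 0, 0])  # first standard basis vector
e2 = sp.Matrix([0, 1, 0])  # second standard basis vector

# Applying the transformation sends each basis vector to a column of A.
assert A * e1 == A.col(0)
assert A * e2 == A.col(1)
```

Since this holds for every basis vector, the image of any vector is a combination of the columns, which is exactly why the column space is the output space.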
Now, just like the row space, we might want to work out a basis for the column space. We can either do that by looking at the columns themselves to see if there are obvious dependencies, or we can try to recover a set of vectors each having its own leading entry.
However, we cannot do this directly with row operations. We are examining the set of column vectors for linear dependence, and a row operation changes a single component of every column, as opposed to scaling or combining whole columns. Such an operation fundamentally changes the resulting space.
As a trick, we can still use row-reduction if we find a way to temporarily express the columns of the matrix as rows. We can do that with the transpose,
In general, the transpose rearranges the matrix such that the first row becomes the first column, the second row becomes the second column, and so on. As such, the transpose of an m × n matrix will be an n × m matrix.
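A quick sketch of the transpose in action, again on a hypothetical matrix: a 2 × 3 matrix becomes 3 × 2, and each row of the original reappears as a column.

```python
import sympy as sp

# Hypothetical 2x3 matrix for illustration.
M = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])

assert M.shape == (2, 3)
assert M.T.shape == (3, 2)          # m x n becomes n x m

# The first row of M is the first column of M.T.
assert M.row(0).T == M.T.col(0)
```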
Now we can row-reduce as usual:
Transposing our row-reduced matrix back, we get a new matrix whose nonzero columns span our column space.
So our column space here comprises these two vectors, which, you will notice, span a plane, indicating that our output space is two-dimensional.
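The whole procedure above can be sketched end to end. Here we use a hypothetical 3 × 3 matrix whose third column is the sum of the first two, so we expect the column space to be a plane, i.e. a basis of exactly two vectors:

```python
import sympy as sp

# Hypothetical matrix with a dependent column (col3 = col1 + col2).
A = sp.Matrix([[1, 0, 1],
               [0, 1, 1],
               [1, 1, 2]])

# Step 1: transpose, so the columns of A become rows.
# Step 2: row-reduce as usual.
rref_T, _ = A.T.rref()

# Step 3: read the nonzero rows back as columns; they form
# a basis for the column space of A.
basis = [rref_T.row(i).T for i in range(rref_T.rows)
         if any(rref_T.row(i))]

print(len(basis))  # 2 -> the column space is a plane
```

The dependent third column drops out as a zero row during row-reduction, leaving two basis vectors, matching the two-dimensional output space described above.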