# Row Space

Given what we know about spans and matrices, the row space is just the span of the rows of a matrix, treating each row as a vector in a set.

Recall that the span is just the set of all linear combinations of a set of vectors, which describes the space that is reachable by those linear combinations.

So if you have a matrix defined like this:

$$\left[\begin{array}{cc}1& 1\\ 2& 2\end{array}\right]$$

Then because the rows are not linearly independent (the second row is just twice the first), the row space is just going to be the line through $(1, 1)$, defined by $$y=x$$

However, if you have a matrix defined with two linearly independent rows, then the row space is going to be all of 2D space:

$$\left[\begin{array}{cc}1& 1\\ 1& -1\end{array}\right]$$
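Numerically, the dimension of a row space is just the rank of the matrix, so a library rank routine can confirm both examples. A minimal sketch using NumPy:

```python
import numpy as np

# The rank of a matrix equals the dimension of its row space.
dependent = np.array([[1, 1],
                      [2, 2]])    # second row is twice the first
independent = np.array([[1, 1],
                        [1, -1]])  # rows are linearly independent

print(np.linalg.matrix_rank(dependent))    # 1: the row space is a line
print(np.linalg.matrix_rank(independent))  # 2: the row space is all of 2D space
```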

One thing which you might be interested in is finding a basis for the row space. Recall that since all the vectors in a basis must be linearly independent, the number of vectors in the basis tells you, at most, how many dimensions the output of the transformation is going to have.

Thankfully, you do not have to do too much work to compute the dimension of the row space. If you can immediately tell that the rows are all linearly independent of each other, then you know that the row space is n-dimensional, where n is the number of rows in the matrix. Visually, this would mean that if you visualized all the rows as planes with a solution of $\left[\begin{array}{c}0\\ 0\\ 0\end{array}\right]$, then there would be a single point where they all intersect: the origin.
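The "single point of intersection" picture corresponds to the homogeneous system having only the trivial solution. A sketch of this, using a hypothetical full-rank matrix (not one from the text):

```python
import numpy as np

# Three linearly independent rows: the three planes with right-hand side 0
# intersect only at the origin, so Ax = 0 has just the trivial solution.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)  # hypothetical full-rank example

x = np.linalg.solve(A, np.zeros(3))
print(x)  # [0. 0. 0.]
```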

If you want to prove more rigorously what the dimension of the row space is, you can use Elementary Row Operations as explained above. Recall that we are only interested in finding the set of all possible vectors spanned by the rows. Since every Elementary Row Operation just replaces a row with a linear combination of the rows, and every such operation is reversible, the span of the rows is left unchanged. It follows that performing such operations is safe: they will not change the row space, and so they will not change its dimension, which is what we are looking for.
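To see this concretely, row reduction applies only Elementary Row Operations, so the dimension can be read straight off the reduced form. A sketch using SymPy's `rref` on the dependent matrix from the first example:

```python
from sympy import Matrix

# Row reduction applies only elementary row operations, so the row space
# (and therefore its dimension) of the result matches the original matrix.
M = Matrix([[1, 1],
            [2, 2]])
reduced, pivot_cols = M.rref()

print(reduced)          # Matrix([[1, 1], [0, 0]])
print(len(pivot_cols))  # 1 nonzero row: the row space is one-dimensional
```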

With that mouthful out of the way, recall that the matrix above row-reduced to:

So, given that we have three vectors that are linearly independent, the row space is three-dimensional, and the transformation can reach at most three dimensions.

The row space will not always have as many dimensions as the number of rows in the matrix. For instance, consider the matrix:

With this matrix, we can immediately tell that the second row has a linear dependence on the first (it is just a scalar multiple of it). Indeed, it row-reduces to:

So in reality, the dimension of the row space of this matrix is just 2. It makes more sense if you visualize the basis vectors.

The only part of space reachable by linear combinations of all three of those vectors is a plane.

If we had a matrix whose three rows are all linearly dependent (each row a scalar multiple of the first):

$$\left[\begin{array}{ccc}1& 1& 0\\ 2& 2& 0\\ 3& 3& 0\end{array}\right]$$

Then the only thing reachable is a line, specifically the line through $(1, 1, 0)$, where $y=x$ and $z=0$.
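A quick numerical check (a sketch using NumPy's rank routine) confirms that all three rows collapse onto a single line:

```python
import numpy as np

rows = np.array([[1, 1, 0],
                 [2, 2, 0],
                 [3, 3, 0]])

# Every row is a multiple of (1, 1, 0), so the row space is one-dimensional.
print(np.linalg.matrix_rank(rows))  # 1
```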

Sometimes you can get a linear dependence between the rows that is not as obvious as one row being a scalar multiple of another. For instance, the row space of the following system has dimension 2. Look carefully at the diagram and you will see that there is not really a single point of intersection for all three planes. Instead, they all seem to intersect with each other along a line.

See what happens when we row-reduce it. First, add the first row to the third:

Now subtract half the third row from the second:

Then, multiply the first row by 4 and subtract 3 times the last row from it:

Finally, notice that the first row is twice the second. Subtract twice the second row from it.

The final two rows are linearly independent of each other, so we can take them as a basis for the row space.

Which, you will notice, forms a plane, indicating that the row space, and hence the output of the mapping, is two-dimensional.
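Since the worked matrix is not reproduced here, the same procedure can be sketched on a hypothetical matrix with one dependent row (row 3 = row 1 + row 2): the nonzero rows of the reduced form give a basis for the row space.

```python
from sympy import Matrix

# Hypothetical example: the third row is the sum of the first two.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])

reduced, pivots = A.rref()
basis = [reduced.row(i) for i in range(len(pivots))]  # nonzero rows of the rref

print(basis)        # [Matrix([[1, 0, 1]]), Matrix([[0, 1, 1]])]
print(len(pivots))  # 2: the row space is a plane
```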