This abbreviated pset is due at 11am on Wednesday Feb 23. The material on this pset may be on exam 1 (Feb. 25).
(a) Does $C(A)$ necessarily contain $C(AB)$, or vice versa? (That is, is $C(A) \subseteq C(AB)$ or $C(AB) \subseteq C(A)$?)
(b) Give an example of a square matrix $A$ where $C(A^2)$ is lower-dimensional than $C(A)$.
(c) If $V$ is the set of polynomials $f(x)$ of degree $< 4$, consider the derivative linear operator $d/dx$ acting on $V$. Give a basis for $N(d/dx)$ and $C(d/dx)$, the null and column spaces of this operator for inputs in $V$.
Recall that by definition $C(A)$ consists of all the vectors of the form $Ax$.
We claim that $\boxed{C(AB) \subseteq C(A)}$: any vector in $C(AB)$ is of the form $ABx$, and setting $y=Bx$ we see that $ABx=Ay\in C(A)$.
Conversely, $C(A)$ is not necessarily a subset of $C(AB)$; in other words, the inclusion $C(AB)\subseteq C(A)$ proved above may be strict. This happens when $C(B)$ is too small and the vectors $Ay$ for $y\in C(B)$ are not enough to produce all possible outputs of $A$. As a simple example, take any nonzero matrix $A$ (for example, the $2\times 2$ identity matrix) and take $B=0$. Then $AB=0$, so $C(AB)=\{0\}$ is zero-dimensional, but $C(A)$ has dimension at least one.
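As an optional sanity check, the dimensions in this counterexample can be verified numerically (a minimal sketch, assuming NumPy is available; recall that $\dim C(M)$ equals the rank of $M$):

```python
import numpy as np

# Counterexample from part (a): A nonzero (here the 2x2 identity), B = 0.
A = np.eye(2)
B = np.zeros((2, 2))
print(np.linalg.matrix_rank(A))      # 2 -> dim C(A) = 2
print(np.linalg.matrix_rank(A @ B))  # 0 -> dim C(AB) = 0, strictly smaller
```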
To find an example of such a matrix $A$, note that $C(A^2)$ consists of all vectors obtained by applying $A$ to vectors in $C(A)$, so if we want $C(A^2)$ to have lower dimension than $C(A)$, the operator $A$ must send the space $C(A)$ to a strictly smaller space. This is only possible if $A$ "merges" some vectors of $C(A)$, that is, if $Ax=Ay$ for distinct vectors $x,y\in C(A)$. The latter implies that $x-y\neq 0$ lies in both $C(A)$ and $N(A)$. Thus, to make $C(A^2)$ smaller than $C(A)$, the intersection $N(A)\cap C(A)$ must contain more than just the zero vector.
Using this understanding, we can give the following example: $$ \boxed{ A=\begin{pmatrix} 0& 1\\ 0&0 \end{pmatrix}}. $$ Then $C(A)$ is one-dimensional and has basis $\begin{pmatrix}1\\0\end{pmatrix}$, while $A^2=0$ and $C(A^2)=0$ is zero-dimensional. Direct inspection also gives $N(A)=C(A)$ for this example, in line with the discussion above.
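The same rank computation confirms the claimed dimensions for this example (again just a sketch, assuming NumPy):

```python
import numpy as np

# Example from part (b): C(A) is one-dimensional but C(A^2) = {0}.
A = np.array([[0., 1.],
              [0., 0.]])
print(np.linalg.matrix_rank(A))      # 1 -> dim C(A) = 1
print(np.linalg.matrix_rank(A @ A))  # 0 -> A^2 = 0, so dim C(A^2) = 0
```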
$N(d/dx)$ consists of all polynomials of degree $<4$ such that $\frac{df}{dx}=0$. All such polynomials must be constant, so $N(d/dx)$ is the one-dimensional space of constant polynomials with basis $\boxed{\{1\}}$.
$C(d/dx)$ consists of all polynomials which can be obtained as $\frac{df}{dx}$ for a degree $<4$ polynomial $f$. Any degree $<4$ polynomial has the form $a_0+a_1x+a_2x^2+a_3x^3$, so $C(d/dx)$ is the space of polynomials of the form $a_1+2a_2x+3a_3x^2$. A basis for this space is $\boxed{\{1, x, x^2\}}$ (though there are many other valid choices for a basis, say, $\{1, 2x, 3x^2\}$).
Part (c) can also be approached differently, by identifying the polynomial $a_0+a_1x+a_2x^2+a_3x^3$ with the vector $\begin{pmatrix}a_0\\a_1\\a_2\\a_3\end{pmatrix}$ and identifying the linear operator $d/dx$ with the matrix $$ A=\begin{pmatrix} 0&1&0&0\\0&0&2&0\\0&0&0&3\\0&0&0&0 \end{pmatrix}. $$ In the matrix $A$ the first column is free, while the other columns are pivot columns. The special solution, which forms a basis of $N(A)$, can be found by backsolving $A\begin{pmatrix}1\\a_1\\ a_2\\a_3\end{pmatrix}=0$, which gives $\begin{pmatrix}1\\0\\ 0\\0\end{pmatrix}$, corresponding to the polynomial $1$. A basis of $C(A)$ can be found by taking the pivot columns $\begin{pmatrix}1\\0\\0\\0\end{pmatrix}$, $\begin{pmatrix}0\\2\\ 0\\0\end{pmatrix}$, $\begin{pmatrix}0\\0\\ 3\\0\end{pmatrix}$, which correspond to the polynomials $1,2x,3x^2$.
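The matrix picture can also be checked numerically (a sketch, assuming NumPy and SciPy are installed; `scipy.linalg.null_space` returns an orthonormal basis of the null space):

```python
import numpy as np
from scipy.linalg import null_space

# Matrix of d/dx in the basis {1, x, x^2, x^3}.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])
print(np.linalg.matrix_rank(A))  # 3 -> dim C(A) = 3, matching the basis {1, x, x^2}
print(null_space(A))             # one column, proportional to (1, 0, 0, 0) -> the polynomial 1
```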
Come up with a matrix $A$ and a vector $b\ne 0$ such that the solutions $x$ of $Ax=b$ form a line in $\mathbb{R}^3$, where all of the entries of $A$ are nonzero. Find the complete solution (i.e., all solutions).
(i.e. come up with your own homework problem — how do you think Gil Strang does it?)
Let us first explain a way to find such a matrix $A$. Since the solutions form a line in $\mathbb R^3$, the input $x$ lies in $\mathbb R^3$, so the matrix $A$ must have three columns. Recall that $Ax=b$ is solvable if and only if $C(A)$ contains $b$, and the set of all solutions has the form $x_p+N(A)$, where $x_p$ is any particular solution of $Ax=b$. So, if the solutions of $Ax=b$ form a line, $N(A)$ must have dimension $1$. Finally, recall that row operations do not change $N(A)$. This gives us the following plan for finding $A$ and $b$: start with a matrix $A^{tmp}$ having only $1$ free column and $2$ pivot columns, so that the dimension of $N(A^{tmp})$ is $1$; then perform row operations to obtain a matrix $A$ with all entries nonzero; and finally pick $b$ from $C(A)$.
Now, let's solve the problem following the plan above. Take $$ A^{tmp}=\begin{pmatrix} 1&1&1\\0&0&1 \end{pmatrix}. $$ Columns 1 and 3 are pivot columns, while column 2 is free. Backsolving $A^{tmp}\begin{pmatrix}x\\1\\z\end{pmatrix}=0$ gives the special solution $\begin{pmatrix}-1\\1\\0\end{pmatrix}$, which forms a basis of the one-dimensional space $N(A^{tmp})$.
Now we can perform row operations on $A^{tmp}$ to get rid of $0$ entries. For example, by adding the first row to the second row we get $$ A=\begin{pmatrix} 1&1&1\\1&1&2 \end{pmatrix}. $$ Note that $N(A)$ is still one-dimensional with basis $\begin{pmatrix}-1\\1\\0\end{pmatrix}$.
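A quick numerical check of this claim (a sketch, assuming SciPy's `null_space` is available) shows that $A^{tmp}$ and $A$ indeed have the same one-dimensional null space:

```python
import numpy as np
from scipy.linalg import null_space

# Row operations do not change the null space: both matrices have
# null space spanned by (-1, 1, 0).
A_tmp = np.array([[1., 1., 1.],
                  [0., 0., 1.]])
A = np.array([[1., 1., 1.],
              [1., 1., 2.]])
print(null_space(A_tmp))  # one column, proportional to (-1, 1, 0)
print(null_space(A))      # spans the same line
```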
Finally, take $b\in C(A)$. For simplicity, we can use the first column of $A$ as $b$. So, our answer is $$ \boxed{ A=\begin{pmatrix} 1&1&1\\1&1&2 \end{pmatrix},\qquad b=\begin{pmatrix} 1\\1 \end{pmatrix}. } $$ To find the complete solution of $Ax=b$, we need to find one solution and then add vectors from $N(A)$ to it. Note that $x=\begin{pmatrix} 1\\0\\0\end{pmatrix}$ is a solution (not surprising, since $b$ is the first column of $A$), so all solutions have the form $$ \boxed{ \begin{pmatrix} 1\\0\\0\end{pmatrix}+f \begin{pmatrix} -1\\1\\0\end{pmatrix}=\begin{pmatrix} 1-f\\f\\0\end{pmatrix}, } $$ where $f$ is an arbitrary real number.
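To double-check the boxed answer numerically (a minimal sketch, assuming NumPy), we can verify that every point on this line solves $Ax=b$:

```python
import numpy as np

# Check that (1 - f, f, 0) solves Ax = b for several values of f.
A = np.array([[1., 1., 1.],
              [1., 1., 2.]])
b = np.array([1., 1.])
for f in (-2.0, 0.0, 0.5, 3.0):
    x = np.array([1 - f, f, 0.])
    assert np.allclose(A @ x, b)
print("all checks passed")
```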
The following matrix is from problem 7(a) of pset 2. Feel free to re-use the pset 2 solutions as needed. $$ A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 4 & 6 \\ 0 & 0 & 1 & 2 \end{pmatrix} $$
(a) Give the dimensionality and a basis for $C(A)$.
(b) If $b = \begin{pmatrix} \alpha \\ 6 \\ 1 \end{pmatrix}$, for what values of $\alpha$ will $Ax=b$ have a solution?
(c) For the $\alpha$ from (b), give the complete solution to $Ax=b$.
We use the following argument from lecture: perform elimination to obtain the matrix $U$, find the pivot columns of $U$ (which form a basis of $C(U)$), and then go back to the matrix $A$ and take the same columns to get a basis of $C(A)$. This works because, while row operations change the columns of a matrix, they do not change linear independence relations between columns (that is, if some columns are independent, then they remain independent after any sequence of row operations).
From the solutions of pset 2, the elimination leads to $$ U=\begin{pmatrix} 1&2&3&4\\0&0&1&2\\0&0&0&0 \end{pmatrix}. $$ The pivot columns are 1 and 3, so the dimension of $C(A)$ is $2$ and a basis for $C(A)$ is given by columns 1 and 3 of $A$, that is $\boxed{\begin{pmatrix}1\\1\\0\end{pmatrix},\begin{pmatrix}3\\4\\1\end{pmatrix}}$.
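As an optional check (a sketch, assuming NumPy), the rank of $A$ is $2$, and columns 1 and 3 alone already achieve this rank, so they are independent and form a basis of $C(A)$:

```python
import numpy as np

A = np.array([[1., 2., 3., 4.],
              [1., 2., 4., 6.],
              [0., 0., 1., 2.]])
print(np.linalg.matrix_rank(A))             # 2 -> dim C(A) = 2
print(np.linalg.matrix_rank(A[:, [0, 2]]))  # 2 -> columns 1 and 3 span C(A)
```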
We will use elimination to check whether $Ax=b$ is solvable, where $b = \begin{pmatrix} \alpha \\ 6 \\ 1 \end{pmatrix}$. Recall from the solutions of pset 2 that to obtain $U$ from $A$ we first subtracted row 1 from row 2, and then row 2 from row 3. Performing these operations on $b$, we see that $Ax=b$ is equivalent to $$ \begin{pmatrix} 1&2&3&4\\0&0&1&2\\0&0&0&0 \end{pmatrix}x= \begin{pmatrix} \alpha \\ 6-\alpha \\ -5+\alpha \end{pmatrix}. $$ The last row of this system shows that a solution can exist only if $\boxed{\alpha=5}$; for $\alpha=5$ a solution indeed exists, as we find explicitly in part (c).
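This condition can also be checked numerically (a sketch, assuming NumPy), using the fact that $Ax=b$ is solvable exactly when appending $b$ to $A$ does not increase the rank:

```python
import numpy as np

A = np.array([[1., 2., 3., 4.],
              [1., 2., 4., 6.],
              [0., 0., 1., 2.]])
for alpha in (5.0, 7.0):
    b = np.array([alpha, 6., 1.])
    Ab = np.column_stack([A, b])
    solvable = np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)
    print(alpha, solvable)  # 5.0 True, 7.0 False
```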
To find all solutions, we need to find one particular solution and add $N(A)$ to it. To find a solution of $Ax=b$ we can use the same elimination as in part (b) and look for a solution of $$ \begin{pmatrix} 1&2&3&4\\0&0&1&2\\0&0&0&0 \end{pmatrix}\begin{pmatrix} x_1\\x_2\\x_3\\x_4 \end{pmatrix}= \begin{pmatrix} 5 \\ 1 \\ 0 \end{pmatrix}. $$ We can find a particular solution by setting the free variables $x_2, x_4$ to $0$ and backsolving $$ \begin{pmatrix} 1&2&3&4\\0&0&1&2\\0&0&0&0 \end{pmatrix}\begin{pmatrix} p_1\\0\\p_2\\0 \end{pmatrix}= \begin{pmatrix} 5 \\ 1 \\ 0 \end{pmatrix}. $$ Back-substitution gives $p_2=1$ and $p_1=2$, so $x=\begin{pmatrix}2\\0\\1\\0\end{pmatrix}$ is a solution of $Ax=b$.
From the solutions of pset 2 we know that $N(A)$ has a basis $\begin{pmatrix}-2\\1\\0\\0\end{pmatrix},\begin{pmatrix}2\\0\\-2\\1\end{pmatrix}$. Hence, all solutions have the form $$ \boxed{ x=\begin{pmatrix}2\\0\\1\\0\end{pmatrix}+f_1\begin{pmatrix}-2\\1\\0\\0\end{pmatrix}+f_2\begin{pmatrix}2\\0\\-2\\1\end{pmatrix}=\begin{pmatrix}2-2f_1+2f_2\\f_1\\1-2f_2\\f_2\end{pmatrix}} $$ where $f_1,f_2$ are arbitrary real numbers.
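Finally, a quick numerical check of the boxed complete solution (a sketch, assuming NumPy):

```python
import numpy as np

# The particular solution plus any combination of the null-space basis
# vectors still satisfies Ax = b with alpha = 5.
A = np.array([[1., 2., 3., 4.],
              [1., 2., 4., 6.],
              [0., 0., 1., 2.]])
b = np.array([5., 6., 1.])
xp = np.array([2., 0., 1., 0.])
n1 = np.array([-2., 1., 0., 0.])
n2 = np.array([2., 0., -2., 1.])
for f1, f2 in [(0.0, 0.0), (1.0, -1.0), (2.5, 3.0)]:
    assert np.allclose(A @ (xp + f1 * n1 + f2 * n2), b)
print("all checks passed")
```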