(a) Give the rank of $D$ and a basis for the nullspace $N(D)$ for the $D$ matrix from pset 1: $$ D=\begin{pmatrix} -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 & 0 & 0\\ 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 & 0 \\ 0 & 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 \\ 0 & 0 & 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} \end{pmatrix} $$ (Does your answer depend on the value of $\Delta x$?)
(b) Consider the vector space $V$ consisting of all differentiable functions $f(x)$, and let $A = \frac{d}{dx}$ be the linear operator that takes the derivative, i.e. $Af = f'$ for any $f \in V$. Give a basis for $N(A)$, the nullspace of the derivative. Is there a resemblance to your answer in part (a)?
(a) The rank is the number of pivots in a matrix's upper-triangular form after elimination, but this matrix is already upper triangular, so we can just read off the rank $\boxed{r = 4}$.
There are multiple ways to get the nullspace.
Following the "brute-force" approach from class, the first four columns are the pivot columns and the fifth column is the free column (we have a single free variable), so we just want to set the free variable $f_1 = 1$ and solve for the pivot variables $p_1,p_2,p_3,p_4$) via:
$$ D \begin{pmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ 1 \end{pmatrix} = 0 \implies \underbrace{\begin{pmatrix} -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 & 0 \\ 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} & 0 \\ 0 & 0 & -\frac{1}{\Delta x} & \frac{1}{\Delta x} \\ 0 & 0 & 0 & -\frac{1}{\Delta x} \end{pmatrix}}_{\mbox{pivot columns }U_0} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \end{pmatrix} = - \underbrace{\begin{pmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{\Delta x} \end{pmatrix}}_{\mbox{free column}} \implies \begin{pmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ -1 \end{pmatrix} $$where we have cancelled the $\Delta x$ factors on both sides. Working upwards via backsubstitution, we immediately find that $p_4 = 1$, $p_3 = p_4 = 1$, $p_2 = p_3 = 1$, and $p_1 = p_2 = 1$, hence the special solution (our nullspace basis) is $$ \boxed{\begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}}. $$
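As a quick numerical sanity check, here is a short NumPy sketch of the same backsubstitution (the names `U0` and `free_col` refer to the scaled pivot block and free column above):

```python
import numpy as np

# Scaled pivot block and free column from above (the 1/Delta-x factors cancel)
U0 = np.array([[-1.,  1.,  0.,  0.],
               [ 0., -1.,  1.,  0.],
               [ 0.,  0., -1.,  1.],
               [ 0.,  0.,  0., -1.]])
free_col = np.array([0., 0., 0., 1.])

# Solve U0 p = -free_col for the pivot variables, then append the free variable 1
p = np.linalg.solve(U0, -free_col)
print(np.append(p, 1.0))  # [1. 1. 1. 1. 1.] -- the special solution
```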
There are other ways to see this, of course. If you look at $D$ and realize that it is just taking the difference of adjacent entries, i.e. $$ Dx = \frac{1}{\Delta x}\begin{pmatrix}x_2 - x_1 \\ x_3 - x_2 \\ x_4 - x_3 \\ x_5 - x_4\end{pmatrix}, $$ then you can immediately see that it gives zero only for vectors where all the entries are the same ($x_2 = x_1, x_3 = x_2, x_4 = x_3, x_5 = x_4$), so a basis would be any nonzero vector whose entries are the same.
And, of course, neither the rank nor the nullspace depends on $\Delta x$, which is just an overall scale factor. For any matrix $A$ and any scalar $\alpha \ne 0$, $A$ and $\alpha A$ have the same rank and the same nullspace.
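This is easy to confirm numerically; the sketch below (the helper `D_matrix` is ours, not from the pset) checks the rank and the all-ones nullspace vector for a few arbitrary values of $\Delta x$:

```python
import numpy as np

def D_matrix(dx, n=5):
    """Forward-difference matrix: (Dx)_i = (x_{i+1} - x_i)/dx."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0 / dx, 1.0 / dx
    return D

for dx in (0.1, 1.0, 7.3):
    D = D_matrix(dx)
    print(dx,
          np.linalg.matrix_rank(D),           # rank is 4 for every dx
          np.allclose(D @ np.ones(5), 0.0))   # the ones vector is in N(D)
```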
(b) The nullspace of our linear operator $A = \frac{d}{dx}$ consists of the functions $f$ for which $\frac{d}{dx} f = 0$. To be precise, $0$ here refers to the zero function $\mathbf{0}:\mathbb{R}\to \mathbb{R}$ such that $\mathbf{0}(x) = 0$ for all $x \in \mathbb{R}$. A differentiable function whose derivative is zero everywhere must be constant, so the nullspace is the set of constant functions: $f \in N(A)$ if and only if $f(x) = \alpha$ for all $x$, for some constant $\alpha$. As we are working with functions rather than column vectors, a basis of $N(A)$ is a list of functions. Let $\mathbf{1}$ denote the constant function that is $1$ for all inputs. Then $N(A)$ is the set of all constant multiples of $\mathbf{1}$, so $\{\mathbf{1}\}$ is a basis.
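We can mirror this symbolically with SymPy (a sketch; solving the ODE $f' = 0$ stands in for finding $N(A)$):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# All differentiable solutions of f'(x) = 0 are constants,
# i.e. multiples of the constant function 1
sol = sp.dsolve(sp.Eq(f(x).diff(x), 0), f(x))
print(sol)  # Eq(f(x), C1) -- an arbitrary constant
```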
Commentary: column vectors in $\mathbb{R}^n$ can be thought of as functions whose inputs are natural numbers $1,2,\ldots,n$. For example, the column vector $f = \begin{pmatrix}f_1 & f_2 & f_3 & f_4 & f_5\end{pmatrix}^T$ can be thought of as a function $f$ that takes inputs from $\{1, 2, 3, 4, 5\}$ and outputs a real number. The matrix $D$ from (a) takes discrete derivatives (forward differences) over this domain and its nullspace is constant "functions" over this domain. Similarly, the linear operator $A$ in (b) takes derivatives over real numbers and its nullspace is constant functions over real numbers. The two bases are similarly analogous.
In class, we considered elimination on the matrix $A = \begin{pmatrix} 1 & 2 & 3 & 1 \\ 1 & 2 & 5 & -3 \\ 1 & 2 & 7 & -7 \end{pmatrix}$, and found that it was rank 2 with the pivot and free columns "interleaved" (the first and third columns were pivot columns).
(a) Give a permutation matrix $P$ such that doing Gaussian elimination on $AP$ (which re-orders the _______ of $A$) leads to the first two columns being the pivot columns (i.e. "non-interleaved" pivot and free columns).
(b) How do the nullspace $N(AP)$ and column space $C(AP)$ relate to the null and column spaces of $A$, respectively? In class, a basis for $N(A)$ was the two special solutions $[-2,1,0,0]$ and $[-7,0,2,1]$ and from these how can you get a basis for $N(AP)$? (Note: the problem gave the nullspace vector as $[-7,0,3,1]$, following a typo from the notes. You won't be penalized if you copy this and use 3 instead of 2 here.)
(a) The pivot columns of $A$ were the first and third columns. The easiest way to make the first two columns of $AP$ be the pivot columns is to swap the second and third columns of $A$. The permutation matrix $P$ which does this is obtained by applying the same swap to the $4\times 4$ identity matrix $I_4$ (swapping the second and third columns of $I_4$), giving $$P = \begin{pmatrix}1 & 0 & 0 & 0\\0 & 0 & 1& 0\\0 & 1 & 0 & 0\\0 & 0 & 0 &1\end{pmatrix}.$$
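A quick SymPy check (a sketch): `rref` returns the pivot-column indices, which move from $(0,2)$ for $A$ to $(0,1)$ for $AP$ (0-based indexing):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 2, 5, -3],
               [1, 2, 7, -7]])
P = sp.eye(4)[:, [0, 2, 1, 3]]  # identity with columns 2 and 3 swapped

print(A.rref()[1])        # (0, 2): interleaved pivot columns of A
print((A * P).rref()[1])  # (0, 1): pivot columns of AP come first
```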
(b) If a vector $y$ is in $N(AP)$, then $$(AP)y = 0 \implies A(Py) = 0 \implies Py \text{ is in }N(A).$$ Similarly, if a vector $x$ is in $N(A)$, then $P^{-1}x$ is in $N(AP)$. ($P$ is a permutation matrix so it has an inverse; in fact, $P^{-1} = P$ in this case, since swapping columns twice restores the original order.) So $N(AP)$ consists of the vectors in $N(A)$ with their second and third entries swapped. To get a basis for $N(AP)$, we simply apply this swap to our original basis to get $$ \boxed{\begin{pmatrix}-2 \\ 0 \\1 \\0\end{pmatrix}, \; \begin{pmatrix}-7 \\ 2 \\ 0 \\ 1\end{pmatrix}}. $$
The column space of $AP$ is actually the same as the column space of $A$. Consider a vector $v$ in the column space of $AP$, so that there is some $x$ with $(AP)x = v$. Then $v$ must also be in the column space of $A$, since $A(Px) = v$; hence $C(AP) \subseteq C(A)$. Similarly, if a vector $w$ is in the column space of $A$, so that there is some $y$ with $Ay = w$, then $(AP)(P^{-1}y) = w$ and so $C(A) \subseteq C(AP)$. Thus the column spaces must be the same.
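Both claims are easy to verify in SymPy (continuing the sketch above):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1],
               [1, 2, 5, -3],
               [1, 2, 7, -7]])
P = sp.eye(4)[:, [0, 2, 1, 3]]

# The swapped basis vectors are indeed in N(AP)
for y in (sp.Matrix([-2, 0, 1, 0]), sp.Matrix([-7, 2, 0, 1])):
    assert A * P * y == sp.zeros(3, 1)

# C(AP) = C(A): adjoining the columns of AP to A does not increase the rank
assert A.rank() == A.row_join(A * P).rank() == 2
```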
In class, we saw that invertible row operations (such as those occurring in Gaussian elimination) do not change the nullspace of a matrix.
(a) What happens if you do non-invertible row operations $B = EA$? Does that generally lead to $N(B) = N(A)$, or $N(B) \subseteq N(A)$ (a smaller nullspace), or $N(B) \supseteq N(A)$ (a larger nullspace), or none of these (neither nullspace contains the other)? Justify your answer.
(b) Illustrate your answer in (a) with example $A$, $E$, and $B$ matrices, and give bases for the nullspaces of your $A$ and $B$.
(a) Row operations (e.g. during elimination) correspond to multiplying $A$ on the left by a square matrix $E$ to get $EA$. A non-invertible row operation is simply multiplication by a non-invertible square matrix $E$.
For a non-invertible matrix $E$, and $B=EA$, we generally find that this enlarges the nullspace: $N(B) \supseteq N(A)$ (a superset or equal, i.e. $N(A)$ is a subspace of $N(B)$). Let $x$ be an element of $N(A)$. Then $$Ax = 0 \implies E(Ax) = 0 \implies x \in N(B).$$ So we know at least that any element of $N(A)$ must be an element of $N(B)$. However, it is possible for $N(B)$ to contain elements which are not in $N(A)$. We give such an example in (b).
(b) Consider the matrices $$A = I = \begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}, \quad E = 0I = \begin{pmatrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\end{pmatrix}, \quad B = EA = 0I^2 = 0I = \begin{pmatrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\end{pmatrix}.$$
The basis of $N(A)$ is the empty list as $N(A)$ is the zero vector space $\{ \vec{0} \}$. $N(E)$ is the entire space $\mathbb{R}^3$ of length-$3$ column vectors; one possible basis for $\mathbb{R}^3$ is simply the "Cartesian" basis $\begin{pmatrix}1 & 0 & 0\end{pmatrix}^T, \begin{pmatrix}0 & 1 & 0\end{pmatrix}^T, \begin{pmatrix}0 & 0 & 1\end{pmatrix}^T$. $B$ is equal to $E$ so $N(B) = N(E)$ and they have the same basis. It is clear here that $N(A) \subset N(B)$.
The example above is extreme for simplicity, but there are many others: in fact, any invertible square matrix $A$ combined with any non-invertible square matrix $E$ will yield $N(A) \subset N(B)$.
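Here is a less extreme NumPy illustration (our own example, with a rank-1 $E$ that simply zeroes out the second row):

```python
import numpy as np

A = np.eye(2)                  # invertible, so N(A) = {0}
E = np.array([[1., 0.],
              [0., 0.]])       # non-invertible row operation
B = E @ A                      # B = EA kills the second row

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 2 1
print(B @ np.array([0., 1.]))  # [0. 0.] -> (0, 1) spans the enlarged N(B)
```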
(from Strang, section 3.3)
Give examples of matrices $A$ for which the number of solutions to $Ax=b$ is
(a) 0 or 1, depending on $b$.
(b) $\infty$, regardless of $b$.
(c) 0 or $\infty$, depending on $b$
(d) 1, regardless of $b$.
(a) Here, $Ax=b$ may not have solutions, but if it does have solutions then they are unique. Uniqueness means that $A$ must have full column rank (its nullspace must be $\{ \vec{0} \}$). Possible non-existence of solutions means that $A$ must not have full row rank (its column space must not be the whole space).
Together, these conditions tell us that $A$ must be a "tall" matrix whose rank equals the number of columns, i.e. it must have independent columns and more rows than columns. There are many possibilities, e.g. some examples are:
$$A = \begin{pmatrix}1\\1\end{pmatrix}, A = \begin{pmatrix}1 & 0\\0 & 1 \\ 0 & 0\end{pmatrix}, A = \begin{pmatrix}1 & 0 & 0\\0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{pmatrix}, \mbox{ etc.}$$

(b) Since the solutions always exist ("regardless of $b$"), $A$ must have full row rank. However, since the solutions are not unique, it must not have full column rank: its nullspace must contain nonzero vectors.
Together, these conditions tell us that $A$ must be a "wide" matrix whose rank equals the number of rows, i.e. it must have independent rows and more columns than rows. There are many possibilities, e.g. some examples are:
$$A = \begin{pmatrix}1 & 1\end{pmatrix}, A = \begin{pmatrix}1 & 0 & 0\\0 & 1 & 0\end{pmatrix}, A = \begin{pmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\end{pmatrix}, \mbox{ etc.}$$
(c) The solutions may not exist and are not unique, which means that $A$ has neither full row nor full column rank: it must be rank deficient. There are many possibilities, e.g. some examples are:
$$A = \begin{pmatrix}0\end{pmatrix}, A = \begin{pmatrix}1 & 0 & 0\\2 & 0 & 0\end{pmatrix}, A = \begin{pmatrix}1 & 1\\0 & 0 \\ 0 & 0\end{pmatrix}, A = \begin{pmatrix}1 & 2 & 3\\2 & 4 & 6 \\ 2 & 4 & 6\end{pmatrix},\mbox{ etc.}$$

(d) The solutions must exist and be unique, which means that $A$ has both full row and full column rank: it must be square and invertible. There are many possibilities, e.g. some examples are:
$$A = \begin{pmatrix}1\end{pmatrix}, A = \begin{pmatrix}1 & 0\\0 & 2\end{pmatrix}, A = \begin{pmatrix}1 & 2 & 3\\2 & 5 & 7 \\ 1 & 4 & 2\end{pmatrix},\mbox{ etc.}$$
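As a sanity check, the rank conditions above are easy to test numerically; the sketch below (the helper `classify` is ours) classifies one example from each part:

```python
import numpy as np

def classify(A):
    """Classify the number of solutions to Ax = b from the rank of A."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    exist = "for every b" if r == m else "only for some b"
    count = "exactly one solution" if r == n else "infinitely many solutions"
    return f"solvable {exist}; {count} when solvable"

print(classify(np.array([[1.], [1.]])))                  # (a) 0 or 1
print(classify(np.array([[1., 1.]])))                    # (b) always infinitely many
print(classify(np.array([[1., 0., 0.], [2., 0., 0.]])))  # (c) 0 or infinitely many
print(classify(np.array([[1., 0.], [0., 2.]])))          # (d) exactly 1
```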