Due Friday night March 4th

  1. Let A be an arbitrary square n x n matrix.

Are the singular values of $A^2$ necessarily the same as the squares of the singular values of $A$? (Either find a counterexample by hand or with Julia, or prove that it is always the case, or demonstrate with enough examples in Julia to be convincing.)

Answer: No. Counterexample: Let $A=\begin{pmatrix}0&0\\1&0\end{pmatrix}$, with reduced SVD $A=\begin{pmatrix}0\\1\end{pmatrix}(1)\begin{pmatrix}1&0\end{pmatrix}$, so the nonzero singular value of $A$ is $1$. Then $A^2=\begin{pmatrix}0\\1\end{pmatrix}(1)\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}(1)\begin{pmatrix}1&0\end{pmatrix}=\begin{pmatrix}0\\1\end{pmatrix}(0)\begin{pmatrix}1&0\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}$, because $\begin{pmatrix}1&0\end{pmatrix}\begin{pmatrix}0\\1\end{pmatrix}=0$. So every singular value of $A^2$ is $0$, while the squares of the singular values of $A$ include $1$.
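A quick Julia spot-check is also possible (a sketch, not part of the hand computation above; `svdvals` returns the singular values in descending order):

In [ ]:
using LinearAlgebra
A = [0 0; 1 0]
@show svdvals(A^2)       # singular values of A^2: both 0
@show svdvals(A).^2      # squares of A's singular values: 1 and 0
B = randn(3,3)           # a random matrix generically gives a mismatch too
@show svdvals(B^2) ≈ svdvals(B).^2   # generically false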

  2. Let A be an arbitrary m x n matrix.

Are the singular values of $A^TA$ necessarily the same as the squares of the singular values of $A$? (Either find a counterexample by hand or with Julia, or prove that it is always the case, or demonstrate with enough examples in Julia to be convincing.)

Answer: Yes. Let the full SVD of $A$ be $A=U\Sigma V^T$, where $\Sigma$ is $m\times n$ with the singular values $\sigma_1,\dots,\sigma_{\min(m,n)}$ on its diagonal and zeros elsewhere. Then $A^TA=V\Sigma^T U^TU\Sigma V^T= V(\Sigma^T\Sigma) V^T$. The matrix $\Sigma^T\Sigma$ is $n\times n$ diagonal with entries $\sigma_i^2$ (padded with zeros if $n>m$), so $V(\Sigma^T\Sigma)V^T$ is itself an SVD of $A^TA$, and the singular values of $A^TA$ are exactly the squares of the singular values of $A$.
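A short Julia spot-check with an arbitrary random matrix is consistent with this:

In [ ]:
using LinearAlgebra
A = randn(5, 3)                         # an arbitrary m×n example with m ≥ n
@show svdvals(A' * A) ≈ svdvals(A).^2   # true up to roundoff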

  3. Suppose we have the rank-r SVD of a rank 1 matrix $A = U\Sigma V^T$. Describe the nullspace of $A$ in terms of (possibly) $U$, $\Sigma$, and $V$.

Answer: The nullspace of $A$ is the same as the nullspace of $V^T$ (since $\Sigma$ is a nonzero scalar and $U$ has full column rank). Because $A$ has rank $1$, $V$ is a single column vector, so the nullspace of $V^T$ is the hyperplane $V^Tx=0$, i.e., the space of all vectors perpendicular to $V$.
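A small Julia sketch (with a hypothetical random rank-1 matrix) illustrates this description:

In [ ]:
using LinearAlgebra
u, v = randn(4), randn(3)
A = u * v'                   # a rank-1 4×3 matrix
X = nullspace(A)             # 3×2 matrix whose columns span N(A)
@show norm(A * X)            # ≈ 0
@show norm(v' * X)           # ≈ 0: every nullspace vector is perpendicular to v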

  4. Let A be the matrix below with the full SVD (Note: numbers with an e-16 or e-15 may be considered to be 0)
In [3]:
A = [ 1 4 2;2 8 4; -1 -4 -2]
Out[3]:
3×3 Array{Int64,2}:
  1   4   2
  2   8   4
 -1  -4  -2
In [4]:
using LinearAlgebra
U,s,V =svd(A, full=true)
display(U)
display(s)
display(V)
3×3 Array{Float64,2}:
 -0.408248   0.912871  7.81735e-17
 -0.816497  -0.365148  0.447214   
  0.408248   0.182574  0.894427   
3-element Array{Float64,1}:
 11.22497216032183     
  4.845410522502476e-16
  0.0                  
3×3 Adjoint{Float64,Array{Float64,2}}:
 -0.218218  -0.9759    0.0     
 -0.872872   0.19518  -0.447214
 -0.436436   0.09759   0.894427

4a. What is the rank of this matrix?
4b. For which right-hand sides is Ax=b solvable? (Find a condition on b₁, b₂, b₃.)

Answer: 4a: 1
4b: $Ax=b$ is solvable exactly when $b$ is in the column space of $A$, i.e., when $b$ is a scalar multiple of the first column of $U$ (equivalently, of $(1,2,-1)$). So the conditions are $b_2=2b_1$ and $b_3=-b_1$.
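A quick numerical check of this condition, reusing the `U` computed in cell [4] above (the test vectors are arbitrary choices):

In [ ]:
using LinearAlgebra
u1 = U[:,1]                               # spans the column space of A
b_good = [1, 2, -1]                       # satisfies b₂ = 2b₁ and b₃ = -b₁
b_bad  = [1, 0,  0]
@show norm(u1 * (u1' * b_good) - b_good)  # ≈ 0, so Ax = b_good is solvable
@show norm(u1 * (u1' * b_bad)  - b_bad)   # not ≈ 0, so Ax = b_bad is not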

  5. Let A be the matrix below with the full SVD
In [6]:
A = [1 4; 2 9;-1 -4]
Out[6]:
3×2 Array{Int64,2}:
  1   4
  2   9
 -1  -4
In [7]:
using LinearAlgebra
U,s,V =svd(A, full=true)
display(U)
display(s)
display(V)
3×3 Array{Float64,2}:
 -0.377924   0.59764    0.707107   
 -0.84519   -0.534466  -1.38778e-15
  0.377924  -0.59764    0.707107   
2-element Array{Float64,1}:
 10.907941643728067  
  0.12964990174715935
2×2 Adjoint{Float64,Array{Float64,2}}:
 -0.224261   0.974529
 -0.974529  -0.224261

5a. What is the rank of this matrix?
5b. For which right-hand sides is Ax=b solvable? (Find a condition on b₁, b₂, b₃.)

Answer: 5a: 2
5b: $Ax=b$ is solvable when $b$ is a linear combination of the first two columns of $U$, which span the same plane as the columns of $A$. Both columns of $A$ have third entry equal to minus the first entry, so the condition is $b_3=-b_1$ (with $b_2$ free).
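Similarly, a quick check reusing `A` and `U` from cells [6] and [7] (the vector `b` is an arbitrary choice with b₃ = -b₁):

In [ ]:
using LinearAlgebra
P = U[:,1:2] * U[:,1:2]'    # projector onto the column space of A
b = [2.0, 7.0, -2.0]        # an arbitrary b with b₃ = -b₁
@show norm(P*b - b)         # ≈ 0, so b is in the column space
@show norm(A*(A\b) - b)     # ≈ 0, so Ax = b is solvable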

(6) Explain why the set of singular matrices is not a subspace.

Answer: It is not a subspace because it is not closed under addition: the sum of two singular matrices can be nonsingular. For example, $\begin{pmatrix}1&0\\0&0\end{pmatrix}+ \begin{pmatrix}0&0\\0&1\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$.
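A one-line Julia check of this example (just confirming the determinants):

In [ ]:
using LinearAlgebra
S1, S2 = [1 0; 0 0], [0 0; 0 1]
@show det(S1), det(S2), det(S1 + S2)   # 0.0, 0.0, 1.0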

(7) If the 9x12 system Ax=b is solvable for every b then the column space of A is .......?

Answer: All of $\mathbb{R}^9$, i.e., the space of all $9\times 1$ vectors.

(8) GS p143 3.2 15 done with the svd on a computer:

Construct a matrix for which N(A) = all combinations of (2,2,1,0) and (3,1,0,1)

Step 1: Find an orthogonal matrix whose first two columns are linear combinations of the given vectors:
Notice that we input a 4x2 matrix, but Julia's QR returns a complete square orthogonal matrix whose first two columns are the Q we saw in class.

In [ ]:
using LinearAlgebra
N = [2 3
     2 1
     1 0
     0 1];
Q, = qr(N) # we don't need R just the "Q"
W = Q[:,[3,4]] # take the last two columns of Q ("[3,4]" means take columns 3 and 4; note that the commas are needed)

Step 2: W' immediately gives a right answer. Let's check this.

In [20]:
W'N
Out[20]:
2×2 Array{Float64,2}:
 -1.11022e-16  -1.66533e-16
 -8.32667e-17   3.33067e-16
In [ ]:
using LinearAlgebra
A = W'                  # the constructed answer from Step 2
U,s,V = svd(A, full=true)
display(U)
display(s)
display(V)

Understanding that the last two columns of Q are the completion of the first two columns to an orthogonal matrix, explain why this worked.

Answer: Write the columns of $N$ as $N_1, N_2$. We want to show that $W^Tx=0$ if and only if $x$ is a linear combination of $N_1$ and $N_2$.
Write the columns of the orthogonal matrix $Q$ as $Q_1, \cdots, Q_4$. Since $N=QR$ and the nonzero $2\times 2$ block of $R$ is invertible (the columns of $N$ are independent), the space of all linear combinations of $Q_1$ and $Q_2$ is the same as the space of all linear combinations of $N_1$ and $N_2$. So the statement above is equivalent to: $W^Tx=0$ if and only if $x$ is a linear combination of $Q_1$ and $Q_2$.
1) If $x$ is a linear combination of $Q_1$ and $Q_2$, then $x= \begin{pmatrix}Q_1&Q_2\end{pmatrix}b$ for some $2\times 1$ vector $b$. Since $Q$ is an orthogonal matrix, $Q_i^T Q_j=Q_i\cdot Q_j=0$ for $i\neq j$. Thus $W^Tx=\begin{pmatrix}Q_3^T\\ Q_4^T\end{pmatrix}\begin{pmatrix}Q_1&Q_2\end{pmatrix}b=0$.
2) For the other direction, assume $W^Tx=0$. Let $M=Q[:,[1,2]]=\begin{pmatrix}Q_1&Q_2\end{pmatrix}$. Note that $I=QQ^T=MM^T+WW^T$. So $x=(MM^T+WW^T)x=M(M^Tx)+ W(W^Tx)= M(M^Tx)$, which is a linear combination of $Q_1$ and $Q_2$.
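The identity $I=QQ^T=MM^T+WW^T$ used in 2) can also be checked numerically, continuing from the cells above ($M$ here is just the first two columns of $Q$):

In [ ]:
using LinearAlgebra
M = Q[:,[1,2]]                # the first two columns of Q
@show norm(M*M' + W*W' - I)   # ≈ 0, so I = MM^T + WW^T
@show norm(W' * N)            # ≈ 0, the same check as Step 2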

(9) (Julia submit a screenshot problem) GS p143 3.2 16:
With Julia construct A so that the nullspace of A = all multiples of (4,3,2,1). Its rank is ..... ?

In [25]:
using LinearAlgebra
N = [4
     3
     2
     1]
# Please finish the computation following problem 8 as a template
Out[25]:
4-element Array{Int64,1}:
 4
 3
 2
 1
In [26]:
# please provide a screenshot of your check

Answer: 3.

In [1]:
using LinearAlgebra
N = [4
     3
     2
     1];
Q,=qr(N)
W = Q[:,[2,3,4]]
Out[1]:
4×3 Array{Float64,2}:
 -0.547723   -0.365148  -0.182574 
  0.826619   -0.115587  -0.0577936
 -0.115587    0.922942  -0.038529 
 -0.0577936  -0.038529   0.980735 
In [2]:
W'N
Out[2]:
3-element Array{Float64,1}:
 4.440892098500626e-16
 2.220446049250313e-16
 2.220446049250313e-16
In [3]:
using LinearAlgebra
U,s,V =svd(W, full=true)
display(s)
3-element Array{Float64,1}:
 1.0               
 1.0               
 0.9999999999999999

(10) Use the svd to explain why no 3x3 matrix has a nullspace that equals its column space.

Answer: Let $A$ be a $3\times 3$ matrix of rank $r$, with full SVD written in block form as $A=\begin{pmatrix}U_1&U_2\end{pmatrix}\begin{pmatrix}\Sigma & 0\\0& 0\end{pmatrix} \begin{pmatrix}V_1 &V_2\end{pmatrix}^T$, where $\Sigma$ is the $r\times r$ diagonal matrix of nonzero singular values and $U_1$, $V_1$ have $r$ columns. The nullspace of $A$ is spanned by the columns of $V_2$, so its dimension is $3-r$; the column space is spanned by the columns of $U_1$, so its dimension is $r$. Since $3$ is odd, $3-r\neq r$ for every integer $r$, so the two subspaces have different dimensions and can never be equal.
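A short Julia sketch with arbitrary random low-rank matrices illustrates the dimension count:

In [ ]:
using LinearAlgebra
A1 = randn(3,1) * randn(1,3)             # a random rank-1 matrix
@show rank(A1), size(nullspace(A1), 2)   # (1, 2): dimensions differ
A2 = randn(3,2) * randn(2,3)             # a random rank-2 matrix
@show rank(A2), size(nullspace(A2), 2)   # (2, 1): dimensions differ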