(1) Let A = UΣVᵀ be the compact SVD of A. Write the projection matrix onto the column space of A in simplest terms, using (possibly) U, Σ, or V.
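For reference (standard background from GS, not itself the requested answer): when A has independent columns, the projection onto C(A) is

```latex
P \;=\; A\,(A^{\mathsf T}A)^{-1}A^{\mathsf T},
\qquad P^{\mathsf T} = P, \qquad P^{2} = P .
```

The exercise asks for the simpler form this takes when A is written via its compact SVD; the SVD form also covers rank-deficient A, where AᵀA is not invertible.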
(2) Let A be the matrix below, with the full SVD as computed. (Note: numbers on the order of e-16 or e-15 may be considered to be 0.)
A = [1 3 1;3 8 2;2 4 0]
3×3 Array{Int64,2}:
 1  3  1
 3  8  2
 2  4  0
using LinearAlgebra
U, s, V = svd(A, full=true)
display(U)
display(s)
display(V)
3×3 Array{Float64,2}:
 -0.31854   -0.369631  -0.872872
 -0.848437  -0.299463   0.436436
 -0.422713   0.879599  -0.218218
3-element Array{Float64,1}:
 10.335397940329974
  1.0860706307708408
  5.333338897353784e-16
3×3 Adjoint{Float64,Array{Float64,2}}:
 -0.358891   0.452251    0.816497
 -0.912783   0.0127012  -0.408248
 -0.195001  -0.8918      0.408248
A) Argue that b = (b₁,b₂,b₃) is in the column space if and only if 4b₁ − 2b₂ + b₃ = 0. (You can use Julia, or eyeball and approximate the right numbers to two digits or so.)
B) What combination of rows of A gives the zero row?
C) Eyeballing the numbers some more, what combination of columns of A gives the zero column?
# D) What value of i would have made the answer to A and B easier?
# Type it in and execute.
i =
U[:,i]/U[i,i]
(3) Below we practice finding the general solution to Ax = b in the context of a floating-point computation. You should be able to eyeball the solutions and write down nice numbers, assuming that reasonable rounding and approximate guessing will work out.
A = [1 3 1 2;2 6 4 8; 0 0 2 4]
3×4 Array{Int64,2}:
 1  3  1  2
 2  6  4  8
 0  0  2  4
# using LinearAlgebra
U, s, V = svd(A, full=true)
display(U)
display(s)
display(V)
3×3 Array{Float64,2}:
 -0.297548  -0.494771   0.816497
 -0.903516  -0.130351  -0.408248
 -0.308421   0.859191   0.408248
3-element Array{Float64,1}:
 12.117224233268539
  2.858824387871658
  1.7206525645354256e-15
4×4 Adjoint{Float64,Array{Float64,2}}:
 -0.173685  -0.26426    -0.948683    -1.7461e-16
 -0.521055  -0.792781    0.316228     5.1856e-17
 -0.373721   0.245628    3.05311e-16  -0.894427
 -0.747441   0.491255    6.10623e-16   0.447214
Suppose we calculate (ignoring floating-point effects)
U*U'*[-.5, 2, 3] # Careful: this is the full U
3-element Array{Float64,1}:
 -0.5
  1.9999999999999978
  2.9999999999999987
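The same computation can be mirrored in NumPy (a Python sketch; the notebook itself is Julia). A point worth noting: with the full U, the product UUᵀ is exactly the identity, so the informative membership test for the column space uses only the first r columns of U:

```python
import numpy as np

# The matrix and right-hand side from this problem.
A = np.array([[1., 3., 1., 2.],
              [2., 6., 4., 8.],
              [0., 0., 2., 4.]])
b = np.array([-0.5, 2., 3.])

U, s, Vt = np.linalg.svd(A)   # full SVD (full_matrices=True is the default)
r = int(np.sum(s > 1e-10))    # numerical rank: count the nonzero singular values

# With the full U, U @ U.T is the identity, so U @ U.T @ b == b for *every* b.
# The informative test projects b onto span of the first r columns of U:
proj = U[:, :r] @ (U[:, :r].T @ b)
print(np.allclose(proj, b))   # True exactly when Ax = b has a solution
```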
A) What does the above say about whether Ax = b has a solution?
B) Find the right r below. Fill it in and execute.
C) What does the below say about the solution to Ax=b?
D) Convert these decimals to simple fractions and check by hand.
r =
(V[:,1:r]*((U[:,1:r]'*[-.5,2,3])./s[1:r]))
E) Write down the general solution to Ax = b by eyeballing the information in the SVD.
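The expression in part B is the rank-r pseudoinverse applied to b. Here is a NumPy sketch of the same formula on a different, made-up small system (so it does not give away this problem's r):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [0., 1.]])             # a rank-2 example matrix
b = np.array([1., 2., 1.])           # chosen to lie in col(A)

U, s, Vt = np.linalg.svd(A, full_matrices=True)
r = int(np.sum(s > 1e-10))           # numerical rank

# Particular (minimum-norm) solution, mirroring the Julia expression
# V[:,1:r]*((U[:,1:r]'*b)./s[1:r]).
x = Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])
print(np.allclose(A @ x, b))         # True: x solves Ax = b
```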
(4) (Based on Problem 1 on p. 175 of GS.) Use the SVD to show A) that v₁, v₂, v₃ are independent but B) v₁, v₂, v₃, v₄ are dependent.
v₁ = [1,0,0]
v₂ = [1,1,0]
v₃ = [1,1,1]
v₄ = [2,3,4]
A = [v₁ v₂ v₃]
3×3 Array{Int64,2}:
 1  1  1
 0  1  1
 0  0  1
svdvals(A)
3-element Array{Float64,1}:
 2.246979603717467
 0.8019377358048381
 0.5549581320873711
B = [ ] # fill in the right values (Note: you can copy-paste subscripts or type v\_1<tab>)
svdvals(B)
(5) From GS Problem 16 (page 176)
(6) From GS Problem 24 (page 177). True or False (with a reason based on the SVD):
(A) If the columns of a matrix are dependent, so are the rows.
(B) The column space of a 2×2 matrix is the same as its row space.
(C) The column space of a 2×2 matrix has the same dimension as the row space.
(D) The columns of a matrix are a basis for the column space.
(7) If A = QR, where R is non-singular, A) what is the projection P onto the column space of A, in terms of (possibly) Q and R?
B) Write Pᵀ in terms of P.
C) Write P² in terms of P.
(8) (GS p. 217, Problem 26) If an m by m matrix has A² = A and rank m, prove that A = I. (Hint: for any vector x, let w solve Aw = x. Now show Ax = x.)
(9) (A projection matrix is a symmetric matrix such that P² = P.)
If P is a projection matrix, show that I − P is one as well.
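A quick numerical illustration of the claim (a NumPy sketch with a made-up vector v, not the requested proof): build the projection P = qqᵀ onto a line, and check that both P and I − P are symmetric and idempotent.

```python
import numpy as np

v = np.array([1., 2., 2.])
q = v / np.linalg.norm(v)   # unit vector along v
P = np.outer(q, q)          # projection onto the line through v
Q = np.eye(3) - P           # the claim: this is also a projection matrix

for M in (P, Q):
    # symmetric?             idempotent?
    print(np.allclose(M, M.T), np.allclose(M @ M, M))   # True True, twice
```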
(10) We will do an in-class demo of recognizing digits (the software is a bit involved for a HW, but it would have been fun). Here is the math.
Suppose that in 784-dimensional space we have 1000 vectors collected in a 784×1000 matrix called apples, and another 1000 vectors collected in a 784×1000 matrix called oranges. Each matrix has rank 784, but the best rank-50 approximation is very good for both the apples and the oranges matrix.
(Note: much of science and engineering is about learning to deal with the inexact. We all find the exact so much more comforting, so this problem might take you out of your comfort zone, but only a little.)
(A) What is the size of the U matrix for the exact compact SVD for the apples (or the oranges) matrix?
(B) Suppose a new vector comes along and we want to decide if it's best classified as an apple or as an orange.
Would projecting it onto the column space of either U help? Why or why not?
(C) Consider the dot product of the new vector with the first column of the apple U and with the first column of the orange U. We now have two numbers. What might you hope is true if the orange dot product is much larger than the apple one?
(D) Let's take the first k columns of the apple U and also the first k columns of the orange U. Consider ‖Uᵀ · (new vector)‖ for the apple and the orange. What might you expect as k goes from 1 to 784?
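A low-dimensional toy version of this setup (a NumPy sketch with made-up synthetic data, not the in-class demo; all names here are hypothetical): each class's sample matrix is nearly low-rank, and comparing ‖Ukᵀx‖ across the two classes classifies a new vector x.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 20, 100, 3   # toy stand-ins for 784, 1000, 50

# Two classes concentrated near different k-dimensional subspaces, plus noise.
basis_a = np.linalg.qr(rng.normal(size=(d, k)))[0]
basis_b = np.linalg.qr(rng.normal(size=(d, k)))[0]
apples  = basis_a @ rng.normal(size=(k, n)) + 0.05 * rng.normal(size=(d, n))
oranges = basis_b @ rng.normal(size=(k, n)) + 0.05 * rng.normal(size=(d, n))

# First k left singular vectors of each class's sample matrix.
Ua = np.linalg.svd(apples,  full_matrices=False)[0][:, :k]
Uo = np.linalg.svd(oranges, full_matrices=False)[0][:, :k]

x = basis_a @ rng.normal(size=k)     # a new vector that is really an "apple"
score_a = np.linalg.norm(Ua.T @ x)   # energy captured by apple directions
score_o = np.linalg.norm(Uo.T @ x)
print(score_a > score_o)             # expect the apple score to win
```

The design choice here mirrors part D: for small k the scores separate the classes sharply, while as k grows toward d both Uk matrices span more and more of the whole space and both scores approach ‖x‖.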