Suppose we have a $5\times 5$ diagonalizable matrix $A$ with diagonalization: $$ A = X \underbrace{\begin{pmatrix} 1 & & & & \\ & 2 & & & \\ & & 2 & & \\ & & & 3 & \\ & & & & 4 \end{pmatrix}}_\Lambda X^{-1} \, $$ and another non-diagonalizable matrix $B$ with the Jordan form: $$ B = Y \underbrace{\begin{pmatrix} 1 & & & & \\ & 2 & 1 & & \\ & & 2 & & \\ & & & 3 & \\ & & & & 4 \end{pmatrix}}_J Y^{-1} $$ You are given a function $$ f(\mu, y) = \Vert (M - \mu I)^{-1} y \Vert $$ where $M$ is either $A$ or $B$, but you aren't told which one.
Now, you pick a $y$ at random and evaluate $$\alpha = \frac{f(2.00001, y)}{f(2.0001, y)} \, $$
(a) What would you expect $\alpha$ to be, approximately, if $M = A$? (Imagine expanding $y$ in the basis of the …)
(b) What would you expect $\alpha$ to be, approximately, if $M = B$? (Imagine expanding $y$ in the basis of the …)
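One way to make the hints in (a) and (b) concrete (a standard resolvent calculation, sketched here as a reminder rather than as part of the problem statement): if $M = X \Lambda X^{-1}$ is diagonalizable with eigenvectors $x_i$ (the columns of $X$) and we expand $y = \sum_i c_i x_i$, then
$$ (M - \mu I)^{-1} y = \sum_i \frac{c_i}{\lambda_i - \mu} x_i \, , $$
so $f(\mu, y)$ blows up like $1/|\lambda_i - \mu|$ as $\mu$ approaches an eigenvalue. For a defective eigenvalue the blowup is faster: the $2\times 2$ Jordan block for $\lambda = 2$ gives
$$ \begin{pmatrix} 2-\mu & 1 \\ & 2-\mu \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{2-\mu} & -\frac{1}{(2-\mu)^2} \\ & \frac{1}{2-\mu} \end{pmatrix} \, , $$
which contains a $1/(2-\mu)^2$ term.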
(c) Check your predictions from (a) and (b) in Julia for a random `X = randn(5,5)`, a random `Y = randn(5,5)`, and a random `y = randn(5)`. For convenience, the matrices `Λ` and `J` are entered in Julia below.
using LinearAlgebra
Λ = Diagonal([1,2,2,3,4])
J = Bidiagonal([1,2,2,3,4],[0,1,0,0],:U)
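The check in part (c) might be carried out along the following lines (a sketch, not the only way to do it; `f` here mirrors the function defined in the problem, and the expected values of α are left for you to predict):

```julia
using LinearAlgebra

Λ = Diagonal([1,2,2,3,4])
J = Bidiagonal([1,2,2,3,4], [0,1,0,0], :U)

X, Y, y = randn(5,5), randn(5,5), randn(5)
A = X * Λ / X   # A = XΛX⁻¹
B = Y * J / Y   # B = YJY⁻¹

# f(μ, y) = ‖(M − μI)⁻¹y‖, computed with a linear solve rather than an explicit inverse
f(M, μ, y) = norm((M - μ*I) \ y)

αA = f(A, 2.00001, y) / f(A, 2.0001, y)
αB = f(B, 2.00001, y) / f(B, 2.0001, y)
```

Comparing `αA` and `αB` with your answers to (a) and (b) should let you tell the simple-eigenvalue scaling apart from the Jordan-block scaling.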
Suppose $A$ is a positive Markov matrix, so that there is a steady-state eigenvalue $\lambda=1$ with algebraic and geometric multiplicity 1 and all other eigenvalues have $|\lambda|<1$, as stated in class.
Now suppose, however, that $A$ is defective. Does $A^n x$ still converge as $n \to \infty$ to a steady-state eigenvector for an arbitrary initial $x$? Why or why not? (If we expand $x$ in a basis of eigenvectors + generalized eigenvectors, how does each term behave when you multiply by $A^n$?)
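The heart of the hint is that a Jordan block with $|\lambda| < 1$ still decays under repeated multiplication: $J^n$ picks up polynomial factors like $n\lambda^{n-1}$ from the generalized-eigenvector terms, but these are overwhelmed by the exponential decay of $\lambda^n$. A quick numerical sanity check (the $2\times 2$ block here is illustrative, not part of the problem):

```julia
using LinearAlgebra

λ = 0.5                 # any eigenvalue with |λ| < 1
Jb = [λ 1.0; 0.0 λ]     # a defective (Jordan) block
# Jbⁿ = [λⁿ n·λⁿ⁻¹; 0 λⁿ]: the n·λⁿ⁻¹ entry grows only linearly in n,
# so it is still crushed by the exponential decay of λⁿ⁻¹:
decay = norm(Jb^200)    # essentially zero
```

By the same reasoning, every term of $A^n x$ except the one along the $\lambda = 1$ eigenvector dies out.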
$A$ is a Hermitian positive-definite matrix with eigenvalues (sorted in descending order) $\lambda_1 > \lambda_2 > \lambda_3 > \cdots > 0$.
We carry out 1000 steps of the power method (repeatedly computing $q \leftarrow \mathrm{normalize}(Aq)$ from a random starting vector, where we define $\mathrm{normalize}(x) = x / \Vert x \Vert$) and we get a pretty good estimate of an eigenvector $q_1$ of the biggest eigenvalue $\lambda_1$.
(a) If we pick another random starting vector $x$, compute $y = x - q_1 q_1^H x$, and apply 1000 steps of the power method, what would you expect to get if $q_1$ were exactly an eigenvector of $\lambda_1$ and the arithmetic is carried out exactly (no tiny roundoff errors in the 15th significant digit)? An eigenvector for which eigenvalue?
(b) If you carry out the steps of (a) but $q_1$ is only a (very good) approximation of an eigenvector (such as you might get from the power method) and/or you make tiny roundoff errors in each arithmetic step, what would you expect to get after 1000 iterations of the power method? An eigenvector for which eigenvalue?
(c) In practice, once the first eigenvector $q_1$ has been found (approximately), the power method is better carried out by repeatedly computing $q \leftarrow \mathrm{normalize}((I-q_1q_1^H)Aq)$, a process called "deflation". This should converge to an eigenvector for which eigenvalue?
(d) Check your predictions in parts (b) and (c) numerically below for a random $A$ by filling in the `????` lines.
using LinearAlgebra
B = randn(5,5)
A = B'B # a random positive-definite Hermitian matrix
λ = reverse(eigvals(A)) # eigenvalues in descending order
# the power iteration to find q₁
q₁ = normalize(rand(5))
for i = 1:1000
q₁ = normalize(A*q₁)
end
# check that this is approximately an eigenvector of λ₁
# — the following difference should be almost zero:
A*q₁ - λ[1]*q₁
# the power iteration in part (b)
q⁽ᵇ⁾ = normalize((I - q₁*q₁')*rand(5))
for i = 1:1000
q⁽ᵇ⁾ = normalize(A*q⁽ᵇ⁾)
end
# check that this is approximately an eigenvector of which λ?
A*q⁽ᵇ⁾ - λ[????]*q⁽ᵇ⁾
# the power iteration in part (c)
q⁽ᶜ⁾ = normalize(rand(5))
for i = 1:1000
q⁽ᶜ⁾ = normalize((I - q₁*q₁')*A*q⁽ᶜ⁾)
end
# check that this is approximately an eigenvector of which λ?
A*q⁽ᶜ⁾ - λ[????]*q⁽ᶜ⁾