due Friday 5/10 at 11:59pm
(1) (GS prob 1, page 480) What is the steady state eigenvector of
$$A = \begin{pmatrix} 0.9 & 0.15 \\ 0.1 & 0.85 \end{pmatrix}$$
$A$ has eigenvalue $1$ with corresponding (steady state) eigenvector $(0.6, 0.4)$: solving $(A-I)x=0$ gives $-0.1x_1+0.15x_2=0$, so $x_1=1.5\,x_2$, and normalizing the entries to sum to $1$ yields $(0.6, 0.4)$.
You should do this by hand; if you like, you can check it with Julia.
using LinearAlgebra
A = [0.9 0.15; 0.1 0.85]
λ, X = eigen(A)
i = argmax(real.(λ))       # pick out the eigenvalue 1 (eigen does not guarantee it comes first)
X[:, i] / sum(X[:, i])     # normalize the eigenvector so its entries sum to 1
2-element Array{Float64,1}: 0.6000000000000001 0.39999999999999997
Show that the square of a positive Markov matrix is a positive Markov matrix. Show also that the square of a general Markov matrix is again Markov. (Hint: there is an easy way and a more difficult way.)
A matrix $A$ is a positive (resp. general) Markov matrix precisely when all its entries are positive (resp. non-negative) and $\mathbf{1} A=\mathbf{1}$, where $\mathbf{1}$ denotes the all ones vector of the appropriate size. Clearly, the product of two matrices with only positive (resp. non-negative) entries has only positive (resp. non-negative) entries. As $\mathbf{1} A^2=(\mathbf{1} A) A=\mathbf{1} A=\mathbf{1}$, we conclude that the square of a positive (resp. non-negative) Markov matrix is a positive (resp. non-negative) Markov matrix.
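As a quick numerical sanity check (not a substitute for the proof), Julia confirms this for the matrix $A$ from problem (1):
A = [0.9 0.15; 0.1 0.85]
A2 = A^2
sum(A2, dims=1)    # each column of A² still sums to 1
all(A2 .> 0)       # all entries of A² remain positive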
What can you say about the steady state of a symmetric positive Markov matrix?
If $A$ is a symmetric positive Markov matrix, then its columns sum to $1$, and by symmetry each row also sums to $1$; hence $A$ has eigenvalue $1$ with corresponding eigenvector $\mathbf{1}$. As a positive Markov matrix has $1$ as its unique eigenvalue of maximum magnitude, the steady state eigenvector is $\frac{1}{n} \mathbf{1}$, where $A$ is $n \times n$: the steady state is uniform.
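For instance, with an arbitrary symmetric positive Markov matrix (the particular entries below are just one example):
A = [0.5 0.3 0.2; 0.3 0.4 0.3; 0.2 0.3 0.5]   # symmetric, positive, columns sum to 1
A * fill(1/3, 3)                              # returns the uniform vector: it is the steady state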
Show that if $M$ is positive definite, so is $M^{-1}$.
By definition, $M$ is symmetric, say of dimension $n \times n$, and has eigenvalues $\lambda_1,\ldots,\lambda_n >0$. $M^{-1}$ exists, as $M$ does not have eigenvalue $0$. $M^{-1}$ is symmetric as $(M^{-1})^T=(M^T)^{-1}=M^{-1}$. Lastly, $M^{-1}$ has eigenvalues $\lambda_1^{-1},\ldots,\lambda_n^{-1}>0$. Therefore, $M^{-1}$ is positive definite.
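A numerical spot check (a sketch using a random matrix; $B^T B + I$ is positive definite with probability $1$):
using LinearAlgebra
B = randn(4, 4)
M = Symmetric(B' * B + I)                        # positive definite
sort(eigvals(inv(M))) ≈ sort(1 ./ eigvals(M))    # true: the eigenvalues of M⁻¹ are the reciprocals
isposdef(inv(M))                                 # true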
(5a) We have mentioned that the image of the unit ball under $A$ is an ellipsoid. This means that the set of all $y=Ax$ such that $\|x\|=1$ is an ellipsoid. Show that if $A$ is square and invertible, the equation of this ellipsoid is $$y^T(AA^T)^{-1}y=1.$$
(5b) Turning this around, if $M$ is positive definite, the equation $y^T M y = 1$ describes an ellipsoid. Show that its semi-axes have lengths equal to the reciprocals of the square roots of the eigenvalues of $M$, and that their directions are the eigenvectors.
a) Writing $x=A^{-1}y$, the point $y$ lies on the ellipsoid precisely when $1=\|x\|^2=(A^{-1} y)^T (A^{-1} y)=y^T (A^T)^{-1} A^{-1} y=y^T (AA^T)^{-1} y$.
b) As $M$ is positive definite, say of dimension $n \times n$, it has positive real eigenvalues $\lambda_1,\ldots,\lambda_n$ with corresponding orthonormal eigenvectors $v_1,\ldots,v_n$. For each $v_i$, the ellipsoid has a semi-axis $\alpha_i v_i$, where $\alpha_i$ is a real scalar such that $\alpha_i v_i$ lies on the ellipsoid, and therefore satisfies $1=(\alpha_i v_i)^T M (\alpha_i v_i)=\alpha_i^2 \lambda_i$. The semi-axes of the ellipsoid are then given by $\frac{1}{\sqrt{\lambda_i}} v_i$.
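A quick numerical check of part a) (a sketch; a random square $A$ is invertible with probability $1$):
using LinearAlgebra
A = randn(3, 3)
x = normalize(randn(3))      # a random point on the unit sphere
y = A * x
y' * ((A * A') \ y) ≈ 1      # true: y satisfies the ellipsoid equation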
For which values of $c$ is the matrix below positive definite?
$$\begin{pmatrix} c & 1 & 1 \\ 1 & c & 1 \\ 1 & 1 & c \end{pmatrix}$$
As the above matrix is equal to $J+(c-1)I$, where $J$ is the $3 \times 3$ all ones matrix with eigenvalues $3, 0, 0$, it has eigenvalues $3+(c-1)=c+2$ with multiplicity $1$ and $c-1$ with multiplicity $2$, and is therefore positive definite precisely when $c>1$.
If Interact happens to work for you, you can play with this in Julia:
using Interact, LinearAlgebra
@manipulate for c = 0.5:0.01:5
    round.(eigvals([c 1 1; 1 c 1; 1 1 c]), digits=3)
end
The eigenvalues of this $3 \times 3$ matrix are readily found by first getting the eigenvalues of the $3 \times 3$ all ones matrix and then doing what? This method requires almost no pencil and paper, and, more importantly, it helps you understand why the eigenvalues are what they are.
In class and in the textbook, positive-definiteness was defined only for Hermitian matrices. However, it can be extended to arbitrary square matrices as follows:
The matrix $A$ is positive-definite if $A + A^H$ is positive-definite (in the sense defined in class and in the book: $A + A^H$ has positive eigenvalues, or equivalently $x^H(A+A^H)x > 0$ for all $x \ne 0$, or equivalently $A^H + A = B^H B$ where $B$ has full column rank). Show that if $A + A^H$ is positive definite, then the eigenvalues of $A$ have positive real parts.
(Hint: consider an eigenvector $Ax = \lambda x$. How can you use this in one of the positive-definite conditions for $A+A^H$ above?)
Consider an eigenvalue $\lambda$ of $A$ with corresponding eigenvector $x$ of unit norm. We have $x^H A x=\lambda x^H x=\lambda$ and $x^H A^H x=(x^H A x)^H=\overline{\lambda}$. As $0<x^H(A+A^H)x=\lambda+\overline{\lambda}=2\,\mathrm{Re}(\lambda)$, $\lambda$ has positive real part.
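A numerical illustration (a sketch; the shift by $4I$ makes $A+A^H$ positive definite with high probability for a $4 \times 4$ Gaussian matrix):
using LinearAlgebra
A = randn(4, 4) + 4I
isposdef(Hermitian(A + A'))    # true: A + Aᴴ is positive definite
all(real.(eigvals(A)) .> 0)    # true: every eigenvalue of A has positive real part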
Suppose a symmetric positive definite matrix $A$ has a unique largest eigenvalue. While pure mathematicians would say that $A^{2019}$ has full rank, applied folks would say that it is very likely rank $1$ for all intents and purposes. Explain this with words or a Julia example (2019 may be too large on a computer).
As $A$ is positive definite, all its eigenvalues are positive real numbers. Write $\lambda_1$ for the unique largest eigenvalue and $\lambda_2<\lambda_1$ for the second largest eigenvalue of $A$. Then $\lambda_2=\frac{\lambda_1}{r}$ for some real number $r>1$. As the largest and second largest eigenvalues of $A^{2019}$ are $\lambda_1^{2019}$ and $\lambda_2^{2019}=\frac{\lambda_1^{2019}}{r^{2019}}$, and $r^{2019}$ is enormous for any fixed $r>1$, all eigenvalues of $A^{2019}$ other than the largest eigenvalue $\lambda_1^{2019}$ are tiny compared to it, and so are, for all numerical purposes, approximately $0$: the computed matrix behaves like a rank $1$ matrix.
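A Julia illustration with a smaller exponent (a sketch; the eigenvalues $2, 1.5, 1, 0.5$ are an arbitrary choice, and $2^{2019}$ would overflow Float64, so we use $A^{200}$):
using LinearAlgebra
Q = Matrix(qr(randn(4, 4)).Q)                # a random orthogonal matrix
A = Q * Diagonal([2.0, 1.5, 1.0, 0.5]) * Q'  # symmetric positive definite
svdvals(A^200)                               # one huge singular value, the rest negligible in comparison
rank(A^200)                                  # numerically 1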
Consider the function $f(x,y)=x^3-12xy+8y^3$. Show that the gradient vanishes at exactly two points, $(0,0)$ and $(2,1)$. One of these is a saddle point and one a local minimum. Which ones and why?
We have $\nabla f(x,y)=(3x^2-12y,24y^2-12x)$. For the gradient to vanish, the second component forces $x=2y^2$, and substituting into the first gives $0=3(2y^2)^2-12y=12y^4-12y$. As this is satisfied precisely when $y=0$ or $y=1$, the gradient vanishes at precisely the points $(0,0)$ and $(2,1)$. To determine whether these points are minima, maxima, or saddle points, we compute the Hessian $H(x,y)=\begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix}=\begin{pmatrix} 6x & -12 \\ -12 & 48y \end{pmatrix}$. $H(2,1)$ is positive definite, as is easily seen from its positive determinant and positive trace, so $(2,1)$ is a local minimum. On the other hand, $H(0,0)$ has eigenvalues $\pm 12$, so $(0,0)$ is a saddle point.
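A quick check of the Hessian eigenvalues in Julia (just verifying the computation above):
using LinearAlgebra
H(x, y) = [6x -12; -12 48y]   # the Hessian of f computed above
eigvals(H(2, 1))              # both eigenvalues positive: (2,1) is a local minimum
eigvals(H(0, 0))              # eigenvalues ±12: (0,0) is a saddle point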
Use the full SVD to derive the so-called "polar factorization" of a matrix: every square matrix $A$ can be factored as $QS$, where $Q$ is orthogonal and $S$ is positive semi-definite. If $A$ is invertible, further show that $S$ is positive definite. (Hint: if $A=U\Sigma V^T$, the orthogonal matrix $Q$ will be $UV^T$. What do we need to complete the story?) (Hint: if you don't yet see the answer, the polar decomposition is on page 394 of Strang.)
We have $A=U\Sigma V^T=U(V^T V)\Sigma V^T=(UV^T)(V \Sigma V^T)=QS$, where $Q=UV^T$ and $S=V \Sigma V^T$. As $U$ and $V$ are orthogonal, so is $Q=UV^T$. $S=V \Sigma V^T$ is clearly symmetric. Moreover, as $V^T=V^{-1}$, $S$ is similar to $\Sigma$ and therefore has non-negative eigenvalues (as $\Sigma$ is a diagonal matrix with a non-negative diagonal) and so is positive semi-definite. If further $A$ is invertible, then $\Sigma$ has strictly positive diagonal, and so $S$ has strictly positive eigenvalues, and is, therefore, positive definite.
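The construction translates directly into Julia (a sketch with a random $A$, which is invertible with probability $1$):
using LinearAlgebra
A = randn(3, 3)
U, σ, V = svd(A)             # Julia's svd returns U, the singular values, and V (not Vᵀ)
Q = U * V'                   # orthogonal factor
S = V * Diagonal(σ) * V'     # symmetric positive (semi-)definite factor
Q * S ≈ A                    # true
Q' * Q ≈ Matrix(I, 3, 3)     # true: Q is orthogonal
isposdef(Symmetric(S))       # true, since this A is invertible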