Suppose that $M$ is a diagonalizable $m \times m$ Markov matrix with all positive entries. For each of the following, say whether the ODE solutions at large times $t$ are expected to be exponentially growing, exponentially decaying, oscillating forever, or approaching a nonzero constant, for a randomly chosen initial condition $x(0)$. Justify your answers.
(a) $\frac{dx}{dt} = Mx$.
(b) $\frac{dx}{dt} = (M-I)x$.
(c) $\frac{dx}{dt} = (M^2-M-I)x$.
Recall that positive Markov matrices have one eigenvalue $\lambda_1=1$, and all other eigenvalues satisfy $|\lambda|<1$. If the matrix is diagonalizable, looking at what happens along each eigenvector is enough to characterize all solutions of the ODE. Also recall from lecture that
$$\frac{dx}{dt}= Ax$$has solution given by
$$x(t)= x_1 e^{\lambda_1 t}c_1(0) + \dots + x_m e^{\lambda_m t}c_m(0),$$where $x_1, \dots, x_m$ are an eigenbasis for $A$, $\lambda_1, \dots, \lambda_m$ are the corresponding eigenvalues, and the values $c_1(0), \dots, c_m(0)$ are determined by the initial condition $x(0)$.
(a) The solution to this ODE has the form
$$x(t)= x_1 e^{t}c_1(0) + \dots + x_m e^{\lambda_m t}c_m(0),$$where $x_j, \lambda_j$ are eigenvector-eigenvalue pairs for $M$ and we've used the fact that $\lambda_1=1$. As $t \to \infty$, the first term dominates the rest, so the solution grows exponentially (for a random initial condition, $c_1(0) \ne 0$, so this term is present). One way to see this is to write $\lambda_j = a_j + i b_j$. For $j \ge 2$, notice that $|\lambda_j|^2 = a_j^2 + b_j^2<1$, so $a_j<1$ as well. Now, $e^{\lambda_j t}= e^{a_j t} e^{i b_j t}$. Factoring $e^t$ out of the expression for $x(t)$, we get $$x(t)= e^t\left(x_1 c_1(0) + x_2 e^{(a_2 -1)t} e^{i b_2 t}c_2(0)+ \dots + x_m e^{(a_m -1)t} e^{i b_m t}c_m(0)\right).$$ All of the terms except the first decay as $t \to \infty$ because $a_j -1 <0$.
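This behavior is easy to check numerically. The following sketch (Python/NumPy; the $4\times 4$ size and the random seed are arbitrary choices) builds a random positive Markov matrix, solves $dx/dt = Mx$ via the eigenbasis formula above, and confirms that $\|x(t)\|/e^t$ settles down to a nonzero constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 positive Markov matrix: columns sum to 1.
M = rng.random((4, 4))
M /= M.sum(axis=0)
x0 = rng.random(4)            # a random initial condition

# Solve dx/dt = Mx via the eigenbasis: x(t) = sum_j c_j(0) e^{lambda_j t} x_j.
lam, X = np.linalg.eig(M)     # eigenvalues lambda_j, eigenvectors in columns of X
c = np.linalg.solve(X, x0)    # coefficients c_j(0) from the initial condition

def x(t):
    return (X @ (c * np.exp(lam * t))).real

# lambda_1 = 1 dominates: ||x(t)|| / e^t approaches the constant |c_1| * ||x_1||.
ratios = [np.linalg.norm(x(t)) / np.exp(t) for t in (10.0, 20.0)]
print(ratios)
```

The two printed ratios agree to several digits, as the subdominant terms have already decayed by $t=10$.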
(b) The solution to this ODE is approaching a nonzero constant $x_1 c_1(0)$ as $t \to \infty$. Recall that the eigenvectors of $M-I$ are the same as the eigenvectors of $M$, and the eigenvalues are $\lambda_j -1$. So the solution to this ODE is
$$x(t)= x_1 c_1(0) + x_2 e^{(a_2 -1)t} e^{i b_2 t}c_2(0)+ \dots + x_m e^{(a_m -1)t} e^{i b_m t}c_m(0).$$(This is almost exactly the same as the solution to the previous ODE: we've subtracted $1$ from each eigenvalue, which is the same as multiplying the previous solution by $e^{-t}$.) As discussed above, $a_j-1<0$ for all $j \ge 2$, so all terms but the first decay as $t \to \infty$, and the solution approaches $x_1 c_1(0)$. This vector is nonzero for a random initial condition $x(0)$.
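Numerically (a sketch in Python/NumPy with an arbitrary random $3\times 3$ example), the solution of $dx/dt = (M-I)x$ indeed flattens out at the constant $c_1(0)\,x_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((3, 3))
M /= M.sum(axis=0)            # positive Markov matrix: columns sum to 1
x0 = rng.random(3)

lam, X = np.linalg.eig(M)
c = np.linalg.solve(X, x0)

def x(t):                     # solution of dx/dt = (M - I)x
    return (X @ (c * np.exp((lam - 1) * t))).real

k = np.argmax(lam.real)       # index of the eigenvalue lambda_1 = 1
limit = (c[k] * X[:, k]).real # the nonzero constant c_1(0) x_1
print(x(40.0), limit)         # these two vectors should agree closely
```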
(c) The behavior of the solution to this ODE as $t \to \infty$ can vary, depending on $M$. The eigenvectors of $M^2 -M -I$ are the same as the eigenvectors of $M$ and the eigenvalues are $\mu_j= \lambda_j^2 - \lambda_j -1$. Writing $\lambda_j = a_j + i b_j$ as above, we have
\begin{align} \mu_j=\lambda_j^2 - \lambda_j -1&= (a_j+ i b_j)^2 - a_j - ib_j -1\\ &= (a_j^2 -b_j^2 - a_j -1) + i (2 a_j b_j - b_j). \end{align}The real part of $\mu_j$ might be negative or positive. For example, say $M$ has an eigenvalue of $-0.8$, so that $a_j=-0.8$ and $b_j=0$. Then $$a_j^2-a_j=1.44,$$ so $\mathrm{Re}[\mu_j]= a_j^2 - b_j^2 - a_j - 1 = 1.44 - 1 = 0.44>0$. In this instance, the solution of the ODE grows exponentially as $t \to \infty$.
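To make this growth case concrete, here is a quick Python/NumPy check: the symmetric matrix below is a positive Markov matrix with eigenvalues $1$ and $-0.8$, so $M^2 - M - I$ should have eigenvalues $-1$ and $0.44$.

```python
import numpy as np

# Positive Markov matrix with eigenvalues 1 and -0.8:
M = np.array([[0.1, 0.9],
              [0.9, 0.1]])

# Eigenvalues of M^2 - M - I are mu_j = lambda_j^2 - lambda_j - 1:
mu = np.sort(np.linalg.eigvals(M @ M - M - np.eye(2)).real)
print(mu)   # [-1.0, 0.44]
```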
On the other hand, if $a_j>0$ (that is, if $\lambda_j$ has positive real part), then $0<a_j<1$, so $a_j^2<a_j$ and $a_j^2-a_j<0$. In this situation, $\mu_j$ has negative real part. So if all eigenvalues of $M$ have positive real parts, then all eigenvalues of $M^2-M-I$ have negative real parts and the solution exponentially decays.
Alternatively, suppose $M$ has an eigenvalue with $a_j=0.5 - \sqrt{1.25 + b_j^2}$, obtained from the quadratic formula applied to $a_j^2 -b_j^2 - a_j -1 = 0$, so that $\mathrm{Re}[\mu_j]=0$. Assuming all of the other terms are decaying (except the one for the complex-conjugate eigenvalue $a_j - ib_j$), the solution will either oscillate forever (if $b_j \ne 0$, noting that $a_j^2 + b_j^2 < 1$ requires $|b_j| \lesssim 0.62481$) or approach a nonzero constant (if $b_j = 0$, in which case $a_j = \frac{1-\sqrt{5}}{2} \approx -0.618$ and $\mu_j = 0$ exactly).
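The constant case can also be checked numerically. The sketch below (Python/NumPy) builds a symmetric positive Markov matrix whose second eigenvalue is exactly $(1-\sqrt{5})/2 \approx -0.618$, the root of $\lambda^2 - \lambda - 1 = 0$ with $|\lambda| < 1$, so $M^2 - M - I$ acquires an eigenvalue of $0$ (up to rounding):

```python
import numpy as np

# Second eigenvalue: the root of lambda^2 - lambda - 1 = 0 with |lambda| < 1.
lam = (1 - np.sqrt(5)) / 2

# Symmetric positive Markov matrix with eigenvalues 1 and lam:
p = (1 + lam) / 2            # trace = 1 + lam; both entries p, 1-p are positive
M = np.array([[p, 1 - p],
              [1 - p, p]])

mu = np.sort(np.linalg.eigvals(M @ M - M - np.eye(2)).real)
print(mu)   # approximately [-1, 0]
```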
In class we showed that, for a complex-conjugate pair of eigenvalues $\lambda_1 = a+ib$ and $\lambda_2 = \overline{\lambda_1} = a-ib$, eigenvectors $x_1$ and $x_2 = \overline{x_1}$, and (scalar) coefficients $c_1$ and $c_2 = \overline{c_1}$, we can write
$$ c_1 e^{\lambda_1 t}x_1 + c_2 e^{\lambda_2 t}x_2 = c_1 e^{\lambda_1 t}x_1 + \overline{c_1 e^{\lambda_1 t}x_1} = 2\mathrm{Re}\left[ c_1 e^{\lambda_1 t}x_1 \right] = 2e^{at} \mathrm{Re}\left[ c_1 e^{ibt}x_1 \right] $$which turned into a vector of terms proportional to $r e^{at} \cos(bt + \phi)$, where the amplitude $r$ and phase $\phi$ depended on the coefficient $c_1$ and the eigenvector $x_1$ components.
Derive that we can alternatively write $$ c_1 e^{\lambda_1 t}x_1 + \overline{c_1 e^{\lambda_1 t}x_1} = e^{at} \left(v_1 \cos(bt) + v_2 \sin(bt) \right) $$ for some vectors $v_1 = \_\_\_\_\_, v_2 = \_\_\_\_\_$ in terms of $c_1$ and $x_1$. (Hint: break $c_1 x_1$ into its real and imaginary parts, i.e. write $c_1 x_1 = \text{(real part)} + i \text{(imag part)}$.)
We already know that
$$c_1 e^{\lambda_1 t}x_1 + \overline{c_1 e^{\lambda_1 t}x_1}=2e^{at} \mathrm{Re}\left[ c_1 e^{ibt}x_1 \right],$$so we just need to write the right-hand side in the desired form.
Following the hint, let $\boxed{y = \mathrm{Re}(c_1 x_1)}$ and $\boxed{z = \mathrm{Im}(c_1 x_1)}$ be the real and imaginary parts of $c_1 x_1$, so that $y, z$ are real vectors. Now:
$$ c_1 x_1 e^{at + ibt} = e^{at} (y+iz) (\cos(bt) + i\sin(bt)) = e^{at} \left( y \cos(bt) - z \sin(bt) + i(\cdots) \right) $$So we have
$$ 2\mathrm{Re}\left[ c_1 e^{\lambda_1 t}x_1 \right] = 2 e^{at} \left(y \cos(bt) - z \sin(bt)\right) $$and we can immediately identify $\boxed{v_1 = 2 y}$ and $\boxed{v_2 = - 2z}$ by comparison with the desired formula.
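As a quick numerical sanity check (Python/NumPy, with arbitrary made-up values of $a$, $b$, $c_1$, and $x_1$), the identity with $v_1 = 2y$ and $v_2 = -2z$ holds:

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary example data: lambda_1 = a + ib, eigenvector x1, coefficient c1.
a, b = -0.5, 2.0
x1 = rng.random(3) + 1j * rng.random(3)
c1 = 1.3 - 0.7j

y, z = (c1 * x1).real, (c1 * x1).imag   # c1*x1 = y + iz
v1, v2 = 2 * y, -2 * z

t = 0.8
lhs = c1 * np.exp((a + 1j * b) * t) * x1
lhs = lhs + lhs.conj()                   # c1 e^{lambda_1 t} x1 + its conjugate
rhs = np.exp(a * t) * (v1 * np.cos(b * t) + v2 * np.sin(b * t))
print(np.allclose(lhs, rhs))             # True: the two forms agree
```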
Professor May Trix is trying to construct an 18.06 homework question in which $\frac{dx}{dt}=Ax$ has the solution $$x(t)=v_{1}e^{-3t}\cos(2t)+v_{2}e^{-t}+v_{3}e^{-3t}\sin(2t)$$ for some nonzero real constant vectors $v_{1},v_{2},v_{3}$, and some initial condition $x(0)$. Help May construct $A, v_{1}, v_{2}, v_{3},$ and $x(0)$:
(a) Write down a numerical formula for a possible real matrix $A$ such that $A$ is as small in size as possible and where $A$ contains no zero entries. Your formula can be left as a product of some matrices and/or matrix inverses — you don't need to multiply them out or invert any matrices, but you should give possible numeric values for all of the matrices in your formula. (You don't need to explicitly check that your $A$ has no zero entries as long as zero entries seem unlikely. e.g. the inverse of a matrix with no special structure probably has no zero entries. It wouldn't hurt to check in Julia, however.)
Hint: do problem 2 first.
(Note that there are many possible answers here, but they will all have certain things in common.)
(b) Using the numbers that you chose from the formula in your previous part, give possible corresponding (numeric) values for $x(0)$, $v_{1}$, $v_{2}$, and $v_{3}$.
(a) First, we identify what the eigenvalues of $A$ must be. Using Problem 2, we recognize part of
$$x(t)=v_{1}e^{-3t}\cos(2t)+v_{2}e^{-t}+v_{3}e^{-3t}\sin(2t)$$as coming from a sum of the form
$$c_1 e^{\lambda_1 t}x_1 + \overline{c_1 e^{\lambda_1 t}x_1} = e^{at} \left(u_1 \cos(bt) + u_2 \sin(bt) \right).$$In particular,
$$v_{1}e^{-3t}\cos(2t)+ v_{3}e^{-3t}\sin(2t)= e^{-3t}(v_1 \cos(2t) + v_3 \sin(2t)).$$
This tells us that two of the eigenvalues of $A$ are $-3 \pm 2i$. The remaining term in $x(t)$, which is $e^{-t}$, tells us that another eigenvalue of $A$ is $-1$.
So to obtain the desired solution, we need to choose a matrix $A$ with three nonzero eigenvalues $-1, -3 \pm 2i$. Since we want $A$ to be as small as possible, we should choose $A$ to be $3 \times 3$. We'll define $A$ by its diagonalization, so now we just need to pick the eigenvectors of $A$. Since we want $A$ to be real, the eigenvectors for $-3 \pm 2i$ must be conjugates of each other. Besides that, it doesn't matter exactly what eigenvectors we pick, as long as they're linearly independent. So for example, we could choose
$$A= X \Lambda X^{-1}= \begin{pmatrix} 2 & 1+i & 1-i\\ 1 & i & -i\\ 1 & -i & i \end{pmatrix} \begin{pmatrix} -1 & & \\ & -3 + 2i &\\ & & -3 -2i \end{pmatrix} X^{-1}. $$Because all of the entries of $X$ are nonzero and have no special structure (other than the complex-conjugate pairs), it's unlikely that we'll get any zero entries in $A$. Checking in Julia, we see that this choice gives: $$\begin{pmatrix} -5 & 6 & 2 \\ -2 & 1 & 2 \\ 2 & -2 & -3 \end{pmatrix}.$$
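The same check can be done in Python/NumPy: the imaginary parts of $X \Lambda X^{-1}$ cancel (up to rounding), leaving the real matrix above.

```python
import numpy as np

# Eigenvector matrix X and eigenvalue matrix Lambda chosen above:
X = np.array([[2, 1 + 1j, 1 - 1j],
              [1, 1j, -1j],
              [1, -1j, 1j]])
Lam = np.diag([-1, -3 + 2j, -3 - 2j])

A = X @ Lam @ np.linalg.inv(X)
print(np.round(A.real, 10))   # [[-5, 6, 2], [-2, 1, 2], [2, -2, -3]]
```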
(b) For the $A$ we've chosen, we know solutions to $dx/dt= Ax$ have the form
$$x(t)= c_1 x_1 e^{-t} + 2e^{-3t} \mathrm{Re}\left[c_2 x_2 e^{2it}\right]$$where $x_1, x_2$ are eigenvectors of $A$ and the coefficients $c_1, c_2$ are determined by expanding the initial condition $x(0)$ in the basis of eigenvectors. We'll go about this backwards---we'll choose $c_1, c_2$ and then figure out what $x(0)$ must be. We can choose $c_1, c_2$ to be any nonzero complex numbers. Looking at the formula for $\mathrm{Re}\left[c_2 x_2 e^{2it}\right]$ we obtained above, we see that things simplify a lot if we choose $c_2$ to be real. So we choose $c_1=1$ and $c_2=1/2$ (to cancel out the 2). With this choice, we get
\begin{align} x(t)&= x_1 e^{-t} + e^{-3t} \mathrm{Re}\left[ x_2 e^{2it}\right]\\ &= \begin{pmatrix}2 \\1\\1 \end{pmatrix} e^{-t}+ e^{-3t}\left(\begin{pmatrix}1 \\0\\0 \end{pmatrix} \cos(2t) - \begin{pmatrix} 1 \\1\\-1 \end{pmatrix} \sin(2t)\right) \end{align}where in the second line, we are using Problem 2 and the fact that
$$x_2 = \begin{pmatrix}1 \\0\\0 \end{pmatrix} + i \begin{pmatrix}1 \\1\\-1 \end{pmatrix}. $$Written in this form, we see that $\boxed{v_1=[1, 0, 0]}, \boxed{v_2=[2, 1, 1]}$ and $\boxed{v_3=[-1, -1, 1]}$. Plugging in $t=0$, we get
$$\boxed{x(0)=\begin{pmatrix}2 \\1\\1 \end{pmatrix}+ \begin{pmatrix}1 \\0\\0 \end{pmatrix}= \begin{pmatrix}3 \\1\\1 \end{pmatrix}}.$$
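Finally, a Python/NumPy sketch confirming that this $x(t)$ satisfies $dx/dt = Ax$ (by differentiating the formula term by term) and that $x(0) = (3,1,1)$:

```python
import numpy as np

A = np.array([[-5.0, 6, 2],
              [-2, 1, 2],
              [2, -2, -3]])
v1 = np.array([1.0, 0, 0])
v2 = np.array([2.0, 1, 1])
v3 = np.array([-1.0, -1, 1])

def x(t):
    return (v1 * np.exp(-3*t) * np.cos(2*t)
            + v2 * np.exp(-t)
            + v3 * np.exp(-3*t) * np.sin(2*t))

def dxdt(t):   # term-by-term derivative of x(t)
    return (v1 * np.exp(-3*t) * (-3*np.cos(2*t) - 2*np.sin(2*t))
            - v2 * np.exp(-t)
            + v3 * np.exp(-3*t) * (-3*np.sin(2*t) + 2*np.cos(2*t)))

t = 0.7
print(np.allclose(dxdt(t), A @ x(t)), x(0.0))   # True [3. 1. 1.]
```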