(1) Suppose an $m\times n$ matrix $A$ has rank $r$. What are the ranks of
(A) $A^T$? (B) $AA^T$ ? (C) $AA^T + \lambda I$? ($\lambda>0$) (D) $A^TAA^TA$?
Solution
(A) rank($A^T$)=dim(row($A^T$))=dim(col($A$))=rank($A$)=$r$.
(B) Let $A=U\Sigma V^T$ be a full SVD.
Then, $AA^T=(U\Sigma V^T)(U\Sigma V^T)^T=U\Sigma V^TV\Sigma^T U^T=U\Sigma \Sigma^T U^T$, where $\Sigma\Sigma^T$ is the $m\times m$ diagonal matrix $\text{diag}[\sigma_1^2,\cdots,\sigma_r^2,0,\cdots,0]$, which we abbreviate as $\Sigma^2$.
Thus, $U\Sigma^2 U^T$ is an SVD of $AA^T$. Since $\Sigma$ has exactly $r$ positive singular values, so does $\Sigma^2$. Therefore, the rank of $AA^T$ is $r$.
(C) Since $I_m=UU^T$, the equation above yields $AA^T+\lambda I=U\Sigma^2 U^T+\lambda I=U[\Sigma^2+\lambda I_m] U^T$.
Since $\Sigma^2+\lambda I=\text{diag}[\sigma_1^2+\lambda, \cdots,\sigma^2_r+\lambda, \lambda, \cdots,\lambda]$ has all $m$ diagonal entries positive (recall $\lambda>0$), the rank is $m$.
(D) $A^TAA^TA=(U\Sigma V^T)^T(U\Sigma V^T)(U\Sigma V^T)^T(U\Sigma V^T)=V\Sigma^T U^T U\Sigma V^TV\Sigma^T U^T U\Sigma V^T=V\Sigma^T\Sigma\Sigma^T\Sigma V^T=V\Sigma^4 V^T$.
Here $\Sigma^4$ abbreviates the $n\times n$ diagonal matrix $(\Sigma^T\Sigma)^2=\text{diag}[\sigma_1^4,\cdots,\sigma_r^4,0,\cdots,0]$, which has $r$ positive diagonal entries, just like $\Sigma$. Therefore, the rank is $r$.
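As a sanity check, all four answers can be verified numerically in Julia (the matrix below is an arbitrary rank-2 example of our own choosing):
using LinearAlgebra
A = [1 2 3; 2 4 6; 1 0 1]   # m = n = 3; second row is twice the first, so rank 2
r = rank(A)                 # 2
rank(A')                    # 2, matches (A)
rank(A * A')                # 2, matches (B)
λ = 0.5
rank(A * A' + λ*I)          # 3 = m, matches (C)
rank(A' * A * A' * A)       # 2, matches (D)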
(2) (Easy) Without indices show that trace(ABC) = trace(BCA) using trace(XY)=trace(YX) whenever the matrix products make sense.
Solution
trace(ABC)=trace(A[BC])=trace([BC]A)=trace(BCA).
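A quick numerical check in Julia, with rectangular matrices so that only the cyclic rearrangements are defined:
using LinearAlgebra
A, B, C = randn(2,3), randn(3,4), randn(4,2)
tr(A*B*C) ≈ tr(B*C*A)   # true: the products are 2×2 and 3×3, yet the traces agree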
(3) A reflector is defined as a matrix of the form $Q = I -2 uu^T$ where $\|u\|=1$. (A) Show that a reflector is orthogonal by showing that Q is symmetric and $Q^2=I$. (B) Explain briefly why this makes $Q$ orthogonal.
Solution
(A) Firstly we have that
$$Q^T=(I-2uu^T)^T=I^T-(2uu^T)^T=I-2(u^T)^Tu^T=I-2uu^T=Q.$$
Hence, $Q$ is symmetric.
Secondly we have that $$Q^2=(I-2uu^T)^2=I^2-2uu^T-2uu^T+4(uu^T)(uu^T)=I-4uu^T+4u (u^Tu) u^T.$$
We can now use the fact that $u^Tu=\|u\|^2=1$ to obtain $Q^2=I-4uu^T+4u (u^Tu) u^T=I-4uu^T+4uu^T=I$.
(B) Since $Q$ is symmetric, $I=Q^2=QQ=QQ^T=Q^TQ$, and so $Q$ is orthogonal.
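Both properties are easy to confirm in Julia with a random unit vector:
using LinearAlgebra
u = normalize(randn(4))       # random unit vector, ‖u‖ = 1
Q = I - 2*u*u'
Q ≈ Q'                        # true: Q is symmetric
Q*Q ≈ Matrix(I, 4, 4)         # true: Q² = I, so Q'Q = I and Q is orthogonal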
(4) GS p203 13. Put bases for the subspaces V and W in the columns of matrices A and B. Show that V and
W are orthogonal subspaces if and only if $A^TB$ is the zero matrix. To be clear:
(A) Show that if V and W are orthogonal then $A^TB$ is zero,
(B) and if $A^TB$ is the zero matrix then V and W are orthogonal.
Solution
(A) Set $A=[v_1,\cdots,v_m]$ and $B=[w_1,\cdots,w_n]$.
Then, the $(i,j)$-entry of $A^TB$ is $v_i^Tw_j=0$, because $V$ and $W$ are orthogonal. Namely, $A^TB=0$.
(B) Since the $(i,j)$-entry of $A^TB$ is $v_i^Tw_j$, $A^TB=0$ implies $v_i^Tw_j=0$ for all $i,j$.
Given any vectors $v\in V$ and $w\in W$, there exist some constants $a_i,b_j$ such that $v=\sum_{i=1}^ma_iv_i$ and $w=\sum_{j=1}^nb_jw_j$. This follows from the fact that $\{v_1,\cdots,v_m\}$ and $\{w_1,\cdots,w_n\}$ are bases for $V$ and $W$, respectively.
Therefore, $v^Tw=(\sum_i a_iv_i)^T(\sum_j b_jw_j)=\sum_i\sum_j a_ib_j v_i^Tw_j=\sum_i\sum_j a_ib_j\times 0=0$.
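For a concrete check in Julia, take $V$ to be the $x$-axis and $W$ the $yz$-plane in $R^3$ (an orthogonal pair of our own choosing):
A = reshape([1, 0, 0], 3, 1)   # basis of V in the single column of A
B = [0 0; 1 0; 0 1]            # basis of W in the columns of B
A' * B                         # 1×2 zero matrix, confirming V ⊥ W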
(5) GS p204 23: If a subspace $S$ is contained in a subspace $V$, then $S^\perp$ contains $V^\perp$.
Solution
Given any $w\in V^\perp$, we have $w^Tv=0$ for all $v\in V$.
Since $S\subset V$, we have $w^Ts=0$ for all $s\in S$, namely $w\in S^\perp$.
In short, every vector in $V^\perp$ belongs to $S^\perp$. Thus, $V^\perp \subset S^\perp$.
(6) GS p.407 Which of these transformations are not linear? The input is $v=(v_1,v_2)$:
(a) $T(v)=(v_2,v_1)$
(b) $T(v)=(v_1,v_1)$
(c) $T(v)=(0,v_1)$
(d) $T(v)=(0,1)$
(e) $T(v) = v_2 - v_1 $
(f) $T(v)=v_1v_2$
Solution
Answer: (d), (f).
(d) is not linear because $T(0)=(0,1)\neq(0,0)$.
(f) is not linear because $T(1,1)=1$ but $T(2,2)=4\neq 2\,T(1,1)$.
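These counterexamples are easy to verify in Julia, writing each map as a function on tuples:
Td(v) = (0, 1)                   # transformation (d)
Tf(v) = v[1] * v[2]              # transformation (f)
Td((0, 0)) == (0, 0)             # false: T(0) ≠ 0, so (d) is not linear
Tf((2, 2)) == 2 * Tf((1, 1))     # false: 4 ≠ 2, so (f) is not linear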
(7) Which of these transformations is a linear transformation of $n \times n$ matrices $X$:
(Any matrices A and B are fixed n by n matrices.)
(a) $T(X) = X^T$
(b) $T(X) = X^TX$
(c) $T(X) = A.*X$
(d) $T(X) = BXA^T$
(e) $T(X) = AXA$
Solution
Answer: (a), (c), (d), (e).
(b) is not linear because $T(I)=I$ but $T(2I)=4I$.
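The same kind of numerical check works here, with random matrices of our own choosing:
using LinearAlgebra
A, B = randn(3,3), randn(3,3)
X, Y = randn(3,3), randn(3,3)
Td(X) = B * X * A'               # transformation (d)
Td(X + Y) ≈ Td(X) + Td(Y)        # true: additive (and Td(cX) = c·Td(X))
Tb(X) = X' * X                   # transformation (b)
Tb(2X) ≈ 2 * Tb(X)               # false: Tb(2X) = 4·Tb(X)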
(8) Suppose for polynomials of the form $P(x) = a+bx+cx^2 +dx^3$, we say that $P_1 \perp P_2$ when $\int_0^1 P_1(t) P_2(t)\, dt =0$. (a) Explain why this is a natural generalization of the dot product. (b) Describe the polynomials that are orthogonal to the constants $P(x)=a$. (c) What is the dimension of the set of functions of the form $a+bx+cx^2$ that are orthogonal to the constants?
Solution
(A) Like the dot product, it is symmetric: $\int_0^1 P(t)Q(t) dt=\int^1_0 Q(t)P(t) dt$,
and bilinear: $\int_0^1 P(t)(aQ(t)+bR(t))dt=a\int_0^1 P(t)Q(t) dt+b\int^1_0 P(t)R(t) dt$.
In addition, $\int_0^1 P^2(t) dt \geq 0$, with equality only if $P=0$.
(B) For a constant function $Q(x)=M$ with $M\neq 0$, a polynomial $P$ orthogonal to $Q$ satisfies $0=\int_0^1 MP(t)dt=M\int_0^1 P(t)dt$.
Namely, the polynomials orthogonal to the constants are those $P(x)=a+bx+cx^2+dx^3$ satisfying $0=a+\frac12 b+\frac13 c+\frac14 d$.
(C) The set is $\{a+bx+cx^2: 0=a+\frac12 b+\frac13 c \}=\{ b(x-\frac12)+c(x^2-\frac13): b,c \in \mathbb{R} \}$.
Hence, $\{x-\frac12,\,x^2-\frac13\}$ is a basis, and the dimension is $2$.
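We can check (B) and (C) in Julia, representing a cubic by its coefficient vector and using $\int_0^1 t^{i+j}\,dt = 1/(i+j+1)$:
# ⟨P,Q⟩ = ∫₀¹ P(t)Q(t) dt for coefficient vectors (index k ↔ degree k-1)
inner(p, q) = sum(p[i] * q[j] / (i + j - 1) for i in 1:4, j in 1:4)
p0 = [1, 0, 0, 0]                # the constant polynomial 1
p1 = [-1/2, 1, 0, 0]             # x - 1/2
p2 = [-1/3, 0, 1, 0]             # x² - 1/3
inner(p0, p1), inner(p0, p2)     # both 0: the basis vectors from (C) are ⊥ constants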
(9) Consider the graph and associated matrix from Strang's book p.453. You will not need to own the book, nor be familiar with circuits to do this problem.
A graph is a collection of n vertices (called nodes) and m directed line segments called edges. The information in a graph can be tabulated in matrix form. Row i of A corresponds to edge i: there is a -1 in the column of the outgoing vertex and a +1 in the column of the incoming vertex. For example edge 6 goes from node 3 to node 4, so there is a -1 in the (6,3) entry and a +1 in the (6,4) entry.
From a column viewpoint, column j has a +1 for all incoming edges, and a -1 for all outgoing edges.
From an information viewpoint, there is no difference between the picture on the left and the matrix A.
Here is A in Julia, its rank, and rows 1, 2, and 4.
A = [-1 1 0 0
-1 0 1 0
0 -1 1 0
-1 0 0 1
0 -1 0 1
0 0 -1 1]
6×4 Array{Int64,2}:
 -1   1   0  0
 -1   0   1  0
  0  -1   1  0
 -1   0   0  1
  0  -1   0  1
  0   0  -1  1
using LinearAlgebra
rank(A)
3
A[ [1,2,4], :]
3×4 Array{Int64,2}:
 -1  1  0  0
 -1  0  1  0
 -1  0  0  1
(A) Argue that the rows above are independent without a computer.
(B) Find a nullspace vector for the matrix A and argue that the rank of A must therefore be 3.
(C) There are four conditions, one at each node, that one can check for a vector $b$ in $R^6$ to be in the left nullspace. If you speak the language of electrical current, say something about current "in" and current "out" at each node if $b$ is in the left-nullspace. Otherwise just write the four equations.
A loop is defined as a path that traverses each node no more than once. It can be encoded as a vector whose entries are ±1 or 0: +1 means the loop follows the edge in the direction of the arrow, -1 means the opposite, and 0 means the edge is not used. The three loops encoded as vectors appear above.
(D) These three vectors are a basis for the __(fill in the blank) __space of A.
(E) Use the result of (D) to find three conditions that are equivalent to the four conditions in (C). (No worries if you haven't seen this before, but this is the rigorous approach to loop currents that you may have seen in a physics or EE class. Loop currents are a basis for a fundamental space and therefore are more efficient than examining every node.)
(F) Suppose $v$ is in $R^4$; the result $Av$ is physically known as the voltage drop vector. Argue that all voltage drop vectors are orthogonal to the three loop vectors above.
Solution
(A) Let $v_1,v_2,v_4$ be the row vectors above. Suppose $0=c_1v_1+c_2v_2+c_4v_4$.
Looking at entries 2, 3, and 4 of this equation (each of $v_1,v_2,v_4$ has its $+1$ in a different one of these positions), we see immediately that $c_1=c_2=c_4=0$. This shows that they are linearly independent.
(B) Since each row of $A$ contains one $+1$ and one $-1$, we have $(1\, 1\, 1\, 1)^T \in \text{null}(A)$. Thus, $0<\text{dim}(\text{null}(A))=n-r=4-r$.
However, the result of (A) implies $r \geq 3$. Hence, $r=3$.
(C) The four conditions are that $$\boxed{0=-b_1-b_2-b_4, \;\; 0=b_1-b_3-b_5, \;\; 0=b_2+b_3-b_6, \;\; 0=b_4+b_5+b_6.}$$ This follows from the fact that the left null space is the orthogonal complement of the column space. In circuit language, each condition is Kirchhoff's current law at one node: the current flowing in equals the current flowing out.
(D) These three vectors are a basis for the left null space of $A$.
(E) Let us denote the three loop vectors above by $l_1,l_2,l_3$. Since they form a basis for the left null space, $b$ lies in the left null space exactly when $b=c_1l_1+c_2l_2+c_3l_3$ for some constants $c_1,c_2,c_3$. Since only three of the four columns of $A$ are linearly independent, only three of the four conditions in (C) are independent. For example we could use the conditions $$\boxed{0=-b_1-b_2-b_4, \;\; 0=b_1-b_3-b_5, \;\; 0=b_2+b_3-b_6.}$$ The fourth condition given in (C) is implied by these three: their sum is the negative of the fourth.
(F) We recall that a loop vector $l$ belongs to the left null space of $A$. Hence, $l^TA=0$.
Therefore, $0=(l^TA)v=l^T(Av)$, namely any voltage drop vector $Av$ is orthogonal to a loop vector $l$.
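One choice of the three loop vectors (for the loops 1-2-3, 1-2-4, and 2-3-4; a choice of our own, consistent with the edge orientations of A above) can be verified in Julia against both characterizations:
l1 = [1, -1, 1, 0, 0, 0]   # loop 1→2→3→1: edges 1, 3 forward, edge 2 backward
l2 = [1, 0, 0, -1, 1, 0]   # loop 1→2→4→1: edges 1, 5 forward, edge 4 backward
l3 = [0, 0, 1, 0, -1, 1]   # loop 2→3→4→2: edges 3, 6 forward, edge 5 backward
l1' * A, l2' * A, l3' * A  # three zero rows: each loop is in the left null space
A * [1, 1, 1, 1]           # the zero vector, as in (B): (1,1,1,1) spans the null space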
(10) Draw your own (connected) graph with five or more nodes and enough loops,
(A) draw the "A" matrix with one +1 and one -1 in each row
(B) argue that the rank of A is n-1
(C) write down the n-1 loop vectors for your graph, and argue they form a basis for
some fundamental subspace.
(Note: "connected" means that every node can get to every other node by following edges in the + or - direction)
Solution
(A) Example: (drawing omitted) a wheel graph with hub node 5 joined to rim nodes 1, 2, 3, 4: spoke edges $e_1: 5\to 1$, $e_2: 5\to 2$, $e_3: 5\to 3$, $e_4: 5\to 4$, and rim edges $e_5: 1\to 2$, $e_6: 2\to 3$, $e_7: 3\to 4$, $e_8: 4\to 1$. So $n=5$ nodes and $m=8$ edges.
(B) In the example above, the first four row vectors (the four spokes) are independent: their first four entries form the $4\times 4$ identity matrix. Hence, $r\geq 4$.
On the other hand, $(1\,1\,1\,1\,1)^T\in \text{null}(A)$ implies $0<\text{dim}(\text{null}(A))=n-r=5-r$.
Therefore, $r=4=5-1=n-1$.
In the general case, draw a path from node $i$ to node $n$ (such a path exists because the graph is connected). Recall that each row vector is defined by an edge.
By adding or subtracting the row vectors of the edges along this path, we obtain a vector $v_i$ whose $i$-th entry is $+1$, whose $n$-th entry is $-1$, and whose other entries are $0$.
These vectors $v_1,\cdots,v_{n-1}$ are independent, and thus $r \geq n-1$.
On the other hand, $(1\, \cdots, 1)^T$ belongs to the null space, and so $r<n$. In conclusion, $r=n-1$.
(C) In the example, one can take the loop vectors of the four triangles.
Namely
$$\boxed{l_1=\begin{pmatrix} 1 \\ -1\\ 0\\ 0\\ 1\\0\\0\\0\end{pmatrix}, \;\;
l_2=\begin{pmatrix} 0\\ 1\\-1\\0\\0\\1\\0\\0 \end{pmatrix}, \;\;
l_3=\begin{pmatrix} 0\\ 0\\1\\-1\\0\\0\\1\\0\end{pmatrix}, \;\;
l_4=\begin{pmatrix} -1\\0\\0\\1\\0\\0\\0\\1 \end{pmatrix}.} $$
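These four vectors are independent (each is the only one that uses its rim edge, $e_5,\dots,e_8$ respectively), and the left null space has dimension $m-r=8-4=4$, so they form a basis for the left null space of $A$. A Julia check, using the incidence matrix of the wheel graph described in (A):
using LinearAlgebra
# Incidence matrix: spokes e1..e4 are 5→1,...,5→4; rim edges e5..e8 are 1→2, 2→3, 3→4, 4→1
W = [ 1  0  0  0 -1
      0  1  0  0 -1
      0  0  1  0 -1
      0  0  0  1 -1
     -1  1  0  0  0
      0 -1  1  0  0
      0  0 -1  1  0
      1  0  0 -1  0]
rank(W)                    # 4 = n - 1, matching (B)
L = [1 -1 0 0 1 0 0 0
     0 1 -1 0 0 1 0 0
     0 0 1 -1 0 0 1 0
     -1 0 0 1 0 0 0 1]     # the four loop vectors as rows
L * W                      # 4×5 zero matrix: all four loops lie in the left null space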