{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 18.06 Pset 7 (Due Wed 10/25 @ 11am)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 1 (10 points)\n", "\n", "Working with a basis that is *orthogonal* but not *orthonormal* (i.e. not normalizing vectors to length 1) is often convenient, since some other normalization might be convenient in some applications (e.g. in hand calculations square roots are annoying). In this problem, you will see what happens if you don't normalize an orthogonal basis to unit length.\n", "\n", "Suppose that we have a basis $v_1, v_2, \\ldots, v_n \\in \\mathbb{R}^m$ for some subspace $S \\subseteq \\mathbb{R}^m$, which are orthogonal but not normalized to unit lengths. That is:\n", "$$\n", "v_i^T v_j = \\begin{cases} 0 & \\mbox{if } i \\ne j \\\\ s_i & \\mbox{otherwise} \\end{cases}\n", "$$\n", "where $v_i^T v_i = \\Vert v_i \\Vert^2 = s_i > 0$ are the lengths² of the vectors.\n", "\n", "1. If $V = \\begin{pmatrix} v_1 & v_2 & \\cdots & v_n \\end{pmatrix}$ is the $m\\times n$ matrix whose columns are the basis vectors $v_i$, what is $V^T V$?\n", "\n", "2. In terms of $V$ and/or the vectors $v_i$, write out the projection matrix $P$ onto $S$ (a) as product of matrices and (b) as a sum of rank-1 matrices.\n", "\n", "3. Why is computing the projection $Pb$ of a vector $b$ much cheaper than computing $A(A^T A)^{-1}A^T b$ for an arbitray $m \\times n$ full-column-rank matrix $A$? (Be quantitative. The latter requires $\\sim mn + n^3$ operations; what does $Pb$ require using your expressions from the previous part?)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 2 (20 points)\n", "\n", "Often, in least-square fitting, one wants to *weight* each data point inversely with the (given) error estimates (variances) $\\sigma_i^2 > 0$. Data points with more error should *count for less* in the fit.\n", "\n", "That is, if we are trying to fit $y = Ax$ to \"observations\" $b \\in \\mathbb{R}^m$ with fit parameters $x \\in \\mathbb{R}^n$, then we try to minimize:\n", "$$\n", "E = \\sum_{i=1}^m \\frac{(y_i - b_i)^2}{\\sigma_i^2}\n", "$$\n", "\n", "It turns out that this process corresponds to using a *modified dot product* \"$\\cdot_\\sigma$\"\n", "$$\n", "y \\cdot_\\sigma z = \\sum_{i=1}^m \\frac{y_i z_i}{\\sigma_i^2} = y^T W z\n", "$$\n", "where $W$ is the $m \\times m$ diagonal matrix\n", "$$\n", "W = \\begin{pmatrix} \\sigma_1^{-2} & & & & \\\\\n", " & \\sigma_2^{-2} & & & \\\\\n", " & & \\sigma_3^{-2} & & \\\\\n", " & & & \\ddots & \\\\\n", " & & & & \\sigma_m ^{-2} \\end{pmatrix}\n", "$$\n", "We can also define the *modified length* $\\Vert y \\Vert_\\sigma = \\sqrt{y \\cdot_\\sigma y} = \\sqrt{y^T W y}$.\n", "\n", "1. Write the error $E$ from above as an expression in terms of $Ax$, $b$, and $\\cdot_\\sigma$ or $\\Vert \\cdots \\Vert_\\sigma$.\n", "\n", "2. What is the projection matrix $P_\\sigma$ that projects $b$ into a vector $p \\in C(A)$ so that the \"error\" $b - p$ is orthogonal to $C(A)$ under *our modified dot product*? i.e. what is the new form of orthogonal projection?\n", "\n", "3. Derive an equation, analogous (but not identical!) to the normal equations $A^TA\\hat{x}=A^Tb$, for the $\\hat{x}$ that minimizes $E$. (See e.g. the derivation of least-squares in class or in the book; the \"by algebra\" derivation in section 4.3 is particularly short.)\n", "\n", "4. 
 " - If $A = QR$, then give a formula for $R = \\cdots$ in terms of $A$, $Q$, and $W$. (This should give an upper-triangular $R$!)\n", " \n", "5. Apply your Gram-Schmidt process to the following 4 vectors and $W$ matrix in Julia (let Julia be your calculator for matrix/vector operations), and report the value of the *last* vector $q_4$. (At the end, check using the given code that `Q'*W*Q` is nearly $I$)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# our four vectors a1,a2,a3,a4 and our W matrix\n", "A = [ -1 1 -5 -4\n", " 4 1 -6 -2\n", " 2 -3 -7 1\n", " 0 1 -6 -8\n", " -7 1 0 5\n", " -5 8 -5 -5 ]\n", "a1 = A[:,1]\n", "a2 = A[:,2]\n", "a3 = A[:,3]\n", "a4 = A[:,4]\n", "W = diagm([1,2,3,4,5,6])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# functions for our modified dot product and norm\n", "mydot(x,y) = x'*W*y\n", "mynorm(x) = sqrt(mydot(x,x))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "q1 = a1 / mynorm(a1) # here's the first q vector for you\n", "q2 = ????\n", "q3 = ????\n", "q4 = ????" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check that $Q^T W Q \\approx I$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Q = [q1 q2 q3 q4] # put your four vectors into a matrix\n", "Q' * W * Q" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 3 (10 points)\n", "\n", "Suppose that we *reverse the order* of the columns of $A$, do Gram-Schmidt, and then *reverse the order* of the resulting basis to get back a matrix $Q$. Then $A = QS$ for some matrix $S$. What is the expected pattern of nonzero entries in $S$, and why? (i.e. is $S$ upper-triangular or ...?)\n", "\n", "The following Julia code tries this process for a random matrix $A$ to get $Q$. Give a formula for $S = \\cdots$ and compute it in Julia to check your answer (or to give you a hint about what to look for):" ] },
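{ "cell_type": "markdown", "metadata": {}, "source": [ "(A version note, not part of the assignment: the code below assumes the Julia 0.6 kernel named in this notebook's metadata. If you are running a newer Julia (1.x), rough equivalents of the next few cells would be\n", "\n", "```julia\n", "using LinearAlgebra          # qr is provided by the LinearAlgebra stdlib on Julia 1.x\n", "Arev = reverse(A, dims=2)    # replaces flipdim(A, 2)\n", "Qrev = Matrix(qr(Arev).Q)    # the thin Q factor, replacing qr(Arev)[1]\n", "Q = reverse(Qrev, dims=2)    # reverse the columns back\n", "```\n", "\n", "and the linear algebra of the problem is unchanged.)" ] },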
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a randomly chosen 10×4 matrix with small integer entries\n", "A = [ 6 -5 0 6\n", " 9 0 -8 6\n", " 4 -8 -7 0\n", " -9 6 1 -1\n", " 7 3 -3 -4\n", " 1 4 4 6\n", " 8 4 0 9\n", " 7 0 -5 -5\n", " -2 8 -8 -3\n", " -1 7 -9 -1]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Arev = flipdim(A,2) # A with the columns in reverse order" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Qrev = qr(Arev)[1] # the Q from QR factorizing Arev (equivalent to Gram–Schmidt)\n", "Q = flipdim(Qrev,2) # reverse the columns to go back in the same order as A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 4 (10 points)\n", "\n", "(From Strang, section 4.4, problem 18.)\n", "\n", "Find orthogonal vectors $q_1, q_2, q_3$ by Gram-Schmidt from $a_1, a_2, a_3$ given by:\n", "$$\n", "a_1 = \\begin{pmatrix} 1 \\\\ -1 \\\\ 0 \\\\ 0 \\end{pmatrix}, \\;\n", "a_2 = \\begin{pmatrix} 0 \\\\ 1 \\\\ -1 \\\\ 0 \\end{pmatrix}, \\;\n", "a_3 = \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\\\ -1 \\end{pmatrix},\n", "$$\n", "which are a basis for the vectors perpendicular to $d = \\begin{pmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 1 \\end{pmatrix}$. If you form the $4\\times 3$ matrix $Q = \\begin{pmatrix} q_1 & q_2 & q_3 \\end{pmatrix}$, then what is $Q^T d$?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Problem 5 (10 points)\n", "\n", "(From Strang, section 4.4, problem 37.)\n", "\n", "We know that $P = QQ^T$ is the projection onto $C(Q)$, where $Q$ is an $m \\times n$ matrix with orthonormal columns. Now add another column $a$ to produce $A = \\begin{pmatrix} Q & a \\end{pmatrix}$. Gram-Schmidt on $A$ replaces $a$ by what vector $q$? (Give a formula in terms of $a$ and $Q$ and/or $P$.)" ] } ], "metadata": { "kernelspec": { "display_name": "Julia 0.6.0", "language": "julia", "name": "julia-0.6" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "0.6.0" } }, "nbformat": 4, "nbformat_minor": 2 }