$$ \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\paren}[1]{\left(#1\right)} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\ang}[1]{\left\langle#1\right\rangle} \newcommand{\R}{\mathbb{R}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\N}{\mathbb{N}} $$

Math 18: Linear Algebra

Lecture C (Charlesworth)

Last modified: December 22, 2017.

Announcements

Course Information

Homework

Homework assignments will be available on this webpage throughout the term. All homework assignments must be submitted to the drop boxes in the basement of AP&M by 4:00 PM on the deadline.

Assignment 1, due October 6th, 2017.

Answers to these problems are now available here. Note that these contain only the broad strokes of a complete solution, and not necessarily as much detail as you are expected to show.

  1. Go to Wolfram Alpha's System of Linear Equations Problem Generator (or another similar site of your choosing), and practice solving systems of linear equations until you can consistently solve them quickly and correctly. You don't need to submit anything, but the odds of you needing to solve at least one system of equations on the midterm or final are high.

    Do a similar thing for sums, differences, and scalings of vectors.

    You may also find it helpful to work several of the problems in the textbook.

  2. Determine which of the following are systems of linear equations in variables \(x_1,\) \(x_2,\) \(x_3,\) \(x_4,\) and \(x_5.\)
    1. \[\left\{\begin{array}{rcl}x_1 + x_5 &=& \sin(5) \\ x_4 + x_5 &=& \cos(5)\end{array}\right.\]
    2. \[\left\{\begin{array}{rcl}|x_1| + |x_2| &=& 2 \\ |x_3| + |x_4| + |x_5| &=& 3 \\ |x_1| + |x_5| &=& 1\end{array}\right.\]
    3. \[\left\{\begin{array}{rcl}0 &=& 0\end{array}\right.\]
    4. \[\left\{\begin{array}{rcl}x_1 + \cos\left(\frac{\pi}{13}\right)x_2 + \sin\left(\frac{\pi}{5}\right)x_3 &=& 1 \\ 444x_1 + 555x_3 - 523514x_5 &=& -415 \\ e^{\frac{\pi}{6}i}x_3 - e^{\frac{\pi}{173}i+5}x_4 &=& 4+12i\\ x_2 - x_5 &=& 4 + 3 + 2 + 1\end{array}\right.\]
    5. \[\left\{\begin{array}{rcl}x_3 &=& 3 \\ x_5 &=& 5 \\ x_2 &=& -18 \\ x_4 &=& 9 \\ x_5 &=& 92 \end{array}\right.\]
    6. \[\left\{\begin{array}{rcl}x_1 + x_3 + x_5 &=& 8 \\ x_2 + x_4 + x_6 &=& 9 \\ x_3 + x_5 + x_7 &=& 4\end{array}\right.\]
    7. \[\left\{\begin{array}{rcl}x_1 + x_2 &=& 5 \\ x_1 - 3x_2 &=& -1 \\ x_1 + x_2 &=& 5 \\ x_1 - 3x_2 &=& -1 \\ x_1 + x_2 &=& 5 \\ x_1 - 3x_2 &=& -1 \end{array}\right.\]
  3. Suppose that $(s_1, s_2, s_3)$ and $(t_1, t_2, t_3)$ are solutions to the linear equation $a_1x_1 + a_2x_2 + a_3x_3 = b$. Show that $\left(\frac{s_1+t_1}{2}, \frac{s_2+t_2}{2}, \frac{s_3+t_3}{2}\right)$ is a solution to the same linear equation.
  4. Write down three different augmented matrices for linear systems whose solution set contains only the point $(1,2,3)$.
  5. Which of the following matrices are in row echelon form? Of those, which are in reduced row echelon form?
    1. \[\begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ \end{bmatrix}\]
    2. \[\begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}\]
    3. \[\begin{bmatrix} 0 & 1 & 2 & 3\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 3\\ \end{bmatrix}\]
    4. \[\begin{bmatrix} 1 & 5 & 0 & 0 & 2 & 0 & 3\\ 0 & 0 & 0 & 1 & -4 & 0 & 8\\ 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \end{bmatrix}\]
    5. \[\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}\]
    6. \[\begin{bmatrix} 1\\ \end{bmatrix}\]
  6. Determine the solution sets for each of the following systems of linear equations in the variables $x_1,$ $x_2,$ $x_3,$ and $x_4.$
    1. \[\left\{\begin{array}{rcl}x_1 + x_2 + x_3 + x_4 &=& 4 \\ x_3 - x_4 &=& 0\end{array}\right.\]
    2. \[\left\{\begin{array}{rcl}x_1 + 2x_2 &=& 9 \\ -x_1 + 3x_2 &=& 6\end{array}\right.\]
    3. \[\left\{\begin{array}{rcl}x_1 + 2x_2 + 3x_3 &=& 4\\ 5x_1 + 6x_2 + 7x_3 &=& 8 \\ 9x_1 + 10x_2 + 11x_3 &=& 20\end{array}\right.\]
  7. Let $\vec a$ and $\vec b$ be the vectors given as follows: \[\vec a = \begin{bmatrix}2\\-1\\2\end{bmatrix},\qquad\qquad\vec b = \begin{bmatrix}-2\\1\\1\end{bmatrix}.\] For each of the following vectors, determine if it is in $W = \mathrm{span}\left\{\vec a, \vec b\right\},$ and express those that are as linear combinations of $\vec a$ and $\vec b.$
    1. \[\begin{bmatrix} 0 \\ 0 \\ 9 \end{bmatrix}\]
    2. \[\begin{bmatrix} 4 \\ -2 \\ 3 \end{bmatrix}\]
    3. \[\begin{bmatrix} 0.6 \\ -0.3 \\ 2.1 \end{bmatrix}\]
    4. \[\begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}\]
    5. \[\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\]
Assignment 2, due October 13th, 2017.

Answers to these problems are now available here.

  1. Go to Wolfram Alpha's Vector Times Matrix Problem Generator (or another similar site of your choosing), and practice multiplying matrices and vectors until you can consistently do it quickly and correctly. You don't need to submit anything, but the odds of you needing to perform at least one computation of this type on the midterm or final are high.

    You may also find it helpful to work several of the problems in the textbook; in particular, practice switching between systems of linear equations, vector equations, and matrix equations.

  2. Let $A$ be the following $3\times3$ matrix: $$A = \begin{bmatrix}-3 & -9 & 8 \\ 3 & 3 & -1 \\ 1 & -1 & 2\end{bmatrix}.$$ Determine the set of vectors $\vec{b} \in \mathbb{R}^3$ for which the matrix equation $A\vec{x} = \vec{b}$ in the variable $\vec{x}$ has a solution.
  3. Suppose $T$ is the following $22\times14$ matrix: $$T = \begin{bmatrix} -56 & 59 & -97 & -62 & 72 & -82 & 7 & 60 & -79 & 6 & 51 & -14 & 2 & -21 \\ -22 & -85 & -84 & 88 & 19 & 41 & 18 & -29 & 39 & -25 & -78 & -95 & -90 & -71 \\ 15 & 86 & -49 & 16 & 22 & -65 & 85 & -76 & 76 & 10 & 89 & 62 & -37 & -50 \\ 21 & -12 & -60 & 42 & 44 & -45 & 24 & -19 & 37 & 69 & 5 & 46 & 47 & 25 \\ -99 & -38 & -93 & 79 & -94 & -44 & -47 & 11 & -26 & -53 & -4 & -7 & -20 & 94 \\ -23 & -18 & -89 & -59 & -91 & 96 & -24 & 27 & 57 & 26 & -9 & -41 & -81 & -2 \\ -17 & -98 & 90 & 29 & -1 & 0 & 4 & -52 & -36 & 71 & 80 & -61 & 33 & -28 \\ -16 & 93 & -64 & 81 & 78 & -48 & 77 & 99 & -72 & -31 & 100 & 35 & 56 & -100 \\ 45 & -73 & 63 & -57 & 9 & 54 & 74 & -80 & -77 & -34 & 98 & -35 & -75 & 53 \\ 31 & -43 & 50 & 48 & -87 & -96 & 67 & 13 & -69 & -15 & 23 & 8 & -58 & -46 \\ 58 & 32 & -51 & 30 & -3 & 34 & 1 & 36 & -42 & -55 & 87 & 84 & 20 & -70 \\ -42 & 55 & 34 & 58 & -52 & -18 & -15 & 5 & 49 & -95 & 70 & 19 & 59 & 39 \\ 2 & -48 & 12 & 24 & -54 & -36 & 27 & -10 & 53 & 98 & -7 & -45 & 43 & 23 \\ 80 & -72 & -99 & -57 & 36 & -53 & -58 & 95 & -5 & 30 & -87 & 17 & 31 & -30 \\ 76 & 47 & 82 & -11 & -37 & 73 & -86 & 29 & -60 & 48 & -24 & 28 & -67 & -85 \\ 38 & 37 & 7 & -100 & 77 & -25 & -1 & 46 & -14 & -31 & -74 & -75 & -78 & 4 \\ -41 & 65 & 91 & 13 & 100 & -68 & 18 & 94 & 84 & -97 & 74 & 68 & -38 & 66 \\ -84 & -23 & 71 & 86 & -76 & -12 & 45 & 51 & 81 & -77 & -90 & -92 & -35 & -29 \\ 11 & 69 & -70 & -88 & 40 & -56 & 44 & 50 & 67 & 87 & 16 & 21 & 0 & -21 \\ -17 & -96 & -3 & 99 & 25 & 79 & -89 & -40 & -9 & -33 & 85 & -51 & 89 & -28 \\ 3 & -71 & -32 & 56 & -44 & -65 & 42 & -55 & 41 & -19 & 54 & -13 & 72 & -98 \\ 35 & -2 & -79 & -20 & 60 & 14 & 52 & 88 & -16 & -22 & 90 & 57 & -27 & -63 \\ \end{bmatrix}.$$ Does the matrix equation $T\vec{x} = \vec{b}$ in the variable $\vec{x}$ have a solution for every $\vec{b} \in \mathbb{R}^{22}$?

    (Hint: you do not need to perform row reduction to solve this problem.)

  4. Consider the following system of linear equations: $$\left\{\begin{array}{rcl} 3x_1 + 6x_2 + 6x_3 + 14x_4 &=& 0 \\ -x_1 + 6x_2 -6x_3 -2x_4 &=& 0 \\ 2x_1 + 6x_2 + 3x_3 +10x_4 &=& 0 \end{array}\right..$$ Find a spanning set for the solution set of the system.
    1. Give a solution to the following system of equations in parametric form: $$\left\{\begin{array}{rcl} 5x_1+3x_2+7x_3-x_4&=&8\\-2x_1-x_2-3x_3+x_4&=&{\color{red}{-2}}\\2x_1+2x_2+2x_3+2x_4&=&8 \end{array}\right..$$
    2. Show that if $\vec{v}$ and $\vec{w}$ are solutions to the above system of equations, then $\vec{v}-\vec{w}$ is a solution to the homogeneous system $$\left\{\begin{array}{rcl} 5x_1+3x_2+7x_3-x_4&=&0\\-2x_1-x_2-3x_3+x_4&=&0\\2x_1+2x_2+2x_3+2x_4&=&0 \end{array}\right..$$
  5. Determine which of the following sets of vectors are linearly independent, and support your claim:
    1. $\left\{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\}$
    2. $\left\{ \begin{bmatrix} 8 \\ 2 \\ 2 \end{bmatrix}, \begin{bmatrix} -6 \\ 4 \\ 4 \end{bmatrix}, \begin{bmatrix} 5 \\ -5 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ -6 \end{bmatrix} \right\}$
    3. $\left\{ \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix}, \begin{bmatrix} 6 \\ -1 \\ 1 \end{bmatrix} \right\}$
    4. The circle $\{(x, y) \in \mathbb{R}^2 | x^2 + y^2 = 1\}$
  6. Give an example of three vectors $\vec{x}$, $\vec{y}$, and $\vec{z}$ in $\mathbb{R}^4$ such that the sets $\{\vec{x}, \vec{y}\},$ $\{\vec{y}, \vec{z}\},$ and $\{\vec{z}, \vec{x}\}$ are linearly independent, but the set $\{\vec{x}, \vec{y}, \vec{z}\}$ is not.
Assignment 3, due October 23rd, 2017.

Answers to these problems are now available here.

  1. Suppose that $T, S : \mathbb{R}^2 \to \mathbb{R}^3$ are maps given by \[T(x_1, x_2) = (x_1 + 2x_2 + 3, 2x_1 + 3x_2, x_1 - x_2) \qquad\text{ and }\qquad S(x_1, x_2) = (2x_1 - x_2, 2x_2 - x_1, 6x_1 -3x_2).\]
    1. Is $T$ linear? If it is, determine if it is one-to-one, onto, both, or neither.
    2. Is $S$ linear? If it is, determine if it is one-to-one, onto, both, or neither.
  2. Let $\vec{v_1}, \ldots, \vec{v_4},$ and $\vec{w}$ be the vectors defined as follows: \[ \vec{v}_1 = \begin{bmatrix}1\\0\\0\\1\\\end{bmatrix},\qquad \vec{v}_2 = \begin{bmatrix}0\\1\\0\\1\\\end{bmatrix},\qquad \vec{v}_3 = \begin{bmatrix}0\\0\\1\\1\\\end{bmatrix},\qquad \vec{v}_4 = \begin{bmatrix}0\\0\\0\\1\\\end{bmatrix},\qquad \text{ and }\qquad \vec{w} = \begin{bmatrix}1\\2\\1\\0\end{bmatrix}.\] Suppose that $T : \mathbb{R}^4 \to \mathbb{R}^3$ is a linear map such that: \[ \qquad T\left(\vec{v}_1\right) = \begin{bmatrix}2\\0\\0\\\end{bmatrix} \qquad T\left(\vec{v}_2\right) = \begin{bmatrix}1\\1\\1\\\end{bmatrix} \qquad T\left(\vec{v}_3\right) = \begin{bmatrix}-3\\-1\\4\\\end{bmatrix} \qquad T\left(\vec{v}_4\right) = \begin{bmatrix}0\\-2\\2\\\end{bmatrix} .\qquad\]
    1. Compute $T(\vec{w})$. (Hint: begin by expressing $\vec{w}$ as a linear combination of the four vectors given above.)
    2. Notice that $\vec{e}_1 = \vec{v}_1-\vec{v}_4$, $\vec{e}_2 = \vec{v}_2-\vec{v}_4$, $\vec{e}_3 = \vec{v}_3-\vec{v}_4$, and $\vec{e}_4 = \vec{v}_4$. Find the standard matrix for $T$. (Notice that if you multiply the vector $\vec{w}$ by the matrix which is your answer to this part, you should get the vector you found in the first part.)
  3. Let $v_1, \ldots, v_k \in \mathbb{R}^n$, and suppose that $w \in \mathrm{span}\{v_1, \ldots, v_k\}.$ Suppose further that $T : \mathbb{R}^n \to \mathbb{R}^m$ is linear. Show that $T(w) \in \mathrm{span}\{T(v_1), \ldots, T(v_k)\}$.
  4. Let $A$ and $B$ be the following matrices: \[A = \begin{bmatrix} 1&1&-2\\ -2&4&-2\\ -2&1&1\\ \end{bmatrix} \qquad\text{ and }\qquad B = \begin{bmatrix} 1&1&1\\ 1&1&1\\ 1&1&1\\ \end{bmatrix}.\]
    1. Compute $AB$ and $BA$.
    2. Let $T_{AB}$ and $T_{BA}$ be the linear maps from $\mathbb{R}^3$ to $\mathbb{R}^3$ given by multiplication by $AB$ and $BA$ respectively (i.e., $T_{AB}(\vec{x}) = AB\vec{x},$ and $T_{BA}(\vec{x}) = BA\vec{x}$). Does $\mathrm{image}(T_{AB}) = \mathrm{image}(T_{BA})$? (Hint: if you computed $AB$ and $BA$ correctly, answering this question should require very little computation.)
  5. Consider the following matrices: \[A = \begin{bmatrix} \frac{1}{2}&1&\frac{3}{2}\\2&4&-1\\0&4&-4 \end{bmatrix}, \qquad R_1 = \begin{bmatrix} 2&0&0\\0&1&0\\0&0&1 \end{bmatrix}, \qquad R_2 = \begin{bmatrix} 1&0&0\\-2&1&0\\0&0&1 \end{bmatrix}, \qquad R_3 = \begin{bmatrix} 1&0&0\\0&0&1\\0&1&0 \end{bmatrix},\] \[ R_5 = \begin{bmatrix} 1&0&0\\0&\frac{1}{4}&0\\0&0&-\frac{1}{7} \end{bmatrix}, \qquad R_6 = \begin{bmatrix} 1&0&0\\0&1&1\\0&0&1 \end{bmatrix}, \qquad\text{ and }\qquad R_7 = \begin{bmatrix} 1&-2&-3\\0&1&0\\0&0&1 \end{bmatrix}, \qquad \] Compute the following matrices: $$\begin{align*}R_1A,\\ R_2R_1A,\\ R_3R_2R_1A,\\ R_5R_3R_2R_1A,\\ R_6R_5R_3R_2R_1A,\\ R_7R_6R_5R_3R_2R_1A.\end{align*}$$ Notice that each matrix is the product of one more matrix with the previous one, so to compute $R_5R_3R_2R_1A$, for example, it is easiest to multiply $R_5\cdot(R_3R_2R_1A)$. It turns out that row reduction can always be performed by multiplying by a sequence of "row operation matrices", like those above.

    Correction: An earlier version of this problem accidentally repeated the matrix $\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix}$ twice. Since the goal of this problem is to demonstrate row reduction as a sequence of matrix multiplications, its presence there defeated the point of the question. If you have already performed the computation with the original sequence of matrices, submitting it will also count as a correct solution to this problem.
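
    If you would like to check your computations, here is a minimal MATLAB sketch (the matrices are exactly those above; the variable names and the loop are my own choices, not part of the assignment):

      % Apply the row operation matrices to A one at a time.
      A  = [1/2 1 3/2; 2 4 -1; 0 4 -4];
      R1 = [2 0 0; 0 1 0; 0 0 1];
      R2 = [1 0 0; -2 1 0; 0 0 1];
      R3 = [1 0 0; 0 0 1; 0 1 0];
      R5 = [1 0 0; 0 1/4 0; 0 0 -1/7];
      R6 = [1 0 0; 0 1 1; 0 0 1];
      R7 = [1 -2 -3; 0 1 0; 0 0 1];
      M = A;
      for R = {R1, R2, R3, R5, R6, R7}
          M = R{1} * M;   % multiply on the left by the next row operation matrix
          disp(M)         % shows R1*A, then R2*R1*A, and so on
      end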

Assignment 4, due October 30th, 2017.

Answers to these problems are now available here.

  1. Determine which of the following matrices are invertible. Find the inverse of those which are.
    1. $\begin{bmatrix}1&2&0\\0&1&2\\3&9&-1\\4&-4&-2\end{bmatrix}$
    2. $\begin{bmatrix}6\end{bmatrix}$
    3. $\begin{bmatrix}9&8&7\\6&5&4\\3&2&1\end{bmatrix}$
    4. $\begin{bmatrix}0&1&-1\\-1&0&1\\2&-2&0\end{bmatrix}$
  2. Find all values of $k$ for which the following matrix is invertible. $$\begin{bmatrix}k & 0 & 2 \\ 3 & 3 & -2 \\ 3 & 2 & -2 \end{bmatrix}$$
  3. Let $\mathbb{P}$ be the set of polynomials with real coefficients. Which of the following sets are vector subspaces of $\mathbb{P}$?
    1. $W_1 = \left\{p \in \mathbb{P} : p(4) = 0\right\}$
    2. $W_2 = \left\{p \in \mathbb{P} : p(0) = 4\right\}$
    3. $W_3 = \left\{p \in \mathbb{P} : p \text{ has only terms of even degree}\right\}$
    Note that, for example, $p(x) = -x + 4$ is in both $W_1$ and $W_2$, since $p(4) = 0$ and $p(0) = 4$. However, $p \notin W_3$ since it has a term of odd degree, namely $-x$. On the other hand, $q(x) = 3x^4 - 6x^2 + 2 \in W_3$.
  4. None of the following are vector spaces. In each case, show that one of the properties of a vector space is not satisfied.
    1. The integer lattice $\mathbb{Z}^3 := \left\{(x_1, x_2, x_3) \in \mathbb{R}^3 : x_1, x_2, x_3 \text{ are integers}\right\}$, with the usual multiplication and addition.
    2. The following set of points in the plane: $\left\{(x, y) \in \mathbb{R}^2 : xy \geq 0\right\}$, together with the usual multiplication and addition.
    3. The set of strings of zero or more letters, where "$+$" corresponds to concatenation and for any number $\lambda \in \mathbb{R}$ and any string of letters $\vec{w}$, $\lambda\vec{w} = \vec{w}$.

      For example, if $\vec{v} = \textrm{''}snow\textrm{''}$ and $\vec{w} = \textrm{''}flake\textrm{''}$, then $\vec{v} + \vec{w} = \textrm{''}snowflake\textrm{''}$. Similarly, $3\vec{v} = \textrm{''}snow\textrm{''}$. If $\vec{\varepsilon}$ is the string containing zero letters, and $\vec{x}$ is another string, then $\vec{\varepsilon} + \vec{x} = \vec{x} = \vec{x} + \vec{\varepsilon}$. So for example, $\textrm{''}\textrm{''} + \textrm{''}snow\textrm{''} = \textrm{''}snow\textrm{''} = \textrm{''}snow\textrm{''} + \textrm{''}\textrm{''}$.

  5. Find a spanning set for the null space of each of the following matrices.
    1. $\begin{bmatrix}1&4&7\end{bmatrix}$
    2. $\begin{bmatrix}0&1&3&0&2&5\\0&0&0&1&-2&4\\0&0&0&0&0&0\end{bmatrix}$
    3. $\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0\end{bmatrix}$
    4. $\begin{bmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$
  6. Suppose that $T: \mathbb{R}^3 \to \mathbb{R}^7$ is a linear transformation, and $\begin{bmatrix}3\\-2\\1\end{bmatrix} \in \mathrm{ker}(T)$. Show that there is a non-zero linear map $S : \mathbb{R}^2 \to \mathbb{R}^3$ so that $T(S(\vec{x})) = \vec{0}$ for all $\vec{x} \in \mathbb{R}^2$.
Assignment 5, due November 6th, 2017.

Answers to these problems are now available here.

  1. Convince yourself that you are comfortable solving problems similar to 5-26 (except 17(e)) in section 4.6 of the text. No submission is necessary.
  2. For each of the following pairs of matrices, the second is the reduced row echelon form of the first. Find a basis for the column space, row space, and null space of the first, non-row-reduced matrix. Then state its rank.
    1. $$\begin{bmatrix}-4&-1&-2&4&-2\\1&1&1&5&1\\-3&-3&-3&3&-4\end{bmatrix}\hspace{2in}\begin{bmatrix}1&0&1/3&0&1/6\\0&1&2/3&0&10/9\\0&0&0&1&-1/18\end{bmatrix}$$
    2. $$\begin{bmatrix}2&0&3&1\\4&5&7&1\\-2&-5&-4&2\\4&5&7&-5\end{bmatrix}\hspace{2in}\begin{bmatrix}1&0&3/2&0\\0&1&1/5&0\\0&0&0&1\\0&0&0&0\end{bmatrix}$$.
  3. Let us consider the vector space $\mathbb{P}_2$, of polynomials of degree at most $2$. Notice that the set $Q$ of polynomials with no linear term (i.e., those of the form $ax^2 + c$) is a subspace of $\mathbb{P}_2$. It turns out that any such polynomial $ax^2 + c \in Q$ can be written as a linear combination as follows: $$ax^2 + c = a(x^2 + x) - c(2x - 1) + (2c-a)x.$$ Is $\{x^2+x, 2x - 1, x\}$ a basis for $Q$? Why or why not?
  4. For each of the following subspaces, determine its dimension and provide a basis.
    1. $\left\{\begin{bmatrix}s+2t+3r\\4s+5t+6r\\7s+8t+9r\end{bmatrix} : s,t,r \in \mathbb{R}\right\} \subseteq \mathbb{R}^3.$
    2. $\left\{\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix} \in \mathbb{R}^4 : x_1 + x_2 + x_3 + x_4 = 0\right\} \subseteq \mathbb{R}^4.$
  5. Suppose that $W \subseteq \mathbb{R}^n$ has a basis $\{\vec{v}_1, \ldots, \vec{v}_k\}$. Suppose further that $\vec{v}_{k+1}, \ldots, \vec{v}_{n}$ are chosen so that $\{\vec{v}_1, \ldots, \vec{v}_n\}$ is a basis for $\mathbb{R}^n$. Show that there is a linear map $T : \mathbb{R}^n \to \mathbb{R}^{\color{red}{n-k}}$ so that $\ker(T) = W$.
  6. Consider the space of polynomials $\mathbb{P}$. Let $D : \mathbb{P} \to \mathbb{P}$ and $S : \mathbb{P} \to\mathbb{P}$ be linear maps such that for all $k \in \mathbb{N}$, $D(x^k) = kx^{k-1}$ and $S(x^k) = \frac1{k+1}x^{k+1}$.

    1. Show that $D$ is onto but not one-to-one. (Hint: to show onto, you should show that the image of $D$ contains a basis for $\mathbb{P}$; why is this enough to conclude that $D$ is onto?)
    2. Show that $S$ is one-to-one but not onto. (Hint: show that $D\circ S$ is one-to-one first, and use this to conclude that $S$ itself must be one-to-one.)

    We know that linear maps between finite dimensional vector spaces are one-to-one if and only if they are onto (and that this occurs exactly when the standard matrix of such a map is invertible). What we have shown here is that this does not hold for vector spaces of infinite dimension.

  7. Given a finite set $S = \{s_1, \ldots, s_k\}$, the free vector space on $S$ is the set of formal linear combinations of elements of $S$: $$\mathcal{F}(S) = \left\{a_1s_1 + \cdots + a_ks_k : a_1, \ldots, a_k \in \mathbb{R}\right\}.$$ Addition is defined by collecting like terms, and scalar multiplication multiplies the weight of each element. $\mathcal{F}(S)$ should be thought of as the vector space obtained by declaring $S$ to be a basis.

    Consider the free vector space on the set $Q = \{|0\rangle, |1\rangle\}$; then $\mathcal{F}(Q)$ is a 2-dimensional vector space, and all vectors are of the form $\alpha|0\rangle + \beta|1\rangle$. Define the vectors $|+\rangle$ and $|-\rangle$ by $$|+\rangle = \frac1{\sqrt2}|0\rangle + \frac1{\sqrt2}|1\rangle \hspace{2in} |-\rangle = \frac1{\sqrt2}|0\rangle - \frac1{\sqrt2}|1\rangle.$$

    1. Show that $\{|+\rangle, |-\rangle\}$ is a basis for $\mathcal{F}(Q)$.
    2. Let $\mathcal{B}$ be the ordered basis $\{|+\rangle, |-\rangle\}$. Find $\left[|0\rangle\right]_{\mathcal{B}},$ $\left[|1\rangle\right]_{\mathcal{B}},$ and $\left[\frac35|0\rangle + \frac45|1\rangle\right]_{\mathcal{B}}$.

    It turns out that in the vector space $\mathcal{F}_{\mathbb{C}}(Q)$ (which is like $\mathcal{F}(Q)$ with complex coefficients rather than real coefficients), the vectors $\alpha|0\rangle + \beta|1\rangle$ with $|\alpha|^2 + |\beta|^2 = 1$ describe the possible states of a qubit, or quantum bit. If $W$ is the set $\{|w\rangle : w \text{ is a binary string of length } k\}$, then the state of a quantum system with $k$ qubits can be described by the vectors in $\mathcal{F}_{\mathbb{C}}(W)$ with coefficients whose squared absolute values sum to $1$. A quantum computer acts on such a quantum state by applying linear maps with the property that every quantum state is sent to another state (so, for example, such maps must have trivial kernel). There's a lot of fascinating stuff here which is well beyond the scope of this course; Wikipedia is a good place to start reading.
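
    If you would like to sanity-check your answers numerically, one option is to identify $|0\rangle$ and $|1\rangle$ with the standard basis of $\R^2$, so that $|+\rangle$ and $|-\rangle$ become the columns of a matrix; this identification is my own bookkeeping, not part of the problem. A minimal MATLAB sketch:

      % Columns are |+> and |-> written in |0>,|1> coordinates.
      B = [1/sqrt(2),  1/sqrt(2);
           1/sqrt(2), -1/sqrt(2)];
      disp(det(B))   % nonzero, so the two columns form a basis
      % B-coordinates of a vector are found by solving B*c = v,
      % for instance for the last vector in part 2:
      v = [3/5; 4/5];
      disp(B \ v)    % the coordinate vector [v]_B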

Assignment 6, due November 15th, 2017.

Answers to these problems are now available here.

  1. Consider the space of polynomials of degree at most $4$, $\mathbb{P}_4$. The standard basis for $\mathbb{P}_4$ is given by the monomials: $\mathcal{E} = \{1,x,x^2,x^3,x^4\}$. However, there are other bases for the polynomials which may be more convenient to work with in some settings; two are detailed below.

    The Chebyshev polynomials (of the second kind) are useful in approximation theory and polynomial interpolation, and are defined by the conditions $U_0(x) = 1$, $U_1(x) = 2x$, and for $n \geq 1$, $U_{n+1}(x) = 2xU_n(x) - U_{n-1}(x)$. In particular, the first five Chebyshev polynomials are as follows: $$\mathcal{B} = \{1, 2x, 4x^2-1, 8x^3-4x, 16x^4-12x^2+1\}.$$ (A short MATLAB sketch of this recurrence appears after this problem.)

    The Hermite polynomials likewise have many uses, being related to the behaviour of a quantum harmonic oscillator and random matrix theory. Their general form is not as easily presented, but the first five Hermite polynomials are the following: $$\mathcal{C} = \{1, x, x^2-1, x^3-3x, x^4-6x^2+3\}.$$

    Compute the following change of basis matrices.

    1. $[I]_{\mathcal{B}}^{\mathcal{E}}$, the change of basis matrix from $\mathcal{B}$-coordinates to $\mathcal{E}$-coordinates.
    2. $[I]_{\mathcal{C}}^{\mathcal{E}}$, the change of basis matrix from $\mathcal{C}$-coordinates to $\mathcal{E}$-coordinates.
    3. $[I]_{\mathcal{C}}^{\mathcal{B}}$, the change of basis matrix from $\mathcal{C}$-coordinates to $\mathcal{B}$-coordinates.
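
    As an aside, the Chebyshev recurrence above is easy to implement. Here is a minimal MATLAB sketch (my own illustration, nothing to submit) that builds the coefficients of $U_0, \ldots, U_4$ in the monomial basis; each column of U lists the coefficients of $1, x, x^2, x^3, x^4$ in one polynomial:

      n = 5;                 % compute U_0 through U_4
      U = zeros(n, n);
      U(1, 1) = 1;           % U_0(x) = 1
      U(2, 2) = 2;           % U_1(x) = 2x
      for k = 2:n-1
          % column k+1 holds U_k = 2x*U_{k-1} - U_{k-2};
          % multiplying by x shifts the coefficients up one slot
          U(:, k+1) = 2 * [0; U(1:end-1, k)] - U(:, k-1);
      end
      disp(U)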

  2. Suppose that $V$ is a vector space, and $\mathcal{B}, \mathcal{C}$, and $\mathcal{D}$ are bases of $V$. Further, suppose that the following change of basis matrices, converting from $\mathcal{B}$-coordinates to $\mathcal{C}$- and $\mathcal{D}$-coordinates, are known: $$[I]_{\mathcal{B}}^{\mathcal{C}} = \begin{bmatrix} -2 & -1 & -1 \\ -3 & 1 & -1 \\ -2 & 1 & 0 \end{bmatrix} \hspace{2in} [I]_{\mathcal{B}}^{\mathcal{D}} = \begin{bmatrix} -2 & -4 & 2 \\ 2 & -1 & -2 \\ 2 & -2 & 4 \end{bmatrix}.$$
    1. What is $\dim(V)$? How do you know?
    2. Suppose that $\vec{x} \in V$, and $[\vec{x}]_{\mathcal{B}} = \begin{bmatrix}2 \\ -1 \\ 2\end{bmatrix}$. What is $[\vec{x}]_{\mathcal{C}}$?
    3. Suppose that $\vec{y} \in V$, and $[\vec{y}]_{\mathcal{D}} = \begin{bmatrix}1 \\ -3 \\ -1\end{bmatrix}$. What is $[\vec{y}]_{\mathcal{B}}$?
    4. Suppose that $\vec{z} \in V$, and $[\vec{z}]_{\mathcal{D}} = \begin{bmatrix}20 \\ 0 \\ 5\end{bmatrix}$. What is $[\vec{z}]_{\mathcal{C}}$?
    1. Give an example of two $3\times3$ matrices $A$ and $B$ so that $\det(A+B) \neq \det(A) + \det(B)$.
    2. Give an example of a $3\times 3$ matrix $A$ and a scalar $c \in \mathbb{R}$ so that $\det(cA) \neq c\det(A)$.
  3. Let $A$ be the following $3\times3$ matrix: $$A = \begin{bmatrix} -11/2 & 5 & -5/2 \\ -3/2 & 3 & -3/2 \\ 2 & 2 & -1 \end{bmatrix}.$$ For which values of $k$ is the following matrix invertible? $$A - kI_3 = \begin{bmatrix} -11/2 - k & 5 & -5/2 \\ -3/2 & 3 - k & -3/2 \\ 2 & 2 & -1 - k \end{bmatrix}$$
  4. An $n\times n$ matrix $A$ is said to be orthogonal if $AA^T = A^TA = I_n$; that is, if $A^T = A^{-1}$. (It turns out that in two or three dimensions, the orthogonal matrices describe rotations and reflections.) Suppose that $T : \mathbb{R}^3 \to \mathbb{R}^3$ is a linear transformation with standard matrix $A$, and $A$ is orthogonal. Show that $T$ preserves volume: if $S \subset \mathbb{R}^3$ is a solid with volume $V$, then $T(S)$ also has volume $V$.
Assignment 7, due November 27th, 2017.

Answers to these problems are now available here.

  1. For each of the following matrices, given the eigenvalues, find a basis for each of the corresponding eigenspaces.
    1. $\begin{bmatrix}2&2\\2&-1\end{bmatrix}$ has eigenvalues $3$ and $-2$.
    2. $\begin{bmatrix}4&1&0&0\\0&4&0&0\\0&0&4&0\\0&0&16&4\end{bmatrix}$ has only the eigenvalue $4$.
    3. $\begin{bmatrix}19&8&-18&14\\-24&-9&37&-31\\24&12&6&-9\\24&12&8&-11\end{bmatrix}$ has eigenvalues $3, -2,$ and $1$.
  2. Consider the matrix $$Q = \begin{bmatrix}2&2&2&2\\2&2&2&2\\2&2&2&2\\2&2&2&2\end{bmatrix}.$$
    1. Find a non-zero eigenvalue and a corresponding eigenvector for $Q$.
    2. Find the rank of $Q$.
    3. Without further computation, find the characteristic polynomial of $Q$.
  3. Suppose that $A^2$ is the zero matrix. Show that $0$ is an eigenvalue of $A$, and explain why $A$ can have no other eigenvalues.
  4. Suppose that $0 \leq \theta < 2\pi$, and let $R$ be the matrix $$R = \begin{bmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}.$$
    1. For what values of $\theta$ does $R$ have a non-trivial (real) eigenvector?
    2. Notice that $R$ corresponds to rotation by an angle $\theta$ about the origin; explain why your answer above makes sense geometrically.
  5. Consider the following matrices: $$A = \begin{bmatrix}2&1\\0&2\end{bmatrix} \hspace{2in} B = \begin{bmatrix}2&0\\0&2\end{bmatrix}.$$
    1. Find the characteristic polynomials of $A$ and $B$. (This requires very little work.)
    2. Find a non-trivial eigenvector for $A$, and two linearly independent eigenvectors for $B$.
    3. Is there an eigenvector of $A$ which is linearly independent from the one you found in Part B? If so, find one; if not, explain why not.
  6. Let $M$ be the following matrix: $$M = \begin{bmatrix}6&4&2\\-5&-3&-1\\-2&-2&-2\end{bmatrix}.$$
    1. Find the eigenvalues of $M$.
    2. Find a linearly independent set of three eigenvectors of $M$.
    3. Write down matrices $P$ and $D$ so that $D$ is diagonal, $P$ is invertible, and $M = PDP^{-1}$.
    4. Compute $M^3$.
Assignment 8, due December 4th, 2017.

Answers to these problems are now available here.

  1. Compute the following things:
    1. $\norm{\,\begin{bmatrix}4\\-1\\2\\2\\1\end{bmatrix}\,}$
    2. The distance from $\begin{bmatrix}2\\3\\-\frac7{12}\\1\end{bmatrix}$ to $\begin{bmatrix}-1\\3\\2\\\frac85\end{bmatrix}$
    3. A unit vector in the same direction as $\begin{bmatrix}8\\-2\\4\end{bmatrix}$
    4. $\ang{\begin{bmatrix}2\\-1\\3\end{bmatrix}, \begin{bmatrix}-1\\5\\-1\end{bmatrix}}$
  2. Consider the vectors $u, v, w \in \R^5$ given as follows: $$u = \begin{bmatrix}3\\-1\\0\\2\\7\end{bmatrix}\hspace{1in} v = \begin{bmatrix}-1\\5\\3\\6\\0\end{bmatrix}\hspace{1in} w = \begin{bmatrix}1\\-1\\1\\-1\\1\end{bmatrix}.$$
    1. Write down a system of linear equations whose solution space is precisely the set of vectors in $\R^5$ which are orthogonal to $u$, $v$, and $w$ (i.e., find a system of equations describing their orthogonal complement $\{u, v, w\}^\perp$).
    2. The vectors $u$, $v$, and $w$ are linearly independent. What is the dimension of $\{u, v, w\}^\perp$?
    3. Suppose $x_1, \ldots, x_k \in \R^n$ are linearly independent. What is the dimension of $\{x_1, \ldots, x_k\}^\perp$?
  3. Compute the following:
    1. The projection of the vector $\begin{bmatrix}4\\2\\-1\end{bmatrix}$ onto the line spanned by $\begin{bmatrix}1\\1\\1\end{bmatrix}$.
    2. The projection of the vector $\begin{bmatrix}2\\0\\-1\\1\end{bmatrix}$ onto the space spanned by the orthogonal vectors $\begin{bmatrix}3\\1\\0\\-2\end{bmatrix}$ and $\begin{bmatrix}-1\\1\\3\\-1\end{bmatrix}$.
    3. The coefficients needed to express the vector you found in part B as a linear combination of $\begin{bmatrix}3\\1\\0\\-2\end{bmatrix}$ and $\begin{bmatrix}-1\\1\\3\\-1\end{bmatrix}$.
  4. Suppose $W \subseteq \R^6$ is a subspace with basis $\set{\begin{bmatrix}1\\5\\-1\\6\\787\\0\end{bmatrix},\begin{bmatrix}-34\\4\\4\\4\\4\\8\end{bmatrix},\begin{bmatrix}2\\3\\4\\5\\6\\2\end{bmatrix}, \begin{bmatrix}-5\\123\\-4\\12\\-3\\1\end{bmatrix}}$, and let $P : \R^{\color{red}6}\to\R^{\color{red}6}$ be the orthogonal projection onto $W$. The following do not require very much computation.
    1. What is the rank of $P$? How do you know?
    2. What is the dimension of the $1$-eigenspace of $P$? How do you know?
    3. What is the dimension of the null space of $P$? How do you know?
    4. Explain why $P$ must be similar to a diagonal matrix, and find a diagonal matrix it is similar to (note: you are not being asked to find an invertible matrix $Q$ so that $P = QDQ^{-1}$).
  5. Consider the space $\mathbb{P}_3$ of polynomials of degree at most $3$, equipped with the inner product defined by $$\ang{p,q} = \frac14\paren{p(-1)q(-1) + p(0)q(0) + p(1)q(1) + p(2)q(2)}.$$
    1. Confirm that the constant polynomial $1$ has length $1$; that is, that $\norm{1} = 1$.
    2. Find real numbers $a$ and $b$ so that the polynomial $ax+b$ is a unit vector orthogonal to the polynomial $1$.
    3. Compute the projection of $x^3-1$ onto the space spanned by $1$ and the vector you found in part B, i.e., the subspace of polynomials of degree at most $1$. (What you've accomplished is to find the best linear approximation to the data points $(-1, -2)$, $(0,-1)$, $(1, 0)$, and $(2, 7)$. You may find it enlightening to plot these points and the line you've computed, but this is not necessary.)
    4. Use this to find coefficients $j$ and $k$ so that $x^3+jx+k$ is orthogonal to the subspace of polynomials of degree at most $1$.
  6. Nothing should be submitted for the following exercise, and it should be considered only for interest's sake. Feel free to ignore it if you wish. We will try to gain some insight into the following question: what do the eigenvalues of a matrix chosen at random look like? You should use matlab or another tool of your choice for the following.

    It turns out that any matrix with real entries which is equal to its own transpose is diagonalizable, i.e., similar to a diagonal matrix, and therefore there is a basis of $\R^n$ consisting of eigenvectors for the matrix. We will make use of that in what follows.

    1. Construct a $25\times25$ upper triangular matrix in matlab with entries chosen independently at random from a Gaussian distribution of variance one; you may find the matlab functions triu and randn useful for this.
    2. Use this to construct a symmetric $25\times25$ matrix (one which is equal to its own transpose) whose upper triangular entries are independent random Gaussian variables. Arrange so that the diagonal entries have either variance 1 or $\sqrt2$, whichever you prefer.
    3. Multiply the matrix above by the constant $\frac1{\sqrt{25}} = \frac15$ (this causes the rows of the matrix to have, on average, a length of $1$).
    4. Compute the eigenvalues of the above matrix, and plot them on a histogram. You may find the matlab commands eig and histogram useful for this; you may want something like the command histogram(e, -2:.1:2); this post on stackoverflow may also be helpful.
    5. Repeat the above a few times and see if you can divine a pattern.
    6. Now do the same for a much larger matrix, normalizing by $\frac1{\sqrt{N}}$ instead of $\frac1{\sqrt{25}}$; what do the data look like? Keep trying larger and larger matrices until your computer has a hard time handling it, but try to get at least to $10000\times10000$ matrices (you may need to wait a minute or two between commands for matrices of this size, but not too long).
    7. Now do the same, except use a symmetric matrix whose entries are $\pm1$ chosen at random instead of Gaussian. You may find the matlab functions sign and randn helpful (for instance, sign(randn(N)) has $\pm1$ entries).
    It turns out that there is actually a lot of structure lurking within the eigenvalues of a random matrix.
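
    For those who want a starting point, here is one possible MATLAB sketch of the first few steps (the variable names and the symmetrization are my own choices; adjust them to taste):

      N = 25;
      T = triu(randn(N));          % upper triangular, independent Gaussian entries
      S = T + triu(T, 1)';         % symmetric; diagonal entries keep variance 1
      M = S / sqrt(N);             % the normalization from the third step
      e = eig(M);                  % real eigenvalues, since S is symmetric
      histogram(e, -2:.1:2)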

Instructional Staff

Name | Role | Office | Office hours | E-mail
Ian Charlesworth | Instructor | AP&M 5880C | Tu10-12, F10-11 | ilc@math.ucsd.edu
Xindong Tang | Teaching Assistant | AP&M 6436 | M10-11, F2-3 | xit039@ucsd.edu
Matthew Ung | Teaching Assistant | AP&M 5218 | W9-10 | m2ung@ucsd.edu

My Friday office hour is dedicated to this class, while the time on Tuesday is open both to this course and the other course I am teaching this quarter.

We will be communicating with you and making announcements through an online question and answer platform called Piazza. We ask that when you have a question about the class that might be relevant to other students, you post your question on Piazza instead of emailing us. That way, everyone can benefit from the response. Posts about homework or exams on Piazza should be content based. While you are encouraged to crowdsource and discuss coursework through Piazza, please do not post complete solutions to homework problems there. Questions about grades should be brought to the instructors, in office hours. You can also post private messages to instructors on Piazza, which we prefer over email.

If emailing us is necessary, please do the following:

Class Meetings

Meeting | Date | Time | Location
Lecture C00 (Charlesworth) | Mondays, Wednesdays, Fridays | 11:00am - 11:50am | CENTR 119
Discussion C01 (Xindong Tang) | Thursdays | 2:00pm - 2:50pm | AP&M 2301
Discussion C02 (Xindong Tang) | Thursdays | 3:00pm - 3:50pm | AP&M 2301
Discussion C03 (Xindong Tang) | Thursdays | 4:00pm - 4:50pm | AP&M 2301
Discussion C04 (Xindong Tang) | Thursdays | 5:00pm - 5:50pm | AP&M 2301
Discussion C05 (Matthew Ung) | Thursdays | 6:00pm - 6:50pm | AP&M 2301
Discussion C06 (Matthew Ung) | Thursdays | 7:00pm - 7:50pm | AP&M 2301
Final Exam | Tuesday, Dec 12 | 11:30am - 2:30pm | CENTR 119

Top

Syllabus

Course:  Math 18

Title:  Linear Algebra

Credit Hours:  4  (Students may not receive credit for both Math 18 and 31AH.)

Prerequisite:  Math Placement Exam qualifying score, or AP Calculus AB score of 2, or SAT II Math Level 2 score of 600 or higher, or Math 3C, or Math 4C, or Math 10A, or Math 20A, or consent of instructor.

Catalog Description:  Matrix algebra, Gaussian elimination, determinants. Linear and affine subspaces, bases of Euclidean spaces. Eigenvalues and eigenvectors, quadratic forms, orthogonal matrices, diagonalization of symmetric matrices. Applications. Computing symbolic and graphical solutions using Matlab. See the UC San Diego Course Catalog.

Textbook: Linear Algebra and its Applications, by David C. Lay, Steven R. Lay, and Judi J. McDonald; published by Pearson (Addison Wesley).

Subject Material:  We will cover parts of chapters 1-7 of the text.

Lecture:  Attending the lecture is a fundamental part of the course; you are responsible for material presented in the lecture whether or not it is discussed in the textbook.  You should expect questions on the exams that will test your understanding of concepts discussed in the lecture.

Reading:  Reading the sections of the textbook corresponding to the assigned homework exercises is considered part of the homework assignment; you are responsible for material in the assigned reading whether or not it is discussed in the lecture.

Calendar of Lecture Topics:   The following calendar is subject to revision during the term. The section references are only a guide; our pace may vary from it somewhat.

Week | Monday | Tuesday | Wednesday | Thursday | Friday
0 | Sep 25 | Sep 26 | Sep 27 | Sep 28 | Sep 29: 1.1 Systems of linear equations
1 | Oct 2: 1.2 Row reduction & echelon forms | Oct 3 | Oct 4: 1.3 Vector equations | Oct 5: Discussion | Oct 6: 1.4 Matrix equation \(A\vec{x} = \vec{b}\)
2 | Oct 9: 1.5 Solution sets | Oct 10 | Oct 11: 1.7 Linear independence | Oct 12: Discussion | Oct 13: 1.8 Linear transformations
3 | Oct 16: 1.9 The matrix of a linear transformation | Oct 17 | Oct 18: 2.1 Matrix operations | Oct 19: Discussion | Oct 20: Mid-term exam
4 | Oct 23: 2.2, 2.3 Inverse of a matrix | Oct 24 | Oct 25: 4.1 Vector spaces and subspaces | Oct 26: Discussion | Oct 27: 4.2 Null spaces & column spaces
5 | Oct 30: 4.3 Linearly independent sets; bases | Oct 31 | Nov 1: 4.5 Dimension | Nov 2: Discussion | Nov 3: 4.6 Rank; 4.4 Coordinate systems
6 | Nov 6: 4.7 Change of basis | Nov 7 | Nov 8: 3.1, 3.2 Determinants | Nov 9: Discussion | Nov 10: Veterans Day
7 | Nov 13: 3.3 Determinants and volume | Nov 14 | Nov 15: 5.1 Eigenvectors and eigenvalues | Nov 16: Discussion | Nov 17: Mid-term exam
8 | Nov 20: 5.2 Characteristic polynomial | Nov 21 | Nov 22: 5.3 Diagonalization | Nov 23: Thanksgiving | Nov 24: Post-Thanksgiving
9 | Nov 27: 6.1, 6.7 Inner product, length, & orthogonality | Nov 28 | Nov 29: 6.2 Orthogonal sets | Nov 30: Discussion | Dec 1: 6.3 Orthogonal projections
10 | Dec 4: 6.4 Gram-Schmidt Orthogonalization | Dec 5 | Dec 6: 7.1 Spectral Theorem | Dec 7: Discussion | Dec 8: Review
11 | Dec 11 | Dec 12: Final exam | Dec 13 | Dec 14 | Dec 15

Homework:  Homework is a very important part of the course and in order to fully master the topics it is essential that you work carefully on every assignment and try your best to complete every problem. Homework assignments will be made available on the course webpage, above. Your homework can be submitted to the dropbox with your TA's name on it in the basement of the AP&M building. Homework is officially due at 4:00 PM on the due date.

MATLAB:   In applications of linear algebra, the theoretical concepts that you will learn in lecture are used together with computers to solve large scale problems.  Thus, in addition to your written homework, you will be required to do homework using the computer language MATLAB.  The Math 18 MATLAB Assignments page contains all information relevant to the MATLAB component of Math 18. The first assignment is due in week 2 of the course.  You can do the homework on any campus computer that has MATLAB.  Questions regarding the MATLAB assignments should be directed to the TAs.  There are also tutors available beginning Thursday or Friday of the first week of classes in B432 of AP&M.  Please turn in your homework via Gradescope, as described on the MATLAB page, by 11:59pm on the indicated due date (as indicated on the Math 18 MATLAB Assignments page); note that late MATLAB homework will not be accepted, but in case you have to miss one MATLAB assignment, your lowest MATLAB homework score will be dropped.  There will be a MATLAB quiz at the end of the quarter.

Midterm Exams:  There will be two midterm exams given during the quarter. No calculators, phones, or other electronic devices will be allowed during the midterm exams.   You may bring at most three four-leaf clovers, horseshoes, maneki-neko, or other such talismans for good luck. There will be no makeup exams.

Final Examination:  The final examination will be held at the date and time stated above.

Administrative Links:    Here are two links regarding UC San Diego policies on exams:

Regrade Policy:  

Administrative Deadline:  Your scores for all graded work will be posted to TritonEd.

Grading: Your course grade will be determined by your cumulative average at the end of the term and will be based on the following scale:

A+ A A- B+ B B- C+ C C-
97 93 90 87 83 80 77 73 70
Your cumulative average will be the best of the following two weighted averages:

In addition,  you must pass the final examination in order to pass the course. Note: Since there are no makeup exams, if you miss a midterm exam for any reason, then your course grade will be computed with the second option. There are no exceptions; this grading scheme is intended to accommodate emergencies that require missing an exam.

Your single worst homework score will be ignored.

Academic Integrity:  UC San Diego's code of academic integrity outlines the expected academic honesty of all students and faculty, and details the consequences for academic dishonesty. The main issues are cheating and plagiarism, of course, for which we have a zero-tolerance policy. (Penalties for these offenses always include assignment of a failing grade in the course, and usually involve an administrative penalty, such as suspension or expulsion, as well.) However, academic integrity also includes things like giving credit where credit is due (listing your collaborators on homework assignments, noting books or papers containing information you used in solutions, etc.), and treating your peers respectfully in class. In addition, here are a few of our expectations for etiquette in and out of class.

Accommodations:

Students requesting accommodations for this course due to a disability must provide a current Authorization for Accommodation (AFA) letter issued by the Office for Students with Disabilities (OSD) which is located in University Center 202 behind Center Hall. Students are required to present their AFA letters to Faculty (please make arrangements to contact me privately) and to the OSD Liaison in the department in advance (by the end of Week 2, if possible) so that accommodations may be arranged. For more information, see here.

Top

Resources

Here are some additional resources for this course, and math courses in general.

Lecture Notes

Any remarks pertaining to particular lectures will be posted here throughout the term.

Notes from October 11, 2017
We wish to show that if $\{\vec{v}_1, \ldots, \vec{v}_k\}$ is linearly independent and $\{\vec{v}_1, \ldots, \vec{v}_k, \vec{v}\}$ is linearly dependent, then $\vec{v} \in \mathrm{span}\{\vec{v}_1, \ldots, \vec{v}_k\}$. If $\{\vec{v}_1, \ldots, \vec{v}_k, \vec{v}\}$ is linearly dependent, then there is a non-trivial solution $(a_1, \ldots, a_k, a)$ to the homogeneous equation \[\begin{bmatrix}\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_k & \vec{v}\end{bmatrix}\vec{x} = \vec{0}.\] However, since $\{\vec{v}_1, \ldots, \vec{v}_k\}$ is linearly independent, there is no non-trivial solution to \[\begin{bmatrix}\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_k\end{bmatrix}\vec{y} = \vec{0};\] so in our non-trivial solution to the first equation we must have $a \neq 0$, for otherwise $(a_1, \ldots, a_k)$ would be a non-trivial solution to the second equation. That means we have \[a_1\vec{v}_1 + \ldots + a_k\vec{v}_k + a\vec{v} = \vec{0}.\] Rearranging, we have \[\vec{v} = \frac{-a_1}{a}\vec{v}_1 + \ldots + \frac{-a_k}{a}\vec{v}_k,\] where we were allowed to divide by $a$ because it was non-zero. We conclude that $\vec{v}$ must be a linear combination of $\vec{v}_1, \ldots, \vec{v}_k$.
A note on dynamical systems and eigenvalues from November 20, 2017

Imagine we are interested in studying some system which can be in two possible states and evolves over time, moving randomly to a new state at each time step with probabilities that depend on its current state. A simple example with two states may be a traffic light which can be either red or green at any given time (ignoring yellow lights, as most California drivers do). If a green light turns red with probability $\frac34$ and a red light turns green with probability $\frac12$ every twenty seconds, then this system can be described by the following matrix: $$A = \begin{bmatrix}\frac14&\frac12\\\frac34&\frac12\end{bmatrix}.$$ If a light is currently green with probability $p$ and red with probability $1-p$, it can be represented by the vector $\begin{bmatrix}p\\1-p\end{bmatrix}$, and the probability of it being in each state after twenty seconds is given by $A\begin{bmatrix}p\\1-p\end{bmatrix}$, after forty seconds by $A^2\begin{bmatrix}p\\1-p\end{bmatrix}$, after sixty seconds by $A^3\begin{bmatrix}p\\1-p\end{bmatrix}$, and so on.

What we'd like to understand is the probability of the system being in each state after a very long time has passed. We can accomplish this using eigenvectors and eigenvalues. A bit of computation will show that for the matrix above, the eigenvalues are $1$ and $-\frac14$, with corresponding eigenvectors $\begin{bmatrix}2\\3\end{bmatrix}$ and $\begin{bmatrix}1\\-1\end{bmatrix}$. Suppose that we know our system begins with equal chance of being in either state: $x = \begin{bmatrix}\frac12\\\frac12\end{bmatrix}$. We can write $\begin{bmatrix}\frac12\\\frac12\end{bmatrix} = \frac15\begin{bmatrix}2\\3\end{bmatrix} + \frac1{10}\begin{bmatrix}1\\-1\end{bmatrix}$. Then if we apply $A^k$, we see $$A^kx = \frac15A^k\begin{bmatrix}2\\3\end{bmatrix} + \frac1{10}A^k\begin{bmatrix}1\\-1\end{bmatrix} = \frac15(1)^k\begin{bmatrix}2\\3\end{bmatrix} + \frac1{10}\left(-\frac14\right)^k\begin{bmatrix}1\\-1\end{bmatrix}.$$ As $k$ becomes very large, this quickly becomes very close to $\begin{bmatrix}\frac25\\\frac35\end{bmatrix}$. So as the system runs for a long time, its probability of being in the first state approaches $\frac25$, and its probability of being in the second approaches $\frac35$. This tells us something we already know, which is that most traffic lights are red when you reach them.
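
A minimal MATLAB sketch of this computation (the matrix and its eigenvector are the ones above; normalizing the eigenvector so its entries sum to one is the only extra step):

    A = [1/4 1/2; 3/4 1/2];
    [V, D] = eig(A);                % columns of V are eigenvectors of A
    [~, i] = max(diag(D));          % locate the eigenvalue 1
    disp(V(:, i) / sum(V(:, i)))    % long-run probabilities: [2/5; 3/5]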

The same idea can be applied to much more interesting systems. In any system with a finite number of states, such that the probability of moving from each state to the next is known, one can always write down a transition matrix with $(i,j)$-entry equal to the probability of moving from state $j$ to state $i$. Then all the entries of this matrix will be between $0$ and $1$, and the entries of each column will sum to one. Further, $1$ will always be an eigenvalue of such a matrix (think for a little bit about why $A - I$ will not be invertible), and all the eigenvalues will be between $-1$ and $1$. If $-1$ is not an eigenvalue, then the state of the system from any initial configuration will grow closer and closer to an eigenvector with eigenvalue one.

This is the same sort of math that is going on behind Google's page rank algorithm. They construct a very large matrix whose entries represent the probabilities of moving from any one web page to another, and find the eigenvectors of this matrix. These represent the probability of a user being on a certain site after a long time spent browsing, and sites with a high probability of having people be on them are ranked more highly in searches.
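
To see the idea in miniature, here is a MATLAB sketch of repeatedly applying a transition matrix to a starting distribution, a "power iteration"; the three-page link matrix below is hypothetical, chosen only so that its columns sum to one:

    % entry (i,j) is the probability of moving from page j to page i
    A = [0   1/2 1/3;
         1/2 0   1/3;
         1/2 1/2 1/3];
    x = [1; 0; 0];          % start with a user on page 1
    for k = 1:50
        x = A * x;          % one step of random browsing
    end
    disp(x)                 % approximately an eigenvector with eigenvalue one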

Below are scans of my notes from each class. Be aware that these are prepared for my use: they may contain errors, or differ from the material actually presented in lecture. Use at your own risk.

CSS and page template gratefully taken from Todd Kemp's earlier offering of this course.