Exercise 1

  • Briefly describe an efficient algorithm for solving an upper triangular system. Roughly how does the number of steps depend on $n$? In contrast, roughly how many steps does Gaussian elimination take?

  • The scipy.linalg module has a function lu which takes an array and returns its $P, L, U$ decomposition. In addition, the module provides a function solve_triangular(T, b, lower=True|False) which efficiently solves triangular systems. Try combining these with what we've seen so far to solve a linear system.

  • Another very useful decomposition is the $QR$ decomposition. This decomposes the matrix $A$ into an orthogonal matrix $Q$ and an upper triangular matrix $R$. One way to imagine constructing this is by carefully running Gram-Schmidt orthonormalization on the columns of $A$ and keeping track of the relationship between the original columns and the resulting orthonormal ones. Try using the qr function in scipy.linalg. Verify that the resulting $Q$ is orthogonal and the resulting $R$ is upper triangular.
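Here is one possible sketch covering all three bullets. The back_substitution helper is my own illustrative implementation of the $O(n^2)$ triangular solve (versus $O(n^3)$ for full Gaussian elimination); the rest uses lu, solve_triangular, and qr as described above.

```python
import numpy as np
from scipy.linalg import lu, qr, solve_triangular

def back_substitution(U, b):
    """Solve U x = b for upper triangular U: ~n^2/2 operations,
    versus ~n^3/3 for full Gaussian elimination."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

# A = P L U, so A x = b becomes L (U x) = P.T b
# (P is a permutation matrix, hence P^{-1} = P.T).
P, L, U = lu(A)
y = solve_triangular(L, P.T @ b, lower=True)
x = solve_triangular(U, y, lower=False)
print(np.allclose(A @ x, b))                    # the two triangular solves recover x
print(np.allclose(back_substitution(U, y), x))  # our O(n^2) version agrees

# QR: Q orthogonal, R upper triangular
Q, R = qr(A)
print(np.allclose(Q.T @ Q, np.eye(5)))  # Q^T Q = I, so Q is orthogonal
print(np.allclose(R, np.triu(R)))       # R equals its own upper triangle
```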

Note: When playing with small examples, you don't always want to go through the whole decomposition process. The scipy.linalg module also provides a solve(A, b) function you can use instead. Try using it on a random matrix to see how it works.
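A minimal sketch of that shortcut (the random system and seed are arbitrary):

```python
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

x = solve(A, b)  # factors A and solves, all in one call
print(np.allclose(A @ x, b))  # True
```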

By the way, why do I keep using random matrices as examples? Should I really worry that one of them is singular? For the skeptic, try generating a whole bunch of random square matrices and checking whether any of them are singular.
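One way to run that experiment: a matrix is singular exactly when its determinant is zero, and a random Gaussian matrix is singular with probability zero, so the count below should essentially always come out empty. (The sample size and tolerance are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(2)
singular_count = 0
for _ in range(1000):
    A = rng.standard_normal((5, 5))
    # Treat a vanishingly small determinant as "numerically singular".
    if abs(np.linalg.det(A)) < 1e-12:
        singular_count += 1
print(singular_count)
```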

Exercise 2

  • Construct a $5 \times 5$ discrete Laplace matrix $A$. (You may find the helper function eye in numpy useful! You can build identity matrices using eye(n) and matrices with ones on the $k$-th off-diagonal using eye(n, k=...|-1|0|1|...).)

  • Implement Jacobi iteration for this matrix and try solving $Ax = b$ where $b$ is a vector of all ones. Run the iteration for 50 steps and print out the norm of the residual as you go.
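A sketch of both bullets, assuming the standard 1D discrete Laplacian (2 on the main diagonal, -1 on the first off-diagonals) and the usual Jacobi splitting of $A$ into its diagonal and off-diagonal parts:

```python
import numpy as np

n = 5
# 1D discrete Laplacian built from eye: 2 on the diagonal, -1 just off it.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Jacobi: write A = D + R with D the diagonal, then iterate
# x <- D^{-1} (b - R x).
D = np.diag(A)       # diagonal entries as a vector
R = A - np.diag(D)   # off-diagonal remainder
x = np.zeros(n)
for step in range(50):
    x = (b - R @ x) / D
    print(step, np.linalg.norm(b - A @ x))  # residual norm shrinks each step
```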

Exercise 3

  • Using the sparse version of eye from scipy.sparse, write a function to construct an $n \times n$ sparse version of the discrete 1D Laplacian.
  • This time, use the conjugate gradient solver cg from scipy.sparse.linalg to solve the system where $b$ is a vector of ones. Try printing out the residual at each step using the optional argument callback=. This argument takes a function f(x) which is called during each step and provides the user with the current guess $x$.
  • Estimate the memory required to represent a COO version of the Laplacian compared with a dense array. How about CSR? Are these sparse formats always better than a dense representation?
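A sketch of the first two bullets, reusing the same tridiagonal Laplacian as in Exercise 2 but built from the sparse eye. Note that cg's callback= receives only the current iterate, so the residual is computed inside the callback; the size $n = 100$ is an arbitrary choice.

```python
import numpy as np
from scipy.sparse import eye
from scipy.sparse.linalg import cg

def sparse_laplacian(n):
    """n x n 1D discrete Laplacian assembled from sparse identities."""
    return 2 * eye(n) - eye(n, k=1) - eye(n, k=-1)

n = 100
A = sparse_laplacian(n).tocsr()
b = np.ones(n)

def report(xk):
    # Called once per iteration with the current guess xk.
    print(np.linalg.norm(b - A @ xk))

x, info = cg(A, b, callback=report)
print(info == 0)  # info == 0 signals convergence
```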

Exercise 4

  • Think of a couple of reasons why having well-conditioned matrices is important for scientific computing, especially with regard to data collected from an experiment.
  • Intuitively, why are unitary matrices the best in regard to conditioning?
  • Suppose that $A$ is diagonalizable by orthogonal matrices. How is the condition number related to the eigenvalues of $A$?
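As a numerical check on the last question: a symmetric matrix is orthogonally diagonalizable (spectral theorem), and for such a matrix the 2-norm condition number reduces to the ratio of the largest to smallest eigenvalue magnitudes. The particular matrix below is just a small example chosen for illustration.

```python
import numpy as np

# Symmetric, hence orthogonally diagonalizable; its eigenvalues are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam = np.linalg.eigvalsh(A)
ratio = np.abs(lam).max() / np.abs(lam).min()   # max|lambda| / min|lambda|, about 3
print(np.isclose(ratio, np.linalg.cond(A)))     # agrees with the 2-norm condition number
```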