In this exercise you will compute the number $e$ using the
formula:
$$
e = \lim\limits_{n \to \infty} \left(1 + \frac{1}{n} \right)^n \, .
$$
Compute $e$ from the formula above for various values of $n$
of the form $2^k$, where $k
= 1, 10, 20, 30, 40, 50, 60, 70, 80$. Determine the absolute error in the
computation for each $n$ by comparing the result
with the output of `math.exp`.
Plot the absolute error as a function of $k$ using
`matplotlib.pyplot.semilogy`.

You will notice that the error first decreases, then jumps to a large value and stays constant. Estimate from your plot the value of $k$ (and hence $n$) at which this jump occurs. Then refine your computation by including more values of $k$ around this estimate and determine the exact value of $n$ at which the jump in error occurs.
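A minimal sketch of the computation, assuming the list of $k$ values from the problem statement; the `semilogy` call is left as a comment so the numeric part stands alone:

```python
import math

# Approximate e by (1 + 1/n)^n for n = 2^k and record the
# absolute error against the value returned by math.exp.
ks = [1, 10, 20, 30, 40, 50, 60, 70, 80]
errors = [abs((1.0 + 1.0 / 2.0**k) ** (2.0**k) - math.exp(1)) for k in ks]

for k, err in zip(ks, errors):
    print(f"k = {k:2d}  error = {err:.3e}")

# To plot:
#   import matplotlib.pyplot as plt
#   plt.semilogy(ks, errors); plt.xlabel("k"); plt.show()
```

Once $1/n$ drops below machine epsilon, `1.0 + 1.0 / 2.0**k` rounds to exactly 1, the computed "limit" collapses to 1, and the error freezes at $e - 1$.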

Create a random matrix $A$ of order 1000 and an upper triangular matrix $B$ obtained by setting the entries of $A$ below the diagonal to 0. For a randomly chosen RHS vector $b$, solve the two systems: $A\, x = b$ using `scipy.linalg.solve` and $B\, x = b$ using `scipy.linalg.solve_triangular`. Measure and print the times taken by the two methods. Be sure to average the measurements over several runs of each.
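One way to set up the timing comparison is sketched below; the averaging-over-runs helper and the choice of 5 runs are assumptions, and any repeated-measurement scheme would do:

```python
import time
import numpy as np
from scipy.linalg import solve, solve_triangular

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n))
B = np.triu(A)                      # zero out entries below the diagonal
b = rng.standard_normal(n)

def average_time(f, *args, runs=5):
    # Average wall-clock time of f(*args) over several runs.
    t0 = time.perf_counter()
    for _ in range(runs):
        f(*args)
    return (time.perf_counter() - t0) / runs

t_general = average_time(solve, A, b)
t_triangular = average_time(solve_triangular, B, b)
print(f"scipy.linalg.solve            : {t_general:.5f} s")
print(f"scipy.linalg.solve_triangular : {t_triangular:.5f} s")
```

The triangular solve needs only back substitution ($O(n^2)$ work) while the general solve performs an LU factorization ($O(n^3)$), which is what the timings should reveal.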

Consider the following matrix: $$ A= \left[ \begin{array}{cc} 100 & 100 \\ 100 & 100 + \epsilon \end{array} \right] $$

Use the `cond` function from `numpy.linalg` to estimate the condition number of $A$ as you vary $\epsilon$. Solve $A\, x = b$ for a known $x$ and compute the norms of the residual and the error for various values of $\epsilon$.
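A sketch of the experiment; the particular values of $\epsilon$ and the choice of known solution $x = (1, 1)^T$ are assumptions made for illustration:

```python
import numpy as np
from numpy.linalg import cond, solve, norm

x_true = np.array([1.0, 1.0])          # known solution (assumed)
for eps in [1e-2, 1e-5, 1e-8, 1e-11]:
    A = np.array([[100.0, 100.0],
                  [100.0, 100.0 + eps]])
    b = A @ x_true                      # RHS consistent with the known x
    x = solve(A, b)
    print(f"eps = {eps:.0e}  cond = {cond(A):.2e}  "
          f"residual = {norm(b - A @ x):.2e}  error = {norm(x - x_true):.2e}")
```

The point to observe is that the residual stays tiny for every $\epsilon$, while the error in $x$ grows roughly in proportion to the condition number as $\epsilon \to 0$.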

In this problem we will explore the convergence of CG. Create a $500 \times 500$ sparse matrix $A$ as follows. The diagonal entries are 1 and the above-diagonal entries are selected uniformly at random from $[-1, 1]$. Reflect the entries across the diagonal to get a symmetric matrix. Select a number $\tau < 1$ and replace each off-diagonal entry whose magnitude is greater than $\tau$ by 0. For $\tau$ close to 0 the result is a well-conditioned matrix whose density of nonzero entries is approximately $\tau$; as $\tau$ increases, the matrix becomes denser and more ill-conditioned.

Use `scipy.sparse.linalg.cg` to solve $A\, x = b$ for a random right-hand side $b$, and plot the norm of the residual at the end of iteration $k$ versus $k$ for the values $\tau = 0.01, 0.05, 0.1, 0.2$. You should notice that for $\tau = 0.2$ there is no convergence. Use a log scale on the vertical axis. To obtain the residual after iteration $k$, use the `callback(xk)` argument, which allows `cg` to pass back the current guess `xk` at the end of iteration $k$.
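A sketch of the construction and the CG loop; `make_matrix` is a helper name invented here, and `maxiter=200` is an assumed cap so the non-converging $\tau = 0.2$ case still terminates:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def make_matrix(n, tau, rng):
    # Unit diagonal; above-diagonal entries uniform in [-1, 1],
    # reflected across the diagonal, then off-diagonal entries
    # with magnitude greater than tau are set to 0.
    U = np.triu(rng.uniform(-1.0, 1.0, size=(n, n)), k=1)
    off = U + U.T
    off[np.abs(off) > tau] = 0.0
    return sp.csr_matrix(np.eye(n) + off)

n = 500
rng = np.random.default_rng(0)
b = rng.standard_normal(n)

for tau in [0.01, 0.05, 0.1, 0.2]:
    A = make_matrix(n, tau, rng)
    residuals = []
    # cg invokes the callback with the current iterate xk after each iteration
    x, info = cg(A, b, maxiter=200,
                 callback=lambda xk: residuals.append(np.linalg.norm(b - A @ xk)))
    print(f"tau = {tau}: {len(residuals)} iterations, "
          f"final residual = {residuals[-1]:.2e}")
# For the plot: matplotlib.pyplot.semilogy(range(1, len(residuals) + 1), residuals)
```

For small $\tau$ the matrix is strictly diagonally dominant, hence symmetric positive definite, and CG converges quickly; for $\tau = 0.2$ positive definiteness is lost and the residual stagnates.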

Problem taken from the book *Numerical Linear Algebra* by Trefethen and Bau.