In [1]:

```
a = 1.0
while a > 0.0:
    a = a/2.0
    print(a)
```

In [2]:

```
a = 1.0
while a + 1.0 > 1.0:
    a = a/2.0
    print(a)
```

In [1]:

```
# Smallest IEEE DP number
print('%g' % 2.0**-1074)
```

Anything smaller underflows (is zero). The above number is sometimes called UFL (for underflow).
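This can be checked against the standard library (a small sketch; `sys.float_info` is part of Python's stdlib):

```python
import sys

# Smallest positive *normalized* double, 2**-1022
print(sys.float_info.min)        # 2.2250738585072014e-308

# The smallest subnormal (UFL) is 2**-1074; halving it underflows to zero
ufl = 2.0**-1074
print(ufl > 0.0)                 # True
print(ufl / 2.0 == 0.0)          # True
```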

In [2]:

```
print('%g' % 2.0**-1075)
```

For exponent-field values $E$ between 1 and 2046 (the values 0 and 2047 are reserved for zero, subnormal numbers, `inf`, `NaN`, etc.) the number is $(-1)^s 1.\,f \times 2^{E-1023}$. For subnormal numbers, i.e. numbers smaller in magnitude than $2^{-1022}$, the number is $(-1)^s 0.\,f \times 2^{-1022}$.
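The three fields can be inspected directly by reinterpreting a float's 64 bits (a sketch using the stdlib `struct` module; the helper name `double_bits` is just illustrative):

```python
import struct

def double_bits(x):
    """Return (sign, exponent_field, fraction_field) of a 64-bit float."""
    (b,) = struct.unpack('>Q', struct.pack('>d', x))
    sign = b >> 63
    exponent = (b >> 52) & 0x7FF     # 11-bit biased exponent E
    fraction = b & ((1 << 52) - 1)   # 52-bit fraction f
    return sign, exponent, fraction

print(double_bits(1.0))    # (0, 1023, 0): 1.0 = (-1)^0 * 1.0 * 2^(1023-1023)
print(double_bits(-2.0))   # (1, 1024, 0)
```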

In [24]:

```
(1. + (1 - 2.**-52)) * 2**1023
```

Out[24]:

Anything larger overflows (yields `inf`).
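The value computed above can be cross-checked against the stdlib (a small sketch using `sys.float_info`):

```python
import sys

# Largest finite double: (2 - 2**-52) * 2**1023
big = (1.0 + (1 - 2.0**-52)) * 2**1023
print(big == sys.float_info.max)   # True
print(big * 2)                     # inf
```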

In [25]:

```
(1. + (1 - 2.**-53)) * 2**1023
```

Out[25]:

In [1]:

```
2**-52
```

Out[1]:

*Truncation error* is due to approximations obtained by truncating infinite series expansions or terminating an iterative sequence before it has converged.
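For example, truncating the Taylor series of $e^x$ after $n$ terms leaves a truncation error that shrinks as $n$ grows (a sketch; the helper name `exp_taylor` is illustrative):

```python
import math

def exp_taylor(x, n):
    """Approximate e**x by the first n terms of its Taylor series."""
    term, total = 1.0, 1.0
    for k in range(1, n):
        term *= x / k
        total += term
    return total

# Truncation error at x = 1 shrinks as more terms are kept
for n in (2, 4, 8, 16):
    print(n, abs(exp_taylor(1.0, n) - math.e))
```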

*Rounding error* is caused by the finite representation of and arithmetic operations on real numbers.
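A familiar illustration: $0.1$ has no exact binary representation, so rounding error shows up in even the simplest arithmetic:

```python
# Both 0.1 and 0.2 are rounded on entry, and the sum is rounded again
print(0.1 + 0.2 == 0.3)          # False
print('%.17g' % (0.1 + 0.2))     # 0.30000000000000004
print('%.17g' % 0.3)             # 0.29999999999999999
```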

*Forward error* is the difference between true and computed values.

*Backward error* is the change in input that would produce the observed output in an exact computation.
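As a small illustration of the two notions (the value $1.4$ below is just a deliberately crude approximation, not from the text above):

```python
import math

# Approximate y = sqrt(x) at x = 2 by the crude value y_hat = 1.4
x, y_hat = 2.0, 1.4
forward = abs(y_hat - math.sqrt(x))   # difference in the output
backward = abs(y_hat**2 - x)          # input change for which sqrt is exact:
                                      # y_hat = sqrt(1.96) exactly
print(forward)    # ~0.014
print(backward)   # ~0.04
```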

Error can be expressed relative to the true value (*relative error*) or as an absolute quantity (*absolute error*). Even if the computation and representation are exact, the solution to a problem may be sensitive to perturbations in the input data. This is the concept of *conditioning* of a problem, and it is a property of the problem itself. The concept of *stability*, on the other hand, is a property of the algorithm. One situation to be careful about is the subtraction of nearly equal numbers, which results in loss of significant digits. This is seen in the exercise below.
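A quick demonstration of such cancellation (the expression $(1-\cos x)/x^2 \to 1/2$ is a standard illustration, not taken from the text above):

```python
import math

# For small x, cos(x) is so close to 1 that the subtraction cancels everything
x = 1e-8
naive = (1.0 - math.cos(x)) / x**2          # catastrophic cancellation
stable = 2 * (math.sin(x / 2) / x)**2       # same quantity via 1-cos x = 2 sin^2(x/2)
print(naive)    # 0.0 -- all significant digits lost
print(stable)   # ~0.5, the correct value
```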

- Run the second piece of code above using `pdb`, step through the loop, and make sure you understand the behavior.
- Write a function to compute an approximate value for the derivative of a function using the finite difference formula
  $$
  f'(x) \approx \frac{f(x+h) - f(x)}{h}\, .
  $$
  Test your function by computing the derivative of $\sin(x)$ at $x=1$. Compute the absolute value of the error by comparing with the `math.cos` function. Use a value of $h=0.1$.
- On a log-log plot, plot the error versus $h$ for $h = 10^{-k}$, $k=0, \ldots, 16$.
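One possible sketch of the finite-difference function from the exercise (the name `fd_derivative` is illustrative):

```python
import math

def fd_derivative(f, x, h):
    """Forward-difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# Derivative of sin at x = 1, compared against the exact value cos(1)
approx = fd_derivative(math.sin, 1.0, 0.1)
error = abs(approx - math.cos(1.0))
print(approx, error)   # error is O(h), roughly 0.04 here
```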

`mpmath` provides the capability to compute at any desired precision. The higher the precision, the slower the computation will most likely be, since standard single- and double-precision arithmetic is implemented in hardware while arbitrary-precision arithmetic is done in software. But sometimes the tradeoff between precision and speed is unavoidable.

In [2]:

```
import mpmath as mpm
```

In [3]:

```
mpm.mp.dps
```

Out[3]:

In [4]:

```
print(mpm.pi)
```

In [5]:

```
mpm.mp.dps = 100
```

In [6]:

```
print(mpm.pi)
```

In [7]:

```
mpm.mpf(1)
```

Out[7]:

In [8]:

```
mpm.exp(1)
```

Out[8]: