In [2]:

```
%matplotlib inline
```

Optimization problems can be classified by the form of the *objective function*. The available optimization methods are divided into linear, quadratic, and general nonlinear methods based on the objective function. For general nonlinear objective functions, one further division is between functions of a *single* variable and *multivariate* objective functions.

Another classification is based on the *constraints*: whether they are *linear constraints* or *nonlinear constraints*, and whether they are *equality* or *inequality* constraints. If the objective function is linear and the constraints are linear equality or inequality constraints, one is in the realm of *linear programming*, one of the oldest and most widely used computational optimization areas. The other extreme, a general nonlinear objective function with general nonlinear equality and inequality constraints, is sometimes called *nonlinear programming* by contrast; it is perhaps the hardest case but also very useful in applications.

A further distinction is between *local* and *global* optimization. The methods for linear programming find a global optimizer, but most of the standard methods of nonlinear optimization are local in nature. Global optimization is its own large area, in which techniques like *simulated annealing* and *evolutionary algorithms* are sometimes used.

A good place to learn about the `SciPy` functionality available for optimization is the tutorial on the official `SciPy` documentation pages, and then the accompanying reference guide. The same places also have good coverage of solving nonlinear equations, because some methods for optimization are based on root finding.
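As a quick illustration of the root-finding side, here is a minimal sketch using `scipy.optimize.brentq`; the equation $\cos x = x$ and the bracketing interval are our own example, not from the text.

```python
# Find where cos(x) = x using bracketing root finder brentq.
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.cos(x) - x
# f(0) > 0 and f(1) < 0, so a root is bracketed in [0, 1].
root = brentq(f, 0.0, 1.0)
print(root)  # approximately 0.739085
```

`brentq` requires a sign change over the interval; for systems of equations, `scipy.optimize.root` plays the analogous role.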

The `SciPy` `optimize` package provides a number of functions for optimization and for solving nonlinear equations. One such function is `minimize`, which provides unified access to the many optimization methods available through `scipy.optimize`. Another way is to call the individual functions, each of which may have different arguments.

- Recall Newton's method for solving $f(x) = 0,\; f:\mathbb{R}\rightarrow \mathbb{R}$. Needs the derivative $f'(x)$.
- Recall Newton's method for solving $f(x) = 0,\; f:\mathbb{R}^n\rightarrow\mathbb{R}^n$. Needs the Jacobian $J_f(x)$.

- Newton's method for optimization solves $\nabla f(x) = 0,\; f:\mathbb{R}^n\rightarrow\mathbb{R}$. Needs the Hessian $H_f(x)$.
- Secant-update methods like BFGS maintain an approximate Hessian instead.
- Nonlinear conjugate gradient (CG) methods do not maintain an approximate Hessian at all, and hence are suitable for large problems.
- Truncated (also known as inexact) Newton methods use a few iterations of an iterative solver to solve for the Newton step.
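The unified `minimize` interface can be sketched as follows; the Rosenbrock test function and the starting point are our own choices for illustration, not from the text.

```python
# Compare two of the methods above on the Rosenbrock test function,
# whose unique minimizer is [1, 1].
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])
# BFGS builds an approximate (inverse) Hessian from gradient evaluations.
res_bfgs = minimize(rosen, x0, jac=rosen_der, method='BFGS')
# Nonlinear CG needs no Hessian approximation at all.
res_cg = minimize(rosen, x0, jac=rosen_der, method='CG')
print(res_bfgs.x, res_cg.x)  # both near the minimizer [1, 1]
```

Swapping the `method` string is all it takes to switch algorithms, which is the point of the unified interface.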

See the `scipy.optimize` tutorial.

- Pick a special function of your choice from `scipy.special` and find minimizers and maximizers in a few intervals using `minimize_scalar` in `scipy.optimize`. Use the argument `method='brent'`.
- Modify the above exercise by placing bounds, using `minimize_scalar` with the argument `method='bounded'`.
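A minimal sketch of both variants of `minimize_scalar`; the choice of the Bessel function `j0` and the intervals are our own example, not a prescribed solution.

```python
# Locate the first minimum of the Bessel function j0 (near x = 3.8317).
from scipy.optimize import minimize_scalar
from scipy.special import j0  # Bessel function of the first kind, order 0

# Brent search from a bracketing triple (middle point has the lowest value):
res = minimize_scalar(j0, bracket=(2, 4, 6), method='brent')
# Bounded search restricted to an interval:
res_b = minimize_scalar(j0, bounds=(2, 6), method='bounded')
print(res.x, res_b.x)  # both near 3.8317
```

To find maximizers, minimize the negated function, e.g. `lambda x: -j0(x)`.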

For nonlinear least squares there is `scipy.optimize.leastsq`, which is a wrapper for modified Levenberg-Marquardt type methods implemented in MINPACK, a standard, well-used old FORTRAN library.

For nonlinear least squares with bounds on the variables, use `least_squares` in `scipy.optimize`.
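A minimal sketch of `least_squares` with box bounds; the linear model $y = ax + b$ and the data here are our own illustration, not from the text.

```python
# Fit y = a*x + b to noiseless data with box constraints on (a, b).
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0, 1, 20)
y = 2.0 * x + 0.5  # data generated with a = 2.0, b = 0.5

def residuals(p):
    a, b = p
    return a * x + b - y

# Constrain a to [0, 3] and b to [0, 1]:
res = least_squares(residuals, x0=[1.0, 0.1], bounds=([0, 0], [3, 1]))
print(res.x)  # close to [2.0, 0.5]
```

The `bounds` argument is a pair of lower and upper bound arrays; `leastsq` has no such argument, which is the main reason to prefer `least_squares` here.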

Suppose a signal defined on $[-5, 10]$ is measured at 100 points, and we know that the signal (i.e. the function) is produced as a sum of two Gaussians with unknown parameters. In particular, the signal is known to be of the form

$$ A_1 \exp\bigg(-\frac{(x-\mu_1)^2}{2\sigma_1^2}\bigg) + c_1 + A_2 \exp\bigg(-\frac{(x-\mu_2)^2}{2\sigma_2^2}\bigg) + c_2 $$

for some unknown parameter values $A_i$, $\mu_i$, $\sigma_i$, $c_i$.

Here is a sample such function with a particular choice of parameters.

In [19]:

```
from math import sqrt, pi
from numpy import exp, zeros_like, linspace
import matplotlib.pylab as plt

def N(x, A, c, mu, sigma):
    return A * exp(-(x - mu)**2 / (2. * sigma**2)) + c

def s(x, parameters):
    out = zeros_like(x)
    for (A, c, mu, sigma) in parameters:
        out += N(x, A, c, mu, sigma)
    return out

true_parameters = [(2, 2, -1, 1), (5, 1, 2, 1.5)]
x = linspace(-5, 10, 100)
sum = s(x, true_parameters)   # note: shadows the builtin `sum`
plt.plot(x, sum)
plt.show()
```

In [20]:

```
for (A, c, mu, sigma) in true_parameters:
    plt.plot(x, N(x, A, c, mu, sigma))
plt.plot(x, sum)
plt.show()
```

Next we recover the parameters from the sampled signal by nonlinear least squares using `scipy.optimize.leastsq`. The parameters thus discovered are used to draw the signal as the sum of the discovered Gaussians below.

In [21]:

```
from scipy.optimize import leastsq

def residuals(parametersflat, y, x):
    parameters = [tuple(r) for r in parametersflat.reshape((2, 4))]
    return y - s(x, parameters)

y_meas = s(x, true_parameters)
# True parameters = [(2, 2, -1, 1), (5, 1, 2, 1.5)]
p0 = [1.9, 2.2, -0.8, 1.3, 4.8, 1.3, 1.8, 1.8]
p = leastsq(residuals, p0, args=(y_meas, x))[0]
fit = s(x, [tuple(r) for r in p.reshape((2, 4))])
plt.plot(x, fit)
plt.show()
```

The actual and computed parameters are displayed below.

In [26]:

```
print('Actual parameters')
print(true_parameters)
print()
print('Computed parameters')
print(p)
```

- First copy the example in the tutorial page on nonlinear least squares. Once you have it working, play with increasing the noise in the signal to see how bad you can make it while still recovering reasonable values for the parameters. (You can decide what is reasonable. Plot the reconstructed function and decide whether you find it acceptable.)
- Try the above exercise using a function constructed as a sum of a number of instances of a parametrized function of your choosing.

See tutorial.

- Using the `minimize` function, compute the minimum of the scalar function $f(x,y) = (x−1)^2 + (y−2)^2 − 3x + 2y + 1$ in two ways: with and without providing the Jacobian.
- Try to maximize the function $f(x,y) = xy$ subject to the equality constraints $$ \begin{align} 2 x^2 + y^2 &= 1\\ x^2 + 3y^2 &=1 \end{align} $$

In [46]:

```
import numpy as np
X, Y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
plt.figure()
plt.pcolormesh(X, Y, (2*X**2 + Y**2 <= 1) & (X**2 + 3*Y**2 <= 1))
plt.contour(X, Y, X*Y)
plt.show()
```
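Equality constraints are passed to `minimize` as dictionaries together with a method that supports them, such as SLSQP. Here is a hedged sketch on a simpler problem of our own (minimize $x^2 + y^2$ subject to $x + y = 1$), so as not to give away the exercise above.

```python
# SLSQP with one equality constraint; the analytic minimizer is (0.5, 0.5).
import numpy as np
from scipy.optimize import minimize

objective = lambda v: v[0]**2 + v[1]**2
# Each constraint is a dict; 'eq' means fun(v) == 0 at the solution.
cons = [{'type': 'eq', 'fun': lambda v: v[0] + v[1] - 1}]
res = minimize(objective, x0=[0.0, 0.0], method='SLSQP', constraints=cons)
print(res.x)  # near [0.5, 0.5]
```

To maximize instead of minimize, pass the negated objective.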

Earlier versions of the `SciPy` `optimize` library did not have any linear programming. Now there is `scipy.optimize.linprog`, which we will use. Outside of `SciPy` you can also consider the `cvxopt` package by S. Boyd and L. Vandenberghe, the authors of the book *Convex Optimization*. This package can do much more than just linear programming (LP) and has its own sparse matrix formats.
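Basic `linprog` usage can be sketched on a toy problem of our own choosing: minimize $2x + y$ subject to $x + y \ge 1$, $x, y \ge 0$.

```python
# linprog minimizes c @ x subject to A_ub @ x <= b_ub and variable bounds,
# so the constraint x + y >= 1 is rewritten as -x - y <= -1.
from scipy.optimize import linprog

res = linprog(c=[2, 1], A_ub=[[-1, -1]], b_ub=[-1],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum at x=0, y=1 with objective value 1
```

Note the sign conventions: `linprog` always minimizes and expects "less than or equal" inequalities, so maximization problems and $\ge$ constraints must be negated first.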

One of the authors of AMPL, Brian Kernighan, also wrote `awk` in UNIX and is the Kernighan in the famous Kernighan and Ritchie book on C programming. The authors of AMPL have won numerous awards for it from the professional optimization societies.

Below is an example linear program solved with `cvxopt`. Your job in the exercise below will be to solve it using `scipy.optimize.linprog` and compare the solution to the one obtained by `cvxopt`. Note that a linear program may not have a unique solution.

In [33]:

```
from cvxopt import matrix
from cvxopt.solvers import lp
A = matrix([[-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0]]) # constraints matrix
b = matrix([1.0, -2.0, 0.0, 4.0 ]) # constraint RHS vector
c = matrix([2.0, 1.0 ]) # cost function coefficients
sol = lp(c, A, b)
```

In [34]:

```
print(sol['x'])
```

Note that the `matrix` here is not a `numpy` matrix:

In [35]:

```
type(A)
```

Out[35]:

A `cvxopt` `matrix` can, however, be easily created from `numpy` matrices:

In [37]:

```
import numpy as np
A1 = matrix(np.array([[-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0]]))
type(A1)
```

Out[37]:

Note that `A` and `A1` above are transposes of each other, even though they were created using similar-looking input arguments. This is because the `cvxopt` `matrix` constructor interprets a list of lists as a list of *columns*, whereas `numpy` interprets it as a list of *rows*.

In [42]:

```
print(A)
print(A1)
```

To make `A1` the same as `A`, we should instead transpose the `numpy` array:

In [43]:

```
A1 = matrix(np.array([[-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0]]).T)
print(A)
print(A1)
```

`cvxopt` also has a sparse matrix format, similar to `scipy.sparse.coo_matrix`, so it is easy to go back and forth between the two:

In [44]:

```
from cvxopt import spmatrix
from scipy.sparse import coo_matrix
I = [0, 2, 3]; J = [1, 4, 5]; data=[11, 13, 17]
S = spmatrix(data, I, J, tc='d') # tc='d' means typecode is double (float)
S1 = coo_matrix((data, (I, J)), dtype=float)
```

In [45]:

```
print(S)
print(S1.todense())
```

- Solve the above example using `scipy.optimize.linprog`.
- Using `scipy.optimize.linprog`, solve the linear program: maximize $x+y$ subject to $$ \begin{align} 50x + 24y &\le 2400\\ 30x + 33y &\le 2100 \end{align} $$