## Optimization

### Full course description

In everyday life we are surrounded by applications of optimization. A common drive of human activity is to make things better, to enhance performance, and to carry out the best possible actions in given situations. Often the essentials of a situation can be captured by a mathematical description (a model, with or without constraints) and the value of a proposed action by a function (an optimization criterion). The goal then becomes to optimize the criterion for the given model under the associated constraints (if any). Depending on the nature of the model, the constraints, and the optimization criterion, many different mathematical techniques are available to characterize and compute optima.

In this course we address the most important areas in optimization and study the most common techniques. First, we consider the optimization of unconstrained continuous functions in several variables. Notions we will come across include: partial derivatives; the gradient and the Hessian; stationary points; minima, maxima and saddle points; local and global optima. Techniques to compute optima range from analytical and algebraic methods (i.e., solving systems of equations) to iterative and approximate numerical methods (e.g., gradient methods and hill climbing, Newton and quasi-Newton methods, and several others). We will focus on a selection of these.

An important class of functions to consider is that of least squares criteria. We will consider both linear and nonlinear least squares problems and suitable iterative techniques to solve them. Linear least squares problems are often encountered in the context of fitting a model to measurement data. They also allow one to rephrase the problem of solving a nonlinear system of equations as an optimization problem, while the converse is possible too.

Second, we address optimization problems subject to a given set of constraints.
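As a taste of the unconstrained techniques above, here is a minimal sketch of a fixed-step gradient method and Newton's method applied to a quadratic criterion. The function, starting point, step size, and iteration counts are illustrative assumptions, not course material.

```python
import numpy as np

# Made-up quadratic criterion: f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2.
# Its unique stationary point (1, -0.5) is the global minimum.

def grad(v):
    x, y = v
    return np.array([2.0 * (x - 1.0), 4.0 * (y + 0.5)])

def hessian(v):
    # Constant Hessian, since f is quadratic; it is positive definite.
    return np.array([[2.0, 0.0], [0.0, 4.0]])

def gradient_descent(v0, step=0.1, iters=200):
    v = np.asarray(v0, dtype=float)
    for _ in range(iters):
        v = v - step * grad(v)  # move against the gradient
    return v

def newton(v0, iters=5):
    v = np.asarray(v0, dtype=float)
    for _ in range(iters):
        # Newton step: solve H * d = grad, then subtract d.
        v = v - np.linalg.solve(hessian(v), grad(v))
    return v

print(gradient_descent([0.0, 0.0]))  # approaches the minimizer (1, -0.5)
print(newton([0.0, 0.0]))            # exact after one step for a quadratic
```

For a quadratic criterion Newton's method lands on the minimizer in a single step, while the gradient method only converges geometrically; this contrast is one reason the course covers both families.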
A well-known class of such problems consists of linear objective functions subject to linear equality or inequality constraints: the class of linear programs. The problem of fitting a linear model to measurement data using the criterion of least absolute deviations can be reformulated as a linear program. Several methods are available to solve such problems, including active set methods and the simplex algorithm, but also interior point methods and primal-dual methods. We discuss the Kuhn-Tucker conditions for optimality. For the optimization of nonlinear functions subject to nonlinear constraints we address the Lagrange multiplier method.

To demonstrate the various optimization problems and solution techniques, we will provide many examples and exercises. To demonstrate the wide range of applicability, these are taken from different fields of science and engineering.

### Course objectives

• To discuss a diversity of optimization problems and solution techniques, and their relation to real-world problems.

### Prerequisites

SCI2018 Calculus and SCI2019 Linear Algebra.

### Recommended literature

• Hand-outs will be distributed during the course.

Recommended literature:

• Hillier & Lieberman (2010): Introduction to Operations Research (9th edition). McGraw-Hill, ISBN 978-007-126767-0.
• R.J. Fletcher, Practical Methods of Optimization (2nd ed.), Wiley, 1987 (2nd ed.: 2004).

**Instructional Format**

Lectures and exercises. Some exercises will be worked out in computer classes using Matlab.
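For example, a computer exercise along these lines could pose the least absolute deviations fit from the course description as a linear program: introduce one slack variable t_i per data point with t_i >= |residual_i| and minimize their sum. The sketch below is a hedged illustration in Python (rather than Matlab) using scipy.optimize.linprog; the data points are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up measurements; fit y ~ a*x + b minimizing sum_i |y_i - (a*x_i + b)|.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 1.0, 2.1, 2.9])
n = len(x)

# Decision vector: [a, b, t_1, ..., t_n]; minimize sum of the t_i.
c = np.concatenate([[0.0, 0.0], np.ones(n)])

# t_i >= |y_i - (a*x_i + b)| splits into two linear inequalities:
#   a*x_i + b - t_i <= y_i    and    -a*x_i - b - t_i <= -y_i
A1 = np.hstack([x[:, None], np.ones((n, 1)), -np.eye(n)])
A2 = np.hstack([-x[:, None], -np.ones((n, 1)), -np.eye(n)])
A_ub = np.vstack([A1, A2])
b_ub = np.concatenate([y, -y])

# a and b are free; the slacks t_i are nonnegative.
bounds = [(None, None), (None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_hat, b_hat = res.x[:2]
print(a_hat, b_hat, res.fun)
```

For this data the optimal line interpolates two of the four points, a typical feature of least absolute deviations fits that mirrors the vertex structure of linear programs.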

**Examinations**

A written midterm and a written final exam.

SCI3003

Period 1

4 Sep 2023 – 27 Oct 2023

Credits: 5.0

Coordinator:

Teaching method: Lecture(s)