Optimization
Full course description
In everyday life we are surrounded by applications of optimization. A common driver of human activity is to make things better, to enhance performance, and to carry out the best possible actions in given situations. Often the essentials of a situation can be captured by a mathematical description (a model, with or without constraints) and the value of a proposed action by a function (an optimization criterion). The goal then becomes to optimize the criterion for the given model under the associated constraints (if any). Depending on the nature of the model, the constraints, and the optimization criterion, many different mathematical techniques are available to characterize and compute optima. In this course we address the most important areas of optimization and study the most common techniques.
First, we consider the optimization of unconstrained continuous functions of several variables. Some notions we will come across are: partial derivatives; the gradient and the Hessian; stationary points; minima, maxima and saddle points; local and global optima. Techniques to compute optima range from analytical and algebraic techniques (i.e., solving systems of equations) to iterative and approximate numerical techniques (e.g., gradient methods and hill climbing, Newton and quasi-Newton methods, and several others). We will focus on a selection of these. An important class of functions to consider is that of least squares criteria. We will consider both linear and nonlinear least squares problems and suitable iterative techniques to solve them. Linear least squares problems are often encountered when fitting a model to measurement data. Least squares criteria also allow one to rephrase the problem of solving a nonlinear system of equations as an optimization problem, and the converse is possible too.
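To give a flavour of these techniques, the sketch below (illustrative only, not course material) fits a straight line to made-up noisy data by linear least squares in Matlab, first with the closed-form solution via the backslash operator and then with a simple fixed-step gradient descent on the same criterion; the data, step size and iteration count are assumptions chosen for the example.

    % Illustrative sketch only (assumed data): fit a straight line y ~ a*x + b
    % to noisy measurements by linear least squares.
    x = (0:0.1:1)';                      % measurement points
    y = 2*x + 1 + 0.05*randn(size(x));   % noisy observations of y = 2x + 1
    A = [x, ones(size(x))];              % design matrix for the parameters [a; b]
    p = A \ y;                           % closed-form least squares solution of A*p ~ y
    fprintf('backslash:        a = %.3f, b = %.3f\n', p(1), p(2));

    % The same criterion f(p) = ||A*p - y||^2 minimised by fixed-step gradient
    % descent, as an example of an iterative numerical technique.
    p_gd = [0; 0];                       % starting point
    step = 1 / (2*norm(A)^2);            % step 1/L, with L = 2*norm(A)^2 the
                                         % Lipschitz constant of the gradient
    for k = 1:500
        grad = 2 * A' * (A*p_gd - y);    % gradient of the least squares criterion
        p_gd = p_gd - step * grad;       % gradient descent update
    end
    fprintf('gradient descent: a = %.3f, b = %.3f\n', p_gd(1), p_gd(2));

Both computations minimise the same criterion; the iterative version is shown only to contrast the analytical and numerical approaches discussed above.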
Second, we address optimization problems subject to a given set of constraints. A well-known class of such problems consists of linear objective functions subject to linear equality or inequality constraints: the class of linear programs. The problem of fitting a linear model to measurement data using the criterion of least absolute deviations can be reformulated as a linear program. Several methods are available to solve such problems, including active set methods and the simplex algorithm, but also interior point methods and primal-dual methods. We discuss the Kuhn-Tucker conditions for optimality. For the optimization of nonlinear functions subject to nonlinear constraints we address the Lagrange multiplier method.
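As an illustration of the least absolute deviations reformulation, the sketch below (again illustrative only, with assumed data) fits a line y ~ a*x + b by introducing auxiliary variables t_i that bound the absolute residuals, and solves the resulting linear program with linprog; this assumes Matlab's Optimization Toolbox is available.

    % Illustrative sketch only: least absolute deviations fit of y ~ a*x + b,
    % reformulated as a linear program (assumes the Optimization Toolbox's linprog).
    x  = (0:0.2:2)';
    y  = 3*x - 1 + 0.1*randn(size(x));
    n  = numel(x);
    Ad = [x, ones(n,1)];                 % design matrix for the parameters [a; b]
    % Decision vector z = [a; b; t_1; ...; t_n]; minimise sum(t) subject to
    %   (a*x_i + b) - y_i <= t_i   and   y_i - (a*x_i + b) <= t_i,
    % so that each t_i bounds the absolute residual |a*x_i + b - y_i|.
    f = [0; 0; ones(n,1)];
    A = [ Ad, -eye(n);
         -Ad, -eye(n)];
    b = [ y; -y];
    z = linprog(f, A, b);
    fprintf('a = %.3f, b = %.3f\n', z(1), z(2));

The reformulation trades the nonsmooth absolute values for linear inequality constraints, which is exactly what makes linear programming methods such as the simplex algorithm applicable.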
To illustrate the various optimization problems and solution techniques, we will provide many examples and exercises; to demonstrate their wide applicability, these are taken from different fields of science and engineering. So that students can become acquainted with applying optimization techniques, one computer class is organized in which the basics of the software package Matlab are presented.
Course objectives
- To become familiar with the basic concepts and methods of optimization.
- To understand how techniques from calculus and linear algebra are useful for optimization.
- To become familiar with a diversity of optimization problems and solution techniques.
- To be able to cast certain real-world problems into the form of optimization problems.
- To be able to solve certain optimization problems with software (Matlab).
Prerequisites
SCI2018 Calculus and SCI2019 Linear Algebra.
Recommended reading
- Hand-outs will be distributed during the course.
Recommended literature:
- F.S. Hillier and G.J. Lieberman: Introduction to Operations Research (10th edition). McGraw-Hill, 2015. ISBN 978-0-07-352345-3.
- A.D. Belegundu and T.R. Chandrupatla: Optimization Concepts and Applications in Engineering (2nd edition). Cambridge University Press, 2011.
- Martin T. Hagan et al.: Neural Network Design (2nd edition), available as a free e-book.