Calculus Optimization Methods
A key application of calculus is optimization: finding the maximum and minimum values of a function, and the points at which these extrema are attained.
Context
Formally, the field of mathematical optimization is called mathematical programming, and calculus-based methods of optimization are basic forms of nonlinear programming. We primarily discuss finite-dimensional optimization, illustrating with functions of one or two variables and treating the case of n variables algebraically. We also indicate some extensions to infinite-dimensional optimization, such as the calculus of variations, a primary application of these methods in physics.
Techniques
Basic techniques include the first and second derivative tests and their higher-dimensional generalizations.
A more advanced technique is the method of Lagrange multipliers, together with its generalizations, such as the Karush–Kuhn–Tucker conditions and Lagrange multipliers on Banach spaces.
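As a one-variable illustration of the two derivative tests, here is a minimal sketch in Python using sympy; the function $f(x) = x^3 - 3x$ is a made-up example, not one used elsewhere on this page.

```python
# A minimal sketch of the first and second derivative tests in one
# variable. The function f(x) = x**3 - 3*x is a made-up example.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x

# First derivative test: stationary points solve f'(x) = 0.
stationary_points = sp.solve(sp.diff(f, x), x)   # [-1, 1]

# Second derivative test: the sign of f''(x) classifies each point.
f2 = sp.diff(f, x, 2)
for p in stationary_points:
    curvature = f2.subs(x, p)
    if curvature > 0:
        print(p, "local minimum")
    elif curvature < 0:
        print(p, "local maximum")
    else:
        print(p, "test inconclusive")
# Output: -1 is a local maximum, 1 is a local minimum
```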
Applications
Optimization, particularly via Lagrange multipliers, is used especially in the following fields:
- Physics, particularly the Lagrangian formulation of classical mechanics
- Neoclassical economics
Terminology
- Input points, output values
- Maxima, minima, extrema, optima
- Stationary point, critical point; stationary value, critical value
- Objective function
- Constraints – equality and inequality
  - Especially sublevel sets
- Feasible region, whose points are candidate solutions
Statement
This tutorial presents an introduction to optimization problems that involve finding the maximum or the minimum value of an objective function $f(x,y)$ subject to a constraint of the form $g(x,y) = c$.
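The standard attack on such problems is the method of Lagrange multipliers: solve $\nabla f = \lambda \nabla g$ together with the constraint $g(x,y) = c$. Here is a minimal sympy sketch; the objective $f = xy$ and constraint $x + y = 2$ are made-up illustrations.

```python
# A minimal Lagrange multiplier sketch: solve grad f = lambda * grad g
# together with the constraint. f = x*y and x + y = 2 are made up.
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x * y          # objective function
g = x + y - 2      # constraint, written in the form g(x, y) = 0

equations = [
    sp.diff(f, x) - lam * sp.diff(g, x),   # y - lambda = 0
    sp.diff(f, y) - lam * sp.diff(g, y),   # x - lambda = 0
    g,                                     # x + y - 2 = 0
]
print(sp.solve(equations, [x, y, lam], dict=True))
# The unique solution is x = y = lambda = 1: f is maximized
# on the constraint line at the point (1, 1).
```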
Maximum and minimum
Finding the optimum values of a function without a constraint is a well-known problem treated in calculus courses. One normally uses the gradient to locate stationary points, then checks all stationary and boundary points to find the optimum values.
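In code, this recipe amounts to solving $\nabla f = 0$ symbolically; a minimal sympy sketch follows, with a made-up two-variable function that has two stationary points.

```python
# A minimal sketch: find stationary points by solving grad f = 0.
# The function f(x, y) = x**3 - 3*x + y**2 is a made-up example.
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2

gradient = [sp.diff(f, v) for v in (x, y)]   # [3*x**2 - 3, 2*y]
print(sp.solve(gradient, [x, y], dict=True))
# [{x: -1, y: 0}, {x: 1, y: 0}]: each point still needs classification
```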
Example
The function $f(x,y) = x^2 + y^2$ has one stationary point, at $(0,0)$: the gradient $\nabla f = (2x, 2y)$ vanishes only there.
The Hessian
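The Hessian of a twice-differentiable function $f\colon \mathbb{R}^n \to \mathbb{R}$ is the $n \times n$ matrix of its second partial derivatives,

$$H(f) = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \, \partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{pmatrix},$$

which generalizes the second derivative to several variables.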
Second derivative test
The second derivative test determines the optimality of a stationary point $x$ according to the following rules:
- If the Hessian $H(x)$ is positive definite, then $f$ has a local minimum at $x$
- If the Hessian $H(x)$ is negative definite, then $f$ has a local maximum at $x$
- If $H(x)$ has both positive and negative eigenvalues, then $x$ is a saddle point
- Otherwise the test is inconclusive
In the above example, the Hessian is $H = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$, which is positive definite: both eigenvalues equal $2$. Therefore $f$ has a minimum at $(0,0)$.
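The same conclusion can be checked mechanically; a minimal sympy sketch for the example above:

```python
# Classify the stationary point of f(x, y) = x**2 + y**2 with the
# second derivative test: form the Hessian and inspect its eigenvalues.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

H = sp.hessian(f, (x, y))     # Matrix([[2, 0], [0, 2]])
eigenvalues = H.eigenvals()   # {2: 2}: eigenvalue 2, multiplicity 2

if all(ev > 0 for ev in eigenvalues):
    print("Hessian is positive definite: local minimum at (0, 0)")
```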
Sections
- Optimization on a Finite Set
- Optimization on an Interval
- Optimization on a Cube
- Constrained Optimization
- Lagrange Multipliers