CHAPTER XIII: STATEMENT OF THE PROBLEM. DERIVATION OF THE NECESSARY CONDITIONS.
- 179 The general problem stated.
- 180 Existence of substitutions by which one integral remains unchanged while the other is caused to vary. An exceptional case.
- 181 Case of two variables. Convergence of the series that appear.
- 182 The nature of the substitutions that have been introduced.
- 183 Formation of certain quotients which depend only upon the nature of the curve.
- 184 Generalization, in which several integrals are to retain fixed values.
- 185 The quotient of two definite integrals being denoted by λ, it is shown that λ has the same constant value for the whole curve.
- 186 The differential equation G = 0.
- 187 Extension of the theorem of Article 97.
- 188 Discontinuities, etc.
- 189 The second variation: the three conditions formulated in Article 135 are also necessary here.
Article 179.
Many problems which arise in the Calculus of Variations present subsidiary conditions that limit the arbitrariness we have hitherto allowed in the indefinitely small variations of the analytical structure. Such problems are the most difficult and at the same time the most interesting that occur. These additional conditions which enter into the requirement for a maximum or a minimum are in general of a double nature. On the one hand, it may be proposed that among the variables there are to exist equations of condition, as indicated in Arts. 176 and 177. On the other hand, we may require that the maximum or the minimum in question satisfy a further condition, viz., it must cause another given integral to have a prescribed value. Such cases are usually called Relative Maxima and Minima.
If we limit our discussion to the region of two variables, then the problem which we have to consider may be expressed as follows (cf. Art. 17):
Let
and
be two functions of the same nature as the function
hitherto treated. The variables
and
are to be so determined as one-valued functions of
that the curve defined through the equations
will cause the integral

to be a maximum or a minimum, while at the same time for the same equations the integral

will have a prescribed value; that is, for every indefinitely small variation of the curve for which the second integral retains its value unaltered, the first integral, according as a maximum or a minimum is to enter, must be continuously smaller or continuously greater than it is for the curve
.
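In the notation of the later articles (cf. Art. 189, where $I^{(0)}$, $I^{(1)}$, $F^{(0)}$, $F^{(1)}$ appear), the problem just stated may be summarized thus; the parametric form written here is an assumption consistent with the formulas of that article:

$$I^{(0)}=\int_{t_0}^{t_1}F^{(0)}(x,y,x',y')\,dt=\text{a maximum or a minimum},$$

$$I^{(1)}=\int_{t_0}^{t_1}F^{(1)}(x,y,x',y')\,dt=\text{a prescribed constant}.$$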
Article 180.
We must first show that it is possible to represent analytically the variations of a curve for which the integral
retains a constant value.
In the place of the variables
let us make the substitution
. The variation of the second integral is accordingly

where
denotes that the terms within the brackets are of the second and higher dimensions in
.
We have so to determine
and
that
. For this purpose we write

where
are arbitrary constants and the functions
are functions similar to the quantities
of the preceding Chapters and vanish for
and
. Now write

and

Hence, from 3) we have

If we write

it follows that

The functions
are completely determined as soon as definite values are given to
; and, in order that
, it is necessary that

If any of the quantities
, for example
, are different from zero, we are able to express
in a power-series of the remaining
's, when these quantities have been chosen sufficiently small.[1] The equation
may consequently be satisfied for sufficiently small systems of values of the
's.
Substitute one of these systems of values in 4) and it is seen that indefinitely small variations of the curve
exist for which the integral
remains unaltered. These variations may be analytically represented (see the next Article).
This proof is deficient in the case where all the quantities
are zero for all values of
, however
may have been chosen. When this is the case,
must be zero along the whole curve. But this is one of the necessary conditions that the integral
have a maximum or a minimum value.
If, then, for the curve which is derived through the solution of the differential equation
there also enters a maximum or a minimum value of the integral
and consequently
, it is in general not possible so to vary the curve that the second integral remains unaltered.
This case is excluded from the present discussion, and is left for special investigation in each particular problem.
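In the $G$-notation of Arts. 184 and 187, the excluded case may be summarized thus (a sketch, assuming $G^{(1)}$ denotes the function whose vanishing is the first necessary condition for an extremum of the second integral):

$$G^{(1)}=0 \quad\text{along the whole curve;}$$

that is, the curve is itself a solution of the problem of making $I^{(1)}$ a maximum or a minimum, and such curves are reserved for special investigation.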
Article 181.
Let us limit ourselves for the present to the simplest case where

and if we denote the integrals in the expansion of
that are associated with the coefficients
by
, the equation corresponding to (A) of the last article is

which series we suppose convergent for sufficiently small values of
and
.
Suppose next we express
in terms of
by the series

Then, when this value of
is substituted in
, by equating the coefficients of the different powers of
to zero, we have


.....................................
Hence, denoting the quotients
by
, where
, we have


.............................
Further, the equation
may be written

Let us compare this series with the series

$$=g\left[\frac{\epsilon}{r}+\left(\frac{\epsilon}{r}+\frac{\epsilon_1}{r_1}\right)^2+\left(\frac{\epsilon}{r}+\frac{\epsilon_1}{r_1}\right)^3+\cdots\right]$$
Suppose from this series we have
expressed in terms of
in the form

where the
s have been derived from the coefficients of powers of
and
as the
s in
are formed from the coefficients
in
.
The series
is convergent for

If, then, the coefficients
of
are in absolute value less than the corresponding coefficients in
, the coefficients
in
are less in absolute value than the coefficients
in
, and therefore the series
is convergent.
Now the coefficients of
in
and
are respectively
and 
where the symbol
denotes
. Hence for sufficiently small values of
and
, if

and

the series
is convergent, and when substituted in the expression for
causes this expression to vanish.
Article 182.
The expression for
as a function of
is had from the relation

Hence, it follows that

or
$$\frac{\epsilon_1}{r_1}=\frac{1}{2}\left[\frac{r_1}{r_1+g}-\frac{\epsilon}{r}\pm\sqrt{\left(\frac{\epsilon}{r}-\frac{r_1}{r_1+g}\right)^2-\frac{4g\epsilon}{r(r_1+g)}}\right]$$
Of the two roots we choose the one with the lower sign in order that
equal zero with
. This root may be written
$$\frac{\epsilon_1}{r_1}=\frac{1}{2}\left[\frac{r_1}{r_1+g}-\frac{\epsilon}{r}-\left(\frac{r_1}{r_1+g}-\frac{\epsilon}{r}\right)\sqrt{1-\left(\frac{4g\epsilon}{r(r_1+g)}\right)\left(\frac{r_1}{r_1+g}-\frac{\epsilon}{r}\right)^{-2}}\,\right]$$
It is seen that the expression under the radical is finite, continuous and one-valued for values of
such that
and 
Article 183.
Returning to the substitutions

we assume that the functions
become zero at the endpoints (or limits) of the curve and are so chosen that
does not vanish within the limits of integration. We have then at once from
the power-series

where the power-series
vanishes with
.
From this we have

If we subject the integral
to the same variation, we have [cf. formula
]

and consequently

If then, the integral
is to have a maximum or a minimum value, it is necessary that

be equal to zero.
We have, therefore, the necessary condition

From this it is seen that the quotient
, is independent of the arbitrary functions
, since it does not vary if we write for
as functions of
other functions
. Consequently it follows that the value of the above quotient depends only upon the nature of the curve
.
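The necessary condition of this article may be written in the $G$-notation that appears in Arts. 184 and 187 (a reconstruction, the displayed formula being lost; $w$ is the arbitrary variation-function of those articles):

$$\frac{\delta I^{(0)}}{\delta I^{(1)}}=\frac{\int_{t_0}^{t_1}G^{(0)}w\,dt}{\int_{t_0}^{t_1}G^{(1)}w\,dt}=\text{const.},$$

the quotient being independent of the arbitrary function $w$ and depending only upon the nature of the curve itself.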
Article 184.
We might generalize the problem treated above by requiring the curve
which minimizes or maximizes the integral

while at the same time the following integrals have a prescribed value:


...............................................

the functions
being of the same nature as the function
defined in Chapter I.
We must now consider the deformation of the curve caused by the variations

We have, then, if we write
, and suppose that the
's and
's vanish for
and



.................................................

By means of the last
equations, if the determinant

is different from zero, we may, for sufficiently small values of
, express these quantities as convergent power-series in
[2]
These power-series when substituted in
cause it to have the form

where

In order that the integral
have a maximum or a minimum value, it is therefore necessary that

This determinant, when expanded, may be written in the form
$$\int_{t_0}^{t_1}\left[\lambda_0 G^{(0)}+\lambda_1 G^{(1)}+\cdots+\lambda_\mu G^{(\mu)}\right]w\,dt=0$$
where
is the first minor of
in the determinant
.
Hence, as before (cf. Art. 79, where we had
), we have here
$$\lambda_0 G^{(0)}+\lambda_1 G^{(1)}+\cdots+\lambda_\mu G^{(\mu)}=0$$
Article 185.
Similarly, if in Art. 183 we denote the quotient
by
and then give to
and
their values, we have

From this it follows that

We may prove a very important theorem regarding the constant
, viz.: it has one and the same value for the whole curve; i.e., we always have the same value of
, whatever part of the curve
we may vary. Consider the values of
laid off on a straight line, and suppose that the constant
has a definite value for, say, the interval
which also corresponds to a certain portion of curve. This value (see Art. 183) is independent of the manner in which the portion of curve
has been varied. Next consider an interval
which includes the interval
; then, there belongs to all the possible variations of the interval
, also that variation by which
and
remain unchanged and only
varies. As
has a definite value for this interval and is independent of the manner in which the curve has been varied, it must have the same value for
.
Article 186.
The differential equation
is the same as the one we would have if we require that the integral

have a maximum or a minimum value, where
is written for the function

Through this differential equation (see Art. 90),
and
are expressible in terms of
and
and two constants of integration
and
in the form

The curve represented by these equations is a solution of the problem, when indeed a solution is possible.
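If, as in the corner conditions of Art. 187, we write $F=F^{(0)}-\lambda F^{(1)}$, the differential equation of this article is presumably the Weierstrass form used in the earlier chapters (a sketch; $F_1$ is the function defined by $\partial^2F/\partial x'^2=y'^2F_1$, $\partial^2F/\partial y'^2=x'^2F_1$, which reappears in the second variation of Art. 189):

$$G\equiv\frac{\partial^2 F}{\partial x\,\partial y'}-\frac{\partial^2 F}{\partial y\,\partial x'}+F_1\left(x'y''-x''y'\right)=0.$$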
Article 187.
We prove next a very important theorem which often gives a criterion whether a sudden change in direction can take place or not within a stretch where the variation is unrestricted (cf. Art. 97). Suppose that on a position
, where the variation is unrestricted, a sudden change in direction is experienced. On either side of
take two points
and
so near to
that within the intervals
and
a similar discontinuity in change of direction is not had. Among the possible variations there is one such that the whole curve remains unchanged except the interval
, which is, of course, varied in such a way that the integral
retains its value. The variation of the integral
depends then only upon the variation of the sum of integrals

We cause a variation in the stretch
by writing

where we assume that
(A)
are all zero for
and 
are zero for 
for 
We may then always determine
as a power-series in
so that
.
If by
we denote an expression of the form
, we have (Art. 79)
$$\Delta I^{(0)}=\epsilon\int_{t_1}^{t'}Gw\,dt+\epsilon\int_{t'}^{t_2}Gw\,dt+\epsilon\left[(\xi-\lambda\xi_1)\frac{\partial F}{\partial x'}+(\eta-\lambda\eta_1)\frac{\partial F}{\partial y'}\right]_{t_1}^{t'}+\epsilon\left[(\xi-\lambda\xi_1)\frac{\partial F}{\partial x'}+(\eta-\lambda\eta_1)\frac{\partial F}{\partial y'}\right]_{t'}^{t_2}+\epsilon(\epsilon)$$
If the curve
minimizes or maximizes the integral
, it is necessary that the coefficient of
on the right-hand side of the above expression be zero. Since
for unrestricted variation, it follows from the assumption (A) that
$$\eta_{t'}\left[\left(\frac{\partial F}{\partial y'}\right)_{t'}^{-}-\left(\frac{\partial F}{\partial y'}\right)_{t'}^{+}\right]=0$$
If in the assumptions (A) we assume for
that
and
, we have an analogous equation for
.
It therefore follows (cf. Art. 97) that
$$\left[\frac{\partial(F^{(0)}-\lambda F^{(1)})}{\partial x'}\right]_{t'}^{-}=\left[\frac{\partial(F^{(0)}-\lambda F^{(1)})}{\partial x'}\right]_{t'}^{+}$$
$$\left[\frac{\partial(F^{(0)}-\lambda F^{(1)})}{\partial y'}\right]_{t'}^{-}=\left[\frac{\partial(F^{(0)}-\lambda F^{(1)})}{\partial y'}\right]_{t'}^{+}$$
We have then the theorem: Along the freely varying positions of the curve which satisfies the differential equation
, the quantities
and
vary everywhere in a continuous manner, even on such positions of the curve where a sudden change in its direction takes place.
Article 188.
It is obvious that these discontinuities may all be avoided, if we assume that
vanish at such points. This we may suppose has been done. We may also impose many other restrictions upon the curve ; for example, that it is to go through certain fixed points, or that it is to contain certain given portions of curve, or that it is to pass through a certain limited region. In all these cases there are points on the curve which cannot vary in a free manner. But whatever condition may be imposed upon the curve, the following theorem is true.
All points which are free to vary (and there always exist such points) must satisfy the differential equation
, and for all such points the constant
has the same value.
Article 189.
The second variation. We assume that the variations at the limits and at all points of the curve where there is a discontinuity in the direction, vanish. We also suppose that the variations
have been so chosen that
.
We then have (cf. Art. 115):
$$\Delta I^{(0)}=\epsilon\,\delta I^{(0)}+\frac{\epsilon^2}{2}\int_{t_0}^{t_1}\left[F_1^{(0)}\left(\frac{dw}{dt}\right)^2+F_2^{(0)}w^2\right]dt+(\epsilon)_3$$
$$0=\epsilon\,\delta I^{(1)}+\frac{\epsilon^2}{2}\int_{t_0}^{t_1}\left[F_1^{(1)}\left(\frac{dw}{dt}\right)^2+F_2^{(1)}w^2\right]dt+(\epsilon)_3$$
and consequently
$$\Delta I^{(0)}=\epsilon\left[\delta I^{(0)}-\lambda\,\delta I^{(1)}\right]+\frac{\epsilon^2}{2}\int_{t_0}^{t_1}\left[F_1\left(\frac{dw}{dt}\right)^2+F_2 w^2\right]dt+(\epsilon)_3$$
Since

it follows that
$$\Delta I^{(0)}=\frac{\epsilon^2}{2}\int_{t_0}^{t_1}\left[F_1\left(\frac{dw}{dt}\right)^2+F_2 w^2\right]dt+(\epsilon)_3$$
This last integral may be written at once (Art. 119) in the form

where
is determined from the differential equation (Art. 118)

It follows here as a necessary condition for the existence of a maximum or a minimum that $F_1$, for all portions of the curve at which there is free variation, must in the first case be everywhere negative and in the second case everywhere positive, and must also be different from 0 and $\infty$. In order that this transformation of the integral be possible the equation
must admit of being integrated in such a way that
$u$ is different from zero on all portions of the curve which vary freely (Art. 128).
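The transformation referred to is presumably the one of Art. 119, which, under the assumption that the function $u$ does not vanish on the freely varying portions, puts the second variation in the form

$$\frac{\epsilon^2}{2}\int_{t_0}^{t_1}F_1\left(\frac{dw}{dt}-\frac{w}{u}\frac{du}{dt}\right)^2 dt,$$

where $u$ satisfies the differential equation of Art. 118:

$$F_2\,u-\frac{d}{dt}\left(F_1\frac{du}{dt}\right)=0.$$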
We shall determine in Chapter XVII whether the three necessary conditions thus formulated are also sufficient for a maximum or a minimum value of the integral
. By means of the example in the next Chapter, we shall also show that if there exists a curve, for which the first integral has a maximum or a minimum value while the second integral retains a given value, then the curve is determined through the three conditions, which are the same here as those formulated in Art. 135. The behavior of the
-function is then decisive regarding whether there in reality exists a maximum or a minimum.
- ↑ Cf. Lectures on the Theory of Maxima and Minima, etc., p. 20.
- ↑ Cf. Lectures on the Theory of Maxima and Minima of Functions of Several Variables, p. 21.