Calculus of Variations/CHAPTER VIII

CHAPTER VIII: THE SECOND VARIATION; ITS SIGN DETERMINED BY THAT OF THE FUNCTION ${\displaystyle F_{1}}$.

• 111 Nature and existence of the substitutions introduced.
• 112 The total variation.
• 113,114 The second variation of the function ${\displaystyle F}$.
• 115 The second variation of the integral ${\displaystyle I}$. The sign of the second variation in the determination of maximum or minimum values.
• 116 Discontinuities.
• 117 The sign of the second variation made to depend upon that of ${\displaystyle F_{1}}$.
• 118 The admissibility of a transformation that has been made. The differential equation ${\displaystyle J=0}$.
• 119 A simple form of the second variation.
• 120 A general property of a linear differential equation of the second order.
• 121 The second variation and the function ${\displaystyle F_{1}}$. The function ${\displaystyle F_{1}}$ cannot change sign and must be different from ${\displaystyle 0}$ and ${\displaystyle \infty }$ in order that there may be a maximum or a minimum.

Article 111.
The substitution of ${\displaystyle x+\epsilon \xi }$, ${\displaystyle y+\epsilon \eta }$ for ${\displaystyle x}$, ${\displaystyle y}$ causes any point of the original curve to move along a straight line, which makes an angle with the ${\displaystyle X}$-axis whose tangent is ${\displaystyle {\frac {\eta }{\xi }}}$.

This deformation of the curve is insufficient if we require that the point move along a curve other than a straight line.

To avoid this inadequacy we make the more general substitution (by which the regular curve remains regular):

${\displaystyle x\rightarrow x+\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots }$, ${\displaystyle \qquad y\rightarrow y+\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots }$

where, like ${\displaystyle \xi }$, ${\displaystyle \eta }$ in our previous development (Art. 75), the quantities ${\displaystyle \xi _{1}}$, ${\displaystyle \eta _{1}}$, ${\displaystyle \xi _{2}}$, ${\displaystyle \eta _{2}\ldots }$ are functions of ${\displaystyle t}$ that are finite, continuous, one-valued, and differentiable (as far as necessary) between the limits ${\displaystyle t_{0}\ldots t_{1}}$. These series are supposed to be convergent for values of ${\displaystyle \epsilon }$ such that ${\displaystyle |\epsilon |<1}$.

That such substitutions exist may be seen as follows:

Since the curve is regular, the coordinates of the points neighboring ${\displaystyle P_{0}}$ and ${\displaystyle P_{1}}$ may be expressed by series in the form, say,

${\displaystyle ({\text{A}})\qquad x_{0}+\epsilon a_{0}^{(1)}+{\frac {\epsilon ^{2}}{2!}}a_{0}^{(2)}+\cdots }$, ${\displaystyle \qquad y_{0}+\epsilon b_{0}^{(1)}+{\frac {\epsilon ^{2}}{2!}}b_{0}^{(2)}+\cdots }$
${\displaystyle ({\text{B}})\qquad x_{1}+\epsilon a_{1}^{(1)}+{\frac {\epsilon ^{2}}{2!}}a_{1}^{(2)}+\cdots }$, ${\displaystyle \qquad y_{1}+\epsilon b_{1}^{(1)}+{\frac {\epsilon ^{2}}{2!}}b_{1}^{(2)}+\cdots }$

where the coefficients of the powers of ${\displaystyle \epsilon }$ are constants and the series are convergent.

Suppose, now, that we seek to determine the functions of ${\displaystyle t}$

${\displaystyle ({\text{C}})\qquad x+\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots }$, ${\displaystyle \qquad y+\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots }$

such that for ${\displaystyle t=t_{0}}$ and ${\displaystyle t=t_{1}}$, the expressions (C) will be the same as (A) and (B).

This may be done, for example, by writing

${\displaystyle \xi _{1}=t^{2}+\alpha _{1}t+\alpha _{2}}$,
${\displaystyle \eta _{1}=t^{2}+\beta _{1}t+\beta _{2}}$,

and then determining ${\displaystyle \alpha _{1}}$, ${\displaystyle \alpha _{2}}$, ${\displaystyle \beta _{1}}$, ${\displaystyle \beta _{2}}$ in such a way that

${\displaystyle t_{0}^{2}+\alpha _{1}t_{0}+\alpha _{2}=a_{0}^{(1)}}$; ${\displaystyle \qquad t_{0}^{2}+\beta _{1}t_{0}+\beta _{2}=b_{0}^{(1)}}$,
${\displaystyle t_{1}^{2}+\alpha _{1}t_{1}+\alpha _{2}=a_{1}^{(1)}}$; ${\displaystyle \qquad t_{1}^{2}+\beta _{1}t_{1}+\beta _{2}=b_{1}^{(1)}}$.

From this it is seen that

${\displaystyle \alpha _{1}=-(t_{1}+t_{0})+{\frac {a_{1}^{(1)}-a_{0}^{(1)}}{t_{1}-t_{0}}}}$, etc.

In the same way we may determine quadratic expressions in ${\displaystyle t}$ for ${\displaystyle \xi _{2}}$, ${\displaystyle \eta _{2}}$, etc.

The substitutions thus obtained are of the nature of those which we have assumed to exist, and may evidently be constructed in an infinite number of different ways.
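As a concrete illustration of this determination, the following sketch solves the two linear endpoint conditions for ${\displaystyle \alpha _{1}}$, ${\displaystyle \alpha _{2}}$ and checks the result against the closed form above. The numerical values standing in for ${\displaystyle t_{0}}$, ${\displaystyle t_{1}}$, ${\displaystyle a_{0}^{(1)}}$, ${\displaystyle a_{1}^{(1)}}$ are hypothetical, chosen only for demonstration.

```python
# Determine alpha1, alpha2 in xi_1 = t^2 + alpha1*t + alpha2
# from the endpoint conditions xi_1(t0) = a0, xi_1(t1) = a1.
# All numbers below are hypothetical stand-ins, not taken from the text.
t0, t1 = 0.0, 1.0
a0, a1 = 0.3, 0.8          # stand-ins for a_0^(1), a_1^(1)

# Subtracting the condition at t0 from the one at t1 eliminates alpha2:
alpha1 = (a1 - a0) / (t1 - t0) - (t1 + t0)
alpha2 = a0 - t0**2 - alpha1 * t0

# Both endpoint conditions are now satisfied:
assert abs(t0**2 + alpha1 * t0 + alpha2 - a0) < 1e-12
assert abs(t1**2 + alpha1 * t1 + alpha2 - a1) < 1e-12
```

The same two-point interpolation, repeated for ${\displaystyle \beta _{1}}$, ${\displaystyle \beta _{2}}$ and for the higher ${\displaystyle \xi _{k}}$, ${\displaystyle \eta _{k}}$, produces one admissible substitution among infinitely many.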

Article 112.
Making the above substitutions in the integral

${\displaystyle I=\int _{t_{0}}^{t_{1}}F(x,y,x',y')~{\text{d}}t}$,

it is seen that

${\displaystyle \Delta I=\int _{t_{0}}^{t_{1}}\left[F\left(x+\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots ,y+\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots ,x'+\epsilon \xi _{1}'+{\frac {\epsilon ^{2}}{2!}}\xi _{2}'+\cdots ,y'+\epsilon \eta _{1}'+{\frac {\epsilon ^{2}}{2!}}\eta _{2}'+\cdots \right)-F(x,y,x',y')\right]~{\text{d}}t}$
${\displaystyle =\epsilon \delta I+{\frac {\epsilon ^{2}}{2!}}\delta ^{2}I+{\frac {\epsilon ^{3}}{3!}}\delta ^{3}I+\cdots }$.

By Taylor's Theorem we have

${\displaystyle F\left(x+\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots ,y+\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots ,x'+\epsilon \xi _{1}'+{\frac {\epsilon ^{2}}{2!}}\xi _{2}'+\cdots ,y'+\epsilon \eta _{1}'+{\frac {\epsilon ^{2}}{2!}}\eta _{2}'+\cdots \right)-F(x,y,x',y')}$
${\displaystyle =\left[\left(\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots \right){\frac {\partial }{\partial x}}+\left(\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots \right){\frac {\partial }{\partial y}}+\left(\epsilon \xi _{1}'+{\frac {\epsilon ^{2}}{2!}}\xi _{2}'+\cdots \right){\frac {\partial }{\partial x'}}+\left(\epsilon \eta _{1}'+{\frac {\epsilon ^{2}}{2!}}\eta _{2}'+\cdots \right){\frac {\partial }{\partial y'}}\right]F}$
${\displaystyle +{\frac {1}{2!}}\left[\left(\epsilon \xi _{1}+{\frac {\epsilon ^{2}}{2!}}\xi _{2}+\cdots \right){\frac {\partial }{\partial x}}+\left(\epsilon \eta _{1}+{\frac {\epsilon ^{2}}{2!}}\eta _{2}+\cdots \right){\frac {\partial }{\partial y}}+\left(\epsilon \xi _{1}'+{\frac {\epsilon ^{2}}{2!}}\xi _{2}'+\cdots \right){\frac {\partial }{\partial x'}}+\left(\epsilon \eta _{1}'+{\frac {\epsilon ^{2}}{2!}}\eta _{2}'+\cdots \right){\frac {\partial }{\partial y'}}\right]^{2}F}$
${\displaystyle +{\frac {1}{3!}}\left[\cdots \right]^{3}F+\cdots }$.

The coefficient of ${\displaystyle \epsilon }$ in this expression is the integrand of ${\displaystyle \delta I}$, which is zero; while the coefficient of ${\displaystyle {\frac {\epsilon ^{2}}{2!}}}$ involves terms containing the first partial derivatives of ${\displaystyle F}$, and also terms containing the second partial derivatives of ${\displaystyle F}$.

The first partial derivatives of ${\displaystyle F}$ that belong to this coefficient, when put under the integral sign, may be written in the form

${\displaystyle \int _{t_{0}}^{t_{1}}\left[{\frac {\partial F}{\partial x}}\xi _{2}+{\frac {\partial F}{\partial x'}}\xi _{2}'+{\frac {\partial F}{\partial y}}\eta _{2}+{\frac {\partial F}{\partial y'}}\eta _{2}'\right]~{\text{d}}t=\int _{t_{0}}^{t_{1}}G(y'\xi _{2}-x'\eta _{2})~{\text{d}}t+\left[\xi _{2}{\frac {\partial F}{\partial x'}}+\eta _{2}{\frac {\partial F}{\partial y'}}\right]_{t_{0}}^{t_{1}}}$

(see Art. 79), and this expression is also zero, if we suppose that the end-points remain fixed.

Article 113.
The coefficient of ${\displaystyle {\frac {\epsilon ^{2}}{2!}}}$ in the preceding development of ${\displaystyle F}$ by Taylor's Theorem is denoted by ${\displaystyle \delta ^{2}F}$.

We have then

${\displaystyle 1)\qquad \delta ^{2}F={\frac {\partial ^{2}F}{\partial x^{2}}}\xi _{1}^{2}+2{\frac {\partial ^{2}F}{\partial x\partial y}}\xi _{1}\eta _{1}+{\frac {\partial ^{2}F}{\partial y^{2}}}\eta _{1}^{2}+{\frac {\partial ^{2}F}{\partial x'^{2}}}\xi _{1}'^{2}+2{\frac {\partial ^{2}F}{\partial x'\partial y'}}\xi _{1}'\eta _{1}'+{\frac {\partial ^{2}F}{\partial y'^{2}}}\eta _{1}'^{2}+2\left({\frac {\partial ^{2}F}{\partial x\partial x'}}\xi _{1}\xi _{1}'+{\frac {\partial ^{2}F}{\partial y\partial y'}}\eta _{1}\eta _{1}'+{\frac {\partial ^{2}F}{\partial x\partial y'}}\xi _{1}\eta _{1}'+{\frac {\partial ^{2}F}{\partial y\partial x'}}\eta _{1}\xi _{1}'\right)}$.

The subscripts may now be omitted and the formula simplified by the introduction of the function ${\displaystyle F_{1}}$, which (Art. 73) was defined by the relations:

${\displaystyle 2)\qquad {\frac {\partial ^{2}F}{\partial x'^{2}}}=y'^{2}F_{1}}$, ${\displaystyle \qquad {\frac {\partial ^{2}F}{\partial x'\partial y'}}=-x'y'F_{1}}$, ${\displaystyle \qquad {\frac {\partial ^{2}F}{\partial y'^{2}}}=x'^{2}F_{1}}$;

and by introducing the new notation:

${\displaystyle 3)\qquad L={\frac {\partial ^{2}F}{\partial x\partial x'}}-y'y''F_{1}}$, ${\displaystyle \qquad M={\frac {\partial ^{2}F}{\partial x\partial y'}}+x'y''F_{1}={\frac {\partial ^{2}F}{\partial x'\partial y}}+y'x''F_{1}}$ (owing to the equation ${\displaystyle G=0}$), ${\displaystyle \qquad N={\frac {\partial ^{2}F}{\partial y\partial y'}}-x'x''F_{1}}$;

where ${\displaystyle x''}$, ${\displaystyle y''}$ are used for ${\displaystyle {\frac {{\text{d}}^{2}x}{{\text{d}}t^{2}}}}$,${\displaystyle {\frac {{\text{d}}^{2}y}{{\text{d}}t^{2}}}}$.

We have then

${\displaystyle \delta ^{2}F={\frac {\partial ^{2}F}{\partial x^{2}}}\xi ^{2}+2{\frac {\partial ^{2}F}{\partial x\partial y}}\xi \eta +{\frac {\partial ^{2}F}{\partial y^{2}}}\eta ^{2}+F_{1}(y'^{2}\xi '^{2}-2x'y'\xi '\eta '+x'^{2}\eta '^{2})+2F_{1}(y'y''\xi \xi '+x'x''\eta \eta '-x'y''\xi \eta '-y'x''\eta \xi ')+2(L\xi \xi '+M(\xi \eta '+\eta \xi ')+N\eta \eta ')}$.

To get an exact differential as a part of the right-hand member of this formula, we write

${\displaystyle 4)\qquad R=L\xi ^{2}+2M\xi \eta +N\eta ^{2}}$,

an expression which, differentiated with respect to ${\displaystyle t}$, becomes

${\displaystyle 2[L\xi \xi '+M(\xi \eta '+\eta \xi ')+N\eta \eta ']={\frac {{\text{d}}R}{{\text{d}}t}}-{\frac {{\text{d}}L}{{\text{d}}t}}\xi ^{2}-2{\frac {{\text{d}}M}{{\text{d}}t}}\xi \eta -{\frac {{\text{d}}N}{{\text{d}}t}}\eta ^{2}}$.
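This differentiation of ${\displaystyle R}$ is a direct application of the product rule; it can be spot-checked numerically, as in the sketch below, which uses arbitrary smooth sample functions for ${\displaystyle L}$, ${\displaystyle M}$, ${\displaystyle N}$, ${\displaystyle \xi }$, ${\displaystyle \eta }$ (purely hypothetical choices) and central differences for the derivatives.

```python
# Numerical spot-check of
#   2[L xi xi' + M(xi eta' + eta xi') + N eta eta']
#     = dR/dt - L' xi^2 - 2 M' xi eta - N' eta^2,
# where R = L xi^2 + 2 M xi eta + N eta^2.
# The sample functions are hypothetical, for illustration only.
import math

L_  = lambda t: 1.0 + t * t
M_  = lambda t: math.sin(t)
N_  = lambda t: math.exp(-t)
xi  = lambda t: math.cos(t)
eta = lambda t: t ** 3

R = lambda t: L_(t) * xi(t)**2 + 2 * M_(t) * xi(t) * eta(t) + N_(t) * eta(t)**2

def d(f, t, h=1e-5):
    # central-difference derivative
    return (f(t + h) - f(t - h)) / (2 * h)

t = 0.7
lhs = 2 * (L_(t) * xi(t) * d(xi, t)
           + M_(t) * (xi(t) * d(eta, t) + eta(t) * d(xi, t))
           + N_(t) * eta(t) * d(eta, t))
rhs = (d(R, t) - d(L_, t) * xi(t)**2
       - 2 * d(M_, t) * xi(t) * eta(t) - d(N_, t) * eta(t)**2)
assert abs(lhs - rhs) < 1e-6
```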

We further write

${\displaystyle 5)\qquad w=y'\xi -\eta x'}$,

where (see Art. 81) ${\displaystyle w}$ is, neglecting the factor ${\displaystyle {\frac {1}{\sqrt {x'^{2}+y'^{2}}}}}$, the amount of the sliding of a point of the curve in the direction of the normal.

Differentiating with respect to ${\displaystyle t}$, we have

${\displaystyle {\frac {{\text{d}}w}{{\text{d}}t}}=y''\xi -x''\eta +y'\xi '-x'\eta '}$,

from which it follows that

${\displaystyle \left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}=(y''\xi -x''\eta )^{2}+(y'\xi '-x'\eta ')^{2}+2(y'y''\xi \xi '+x'x''\eta \eta '-x'y''\xi \eta '-y'x''\eta \xi ')}$.

Then the expression for the second variation becomes

${\displaystyle \delta ^{2}F={\frac {\partial ^{2}F}{\partial x^{2}}}\xi ^{2}+2{\frac {\partial ^{2}F}{\partial x\partial y}}\xi \eta +{\frac {\partial ^{2}F}{\partial y^{2}}}\eta ^{2}+F_{1}\left[\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}-(y''\xi -x''\eta )^{2}\right]+{\frac {{\text{d}}R}{{\text{d}}t}}-\left({\frac {{\text{d}}L}{{\text{d}}t}}\xi ^{2}+2{\frac {{\text{d}}M}{{\text{d}}t}}\xi \eta +{\frac {{\text{d}}N}{{\text{d}}t}}\eta ^{2}\right)}$.

If further we write in this expression

${\displaystyle 6)\qquad L_{1}={\frac {\partial ^{2}F}{\partial x^{2}}}-F_{1}y''^{2}-{\frac {{\text{d}}L}{{\text{d}}t}}}$, ${\displaystyle \qquad M_{1}={\frac {\partial ^{2}F}{\partial x\partial y}}+F_{1}x''y''-{\frac {{\text{d}}M}{{\text{d}}t}}}$, ${\displaystyle \qquad N_{1}={\frac {\partial ^{2}F}{\partial y^{2}}}-F_{1}x''^{2}-{\frac {{\text{d}}N}{{\text{d}}t}}}$,

we have finally

${\displaystyle \delta ^{2}F=F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}+L_{1}\xi ^{2}+2M_{1}\xi \eta +N_{1}\eta ^{2}+{\frac {{\text{d}}R}{{\text{d}}t}}}$.

Article 114.
It follows from 3) that

${\displaystyle Lx'+My'=x'{\frac {\partial ^{2}F}{\partial x\partial x'}}+y'{\frac {\partial ^{2}F}{\partial x\partial y'}}}$.

Owing to the homogeneity of the function ${\displaystyle F}$ (Chap. IV), it is seen from Euler's Theorem that

${\displaystyle F=x'{\frac {\partial F}{\partial x'}}+y'{\frac {\partial F}{\partial y'}}}$,

and consequently,

${\displaystyle {\frac {\partial F}{\partial x}}=x'{\frac {\partial ^{2}F}{\partial x\partial x'}}+y'{\frac {\partial ^{2}F}{\partial x\partial y'}}}$;

and therefore

${\displaystyle {\frac {\partial F}{\partial x}}=Lx'+My'}$.

In a similar manner we have

${\displaystyle {\frac {\partial F}{\partial y}}=Mx'+Ny'}$.

Differentiating the expression for ${\displaystyle {\frac {\partial F}{\partial x}}}$ with regard to ${\displaystyle t}$, we have

${\displaystyle {\frac {\text{d}}{{\text{d}}t}}\left({\frac {\partial F}{\partial x}}\right)={\frac {\partial ^{2}F}{\partial x^{2}}}x'+{\frac {\partial ^{2}F}{\partial x\partial y}}y'+{\frac {\partial ^{2}F}{\partial x\partial x'}}x''+{\frac {\partial ^{2}F}{\partial x\partial y'}}y''={\frac {{\text{d}}L}{{\text{d}}t}}x'+{\frac {{\text{d}}M}{{\text{d}}t}}y'+Lx''+My''}$,

which, owing to 3) is

${\displaystyle x'\left({\frac {\partial ^{2}F}{\partial x^{2}}}-F_{1}y''^{2}-{\frac {{\text{d}}L}{{\text{d}}t}}\right)+y'\left({\frac {\partial ^{2}F}{\partial x\partial y}}+F_{1}y''x''-{\frac {{\text{d}}M}{{\text{d}}t}}\right)=0}$;

or from 6)

${\displaystyle x'L_{1}+y'M_{1}=0}$.

In an analogous manner it may be shown that

${\displaystyle x'M_{1}+y'N_{1}=0}$.

From these expressions we have at once

${\displaystyle {\frac {L_{1}}{y'^{2}}}=-{\frac {M_{1}}{x'y'}}={\frac {N_{1}}{x'^{2}}}=F_{2}}$,

where ${\displaystyle F_{2}}$ is the factor of proportionality.

It follows that

${\displaystyle 7)\qquad L_{1}=y'^{2}F_{2},\qquad M_{1}=-x'y'F_{2},\qquad N_{1}=x'^{2}F_{2}}$.

The quantity ${\displaystyle F_{2}}$ is defined through these three equations and plays an essential role in the treatment of the second variation.

Owing to the relation 7)

${\displaystyle L_{1}\xi ^{2}+2M_{1}\xi \eta +N_{1}\eta ^{2}}$ becomes ${\displaystyle F_{2}w^{2}}$,

and consequently,

${\displaystyle \delta ^{2}F=F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}+F_{2}w^{2}+{\frac {{\text{d}}R}{{\text{d}}t}}}$.

Article 115.
The second variation of the integral has therefore the form

${\displaystyle 8)\qquad \delta ^{2}I=\int _{t_{0}}^{t_{1}}\left(F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}+F_{2}w^{2}\right)~{\text{d}}t+\int _{t_{0}}^{t_{1}}{\frac {{\text{d}}R}{{\text{d}}t}}~{\text{d}}t}$.

We suppose that the end-points are fixed so that at these points ${\displaystyle \xi =0=\eta }$, and we further assume that the curve subjected to variation consists of a single regular trace, along which then

${\displaystyle R=L\xi ^{2}+2M\xi \eta +N\eta ^{2}}$

is everywhere continuous, so that

${\displaystyle {\Big [}R{\Big ]}_{t_{0}}^{t_{1}}=0}$.

Consequently the above integral may be written

${\displaystyle 8^{*})\qquad \delta ^{2}I=\int _{t_{0}}^{t_{1}}\left(F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}+F_{2}w^{2}\right)~{\text{d}}t}$.

If the integral ${\displaystyle I=\int _{t_{0}}^{t_{1}}F(x,y,x',y')~{\text{d}}t}$ is to be a maximum or a minimum for the curve ${\displaystyle G=0}$, it is necessary, when the curve is subjected to an indefinitely small variation, that the resulting variation ${\displaystyle \Delta I}$ have always the same sign, in whatever manner ${\displaystyle \xi }$, ${\displaystyle \eta }$ are chosen; and consequently the second variation ${\displaystyle \delta ^{2}I}$ must have continuously the same sign as ${\displaystyle \Delta I}$.

We have repeatedly seen that

${\displaystyle \Delta I={\frac {\epsilon ^{2}}{2!}}\delta ^{2}I+{\frac {\epsilon ^{3}}{3!}}\delta ^{3}I+\cdots }$,

and for any other value of ${\displaystyle \epsilon }$, say ${\displaystyle \epsilon _{1}}$,

${\displaystyle \Delta _{1}I={\frac {\epsilon _{1}^{2}}{2!}}\delta ^{2}I+{\frac {\epsilon _{1}^{3}}{3!}}\delta ^{3}I+\cdots }$.

If, further, ${\displaystyle \delta ^{2}I}$ is negative while ${\displaystyle \Delta I}$ is positive, then we may take ${\displaystyle \epsilon _{1}}$ so small that the sign of ${\displaystyle \Delta _{1}I}$ depends only upon the first term on the right in the above expansion, and consequently is negative. Therefore the integral ${\displaystyle I}$ cannot be a maximum or a minimum, since the variation of it is first positive and then negative.

Hence, neglecting for a moment the case when ${\displaystyle \delta ^{2}I=0}$, we have the following theorem:

If the integral ${\displaystyle I}$ is to be a maximum or a minimum, its second variation must be continuously negative or continuously positive.

When ${\displaystyle \delta ^{2}I}$ vanishes for all possible values of ${\displaystyle \xi }$, ${\displaystyle \eta }$, it is necessary also that ${\displaystyle \delta ^{3}I}$ vanish, since the integral ${\displaystyle I}$ is to be a maximum or a minimum, and, as in the Theory of Maxima and Minima, we would then have to investigate the fourth variation. In this case the conditions that have to be satisfied are so numerous that a mathematical treatment is very complicated and difficult.

Hence, it is seen that after the condition ${\displaystyle \delta I=0}$ is satisfied, it follows that

for the possibility of a maximum, ${\displaystyle \delta ^{2}I}$ must be negative, and
for the possibility of a minimum, ${\displaystyle \delta ^{2}I}$ must be positive.

These conditions are necessary, but not sufficient.

Article 116.
In Art. 75 we assumed that ${\displaystyle \xi }$,${\displaystyle \eta }$,${\displaystyle \xi '}$,${\displaystyle \eta '}$ were continuous functions of ${\displaystyle t}$ between the limits ${\displaystyle t_{0}\ldots t_{1}}$. Owing to the assumed existence of ${\displaystyle \xi '}$,${\displaystyle \eta '}$, we must presuppose the existence of the second derivatives of ${\displaystyle x}$ and ${\displaystyle y}$ with respect to ${\displaystyle t}$ (see Art. 23). From this it also follows that the radius of curvature must vary in a continuous manner. These assumptions have been tacitly made in the derivation of the equation 8) in the preceding article. We shall now free ourselves from the restriction that ${\displaystyle \xi '}$ and ${\displaystyle \eta '}$ are continuous functions of ${\displaystyle t}$, retaining, however, the assumptions regarding the continuity of the quantities ${\displaystyle x,y,\xi ,\eta ,x',y',x'',y''}$.

The theorem that ${\displaystyle {\frac {\partial F}{\partial x'}}}$ and ${\displaystyle {\frac {\partial F}{\partial y'}}}$ vary in a continuous manner along the whole curve (Art. 97) in most cases gives a handy means of determining the admissibility of assumptions regarding the continuity of ${\displaystyle x'}$ and ${\displaystyle y'}$. If, at certain points of the curve ${\displaystyle G=0}$, ${\displaystyle x'}$ and ${\displaystyle y'}$ are not continuous, it is always possible to divide the curve into such portions that ${\displaystyle x'}$ and ${\displaystyle y'}$ are continuous throughout each portion. Yet we cannot even then say that ${\displaystyle x''}$ and ${\displaystyle y''}$ are continuous within such a portion, as has been assumed to be true in the above development. If, however, ${\displaystyle x''}$ and ${\displaystyle y''}$ are discontinuous within such a portion of curve, we have only to divide the curve into other portions so that within these new portions ${\displaystyle x''}$ and ${\displaystyle y''}$ no longer suffer any sudden jumps. In each of these portions of curve the same conclusions may be drawn as before in the case of the whole curve, and consequently the assumption regarding the continuous change of ${\displaystyle x''}$, ${\displaystyle y''}$ throughout the whole curve is not necessary. But if we had limited ourselves to the consideration of a part of the curve in which ${\displaystyle x,y,x',y',x'',y''}$ vary in a continuous manner, the continuity of ${\displaystyle \xi '}$, ${\displaystyle \eta '}$ in the integration of the integral

${\displaystyle \int {\frac {{\text{d}}R}{{\text{d}}t}}~{\text{d}}t}$

would have been assumed. These assumptions need not necessarily be fulfilled, since the variation of the curve is an arbitrary one, and it is quite possible that variations may be introduced in which ${\displaystyle \xi '}$, ${\displaystyle \eta '}$ become discontinuous, as often as we please. We may, however, drop these assumptions without changing the final results, if only the first-named conditions are satisfied. Since the quantities ${\displaystyle L}$, ${\displaystyle M}$, ${\displaystyle N}$ depend only upon ${\displaystyle x,y,x',y',x'',y''}$, and since these quantities are continuous, it follows that the introduction of the integral ${\displaystyle \int {\frac {{\text{d}}R}{{\text{d}}t}}~{\text{d}}t}$ in the form given above is always admissible. For if ${\displaystyle \xi '}$, ${\displaystyle \eta '}$ were not continuous for the whole trace of the curve which has been subjected to variation, we could suppose that this curve has been divided into parts within which the above derivatives vary in a continuous manner, and the integral would then become a sum of integrals of the form

${\displaystyle \int _{t_{\beta }}^{t_{\beta +1}}{\frac {{\text{d}}R}{{\text{d}}t}}~{\text{d}}t=\left[L\xi ^{2}+2M\xi \eta +N\eta ^{2}\right]_{t_{\beta }}^{t_{\beta +1}}}$,

where ${\displaystyle t_{\beta },t_{\beta +1},\ldots }$ are the values of ${\displaystyle t}$ corresponding to the points of division. But since ${\displaystyle \xi }$, ${\displaystyle \eta }$ vary in a continuous manner, we have through the summation of these quantities exactly the same expression

${\displaystyle \left[L\xi ^{2}+2M\xi \eta +N\eta ^{2}\right]_{t_{0}}^{t_{1}}}$

as before. The quantities ${\displaystyle \xi '}$, ${\displaystyle \eta '}$ are also found under the sign of integration in the right-hand side of 8); but owing to the conception of a definite integral, we may still write it in this form even when these quantities vary in a discontinuous manner; however, in performing the integration, we must divide the integral, at the positions where the discontinuities enter, into partial integrals. Therefore, we see that the possible discontinuity of ${\displaystyle \xi '}$, ${\displaystyle \eta '}$ remains without influence upon the result, if only ${\displaystyle x,y,x',y',x'',y'',\xi ,\eta }$ are continuous. Consequently any assumptions regarding the continuity of ${\displaystyle \xi '}$, ${\displaystyle \eta '}$ are superfluous; however, in an arbitrarily small portion of the curve which is subjected to variation, the quantities ${\displaystyle \xi '}$ and ${\displaystyle \eta '}$ must not become discontinuous an infinite number of times, since such variations of the curve have been once for all excluded.

Article 117.
Following the older mathematicians, Legendre, Jacobi, etc., we may give the second variation a form in which all terms appearing under the sign of integration will have the same sign (plus or minus).

To accomplish this, we add the exact differential ${\displaystyle {\frac {\text{d}}{{\text{d}}t}}(w^{2}v)}$ under the integral sign in 8) and subtract ${\displaystyle vw^{2}}$ from ${\displaystyle R}$ in the term outside the integral, the integral thus becoming

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}\left(F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}\right)^{2}+2vw{\frac {{\text{d}}w}{{\text{d}}t}}+\left(F_{2}+{\frac {{\text{d}}v}{{\text{d}}t}}\right)w^{2}\right)~{\text{d}}t+{\Big [}R-vw^{2}{\Big ]}_{t_{0}}^{t_{1}}}$.

The expression under the sign of integration is a homogeneous quadratic form in ${\displaystyle w}$ and ${\displaystyle {\frac {{\text{d}}w}{{\text{d}}t}}}$. We choose the quantity ${\displaystyle v}$ so that this expression becomes a perfect square; that is,

${\displaystyle 9)\qquad v^{2}-F_{1}\left(F_{2}+{\frac {{\text{d}}v}{{\text{d}}t}}\right)=0}$,

and consequently,

${\displaystyle 10)\qquad \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}+w{\frac {v}{F_{1}}}\right)^{2}~{\text{d}}t+{\Big [}R-vw^{2}{\Big ]}_{t_{0}}^{t_{1}}}$.

We shall see that it is possible to determine a function ${\displaystyle v}$ which is finite, one-valued, and continuous within the interval ${\displaystyle t_{0}\ldots t_{1}}$, and which satisfies the equation 9). The integral 10) accordingly becomes, if the end-points remain fixed,

${\displaystyle 10^{a})\qquad \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}+w{\frac {v}{F_{1}}}\right)^{2}~{\text{d}}t}$.

Hence the second variation has the same sign as ${\displaystyle F_{1}}$; it is clear that for the existence of a maximum ${\displaystyle F_{1}}$ must be negative, and for a minimum this function must be positive, within the interval ${\displaystyle t_{0}\ldots t_{1}}$; and in case there is a maximum or a minimum, ${\displaystyle F_{1}}$ cannot change sign within this interval.

This condition is due to Jacobi. Legendre had previously concluded that we have a maximum when a certain expression corresponding to ${\displaystyle F_{1}}$ was negative, and a minimum when it was positive. It is questionable whether the differential equation for ${\displaystyle v}$ is always integrable. Following Jacobi we shall show that such is the case.
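The completing-the-square step behind 9) and 10) may be checked at a single point: once ${\displaystyle v}$ satisfies 9), the quadratic form under the integral sign collapses to a perfect square. The sketch below uses arbitrary sample values (hypothetical, for illustration only).

```python
# If v satisfies the Riccati relation 9), v^2 = F1*(F2 + dv/dt), then
#   F1*(dw/dt)^2 + 2*v*w*(dw/dt) + (F2 + dv/dt)*w^2 = F1*(dw/dt + v*w/F1)^2.
# All point values below are arbitrary samples, not from any specific problem.
F1, v, dv = 2.0, 0.7, -0.3          # hypothetical values of F1, v, dv/dt
F2 = v * v / F1 - dv                # forces the Riccati relation 9)
w, dw = 1.3, 0.4                    # hypothetical values of w, dw/dt

quadratic = F1 * dw**2 + 2 * v * w * dw + (F2 + dv) * w**2
square    = F1 * (dw + w * v / F1) ** 2
assert abs(quadratic - square) < 1e-12
```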

Article 118.
Before we go farther, we have yet to prove that the transformation which we have introduced is allowable. In spite of the simplicity of the equation 9), we cannot draw conclusions regarding the continuity of the function ${\displaystyle v}$, which is necessary for the above transformation. It is therefore essential to show that the equation 9) may be reduced to a system of two linear differential equations, which in turn may be replaced by a linear differential equation of the second order, since for this equation we have definite criteria for determining whether a function which satisfies it remains finite and continuous or not.

Write

${\displaystyle v={\frac {u_{1}}{u}}}$,

where ${\displaystyle u_{1}}$ and ${\displaystyle u}$ are continuous functions of ${\displaystyle t}$, and ${\displaystyle u\neq 0}$ within the interval ${\displaystyle t_{0}\ldots t_{1}}$.

Equation 9) becomes then

${\displaystyle {\frac {u_{1}^{2}}{u^{2}}}-F_{1}\left(F_{2}+{\frac {u{\frac {{\text{d}}u_{1}}{{\text{d}}t}}-u_{1}{\frac {{\text{d}}u}{{\text{d}}t}}}{u^{2}}}\right)=0}$,

or

${\displaystyle F_{1}u\left({\frac {{\text{d}}u_{1}}{{\text{d}}t}}+F_{2}u\right)-u_{1}\left(F_{1}{\frac {{\text{d}}u}{{\text{d}}t}}+u_{1}\right)=0}$.

Since one of the functions ${\displaystyle u}$, ${\displaystyle u_{1}}$ may be arbitrarily chosen, we take ${\displaystyle u}$ so that

${\displaystyle 11)\qquad F_{1}{\frac {{\text{d}}u}{{\text{d}}t}}+u_{1}=0}$;

then, since ${\displaystyle u\neq 0}$, we have

${\displaystyle 12)\qquad {\frac {{\text{d}}u_{1}}{{\text{d}}t}}+F_{2}u=0}$.

From 11) and 12) it follows that

${\displaystyle 12^{a})\qquad {\frac {\text{d}}{{\text{d}}t}}\left(F_{1}{\frac {{\text{d}}u}{{\text{d}}t}}\right)-F_{2}u=0}$,

or

${\displaystyle 13)\qquad F_{1}{\frac {{\text{d}}^{2}u}{{\text{d}}t^{2}}}+{\frac {{\text{d}}F_{1}}{{\text{d}}t}}{\frac {{\text{d}}u}{{\text{d}}t}}-F_{2}u=0}$,

where ${\displaystyle F_{1}}$ and ${\displaystyle F_{2}}$ are to be considered as given functions of ${\displaystyle t}$. We shall denote this differential equation by ${\displaystyle J=0}$. After ${\displaystyle u}$ has been determined from this equation, ${\displaystyle u_{1}}$ may be determined from 11), and from ${\displaystyle {\frac {u_{1}}{u}}=v}$ we have ${\displaystyle v}$ as a definite function of ${\displaystyle t}$.
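The equivalence of the Riccati equation 9) with the linear equation ${\displaystyle J=0}$ can be illustrated numerically: integrating 13) and forming ${\displaystyle v=-F_{1}u'/u}$ (equations 11) and ${\displaystyle v=u_{1}/u}$) recovers a solution of 9). In the sketch below the coefficient functions ${\displaystyle F_{1}}$, ${\displaystyle F_{2}}$ are hypothetical stand-ins, not derived from any particular problem.

```python
# Integrate J = 0, i.e. F1*u'' + F1'*u' - F2*u = 0, by classical RK4,
# then check that v = u1/u = -F1*u'/u satisfies 9): v^2 - F1*(F2 + dv/dt) = 0.
# The coefficients are hypothetical samples; F1 stays positive, as required.
import math

F1  = lambda t: 2.0 + math.sin(t)
dF1 = lambda t: math.cos(t)
F2  = lambda t: math.cos(2 * t)

def rhs(t, y):
    u, up = y                       # u and du/dt
    return (up, (F2(t) * u - dF1(t) * up) / F1(t))

h, n = 1e-3, 1000
t, y = 0.0, (1.0, 0.0)              # u(0) = 1, u'(0) = 0, so u != 0 near t = 0
vs = []                             # samples of v(t) = -F1*u'/u
for _ in range(n):
    k1 = rhs(t, y)
    k2 = rhs(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = rhs(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = rhs(t + h,   (y[0] + h*k3[0],   y[1] + h*k3[1]))
    y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    t += h
    vs.append((t, -F1(t) * y[1] / y[0]))

# Check the Riccati relation at an interior sample, dv/dt by central difference
i = n // 2
(tm, vm), (tp, vp) = vs[i - 1], vs[i + 1]
tv, vv = vs[i]
dv = (vp - vm) / (tp - tm)
residual = vv**2 - F1(tv) * (F2(tv) + dv)
assert abs(residual) < 1e-4
```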

Article 119.
The expression which has been derived for ${\displaystyle v}$ seems to contain two arbitrary constants, while the equation 9) has only one. The two constants in the first case, however, may be replaced by one, since the general solution of 13) is

${\displaystyle u=c_{1}\phi _{1}(t)+c_{2}\phi _{2}(t)}$,

and hence from 11)

${\displaystyle v={\frac {u_{1}}{u}}=-F_{1}{\frac {c_{1}\phi _{1}'(t)+c_{2}\phi _{2}'(t)}{c_{1}\phi _{1}(t)+c_{2}\phi _{2}(t)}}}$,

an expression which depends only upon the ratio of the two constants.

It follows from the above transformation that

${\displaystyle 14)\qquad \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}-{\frac {w}{u}}{\frac {{\text{d}}u}{{\text{d}}t}}\right)^{2}~{\text{d}}t}$;

but this transformation has a meaning only when it is possible to find a function ${\displaystyle u}$ within the interval ${\displaystyle t_{0}\ldots t_{1}}$ which is different from zero, and which satisfies the differential equation ${\displaystyle J=0}$.

Article 120.
If we have a linear differential equation of the second order

${\displaystyle {\frac {{\text{d}}^{2}y}{{\text{d}}x^{2}}}+P(x){\frac {{\text{d}}y}{{\text{d}}x}}+Q(x)y=0}$,

and if ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ are a fundamental system of integrals of this equation, then we have the well known relation due to Abel (see Forsyth's Differential Equations, p. 99)

${\displaystyle y_{1}{\frac {{\text{d}}y_{2}}{{\text{d}}x}}-y_{2}{\frac {{\text{d}}y_{1}}{{\text{d}}x}}=Ce^{-\int P(x){\text{d}}x}}$,

or

${\displaystyle \Delta ={\begin{vmatrix}y_{1}&y_{2}\\{\frac {{\text{d}}y_{1}}{{\text{d}}x}}&{\frac {{\text{d}}y_{2}}{{\text{d}}x}}\end{vmatrix}}=Ce^{-\int P(x){\text{d}}x}}$.

If ${\displaystyle \Delta =0}$, then we would have ${\displaystyle y_{1}=cy_{2}}$, and the system is no longer a fundamental system of integrals. This determinant can become zero only at such positions for which ${\displaystyle P(x)}$ becomes infinitely large; or a change of sign for this determinant can enter only at such positions where ${\displaystyle P(x)}$ becomes infinite.

In the differential equation ${\displaystyle J=0}$ we have ${\displaystyle P={\frac {\text{d}}{{\text{d}}t}}\ln(F_{1})}$, and if ${\displaystyle u_{1}}$, ${\displaystyle u_{2}}$ form a fundamental system of integrals of this differential equation, then

${\displaystyle \Delta =u_{1}{\frac {{\text{d}}u_{2}}{{\text{d}}t}}-u_{2}{\frac {{\text{d}}u_{1}}{{\text{d}}t}}={\frac {C}{F_{1}}}}$.

It follows that ${\displaystyle F_{1}}$ cannot become infinite or zero within the interval under consideration or upon the boundaries of this interval. Hence, it is again seen that ${\displaystyle F_{1}}$ cannot change sign within the interval ${\displaystyle t_{0}\ldots t_{1}}$.
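Abel's relation ${\displaystyle \Delta =C/F_{1}}$ may likewise be verified numerically by integrating ${\displaystyle J=0}$ for a fundamental system; the coefficient functions below are again hypothetical stand-ins, chosen only for illustration.

```python
# Check Abel's relation for J = 0:  u1*u2' - u2*u1' = C/F1, i.e.
# F1 * Delta stays constant along the interval.  Coefficients are samples.
import math

F1  = lambda t: 2.0 + math.sin(t)
dF1 = lambda t: math.cos(t)
F2  = lambda t: math.cos(2 * t)

def rhs(t, y):
    # y packs two solutions: (u1, u1', u2, u2')
    a, ap, b, bp = y
    return (ap, (F2(t) * a - dF1(t) * ap) / F1(t),
            bp, (F2(t) * b - dF1(t) * bp) / F1(t))

def rk4_step(t, y, h):
    k1 = rhs(t, y)
    k2 = rhs(t + h/2, tuple(yi + h/2 * ki for yi, ki in zip(y, k1)))
    k3 = rhs(t + h/2, tuple(yi + h/2 * ki for yi, ki in zip(y, k2)))
    k4 = rhs(t + h, tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h/6 * (a + 2*b + 2*c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

# Fundamental system: u1(0) = 1, u1'(0) = 0 and u2(0) = 0, u2'(0) = 1
t, y, h = 0.0, (1.0, 0.0, 0.0, 1.0), 1e-3
wronskian_times_F1 = []
for _ in range(1000):
    y = rk4_step(t, y, h)
    t += h
    delta = y[0] * y[3] - y[2] * y[1]
    wronskian_times_F1.append(F1(t) * delta)

# F1 * Delta should keep its initial value C = F1(0) * Delta(0)
C = F1(0.0) * 1.0
assert all(abs(val - C) < 1e-6 for val in wronskian_times_F1)
```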

If ${\displaystyle F_{1}}$ and ${\displaystyle F_{2}}$ are continuous within the interval ${\displaystyle t_{0}\ldots t_{1}}$, we have, through differentiating the equation ${\displaystyle J=0}$, all higher derivatives of ${\displaystyle u}$ expressed in terms of ${\displaystyle u}$ and ${\displaystyle {\frac {{\text{d}}u}{{\text{d}}t}}}$. Hence, if values of ${\displaystyle u}$ and ${\displaystyle {\frac {{\text{d}}u}{{\text{d}}t}}}$ are given for a definite value of ${\displaystyle t}$, say ${\displaystyle t'}$, we have a power-series ${\displaystyle P(t-t')}$ for ${\displaystyle u}$ (see Art. 79), which satisfies the equation ${\displaystyle J=0}$.

Article 121.
Suppose that ${\displaystyle F_{1}}$ has a definite, positive or negative value for a definite value ${\displaystyle t'}$ of ${\displaystyle t}$ situated within the interval ${\displaystyle t_{0}\ldots t_{1}}$; then on account of its continuity it will also be positive or negative for a certain neighborhood of ${\displaystyle t'}$, say ${\displaystyle t'-\tau _{1}\ldots t'+\tau _{2}}$. We may vary the curve in such a manner that within the interval ${\displaystyle t'-\tau _{1}\ldots t'+\tau _{2}}$ it takes any form, while outside this region it remains unchanged.

Consequently the total variation, and therefore also the second variation of ${\displaystyle I}$, depends only upon the variation within the region just mentioned. In accordance with the remarks made above, since we may find a function ${\displaystyle u}$ of the variable ${\displaystyle t}$ which is continuous within the given region, which satisfies the differential equation ${\displaystyle J=0}$, and which is of such a nature that ${\displaystyle u}$ and ${\displaystyle {\frac {{\text{d}}u}{{\text{d}}t}}}$ have given values for ${\displaystyle t=t'}$, it follows that the transformation which was introduced is admissible, and we have

${\displaystyle \delta ^{2}I=\int _{t_{0}}^{t_{1}}F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}-{\frac {{\text{d}}u}{{\text{d}}t}}{\frac {w}{u}}\right)^{2}~{\text{d}}t}$.

This quantity is evidently positive when ${\displaystyle F_{1}}$ is positive and negative when ${\displaystyle F_{1}}$ is negative, so long as

${\displaystyle {\frac {{\text{d}}w}{{\text{d}}t}}-{\frac {{\text{d}}u}{{\text{d}}t}}{\frac {w}{u}}\neq 0\qquad }$ (Art. 132).

We have then for the total variation

${\displaystyle \Delta I={\frac {\epsilon ^{2}}{2!}}\int _{t_{0}}^{t_{1}}F_{1}\left({\frac {{\text{d}}w}{{\text{d}}t}}-{\frac {{\text{d}}u}{{\text{d}}t}}{\frac {w}{u}}\right)^{2}~{\text{d}}t+{\frac {\epsilon ^{3}}{3!}}\int _{t_{0}}^{t_{1}}(\xi ,\eta ,\xi ',\eta ')~{\text{d}}t}$,

where ${\displaystyle (\xi ,\eta ,\xi ',\eta ')}$ denotes an expression of the third dimension in the quantities included within the brackets.

For small values of ${\displaystyle \epsilon }$ it is seen that ${\displaystyle \Delta I}$ has the same sign as the first term on the right-hand side of the above equation. We have, therefore, the following theorem :

The total variation ${\displaystyle \Delta I}$ of the integral ${\displaystyle I}$ is positive when ${\displaystyle F_{1}}$ is positive, and negative when ${\displaystyle F_{1}}$ is negative, throughout the whole interval ${\displaystyle t_{0}\ldots t_{1}}$.

If ${\displaystyle F_{1}}$ could change sign for any position within the interval ${\displaystyle t_{0}\ldots t_{1}}$, then there would be variations of the curve for which ${\displaystyle \Delta I}$ is positive and others for which ${\displaystyle \Delta I}$ is negative. Hence, for the existence of a maximum or a minimum of ${\displaystyle I}$ we have the following necessary condition :

In order that there exist a maximum or a minimum of the integral ${\displaystyle I}$ taken over the curve ${\displaystyle G=0}$ within the interval ${\displaystyle t_{0}\ldots t_{1}}$, it is necessary that ${\displaystyle F_{1}}$ have always the same sign within this interval; in the case of a maximum ${\displaystyle F_{1}}$ must be continuously negative, and in the case of a minimum this function must be continuously positive.

In this connection it is interesting to note a paper by Prof. W. F. Osgood in the Transactions of the American Mathematical Society, Vol. II, p. 273, entitled:

"On a fundamental property of a minimum in the Calculus of Variations and the proof of a theorem of Weierstrass's."

This paper, which is of great importance, may be much simplified.