# Introduction to Mathematical Physics/Differentials and derivatives

## Definitions

Definition:

Let ${\displaystyle E}$ and ${\displaystyle F}$ be two normed vector spaces on ${\displaystyle R}$ or ${\displaystyle C}$, and ${\displaystyle f}$ a map defined on an open subset ${\displaystyle U}$ of ${\displaystyle E}$ into ${\displaystyle F}$. ${\displaystyle f}$ is said to be differentiable at a point ${\displaystyle x_{0}}$ of ${\displaystyle U}$ if there exists a continuous linear map ${\displaystyle g}$ from ${\displaystyle E}$ into ${\displaystyle F}$ such that ${\displaystyle \|f(x)-f(x_{0})-g(x-x_{0})\|}$ is negligible with respect to ${\displaystyle \|x-x_{0}\|}$.

The notion of derivative is less general and is usually defined for functions from a subset of ${\displaystyle R}$ to a vector space, as follows:

Definition:

Let ${\displaystyle I}$ be an interval of ${\displaystyle R}$ not reduced to a point and ${\displaystyle E}$ a normed vector space on ${\displaystyle R}$. A map ${\displaystyle f}$ from ${\displaystyle I}$ to ${\displaystyle E}$ admits a derivative at the point ${\displaystyle x}$ of ${\displaystyle I}$ if the ratio:

${\displaystyle {\frac {f(x+h)-f(x)}{h}}}$

admits a limit as ${\displaystyle h}$ tends to zero. This limit is then called the derivative of ${\displaystyle f}$ at point ${\displaystyle x}$ and is denoted ${\displaystyle f^{\prime }(x)}$.

We will however see in this appendix some generalizations of the notion of derivative.

## Derivatives in the distribution sense

### Definition

The derivative in the usual sense is not defined for non-continuous functions. Distribution theory allows one, in particular, to generalize the classical notion of derivative to non-continuous functions.

Definition:

The derivative of a distribution ${\displaystyle T}$ is the distribution ${\displaystyle T'}$ defined by:

${\displaystyle \forall \phi \in {\mathcal {D}},\quad \langle T',\phi \rangle =-\langle T,\phi '\rangle }$

Definition:

Let ${\displaystyle f}$ be a summable function. Assume that ${\displaystyle f}$ is discontinuous at ${\displaystyle N}$ points ${\displaystyle a_{i}}$, and let us denote by ${\displaystyle \sigma _{a_{i}}=f(a_{i}^{+})-f(a_{i}^{-})}$ the jump of ${\displaystyle f}$ at ${\displaystyle a_{i}}$. Assume that ${\displaystyle f'}$ is locally summable and defined almost everywhere. It defines a distribution ${\displaystyle T_{f'}}$. The derivative ${\displaystyle (T_{f})'}$ of the distribution associated with ${\displaystyle f}$ is:

${\displaystyle (T_{f})'=T_{f'}+\sum \sigma _{a_{i}}\delta _{a_{i}}}$

One says that the derivative in the distribution sense is equal to the derivative "without precaution" ${\displaystyle \{f'\}}$ augmented by Dirac distributions weighted by the jumps of ${\displaystyle f}$. It can be noted:

${\displaystyle f'=\{f'\}+\sum \sigma _{a_{i}}\delta _{a_{i}}}$
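This jump formula can be checked numerically. Below is a minimal sketch (the function ${\displaystyle f}$, with a unit jump at 0, and the Gaussian test function are illustrative choices): the distributional pairing ${\displaystyle -\langle T_{f},\phi '\rangle }$ should match the classical part plus the jump term ${\displaystyle \sigma \phi (0)}$.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)   # grid containing the jump point x = 0
dx = x[1] - x[0]

phi = np.exp(-x**2)                    # smooth, rapidly decaying test function
dphi = -2.0*x*np.exp(-x**2)            # its exact derivative
f = np.where(x < 0, x, x + 1.0)        # {f'} = 1 a.e., jump sigma = 1 at x = 0

# <(T_f)', phi> = -<T_f, phi'>   (definition of the distributional derivative)
lhs = -np.sum(f*dphi)*dx

# classical part (integral of {f'} phi dx) plus the jump term sigma*phi(0)
rhs = np.sum(phi)*dx + 1.0*phi[len(x)//2]
```

Both sides should agree up to the quadrature error (analytically they both equal ${\displaystyle {\sqrt {\pi }}+1}$ here).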

### Case of distributions of several variables

secdisplu

Using derivatives without precautions, the action of differential operators in the distribution sense can be written, in the case where the functions on which they act are discontinuous across a surface ${\displaystyle S}$:

${\displaystyle {\frac {\partial f}{\partial x_{i}}}=\{{\frac {\partial f}{\partial x_{i}}}\}+n_{i}\sigma _{f}\delta _{S}}$

${\displaystyle {\mbox{ grad }}f=\{{\mbox{ grad }}f\}+n\sigma _{f}\delta _{S}}$

${\displaystyle {\mbox{ div }}a=\{{\mbox{ div }}a\}+n\cdot \sigma _{a}\delta _{S}}$

${\displaystyle {\mbox{ rot }}a=\{{\mbox{ rot }}a\}+n\wedge \sigma _{a}\delta _{S}}$

where ${\displaystyle f}$ is a scalar function, ${\displaystyle a}$ a vector function, ${\displaystyle \sigma }$ represents the jump of ${\displaystyle a}$ or ${\displaystyle f}$ across the surface ${\displaystyle S}$, and ${\displaystyle \delta _{S}}$ is the surface Dirac distribution. These formulas allow one to recover the Green function introduced for tensors. The geometrical implications of the differential operators are considered in the next appendix chapter.

Example: Electromagnetism. The fundamental laws of electromagnetism are the Maxwell equations:

${\displaystyle {\mbox{ rot }}E=-{\frac {\partial B}{\partial t}}}$

${\displaystyle {\mbox{ rot }}H=j+{\frac {\partial D}{\partial t}}}$

${\displaystyle {\mbox{ div }}D=\rho }$

${\displaystyle {\mbox{ div }}B=0}$

These equations also hold in the distribution sense. In books on electromagnetism, a chapter is classically devoted to the study of boundary conditions and passage conditions. The use of distributions allows one to treat this case as a particular case of the general equations. Consider for instance a charge distribution defined by:

${\displaystyle \rho =\rho _{v}+\rho _{s}}$

where ${\displaystyle \rho _{v}}$ is a volume charge density and ${\displaystyle \rho _{s}}$ a surface charge density, and a current distribution defined by:

${\displaystyle j=j_{v}+j_{s}}$

where ${\displaystyle j_{v}}$ is a volume current density and ${\displaystyle j_{s}}$ a surface current density. Using the formulas of section secdisplu, one obtains the following passage relations:

${\displaystyle {\begin{matrix}n_{12}\wedge (E_{2}-E_{1})&=&0\\n_{12}\wedge (H_{2}-H_{1})&=&j_{s}\\n_{12}\cdot (D_{2}-D_{1})&=&\rho _{s}\\n_{12}\cdot (B_{2}-B_{1})&=&0\end{matrix}}}$

where the coefficients of the surface Dirac distribution ${\displaystyle \delta _{s}}$ have been identified ([#References|references]).

Example: Electrical circuits

As the Maxwell equations hold in the distribution sense (see previous example), the equations of electrical circuits also hold in the distribution sense. Distribution theory allows one to justify some statements that are sometimes left unjustified in electricity courses. Consider the equation:

${\displaystyle U(t)=L{\frac {di}{dt}}+Ri}$

This equation implies that even if ${\displaystyle U}$ is not continuous, ${\displaystyle i}$ is. Indeed, if ${\displaystyle i}$ were not continuous, the derivative ${\displaystyle {\frac {di}{dt}}}$ would create a Dirac distribution in the right-hand side. Consider the equation:

${\displaystyle i={\frac {dq}{dt}}}$

This equation implies that ${\displaystyle q(t)}$ is continuous even if ${\displaystyle i}$ is discontinuous.
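This continuity can be illustrated numerically: integrating ${\displaystyle U=L{\frac {di}{dt}}+Ri}$ through a step of ${\displaystyle U}$, the current stays continuous (its increments are of order ${\displaystyle dt}$, not of order 1). A minimal sketch with purely illustrative component values:

```python
import numpy as np

L_coil, R = 1.0, 10.0                   # illustrative inductance and resistance
dt = 1e-5
t = np.arange(0.0, 0.5, dt)
U = np.where(t < 0.25, 0.0, 1.0)        # discontinuous voltage step at t = 0.25

# explicit Euler integration of L di/dt + R i = U
i = np.zeros_like(t)
for n in range(len(t) - 1):
    i[n + 1] = i[n] + dt*(U[n] - R*i[n])/L_coil

# the current remains continuous: its largest increment is O(dt), not O(1)
max_jump = np.max(np.abs(np.diff(i)))
```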

Example: Fluid mechanics. Conservation laws hold in the distribution sense. Using distribution derivatives, the so-called "discontinuity" relations can be obtained immediately ([#References|references]).

## Differentiation of stochastic processes

secstoch

When one speaks of stochastic processes ([#References|references]), one adds the notion of time. Taking again the example of the dice: if we repeat the experiment ${\displaystyle N}$ times, then the number of possible results is ${\displaystyle \Omega '=6^{N}}$ (the size of the set ${\displaystyle \Omega }$ grows exponentially with ${\displaystyle N}$). We can define from this ${\displaystyle \Omega '}$ a probability ${\displaystyle P'}$. So, from the first random variable ${\displaystyle X}$, we can define another random variable ${\displaystyle X_{t}}$:

Definition:

Let ${\displaystyle X}$ be a random variable. A stochastic process (associated with ${\displaystyle X}$) is a function of ${\displaystyle X}$ and ${\displaystyle t}$.

${\displaystyle X_{t}}$ is called a stochastic function of ${\displaystyle X}$ or a

stochastic process. Generally, the probability ${\displaystyle P(X_{t}\in {\mathrel {[}}x,x+dx{\mathrel {[}}{\mbox{ at }}t_{i})}$ depends on the history of the values of ${\displaystyle X_{t}}$ before ${\displaystyle t_{i}}$. One defines the conditional probability ${\displaystyle P(X_{t=t_{i}}\in {\mathrel {[}}x,x+dx{\mathrel {[}}|X_{t\leq t_{i}})}$ as the probability for ${\displaystyle X_{t}}$ to take a value between ${\displaystyle x}$ and ${\displaystyle x+dx}$ at time ${\displaystyle t_{i}}$, knowing the values of ${\displaystyle X_{t}}$ for times prior to ${\displaystyle t_{i}}$ (the "history" of ${\displaystyle X_{t}}$). A Markov process is a stochastic process with the property that for any set of successive times ${\displaystyle t_{1},\dots ,t_{n}}$ one has:

${\displaystyle P_{1|n-1}(X_{t=t_{n}}\in {\mathrel {[}}x,x+dx{\mathrel {[}}|X_{t_{1}}\dots X_{t_{n-1}})=P_{1|1}(X_{t=t_{n}}\in {\mathrel {[}}x,x+dx{\mathrel {[}}|X_{t_{n-1}})}$

${\displaystyle P_{i|j}}$ denotes the probability for ${\displaystyle i}$ conditions to be satisfied, knowing ${\displaystyle j}$ anterior events. In other words, the expected value of ${\displaystyle X_{t}}$ at time ${\displaystyle t_{n}}$ depends only on the value of ${\displaystyle X_{t}}$ at the previous time ${\displaystyle t_{n-1}}$. A Markov process is defined by ${\displaystyle P_{1}}$ and the transition probability ${\displaystyle P_{1|1}}$ (or equivalently by the density functions ${\displaystyle f_{1}(x,t)}$ and ${\displaystyle f_{1|1}(x_{2},t_{2}|x_{1},t_{1})}$). It can be seen ([#References|references]) that two functions ${\displaystyle f_{1}}$ and ${\displaystyle f_{1|1}}$ define a Markov process if and only if they verify:

• the Chapman-Kolmogorov equation:

${\displaystyle f_{1|1}(x_{3},t_{3}|x_{1},t_{1})=\int f_{1|1}(x_{3},t_{3}|x_{2},t_{2})f_{1|1}(x_{2},t_{2}|x_{1},t_{1})dx_{2}}$

eqnecmar

• the compatibility equation:

${\displaystyle f_{1}(x_{2},t_{2})=\int f_{1|1}(x_{2},t_{2}|x_{1},t_{1})f_{1}(x_{1},t_{1})dx_{1}}$

A Wiener process\index{Wiener process}\index{Brownian motion} (or Brownian motion) is a Markov process for which:

${\displaystyle f_{1|1}(x_{2},t_{2}|x_{1},t_{1})={\frac {1}{\sqrt {2\pi (t_{2}-t_{1})}}}e^{-{\frac {(x_{2}-x_{1})^{2}}{2(t_{2}-t_{1})}}}}$

Using equation eqnecmar, one gets:

${\displaystyle f_{1}(x,t)={\frac {1}{\sqrt {2\pi t}}}e^{-{\frac {x^{2}}{2t}}}}$
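This Gaussian density can be checked by direct simulation: summing independent increments of variance ${\displaystyle dt}$, the position at time ${\displaystyle t=1}$ should have mean 0 and variance ${\displaystyle t}$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 100_000, 100, 0.01      # final time t = 1

# Wiener increments are independent Gaussians of variance dt
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = dW.cumsum(axis=1)                          # sampled Brownian paths

# f1(x, t) is the N(0, t) density: sample mean ~ 0 and variance ~ t = 1
mean_W1, var_W1 = W[:, -1].mean(), W[:, -1].var()
```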

As stochastic processes were defined as a function of a random variable and time, a large class (this definition excludes, however, discontinuous cases such as Poisson processes) of stochastic processes can be defined as functions of the Brownian motion (or Wiener process) ${\displaystyle W_{t}}$. This is our second definition of a stochastic process:

Definition:

Let ${\displaystyle W_{t}}$ be a Brownian motion. A stochastic process is a function of ${\displaystyle W_{t}}$ and ${\displaystyle t}$.

For instance, a model of the temporal evolution of stocks ([#References|references]) is:

${\displaystyle X_{t}=e^{(\sigma W_{t}+(\mu -{\frac {1}{2}}\sigma ^{2})t)}}$

A stochastic differential equation

${\displaystyle dX_{t}=a(t,X_{t})dt+b(t,X_{t})dW_{t}}$

gives an implicit definition of the stochastic process. The rules of differentiation with respect to the Brownian motion variable ${\displaystyle W_{t}}$ differ from the rules of differentiation with respect to the ordinary time variable. They are given by the Itô formula ([#References|references]). To understand the difference between the differentiation of an ordinary (Newtonian) function and of a stochastic function, consider the Taylor expansion, up to second order, of a function ${\displaystyle f(W_{t})}$:

${\displaystyle f(W_{t}+dW_{t})-f(W_{t})=f^{'}(W_{t})dW_{t}+{\frac {1}{2}}f^{''}(W_{t})(dW_{t})^{2}+\dots }$

Usually (for Newtonian functions), the differential ${\displaystyle df(W_{t})}$ is just ${\displaystyle f^{'}(W_{t})dW_{t}}$. But for a stochastic process ${\displaystyle f(W_{t})}$ the second order term ${\displaystyle {\frac {1}{2}}f^{''}(W_{t})(dW_{t})^{2}}$ is no longer negligible. Indeed, as can be seen using the properties of the Brownian motion, we have:

${\displaystyle \int _{0}^{t}(dW_{s})^{2}=t}$

or

${\displaystyle (dW_{t})^{2}=dt.}$
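The rule ${\displaystyle (dW_{t})^{2}=dt}$ can itself be checked numerically: the sum of squared increments over ${\displaystyle [0,t]}$ (the quadratic variation) concentrates around ${\displaystyle t}$ as the step decreases. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
t, n_steps, n_paths = 1.0, 10_000, 1_000
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

# quadratic variation: the sum of (dW)^2 over [0, t] concentrates at t
qv = (dW**2).sum(axis=1)
```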

Figure figbrown illustrates the difference between a stochastic process (a simple Brownian motion in the picture) and a differentiable function: the Brownian motion keeps a self-similar structure under progressive zooms.

[Figure figbrown: comparison of a progressive zooming on a Brownian motion and on a differentiable function.]

Let us here just mention the most basic scheme for integrating stochastic processes on a computer. Consider the time integration problem:

${\displaystyle dX_{t}=a(t,X_{t})dt+b(t,X_{t})dW_{t}}$

with initial value:

${\displaystyle X_{t_{0}}=X_{0}}$

The most basic way to approximate the solution of the previous problem is the Euler (or Euler-Maruyama) scheme. It reads:

${\displaystyle X_{n+1}=X_{n}+a(\tau _{n},X_{n})(\tau _{n+1}-\tau _{n})+b(\tau _{n},X_{n})(W_{\tau _{n+1}}-W_{\tau _{n}})}$

More sophisticated methods can be found in ([#References|references]).
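As a sketch, the scheme applied to ${\displaystyle dX_{t}=\mu X_{t}dt+\sigma X_{t}dW_{t}}$ (whose exact solution is the exponential stock model quoted above) can be compared to that exact solution driven by the same Brownian path; ${\displaystyle \mu }$ and ${\displaystyle \sigma }$ are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.05, 0.2              # illustrative drift and volatility
T, n_steps = 1.0, 100_000
dt = T / n_steps

# Euler-Maruyama for dX = mu*X dt + sigma*X dW, with X_0 = 1
X, W = 1.0, 0.0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    X += mu*X*dt + sigma*X*dW
    W += dW

# exact solution X_t = exp(sigma*W_t + (mu - sigma^2/2) t), same path
X_exact = np.exp(sigma*W + (mu - 0.5*sigma**2)*T)
rel_error = abs(X - X_exact)/X_exact
```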

## Functional derivative

Let ${\displaystyle I(\phi )}$ be a functional. To calculate the differential ${\displaystyle dI(\phi )}$ of a functional ${\displaystyle I(\phi )}$, one expresses the difference ${\displaystyle I(\phi +d\phi )-I(\phi )}$ as a functional of ${\displaystyle d\phi }$.

The functional derivative of ${\displaystyle I}$, denoted ${\displaystyle {\frac {\delta I}{\delta \phi }}}$, is given by the limit:

${\displaystyle {\frac {\delta I}{\delta \phi }}=\lim _{a\rightarrow 0}{\frac {1}{a}}{\frac {\partial I}{\partial \phi _{i}}}}$

where ${\displaystyle a}$ is a real discretization step and ${\displaystyle \phi _{i}=\phi (ia)}$.

Here are some examples:

Example:

If ${\displaystyle I(\phi )=\int f(y)\phi ^{p}(y)dy}$ then ${\displaystyle {\frac {\delta I}{\delta \phi }}=pf(x)\phi ^{p-1}(x)}$

Example:

If ${\displaystyle I(\phi )=\int V(\phi (y))dy}$ then ${\displaystyle {\frac {\delta I}{\delta \phi }}=V^{\prime }(\phi (x))}$.
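The first example can be checked against the discretized definition by bumping ${\displaystyle \phi }$ at a single grid point (here with ${\displaystyle p=2}$ and ${\displaystyle f=1}$, so that ${\displaystyle {\frac {\delta I}{\delta \phi }}=2\phi (x)}$; the field ${\displaystyle \phi }$ below is an arbitrary illustrative choice):

```python
import numpy as np

a = 1e-3                               # discretization step
y = np.arange(-5.0, 5.0, a)
phi = np.exp(-y**2)                    # illustrative field phi(y)

def I(phi_vals):
    # I(phi) = integral of phi^2 dy   (example with f = 1, p = 2)
    return a*np.sum(phi_vals**2)

# (1/a) * dI/dphi_i approximates  delta I / delta phi  at x = y[i]
i = len(y)//2                          # grid point closest to x = 0
eps = 1e-6
bumped = phi.copy()
bumped[i] += eps
numeric = (I(bumped) - I(phi))/eps/a   # expected: 2*phi(x)
```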

chapretour

## Comparison of tensor values at different points

### Expansion of a function in series about x=a

Definition:

A function ${\displaystyle f}$ admits a series expansion at order ${\displaystyle n}$ around ${\displaystyle x=a}$ if there exist numbers ${\displaystyle (\lambda _{0},\dots ,\lambda _{n})}$ such that:

${\displaystyle f(a+h)=\sum _{k=0}^{n}\lambda _{k}h^{k}+h^{n}\epsilon (h)}$

where ${\displaystyle \epsilon (h)}$ tends to zero when ${\displaystyle h}$ tends to zero.

Theorem:

If a function is ${\displaystyle n}$ times differentiable at ${\displaystyle a}$, then it admits a series expansion at order ${\displaystyle n}$ around ${\displaystyle x=a}$, given by the Taylor-Young formula:

${\displaystyle f(a+h)=\sum _{k=0}^{n}{\frac {1}{k!}}f^{(k)}(a)h^{k}+h^{n}\epsilon (h)}$

where ${\displaystyle \epsilon (h)}$ tends to zero when ${\displaystyle h}$ tends to zero and where ${\displaystyle f^{(k)}(a)}$ is the ${\displaystyle k}$-th derivative of ${\displaystyle f}$ at ${\displaystyle x=a}$.

Note that the converse of the theorem is false: ${\displaystyle f(x)=x^{3}\sin({\frac {1}{x}})}$ is a function that admits an expansion around zero at order 2 but is not twice differentiable at zero.
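The Taylor-Young formula is easy to check numerically, for instance for ${\displaystyle f=\exp }$ at ${\displaystyle a=0}$ and ${\displaystyle n=3}$: the remainder after the order-${\displaystyle n}$ sum must vanish faster than ${\displaystyle h^{n}}$ (here, divided by ${\displaystyle h^{3}}$, it behaves like ${\displaystyle h/24}$):

```python
import math

a, n = 0.0, 3
for h in (1e-1, 1e-2, 1e-3):
    # order-n Taylor sum of exp around a = 0
    series = sum(math.exp(a)/math.factorial(k)*h**k for k in range(n + 1))
    remainder = math.exp(a + h) - series
    # remainder / h**n -> 0 as h -> 0
    print(h, remainder/h**n)
```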

secderico

### Non objective quantities

Consider two points ${\displaystyle M}$ and ${\displaystyle M'}$ with coordinates ${\displaystyle x^{i}}$ and ${\displaystyle x^{i}+dx^{i}}$. A first variation often considered in physics is:

eqapdai

${\displaystyle d(a^{i}e_{i})={\frac {\partial a^{i}}{\partial x^{j}}}dx^{j}e_{i}}$

The non-objective variation is:

${\displaystyle da^{i}={\frac {\partial a^{i}}{\partial x^{j}}}dx^{j}}$

Note that ${\displaystyle da^{i}}$ is not a tensor: equation eqapdai assumes that ${\displaystyle e_{i}}$ does not change from point ${\displaystyle M}$ to point ${\displaystyle M'}$, so ${\displaystyle da^{i}}$ does not obey the tensor transformation rules. This is why it is called the non-objective variation. An objective variation that allows one to define a tensor is presented in the next section: it takes into account the variations of the basis vectors.

exmpderr

Example: Lagrangian speed. The Lagrangian description of the motion of a particle number ${\displaystyle a}$ is given by its position ${\displaystyle r_{a}}$ at each time ${\displaystyle t}$. If

${\displaystyle r_{a}(t)=x^{i}(t)e_{i}}$

the Lagrangian speed is:

${\displaystyle {\frac {dr_{a}}{dt}}={\frac {dx^{i}}{dt}}e_{i}}$

The derivative introduced in example exmpderr is not objective, which means that it is not invariant under a change of axes. In particular, one has the famous vector derivation formula:

eqvectderfor

${\displaystyle {\frac {dA}{dt}}_{R}={\frac {dA}{dt}}_{R_{1}}+\omega _{R_{1}/R}\wedge A}$

Example:

The Eulerian description of a fluid is given by a field of "Eulerian" velocities ${\displaystyle v(x,t)}$ and initial conditions, such that:

${\displaystyle x=r_{a}(0)}$

where ${\displaystyle r_{a}}$ is the Lagrangian position of the particle, and:

${\displaystyle v(x,t)={\frac {dr_{a}}{dt}}.}$

Eulerian and Lagrangian descriptions are equivalent.

Example:

Let us consider the variation of the speed field ${\displaystyle u}$ between two positions at time ${\displaystyle t}$. If the speed field ${\displaystyle u}$ is differentiable, there exists a linear mapping ${\displaystyle K}$ such that:

eqchampudif

${\displaystyle u_{i}({\vec {r}}+\delta {\vec {r}})-u_{i}({\vec {r}})=K_{ij}\delta {\vec {r}}_{j}+o(\|\delta {\vec {r}}\|)}$

${\displaystyle K_{ij}=u_{i,j}}$ is called the speed field gradient tensor. The tensor ${\displaystyle K}$ can be decomposed into a symmetric and an antisymmetric part:

${\displaystyle K=\left({\begin{array}{ccc}e_{11}&e_{12}&e_{13}\\e_{21}&e_{22}&e_{23}\\e_{31}&e_{32}&e_{33}\\\end{array}}\right)+\left({\begin{array}{ccc}0&-s_{3}&s_{2}\\.&0&-s_{1}\\.&.&0\\\end{array}}\right)}$

The symmetric part is called the dilatation tensor; the antisymmetric part is called the rotation tensor. Now, ${\displaystyle u_{i}({\vec {r}}+\delta {\vec {r}})-u_{i}({\vec {r}})={\frac {d\delta {\vec {r}}_{i}}{dt}}}$. Thus, using equation eqchampudif:

${\displaystyle {\frac {d\delta {\vec {r}}}{dt}}=K\delta {\vec {r}}}$

This result, true for the vector ${\displaystyle \delta {\vec {r}}}$, is also true for any vector ${\displaystyle {\vec {a}}}$. This last equation allows one to show that:

• The derivative with respect to time of the elementary volume ${\displaystyle dv}$ in the neighbourhood of a particle that is followed in its movement is:

${\displaystyle {\frac {d(dv)}{dt}}={\mbox{ div }}u\,dv}$

Indeed, for an elementary volume ${\displaystyle \delta v=\delta x\,\delta y\,\delta z}$:

${\displaystyle d(\delta v)=d(\delta x)\delta y\delta z+d(\delta y)\delta x\delta z+d(\delta z)\delta x\delta y}$

eqformvol

${\displaystyle {\frac {d(\delta v)}{dt}}={\mbox{ div }}u\,\delta v}$

• The speed field gradient ${\displaystyle K}$ of a solid is antisymmetric[1].
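The decomposition of ${\displaystyle K}$ used above can be sketched numerically; the velocity gradient below (a simple shear) is an illustrative choice:

```python
import numpy as np

# illustrative velocity gradient tensor K_ij = u_{i,j} (simple shear)
K = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

E = 0.5*(K + K.T)        # symmetric part: dilatation tensor e_ij
S = 0.5*(K - K.T)        # antisymmetric part: rotation tensor
```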

exmppartder

Example: Particle ("particulaire") derivative of a tensor. The particle derivative is the time derivative of a quantity defined on a set of particles that are followed during their movement. When using Lagrange variables, it can be identified with the partial derivative with respect to time ([#References|references]).

The following property can be shown ([#References|references]):

Theorem:

Let us consider the integral:

${\displaystyle I=\int _{V}\omega }$

where ${\displaystyle V}$ is a connected manifold of dimension ${\displaystyle p}$ (volume, surface, ...) that is followed during its movement and ${\displaystyle \omega }$ a differential form of degree ${\displaystyle p}$ expressed in Euler variables. The particle derivative of ${\displaystyle I}$ verifies:

${\displaystyle {\frac {d}{dt}}\int _{V}\omega =\int _{V}{\frac {d\omega }{dt}}}$

A proof of this result can be found in ([#References|references]).

Example:

Consider the integral

${\displaystyle I=\int _{D}C(x,t)dv}$

where ${\displaystyle D}$ is a bounded connected domain that is followed during its movement, and ${\displaystyle C}$ is a scalar-valued function, continuous in the closure of ${\displaystyle D}$ and differentiable in ${\displaystyle D}$. The particle derivative of ${\displaystyle I}$ is:

${\displaystyle {\frac {dI}{dt}}=\int _{D}\{{\frac {\partial C}{\partial t}}+{\mbox{ div }}(C{\vec {u}})\}dv,}$

since from equation eqformvol:

${\displaystyle {\frac {d}{dt}}(dv)={\mbox{ div }}{\vec {u}}dv.}$
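This transport formula can be verified symbolically in one dimension, where ${\displaystyle {\mbox{ div }}(C{\vec {u}})}$ reduces to ${\displaystyle {\frac {\partial (Cu)}{\partial x}}}$; the flow ${\displaystyle u(x)=x}$ and the field ${\displaystyle C}$ below are illustrative choices (particles move as ${\displaystyle x_{0}e^{t}}$, so the followed domain is ${\displaystyle [a_{0}e^{t},b_{0}e^{t}]}$):

```python
import sympy as sp

x, t, a0, b0 = sp.symbols('x t a0 b0', positive=True)

u = x                                 # illustrative 1D velocity field
C = x * sp.exp(-t)                    # illustrative scalar field C(x, t)
a, b = a0*sp.exp(t), b0*sp.exp(t)     # domain followed in the flow of u

# left side: particle derivative of I(t) = integral of C over [a(t), b(t)]
lhs = sp.diff(sp.integrate(C, (x, a, b)), t)

# right side: integral of dC/dt + d(C*u)/dx over the same domain
rhs = sp.integrate(sp.diff(C, t) + sp.diff(C*u, x), (x, a, b))

assert sp.simplify(lhs - rhs) == 0
```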

secandericov

### Covariant derivative

In this section a derivative that is independent of the considered reference frame (an objective derivative) is introduced. Consider the difference between a quantity ${\displaystyle a}$ evaluated at two points ${\displaystyle M}$ and ${\displaystyle M'}$:

${\displaystyle da=a(M')-a(M)=da^{i}e_{i}+a^{i}de_{i}}$

As in section secderico:

${\displaystyle da^{i}e_{i}={\frac {\partial a^{i}}{\partial x^{j}}}dx^{j}e_{i}}$

The variation ${\displaystyle de_{i}}$ is linearly connected to the ${\displaystyle e_{j}}$'s via the tangent application:

${\displaystyle de_{i}=d\omega _{i}^{j}e_{j}}$

The rotation ${\displaystyle d\omega _{i}^{j}}$ depends linearly on the displacement:

eqchr

${\displaystyle de_{i}=\Gamma _{ik}^{j}dx^{k}e_{j}}$

The symbols ${\displaystyle \Gamma _{ik}^{j}}$, called Christoffel symbols[2], are not[3] tensors: they connect the properties of space at ${\displaystyle M}$ to its properties at point ${\displaystyle M'}$. By a change of index in equation eqchr:

eqcovdiff

${\displaystyle d(a^{i}e_{i})={\frac {\partial a^{i}}{\partial x^{j}}}dx^{j}e_{i}+a^{k}\Gamma _{kj}^{i}dx^{j}e_{i}}$

As the ${\displaystyle x^{j}}$'s are independent variables:

Definition:

The covariant derivative of a contravariant vector ${\displaystyle a^{i}}$ is

eqdefdercov

${\displaystyle {\frac {Da^{i}}{Dx^{j}}}={\frac {\partial a^{i}}{\partial x^{j}}}+a^{k}\Gamma _{kj}^{i}}$

The differential can thus be written:

${\displaystyle da^{i}={\frac {Da^{i}}{Dx^{j}}}dx^{j},}$

which is the generalization of the differential:

${\displaystyle da_{i}={\frac {\partial a_{i}}{\partial x^{j}}}dx^{j}}$

considered when there is no transformation of axes. This formula can be generalized to tensors.

Remark:

For the calculation of the particle derivative exposed at section secderico, the ${\displaystyle x^{j}}$ are the coordinates of the point, but the quantity to derive also depends on time. That is the reason why a term ${\displaystyle {\frac {\partial x^{j}}{\partial t}}}$ appears in equation eqformalder but not in equation eqdefdercov.


Remark:

From equation eqdefdercov, the vector derivation formula of equation eqvectderfor can be recovered when:

${\displaystyle de_{i}=\omega _{i}^{j}dte_{j}}$

Remark:

In spaces with a metric, the ${\displaystyle \Gamma _{kj}^{i}}$ are functions of the metric tensor ${\displaystyle g_{ij}}$.
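As an illustration of this remark, the Christoffel symbols of the Euclidean plane in polar coordinates (metric ${\displaystyle g={\mbox{diag}}(1,r^{2})}$) can be computed from the standard formula ${\displaystyle \Gamma _{kj}^{i}={\frac {1}{2}}g^{il}(\partial _{k}g_{lj}+\partial _{j}g_{lk}-\partial _{l}g_{kj})}$; a sketch with sympy:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = [r, theta]
g = sp.Matrix([[1, 0], [0, r**2]])   # Euclidean plane metric in polar coordinates
ginv = g.inv()

def gamma(i, k, j):
    # Gamma^i_{kj} = (1/2) g^{il} (d_k g_{lj} + d_j g_{lk} - d_l g_{kj})
    return sp.simplify(sum(
        sp.Rational(1, 2)*ginv[i, l]*(sp.diff(g[l, j], coords[k])
                                      + sp.diff(g[l, k], coords[j])
                                      - sp.diff(g[k, j], coords[l]))
        for l in range(2)))

# non-zero symbols: Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r
```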

### Covariant differential operators

The following differential operators with tensorial properties can be defined:

• Gradient of a scalar:

${\displaystyle a={\mbox{ grad }}V}$

with ${\displaystyle a_{i}={\frac {\partial V}{\partial x^{i}}}}$.

• Rotational of a vector

${\displaystyle b={\mbox{ rot }}a_{i}}$

with ${\displaystyle b_{ik}={\frac {\partial a_{k}}{\partial x^{i}}}-{\frac {\partial a_{i}}{\partial x^{k}}}}$. The tensoriality of the rotational can be shown using the tensoriality of the covariant derivative:

${\displaystyle {\frac {\partial a_{k}}{\partial x^{i}}}-{\frac {\partial a_{i}}{\partial x^{k}}}={\frac {Da_{k}}{Dx^{i}}}-{\frac {Da_{i}}{Dx^{k}}}}$

• Divergence of a contravariant density:

${\displaystyle d={\mbox{ div }}a^{i}}$

where ${\displaystyle d={\frac {\partial a^{i}}{\partial x^{i}}}}$.

For more details on operators that can be defined on tensors, see ([#References|references]).


In an orthonormal Euclidean space one has the following relations:

${\displaystyle {\mbox{ rot }}({\mbox{ grad }}\phi )=0}$

and

${\displaystyle {\mbox{ div }}({\mbox{ rot }}(a))=0}$

${\displaystyle \nabla \wedge (\nabla \wedge a)=\nabla (\nabla \cdot a)-\nabla ^{2}a}$
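The first two identities can be verified symbolically in Cartesian coordinates; the scalar and vector fields below are arbitrary smooth illustrative choices:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def rot(a):
    return [sp.diff(a[2], y) - sp.diff(a[1], z),
            sp.diff(a[0], z) - sp.diff(a[2], x),
            sp.diff(a[1], x) - sp.diff(a[0], y)]

def div(a):
    return sp.diff(a[0], x) + sp.diff(a[1], y) + sp.diff(a[2], z)

phi = sp.sin(x*y)*sp.exp(z)            # arbitrary smooth scalar field
a = [x*y*z, sp.cos(y), x**2 - z*y]     # arbitrary smooth vector field

assert all(sp.simplify(c) == 0 for c in rot(grad(phi)))   # rot(grad phi) = 0
assert sp.simplify(div(rot(a))) == 0                      # div(rot a) = 0
```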

1. Indeed, let ${\displaystyle u}$ and ${\displaystyle v}$ be two position vectors bound to the solid. By definition of a solid, the scalar product ${\displaystyle u\cdot v}$ remains constant as time evolves. So:

${\displaystyle {\frac {d(u\cdot v)}{dt}}=0}$

${\displaystyle {\frac {du}{dt}}\cdot v+u\cdot {\frac {dv}{dt}}=0}$

So:

${\displaystyle K_{ij}u_{j}v_{i}+u_{i}K_{ij}v_{j}=0}$

As this equality is true for any ${\displaystyle u,v}$, one has:

${\displaystyle K_{ij}=-K_{ji}}$

In other words, ${\displaystyle K}$ is antisymmetric. So, from the preceding theorem:

${\displaystyle {\frac {d(PQ)_{i}}{dt}}=\Omega _{ij}(PQ)_{j}}$

This can be rewritten by saying that the speed field is antisymmetric, i.e., one has:

${\displaystyle V_{P}=V_{O}+\Omega \wedge (OP)}$

2. In a space with a metric ${\displaystyle g_{ij}}$, the coefficients ${\displaystyle \Gamma _{hk}^{i}}$ can be expressed as functions of the coefficients ${\displaystyle g_{ij}}$.
3. Just as ${\displaystyle {\frac {\partial a^{i}}{\partial x^{j}}}}$ is not a tensor. However, ${\displaystyle d(a^{i}e_{i})}$ given by equation eqcovdiff does have tensorial properties.