# General Relativity/Einstein Summation Notation

In the last sections we talked about a number of operations involving tensors. One of them takes a covariant vector and a contravariant vector and turns them into a scalar. Another takes a contravariant vector, feeds it into a tensor, and gets out a force.

Since we want to do math with these, let us see how to represent them. As an example, take a contravariant vector (v), which represents the direction and speed in which we are travelling, and a covariant vector (w), which represents the rate at which a temperature changes with distance in a certain direction. We want to combine them into the scalar invariant quantity describing the rate at which the temperature changes with time as we move in direction v.

Now we could do it really abstractly. For example, if we want to combine a contravariant tensor and a covariant tensor to get a scalar, we could write...

$f=\mathbf {v} \cdot \mathbf {w}$

This is just our old friend the dot product. It has the advantage of being short and simple to write. The problem is that it doesn't tell us what f, v, and w are: f is a scalar, v is a contravariant tensor, and w is a covariant tensor. This wasn't a problem in basic vector calculus, where we only had to deal with scalars and vectors, but it is a problem now that our mathematical zoo has more animals.

The next approach would be to write everything as a component. So we have

$f=v^{1}w_{1}+v^{2}w_{2}$

The trouble with this is that it means writing out the same symbols over and over again. Let's write it out in summation notation.

$f=\sum _{\mu =1}^{2}v^{\mu }w_{\mu }$

Better... but that summation sign: do we really want to write it over and over and over and over? What does it give us? We can be really clever and just write

$f=v^{\mu }w_{\mu }$

and just know that when we see the same index on top and on the bottom, we mean to take a sum. This is called Einstein summation notation: whenever the same letter appears as both a superscript ("upper") index and a subscript ("lower") index in a product, one automatically sums over that index. Note that in GR, indices usually range from 0 to 3; by convention, Greek letters range over 0 to 3, while Roman letters range over 1 to 3.
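As a concrete illustration, the convention can be spelled out as an explicit loop over the repeated index; a minimal sketch in Python, with made-up component values for the vectors v and w from the text:

```python
# Einstein summation f = v^mu w_mu, written as an explicit sum over the
# repeated index mu = 0..3. Component values are made up for illustration.

v = [2.0, 1.0, 0.0, 3.0]   # contravariant components v^0 .. v^3
w = [1.0, 4.0, 2.0, 0.5]   # covariant components   w_0 .. w_3

# The repeated upper/lower index mu implies a sum over mu:
f = sum(v[mu] * w[mu] for mu in range(4))
print(f)  # 2*1 + 1*4 + 0*2 + 3*0.5 = 7.5
```

The notation simply suppresses the `sum(... for mu in range(4))` machinery, which is why it saves so much writing once expressions involve several repeated indices.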

Here are some more examples of the Einstein summation notation being used:

1. $v^{\mu }\sigma _{\mu }=\sum _{\mu =0}^{3}v^{\mu }\sigma _{\mu }=v^{0}\sigma _{0}+v^{1}\sigma _{1}+v^{2}\sigma _{2}+v^{3}\sigma _{3}$

2. $T^{\alpha \beta }S_{\alpha \beta }=\sum _{\alpha ,\beta =0}^{3}T^{\alpha \beta }S_{\alpha \beta }=T^{00}S_{00}+T^{10}S_{10}+T^{20}S_{20}+T^{30}S_{30}+T^{01}S_{01}+T^{11}S_{11}+$ etc. (16 terms total)

3. $R_{\mu \nu }=R_{\ \mu \rho \nu }^{\rho }=\sum _{\rho =0}^{3}R_{\ \mu \rho \nu }^{\rho }=R_{\ \mu 0\nu }^{0}+R_{\ \mu 1\nu }^{1}+R_{\ \mu 2\nu }^{2}+R_{\ \mu 3\nu }^{3}$

## Identities

Several identities arise from indicial notation.

### Contraction

Since $\delta _{j}^{i}=1$ if $i=j$ and $0$ otherwise,

$\delta _{j}^{i}\delta _{k}^{j}=\delta _{k}^{i}\,$

$\delta _{j}^{i}x_{ij}=x_{mm}=\mathrm {trace} (x_{ij})$

### Differentiation
${\frac {\partial x_{i}}{\partial x_{j}}}=\delta _{j}^{i}$

${\frac {\partial x_{ij}}{\partial x_{k\ell }}}=\delta _{k}^{i}\delta _{\ell }^{j}$
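The contraction and differentiation identities can be checked numerically. A minimal sketch in Python, using an explicit matrix of components for the Kronecker delta and a made-up sample matrix for $x_{ij}$ (all values here are invented for illustration):

```python
# Check delta^i_j delta^j_k = delta^i_k and delta^i_j x_ij = trace(x),
# then verify d(x_i)/d(x_j) = delta^i_j with a finite difference.

n = 4
delta = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
x = [[float(i * n + j) for j in range(n)] for i in range(n)]  # sample matrix

# Contraction over the repeated index j reproduces the delta itself:
dd = [[sum(delta[i][j] * delta[j][k] for j in range(n)) for k in range(n)]
      for i in range(n)]
assert dd == delta

# Contracting delta with x picks out the diagonal, i.e. the trace:
assert sum(delta[i][j] * x[i][j] for i in range(n) for j in range(n)) == \
       sum(x[i][i] for i in range(n))

# Differentiation: the coordinate function x_i depends only on x_i itself,
# so a finite-difference partial derivative reproduces the Kronecker delta.
point = [1.0, 2.0, 3.0, 4.0]
h = 1e-6
for i in range(n):
    for j in range(n):
        bumped = list(point)
        bumped[j] += h
        d = (bumped[i] - point[i]) / h
        assert abs(d - (1.0 if i == j else 0.0)) < 1e-9

print("identities verified")
```

Note that in the contraction $\delta _{j}^{i}x_{ij}$ both indices are repeated, so the code sums over both i and j, which is exactly why only the diagonal terms survive.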