# Mathematical Methods of Physics/Summation convention

## The Basic Notation

Often when working with tensors a large amount of summation needs to be carried out. To write these sums more compactly we use Einstein index notation (the summation convention).

The notation is simple: whenever an index appears twice in a single term, summation over that index is implied. Instead of writing $\sum_{j} A_j B_j$ we simply write $A_j B_j$.
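If you want to experiment numerically, NumPy's `einsum` follows the same rule: an index letter that appears twice in the subscript string is summed over. A minimal sketch with two arbitrary example vectors:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# 'j,j->' repeats the index j, so it is summed over: this is A_j B_j.
dot = np.einsum('j,j->', A, B)
print(dot, np.dot(A, B))  # 32.0 32.0
```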

A couple of common tensors used with this notation are

• $\delta_{a}^{b}$ (the Kronecker delta) equals $1$ when $a = b$ and $0$ otherwise
• $\epsilon_{i j k}$ (the Levi-Civita symbol) is zero if $i = j$, $j = k$, or $i = k$, and $\epsilon_{1 2 3} = 1$. Odd permutations pick up a minus sign (e.g. $\epsilon_{j i k} = - \epsilon_{i j k} = - \epsilon_{k i j}$). In other words, swapping any two indices flips the sign of the tensor.
• These are related by $\epsilon_{ijk} \epsilon^{imn} = \delta_j^m \delta_k^n - \delta_j^n \delta_k^m$, where the repeated index $i$ is summed over (convince yourself that this is true; a numerical check is sketched below)
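To see the identity in action, here is a small numerical check (a sketch using NumPy; the explicit construction of $\epsilon_{ijk}$ below uses indices running from 0 to 2):

```python
import numpy as np

# Levi-Civita symbol: +1 for even permutations of (0, 1, 2), -1 for odd, 0 otherwise
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

delta = np.eye(3)  # Kronecker delta

# Left side: eps_{ijk} eps_{imn}, with the repeated index i summed over
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
# Right side: delta_{jm} delta_{kn} - delta_{jn} delta_{km}
rhs = np.einsum('jm,kn->jkmn', delta, delta) - np.einsum('jn,km->jkmn', delta, delta)

print(np.allclose(lhs, rhs))  # True
```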

Now we can write some common vector operations:

• Scalar (Dot) Product $\vec{A} \cdot \vec{B} = A_i B_i$
• Vector (Cross) Product $(\vec{A} \times \vec{B})_i = \epsilon_{ijk} A_j B_k$
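These index expressions translate directly into `einsum` calls. The following sketch (using the same explicit $\epsilon_{ijk}$ construction as above and arbitrary example vectors) compares them against NumPy's built-in `dot` and `cross`:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

dot = np.einsum('i,i->', A, B)              # A_i B_i
cross = np.einsum('ijk,j,k->i', eps, A, B)  # (A x B)_i = eps_{ijk} A_j B_k

print(np.allclose(dot, np.dot(A, B)), np.allclose(cross, np.cross(A, B)))  # True True
```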

## Examples

• Prove that $\vec{A} \cdot (\vec{B} \times \vec{C}) = \vec{B} \cdot (\vec{C} \times \vec{A})$

Writing the left-hand side in index notation gives $A_i (\epsilon_{ijk} B_j C_k)$. From here we can swap the first two indices of $\epsilon$ ($i \leftrightarrow j$) and get $B_j (- \epsilon_{jik} A_i C_k)$; note the sign flip. To get a positive sign again we can swap the remaining pair ($i \leftrightarrow k$) and get $B_j (\epsilon_{jki} C_k A_i) = \vec{B} \cdot (\vec{C} \times \vec{A})$, as desired.
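The index manipulation above can be sanity-checked numerically. A quick sketch with random example vectors (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))  # three arbitrary vectors

lhs = np.dot(A, np.cross(B, C))  # A . (B x C)
rhs = np.dot(B, np.cross(C, A))  # B . (C x A)
print(np.allclose(lhs, rhs))     # True
```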

• Prove that $\vec{A} \times (\vec{B} \times \vec{C}) = (\vec{A} \cdot \vec{C}) \vec B - (\vec{A} \cdot \vec{B}) \vec{C}$

$[\vec{A} \times (\vec{B} \times \vec{C})]_i = \epsilon_{ijk} A_j (\epsilon_{klm} B_l C_m)$ (Watch the indices closely - some students inadvertently add too many)

We know we want to get dot products out of this. In order to do that we will have to use the expansion of the product of two Levi-Civita tensors in terms of Kronecker deltas. We want the two $\epsilon$ tensors to have the same first index, which we can arrange by swapping the indices ($i \leftrightarrow k$) in the first one: $\epsilon_{ijk} \epsilon_{klm} A_j B_l C_m = - \epsilon_{kji} \epsilon_{klm} A_j B_l C_m = - (\delta_{j l} \delta_{i m} - \delta_{j m} \delta_{i l}) A_j B_l C_m$. Now we can make the observation that the first term is non-zero only if $j = l$ and $i = m$, so $\delta_{j l} \delta_{i m} A_j B_l C_m = A_l B_l C_i$; this is just the dot product $(\vec{A} \cdot \vec{B}) C_i$. The second term is non-zero only if $j = m$ and $i = l$, so $\delta_{j m} \delta_{i l} A_j B_l C_m = A_m B_i C_m = (\vec{A} \cdot \vec{C}) B_i$.

Combining these, and remembering the overall minus sign, we are left with $-\left[ (\vec{A} \cdot \vec{B}) C_i - (\vec{A} \cdot \vec{C}) B_i \right] = (\vec{A} \cdot \vec{C}) B_i - (\vec{A} \cdot \vec{B}) C_i$, i.e. $\vec{A} \times (\vec{B} \times \vec{C}) = (\vec{A} \cdot \vec{C}) \vec{B} - (\vec{A} \cdot \vec{B}) \vec{C}$, as desired.
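Again, a quick numerical sanity check of the result (a sketch with random example vectors; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((3, 3))  # three arbitrary vectors

lhs = np.cross(A, np.cross(B, C))          # A x (B x C)
rhs = np.dot(A, C) * B - np.dot(A, B) * C  # (A.C) B - (A.B) C
print(np.allclose(lhs, rhs))               # True
```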

## Tensor Notation

When tensors are used, the distinction between an upper and a lower index becomes important, as does the ordering of the indices. Summation is implied only over an index that appears once up and once down: $T^{a}_{b} v^b$ will be contracted into a new vector, but in $T^{a}_{b} v_b$ the index $b$ appears twice in the lower position, so no contraction is implied.
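In a numerical sketch the contraction $T^{a}_{b} v^b$ is again a single `einsum` call. Note that NumPy arrays carry no notion of upper versus lower indices, so keeping track of index placement is bookkeeping done by hand; the arrays below are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))  # components of T^a_b
v = rng.standard_normal(3)       # components of v^b

w = np.einsum('ab,b->a', T, v)   # w^a = T^a_b v^b: the index b is summed away
print(w.shape)                   # (3,) -- a vector, as expected
```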

Definitions that may prove useful: a tensor is symmetric if it satisfies $T_{a b} = T_{b a}$, and antisymmetric if $T_{a b} = - T_{b a}$. Many tensors satisfy neither of these properties, so make sure the property actually holds before blindly applying it to a problem.
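As a small illustration (with arbitrary example matrices), these properties are easy to test componentwise:

```python
import numpy as np

S = np.array([[0.0, 1.0], [1.0, 2.0]])   # S_ab = S_ba
A = np.array([[0.0, 3.0], [-3.0, 0.0]])  # A_ab = -A_ba

print(np.allclose(S, S.T))   # True: symmetric
print(np.allclose(A, -A.T))  # True: antisymmetric
```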