Electronic Properties of Materials/Quantum Mechanics for Engineers/The Fundamental Postulates

There are four basic postulates that underlie quantum mechanics.

Postulate I: Observables and Operators are Related

Postulate II: Measurement collapses the Wave Function

Postulate III: There exists a state function that allows expectation values to be calculated.

Postulate IV: The wave function evolves according to the time-dependent Schrodinger equation.

Postulate I

Each self-consistent, well-defined observable has a linear operator that satisfies the eigenvalue equation, ${\displaystyle {\hat {A}}\phi =a\phi }$, where ${\displaystyle A}$ is the observable, ${\displaystyle {\hat {A}}}$ is the operator, ${\displaystyle a}$ is the measured eigenvalue, and ${\displaystyle \phi }$ is the eigenfunction corresponding to ${\displaystyle a}$. In a given system there is a different eigenfunction for each eigenvalue, so you will often see ${\displaystyle \phi _{a}}$, which specifies that ${\displaystyle \phi }$ is the eigenfunction of ${\displaystyle a}$. Thus, this postulate links an observable to a mathematical operator.

What are Mathematical Operators?

An "operator" is a mathematical expression which acts on a function and transforms it into a different function. For example:

In this expression, ${\displaystyle {\hat {D}}_{x}}$ is the mathematical operator defined as the derivative with respect to ${\displaystyle x}$, so ${\displaystyle {\hat {D}}_{x}f(x)={df \over dx}}$. If ${\displaystyle {\hat {D}}_{x}}$ operates on some function of ${\displaystyle x}$, we can then apply additional operators to the result, each changing the function according to its own rule. For example, let's apply an operator, ${\displaystyle {\hat {R}}_{z90}}$, which rotates the function 90° about the z-axis.

Furthermore, applying a "divide by three" operator, or an Identity Operator, which leaves the function unchanged, yields similar results.

${\displaystyle {\begin{array}{lcl}{\hat {B}}=\ {\text{divide by three}}&\Longrightarrow &{\hat {B}}\phi ={\phi \over 3}\\{\hat {I}}=\ {\text{identity operator}}&\Longrightarrow &{\hat {I}}g=g\end{array}}}$
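The operators above can be sketched in code. This is a minimal illustration (not from the text) of operators as Python functions that take a function and return a new function; the names `D_x`, `B`, and `I_op` mirror the symbols above.

```python
# Operators as Python functions: each takes a (symbolic) function and
# returns a new function.
import sympy as sp

x = sp.symbols('x')

def D_x(f):
    """Derivative operator: d/dx."""
    return sp.diff(f, x)

def B(f):
    """The 'divide by three' operator."""
    return f / 3

def I_op(f):
    """Identity operator: leaves the function unchanged."""
    return f

phi = sp.sin(x)
print(D_x(phi))   # cos(x)
print(B(phi))     # sin(x)/3
print(I_op(phi))  # sin(x)
```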

Physically Significant Operator Observables:

Physically meaningful observables all have operators. These arise in a variety of ways, but a useful starting point is to think of them as the corresponding classical quantities, quantized with the addition of ${\displaystyle \hbar }$ and ${\displaystyle i}$. If you look at these cases long enough, you'll eventually start to see a pattern.

Let's take the example of linear momentum, ${\displaystyle p}$. Its operator, ${\textstyle {\hat {p}}}$, is a vector equal to ${\textstyle -i\hbar \nabla }$. While you can treat the whole vector in three dimensions, the gradient lets us treat each component separately, so let's simplify the problem and look only at the x-component of this vector.

${\displaystyle {\hat {p}}_{x}=-i\hbar \ {\partial \over \partial x}}$
Applying this operator to some function, ${\displaystyle \phi }$, gives:
${\displaystyle {\hat {p}}_{x}\phi (x)=-i\hbar \ {\partial \over \partial x}\ \phi (x)=p_{x}\phi }$

One solution to this differential equation is a planewave:

${\displaystyle \phi =Ae^{ikx}=A[\cos(kx)+i\sin(kx)]}$

The solution is just a planewave with wave number, ${\displaystyle k}$. ${\textstyle \left(k={2\pi \over \lambda }\right)}$

${\displaystyle \underbrace {-i\hbar {\partial \over \partial x}} _{{\hat {p}}_{x}}*\ \underbrace {\left(Ae^{ikx}\right)} _{\phi }\ =\ \underbrace {-i\hbar (ik)} _{p_{x}}*\ \underbrace {Ae^{ikx}} _{\phi }}$
${\displaystyle p_{x}=\hbar k;\qquad \phi =Ae^{ikx}}$
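The eigenvalue relation can be checked symbolically. This is a quick sketch (not from the text) confirming that the plane wave ${\textstyle Ae^{ikx}}$ is an eigenfunction of ${\textstyle {\hat {p}}_{x}=-i\hbar \,\partial /\partial x}$ with eigenvalue ${\textstyle \hbar k}$:

```python
# Apply the momentum operator to a plane wave and recover the eigenvalue.
import sympy as sp

x = sp.symbols('x', real=True)
k, hbar, A = sp.symbols('k hbar A', positive=True)

phi = A * sp.exp(sp.I * k * x)           # plane wave
p_phi = -sp.I * hbar * sp.diff(phi, x)   # p̂_x φ = -iħ ∂φ/∂x

eigenvalue = sp.simplify(p_phi / phi)
print(eigenvalue)                        # hbar*k
```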

This isn't very exciting on its own, as ${\displaystyle k}$ and ${\displaystyle p_{x}}$ can take any value, so nothing looks "quantized". Physically, this represents a free particle (i.e. a particle alone in an infinite vacuum); the quantization comes from the boundary conditions we apply.

Application of Boundary Conditions

<FIGURE> "Born-von Karman Boundary Conditions" (These boundary conditions could be pictured as a box or as a ring.)

Let's apply periodic boundary conditions (PBC) called "Born-von Karman Boundary Conditions". With this we are essentially putting the particle in a one-dimensional box where it is free to move within the box, but once it leaves the box it loops back around in space and reenters the box from the other side. The box has some size, ${\displaystyle L}$, which gives us the quantization. This concept can also be pictured as a ring with radius ${\textstyle R={L \over 2\pi }}$.

These boundary conditions restrict the solutions, because the solutions must match at these boundaries. Thus:

{\displaystyle {\begin{aligned}\phi (0)&=\phi (L)\\Ae^{ik\cdot 0}&=Ae^{ikL}\end{aligned}}}
This isn't obviously solvable, so we substitute in sine and cosine via Euler's formula, as in the planewave equation above, which gives:
${\displaystyle \underbrace {\cos(k\cdot 0)} _{=1}+\underbrace {i\sin(k\cdot 0)} _{=0}=\underbrace {\cos(kL)} _{must\ be\ 1}+\underbrace {i\sin(kL)} _{must\ be\ 0}}$
Since the right-hand side must equal 1, we can conclude that ${\textstyle kL=0,2\pi ,4\pi ,\dots }$, that is, ${\textstyle kL=2\pi n}$ for integer ${\displaystyle n}$. Following this logic:
${\displaystyle k={n2\pi \over L};\qquad \phi (x)=Ae^{i{2\pi n \over L}x}}$

Now we have a quantized solution. Going back to the idea of the ring boundary condition, we recover the de Broglie hypothesis from Chapter 1 (${\textstyle p=\hbar k}$), which shows that when Planck initially quantized particles he was thinking of a periodic situation. Additionally, we can recover the Bohr model of the atom by combining these two concepts.
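The quantized wave numbers are easy to tabulate numerically. This is an illustrative sketch (the box size of 1 nm is an assumption, not from the text) of ${\textstyle k=2\pi n/L}$ and the corresponding momenta and wavelengths:

```python
# Allowed wave numbers under Born-von Karman boundary conditions, k = 2πn/L.
import numpy as np

hbar = 1.054571817e-34          # reduced Planck constant, J*s
L = 1e-9                        # box length: 1 nm (hypothetical choice)
n = np.arange(1, 5)             # quantum numbers

k = 2 * np.pi * n / L           # quantized wave numbers
p = hbar * k                    # de Broglie momentum, p = ħk
lam = 2 * np.pi / k             # wavelength; note n*λ = L

print(np.allclose(n * lam, L))  # True
```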

${\displaystyle k={2\pi \over \lambda }={n2\pi \over L}\quad \longrightarrow \quad L=n\lambda =2\pi R\quad \longrightarrow \quad n\lambda =2\pi R}$

<FIGURE> "Bohr Atom Model from de Broglie Equations" (Description)

Effect of Boundary Conditions

This is what makes nanoscience interesting! When the dimensions of a structure are small enough they affect the quantization. If we can control the dimensionality at a nanoscale, we can control the quantum nature of electrons.

Another well-defined observable is energy. In classical mechanics there are several ways to formulate the equations of motion (Newtonian, Lagrangian, Hamiltonian). I'm not going to talk about these, but you should know that the quantum mechanical formalism parallels the classical Hamiltonian formalism. For systems where the kinetic energy depends on momentum and the potential energy depends on position, the Hamiltonian operator takes the simple form:

${\displaystyle {\hat {H}}={\hat {T}}+{\hat {V}}}$, where ${\displaystyle {\hat {T}}}$ is the kinetic energy and ${\displaystyle {\hat {V}}}$ is the potential energy.

For now we are going to talk about particles in a vacuum, which sets the potential energy (${\displaystyle {\hat {V}}}$) to zero, leaving only the kinetic energy (${\displaystyle {\hat {T}}}$). We can take the classical expression for kinetic energy, ${\textstyle {p^{2} \over 2m}}$, and substitute in our momentum operator, ${\displaystyle -i\hbar \nabla }$, to get an expression for ${\displaystyle {\hat {T}}}$ in terms of the Laplacian operator, ${\displaystyle \nabla ^{2}}$.

{\displaystyle {\begin{aligned}\\{\hat {V}}&=V(r)=0\quad (for\ now)\\{\hat {T}}&={{\hat {p}}^{2} \over 2m}={1 \over 2m}(-i\hbar \nabla )^{2}={-\hbar ^{2} \over 2m}\nabla ^{2}\end{aligned}}}

Simplification of nabla^2:

{\displaystyle {\begin{aligned}\nabla ^{2}&=\nabla \cdot \nabla \\&=\left\langle {\partial \over \partial x},{\partial \over \partial y},{\partial \over \partial z}\right\rangle \cdot \left\langle {\partial \over \partial x},{\partial \over \partial y},{\partial \over \partial z}\right\rangle \\&={\partial ^{2} \over \partial x^{2}}+{\partial ^{2} \over \partial y^{2}}+{\partial ^{2} \over \partial z^{2}}\end{aligned}}}
Once again, we can reduce this to a one-dimensional problem by keeping only the x-term of the expanded form of ${\displaystyle \nabla ^{2}}$.
${\displaystyle {\hat {H}}={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}}$

Since the operator takes a second derivative, it returns the curvature of the function: the kinetic energy operator is proportional to a function's curvature. Thus, solutions with tighter curvature have higher energies than slowly varying functions.

Ideally, we want to solve: ${\displaystyle {\hat {H}}\phi =E\phi }$ (Time-Independent Schrodinger Equation)

${\displaystyle E\phi ={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\phi }$

What solves this? Planewaves! ${\textstyle \left(\phi =Ae^{ikx}+Be^{-ikx}\right)}$ As it turns out, planewaves are a common solution in quantum mechanics!

{\displaystyle {\begin{aligned}E\phi &={-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\left[Ae^{ikx}+Be^{-ikx}\right]\\&={-\hbar ^{2} \over 2m}{\partial \over \partial x}\left[Aike^{ikx}+B(-ik)e^{-ikx}\right]\\&={-\hbar ^{2} \over 2m}\left[A(ik)^{2}e^{ikx}+B(ik)^{2}e^{-ikx}\right]\\&=\underbrace {{-\hbar ^{2} \over 2m}\left(-k^{2}\right)} _{E}\ \underbrace {\left[Ae^{ikx}+Be^{-ikx}\right]} _{\phi }\end{aligned}}}

Here we can see that the eigenvalue is ${\displaystyle {-\hbar ^{2} \over 2m}\left(-k^{2}\right)={\hbar ^{2}k^{2} \over 2m}}$, so breaking up the equation gives us:

${\displaystyle E={\hbar ^{2}k^{2} \over 2m};\qquad \phi =Ae^{ikx}+Be^{-ikx}}$

These variables are consistent with our earlier finding that:

${\displaystyle p_{x}=\hbar k;\qquad E={1 \over 2}mv^{2}={p^{2} \over 2m}={(\hbar k)^{2} \over 2m}}$
Note: Our earlier solution ${\textstyle \left(\phi =Ae^{ikx}\right)}$ had one term because the parent equation contained a single derivative, while our current solution has two terms because the parent equation contains a second derivative.
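The energy eigenvalue derived above can be verified symbolically. A sketch (not from the text) applying ${\textstyle {\hat {H}}}$ to the two-term planewave and recovering ${\textstyle E=\hbar ^{2}k^{2}/2m}$:

```python
# Check that φ = Ae^{ikx} + Be^{-ikx} solves Ĥφ = Eφ.
import sympy as sp

x = sp.symbols('x', real=True)
k, hbar, m, A, B = sp.symbols('k hbar m A B', positive=True)

phi = A*sp.exp(sp.I*k*x) + B*sp.exp(-sp.I*k*x)
H_phi = -hbar**2/(2*m) * sp.diff(phi, x, 2)   # Ĥ = -ħ²/2m ∂²/∂x²

E = sp.simplify(H_phi / phi)
print(E)   # the energy eigenvalue, ħ²k²/(2m)
```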

Here, ${\displaystyle \hbar k}$ tells us the magnitude of the momentum, and the ${\displaystyle A}$ and ${\displaystyle B}$ coefficients tell us whether the wave travels to the right or to the left. As you may have guessed, the energy and the momentum are compatible with each other: we can know them both at the same time. In quantum mechanics, if operators "commute" then they share eigenfunctions. Notice that if ${\displaystyle A}$ or ${\displaystyle B}$ is zero, the eigenfunctions of energy are also eigenfunctions of momentum. Generally, two operators ${\displaystyle {\hat {A}}}$ and ${\displaystyle {\hat {B}}}$ commute if:

${\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}=0}$
For example, let's look at momentum and energy, when ${\displaystyle f(x)}$ is some test function:
{\displaystyle {\begin{aligned}\ \left[{\hat {p_{x}}},{\hat {H}}\right]f(x)&=\left[-i\hbar {\partial \over \partial x},{-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\right]f(x)\\&=\left[\left(-i\hbar {\partial \over \partial x}\right)\left({-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\right)-\left({-\hbar ^{2} \over 2m}{\partial ^{2} \over \partial x^{2}}\right)\left(-i\hbar {\partial \over \partial x}\right)\right]f(x)\\&=-i\hbar {-\hbar ^{2} \over 2m}{\partial \over \partial x}{\partial ^{2} \over \partial x^{2}}f(x)-{-\hbar ^{2} \over 2m}(-i\hbar ){\partial ^{2} \over \partial x^{2}}{\partial \over \partial x}f(x)\\&=i{\hbar ^{3} \over 2m}f'''(x)-i{\hbar ^{3} \over 2m}f'''(x)=0\end{aligned}}}

Since ${\displaystyle [{\hat {p_{x}}},{\hat {H}}]=0}$, ${\displaystyle {\hat {p_{x}}}}$ and ${\displaystyle {\hat {H}}}$ commute.
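The same commutator can be checked symbolically. A sketch (not from the text) applying both operator orderings to an arbitrary test function:

```python
# Verify that p̂_x and Ĥ commute: [p̂_x, Ĥ]f = 0 for arbitrary f(x).
import sympy as sp

x = sp.symbols('x', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
f = sp.Function('f')(x)                          # arbitrary test function

p = lambda g: -sp.I*hbar*sp.diff(g, x)           # p̂_x = -iħ ∂/∂x
H = lambda g: -hbar**2/(2*m)*sp.diff(g, x, 2)    # Ĥ = -ħ²/2m ∂²/∂x²

commutator = p(H(f)) - H(p(f))
print(sp.simplify(commutator))                   # 0
```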

Let's try a different operator. This time, let's compare position and momentum.

{\displaystyle {\begin{aligned}\ \left[{\hat {p_{x}}},{\hat {x}}\right]f(x)&=\left[-i\hbar {\partial \over \partial x},x\right]f(x)\\&=-i\hbar {\partial \over \partial x}xf(x)-x(-i\hbar ){\partial \over \partial x}f(x)\\&=(-i\hbar )\left[{\partial \over \partial x}xf(x)-x{\partial \over \partial x}f(x)\right]\\&=(-i\hbar )\left[x{\partial \over \partial x}f(x)+f(x){\partial \over \partial x}x-x{\partial \over \partial x}f(x)\right]\\&=-i\hbar f(x)\end{aligned}}}

Here, ${\displaystyle [{\hat {p_{x}}},{\hat {x}}]=-i\hbar \neq 0}$, meaning that ${\displaystyle {\hat {p_{x}}}}$ and ${\displaystyle {\hat {x}}}$ do not commute. Momentum and position therefore do not share eigenfunctions. As it so happens, this is all tied to observation and the fundamental uncertainty in our knowledge.
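This commutator, too, can be verified symbolically. A sketch (not from the text) showing that ${\displaystyle [{\hat {p_{x}}},{\hat {x}}]f=-i\hbar f}$:

```python
# Verify [p̂_x, x̂]f = -iħ f for an arbitrary test function f(x).
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x)

p = lambda g: -sp.I*hbar*sp.diff(g, x)   # momentum operator
X = lambda g: x*g                        # position operator

commutator = sp.simplify(p(X(f)) - X(p(f)))
print(commutator)                        # -I*hbar*f(x)
```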

Recall the Heisenberg Uncertainty Principle:

${\displaystyle \Delta x\Delta p\geq {\hbar \over 2}}$
When operators commute, we say that the observables associated with them are "compatible", meaning they can be measured simultaneously to arbitrary precision. (This is related to the Schwarz inequality.) Without proof, I will tell you that:

If ${\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {C}}\neq 0}$, then ${\displaystyle \Delta A\Delta B\geq {1 \over 2}|\langle c\rangle |}$, where ${\displaystyle \langle c\rangle }$ is the expectation value of ${\displaystyle {\hat {C}}}$.

So, for ${\displaystyle \left[{\hat {x}},{\hat {p_{x}}}\right]=i\hbar }$, we get ${\displaystyle \Delta x\Delta p_{x}\geq {1 \over 2}\hbar }$. *see B&J p.215
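A concrete check (not from the text, and the Gaussian wave function here is an assumed example): a Gaussian wave packet saturates this bound, giving exactly ${\displaystyle \Delta x\Delta p_{x}={\hbar \over 2}}$.

```python
# Compute Δx·Δp for a normalized Gaussian wave packet with <x> = <p> = 0.
import sympy as sp

x = sp.symbols('x', real=True)
sigma, hbar = sp.symbols('sigma hbar', positive=True)

psi = (2*sp.pi*sigma**2)**sp.Rational(-1, 4) * sp.exp(-x**2/(4*sigma**2))

norm = sp.integrate(psi**2, (x, -sp.oo, sp.oo))                # 1
dx = sp.sqrt(sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo)))  # Δx = σ
dp = sp.sqrt(sp.integrate(psi * (-hbar**2) * sp.diff(psi, x, 2),
                          (x, -sp.oo, sp.oo)))                 # Δp = ħ/(2σ)

print(sp.simplify(dx * dp))    # hbar/2
```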

This is a BIG DEAL! It means that it is impossible to simultaneously know certain pairs of observables. (Remember our thought experiment from Chapter 2?) What's more, this is purely a quantum effect. Consider momentum again: if we precisely measure the momentum to be ${\displaystyle \hbar k}$, then the particle's wave function is ${\displaystyle \phi _{k}}$.

Remember in the probabilistic interpretation:

{\displaystyle {\begin{aligned}\psi ^{*}\psi &=P(x)\\A^{*}e^{-ikx}Ae^{ikx}&=|A|^{2}=P(x)\end{aligned}}}
<FIGURE> "Incompatible Observables" (Constant value ${\displaystyle |A|^{2}}$)

But ${\displaystyle A}$ is just the normalization constant, so the probability distribution appears as shown in the figure. If we know ${\displaystyle p_{x}}$ precisely, then we know nothing about ${\displaystyle x}$! There is equal probability of finding the particle anywhere in the range ${\displaystyle -\infty <x<\infty }$.

Thus, ${\displaystyle x}$ and ${\displaystyle p_{x}}$ are incompatible observables.

Postulate II

A measurement of observable ${\displaystyle A}$ that yields value ${\displaystyle a}$ leaves the system in state ${\displaystyle \phi _{a}}$.

${\displaystyle {\hat {A}}\phi _{a}=a\phi _{a}}$

We say that the measurement "collapses the wave function" to ${\displaystyle \phi _{a}}$, where ${\displaystyle \phi _{a}}$ is the eigenfunction of the particular value measured. Immediate subsequent measurements will thus yield the same value ${\displaystyle a}$, as the wave function remains collapsed to ${\displaystyle \phi _{a}}$ until another property is measured, as seen in Chapter 2.

What is important here? Before the initial measurement, the expected outcome is given statistically by ${\displaystyle \psi }$, a superposition of possible states. The act of measuring leaves one particular state, ${\displaystyle \phi _{a}}$, for subsequent measurements. Note that this is very similar to solving partial differential equations: the general solution is a linear superposition of all possible solutions, which is analogous to what we see here.

Postulate III

There exists a state function, called the "wave function" that represents the state of the system at any given instant, and all the information we could know about the system is contained in this state function, ${\displaystyle \Psi }$, which is continuous and differentiable.

For any observable, ${\displaystyle C}$, we can find the expectation value for measuring ${\displaystyle C}$ from ${\displaystyle \Psi }$.

${\displaystyle \langle c\rangle =\int \Psi ^{*}{\hat {C}}\ \Psi \ dr}$
Here ${\displaystyle \Psi ^{*}}$ is the complex conjugate of ${\displaystyle \Psi \rightarrow (a+bi)^{*}=a-bi}$, and ${\displaystyle \int dr}$ is an abbreviation for ${\displaystyle \int \int \int dx\ dy\ dz}$

Review of Statistics (and the meaning of the "expectation value", ${\displaystyle \langle c\rangle }$)

In statistics, the mean ${\displaystyle {\bar {c}}}$ is the expectation value ${\displaystyle \langle c\rangle }$, and when all goes well in sampling theory:

${\displaystyle {\bar {c}}={1 \over N}\sum _{i=1}^{N}c_{i}}$

If you know all the possible outcomes and their probabilities, you can essentially write the state function for the system. Let's say I have a bag with 5 pennies, 3 dimes, and 2 quarters. The expected value of a coin drawn from the bag is:

{\displaystyle {\begin{aligned}{\bar {c}}&={1 \over N}\sum _{i=1}^{N}c_{i}={1 \over 10}(1+1+1+1+1+10+10+10+25+25)\\&={85 \over 10}=8.5\\&=\left(1\times {5 \over 10}\right)+\left(10\times {3 \over 10}\right)+\left(25\times {2 \over 10}\right)\end{aligned}}}
${\displaystyle {\bar {c}}=\sum _{all\ c}c_{i}P(c_{i})}$
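The coin-bag average can be computed both ways, as a quick sketch of the example above:

```python
# The coin-bag expectation value: direct average vs probability-weighted sum.
counts = {1: 5, 10: 3, 25: 2}      # coin value (cents) -> how many in the bag
N = sum(counts.values())           # 10 coins total

# direct average over all draws
mean_direct = sum(value * n for value, n in counts.items()) / N

# probability-weighted form: c̄ = Σ c_i P(c_i)
mean_weighted = sum(value * (n / N) for value, n in counts.items())

print(mean_direct, mean_weighted)  # 8.5 8.5
```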

For a continuous probability distribution:

${\displaystyle {\bar {c}}=\int cP(c)\ dc}$

State Functions in Quantum Mechanics

Applying this statistical expectation value to our quantum state function gives us:

${\displaystyle \langle c\rangle =\int \Psi ^{*}\underbrace {{\hat {c}}\Psi } _{=c\Psi }\ dr=\int cP(c)dc}$

Where, since ${\displaystyle c}$ is just a number we can simplify ${\displaystyle \Psi ^{*}{\hat {c}}\ \Psi }$ to ${\displaystyle cP(c)}$.

Postulate IV

The state function, ${\displaystyle \Psi }$, develops according to the equation:

${\displaystyle i\hbar {\partial \over \partial t}\Psi (r,t)={\hat {H}}\Psi (r,t)}$

This is the time-dependent Schrodinger Equation, valid for non-relativistic systems. (Note that this equation is a postulate; there is no proof for it.) To account for relativity, we either correct our solutions by perturbation methods or instead solve the Dirac Equation:

${\displaystyle \left(\beta mc^{2}+\sum _{k=1}^{3}\alpha _{k}p_{k}c\right)\psi (r,t)=i\hbar \ {\partial \psi (r,t) \over \partial t}}$

These four postulates give us the basis for everything we do in Quantum Mechanics, and the reason they work out is tied to linear Hermitian operators. The solution to the eigenvalue equation has special properties, wherein the eigenfunctions are orthonormal. For an arbitrary system with bound states:

${\displaystyle {\hat {O}}\psi _{n}=o_{n}\psi _{n}}$; where ${\displaystyle n=0,1,2,...}$, and ${\displaystyle o_{n}}$ is the ${\displaystyle n^{th}}$ eigenvalue corresponding to the ${\displaystyle n^{th}}$ eigenfunction ${\displaystyle \psi _{n}}$.

Orthonormality

Orthonormal functions satisfy:

${\displaystyle \int \psi _{n}^{*}\psi _{m}\ dr=\delta _{nm}\quad {\begin{cases}=1\quad (if\ \ n=m)\\=0\quad (otherwise)\end{cases}}}$

Here, ${\displaystyle \delta _{nm}}$ is the Kronecker delta. This property is a consequence of Sturm-Liouville theory: the set of eigenfunctions, ${\displaystyle \{\psi _{n}\}}$, spans the Hilbert space (sometimes only a sub-space), the function-space where ${\displaystyle \Psi }$ lives. Hilbert space can be thought of as analogous to Euclidean space, where vectors live. Euclidean space has some set of vectors ${\displaystyle \{q_{i}\}}$; if that set is orthonormal and spans the space, then it can act as a basis for all other vectors in that space, and we can write any arbitrary vector ${\displaystyle v}$ as a sum over the ${\displaystyle \{q_{i}\}}$.

${\displaystyle v=\sum _{i}c_{i}\ q_{i}}$

Those who have taken linear algebra might also remember a number of rules about eigenvalues, eigenvectors, and so on. They all apply to what you're going to see here; in fact, there is a matrix notation that directly maps all of quantum mechanics onto sets of matrices and vectors.

Hilbert Space

With this orthogonal property, we can express ${\displaystyle \Psi }$ using ${\displaystyle \psi _{n}}$ as a basis.

${\displaystyle \Psi =\sum _{n}c_{n}\psi _{n}}$

Just as in Euclidean space, the ${\displaystyle c_{n}}$ are the projections of ${\displaystyle \Psi }$ onto ${\displaystyle \psi _{n}}$. The value of this is that we can solve for ${\displaystyle c_{n}}$ by taking the equivalent of an inner product (dot product):

{\displaystyle {\begin{aligned}c_{i}&=\int dr\ \psi _{i}^{*}\ \Psi \\&=\int dr\ \psi _{i}^{*}\ \sum _{n}c_{n}\psi _{n}\\&=\int dr\ (\psi _{i}^{*}c_{1}\psi _{1}+\psi _{i}^{*}c_{2}\psi _{2}+\cdots +\psi _{i}^{*}c_{i}\psi _{i}+\cdots )\end{aligned}}}

By orthonormality, every term integrates to zero except the ${\displaystyle \psi _{i}^{*}c_{i}\psi _{i}}$ term, which integrates to ${\displaystyle c_{i}}$.

The fact that we have an orthonormal basis that spans the space is very important: it lets us write the wave function in Hilbert space and describe each coefficient as the projection of the wave function onto that particular eigenfunction.
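The projection idea can be sketched numerically. This example (not from the text) uses particle-in-a-box eigenfunctions as an assumed orthonormal basis and recovers hypothetical coefficients by integration:

```python
# Expansion coefficients as projections: c_n = ∫ ψ_n* Ψ dx.
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 20001)

def psi(n):
    """Orthonormal basis function sqrt(2/L)*sin(n*pi*x/L) on [0, L]."""
    return np.sqrt(2.0/L) * np.sin(n*np.pi*x/L)

# build a state with chosen (hypothetical) coefficients
c1, c2 = 0.8, 0.6                 # |c1|^2 + |c2|^2 = 1
Psi = c1*psi(1) + c2*psi(2)

# recover each coefficient by projecting onto the basis
dx = x[1] - x[0]
c1_rec = np.sum(psi(1) * Psi) * dx
c2_rec = np.sum(psi(2) * Psi) * dx
print(round(c1_rec, 3), round(c2_rec, 3))   # 0.8 0.6
```

Cross terms such as ${\displaystyle \int \psi _{1}^{*}\psi _{2}\,dx}$ vanish by orthonormality, which is why each projection picks out exactly one coefficient.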

Think back to expectation values, where ${\displaystyle {\hat {O}}\psi _{n}=o_{n}\psi _{n}}$. Solving for each term:

{\displaystyle {\begin{aligned}\langle o\rangle =\int \Psi ^{*}{\hat {O}}\Psi &=\int (c_{1}^{*}\psi _{1}^{*}+c_{2}^{*}\psi _{2}^{*}+\cdots ){\hat {O}}(c_{1}\psi _{1}+c_{2}\psi _{2}+\cdots )\\&=\sum _{i,j}\int c_{i}^{*}\psi _{i}^{*}\ {\hat {O}}\ c_{j}\psi _{j}\\&=\sum _{i,j}c_{i}^{*}c_{j}\int \psi _{i}^{*}\underbrace {{\hat {O}}\psi _{j}} _{o_{j}\psi _{j}}\\&=\sum _{i,j}c_{i}^{*}c_{j}o_{j}\int \psi _{i}^{*}\psi _{j}\\&=\sum _{i,j}c_{i}^{*}c_{j}o_{j}\ \delta _{ij}=\sum _{i}c_{i}^{*}c_{i}o_{i}\end{aligned}}}
Thus, ${\displaystyle \langle o\rangle ={\bar {o}}=\sum _{all\ o}o_{i}p(o_{i})}$

Therefore the probability of measuring a particular value is ${\displaystyle p(o_{i})=c_{i}^{*}c_{i}}$, given by the coefficient which is the projection of the wave function onto that particular eigenfunction. If you think about this physically in vector space, it kind of makes sense! We're saying that if I have a vector that's mostly in the 1 direction, then it's going to have a behavior that's also "mostly" in the 1 direction. There is still a probability of measuring it in the other directions as well. So, when we talk about superposition, it's as a linear sum of eigenfunctions. Remembering that with each eigenfunction there is a coefficient which is the projection of the wave function onto that eigenfunction, this tells us the probability of measuring any particular value.

We have some operator, ${\textstyle {\hat {S}}_{z}}$, which operates on some function, ${\textstyle \chi }$, and returns the value ${\displaystyle s_{z}\chi }$. This system has only two solutions (in the case of the silver atom):

{\displaystyle {\begin{aligned}&s_{z}=+{\hbar \over 2},\ \chi _{\uparrow }\\&s_{z}={-\hbar \over 2},\ \chi _{\downarrow }\\\end{aligned}}}

When we had that initial beam of atoms passing through vacuum, we initially knew nothing about the state; it was randomized.

{\displaystyle {\begin{aligned}\Psi &={1 \over {\sqrt {2}}}\chi _{\uparrow }+{i \over {\sqrt {2}}}\chi _{\downarrow }\\P\left({\hbar \over 2}\right)&={1 \over {\sqrt {2}}}\cdot {1 \over {\sqrt {2}}}=0.50\\P\left({-\hbar \over 2}\right)&={-i \over {\sqrt {2}}}\cdot {i \over {\sqrt {2}}}=0.50\end{aligned}}}

This says that the probability of measuring each outcome is 50/50! Note that the wave function is normalized: the probabilities sum to one. If this were not true, we would have to scale the vector until it is normalized. Now let's say we make a measurement and find an "up" spin, meaning that ${\displaystyle \Psi }$ has collapsed to ${\displaystyle \chi _{\uparrow }}$. After this measurement, the probability of finding "up" again is one and the probability of finding "down" is zero.
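The spin state and its measurement probabilities can be sketched numerically (this two-component vector representation is an assumption, not from the text):

```python
# Spin-1/2 state as a two-component complex vector; probabilities are |c|^2.
import numpy as np

up = np.array([1.0, 0.0], dtype=complex)     # χ_up
down = np.array([0.0, 1.0], dtype=complex)   # χ_down

Psi = up/np.sqrt(2) + 1j*down/np.sqrt(2)

P_up = abs(np.vdot(up, Psi))**2      # |c_up|^2 ≈ 0.5
P_down = abs(np.vdot(down, Psi))**2  # |c_down|^2 ≈ 0.5

# after measuring "up", the state collapses to χ_up:
Psi = up
P_up_again = abs(np.vdot(up, Psi))**2   # now 1.0
```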

What about ${\displaystyle S_{y}}$?

${\displaystyle {\hat {S_{y}}}\zeta =s_{y}\zeta }$; ${\displaystyle \left\{{s_{y}=+{\hbar \over 2},\quad \zeta _{\uparrow } \atop s_{y}=-{\hbar \over 2},\quad \zeta _{\downarrow }}\right\}}$

This system has two possible results, analogous to the ones shown with ${\textstyle {\hat {S}}_{z}}$. We can write both systems together as:

${\displaystyle \alpha _{1}\chi _{\uparrow }+\alpha _{2}\chi _{\downarrow }=\Psi =\beta _{1}\zeta _{\uparrow }+\beta _{2}\zeta _{\downarrow }}$

The sets ${\displaystyle \{\chi \}}$ and ${\displaystyle \{\zeta \}}$ are incompatible. When we measure one observable, the state vector snaps to one of its basis functions; when we measure the other, it snaps to one of that observable's basis functions.

Most importantly, we can collapse ${\displaystyle \Psi }$ into either ${\displaystyle \{\chi \}}$ or ${\displaystyle \{\zeta \}}$, but not both. These two operators are incompatible because they don't commute, and operators that don't commute form different basis sets within Hilbert space. We can write the two expansions side by side, as each is still equal to the wave function, but information about one set tells us nothing about the other set.

The collapse of ${\displaystyle \Psi }$ to ${\displaystyle \zeta _{\uparrow \downarrow }}$ or to ${\displaystyle \chi _{\uparrow \downarrow }}$ is unique to quantum mechanics and is why we can't simultaneously know these two observables!