Quantum Mechanics

Table of Contents

Reading

  • Quantum Physics - Stephen Gasiorowicz
  • "Quantum Mechanics", Messiah, p. 463 Appendix A: Distributions & Fourier Transform

Progress

  • Note taken on [2017-09-20 Wed 21:40]
    p. 115

Definitions

Words

plane wave
multi-dimensional wave
wave packet
superposition of plane waves
hilbert space
A Banach space with the addition of an inner product.
banach space
A normed space that is complete with respect to the metric induced by its norm, in the sense that every Cauchy sequence converges to a limit within the space.
isotropic
Independent of orientation.

Bound / unbound wave equations

Bound energy

Restricts us to positive energies $E > 0$

Unbound energy

Allows both positive and negative energies $E$

Schrödinger equation

\begin{equation*}
i \hbar \frac{\partial}{\partial t} \ket{\Psi, t} = \hat{H}_0 \ket{\Psi, t}
\end{equation*}

Operators

Position operator

\begin{equation*}
\hat{x} \Psi(x, t) = x \Psi(x, t)
\end{equation*}

Russell-Saunders notation

States that arise in coupling orbital angular momentum $\ell$ and spin $s$ to give total angular momentum $j$ are denoted:

\begin{equation*}
n \ {}^{2s + 1} l_j
\end{equation*}

where the $n$ is "optional" (not always included).

Further, remember that we often use the following letters to denote the different values of the angular momentum $\ell$:

\begin{equation*}
l = s, p, d, f, \dots
\end{equation*}

Stuff

Zero point energy

Heisenberg's uncertainty principle tells us that a particle cannot sit motionless at the bottom of its potential well: that would mean both a definite position ($\Delta x = 0$) and a definite (zero) momentum ($\Delta p = 0$), violating the uncertainty principle. Therefore, the lowest-energy state of the system (called the ground state) must have position and momentum distributions which at all times satisfy the uncertainty principle.

Near the bottom of a potential well, the Hamiltonian of a general system (the quantum-mechanical operator giving a system its energy) can be approximated as a quantum harmonic oscillator:

\begin{equation*}
  \hat{H} = V_0 + \frac{1}{2} k ( \hat{x} - x_0 )^2 + \frac{1}{2m} \hat{p}^2
\end{equation*}

Quantum Harmonic Oscillator

\begin{equation*}
\hat{H} = \Bigg[ - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \frac{1}{2} m \omega^2 x^2 \Bigg]
\end{equation*}

Thus, the eigenvalue function of the Hamiltonian becomes

\begin{equation*}
\Bigg[ - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \frac{1}{2} m \omega^2 x^2 \Bigg] \psi = E \psi
\end{equation*}

As it turns out, this differential equation has the eigenfunctions:

\begin{equation*}
u_n(x) = c_n H_n(\alpha x) e^{- \alpha^2 x^2 / 2 }
\end{equation*}

where $H_n$ denotes the n-th Hermite polynomial, with $\alpha = \sqrt{\frac{m \omega}{\hbar}}$.
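We can sanity-check these eigenfunctions with a quick sympy computation (a sketch in units where $\alpha = 1$): the $u_n$ should be orthogonal, with $\int H_m H_n e^{-x^2} \, dx = \sqrt{\pi} \, 2^n n! \, \delta_{mn}$.

```python
import sympy as sp

# Overlap of u_m and u_n (with alpha = 1, i.e. units where m*omega/hbar = 1):
# integral of H_m(x) H_n(x) e^{-x^2} over the real line.
x = sp.symbols('x')

def overlap(m_, n_):
    return sp.integrate(sp.hermite(m_, x) * sp.hermite(n_, x) * sp.exp(-x**2),
                        (x, -sp.oo, sp.oo))

# Different n: orthogonal. Same n: sqrt(pi) * 2^n * n!.
assert overlap(0, 1) == 0 and overlap(0, 2) == 0 and overlap(1, 2) == 0
assert overlap(2, 2) == sp.sqrt(sp.pi) * 2**2 * sp.factorial(2)
```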

Further, using the raising and lowering operators discussed in Angular momentum - reloaded we can rewrite the solutions as

\begin{equation*}
u_n(x) = \frac{1}{\sqrt{n!}} \big( \hat{a}^\dagger \big)^n u_0 (x)
\end{equation*}

Annihilation / lowering operator:

\begin{equation*}
\hat{a} = \sqrt{\frac{m \omega}{2 \hbar}} \hat{X} + \frac{i}{\sqrt{2 m \hbar \omega}} \hat{P}
\end{equation*}

Creation / Raising operator:

\begin{equation*}
\hat{a}^{\dagger} = \sqrt{\frac{m \omega}{2 \hbar}} \hat{X} - \frac{i}{\sqrt{2 m \hbar \omega}} \hat{P}
\end{equation*}

Solution

There are two ways of going about this:

  1. Solving using the ladder operators, though you still somehow need to obtain the ground state
  2. Solving the differential equation directly using the good ole' method of variation of parameters and Sturm-Liouville theory
Solving using ladder-operators
  • Notation
    • $\ket{n}$ is the eigenfunction corresponding to the n-th eigenvalue $E_n$
    • $\hat{\xi} = \sqrt{\frac{m \omega}{2 \hbar}} \hat{X}$, which is dimensionless
    • $\hat{\eta} = \frac{1}{\sqrt{2 m \hbar \omega}} \hat{P}$, which is dimensionless
    • $\hat{a} = \hat{\xi} + i \hat{\eta}$
    • $\hat{a}^\dagger = \hat{\xi} - i \hat{\eta}$
  • Stuff

    The Hamiltonian in this case is given by

    \begin{equation*}
\hat{H} = \frac{\hat{P}^2}{2m} + \frac{1}{2} m \omega^2 \hat{X}^2
\end{equation*}

    with the TISE

    \begin{equation*}
\hat{H} \ket{n} = E_n \ket{n}
\end{equation*}

    For convenience we rewrite the Hamiltonian

    \begin{equation*}
\begin{split}
  \hat{H} &= \hbar \omega \Bigg[ \frac{\hat{P}^2}{2 m \hbar \omega} + \frac{m \omega}{2 \hbar} \hat{X}^2 \Bigg] \\
  &= \hbar \omega \Big[ \hat{\eta}^2 + \hat{\xi}^2 \Big]
\end{split}
\end{equation*}

    But $\hat{\xi}$ and $\hat{\eta}$ do not commute:

    \begin{equation*}
[ \hat{\xi}, \hat{\eta} ] = \frac{i}{2}
\end{equation*}

    So we instead use the operators $\hat{a}$ and $\hat{a}^\dagger$, which have the property

    \begin{equation*}
[ \hat{a}, \hat{a}^\dagger ] = 1
\end{equation*}
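As a quick numerical sanity check, we can represent $\hat{a}$ as a matrix in a truncated number basis and verify this commutator (the truncation necessarily breaks the identity in the last basis state):

```python
import numpy as np

# Matrix spot-check of [a, a†] = 1 in a truncated number basis of dimension N.
# a|n> = sqrt(n)|n-1>, so a has sqrt(1), ..., sqrt(N-1) on the superdiagonal.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T

commutator = a @ adag - adag @ a
# [a, a†] = 1 holds exactly except in the highest basis state, which is an
# artifact of truncating the infinite-dimensional space.
assert np.allclose(commutator[:-1, :-1], np.eye(N - 1))
```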

    Once again rewriting the Hamiltonian

    \begin{equation*}
\hat{H} = \frac{\hbar \omega}{2} \Big( \hat{a} \hat{a}^\dagger + \hat{a}^\dagger \hat{a} \Big) = \hbar \omega \Big( \hat{a}^\dagger \hat{a} + \frac{1}{2} \Big)
\end{equation*}

    Observe that

    \begin{equation*}
[ \hat{H}, \hat{a} ] = - \hbar \omega \ \hat{a}
\end{equation*}

    Consider the following operation:

    \begin{equation*}
\begin{split}
  \hat{H} \Big( \hat{a} \ket{n} \Big) &= \hat{a} \Big( \hat{H} \ket{n} \Big) - \hbar \omega \hat{a} \ket{n} \\
  &= \Big( E_n - \hbar \omega \Big) (\hat{a} \ket{n}) \\
  &= E_{n - 1} ( \hat{a} \ket{n} )
\end{split}
\end{equation*}

    where in the last line we used the fact that for a simple harmonic oscillator $E_n = \Big(n + \frac{1}{2}\Big) \hbar \omega$, so that $E_n - \hbar \omega = E_{n - 1}$.

    Thus,

    \begin{equation*}
\hat{a} \ket{n} = c_n \ket{n - 1}
\end{equation*}

    I.e. applying $\hat{a}$ to an eigenfunction $\ket{n}$ gives us (a multiple of) the eigenfunction of the $(n-1)$-th eigenstate.

    We call this operator the annihilation or lowering operator.

    We can do exactly the same for $\hat{a}^\dagger$ to observe that

    \begin{equation*}
\hat{a}^\dagger \ket{n} = d_n \ket{n+1}
\end{equation*}

    which is why we call $\hat{a}^\dagger$ the creation or raising operator.

    Buuut there's a problem: we could potentially "annihilate" down to a state with energy below zero! To fix this we simply define the ground state such that

    \begin{equation*}
\hat{a} \ket{0} = 0
\end{equation*}

    Thus,

    \begin{equation*}
\hat{H} \ket{0} = \hbar \omega \Bigg(\hat{a}^\dagger \hat{a} + \frac{1}{2} \Bigg) \ket{0} = \frac{\hbar \omega}{2} \ket{0}
\end{equation*}

    And all other higher energy states can then be constructed from this, by successive application of $\hat{a}^\dagger$.

    • Normalizing the lowering- and raising-operator
      \begin{equation*}
|c_n|^2 = \bra{n} \hat{a}^\dagger \hat{a} \ket{n} = n, \qquad \hat{H} \ket{n} = \hbar \omega \Big( n + \frac{1}{2} \Big) \ket{n} = \hbar \omega \Big( \hat{a}^\dagger \hat{a} + \frac{1}{2} \Big) \ket{n}
\end{equation*}

      Thus,

      \begin{equation*}
c_n = \sqrt{n}
\end{equation*}

      Doing the same for $\hat{a}^\dagger$, we get

      \begin{equation*}
d_n = \sqrt{n + 1}, \qquad \hat{a}^\dagger \ket{n} = \sqrt{n + 1} \ket{n + 1}
\end{equation*}
    • Solving for the ground-state

      We can find the proper solution for the ground-state by solving the differential equation

      \begin{equation*}
\hat{a} u_0 (x) = 0
\end{equation*}

      Substituting into $\hat{a}$ the expressions for $\hat{\xi}$ and $\hat{\eta}$ (with $\hat{P} = - i \hbar \frac{\partial}{\partial x}$) turns this into a first-order differential equation. Solving it, we get

      \begin{equation*}
u_0(x) = C \exp \Big( - \frac{m \omega}{2 \hbar} x^2 \Big)
\end{equation*}
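A quick symbolic check that this $u_0$ is indeed annihilated by $\hat{a}$ (a sketch with $\hat{P} = -i\hbar \, \partial / \partial x$ written out as a derivative, so $i \hat{\eta}$ becomes the derivative term below):

```python
import sympy as sp

# a u_0 = [sqrt(m w / 2 hbar) x + hbar / sqrt(2 m hbar w) d/dx] u_0 should be 0.
x = sp.symbols('x', real=True)
m, w, hbar, C = sp.symbols('m omega hbar C', positive=True)

u0 = C * sp.exp(-m * w * x**2 / (2 * hbar))
a_u0 = sp.sqrt(m * w / (2 * hbar)) * x * u0 \
       + hbar / sp.sqrt(2 * m * hbar * w) * sp.diff(u0, x)
a_u0 = sp.simplify(a_u0)
assert a_u0 == 0
```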
Solving the differential equation

We start off by rewriting the Schrödinger equation for the harmonic oscillator:

\begin{equation*}
\psi''(x) + \bigg( \frac{2mE}{\hbar^2} - \frac{m^2 \omega^2}{\hbar^2} x^2 \bigg) \psi(x) = 0
\end{equation*}

Letting $\alpha = \frac{m \omega}{\hbar}$, we have

\begin{equation*}
\psi''(x) + \bigg( \frac{2mE}{\hbar^2} - \alpha^2 x^2 \bigg) \psi(x) = 0
\end{equation*}

We'll require our solution to be normalizable, i.e. square-integrable. To simplify the equation we first make it dimensionless, by rescaling the position variable: letting $y = \sqrt{\alpha} x$, so that $\frac{d^2}{dx^2} = \alpha \frac{d^2}{dy^2}$, and dividing through by $\alpha$, we have

\begin{equation*}
\psi''(y) + \bigg( \frac{2E}{\hbar \omega} - y^2 \bigg) \psi(y) = 0
\end{equation*}

Note that $\omega$ is a parameter of the potential, not something we can just set; it now enters only through the dimensionless combination $\frac{2E}{\hbar \omega}$.

Letting $\beta = \frac{2E}{\hbar \omega}$, and renaming the rescaled variable back to $x$ for convenience, we then have

\begin{equation*}
\psi''(x) - x^2 \psi(x) = - \beta \psi(x)
\end{equation*}

Therefore we instead consider the simpler problem:

\begin{equation*}
\psi''(x) + (1 - x^2) \psi(x) = 0
\end{equation*}

where we've dropped the $\beta$, which we will reintroduce later through the use of the method of variation of parameters. This clearly has the solution

\begin{equation*}
\psi_0(x) = e^{-\frac{x^2}{2}}
\end{equation*}

Now, using method of variation of parameters, we suppose there exists some particular solution of the form:

\begin{equation*}
\psi(x) = u(x) \psi_0(x)
\end{equation*}

For which we have

\begin{equation*}
\begin{split}
  \psi''(x) &= \frac{d}{dx} \bigg( u'(x) \psi_0(x) + u(x) \psi_0'(x) \bigg) \\
  &= u''(x) \psi_0(x) + 2 u'(x) \psi_0'(x) + u(x) \psi_0''(x)
\end{split}
\end{equation*}

Substituting into the diff. eqn. involving $\beta$,

\begin{equation*}
\Big( u''(x) \psi_0(x) + 2 u'(x) \psi_0'(x) + u(x) \psi_0''(x) \Big) + (\beta - x^2) u(x) \psi_0(x) = 0
\end{equation*}

Since $\psi_0$ satisfies the original equation, we have

\begin{equation*}
u''(x) \psi_0(x) + 2u'(x) \psi_0'(x) + (\beta - 1) u(x) \psi_0(x) + u(x) \underbrace{\Big( \psi_0''(x) + (1 - x^2) \psi_0(x) \Big)}_{= 0} = 0
\end{equation*}

Substituting in our solution $\psi_0 = e^{-\frac{x^2}{2}}$, for which $\psi_0' = - x \psi_0$, we get

\begin{equation*}
u''(x) e^{-\frac{x^2}{2}} - 2 x u'(x) e^{- \frac{x^2}{2}} + (\beta - 1) u(x) e^{- \frac{x^2}{2}} = 0
\end{equation*}

Thus,

\begin{equation*}
u'' - 2xu' + (\beta - 1) u = 0
\end{equation*}

Which, if we let

\begin{equation*}
\beta - 1 = 2n \iff \beta = 2n + 1
\end{equation*}

gives us

\begin{equation*}
u'' - 2x \ u' + 2n \ u = 0
\end{equation*}

which we recognize as the Hermite differential equation! Hence, we have the solutions

\begin{equation*}
\psi_n(x) = H_n(x) e^{- \frac{x^2}{2}}
\end{equation*}

Why the $n$ though? Well, if you attempt to solve the Hermite equation above using a series solution, i.e. assuming

\begin{equation*}
u(x) = \sum_{n = 0}^{\infty} a_n x^n
\end{equation*}

you end up with a recurrence relation between the series coefficients, and for the solution to be square-integrable the series has to terminate at some finite power $n$, which forces $\beta - 1 = 2n$; hence the $n$ above.

Also, it shows us that we do indeed have a (countably) infinite number of solutions, as we'd expect from a Sturm-Liouville problem such as the Schrödinger equation :)
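The claims above are easy to spot-check symbolically: $H_n$ should solve the Hermite equation, and $\psi_n = H_n(x) e^{-x^2/2}$ should solve $\psi'' + (\beta - x^2) \psi = 0$ with $\beta = 2n + 1$.

```python
import sympy as sp

# Verify that the (physicists') Hermite polynomials solve u'' - 2x u' + 2n u = 0,
# and that psi_n = H_n(x) e^{-x^2/2} solves psi'' + (2n + 1 - x^2) psi = 0.
x = sp.symbols('x')
hermite_ok, schrodinger_ok = [], []
for n in range(5):
    u = sp.hermite(n, x)
    hermite_ok.append(
        sp.expand(sp.diff(u, x, 2) - 2 * x * sp.diff(u, x) + 2 * n * u) == 0)
    psi = u * sp.exp(-x**2 / 2)
    schrodinger_ok.append(
        sp.simplify(sp.diff(psi, x, 2) + (2 * n + 1 - x**2) * psi) == 0)
assert all(hermite_ok) and all(schrodinger_ok)
```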

Correspondence principle

The correspondence principle states that the behavior of systems described by quantum mechanics reproduces classical physics in the limit of large quantum numbers.

Operators

Momentum

\begin{equation*}
\hat{p} = - i \hbar \frac{\partial}{\partial x}
\end{equation*}
"Derivation"

Considering the wave function of a free particle, we have

\begin{equation*}
\begin{split}
  \Psi(x, t) &= e^{i (kx - \omega t)} \\
  &= e^{i (px - Et) / \hbar}
\end{split}
\end{equation*}

Taking the spatial derivative of the above, and rearranging, we obtain:

\begin{equation*}
p \big[ \Psi(x, t) \big] = - i \hbar \frac{\partial}{\partial x} \big[ \Psi(x, t) \big]
\end{equation*}

and thus we have the momentum operator

\begin{equation*}
p = - i \hbar \frac{\partial}{\partial x}
\end{equation*}
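A one-line symbolic check of this eigenvalue relation:

```python
import sympy as sp

# The plane wave should be an eigenfunction of -i hbar d/dx with eigenvalue p.
x, t, p, E = sp.symbols('x t p E', real=True)
hbar = sp.symbols('hbar', positive=True)

Psi = sp.exp(sp.I * (p * x - E * t) / hbar)
p_Psi = sp.simplify(-sp.I * hbar * sp.diff(Psi, x))
assert sp.simplify(p_Psi - p * Psi) == 0
```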

Hamiltonian

\begin{equation*}
\hat{H} = - \frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x)
\end{equation*}
"Derivation"

Where the Hamiltonian comes from the fact that for TISE we have the following:

\begin{equation*}
- \frac{\hbar^2}{2m} \nabla^2 \varphi(x) + V(x) \varphi(x) = E \varphi(x)
\end{equation*}

which we can then write as:

\begin{equation*}
\Big[ - \frac{\hbar^2}{2m} \nabla^2 + V(x) \Big] \varphi(x) = E \varphi(x)
\end{equation*}

and we have our operator $H$ !

In the above "derivation" of the Hamiltonian, we assume the TISE for convenience. It works equally well with the time-dependence, which is why we can write the time-dependent Schrödinger equation (TDSE) using the Hamiltonian.

In fact, one can deduce it by writing the wave-function as a function of $p$ and $E$, and then noting the operator defined for the momentum $p$. This operator can then be substituted into the classical formula for the energy to provide us with a definition of the Hamiltonian operator.

Theorems

Conserved operators

A time-independent operator $\hat{O}$ is conserved if and only if $[\hat{H}, \hat{O}] = 0$, i.e. it commutes with the Hamiltonian.

Coherence

Stack Exchange answer

This guy provides a very nice explanation of quantum coherence and entanglement.

Basically what he's saying is that coherence refers to the case where the wave-functions which are superimposed to create the wave-function of the particle under consideration have a definite phase difference, i.e. the phase-difference between the "eigenstates" does not change with time and is not random.

Another answer to the same question says:

"It is better to think of there being one and only one wavefunction that describes all the particles in the universe."

Which is trying to make sense of entanglement, saying that we can look at the "joint probability" of e.g. two particles and based on measurement taken from one particle we can make a better judgement of the probability-distribution of the measurement of the other particle. At least this is how I interpret it.

Observables

Each dynamical variable is associated with a linear operator, say $\hat{O}$, and its expectation value can be computed:

\begin{equation*}
\bra{\Psi(t)} \hat{O} \ket{\Psi(t)} = \int dx \Psi(x, t)^* \hat{O} \Psi(x, t)
\end{equation*}

when there is no ambiguity about the state in which the average is computed, we shall write

\begin{equation*}
\langle O \rangle \equiv \bra{\Psi(t)} \hat{O} \ket{\Psi(t)}
\end{equation*}

The possible outcomes of a measurement of an observable are given by the eigenvalues of the corresponding operator $\hat{O}$, i.e. the solutions of the eigenvalue equation:

\begin{equation*}
\hat{O} \psi_k = O_k \psi_k
\end{equation*}

where $\psi_k$ is the eigenstate corresponding to the eigenvalue $O_k$.

Observables as Hermitian operators

Observables are required to be Hermitian operators, i.e. operators $\hat{O}$ such that

\begin{equation*}
\exists \hat{O}^\dagger : \hat{O}^\dagger = \hat{O}
\end{equation*}

where $\hat{O}^\dagger$ is called the Hermitian conjugate of $\hat{O}$.

We require an operator corresponding to an observable to be a Hermitian operator, as this ensures that all the eigenvalues of the operator are real, and we want our measurements to be real, of course.

Compatibility Theorem

Given two observables $A$ and $B$, represented by the Hermitian operators $\hat{A}$ and $\hat{B}$, the following statements are equivalent:

  1. $A$ and $B$ are compatible
  2. $\hat{A}$ and $\hat{B}$ have a common eigenbasis
  3. $\hat{A}$ and $\hat{B}$ commute, i.e.: $[\hat{A}, \hat{B}] = 0$

Generalised Uncertainty Relation

If $\Delta \hat{A}_t$ and $\Delta \hat{B}_t$ denote the uncertainties in observables $\mathcal{A}$ and $\mathcal{B}$ respectively in the state $\ket{\Psi, t}$, then the generalised uncertainty relation states that

\begin{equation*}
\Delta \hat{A}_t \cdot \Delta \hat{B}_t \ge \frac{1}{2} \bigg| \Big\langle \comm{\hat{A}_t}{\hat{B}_t} \Big\rangle \bigg|
\end{equation*}
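We can spot-check the relation numerically for a pair of non-commuting observables, e.g. the Pauli matrices $\sigma_x, \sigma_y$ on random spin-1/2 states (a sketch with $\hbar = 1$):

```python
import numpy as np

# Generalised uncertainty relation: dA * dB >= (1/2) |<[A, B]>|,
# checked for A = sigma_x, B = sigma_y on random normalised states.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def expval(op, psi):
    return (psi.conj() @ op @ psi).real

rng = np.random.default_rng(42)
holds = []
for _ in range(200):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    dA = np.sqrt(max(expval(sx @ sx, psi) - expval(sx, psi) ** 2, 0.0))
    dB = np.sqrt(max(expval(sy @ sy, psi) - expval(sy, psi) ** 2, 0.0))
    bound = 0.5 * abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi)
    holds.append(dA * dB >= bound - 1e-12)
assert all(holds)
```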

Momentum

\begin{equation*}
\hat{P}_k = - i \hbar \frac{\partial}{\partial x_k}, \quad k = 1, 2, 3
\end{equation*}

Then

\begin{equation*}
\hat{P}_k \psi(\mathbf{x}) = - i \hbar \frac{\partial}{\partial x_k} \psi(x_1, x_2, x_3)
\end{equation*}

And we have commutativity, since we can interchange the order of differentiation:

\begin{equation*}
[ \hat{P}_k, \hat{P}_l ] = 0 \quad \iff \quad \frac{\partial^2}{\partial x_k \partial x_l} \psi(\mathbf{x}) = \frac{\partial^2}{\partial x_l \partial x_k} \psi(\mathbf{x})
\end{equation*}

and

\begin{equation*}
[\hat{X}_k, \hat{P}_l ] = i \hbar \delta_{k l}
\end{equation*}

Hence,

\begin{equation*}
\hat{H} = \frac{\mathbf{\hat{P}}^2}{2m} + V(\hat{\mathbf{x}})
\end{equation*}

where

\begin{equation*}
\hat{\mathbf{P}}^2 = \hat{P}_x^2 + \hat{P}_y^2 + \hat{P}_z^2
\end{equation*}

Separation of variables

Here we consider the TISE for an isotropic harmonic oscillator in 3 dimensions, for which the potential is

\begin{equation*}
V(\mathbf{x}) = \frac{1}{2}m \omega^2 \mathbf{x}^2 = \frac{1}{2} m \omega^2 \Big( x^2 + y^2 + z^2 \Big)
\end{equation*}

We can then separate the Hamiltonian into one-dimensional parts in the following way

\begin{equation*}
\begin{split}
  \hat{H} &= \frac{1}{2m} \Big( \hat{P}_x^2 + \hat{P}_y^2 + \hat{P}_z^2 \Big) + \frac{1}{2} m \omega^2 \Big( \hat{X}^2 + \hat{Y}^2 + \hat{Z}^2 \Big) \\
  &= \hat{H}_x + \hat{H}_y + \hat{H}_z, \quad \hat{H}_k = \frac{1}{2m} \hat{P}_k^2 + \frac{1}{2} m \omega^2 \hat{X}_k^2
\end{split}
\end{equation*}

Using this in the Schrödinger equation, we get

\begin{equation*}
\hat{H} \psi(x, y, z) = E \psi(x, y, z)
\end{equation*}

And we then assume that we can express $\psi$ as

\begin{equation*}
\psi(x, y, z) = X(x) Y(y) Z(z)
\end{equation*}

which gives us

\begin{equation*}
\begin{split}
  \hat{H} \psi(x, y, z) &= \Big( \hat{H}_x X(x) \Big) Y(y) Z(z) + X(x) \Big( \hat{H}_y Y(y) \Big) Z(z) + X(x) Y(y) \Big( \hat{H}_z Z(z) \Big) \\
  &= E X(x) Y(y) Z(z) \\
  &= E \psi(x, y, z) \\
\end{split}
\end{equation*}

Which gives us

\begin{equation*}
\frac{\hat{H}_x X(x)}{X(x)} + \frac{\hat{H}_y Y(y)}{Y(y)} + \frac{\hat{H}_z Z(z)}{Z(z)} = E
\end{equation*}

And since $E$ is a constant, we can write $E = E_x + E_y + E_z$, which gives us a system of equations

\begin{equation*}
\begin{cases}
  \hat{H}_x X(x) &= E_x X(x), \quad E_{x, n_x} = \hbar \omega \Big( n_x + \frac{1}{2} \Big), u_{n_x}(x) \\
  \hat{H}_y Y(y) &= E_y Y(y), \quad E_{y, n_y} = \hbar \omega \Big( n_y + \frac{1}{2} \Big), u_{n_y}(y) \\
  \hat{H}_z Z(z) &= E_z Z(z), \quad E_{z, n_z} = \hbar \omega \Big( n_z + \frac{1}{2} \Big), u_{n_z}(z)
\end{cases}
\end{equation*}

where $u_{n_k}(x_k)$ denotes the eigenfunction in the k-th dimension for the n-th quantised energy-level (remember we're working with a harmonic oscillator). This gives

\begin{equation*}
E_{n_x n_y n_z} = \hbar \omega \Big( n_x + n_y + n_z + \frac{3}{2} \Big), \quad \Psi_{n_x, n_y, n_z} (x, y, z) = u_{n_x}(x) u_{n_y}(y) u_{n_z}(z)
\end{equation*}

where the $u_{n_k}$ are the one-dimensional harmonic-oscillator eigenfunctions (Hermite polynomials times a Gaussian).
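As an aside (not derived above, but a standard consequence of the formula for $E_{n_x n_y n_z}$): the level with $N = n_x + n_y + n_z$ is degenerate, and counting the triples gives degeneracy $(N+1)(N+2)/2$.

```python
# Count the degeneracy of E_N = hbar*omega*(N + 3/2): the number of
# triples (n_x, n_y, n_z) of non-negative integers with n_x + n_y + n_z = N.
def degeneracy(N):
    return sum(1 for nx in range(N + 1)
                 for ny in range(N + 1 - nx))  # n_z = N - n_x - n_y is fixed

degs = [degeneracy(N) for N in range(6)]
assert degs == [(N + 1) * (N + 2) // 2 for N in range(6)]
```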

Angular momentum

\begin{equation*}
\hat{\mathbf{L}} = \hat{\mathbf{X}} \times \hat{\mathbf{P}}
\end{equation*}
\begin{equation*}
\hat{L}_k = \varepsilon_{klm} \hat{X}_l \hat{P}_m, \quad \varepsilon_{123} = +1, \quad \varepsilon_{213} = -1
\end{equation*}

Commutation

The interesting thing about the angular momentum operators is that, unlike the position and momentum operators, the components along different axes do not commute!

\begin{equation*}
[ \hat{L}_i, \hat{L}_j ] = i \hbar \ \varepsilon_{ijk} \hat{L}_k
\end{equation*}

This implies that the angular momentum components along different axes are not compatible observables, i.e. we cannot measure one without affecting the distribution of measurements for the other!

Square of the angular momentum

\begin{equation*}
\hat{L}^2 = \hat{L}_x^2 + \hat{L}_y^2 + \hat{L}_z^2 = \sum_{i=1}^{3} \hat{L}_i^2
\end{equation*}

or in spherical coordinates,

\begin{equation*}
\hat{L}^2 = - \hbar^2 \Bigg[ \frac{1}{\sin \theta} \frac{\partial}{\partial \theta} \Big( \sin \theta \frac{\partial}{\partial \theta} \Big) + \frac{1}{\sin^2 \theta} \frac{\partial^2}{\partial \phi^2} \Bigg]
\end{equation*}

We then observe that $\hat{L}^2$ is compatible with any of the Cartesian components of the angular momentum:

\begin{equation*}
[\hat{L}^2, \hat{L}_x] = [\hat{L}^2, \hat{L}_y] = [\hat{L}^2, \hat{L}_z] = 0
\end{equation*}

which also tells us that they have a common eigenbasis / simultaneous eigenfunctions .
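These commutation relations are easy to verify numerically in any finite-dimensional representation of the same algebra; here's a sketch using the spin-1/2 representation $L_i = \frac{\hbar}{2} \sigma_i$ (with $\hbar = 1$):

```python
import numpy as np

# Check [L_i, L_j] = i eps_ijk L_k and [L^2, L_i] = 0 for L_i = sigma_i / 2.
Lx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Ly = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Lz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz   # = l(l+1) * I = (3/4) * I here

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(Lx, Ly), 1j * Lz)
assert np.allclose(comm(Ly, Lz), 1j * Lx)
assert np.allclose(comm(Lz, Lx), 1j * Ly)
assert all(np.allclose(comm(L2, L), 0) for L in (Lx, Ly, Lz))
```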

Angular momentum operators in spherical coordinates

\begin{equation*}
\begin{split}
  \hat{L}_x &= i \hbar \Bigg( \sin \phi \frac{\partial}{\partial \theta} + \cot \theta \cos \phi \frac{\partial}{\partial \phi} \Bigg) \\
  \hat{L}_y &= i \hbar \Bigg( - \cos \phi \frac{\partial}{\partial \theta} + \cot \theta \sin \phi \frac{\partial}{\partial \phi} \Bigg) \\
  \hat{L}_z &= - i \hbar \frac{\partial}{\partial \phi}
\end{split}
\end{equation*}

Eigenfunctions

\begin{equation*}
\hat{L}^2 Y_{\ell}^m (\theta, \phi) = \ell (\ell + 1) \hbar^2 Y_{\ell}^m (\theta, \phi)
\end{equation*}

where $\ell = 0, 1, 2, \dots$ and

\begin{equation*}
Y_{\ell}^m (\theta, \phi) = (-1)^m \Bigg[ \frac{2 \ell + 1}{4 \pi} \frac{(\ell - m)!}{(\ell + m)!} \Bigg]^{1/2} P_{\ell}^m ( \cos \theta ) e^{im \phi}
\end{equation*}

with $P_{\ell}^m(\cos \theta)$ known as the associated Legendre polynomials.

Quantisation of angular momentum

The eigenvalue equation for the magnitude of the angular momentum tells us that the angular momentum is quantised. The square of the magnitude of the angular momentum can only assume one of the discrete set of values

\begin{equation*}
\ell (\ell + 1) \hbar^2 , \quad \ell = 0, 1, 2, \dots
\end{equation*}

and the z-component of the angular momentum can only assume one of the discrete set of values

\begin{equation*}
m \hbar, \quad m = - \ell, \dots, \ell - 1, \ell
\end{equation*}

for a given value of $\ell$.

Laplacian operator

\begin{equation*}
\nabla^2 = \frac{1}{r^2} \frac{\partial}{\partial r} \Big( r^2 \frac{\partial}{\partial r} \Big) + \frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta} \Big( \sin \theta \frac{\partial}{\partial \theta} \Big) + \frac{1}{r^2 \sin^2 \theta} \frac{\partial^2}{\partial \phi^2}
\end{equation*}

Angular momentum - reloaded

We move away from the notation of using $\hat{L}$ for angular momentum because what follows is true for any operator satisfying these properties.

To generalize the raising and lowering operators we found for the angular momentum, we let $\hat{\mathbf{J}} = ( \hat{J}_1, \hat{J}_2, \hat{J}_3)$ denote a (Hermitian) operator which satisfies the following commutation relations:

\begin{equation*}
[\hat{J}_k, \hat{J}_\ell ] = i \hbar \varepsilon_{k \ell m} \hat{J}_m, \qquad \hat{J}^2 = \sum_{k} \hat{J}_k^2, \qquad [ \hat{J}^2, \hat{J}_k ] =  0
\end{equation*}

And since $\hat{J}^2$ and $\hat{J}_k$ are compatible, we know that these have a common eigenbasis, which we denote $\ket{\lambda, m}$, and we write the eigenvalues in the following form:

\begin{equation*}
\begin{cases}
  \hat{J}^2 \ket{\lambda, m}  &= \hbar^2 \lambda \ket{\lambda, m} \\
  \hat{J}_3 \ket{\lambda, m} &= \hbar m \ket{\lambda, m}
\end{cases}
\end{equation*}

Further, we introduce the raising and lowering operators defined by:

\begin{equation*}
\hat{J}_{\pm} = \hat{J}_1 \pm i \hat{J}_2
\end{equation*}

for which we can compute the following properties:

\begin{equation*}
[ \hat{J}^2, \hat{J}_\pm ] = 0
\end{equation*}
\begin{equation*}
[ \hat{J}_3, \hat{J}_\pm ] = \pm \hbar \hat{J}_{\pm}
\end{equation*}

I.e. $\hat{J}_\pm$ act as raising and lowering operators for the eigenvalues of $\hat{J}_3$.

Considering the action of the commutator $\comm{\hat{J}_3}{\hat{J}_\pm}$ on an eigenstate $\ket{\lambda, m}$, we obtain the following:

\begin{equation*}
\hat{J}_3 \hat{J}_+ \ket{\lambda, m} = (m + 1) \hbar \ \hat{J}_+ \ket{\lambda, m}
\end{equation*}
\begin{equation*}
\hat{J}_3 \hat{J}_- \ket{\lambda, m} = (m - 1) \hbar \ \hat{J}_- \ket{\lambda, m}
\end{equation*}

which tells us that $\hat{J}_\pm \ket{\lambda, m}$ are also eigenstates of $\hat{J}_3$, but with eigenvalues $(m \pm 1) \hbar$, unless $\hat{J}_\pm \ket{\lambda, m} = 0$.

Thus $\hat{J}_+$ and $\hat{J}_-$ act as raising and lowering operators for the z-component of $\hat{\mathbf{J}}$.

Further, notice that since $\comm{\hat{J}^2}{\hat{J}_\pm} = 0$ we have

\begin{equation*}
\hat{J}^2 \Big( \hat{J}_\pm \ket{\lambda, m} \Big) = \hat{J}_\pm \Big( \hat{J}^2 \ket{\lambda, m} \Big) = \lambda \hbar^2 \Big( \hat{J}_\pm \ket{\lambda, m} \Big)
\end{equation*}

Hence, the eigenstates generated by the action of $\hat{J}_\pm$ are still eigenstates of $\hat{J}^2$ belonging to the same eigenvalue $\lambda \hbar^2$. Thus, we can write

\begin{equation*}
\begin{split}
  \hat{J}_+ \ket{\lambda, m} &= c_+ \hbar \ket{\lambda, m + 1} \\
  \hat{J}_- \ket{\lambda, m} &= c_- \hbar \ket{\lambda, m - 1}
\end{split}
\end{equation*}

where $c_\pm$ are proportionality constants.

The notation $\hat{J}_\pm \ket{\lambda, m} = c_\pm \hbar \ket{\lambda, m \pm 1}$ simply expresses that the eigenvalue of $\hat{J}^2$ does not change under $\hat{J}_\pm$, while the $\hat{J}_3$ eigenvalue changes to $(m \pm 1) \hbar$; we therefore denote the new eigenstate by $\ket{\lambda, m \pm 1}$.

To summarize the results: we let $\hat{\mathbf{J}} = ( \hat{J}_1, \hat{J}_2, \hat{J}_3)$ denote a (Hermitian) operator which satisfies the following commutation relations:

\begin{equation*}
[\hat{J}_k, \hat{J}_\ell ] = i \hbar \varepsilon_{k \ell m} \hat{J}_m, \qquad \hat{J}^2 = \sum_{k} \hat{J}_k^2, \qquad [ \hat{J}^2, \hat{J}_k ] =  0
\end{equation*}

And since $\hat{J}^2$ and $\hat{J}_k$ are compatible, we know that these have a common eigenbasis, which we denote $\ket{j, m}$, and we write the eigenvalues in the following form:

\begin{equation*}
\begin{cases}
  \hat{J}^2 \ket{j, m}  &= j (j + 1) \hbar^2 \ \ket{j, m} \\
  \hat{J}_3 \ket{j, m} &= \hbar m \ket{j, m}, \qquad m = -j, - (j - 1), \dots, j - 1, j
\end{cases}
\end{equation*}

with $j = 0, \frac{1}{2}, 1, \frac{3}{2}, 2, \dots$

The set of $(2j + 1)$ states $\{ \ket{j, m} \}$ is called a multiplet.

We introduce the raising and lowering operators defined by:

\begin{equation*}
\hat{J}_{\pm} = \hat{J}_1 \pm i \hat{J}_2
\end{equation*}

for which we can compute the following properties:

\begin{equation*}
[ \hat{J}^2, \hat{J}_\pm ] = 0
\end{equation*}
\begin{equation*}
[ \hat{J}_3, \hat{J}_\pm ] = \pm \hbar \hat{J}_{\pm}
\end{equation*}

And we have the relations:

\begin{equation*}
\begin{split}
  \hat{J}_+ \ket{j, m} &= c_+ \hbar \ket{j, m + 1} \\
  \hat{J}_- \ket{j, m} &= c_- \hbar \ket{j, m - 1}
\end{split}
\end{equation*}

with

\begin{equation*}
|c_\pm|^2 = j(j + 1) - m(m \pm 1)
\end{equation*}

or, if we're assuming $c_\pm \in \mathbb{R}^+$, we have

\begin{equation*}
c_\pm = \sqrt{j(j + 1) - m(m \pm 1)}
\end{equation*}
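These relations pin down the matrices of the whole multiplet, so we can construct them explicitly and check everything numerically (a sketch with $\hbar = 1$, basis ordered $m = j, j-1, \dots, -j$):

```python
import numpy as np

# Build J3 and J± for a given j from J3|j,m> = m|j,m> and
# J±|j,m> = sqrt(j(j+1) - m(m±1)) |j,m±1>, then verify
# J^2 = j(j+1) * I and [J1, J2] = i J3.
def j_matrices(j):
    dim = int(round(2 * j + 1))
    ms = [j - k for k in range(dim)]            # m = j, j-1, ..., -j
    J3 = np.diag(ms).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        m = ms[k]
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m * (m + 1))  # raises m to m+1
    Jm = Jp.conj().T                            # J- = (J+)†
    return J3, Jp, Jm

checks = []
for j in (0.5, 1, 1.5, 2):
    J3, Jp, Jm = j_matrices(j)
    J1, J2 = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)
    Jsq = J1 @ J1 + J2 @ J2 + J3 @ J3
    checks.append(np.allclose(Jsq, j * (j + 1) * np.eye(len(J3))))
    checks.append(np.allclose(J1 @ J2 - J2 @ J1, 1j * J3))
assert all(checks)
```

For $j = \frac{1}{2}$ this reproduces the Pauli matrices divided by two.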

We also have the relation

\begin{equation*}
\hat{J}_\pm^\dagger = \hat{J}_{\mp}
\end{equation*}

i.e. the raising and lowering operators are each other's Hermitian conjugates. This follows directly from $\hat{J}_1$ and $\hat{J}_2$ being Hermitian: $\big( \hat{J}_1 \pm i \hat{J}_2 \big)^\dagger = \hat{J}_1 \mp i \hat{J}_2$.

Some proofs

Lowering operator on an eigenstate is another eigenstate
\begin{equation*}
\hat{J}_3 \hat{J}_- \ket{\lambda, m} = (m - 1) \hbar \ \hat{J}_- \ket{\lambda, m}
\end{equation*}
\begin{equation*}
\comm{\hat{J}_3}{\hat{J}_-} \ket{\lambda, m} = \big( \hat{J}_3 \hat{J}_- - \hat{J}_- \hat{J}_3 \big) \ket{\lambda, m} = -\hbar \hat{J}_- \ket{\lambda, m}
\end{equation*}

Where the RHS is due to the commutation relation we deduced earlier. This gives us the equation

\begin{equation*}
\begin{split}
  \hat{J}_3 \hat{J}_- \ket{\lambda, m} - \hat{J}_- \hat{J}_3 \ket{\lambda, m} &= - \hbar \hat{J}_- \ket{\lambda, m} \\
  \hat{J}_3 \hat{J}_- \ket{\lambda, m} &= \hbar m \hat{J}_- \ket{\lambda, m} - \hbar \hat{J}_- \ket{\lambda, m} \\
  \hat{J}_3 \hat{J}_- \ket{\lambda, m} &= (m - 1) \hbar \ \hat{J}_- \ket{\lambda, m}
\end{split}
\end{equation*}

which tells us that $\hat{J}_- \ket{\lambda, m}$ is also an eigenstate of $\hat{J}_3$, but with the eigenvalue $(m - 1) \hbar$.

Just to make it completely obvious what we're saying, we can write the relation above as:

\begin{equation*}
\hat{J}_3 \Big( \hat{J}_- \ket{\lambda, m} \Big) = (m - 1) \hbar \ \Big( \hat{J}_- \ket{\lambda, m} \Big)
\end{equation*}
Eigenvalues are bounded

The eigenvalues of $\hat{J}_3$ are bounded above and below, or more specifically

\begin{equation*}
\lambda - m^2 \ge 0 \quad \iff \quad - \sqrt{\lambda} \le m \le \sqrt{\lambda}
\end{equation*}

Further, we have the following properties:

  1. Eigenvalues of $\hat{J}^2$ are $j (j + 1) \hbar^2$, where $j$ is one of the allowed values

    \begin{equation*}
 j = 0, \frac{1}{2}, 1, \frac{3}{2}, 2, \dots
\end{equation*}
  2. $\lambda = j (j + 1)$ thus we label the eigenstates of $\hat{J}^2$ and $\hat{J}_3$ by $j$ rather than by $\lambda$, so that

    \begin{equation*}
 \begin{split}
   \hat{J}^2 \ket{j, m} &= j (j + 1) \hbar^2 \ \ket{j, m} \\
   \hat{J}_3 \ket{j, m} &= m \hbar \ket{j, m}, \quad m = - j, - (j - 1), \dots, j - 1, j
 \end{split}
\end{equation*}
  3. For an eigenstate $\ket{j, \cdot}$, there are $(2j + 1)$ possible eigenvalues of $\hat{J}_3$

The set of $(2j + 1)$ states $\{ \ket{j, m} \}$ is called a multiplet

We start by observing the following:

\begin{equation*}
\begin{split}
  \Big( \hat{J}^2 - \hat{J}_3^2 \Big) \ket{\lambda, m} &= \Big( \hat{J}_1^2 + \hat{J}_2^2 \Big) \ket{\lambda, m} \\
  \Big( \lambda - m^2 \Big) \hbar^2 \ \ket{\lambda, m} &= \Big( \hat{J}_1^2 + \hat{J}_2^2 \Big) \ket{\lambda, m}
\end{split}
\end{equation*}

Taking the scalar product with $\bra{\lambda, m}$ yields

\begin{equation*}
\Big( \lambda - m^2 \Big) \hbar^2 = \ev{\hat{J}_1^2 + \hat{J}_2^2} \ge 0
\end{equation*}

so that

\begin{equation*}
\lambda - m^2 \ge 0 \quad \iff \quad - \sqrt{\lambda} \le m \le \sqrt{\lambda}
\end{equation*}

Hence the spectrum of $\hat{J}_3$ is bounded above and below, for a given $\lambda$. We can deduce that

\begin{equation*}
\begin{split}
  \hat{J}_+ \ket{\lambda, m_{\text{max}}} &= 0 \\
  \hat{J}_- \ket{\lambda, m_{\text{min}}} &= 0
\end{split}
\end{equation*}

Using the following relations:

\begin{equation*}
\begin{split}
  \hat{J}^2 &= \hat{J}_+ \hat{J}_- - \hbar \hat{J}_3 + \hat{J}_3^2 \\
  \hat{J}^2 &= \hat{J}_- \hat{J}_+ + \hbar \hat{J}_3 + \hat{J}_3^2
\end{split}
\end{equation*}

and applying the $\hat{J}^2$ to $\ket{\lambda, m_\text{min}}$ and $\ket{\lambda, m_\text{max}}$ in turn, we can obtain the following relations:

\begin{equation*}
\lambda = m_\text{min} (m_\text{min} - 1), \quad \lambda = m_\text{max} (m_\text{max} + 1)
\end{equation*}

Using the notation $j := m_\text{max}$ and using the equality above, we get the equation

\begin{equation*}
\begin{split}
  m_\text{min}^2 - m_\text{min} - j^2 - j &= 0 \\
  (m_\text{min} + j) ( m_\text{min} - j - 1) &= 0
\end{split}
\end{equation*}

and we see that, since $m_\text{min} \le j$ by definition, the only acceptable root to the above quadratic is

\begin{equation*}
m_\text{min} = -j
\end{equation*}

Now, since $m_\text{max}$ and $m_\text{min}$ differ by some integer, $k$, we can write

\begin{equation*}
m_\text{max} - m_\text{min} = k, \quad k = 0, 1, 2, 3, \dots
\end{equation*}

Or equivalently,

\begin{equation*}
j - (- j) = 2j = k
\end{equation*}

Hence, the allowed values are

\begin{equation*}
j = 0, \frac{1}{2}, 1, \frac{3}{2}, 2, \dots
\end{equation*}

For any given value of $j$, we see that $m$ ranges over the values

\begin{equation*}
-j, -j + 1, \dots, j - 1, j
\end{equation*}

which is a total of $(2j + 1)$ values.

Concluding the following:

  1. Eigenvalues of $\hat{J}^2$ are $j (j + 1) \hbar^2$, where $j$ is one of the allowed values

    \begin{equation*}
 j = 0, \frac{1}{2}, 1, \frac{3}{2}, 2, \dots
\end{equation*}
  2. Since $\lambda = j (j + 1)$ we can equally well label the simultaneous eigenstates of $\hat{J}^2$ and $\hat{J}_3$ by $j$ rather than by $\lambda$, so that

    \begin{equation*}
 \begin{split}
   \hat{J}^2 \ket{j, m} &= j (j + 1) \hbar^2 \ \ket{j, m} \\
   \hat{J}_3 \ket{j, m} &= m \hbar \ket{j, m}
 \end{split}
\end{equation*}
  3. For a given value of $j$, there are $(2j + 1)$ possible eigenvalues of $\hat{J}_3$, denoted by $m \hbar$, where $m$ runs from $-j$ to $j$
  4. The set of $(2j + 1)$ states $\{ \ket{j, m} \}$ is called a multiplet

Properties of J

  1. $m^2 \le \lambda$ implies

    \begin{equation*}
 (\hat{J}^2 - \hat{J}_3^2) \ket{\lambda, m} = \hbar^2 (\lambda - m^2) \ket{\lambda, m} = (\hat{J}_1^2 + \hat{J}_2^2) \ket{\lambda, m}
\end{equation*}

    I.e. since $(\hat{J}_1^2 + \hat{J}_2^2)$ is positive semi-definite, we must have $\lambda - m^2 \ge 0$

  2. $\exists m_{\max}, m_{\min}$ such that

    \begin{equation*}
 \begin{split}
   \hat{J}_{+} \ket{\lambda, m_{\max}} &= 0 \\
   \hat{J}_{-} \ket{\lambda, m_{\min}} &= 0
 \end{split}
\end{equation*}
  3. These operator identities hold:

    \begin{equation*}
\begin{split}
  \hat{J}_+ \hat{J}_- &= (\hat{J}_1 + i \hat{J}_2) (\hat{J}_1 - i \hat{J}_2) \\
  &= \hat{J}_1^2 + \hat{J}_2^2 + i [\hat{J}_2, \hat{J}_1] \\
  &= \hat{J}^2 - \hat{J}_3^2 + \hbar \hat{J}_3
\end{split}
\end{equation*}

    and

    \begin{equation*}
\begin{split}
  \hat{J}^2 &= \hat{J}_+ \hat{J}_- + \hat{J}_3^2 - \hbar \hat{J}_3 \\
  &= \hat{J}_- \hat{J}_+ + \hat{J}_3^2 + \hbar \hat{J}_3
\end{split}
\end{equation*}
  4. Applying these identities to the extremal states:

    \begin{equation*}
\begin{split}
  \hat{J}^2 \ket{\lambda, m_{\min}} &= \hbar^2 m_{\min} (m_{\min} - 1) \ket{\lambda, m_{\min}} \\
  \hat{J}^2 \ket{\lambda, m_{\max}} &= \hbar^2 m_{\max} (m_{\max} + 1) \ket{\lambda, m_{\max}}
\end{split}
\end{equation*}
  5. $m_{\max} - m_{\min} = 2 j$, $2 j \in \mathbb{N}$ implies that $j = \frac{k}{2} = 0, \frac{1}{2}, 1, \frac{3}{2}, \dots$

Normalization constant

\begin{equation*}
\hbar^2 | c_+ |^2 \bra{\lambda, m + 1}\ket{\lambda, m + 1} = \bra{\lambda, m} \hat{J}_- \hat{J}_+ \ket{\lambda, m}, \quad \hat{J}_+^\dagger = \hat{J}_-
\end{equation*}

Using the Property 3 in Properties of J, we get

\begin{equation*}
\begin{split}
  &= \bra{\lambda, m} \hat{J}^2 - \hat{J}_3^2 - \hbar \hat{J}_3 \ket{\lambda, m} \\
  &= \hbar^2 [ j (j + 1) - m^2 - m ] \bra{\lambda, m}\ket{\lambda, m} \\
  &= \hbar^2 [ j (j + 1) - m^2 - m ] \\
\end{split}
\end{equation*}

which gives

\begin{equation*}
\begin{split}
  \hbar c_+ &= \hbar \sqrt{ j (j + 1) - m (m + 1)} \\
  c_+ &= \sqrt{ j (j + 1) - m (m + 1)}
\end{split}
\end{equation*}

since $\bra{\lambda, m + 1}\ket{\lambda, m + 1} = 1$

Doing the same for $c_-$ we get

\begin{equation*}
c_- = \sqrt{j (j + 1) - m ( m - 1)}
\end{equation*}

Computing the matrix elements

\begin{equation*}
\mel{j, m'}{\hat{J}_3}{j, m} = m \hbar \bra{j, m'}\ket{j, m} = m \hbar \delta_{m', m}
\end{equation*}

The matrix is $(2j + 1) \times (2j + 1)$

And for the raising and lowering operators

\begin{equation*}
\begin{split}
  \mel{j, m'}{\hat{J}_\pm}{j, m} &= \hbar c_\pm \ \bra{j, m'}\ket{j, m \pm 1} \\
  &= \hbar c_\pm \ \delta_{m', m \pm 1} \\
  &= \hbar \sqrt{j(j + 1) - m(m \pm 1)} \ \delta_{m', m \pm 1}
\end{split}
\end{equation*}
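These matrix elements can be checked numerically (a sketch with $\hbar = 1$; the helper name is ours): build $\hat{J}_3$ and $\hat{J}_\pm$ for a given $j$ and verify that $\hat{J}^2 = j(j+1) \mathbb{1}$ and that the commutation relation holds.

```python
import numpy as np

def angular_momentum_matrices(j):
    """J1, J2, J3 (with hbar = 1) in the basis {|j, m>}, m = -j, ..., j."""
    ms = np.arange(-j, j + 1)
    J3 = np.diag(ms).astype(complex)
    # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)); in this ascending-m
    # ordering the nonzero entries sit one step below the diagonal.
    cp = np.sqrt(j * (j + 1) - ms[:-1] * (ms[:-1] + 1))
    Jp = np.diag(cp, k=-1).astype(complex)
    Jm = Jp.conj().T                          # J- = (J+)^dagger
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), J3

J1, J2, J3 = angular_momentum_matrices(1)
Jsq = J1 @ J1 + J2 @ J2 + J3 @ J3
assert np.allclose(Jsq, 1 * (1 + 1) * np.eye(3))   # J^2 = j(j+1) * identity
assert np.allclose(J1 @ J2 - J2 @ J1, 1j * J3)     # [J1, J2] = i J3
```

The same function works for half-integer $j$, e.g. `angular_momentum_matrices(0.5)` reproduces the spin-1/2 matrices.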

Q & A

DONE Which step makes it so that the j in the case of "normal" angular momentum only takes on integer values?

In the case where we're working with "normal" angular momentum, the spherical harmonics are also eigenfunctions $\hat{L}_3 = - i \hbar \pdv{\phi}$. Since we require that the wave function must be single-valued(?), the spherical coordinates must be periodic in $\phi$ with period $2 \pi$:

\begin{equation*}
Y_{\ell}^m (\theta, \phi) = Y_{\ell}^m(\theta, \phi + 2 \pi)
\end{equation*}

this implies that $m$ is an integer, hence $j$ must also be an integer.

I'm not sure what they mean by "single-valued". From the notes they do this:

\begin{equation*}
Y_{\ell}^m (\theta, \phi) \sim e^{i m \phi}
\end{equation*}

of which I'm not entirely sure what they mean.

From looking at the spherical harmonics from a series-solution point of view, the requirement that the eigenvalue be $\lambda = \ell (\ell + 1)$ for some non-negative integer $\ell \ge |m|$ is due to regularity at the poles $\theta = 0, \pi$.

Central potential

Notation

  • $\theta$ is the angle vertical to the xy-plane
  • $\phi$ is the angle in the xy-plane
  • $\mu$ is the mass (avoids confusion with the magnetic quantum number $m$)
  • $\mathbf{r}$ is the position vector
  • $r$ denotes the magnitude of the position vector

Stuff

\begin{equation*}
\Bigg[ - \frac{\hbar^2}{2 \mu} \nabla^2 + V(r) \Bigg] u(\mathbf{r}) = E u(\mathbf{r})
\end{equation*}

In spherical coordinates we write the Laplacian as

\begin{equation*}
\nabla^2 = \frac{1}{r^2} \frac{\partial}{\partial r} \Big( r^2 \frac{\partial}{\partial r} \Big) + \frac{1}{r^2 \sin \theta} \frac{\partial}{\partial \theta} \Big( \sin \theta \frac{\partial}{\partial \theta} \Big) + \frac{1}{r^2 \sin \theta} \frac{\partial^2 }{\partial \phi^2}
\end{equation*}

we can then rewrite this in terms of the angular momentum operator:

\begin{equation*}
\nabla^2 = \frac{1}{r^2} \frac{\partial}{\partial r} \Big( r^2 \frac{\partial}{\partial r} \Big) - \frac{1}{\hbar^2 r^2} \hat{L}^2
\end{equation*}

and thus the TISE becomes

\begin{equation*}
\frac{\hbar^2}{2 \mu} \Big[ - \frac{1}{r^2} \frac{\partial}{\partial r} \Big( r^2 \frac{\partial}{\partial r} \Big) + \frac{1}{\hbar^2 r^2} \hat{L}^2 \Big] u(r, \theta, \phi) = \big(E - V(r) \big) u(r, \theta, \phi)
\end{equation*}
If the Hamiltonian is of the form

\begin{equation*}
\hat{H} = \frac{\hat{p}^2}{2m} + \hat{V}(r)
\end{equation*}

where $\hat{p}^2 = \mathbf{\hat{p}} \cdot \mathbf{\hat{p}}$, then any pair of $\{ \hat{H}, \hat{L}^2, \hat{L}_z \}$ commutes, so there exists a set of simultaneous eigenstates $\{ \ket{n, \ell, m} \}$.

Hydrogen atom

Notation

  • $\mu$ effective mass of system
  • $m_p = 1.7 \times 10^{-27} \text{ kg}$
  • $m_e = 0.9 \times 10^{-30} \text{ kg}$
  • $q = 1.6 \times 10^{-19} \ \text{C}$
  • $V_{\text{eff}} (r ) = - \frac{e^2}{r} + \frac{\hbar^2 \ell (\ell + 1)}{2 \mu r^2}$
  • $a_0 = \frac{\hbar^2}{\mu e^2} = 0.53 \times 10^{-10} \ \text{m} = 0.53 \text{ Å}$
  • $E_I = \frac{\hbar^2}{2 \mu a_0^2} = 13.6 \ \text{eV} = 1 \ \text{Ry}$
  • $r = \rho a_0$, i.e. $\rho$ is the radius measured in units of the Bohr radius $a_0$
  • $\lambda = \sqrt{\frac{- E}{E_I}}$
  • $u_\ell(\rho) = \chi_\ell(a_0 \rho)$

Stuff

Superposition of eigenstates of proton and electron

(Coloumb) potential:

\begin{equation*}
V(r) = - \frac{q^2}{4 \pi \varepsilon_0 r} = - \frac{e^2}{r}
\end{equation*}
\begin{equation*}
\hat{H} = \frac{\hat{P}^2}{2 \mu} + V(r)
\end{equation*}

where the effective mass is

\begin{equation*}
\mu = \frac{m_p m_e}{m_p + m_e} \approx m_e ( 1 - \frac{m_e}{m_p} )
\end{equation*}
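The quoted approximation for the effective mass is easy to verify numerically (a rough sketch; the constants are the SI values from the Notation list):

```python
# The reduced mass of the proton-electron system differs from m_e only
# at the ~0.05% level, since m_e / m_p ~ 5e-4.
m_p = 1.67e-27   # kg
m_e = 9.11e-31   # kg

mu = m_p * m_e / (m_p + m_e)
assert abs(mu / m_e - 1) < 1e-3                       # mu is close to m_e
assert abs(mu - m_e * (1 - m_e / m_p)) / mu < 1e-6    # first-order expansion
```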

The TISE is:

\begin{equation*}
\Big[ - \frac{\hbar^2}{2 \mu} \nabla^2 + V(r) \Big] \Psi(\mathbf{x}) = E \Psi(\mathbf{x})
\end{equation*}

We make the ansatz: $\Psi(r, \theta, \phi) = \frac{\chi_\ell(r)}{r} Y_{\ell}^m (\theta, \phi)$

Thus,

\begin{equation*}
\nabla^2 = \frac{1}{r^2} \frac{\partial }{\partial r} \Big( r^2 \frac{\partial}{\partial r} \Big) - \frac{\hat{L}^2}{\hbar^2 r^2}
\end{equation*}

Thus, we end up with

\begin{equation*}
\Bigg[ - \frac{\hbar^2}{2 \mu}\frac{d^2}{dr^2} - \frac{e^2}{r} + \frac{\hbar^2 \ell (\ell + 1)}{2 \mu r^2} \Bigg] \chi_{\ell}(r) = E \chi_{\ell} (r), \quad \chi_\ell(0) = 0
\end{equation*}

and we let

\begin{equation*}
V_{\text{eff}} (r ) = - \frac{e^2}{r} + \frac{\hbar^2 \ell (\ell + 1)}{2 \mu r^2}
\end{equation*}

We now introduce some "scales" for length and energy:

\begin{equation*}
\begin{split}
  a_0 &= \frac{\hbar^2}{\mu e^2} = 0.53 \times 10^{-10} \ \text{m} = 0.53 \text{ Å} \\
  E_I &= \frac{\hbar^2}{2 \mu a_0^2} = 13.6 \ \text{eV} = 1 \ \text{Ry}
\end{split}
\end{equation*}

where $a_0$ is the Bohr radius. Further, we let

\begin{equation*}
r = \rho a_0, \quad \lambda = \sqrt{\frac{- E}{E_I}}
\end{equation*}
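The numerical values quoted for these scales can be sanity-checked from SI constants (assumed values; $e^2$ is shorthand for $q^2 / 4 \pi \varepsilon_0$ as in the Coulomb potential, and $\mu \approx m_e$ is used):

```python
import math

hbar = 1.0546e-34          # J s
q    = 1.602e-19           # C
eps0 = 8.854e-12           # F / m
m_e  = 9.109e-31           # kg  (mu differs from m_e by ~0.05%)

e2 = q**2 / (4 * math.pi * eps0)          # "e^2" in Gaussian-style shorthand
a0 = hbar**2 / (m_e * e2)                 # Bohr radius
E_I = hbar**2 / (2 * m_e * a0**2)         # Rydberg energy

assert abs(a0 - 0.53e-10) / 0.53e-10 < 0.01    # ~0.53 angstrom
assert abs(E_I / q - 13.6) < 0.1               # ~13.6 eV
```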

Substituting these scales into the TISE deduced earlier:

\begin{equation*}
\Bigg[ - \frac{\hbar^2}{2 \mu} \frac{1}{a_0^2} \frac{d^2}{d \rho^2} + \frac{\hbar^2 \ell (\ell + 1)}{2\mu a_0^2 \rho^2} - \frac{e^2}{a_0 \rho} - E \Bigg] \chi_{\ell}(a_0 \rho) = 0
\end{equation*}
\begin{equation*}
- E_I \Bigg[ \frac{d^2}{d \rho^2} - \frac{\ell (\ell + 1)}{\rho^2} + \frac{2}{\rho} + \frac{E}{E_I} \Bigg] \chi_{\ell} (a_0 \rho) = 0
\end{equation*}

We then introduce another function, of $\rho$ rather than of $r = a_0 \rho$:

\begin{equation*}
u_\ell(\rho) = \chi_\ell(a_0 \rho)
\end{equation*}

which gives us

\begin{equation*}
\Bigg[ \frac{d^2}{d \rho^2} - \frac{\ell (\ell + 1)}{\rho^2} + \frac{2}{\rho} - \lambda^2 \Bigg] u_\ell(\rho) = 0
\label{tise-wrt-u-rho}
\end{equation*}

with boundary conditions: $u_\ell(0) = 0$.

We then consider the boundary condition where $\rho \to \infty$ :

\begin{equation*}
\rho \to \infty : \Bigg[ \frac{d^2}{d \rho^2} - \lambda^2 \Bigg] u_{\ell}(\rho) = 0 \implies u_{\ell}(\rho) = e^{\pm \rho \lambda}
\end{equation*}

But since we need it to be bounded:

\begin{equation*}
u_{\ell}(\rho) \approx e^{-\lambda \rho}
\end{equation*}

Another ansatz: $u_{\ell} (\rho) = e^{- \lambda \rho} y_{\ell}(\rho), \quad y_{\ell}(0) = 0$, which is simply the familiar method of variation of parameters: we tack on another function of $\rho$, the independent variable, to obtain a solution $u_{\ell}$ which is independent of $e^{- \lambda \rho}$.

Plugging this $u_{\ell}(\rho)$ into [tise-wrt-u-rho], we end up with

\begin{equation*}
y_{\ell}''(\rho) - 2 \lambda y_{\ell}' (\rho) + \Bigg[ - \frac{\ell (\ell + 1)}{\rho^2} + \frac{2}{\rho} \Bigg] y_{\ell}(\rho) = 0
\end{equation*}

where due to the behaviour of $\chi_{\ell}$ we have

\begin{equation*}
\rho \to 0 \implies y_{\ell}(\rho) \to \rho^{\ell + 1}
\end{equation*}

Ansatz:

\begin{equation*}
y_{\ell}(\rho) = \rho^{\ell + 1} \sum_{q=0}^{\infty} c_q \rho^q
\end{equation*}

Spin - intrinsic angular momentum

Notation

  • $\ket{s, s_z}$ denotes the simultaneous eigenstate of $\hat{S}^2$ (eigenvalue labelled by $s$) and $\hat{S}_z$ (spin $s_z$ in the z-direction)
  • $\ket{\frac{1}{2}, \pm \frac{1}{2}} = \ket{\pm \frac{1}{2}}$, since the value of $s$ is implied by $s_z$ anyways
  • $\ket{+ \frac{1}{2}} = \ket{\uparrow}$ and $\ket{- \frac{1}{2}} = \ket{\downarrow}$
  • $\psi_{s_z}, \psi_1, \psi_2$ etc. we use to represent the coefficient of the function $\psi(x)$ in some basis
  • when we say a system has spin $s$, we're talking about a system with eigenvalue of $\hat{S}^2$ to be $\hbar^2 \ s (s + 1)$

Stuff

Now that we know

\begin{equation*}
[ \hat{J}_k , \hat{J}_l ] = i \hbar \varepsilon_{klm} \hat{J}_m
\end{equation*}
  • $\hat{J}^2$ eigenvalues: $\hbar^2 j(j + 1), \quad 2j \in \mathbb{N}$
  • $\hat{J}_3$ eigenvalues: $- \hbar j, - \hbar (j - 1), \dots, \hbar j$

From the orbital angular momentum, we have

\begin{equation*}
\hat{L}_k = \varepsilon_{klm} \hat{X}_l \hat{P}_m
\end{equation*}
\begin{equation*}
\hat{L}_3 = - i \hbar \frac{\partial}{ \partial \phi} \implies Y_{l}^m(\theta, \phi) \propto e^{im \phi}
\end{equation*}

In the case of orbital angular momentum, we have the requirement that $m \in \mathbb{Z}$, which implies that $j$ takes integer values only, rather than $2 j \in \mathbb{N}$ as we found in the Angular momentum - Reloaded.

$\hat{L}_1, \hat{L}_2$ are differential operators, which implies that $\hat{L}_{\pm}$ are differential operators too

For a given value of $\ell$, $\hat{L}_+ Y_{\ell}^{\ell} (\theta, \phi) = 0$ since $Y_{\ell}^{\ell} (\theta, \phi)$ has the maximal value $m = \ell$ for a spherical harmonic.

\begin{equation*}
Y_{\ell}^{\ell} (\theta, \phi) = f_{\ell}(\theta) e^{i \ell \phi}
\end{equation*}

Now, using the properties derived for the more general $\hat{J}$ operators, we can deduce all the other spherical harmonics by use of the raising and lowering operators.

Analogously to $\hat{L}$, the spin operator is the vector of operators

\begin{equation*}
\hat{S} = ( \hat{S}_x, \hat{S}_y, \hat{S}_z)
\end{equation*}

And it satisfies the commutation relation

\begin{equation*}
\comm{\hat{S}_j}{\hat{S}_k} = i \hbar \varepsilon_{jk \ell} \hat{S}_{\ell}
\end{equation*}

and

\begin{equation*}
\comm{\hat{S}^2}{\hat{S}_k} = 0
\end{equation*}

Thus, we can construct a basis using the eigenstates of $\hat{S}^2, \hat{S}_z$, i.e. they form a C.S.C.O. (complete set of commuting observables).

And as we've seen earlier, if an operator satisfies the above properties, we have:

  • eigenvalues of $\hat{S}^2$: $\hbar^2 \ s (s + 1)$, $s = \frac{n}{2}, \quad n \in \mathbb{N}$
  • eigenvalues of $\hat{S}_z$ : $- \hbar s, - \hbar (s - 1), \dots, \hbar s$

For a given value of $s$, $\{ \ket{s, s_z} \}$ has $(2s + 1)$ elements

\begin{equation*}
\ket{\psi} = \sum_{s_z = -s}^{+s} \psi_{s_z} \ket{s, s_z}, \quad \psi_{s_z} \in \mathbb{C}
\end{equation*}

And the eigenfunctions have the relations:

\begin{equation*}
\begin{cases}
  \hat{S}^2 \ket{s, s_z} &= \hbar^2 s(s + 1) \ket{s, s_z} \\
  \hat{S}_z \ket{s, s_z} &= \hbar s_z \ket{s, s_z}
\end{cases}
\end{equation*}

Example: system of spin 1 / 2

Since $s = \frac{1}{2}$, we have:

  • eigenvalue of $\hat{S}^2$ is $\hbar^2 \frac{3}{4}$
  • eigenvalue of $\hat{S}_z$ are $\pm \frac{\hbar}{2}$

The basis of the space of the physical states is then

\begin{equation*}
\Bigg\{ \ket{\frac{1}{2}, - \frac{1}{2}}, \ket{\frac{1}{2}, + \frac{1}{2}} \Bigg\}
\end{equation*}

where we use the notation

\begin{equation*}
\ket{\uparrow}, \quad \ket{\downarrow}
\end{equation*}

Thus, any state can be expanded in this basis:

\begin{equation*}
\ket{\psi} = \sum_{s_z = -\frac{1}{2}, \frac{1}{2}} \psi_{s_z} \ket{s_z} = \psi_1 \ket{\uparrow} + \psi_2 \ket{\downarrow}, \quad \psi_1, \psi_2 \in \mathbb{C}
\end{equation*}

and then

  • $|\psi_1|^2$ is the prob. of finding the system in $+ \frac{\hbar}{2}$ measuring $S_z$
  • $|\psi_2|^2$ is the prob. of finding the system in $- \frac{\hbar}{2}$ measuring $S_z$

TODO Pauli matrices

Since we can write

\begin{equation*}
\hat{S}_x = \frac{\hat{S}_+ + \hat{S}_-}{2}, \quad \hat{S}_y = \frac{\hat{S}_+ - \hat{S}_-}{2i}
\end{equation*}
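The TODO above can be made concrete (a hedged sketch with $\hbar = 1$): for $s = \frac{1}{2}$, building $\hat{S}_\pm$ in the basis $\{ \ket{\uparrow}, \ket{\downarrow} \}$ and forming $\hat{S}_x, \hat{S}_y$ as above recovers $S_k = \frac{1}{2} \sigma_k$, with $\sigma_k$ the Pauli matrices.

```python
import numpy as np

# In the ordering (|up>, |down>): S+|down> = |up>, S-|up> = |down>,
# with coefficient sqrt(s(s+1) - m(m+1)) = 1 for s = 1/2.
Sp = np.array([[0, 1], [0, 0]], dtype=complex)
Sm = Sp.conj().T

Sx = (Sp + Sm) / 2
Sy = (Sp - Sm) / (2 * 1j)
Sz = np.diag([0.5, -0.5]).astype(complex)

sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

assert np.allclose(2 * Sx, sigma_x)
assert np.allclose(2 * Sy, sigma_y)
assert np.allclose(2 * Sz, sigma_z)
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz)   # [S_x, S_y] = i S_z
```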

Addition of angular momenta

Notation

  • $\hat{J}$, without integer subscripts refers to the total angular momentum , and thus $\hat{J}_z$ refers to the z-component of this total angular momentum
  • $\hat{J}_{1z}$ refers to the z-component of the angular momentum indexed by, well, 1
  • $\hat{J}_1^2$ refers to the square of the angular momentum indexed by 1

Stuff

This excerpt from some book might be a really good way of getting a better feel of all of this

In this section we deal with the case of having two angular momenta!

For example, we might wish to consider an electron which has both an intrinsic spin and some orbital angular momentum, as in a real hydrogen atom.

Total angular momentum operator
\begin{equation*}
\mathbf{\hat{J}} = \mathbf{\hat{J}}_1 + \mathbf{\hat{J}}_2
\end{equation*}

where we assume $\mathbf{\hat{J}_1}$ and $\mathbf{\hat{J}_2}$ are independent angular momenta, meaning each satisfies the usual angular momentum commutation relations

\begin{equation*}
\comm{\hat{J}_{nx}}{\hat{J}_{ny}} = i \hbar \hat{J}_{nz}, \quad \comm{\hat{J}_n^2}{\hat{J}_{ni}} = 0
\end{equation*}

where

  • $n = 1, 2$ labels the individual angular momenta
  • $i = x, y, z$, with the relations holding for all cyclic permutations of $(x, y, z)$.

Furthermore, any component of $\mathbf{\hat{J}}_1$ commutes with any component of $\mathbf{\hat{J}}_2$:

\begin{equation*}
\comm{\hat{J}_{1i}}{\hat{J}_{2k}} = 0, \quad i, k = x, y, z
\end{equation*}

so that the two angular momenta are compatible.

It follows that the four operators $\hat{J}_1^2, \hat{J}_{1z}, \hat{J}_2^2, \hat{J}_{2z}$ are mutually commuting and thus must possess a common eigenbasis!

This common eigenbasis is known as the uncoupled basis and is denoted $\{ \ket{j_1, m_1, j_2, m_2} \}$.

It has the following properties:

\begin{equation*}
\begin{cases}
  \hat{J}_1^2 \ket{j_1, m_1, j_2, m_2} &= j_1 (j_1 + 1) \hbar^2 \ket{j_1, m_1, j_2, m_2} \\
  \hat{J}_{1z} \ket{j_1, m_1, j_2, m_2} &= m_1 \hbar \ket{j_1, m_1, j_2, m_2} \\
  \hat{J}_2^2 \ket{j_1, m_1, j_2, m_2} &= j_2 (j_2 + 1) \hbar^2 \ket{j_1, m_1, j_2, m_2} \\
  \hat{J}_{2z} \ket{j_1, m_1, j_2, m_2} &= m_2 \hbar \ket{j_1, m_1, j_2, m_2}
\end{cases}
\end{equation*}

The allowed values of the total angular momentum quantum number $j$, given two angular momenta corresponding to the quantum numbers $j_1$ and $j_2$ are:

\begin{equation*}
j = j_1 + j_2, j_1 + j_2 - 1, \dots, |j_1 - j_2|
\end{equation*}

and for each of these values of $j$, we have $m$ take on the $(2j + 1)$ values

\begin{equation*}
m = -j, - (j - 1), \dots, j - 1, j
\end{equation*}
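A small consistency check of this addition rule (the helper `allowed_j` is ours, not from the text): the coupled and uncoupled bases must have the same dimension.

```python
# dim(coupled) = sum over allowed j of (2j + 1) must equal
# dim(uncoupled) = (2 j1 + 1)(2 j2 + 1).
from fractions import Fraction

def allowed_j(j1, j2):
    """Allowed total-j values |j1 - j2|, ..., j1 + j2 in integer steps."""
    lo, hi = abs(Fraction(j1) - Fraction(j2)), Fraction(j1) + Fraction(j2)
    return [lo + k for k in range(int(hi - lo) + 1)]

for j1, j2 in [(Fraction(1, 2), Fraction(1, 2)), (1, Fraction(1, 2)), (2, 1)]:
    js = allowed_j(j1, j2)
    assert sum(2 * j + 1 for j in js) == (2 * j1 + 1) * (2 * j2 + 1)
```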

The z-component of the total angular momentum operator $\hat{J}_z$ commutes with $\hat{J}_1^2$ and $\hat{J}_2^2$, thus the set of four operators $\{ \hat{J}^2, \hat{J}_z, \hat{J}_1^2, \hat{J}_2^2 \}$ are also a mutually commuting set of operators with a common basis known as the coupled basis, denoted $\{ \ket{j, m, j_1, j_2} \}$ and satisfying

\begin{equation*}
\begin{cases}
  \hat{J}^2 \ket{j, m, j_1, j_2} &= j (j + 1) \hbar^2 \ket{j, m, j_1, j_2} \\
  \hat{J}_z \ket{j, m, j_1, j_2} &= m \hbar \ket{j, m, j_1, j_2} \\
  \hat{J}_1^2 \ket{j, m, j_1, j_2} &= j_1( j_1 + 1) \hbar^2 \ket{j, m, j_1, j_2} \\
  \hat{J}_2^2 \ket{j, m, j_1, j_2} &= j_2 (j_2 + 1) \hbar^2 \ket{j, m, j_1, j_2}
\end{cases}
\end{equation*}

these are states of definite total angular momentum and definite z-component of total angular momentum but not in general states with definite $\hat{J}_{1z}$ or $\hat{J}_{2z}$.

In fact, these states are expressible as linear combinations of the states of the uncoupled basis, with coefficients known as Clebsch-Gordan coefficients.

I think what they mean by "states of definite angular momentum…" are the states which are coupled in a way that one eigenvalue decides the other?

TODO Example: two half-spin particles

Suppose we have two half-spin particles (e.g. electrons) with spin quantum numbers $s_1 = \frac{1}{2}$ and $s_2 = \frac{1}{2}$.

According to the addition rule above, the total spin quantum number $s$ takes on the values from $|s_1 - s_2| = 0$ up to $s_1 + s_2 = 1$.

Thus, two electrons can only have a total spin:

  • $s = 1$ called the triplet states, for which there are three possible values of the spin magnetic quantum number $m_s = -1, 0, 1$
  • $s = 0$ called the singlet state, for which there is only a single possible value $m_s = 0$

Let's denote the elements of the uncoupled basis as

\begin{equation*}
\alpha_1 \alpha_2, \quad \alpha_1 \beta_2, \quad \beta_1 \alpha_2, \quad \beta_1 \beta_2
\end{equation*}

where the subscripts 1 and 2 refer to electron 1 and 2, respectively. The operators $\hat{S}_1^2$ and $\hat{S}_{1z}$ act only on the parts labelled 1, and so on.

If we let $\alpha_1 \alpha_2$ be the state which has $m_{s_1} = \frac{1}{2}$ and $m_{s_2} = \frac{1}{2}$ then it must have $m_s = 1$, i.e. total z-component of spin $\hbar$, and can therefore only be $s = 1$ and not $s = 0$.

The eigenstates end up being:

\begin{equation*}
\begin{split}
  \ket{s = 1, m_s = 1, s_1 = \frac{1}{2}, s_2 = \frac{1}{2}} &= \alpha_1 \alpha_2 \\
  \ket{s = 1, m_s = - 1, s_1 = \frac{1}{2}, s_2 = \frac{1}{2}} &= \beta_1 \beta_2 \\
  \ket{s = 1, m_s = 0, s_1 = \frac{1}{2}, s_2 = \frac{1}{2}} &= \frac{1}{\sqrt{2}} [ \alpha_1 \beta_2 + \beta_1 \alpha_2 ] \\
  \ket{s = 0, m_s = 0, s_1 = \frac{1}{2}, s_2 = \frac{1}{2}} &= \frac{1}{\sqrt{2}} [ \alpha_1 \beta_2 - \beta_1 \alpha_2 ]
\end{split}
\end{equation*}
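These four combinations can be verified numerically (a sketch with $\hbar = 1$): build the total-spin operators via Kronecker products and apply $\hat{S}^2$ to the triplet and singlet states.

```python
import numpy as np

# Single spin-1/2 operators in the (alpha, beta) = (|up>, |down>) basis.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Total spin S_k = S_k (x) 1 + 1 (x) S_k, and S^2 = sum_k S_k^2.
S = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(Sk @ Sk for Sk in S)

a, b = np.array([1, 0]), np.array([0, 1])              # alpha, beta
triplet0 = (np.kron(a, b) + np.kron(b, a)) / np.sqrt(2)
singlet  = (np.kron(a, b) - np.kron(b, a)) / np.sqrt(2)

assert np.allclose(S2 @ triplet0, 1 * (1 + 1) * triplet0)   # s = 1
assert np.allclose(S2 @ singlet, 0 * singlet)               # s = 0
```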

Identical Particles

Stuff

Systems of identical particles with integer spin $(s = 0, 1, 2, \dots)$, known as bosons, have wave functions which are symmetric under interchange of any pair of particle labels (i.e. swapping the states of, say, two particles).

The wave function is said to obey Bose-Einstein statistics .

Systems of identical particles with half-odd-integer spins $(s = \frac{1}{2}, \frac{3}{2}, \dots, )$ known as fermions , have wave functions which are antisymmetric under interchange of any pair of particle labels.

The wave function is said to obey Fermi-Dirac statistics .

In the simplest model of the helium atom, the Hamiltonian is

\begin{equation*}
\hat{H} = \hat{H}_1 + \hat{H}_2 + \frac{e^2}{4 \pi \epsilon_0 \ |\mathbf{r}_1 - \mathbf{r}_2|}
\end{equation*}

where

\begin{equation*}
\hat{H}_i = \frac{\mathbf{\hat{P}}_i^2}{2 \mu} - \frac{2e^2}{4 \pi \epsilon_0 r_i}
\end{equation*}

This is symmetric under permutation of the indices 1 and 2 which label the two electrons. This must be the case if the two electrons are identical or indistinguishable: it cannot matter which particle we label 1 and which we label 2.

Due to the symmetry condition, we have

\begin{equation*}
\hat{H}(1, 2) = \hat{H}(2, 1)
\end{equation*}

Suppose that

\begin{equation*}
\hat{H}(1, 2) \psi(1, 2) = E \psi(1, 2), \quad \hat{H}(2, 1) \psi(2, 1) = E \psi(2, 1)
\end{equation*}

so we conclude that $\psi(1, 2)$ and $\psi(2, 1)$ are both eigenfunctions belonging to the same eigenvalue $E$. Further, so is any linear combination of the two!

In particular, the normalised symmetric and antisymmetric combinations

\begin{equation*}
\psi_\pm = \frac{1}{\sqrt{2}} \big( \psi(1, 2) \pm \psi(2, 1) \big)
\end{equation*}

are eigenfunctions belonging to the eigenvalue $E$.

If we introduce the particle interchange operator, $P_{12}$, with the property that

\begin{equation*}
P_{12} \psi(1, 2) = \psi(2, 1)
\end{equation*}

then the symmetric and antisymmetric combinations are eigenfunctions of $P_{12}$ with eigenvalues $\pm 1$ respectively:

\begin{equation*}
P_{12} \psi_\pm = \pm \psi_\pm
\end{equation*}

Since $\psi_\pm$ are simultaneous eigenfunctions of $\hat{H}$ and $P_{12}$ it follows that:

\begin{equation*}
\comm{\hat{H}}{P_{12}} = 0
\end{equation*}

Fourier Transforms and Uncertainty Relations

This is heavily inspired by a post from mathpages.

Notation

  • $Q = \int_{-\infty}^{\infty} e^{-x^2} \ dx$
  • $\ket{a}$ is such that $\hat{A} \ket{a} = a \ket{a}$, i.e. the ket corresponding to the eigenvalue $a$

Stuff

Gaussian integral
  • Basic

    Taking the square of the integral $Q$ we get

    \begin{equation*}
Q^2 = \Bigg( \int_{-\infty}^{\infty} e^{- x^2 } \dd x \Bigg) \Bigg( \int_{-\infty}^{\infty} e^{- y^2 } \dd y \Bigg) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{- (x^2 + y^2)} \dd{x} \dd{y}
\end{equation*}

    Reparametrizing in polar coordinates

    \begin{equation*}
x = r \cos \theta, \quad y = r \sin \theta, \quad r^2 = x^2 + y^2
\end{equation*}

    which gives us the Jacobian transformation

    \begin{equation*}
\begin{vmatrix}
  \pdv{x}{r} & \pdv{x}{\theta} \\
  \pdv{y}{r} & \pdv{y}{\theta}
\end{vmatrix} = 
\begin{vmatrix}
  \cos \theta & - r \sin \theta \\
  \sin \theta & r \cos \theta
\end{vmatrix} = r
\end{equation*}

    so the incremental area element is

    \begin{equation*}
d A = dx \ dy = r dr \ d \theta
\end{equation*}

    Hence the double-integral above becomes

    \begin{equation*}
Q^2 = \int_{0}^{2 \pi} \int_0^{\infty} e^{-r^2} \ rdr \ d \theta
\end{equation*}

    and since

    \begin{equation*}
\int x e^{- x^2} \ dx = - \frac{1}{2} e^{-x^2}
\end{equation*}

    we get

    \begin{equation*}
Q^2 = \int_0^{2 \pi} \frac{1}{2} d \theta = \pi
\end{equation*}

    Finally giving us

    \begin{equation*}
Q = \sqrt{\pi}
\end{equation*}
  • More general

    If we instead consider an arbitrary quadratic exponent $a x^2 - bx + c$, then completing the square suggests the substitution

    \begin{equation*}
y = x \sqrt{a} - \frac{b}{2 \sqrt{a}}
\end{equation*}

    in terms of which we can write

    \begin{equation*}
\int_{-\infty}^{\infty} e^{- ax^2 + bx - c} \ dx = \frac{e^{(b^2 - 4ac) / 4a}}{\sqrt{a}} \int_{-\infty}^{\infty} e^{-y^2} \ dy
\end{equation*}
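A numeric spot-check of this completed-square formula, for one arbitrary (real) choice of $a, b, c$:

```python
import numpy as np

a, b, c = 2.0, 1.0, 0.3                     # arbitrary, a > 0

# Crude Riemann sum; the integrand has decayed to ~0 well inside [-20, 20].
x = np.linspace(-20.0, 20.0, 400001)
integrand = np.exp(-a * x**2 + b * x - c)
integral = integrand.sum() * (x[1] - x[0])

closed_form = np.sqrt(np.pi / a) * np.exp((b**2 - 4 * a * c) / (4 * a))
assert abs(integral - closed_form) / closed_form < 1e-6
```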
Fourier transform of a normal probability density

For any function $f(x)$ we have the relation

\begin{equation*}
\begin{split}
  F(y) &= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(x) e^{ixy} \dd x \\ 
  f(x) &= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} F(y) e^{-ixy} \dd y
\end{split}
\end{equation*}

Now, if $f(x)$ is a normal probability density, we have

\begin{equation*}
f(x) = \frac{1}{\sqrt{2 \pi} \sigma} e^{- (x - \mu)^2 / 2 \sigma^2}
\end{equation*}

which has the Fourier transform

\begin{equation*}
F(y) = \frac{1}{2 \pi \sigma} \int_{-\infty}^{\infty} e^{-(x - \mu)^2 / 2 \sigma^2 + (ixy)} \dd x
\end{equation*}

Where we can write the exponent in the integral of the form $- ax^2 + bx - c$ with

  • $a = \frac{1}{2\sigma^2}$
  • $b = iy + \frac{\mu}{\sigma^2}$
  • $c = \frac{\mu^2}{2\sigma^2}$

hence the Fourier transform of the normal density function is

\begin{equation*}
F(y) = e^{- \mu^2 / (2\sigma^2)} \frac{1}{\frac{1}{\sigma} \sqrt{2 \pi}} e^{- \frac{( y - i \mu / \sigma^2)^2}{2 / \sigma^2} }
\end{equation*}

If we then choose our scales so that the mean of $f$ is zero, i.e. so that $\mu = 0$, the expression reduces to

\begin{equation*}
F(y) = \frac{1}{\frac{1}{\sigma} \sqrt{2 \pi}} e^{- \frac{y^2}{2 / \sigma^2}}
\end{equation*}

which means that the Fourier transform of a normal distribution with mean $0$ and variance $\sigma^2$ is a normal distribution with mean $0$ and variance $\frac{1}{\sigma^2}$.
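This variance relation can be illustrated numerically (a sketch; $\sigma = 0.5$ is an arbitrary choice): discretize the Fourier transform of a zero-mean normal density and compare the two variances.

```python
import numpy as np

sigma = 0.5                                   # assumed; any sigma > 0 works
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

y = np.linspace(-16.0, 16.0, 2001)
# F(y) = (1/sqrt(2 pi)) * integral of f(x) e^{ixy} dx, as a Riemann sum
F = (np.exp(1j * np.outer(y, x)) @ f) * dx / np.sqrt(2 * np.pi)
F = F.real                                    # imaginary part vanishes for mu = 0

var_f = np.sum(x**2 * f) * dx                 # both means are 0 by symmetry
var_F = np.sum(y**2 * F) / np.sum(F)          # normalise F to unit area

assert abs(var_f - sigma**2) < 1e-4           # var(f) = sigma^2
assert abs(var_f * var_F - 1.0) < 1e-3        # var(f) var(F) = 1
```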

Which tells us that the variances of the distributions $f$ and $F$ satisfy the uncertainty relation:

\begin{equation*}
\text{var} (f) \ \text{var} (F) = 1
\end{equation*}

This is the limiting case of the general inequality on the product of variances of Fourier transform pairs: for any probability density function $f(x)$ and its Fourier Transform $F(y)$ we have

\begin{equation*}
\text{var}(f) \ \text{var}(F) \ge 1
\end{equation*}

$f$ and $F$ are two different ways of characterizing the same distribution, one in the amplitude domain and one in the frequency domain. Given either of these, the other is completely determined.

Remember, due to the Central Limit Theorem, the sample mean of i.i.d. realizations of any random variable with finite variance follows a Normal distribution

Hence, the equality above is saying that if we have an infinite number of realizations of some random variable $X$, then the variance of this will be bounded below by the inverse of the variance of the Fourier Transform.

Hmm, I'm not 100% sure about all this. How can we for sure know that the Fourier Transform of the probability density will have a sample variance which decreases quicker than the actual density function $f$?

Why can we guarantee that the variance of the Fourier transform does not decrease faster than the variance of the density $f$?

We're saying that the random variable whose sample average converges to $f$ has a variance of $\text{var}(f)$, which is fine. Then we're saying that some random variable whose sample average converges to $F$ has a variance of $\text{var}(F)$, which is also fine. Then, we're saying that the variance of this random variable is always going to be greater than the limiting variances for these random variables, which you really can't say.

Application to Quantum mechanics

The canonical commutation relation is the fundamental relation between canonically conjugate quantities, i.e. quantities related by definition such that one is the Fourier transform of the other. For two operators which are canonical conjugates, we have

\begin{equation*}
\comm{\hat{A}}{\hat{B}} = \hat{A} \hat{B} - \hat{B} \hat{A} = i \hbar
\end{equation*}

Equivalently, if we have the above commutation relation between two Hermitian operators, then they form a Fourier Transform pair.

If we take the commutation relation the other way around, the sign of $i \hbar$ changes, i.e. it's "antisymmetric" in a sense.

Now, if we then take some state $\ket{\psi}$ and compute the probability amplitudes that the measurements corresponding to the operators $\hat{A}$ and $\hat{B}$ will return the eigenvalues $a$ and $b$ respectively, then we find

\begin{equation*}
\begin{split}
  \braket{a}{\psi} &= \frac{1}{\hbar} \int_{-\infty}^{\infty} \braket{b}{\psi} e^{-iab / \hbar} \dd b \\
  \braket{b}{\psi} &= \frac{1}{\hbar} \int_{-\infty}^{\infty} \braket{a}{\psi} e^{iab / \hbar} \dd a
\end{split}
\end{equation*}

where $\bra{a}$ is the bra corresponding to the eigenstate $\ket{a}$ with the eigenvalue $a$ of the operator $\hat{A}$.

That is; the probability amplitude distributions of two conjugate variables are simply the (suitably scaled) Fourier transforms of each other.

We saw earlier that the variances of two density distributions that comprise a Fourier transform pair satisfy the variance inequality:

\begin{equation*}
\text{var} (\hat{A} ) \ \text{var} (\hat{B}) \ge 1
\end{equation*}
Heisenberg Uncertainty Principle

How is this interesting? Well, as it turns out, the position operator $\hat{X}$ and the momentum operator $\hat{P}$ have the commutation relation

\begin{equation*}
\comm{\hat{X}}{\hat{P}} = i \hbar
\end{equation*}

which then tells us that they are conjugate variables, i.e. the probability amplitude distributions of the two are scaled Fourier transforms of each other! This means they satisfy the following inequality

\begin{equation*}
\text{var}( \hat{X} ) \ \text{var}( \hat{P} ) \ge 1
\end{equation*}

which is just the Heisenberg uncertainty principle!

Bra, kets, and matrices

Time-independent Perturbation Theory

Notation

  • $\hat{H}_0$ is the Hamiltonian corresponding to the unperturbed / exactly solvable system
  • $E_n^{(0)}$ and $\ket{n}$ represent the n-th eigenvalue and eigenfunction of the unperturbed Hamiltonian
  • $E_n^{(i)}$ and $\ket{n^{(i)}}$ are the energy and ket of the i-th perturbation term
  • $E_n$ and $\ket{n}$ are the corrected (perturbed with all the terms) eigenvalue and eigenfunction, i.e.
  • $E_n = E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \dots$
  • $\ket{n} = \ket{n^{(0)}} + \lambda \ket{n^{(1)}} + \lambda^2 \ket{n^{(2)}} + \dots$
  • $H_{kn}' = \mel{k^{(0)}}{\hat{H}'}{n^{(0)}}$
  • $\hat{J} = \hat{L} + \hat{S}$ denote the total angular momentum (rather than a general ang. mom. as we've used it for earlier)

Overview

  • Few problems in Quantum Theory can be solved exactly
    • Helium atom: inter-electron electrostatic repulsion term in $\hat{H}$ changes the problem into one which cannot be solved analytically
  • Perturbation Theory provides a method for finding approx. energy eigenvalues and eigenstates for a system whose Hamiltonian is of the form

    \begin{equation*}
  \hat{H} = \hat{H}_0 + \hat{H}'
\end{equation*}

    where $\hat{H}_0$ is the Hamiltonian of an exactly solvable system, for which we know the eigenvalues, $E_n^{(0)}$, and eigenstates, $\ket{n^{(0)}}$, and $\hat{H}'$ is a small, time-independent perturbation.

Does not converge

Perturbation theory does not actually guarantee convergence, i.e. that each term is smaller than the previous.

But the series is often Borel summable; Borel summation is a method for summing up divergent series.

Short and sweet

Let

\begin{equation*}
\hat{H} = \hat{H}_0 + \lambda \hat{H}'
\end{equation*}

where:

  • $\lambda$ is a real parameter used for convenience
  • $\hat{H}'$ is some small perturbation to the Hamiltonian, e.g. $x^4$

Then, the corrected eigenvalues and eigenfunctions are going to be given by:

\begin{equation*}
E_n^{(1)} = \mel{n^{(0)}}{\hat{H}'}{n^{(0)}}
\end{equation*}
\begin{equation*}
\ket{n^{(1)}} = \sum_{k \neq n} \ket{k^{(0)}} \bra{k^{(0)}}\ket{n^{(1)}} = \sum_{k \neq n} \ket{k^{(0)}}  \frac{\mel{k^{(0)}}{\hat{H}'}{n^{(0)}}}{E_n^{(0)} - E_k^{(0)}}
\end{equation*}
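These formulas are easy to sanity-check numerically: take a diagonal $\hat{H}_0$ and a random Hermitian $\hat{H}'$ (both hypothetical, chosen just for illustration) and compare the first-order energy against exact diagonalization. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Unperturbed Hamiltonian: diagonal with a non-degenerate spectrum E_n^(0)
E0 = np.arange(N, dtype=float)
H0 = np.diag(E0)

# Hypothetical small Hermitian perturbation H'
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Hp = (A + A.conj().T) / 2
lam = 1e-3

# First-order energy correction in the unperturbed eigenbasis:
#   E_n^(1) = <n0| H' |n0>
n = 2
E1 = Hp[n, n].real

# Compare against exact diagonalization of H0 + lam H'
E_exact = np.linalg.eigvalsh(H0 + lam * Hp)[n]
residual = E_exact - (E0[n] + lam * E1)   # should be O(lam^2)
print(residual)
```

The residual should scale as $\lambda^2$, which is exactly the size of the first neglected term in the series.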

"My" version (first order mainly)

It is convenient to consider the related problem of a system with Hamiltonian

\begin{equation*}
\hat{H} = \hat{H}_0 + \lambda \hat{H}'
\end{equation*}

where:

  • $\lambda$ is a real parameter used for convenience
  • $\hat{H}'$ is some small perturbation to the Hamiltonian, e.g. $x^4$

The eigenvalue problem we're trying to solve is then:

\begin{equation*}
\hat{H} \ket{n} = E_n \ket{n}
\end{equation*}

Assume $\hat{H}$ and $\hat{H}_0$ possess discrete, non-degenerate eigenvalues only, and write:

\begin{equation*}
\hat{H}_0 \ket{n^{(0)}} = E_n^{(0)} \ket{n^{(0)}}
\end{equation*}

where $\ket{n^{(0)}}$ are orthonormal.

The effect of the perturbation on the eigenstates and eigenvalues is defined by the following maps:

\begin{equation*}
\begin{split}
  \ket{n^{(0)}} &\mapsto \ket{n} \\
  E_n^{(0)} &\mapsto E_n
\end{split}
\end{equation*}

where

\begin{equation*}
\hat{H} \ket{n} = E_n \ket{n}
\end{equation*}

Thus, we solve the full eigenvalue problem by assuming we can expand $E_n$ and $\ket{n}$ in a series as follows:

\begin{equation*}
\begin{split}
  E_n &= E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \dots \\
  \ket{n} &= \ket{n^{(0)}} + \lambda \ket{n^{(1)}} + \lambda^2 \ket{n^{(2)}} + \dots
\end{split}
\end{equation*}

where the correction terms $E_n^{(1)}, E_n^{(2)}, \dots$ and $\ket{n^{(1)}}, \ket{n^{(2)}}, \dots$ are of successively higher order of "smallness": the power of $\lambda$ keeps track of this for us.

The correction terms $\ket{n^{(1)}}, \ket{n^{(2)}}, \dots$ are not normalized (by default)!

Substitute these into the equation for the full Hamiltonian (can be seen above):

\begin{equation*}
\begin{split}
  & \big( \textcolor{green}{\underbrace{\hat{H}_0}_{\text{unperturbed}}} + \lambda \hat{H}' \big) \bigg( \textcolor{green}{\underbrace{\ket{n^{(0)}}}_{\text{unperturbed}}} + \lambda \underbrace{\ket{n^{(1)}}}_{\text{perturbed}}  + \lambda^2 \underbrace{\ket{n^{(2)}}}_{\text{perturbed}} + \dots \bigg) \\
  = \Big( & \textcolor{green}{E_n^{(0)}} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \dots \Big) \bigg( \textcolor{green}{\ket{n^{(0)}}} + \lambda \ket{n^{(1)}} + \lambda^2 \ket{n^{(2)}} + \dots \bigg)
\end{split}
\end{equation*}

and equating terms of the same degree in $\lambda$ gives:

\begin{eqnarray*}
  \lambda^0: & \textcolor{green}{\hat{H}_0 \ket{n^{(0)}}} & = \textcolor{green}{E_n^{(0)} \ket{n^{(0)}}} \\
  \lambda^1: & \Big( \textcolor{green}{\hat{H}_0} - \textcolor{green}{E_n^{(0)}} \Big) \ket{n^{(1)}} & = \Big( E_n^{(1)} - \hat{H}' \Big) \textcolor{green}{\ket{n^{(0)}}} \\
  \lambda^2: & \Big( \textcolor{green}{\hat{H}_0} - \textcolor{green}{E_n^{(0)}} \Big) \ket{n^{(2)}} & = \Big( E_n^{(1)} - \hat{H}' \Big) \ket{n^{(1)}} + E_n^{(2)} \textcolor{green}{\ket{n^{(0)}}}
\end{eqnarray*}

Now let's, for the heck of it, take the inner product of the $\lambda^1$ equation with some arbitrary unperturbed state $\ket{k^{(0)}}$:

\begin{equation*}
\textcolor{green}{\bra{k^{(0)}}} \Big( \textcolor{green}{\hat{H}_0} - \textcolor{green}{E_n^{(0)}} \Big) \ket{n^{(1)}} = \textcolor{green}{\bra{k^{(0)}}} \Big( E_n^{(1)} - \hat{H}' \Big) \textcolor{green}{\textcolor{green}{\ket{n^{(0)}}}}
\end{equation*}

which (due to $\hat{H}_0$ being Hermitian), becomes

\begin{equation*}
\Big( \textcolor{green}{E_k^{(0)}} - \textcolor{green}{E_n^{(0)}} \Big) \bra{k^{(0)}}\ket{n^{(1)}} = E_n^{(1)} \textcolor{green}{\bra{k^{(0)}}\ket{n^{(0)}}} - \mel{k^{(0)}}{\hat{H}'}{n^{(0)}}
\end{equation*}

Let's first consider the first-order correction to the energy (then we'll do the eigenstate afterwards) by letting $k = n$ (since $k$ is arbitrary):

\begin{equation*}
\Big( \textcolor{green}{E_n^{(0)}} - \textcolor{green}{E_n^{(0)}} \Big) \bra{n^{(0)}}\ket{n^{(1)}} = E_n^{(1)} - \mel{n^{(0)}}{\hat{H}'}{n^{(0)}}
\end{equation*}

LHS vanishes, and we're left with

\begin{equation*}
E_n^{(1)} = \mel{n^{(0)}}{\hat{H}'}{n^{(0)}}
\end{equation*}

which is the expression for the energy correction (of first order)!

To obtain the correction to the eigenfunction itself, we explore the following: $k \ne n$,

\begin{equation*}
\Big( \textcolor{green}{E_k^{(0)}} - \textcolor{green}{E_n^{(0)}} \Big) \bra{\textcolor{green}{k^{(0)}}}\ket{n^{(1)}} = E_n^{(1)} \textcolor{green}{\bra{k^{(0)}}\ket{n^{(0)}}} - \mel{\textcolor{green}{k^{(0)}}}{\hat{H}'}{\textcolor{green}{n^{(0)}}}
\end{equation*}

where $\textcolor{green}{\bra{k^{(0)}}\ket{n^{(0)}}} = 0$ when $k \ne n$, thus

\begin{equation*}
\bra{\textcolor{green}{k^{(0)}}}\ket{n^{(1)}} = - \frac{\mel{\textcolor{green}{k^{(0)}}}{\hat{H}'}{\textcolor{green}{n^{(0)}}}}{\textcolor{green}{\textcolor{green}{E_k^{(0)}}} - \textcolor{green}{E_n^{(0)}}} = \frac{\mel{\textcolor{green}{k^{(0)}}}{\hat{H}'}{\textcolor{green}{n^{(0)}}}}{\textcolor{green}{E_n^{(0)}} - \textcolor{green}{E_k^{(0)}}}
\end{equation*}

Hence, we can simply expand $\ket{n^{(1)}}$ in the basis $\{ \ket{k^{(0)}} \}$ with $k \ne n$:

\begin{equation*}
\ket{n^{(1)}} = \sum_{k \ne n} \ket{\textcolor{green}{k^{(0)}}} \bra{\textcolor{green}{k^{(0)}}}\ket{n^{(1)}} = \sum_{k \ne n} \frac{\mel{\textcolor{green}{k^{(0)}}}{\hat{H}'}{\textcolor{green}{n^{(0)}}}}{\textcolor{green}{E_n^{(0)}} - \textcolor{green}{E_k^{(0)}}} \ket{\textcolor{green}{k^{(0)}}}
\end{equation*}

Aaaand we have our expression for the eigenstate correction (of first order)!

Preservation of orthogonality

Let's consider the inner product $\bra{n}\ket{k}$

\begin{equation*}
\begin{split}
  \ket{k} &= \ket{k^{(0)}} + \dots + \frac{H_{nk}'}{E_k^{(0)} - E_n^{(0)}} \textcolor{green}{\ket{n^{(0)}}} + \dots \\
  \ket{n} &= \textcolor{green}{\ket{n^{(0)}}} + \dots + \frac{H_{kn}'}{E_n^{(0)} - E_k^{(0)}} \ket{k^{(0)}} + \dots
\end{split}
\end{equation*}

where we've picked out the terms which will be non-zero in $\bra{k}\ket{n}$. Using Hermiticity, $(H_{nk}')^* = H_{kn}'$, the two surviving contributions have opposite sign, thus

\begin{equation*}
\bra{k}\ket{n} = H_{kn}' \Bigg( \frac{1}{E_k^{(0)} - E_n^{(0)}} + \frac{1}{E_n^{(0)} - E_k^{(0)}} \Bigg) = 0
\end{equation*}

Thus orthogonality is preserved (for first order approximation).

Notes

  • Corrected $\ket{n}$ needs to be normalized, thus we better have

    \begin{equation*}
  | H_{kn}' | \ll |E_n^{(0)} - E_k^{(0)}|, \quad \forall k \ne n
\end{equation*}
  • That is, we require the level shift to be small compared to the level spacing in the unperturbed system.

Example: Potential well

\begin{equation*}
V(x) = \infty, \quad |x| > a, \quad V(x) = V_0 \cos(\frac{\pi x}{2a}), \quad |x| \le a
\end{equation*}

with

\begin{equation*}
u_n^{(0)} = \frac{1}{\sqrt{a}}
\begin{cases}
  \cos(\frac{\pi n x}{2a}), & n \text{ odd} \\
  \sin(\frac{\pi n x}{2a}), & n \text{ even}
\end{cases}
\end{equation*}

where $V_0 \ll E_2^{(0)} - E_1^{(0)}$,

\begin{equation*}
E_n^{(0)} = \frac{\pi^2 \hbar^2 n^2}{8ma^2}, \quad n = 1, 2, 3, \dots
\end{equation*}

Thus, the first order perturbation correction is

\begin{equation*}
\Delta E = E_1^{(1)} = H_{11}' = \int_{-a}^{a} u_1^{(0)} \hat{H}' u_1^{(0)} \ dx
\end{equation*}
\begin{equation*}
\Delta E = \frac{V_0}{a} \int_{-a}^{a} \cos^3 \bigg(\frac{\pi x}{2a}\bigg) \ dx = \frac{8}{3 \pi} V_0 \approx 0.85 V_0
\end{equation*}
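The $0.85 V_0$ factor is just $8/(3\pi)$; a quick midpoint-rule check (with $a = 1$, energies in units of $V_0$):

```python
import math

# Delta E = (V0 / a) * integral_{-a}^{a} cos^3(pi x / (2a)) dx, in units of V0
a = 1.0
N = 200_000
dx = 2 * a / N
# midpoint rule over [-a, a]
integral = sum(math.cos(math.pi * (-a + (i + 0.5) * dx) / (2 * a)) ** 3
               for i in range(N)) * dx
ratio = integral / a
print(ratio)   # 8 / (3 pi), approximately 0.8488
```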

And the second order perturbation correction is

\begin{equation*}
\begin{split}
  \bra{k^{(0)}} (\hat{H}_0 - E_n^{(0)}) \ket{n^{(2)}} &= \bra{k^{(0)}} \big( E_n^{(1)} - \hat{H}' \big) \ket{n^{(1)}} + E_n^{(2)} \bra{k^{(0)}}\textcolor{green}{\ket{n^{(0)}}} \\
  \big( E_k^{(0)} - E_n^{(0)} \big)  \bra{k^{(0)}}\ket{n^{(2)}} &= \bra{k^{(0)}} E_n^{(1)} - \hat{H}' \ket{n^{(1)}} + E_n^{(2)} \delta_{kn}
\end{split}
\end{equation*}

Letting $k = n$, we have

\begin{equation*}
\begin{split}
  E_n^{(2)} &= \bra{n^{(0)}} \hat{H}' \ket{n^{(1)}} - E_n^{(1)} \bra{n^{(0)}}\ket{n^{(1)}} \\
  &= \sum_{m \ne n} c_{nm}^{(1)} H_{nm}' \\
  &= \sum_{m \ne n} \frac{H_{nm}' H_{mn}'}{E_n^{(0)} - E_m^{(0)}} = \sum_{m \ne n} \frac{\left| H_{mn}' \right|^2}{E_n^{(0)} - E_m^{(0)}}
\end{split}
\end{equation*}
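As with the first-order formula, the second-order sum can be checked against exact diagonalization on a small hypothetical matrix (all numbers below are made up for illustration):

```python
import numpy as np

E0 = np.array([0.0, 1.0, 2.5, 4.0])
H0 = np.diag(E0)
# hypothetical small real-symmetric perturbation
Hp = np.array([[0.0, 0.1, 0.0, 0.2],
               [0.1, 0.3, 0.1, 0.0],
               [0.0, 0.1, -0.2, 0.1],
               [0.2, 0.0, 0.1, 0.0]])
lam = 0.05
n = 0

E1 = Hp[n, n]
# E_n^(2) = sum_{m != n} |H'_{mn}|^2 / (E_n^0 - E_m^0)
E2 = sum(Hp[n, m] ** 2 / (E0[n] - E0[m]) for m in range(4) if m != n)

approx = E0[n] + lam * E1 + lam ** 2 * E2
exact = np.linalg.eigvalsh(H0 + lam * Hp)[n]
print(exact - approx)   # O(lam^3)
```

Note the second-order correction to the ground state is always negative, since every denominator $E_0^{(0)} - E_m^{(0)}$ is negative.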

Degeneracy in Perturbation Theory

Notation

  • States are ordered s.t. first $g$ states are degenerate
  • $E = E^{(0)} + \lambda E^{(1)} + \lambda^2 E^{(2)} + \dots$
  • $\ket{E} = \ket{E^{(0)}} + \lambda \ket{E^{(1)}} + \lambda^2 \ket{E^{(2)}} + \dots$
  • $\hat{H}_{kn}' = \mel{E_k^{(0)}}{\hat{H}'}{E_n^{(0)}}$

Stuff

Remember, this is the case where $\hat{H}_0$ has degenerate eigenstates, and we're assuming that the "real" Hamiltonian $\hat{H}$ is never degenerate (which is sort of what you'd expect in nature, I guess).

Consider the eigenstate $E_n^{(0)}$ with degeneracy $g$:

\begin{equation*}
\hat{H} = \hat{H}_0 + \lambda \hat{H}', \quad \hat{H}_0 \ket{E_n^{(0)}} = E^{(0)} \ket{E_n^{(0)}}, \quad n = 1, \dots, g
\end{equation*}

Then

\begin{equation*}
\begin{split}
  \hat{H}_0 \ket{E_n^{(0)}} &= E_n^{(0)} \ket{E_n^{(0)}}, \quad n > g \\
  E_n^{(0)} \ne E^{(0)}, \quad n > g \quad & \quad E_n^{(0)} \ne E_r^{(0)}, \quad n \ne r, \quad n, r > g
\end{split}
\end{equation*}

We're just saying that we've ordered the states such that the $g$ degenerate states occupy the first $g$ indices.

Since we have a g-dimensional subspace to work with for the degenerate eigenstate, now, instead of just finding the coefficients as in non-degenerate perturbation theory, we're looking for the "best" linear combination of the eigenbasis for the degenerate case!

\begin{equation*}
\ket{E^{(0)}} = \sum_{n=1}^{g} b_n \ket{E_n^{(0)}}
\end{equation*}

and want to find the "optimal" coefficients / projection.

Proceeding as for the non-degenerate case, we assume that

\begin{equation*}
\big( \hat{H}_0 + \lambda \hat{H}' \big) \ket{E} = E \ket{E}
\end{equation*}

where we assume we can write the following

  • $E = E^{(0)} + \lambda E^{(1)} + \lambda^2 E^{(2)} + \dots$
  • $\ket{E} = \ket{E^{(0)}} + \lambda \ket{E^{(1)}} + \lambda^2 \ket{E^{(2)}} + \dots$

Substituting the expansions into the eigenvalue equation and equating terms of the same degree $\lambda$, we find:

\begin{eqnarray*}
  \lambda^0: & (\hat{H}_0 - E^{(0)}) \ket{E^{(0)}} & = 0 \\
  \lambda^1: & \big( \hat{H}_0 - E^{(0)} \big) \ket{E^{(1)}} + \big( \hat{H}' - E^{(1)} \big) \ket{E^{(0)}} & = 0 \\
  \lambda^2: & \big( \hat{H}_0 - E^{(0)} \big) \ket{E^{(2)}} + \big( \hat{H}' - E^{(1)} \big) \ket{E^{(1)}} - E^{(2)} \ket{E^{(0)}} &= 0
\end{eqnarray*}

Once again, taking the scalar product with some arbitrary k-th unperturbed eigenstate $E_k^{(0)}$, we get

\begin{equation*}
\bra{E_k^{(0)}} \big( \hat{H}_0 - E^{(0)} \big) \ket{E^{(1)}} + \bra{E_k^{(0)}} \big( \hat{H}' - E^{(1)} \big) \ket{E^{(0)}} = 0
\end{equation*}

substituting in

\begin{equation*}
\ket{E^{(0)}} = \sum_{n = 1}^{g} b_n \ket{E_n^{(0)}}
\end{equation*}

we get

\begin{equation*}
\big( E_k^{(0)} - E^{(0)} \big) \bra{E_k^{(0)}}\ket{E^{(1)}} + \sum_{n = 1}^{g} b_n \big( H_{kn}' - \delta_{kn} E^{(1)} \big) = 0
\end{equation*}

Now we consider the different cases of $k$.

$k \le g$ - degenerate states

In this case we simply have $E_k^{(0)} = E^{(0)}$ and thus the first term vanishes

\begin{equation*}
\sum_{n=1}^{g} \big( H_{kn}' - \delta_{kn} E^{(1)} \big) b_n = 0, \quad k = 1, \dots, g
\end{equation*}

which is simply a "linear algebra eigenvalue problem" (when including for all $k$)!

We'll denote the "linear algebra eigenvalues" as roots, to distinguish from the eigenvalues from the Schrödinger equation.

\begin{equation*}
\big( \mathbf{H}' - E^{(1)} \mathbf{I} \big) \mathbf{b} = 0
\end{equation*}

where

\begin{equation*}
\mathbf{H}' = 
\begin{pmatrix}
  H_{kn}'
\end{pmatrix}
\end{equation*}

We get the roots $E_i^{(1)}$ and then have the different cases:

  • $g$ distinct roots => we've broken the degeneracy of $\hat{H}_0$, yey!
  • one or more repeated roots => there's still some degeneracy left from $\hat{H}_0$, aaargh!
  • all roots equal => move on to 2nd order expansion, cuz 1st order didn't help mate!

If we have $g$ distinct roots, we end up with

\begin{equation*}
E_n^{(1)} = \Delta E_n^{(1)} = \mel{E_n^{(0)}}{\hat{H}'}{E_n^{(0)}}, \quad n = 1, \dots, g
\end{equation*}

Which is great; it's the same as the non-degenerate case! Therefore, we'd like to arrange for this to always be the case.
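A tiny numerical sketch of the secular-equation procedure, using a hypothetical $3 \times 3$ system with a doubly degenerate level ($g = 2$); the matrices below are invented for illustration:

```python
import numpy as np

# Two degenerate unperturbed states (g = 2) at E^(0) = 1, plus one state at 3
E0 = np.array([1.0, 1.0, 3.0])
H0 = np.diag(E0)
# hypothetical perturbation that couples the degenerate pair
Hp = np.array([[0.0, 0.2, 0.0],
               [0.2, 0.0, 0.1],
               [0.0, 0.1, 0.0]])
lam = 0.01

# Secular problem in the g x g degenerate block: (H' - E^(1) I) b = 0
E1_roots = np.linalg.eigvalsh(Hp[:2, :2])   # two distinct roots: -0.2, +0.2
approx = E0[:2] + lam * E1_roots
exact = np.linalg.eigvalsh(H0 + lam * Hp)[:2]
print(exact - approx)   # degeneracy broken at first order
```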

Suppose we can find some observable represented by an operator $\hat{A}$ such that

\begin{equation*}
\comm{\hat{H}_0}{\hat{A}} = \comm{\hat{H}'}{\hat{A}} = 0
\end{equation*}

Then there are simultaneous eigenstates of $\hat{H}_0$ and $\hat{A}$:

\begin{equation*}
\begin{split}
  \hat{H}_0 \ket{E^{(0)}, A_i} & = E^{(0)} \ket{E^{(0)}, A_i} \\
  \hat{A} \ket{E^{(0)}, A_i} &= A_i \ket{E^{(0)}, A_i}
\end{split}
\end{equation*}

If the eigenvalues $A_i$ are all distinct, then we have a C.S.O.C., and we can write

\begin{equation*}
\ket{E_n^{(0)}} = \ket{E^{(0)}, A_n}, \quad n = 1, \dots, g
\end{equation*}

Thus,

\begin{equation*}
\bra{E_k^{(0)}} \comm{\hat{H}'}{\hat{A}}\ket{E_n^{(0)}} = 0
\end{equation*}

which is

\begin{equation*}
\bra{E_k^{(0)}} \hat{H}' \hat{A} \ket{E_n^{(0)}} - \bra{E_k^{(0)}} \hat{A} \hat{H}' \ket{E_n^{(0)}} = (A_n - A_k) \bra{E_k^{(0)}} \hat{H}' \ket{E_n^{(0)}} = (A_n - A_k) H_{kn}' = 0
\end{equation*}

Therefore,

\begin{equation*}
H_{kn}' = 0, \quad k \ne n, \quad A_k \ne A_n
\end{equation*}

Hence, if $A_k \ne A_n$ for all $k \ne n$, then the matrix $H_{kn}'$ is diagonal, as wanted.

Now, what if $A_k$ are not distinct?! We then look for another operator $\hat{B}$, such that

\begin{equation*}
\comm{\hat{A}}{\hat{B}} = \comm{\hat{H}_0}{\hat{A}} = \comm{\hat{H}_0}{\hat{B}} = \comm{\hat{H}'}{\hat{B}} = 0
\end{equation*}

i.e. $\{ \hat{H}_0, \hat{A}, \hat{B} \}$ is a C.S.O.C., and letting $\ket{E^{(0)}} = \ket{E^{(0)}, A_i, B_j}$ we can repeat the above argument.

We're saying that if we can find some operator which commutes with both $\hat{H}_0$ and $\hat{H}'$, we can make $\hat{H}$ diagonalizable, i.e. making our lives super-easy above.

$k > g$ - non-degenerate states

We take the same equation as before, $\big( E_k^{(0)} - E^{(0)} \big) \bra{E_k^{(0)}}\ket{E^{(1)}} + \sum_{n=1}^{g} b_n \big( H_{kn}' - \delta_{kn} E^{(1)} \big) = 0$; for $k > g$ the first term no longer vanishes (and $\delta_{kn} = 0$ since $n \le g < k$), and we proceed exactly as in the non-degenerate case!

Example

Central potential, spin-(1/2), spin-orbit interaction:

\begin{equation*}
\hat{H} = \frac{\hat{p}^2}{2m} + \hat{V}(r) + f(r) \mathbf{\hat{L}} \cdot \mathbf{\hat{S}} = \hat{H}_0 + \hat{H}'
\end{equation*}

We know that

\begin{equation*}
\comm{\hat{H}'}{\hat{L}_z} \ne 0 \quad \text{and} \quad \comm{\hat{H}'}{\hat{S}_z} \ne 0 \quad \text{but} \quad \comm{\hat{H}'}{\hat{L}_z + \hat{S}_z} = 0
\end{equation*}

so if we use the coupled basis $\{ \ket{n, \ell, s, j, m_j} \}$, we can use the non-degenerate theory to compute the 1st order energy shifts!

\begin{equation*}
\comm{\hat{H}_0}{\mathbf{\hat{L}} \cdot \mathbf{\hat{S}}} = 0
\end{equation*}

but

\begin{equation*}
\comm{\hat{H}_0}{\hat{H}'} \ne 0
\end{equation*}

because of the $f(r)$ term in $\hat{H}'$.

Example: Hydrogen fine structure

Notation

  • $\hat{H}_{\text{S-O}}$ denotes the spin-orbit term, whose physical origin is the interaction between the intrinsic magnetic dipole moment of the electron and the magnetic field due to the electron's orbital motion in the electric field of the nucleus

Stuff

\begin{equation*}
\hat{H}_0 = \frac{\hat{p}^2}{2m} - \frac{Z e^2}{4 \pi \varepsilon_0 r} \quad E_n^{(0)} = - \frac{m}{2\hbar^2} \bigg( \frac{Ze^2}{4 \pi \varepsilon_0} \bigg)^2 \frac{1}{n^2}
\end{equation*}

with

\begin{equation*}
\hat{H}' = \hat{H}_{\text{KE}}' + \hat{H}_{\text{S-O}}' + \hat{H}_{\text{Darwin}}', \quad \alpha = \frac{e^2}{4 \pi \varepsilon_0 \hbar c} \approx \frac{1}{137}
\end{equation*}
\begin{equation*}
\hat{H}_{\text{KE}}' = - \frac{\hat{p}^4}{8m^3 c^2}
\end{equation*}

We start with the kinetic energy correction. (Recall that the degeneracy of the unperturbed level $E_n^{(0)}$ is $2 n^2$.)

\begin{equation*}
\Delta E_{\text{KE}} = \bra{n, \ell, m_{\ell}, s, m_s} - \frac{\hat{p}^4}{8m^3 c^2} \ket{n, \ell, m_{\ell}, s, m_s}
\end{equation*}

Writing $\hat{T} = \frac{\hat{p}^2}{2m}$, this is

\begin{equation*}
\Delta E_{\text{KE}} = - \frac{1}{2mc^2} \bra{n, \ell, m} \hat{T}^2 \ket{n, \ell, m}
\end{equation*}

Therefore

\begin{equation*}
\hat{T} = \hat{H}_0 - \hat{V}(r) \implies \Delta E_{\text{KE}}
= - \frac{1}{2 m c^2} \Big\langle \big( E_n^{(0)} \big)^2 - 2 E_n^{(0)} \hat{V} + \hat{V}^2 \Big\rangle_{n, \ell, m}
\end{equation*}

Which, using the expectation values

\begin{equation*}
\Big\langle \frac{1}{r} \Big\rangle_{n, \ell} = \frac{Z}{a_0} \frac{1}{n^2}, \quad \Big\langle \frac{1}{r^2} \Big\rangle_{n, \ell} = \frac{Z^2}{a_0^2} \frac{1}{n^3 (\ell + \frac{1}{2})}
\end{equation*}

gives us
\begin{equation*}
\Delta E_{\text{KE}} = - E_n^{(0)} \frac{\big( Z \alpha \big)^2}{n^2} \bigg( \frac{3}{4} - \frac{n}{\ell + \frac{1}{2}} \bigg)
\end{equation*}

And for the spin-orbit interaction we have

\begin{equation*}
\hat{H}_{\text{S-O}}' = F(r) \hat{\mathbf{L}} \cdot \hat{\mathbf{S}}, \quad F(r) = \frac{1}{2m^2 c^2 r} \frac{d V}{dr}
\end{equation*}
\begin{equation*}
\hat{J}^2 = \big( \hat{L} + \hat{S} \big)^2 = \hat{L}^2 + \hat{S}^2 + 2 \hat{\mathbf{L}} \cdot \hat{\mathbf{S}}
\end{equation*}

which gives us

\begin{equation*}
\hat{\mathbf{L}} \cdot \hat{\mathbf{S}} = \frac{\hat{J}^2 - \hat{L}^2 - \hat{S}^2}{2}
\end{equation*}

Applying this to some ket

\begin{equation*}
\big( \hat{J}^2 - \hat{L}^2 - \hat{S}^2 \big) \ket{n, \ell, s, j, m_j} = \Big( j(j + 1) - \ell (\ell + 1) - s (s + 1) \Big) \hbar^2 \ket{n, \ell, s, j, m_j}
\end{equation*}

Thus,

\begin{equation*}
\begin{split}
  \Delta E_{\text{S-O}} &= \bra{n, \ell, s, j, m_j} \hat{H}_{\text{S-O}}' \ket{n, \ell, s, j, m_j} \\
  &= \frac{\hbar^2}{2} \Big( j(j + 1) - \ell (\ell + 1) - s (s + 1) \Big) \langle F(r) \rangle
\end{split}
\end{equation*}

where

\begin{equation*}
s = \frac{1}{2} \implies j = \bigg( \ell + \frac{1}{2} \bigg) \quad \text{or} \quad j = \bigg( \ell - \frac{1}{2} \bigg)
\end{equation*}

thus, if

\begin{equation*}
\ell = 0 \implies j = s = \frac{1}{2}
\end{equation*}

and the spin-orbit term vanishes. For $\ell \ne 0$ we need
\begin{equation*}
\langle F(r) \rangle = \frac{1}{2 m^2 c^2} \frac{Z e^2}{4 \pi \varepsilon_0} \int_{0}^{\infty} \frac{1}{r^3} | R_{n, \ell} (r) |^2 r^2 \ dr
\end{equation*}

Which gives us

\begin{equation*}
\bigg\langle \frac{1}{r^3} \bigg\rangle_{n, \ell} = \frac{Z^3}{a_0^3} \frac{1}{n^3 \ell (\ell + \frac{1}{2}) (\ell + 1)}
\end{equation*}

implying

\begin{equation*}
\Delta E_{\text{S-O}} = - E_n^{(0)} \frac{\big( Z \alpha \big)^2}{2n} \frac{j(j + 1) - \ell (\ell + 1) - \frac{3}{4}}{\ell (\ell + \frac{1}{2}) (\ell + 1)}
\end{equation*}

And finally for the Darwin-correction

\begin{equation*}
\hat{H}_{\text{Darwin}} = \frac{\hbar^2}{8 m^2 c^2} \nabla^2 V(\mathbf{r}) = \frac{\pi \hbar^2}{2 m^2 c^2 } \bigg( \frac{Z e^2}{4 \pi \varepsilon_0} \bigg) \delta(\mathbf{r})
\end{equation*}

Observe that $R_{n, \ell}(r) \to 0$ as $r \to 0$ for $\ell \ne 0$, so only $\ell = 0$ states are shifted; then

\begin{equation*}
\Delta E_{\text{Darwin}} = \frac{\pi \hbar^2}{2 m^2 c^2} \bigg( \frac{Z e^2}{4 \pi \varepsilon_0} \bigg) \bra{n, 0, 0} \delta(\mathbf{r}) \ket{n, 0, 0}
       = \frac{\pi \hbar^2}{2 m^2 c^2} \frac{Ze^2}{4 \pi \varepsilon_0} | u_{n, 0,0}(0)|^2
\end{equation*}

where

\begin{equation*}
|u_{n, 0, 0}(0)|^2 = \frac{1}{4 \pi} |R_{n, 0}(0)|^2 = \frac{Z^3}{\pi a_0^3 n^3}
\end{equation*}

Which gives us the final Darwin correction

\begin{equation*}
\Delta E_{\text{Darwin}} = - E_n^{(0)} \frac{\big( Z \alpha \big)^2}{n}
\end{equation*}

Giving the total correction

\begin{equation*}
\Delta E_{n, j} = E_n^{(0)} \frac{\big( Z \alpha \big)^2}{n^2} \Bigg( \frac{n}{j + \frac{1}{2}} - \frac{3}{4} \Bigg)
\end{equation*}
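Plugging numbers into this total-correction formula (with $Z = 1$, $n = 2$; the Rydberg and $\alpha$ values below are the standard constants) reproduces the familiar splitting of about $4.5 \times 10^{-5}$ eV ($\sim 10.9$ GHz) between the $j = 1/2$ and $j = 3/2$ levels:

```python
import math

# Delta E_{n,j} = E_n^(0) (Z alpha)^2 / n^2 * ( n/(j + 1/2) - 3/4 )
alpha = 1 / 137.035999
Ry = 13.6057  # eV

def delta_E(n, j, Z=1):
    En = -Ry * Z ** 2 / n ** 2          # unperturbed level, eV
    return En * (Z * alpha) ** 2 / n ** 2 * (n / (j + 0.5) - 0.75)

# n = 2 hydrogen: 2s_{1/2} and 2p_{1/2} shift together; 2p_{3/2} lies above
split = delta_E(2, 1.5) - delta_E(2, 0.5)
print(split)   # fine-structure splitting in eV
```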

Example: Helium atom

Notation

  • $\alpha = \ket{\uparrow}$ represents the spin-up state
  • $\beta = \ket{\downarrow}$ represents the spin-down state
  • $\chi_{s, m_s} \equiv \ket{s_1 = \frac{1}{2}, s_2 = \frac{1}{2}, s, m_s}$ is the states for the coupled basis, where $s$ is the total spin quantum number and $m_s$ is the total magnetic quantum number

Neglecting mutual Coulomb repulsion between the electrons

Neglecting the mutual Coulomb repulsion between the electrons, we end up with the Hamiltonian

\begin{equation*}
\hat{H} = \hat{H}_1 + \hat{H}_2 \quad \text{where} \quad \hat{H}_i = \frac{\hat{p}_i^2}{2m} - \frac{2e^2}{4 \pi \varepsilon_0 r_i}
\end{equation*}

It turns out that this yields a ground-state energy of just

\begin{equation*}
E_{n = 1} = 8 \times (- 13.6 \ \text{eV}) = - 108.8 \ \text{eV}
\end{equation*}

but experimentally we find

\begin{equation*}
E_{n = 1} = - 78.957 \ \text{eV}
\end{equation*}

For the Helium atom we use the convention of denoting the ground state with $n = 1$ rather than $n = 0$ as for the Hydrogen atom.

Stuff

In the simplest model of the helium atom, the Hamiltonian is

\begin{equation*}
\hat{H} = \hat{H}_1 + \hat{H}_2 + \frac{e^2}{4 \pi \varepsilon_0 | \mathbf{r}_1 - \mathbf{r}_2|}
\end{equation*}

where

\begin{equation*}
\hat{H}_i = \frac{\hat{\mathbf{p}_i}^2}{2m} - \frac{2 e^2}{4 \pi \varepsilon_0 r_i}
\end{equation*}

The term added to $\hat{H}$ represents the repulsion of the two electrons, and in the $\hat{H}_i$ we have the negative term which represents the attraction.

Also, observe that this is symmetric under permutation of the indices which label the two electrons; thus the electrons are indistinguishable, as one would want.

The states of the coupled representation for the two spin-half electrons are the three triplet states:

\begin{equation*}
\begin{split}
  \chi_{1, 1} &= \alpha_1 \alpha_2 \\
  \chi_{1, 0} &= \frac{1}{\sqrt{2}} \big( \alpha_1 \beta_2 + \beta_1 \alpha_2 \big) \\
  \chi_{1, -1} &= \beta_1 \beta_2
\end{split}
\end{equation*}

and the singlet state:

\begin{equation*}
\chi_{0, 0} = \frac{1}{\sqrt{2}} \big( \alpha_1 \beta_2 - \beta_1 \alpha_2 \big)
\end{equation*}
  • triplet states are symmetric under permutation
  • singlet state is anti-symmetric under permutation
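These symmetry claims are easy to verify with explicit Kronecker-product vectors and a SWAP matrix; a small sketch:

```python
import numpy as np

up = np.array([1.0, 0.0])   # alpha = |up>
dn = np.array([0.0, 1.0])   # beta  = |down>

# Coupled basis states built as tensor (Kronecker) products of the two spins
triplet = [np.kron(up, up),
           (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2),
           np.kron(dn, dn)]
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

# SWAP interchanges the two spins: |s1 s2> -> |s2 s1>
SWAP = np.array([[1., 0., 0., 0.],
                 [0., 0., 1., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.]])

sym = all(np.allclose(SWAP @ chi, chi) for chi in triplet)   # symmetric
anti = np.allclose(SWAP @ singlet, -singlet)                 # anti-symmetric
print(sym, anti)
```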

The overall 2-electron wavefunction is a product of a spatial wavefunction and a spin function:

\begin{equation*}
\Psi(1, 2) = \psi(\mathbf{r}_1, \mathbf{r}_2) \otimes \chi
\end{equation*}

The tensor product between the spatial wavefunction, $\psi$, and the spin function, $\chi$, represents some function $f$ such that

\begin{equation*}
\big( \psi \otimes \chi \big)(\mathbf{r}_1, \mathbf{r}_2, s, m_s) = \psi(\mathbf{r}_1, \mathbf{r}_2) \ \chi_{s, m_s}
\end{equation*}

where $\chi_{s, m_s}$ is as defined above.

Thus, the total 2-electron wavefunction has the following symmetries:

  |                   | symmetry of $\chi_s$ | symmetry of $\psi$ | symmetry of $\Psi$ |
  |-------------------|----------------------|--------------------|--------------------|
  | $S = 0$ (singlet) | anti                 | sym                | anti               |
  | $S = 1$ (triplet) | sym                  | anti               | anti               |

Consider the case of the Helium atom, where (if we're neglecting the inter-electron interactions) we would have the spatial solutions

\begin{equation*}
\psi(\mathbf{r}_1, \mathbf{r}_2) = u_{n_1 \ell_1 m_1} (\mathbf{r}_1) u_{n_2 \ell_2 m_2} (\mathbf{r}_2)
\end{equation*}

since we can just use separation of variables and solve the Schrödinger eqn. for each electron separately, both having identical solutions.

Now, suppose further that $n_1 = n_2 = 1$, and thus $\ell_i = 0$ and $m_i = 0$; then

\begin{equation*}
\psi(\mathbf{r}_1, \mathbf{r}_2) = u_{100}(\mathbf{r}_1) u_{100}(\mathbf{r}_2)
\end{equation*}

Now consider interchanging the two electrons, i.e. swapping the labels $\mathbf{r}_1$ and $\mathbf{r}_2$:

\begin{equation*}
\psi(\mathbf{r}_2, \mathbf{r}_1) = u_{100} (\mathbf{r}_2) u_{100}(\mathbf{r}_1)
\end{equation*}

Now, we know that

\begin{equation*}
\hat{P} \ket{\ell, m} = (- 1)^{\ell} \ket{\ell, m} \iff \hat{P} Y_{\ell}^m = (- 1)^{\ell} Y_{\ell}^m
\end{equation*}

Therefore, $\ket{n=1, \ell=0, m=0}$, as we have above, is symmetric:

\begin{equation*}
u(- \mathbf{r}_i) = u(\mathbf{r}_i)
\end{equation*}

Thus,

\begin{equation*}
\psi(\mathbf{r}_1, \mathbf{r}_2) = \psi(\mathbf{r}_2, \mathbf{r}_1)
\end{equation*}

BUT we know that two identical fermions cannot occupy the same overall state: by the Pauli principle the total wavefunction must be anti-symmetric under interchange of the two electrons. Since the spatial part above is symmetric, the spin part must be anti-symmetric, and this is why we introduce the $\chi_{s, m_s}$ described earlier.

Ground state
\begin{equation*}
\Psi(\text{ground state}) = u_{100}(\mathbf{r}_1) u_{100}(\mathbf{r}_2) \chi_{0, 0}
\end{equation*}

To compute the first order correction to $E_1$, we compute the expectation of the perturbation $\hat{H}'$ wrt. this wavefunction:

\begin{equation*}
\Delta E_1 = \bra{u_{100}^1, u_{100}^2, \chi_{0, 0}} \hat{H}' \ket{u_{100}^1, u_{100}^2, \chi_{0, 0}}
\end{equation*}

which gives us a correction of

\begin{equation*}
\Delta E_1 = \frac{5}{4} Z \ \text{Ry} = \frac{5}{2} \ \text{Ry} = 34 \ \text{eV}
\end{equation*}

giving us the first order estimate of the ground-state energy

\begin{equation*}
E_1 = - 108.8 + 34 \ \text{eV} = - 74.8 \ \text{eV} = - 5.5 \ \text{Ry}
\end{equation*}

which is pretty close to the experimentally observed value of $- 78.957 \ \text{eV}$.
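The arithmetic above, spelled out (using the rounded value $\text{Ry} = 13.6$ eV):

```python
# Zeroth order: two independent electrons, each at -Z^2 Ry with Z = 2
Ry = 13.6   # eV
Z = 2
E0 = 2 * (-Z ** 2 * Ry)        # -108.8 eV
# First-order shift from the electron-electron repulsion: (5/4) Z Ry
dE1 = 5 / 4 * Z * Ry           # +34.0 eV
print(E0 + dE1)                # -74.8 eV, vs. the measured -78.957 eV
```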

Stuff

Consider again a perturbed Hamiltonian

\begin{equation*}
\hat{H} = \hat{H}_0 + \varepsilon \hat{V}
\end{equation*}

With

\begin{equation*}
\hat{H}_0 \ket{u_n} = E_n \ket{u_n}
\end{equation*}

Then we want to expand wrt. $\varepsilon$

\begin{equation*}
\begin{cases}
  E &= E_0 + \varepsilon E_1 + \varepsilon^2 E_2 + \dots \\
  \ket{\psi} &= \ket{\psi_0} + \varepsilon \ket{\psi_1} + \varepsilon^2 \ket{\psi_2} + \dots
\end{cases}
\end{equation*}

Setting up the Schrödinger equation, we have

\begin{equation*}
\Big( \hat{H}_0 + \varepsilon \hat{V} \Big) \Big( \ket{\psi_0} + \varepsilon \ket{\psi_1} + \varepsilon^2 \ket{\psi_2} + \dots \Big) = \Big( E_0 + \varepsilon E_1 + \varepsilon^2 E_2 + \dots \Big) \Big( \ket{\psi_0} + \varepsilon \ket{\psi_1} + \varepsilon^2 \ket{\psi_2} \dots \Big)
\end{equation*}

Now, collecting terms involving the different factors of $\varepsilon$, we have:

  • For $\varepsilon^0$ we find:

    \begin{equation*}
  \big( \hat{H}_0 - E_0 \big) \ket{\psi_0} = 0
\end{equation*}

    and therefore $\ket{\psi_0}$ has to be one of the unperturbed eigenfunctions $\ket{\psi^{(n)}}$ and $E_0 = E^{(n)}$, i.e. the corresponding unperturbed eigenvalue.

  • For $\varepsilon^1$ we have:

    \begin{equation*}
  \Big( \hat{H}_0 - E_0 \Big) \ket{\psi_1} + \Big( \hat{V} - E_1 \Big) \ket{\psi_0} = 0
\end{equation*}

    Which we can take the scalar product of with $\ket{\psi_0}$:

    \begin{equation*}
  \mel{\psi_0}{\hat{H}_0}{\psi_1} + \mel{\psi_0}{\hat{V}}{\psi_0} = E_0 \bra{\psi_0}\ket{\psi_1} + E_1 \bra{\psi_0}\ket{\psi_0}
\end{equation*}

    Since $\mel{\psi_0}{\hat{H}_0}{\psi_1} = E_0 \bra{\psi_0}\ket{\psi_1}$, we have

    \begin{equation*}
  E_1 = \frac{\mel{\psi_0}{\hat{V}}{\psi_0}}{\bra{\psi_0}\ket{\psi_0}}
\end{equation*}

    Hence, the eigenvalue of the Hamiltonian in this case becomes:

    \begin{equation*}
  E = E^{(n)} + \varepsilon \frac{\mel{\psi^{(n)}}{\hat{V}}{\psi^{(n)}}}{\bra{\psi^{(n)}}\ket{\psi^{(n)}}} + \mathcal{O}(\varepsilon^2)
\end{equation*}

This is the correction to the n-th energy level! So to obtain the complete correction due to the perturbation, we have to do this for each of the eigenfunctions which make up the wave-function of the entire system we're looking at.

He atom

\begin{equation*}
\hat{H} = \hat{H}_1 + \hat{H}_2 + \hat{V}
\end{equation*}
\begin{equation*}
\hat{V} = \frac{e^2}{4 \pi \varepsilon_0 | \mathbf{r}_1 - \mathbf{r}_2 |}
\end{equation*}
\begin{equation*}
\ket{0} = u_{100}(\mathbf{r}_1) u_{100}(\mathbf{r}_2) \ket{s = 0, s_z = 0}
\end{equation*}
\begin{equation*}
\Delta E = \bra{0} \hat{V} \ket{0}
\end{equation*}

Time-dependent Perturbation Theory

Notation

  • $\hat{H}_0$ is the Hamiltonian corresponding to the unperturbed / exactly solvable system
  • $E_n^{(0)}$ and $\ket{n}$ represent the n-th eigenvalue and eigenfunction of the unperturbed Hamiltonian
  • $E_n^{(i)}$ and $\ket{n}^{(i)}$ is the energy and ket of the i-th perturbation term
  • $E_n$ and $\ket{n}$ is the corrected (perturbed with all the terms) eigenvalue and eigenfunction, i.e.
  • $E_n = E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \dots$
  • $\ket{n} = \ket{n^{(0)}} + \lambda \ket{n^{(1)}} + \lambda^2 \ket{n^{(2)}} + \dots$
  • $\dot{c}_n = \frac{\partial c_n}{\partial t}$
  • $\omega_{mn} = \omega_m - \omega_n$
  • $H_{mn}' = \bra{m^{(0)}} \hat{H}'(t) \ket{n^{(0)}}$
  • $f(t, \omega_{mk}) = \frac{1 - \cos \omega_{mk} t}{\omega_{mk}^2} = \frac{2 \sin^2 ( \omega_{mk} t / 2)}{\omega_{mk}^2}$
  • $\rho(E_m)$ is the density of final states

Expression for $c_m(t)$

  • The solution to the TDSE for the unperturbed $\hat{H}_0$ can be written

    \begin{equation*}
\begin{split}
  \ket{\Psi, t} &= \sum_{n} c_n^{(0)} \exp \Big( - \frac{i E_n^{(0)} t}{\hbar}  \Big) \ket{n^{(0)}} \\
  &= \sum_{n} c_n^{(0)} \exp(- i\omega_n t) \ket{n^{(0)}}
\end{split}
\end{equation*}
  • Generalize to the perturbed Hamiltonian

    \begin{equation*}
\hat{H} = \hat{H}_0 + \hat{H}'(t)
\end{equation*}

    the coefficients $c_n^{(0)}$ become time-dependent:

    \begin{equation*}
\ket{\Psi, t} = \sum_{n} c_n(t) \exp \Big( - \frac{i E_n^{(0)} t}{\hbar} \Big) \ket{n^{(0)}}
\end{equation*}

Observe the lack of $\lambda$ here!!!

We're not yet using the perturbation here, which would be

\begin{equation*}
\hat{H} = \hat{H}_0 + \lambda \hat{H}'
\end{equation*}

This comes in the next section.

  • Probability of finding the system in the state $\ket{m^{(0)}}$ at time $t$ is then

    \begin{equation*}
\left| \bra{m^{(0)}}\ket{\Psi, t} \right|^2 = |c_m(t)|^2
\end{equation*}

    where we have used the orthonormality of the eigenstates of $\hat{H}_0$.

  • Substituting into TDSE:

    \begin{equation*}
i \hbar \sum_n \big( \dot{c}_n - i \omega_n c_n \big) \exp(- i \omega_n t) \ket{n^{(0)}} = \sum_n \big( c_n \hbar \omega_n + c_n \hat{H}' \big) \exp(- i \omega_n t) \ket{n^{(0)}}  
\end{equation*}

    which gives:

    \begin{equation*}
\sum_n \big( i \hbar \dot{c}_n - c_n \hat{H}' \big) \exp(- i \omega_n t) \ket{n^{(0)}} = 0  
\end{equation*}
  • Taking scalar product with arbitrary unperturbed state $\ket{m^{(0)}}$:

    \begin{equation*}
i \hbar \dot{c}_m \exp(- i \omega_m t) - \sum_n c_n H_{mn}' \exp(- i\omega_n t) = 0
\end{equation*}

    which gives:

    \begin{equation*}
\dot{c}_m = \big( i \hbar \big)^{-1} \sum_n c_n H_{mn}' \exp(i \omega_{mn} t), \quad \omega_{mn} = \omega_m - \omega_n  
\end{equation*}

Perturbation

  • Now consider the Hamiltonian related to the perturbed one above:

    \begin{equation*}
  \hat{H} = \hat{H}_0 + \lambda \hat{H}'
\end{equation*}
  • Assume we can expand $c_n$ in power series:

    \begin{equation*}
c_n = c_n^{(0)} + \lambda c_n^{(1)} + \lambda^2 c_n^{(2)} + \dots
\end{equation*}
  • Substitute in the equation for $\dot{c}_m$ derived above (factor of $\lambda$ on RHS is from $\lambda \hat{H}'$ now instead of $\hat{H}'$):

    \begin{equation*}
\dot{c}_m^{(0)} + \lambda \dot{c}_m^{(1)} + \dots = \big( i \hbar \big)^{-1} \lambda \sum_n c_n^{(0)} H_{mn}' \exp(i \omega_{mn} t) + \dots
\end{equation*}
  • Equate terms of same degree in $\lambda$:

    \begin{equation*}
\begin{split}
  \dot{c}_m^{(0)} &= 0 \\
  \dot{c}_m^{(1)} &= \big( i \hbar \big)^{-1} \sum_n c_n^{(0)} H_{mn}' \exp(i \omega_{mn} t)
\end{split}
\end{equation*}
    • Zeroth order is time-independent; since to this order the Hamiltonian is time-independent => recover unperturbed result
  • Integrating first-order correction gives:

    \begin{equation*}
c_m^{(1)}(t) = c_m^{(1)}(t_0) + \big( i \hbar \big)^{-1} \sum_n c_n^{(0)} \int_{t_0}^t H_{mn}' \exp(i \omega_{mn} t') \ dt'
\end{equation*}
  • Suppose the system is initially in an eigenstate of $\hat{H}_0$, say $\ket{k^{(0)}}$; then for $t > t_0$ the first-order amplitude for finding the system in a different state $\ket{m^{(0)}}$ at time $t$ is

    \begin{equation*}
  c_m^{(1)}(t) = \big( i \hbar \big)^{-1} \int_{t_0}^t H_{mk}' \exp(i \omega_{mk} t') \ dt', \quad m \ne k
\end{equation*}
  • Thus, transition probability of finding the system at a later time, $t$, in the state $\ket{m^{(0)}}$ where $m \ne k$, is given by

    \begin{equation*}
p_{mk}(t) \approx \left| c_m^{(1)}(t) \right|^2 = \frac{1}{\hbar^2} \left| \int_{t_0}^{t} H_{mk}' \exp(i \omega_{mk} t') \ dt' \right|^2  
\end{equation*}
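As a quick numerical sanity check (not from the notes; illustrative values, units with $\hbar = 1$): for a perturbation that is constant for $t' > 0$, the time integral can be done analytically, giving $p_{mk}(t) = \frac{|H_{mk}'|^2}{\hbar^2} \frac{\sin^2(\omega_{mk} t / 2)}{(\omega_{mk}/2)^2}$.

```python
import numpy as np

hbar = 1.0      # units with hbar = 1 (assumption)
H_mk = 0.3      # constant matrix element, switched on at t' = 0 (illustrative)
omega = 2.0     # transition frequency omega_mk (illustrative)
t = 5.0

# first-order amplitude: c_m^(1)(t) = (i hbar)^(-1) H_mk * int_0^t e^(i w t') dt'
tp = np.linspace(0.0, t, 200001)
vals = np.exp(1j * omega * tp)
dt = tp[1] - tp[0]
integral = np.sum((vals[1:] + vals[:-1]) / 2.0) * dt  # trapezoid rule
p_numeric = np.abs(H_mk * integral / (1j * hbar)) ** 2

# closed form: |H_mk|^2 / hbar^2 * sin^2(w t / 2) / (w / 2)^2
p_exact = (H_mk / hbar) ** 2 * np.sin(omega * t / 2) ** 2 / (omega / 2) ** 2
```

The numeric and closed-form probabilities agree to high precision, confirming the first-order formula for this simple case.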

TODO Time-independent perturbations for time-dependent wave equation

  • Suppose $\hat{H}'$ is actually independent of time

Fermi's Golden Rule

  • Interested in transitions not to single state, but to a group of final states, $G$, in some range:

    \begin{equation*}
E_k^{(0)} - \Delta E \le E_m^{(0)} \le E_k^{(0)} + \Delta E
\end{equation*}

    e.g. transitions to continuous part of energy-spectrum.

  • Total transition probability:

    \begin{equation*}
p_G(t) = \frac{2}{\hbar^2} \int_{E_k^{(0)} - \Delta E}^{E_k^{(0)} + \Delta E} \left| H_{mk}' \right|^2 f(t, \omega_{mk}) \rho(E_m) \ dE_m
\end{equation*}
  • Assume $\Delta E$ to be small, s.t. can treat $\rho(E_m) \left| H_{mk}' \right|^2$ constant wrt. $E_m$ on this interval:

    \begin{equation*}
  p_G(t) = \frac{2}{\hbar^2} \left| H_{mk}' \right|^2 \rho\left(E_m^{(0)}\right) \int_{E_k^{(0)} - \Delta E}^{E_k^{(0)} + \Delta E}  f(t, \omega_{mk}) \ dE_m
\end{equation*}
  • Change of variables $d E_m = \hbar \ d \omega_{mk}$, we get

    \begin{equation*}
p_G(t) = \frac{2 \pi t}{\hbar} \left| H_{mk}' \right|^2 \rho(E)  
\end{equation*}

    where we've used

    \begin{equation*}
  \int_{- \infty}^{\infty} f(t, \omega) \ d \omega = \pi t
\end{equation*}

Number of transitions per time, the transition rate, $R$, is then just

\begin{equation*}
R = \frac{d p_G(t)}{dt} = \frac{2 \pi}{\hbar} \left| H_{mk}' \right|^2 \rho(E)
\end{equation*}
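The notes leave the line-shape function $f(t, \omega)$ implicit; for a constant perturbation switched on at $t_0 = 0$ the first-order result gives $f(t, \omega) = 2 \sin^2(\omega t / 2) / \omega^2$ (an assumption consistent with the factor of $2$ in $p_G$), and the quoted integral $\int f \, d\omega = \pi t$ can then be checked numerically:

```python
import numpy as np

t = 1.5  # arbitrary time (illustrative)
w = np.linspace(-2000.0, 2000.0, 2_000_001)
dw = w[1] - w[0]

# f(t, w) = 2 sin^2(w t / 2) / w^2, with the w -> 0 limit t^2 / 2
f = np.where(np.abs(w) < 1e-12, t ** 2 / 2.0,
             2.0 * np.sin(w * t / 2.0) ** 2 / np.maximum(w ** 2, 1e-300))
integral = np.sum((f[1:] + f[:-1]) / 2.0) * dw  # trapezoid rule
expected = np.pi * t
```

The small residual discrepancy comes from truncating the $1/\omega^2$ tails of the integrand.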

Harmonic Perturbations

Notation

  • $\hat{\mathcal{H}}'$ is the time-independent Hermitian operator

Stuff

  • Transition probability amplitude is a sinusoidal function of time which is turned on at time $t = 0$
  • Suppose that

    \begin{equation*}
\hat{H}'(t) = \hat{\mathcal{H}}' \sin \omega t  
\end{equation*}

    where $\hat{\mathcal{H}}'$ is the time-independent Hermitian operator, which we write

    \begin{equation*}
\hat{H}'(t) = \hat{A} \exp(i \omega t) + \hat{A}^{\dagger} \exp(- i \omega t), \quad \hat{A} = \frac{\hat{\mathcal{H}}'}{2 i}
\end{equation*}
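This decomposition is an operator identity; a quick numerical check with a random Hermitian $\hat{\mathcal{H}}'$ (illustrative $3 \times 3$ matrix, arbitrary $\omega$, $t$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Hp = M + M.conj().T          # a random Hermitian H' (illustrative)

A = Hp / 2j                  # A = H' / (2i); note A^dagger = -H' / (2i)
omega, t = 1.7, 0.4          # arbitrary frequency and time

lhs = Hp * np.sin(omega * t)
rhs = A * np.exp(1j * omega * t) + A.conj().T * np.exp(-1j * omega * t)
err = np.max(np.abs(lhs - rhs))
```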

Radiation

Notation

  • $\mathbf{D} = - e \mathbf{r}$ is the dipole operator

Electromagnetic radiation

$\mathbf{D} = - e \mathbf{r}$ is the dipole operator where

  • $e$ is the elementary charge

Einstein A-B coefficients

Notation

Spontaneous emission (thermodynamic argument)

We have

\begin{equation*}
\begin{split}
  E_m > E_k \quad \text{absorption} \quad & R_{mk} = \frac{\pi I (\omega_{mk})}{c \hbar^2 \epsilon_0} | D_{mk} |^2 \cos^2 \theta \\
  E_m < E_k \quad \text{emission} \quad & R_{mk} = \frac{\pi I(- \omega_{mk})}{c \hbar^2 \epsilon_0} |D_{mk}|^2 \cos^2 \theta
\end{split}
\end{equation*}

For absorption, averaging $\cos^2 \theta$ over orientations replaces it by $\frac{1}{3}$, and we have

\begin{equation*}
\begin{split}
  \dot{N}_{mk} &= N_k \rho(\omega_{mk}) B_{mk} \\
  \dot{N}_{mk} &= R_{mk} N_k \\
  B_{mk} &= \frac{R_{mk}}{\rho(\omega_{mk})} = c \frac{R_{mk}}{I(\omega_{mk})} \\
  \implies B_{mk} & = \frac{\pi}{3 \hbar^2 \epsilon_0} |D_{mk}|^2
\end{split}
\end{equation*}

Now suppose we have emission, with transition $m \to k$:

\begin{equation*}
\begin{split}
  \dot{N}_{km} = B_{km} N_m \rho(\omega_{mk}) + A_{km} N_m
\end{split}
\end{equation*}

Note that this is $B_{km}$, NOT $B_{mk}$ as introduced earlier.

The distribution between the number of particles in the different states is then given by the Boltzmann distribution

\begin{equation*}
\begin{split}
  \frac{N_k}{N_m} &= \exp \Bigg( - \frac{E_k - E_m}{k_B T} \Bigg) \\
  &= \exp \Bigg( \frac{\hbar \omega_{mk}}{k_B T} \Bigg) \\
  \dot{N}_{mk} = \dot{N}_{km} & \implies \frac{N_k}{N_m} = \frac{A_{km} + B_{km} \rho(\omega_{mk})}{B_{mk} \rho(\omega_{mk})}
\end{split}
\end{equation*}

Planck's law tells us that black-body radiation follows

\begin{equation*}
\begin{split}
  n(\lambda) &= \frac{8 \pi h c}{\lambda^5}  \frac{1}{\exp(hc / \lambda k_B T) - 1} \\
  \rho(\omega) d \omega &= n(\lambda) d \lambda \\
  \rho(\omega_{mk}) &= \frac{\hbar \omega_{mk}^3}{\pi^2 c^3} \frac{1}{\exp(\hbar \omega_{mk}/k_B T) - 1}
\end{split}
\end{equation*}

Which we can rewrite as

\begin{equation*}
\begin{split}
  \rho(\omega_{mk}) &= \frac{A_{km}}{B_{mk} \exp(\hbar \omega_{mk} / k_B T) - B_{km}} \\
  B_{km} &= B_{mk} \\
  A_{km} &= \frac{\hbar \omega_{mk}^3}{\pi^2 c^3} B_{km}
\end{split}
\end{equation*}

Substituting back into the expression for spontaneous emission

\begin{equation*}
R_{mk}^{\text{spon}} = \frac{\omega_{mk}^3}{3 \pi c^3 \hbar \epsilon_0} |D_{mk}|^2
\end{equation*}

One obtains the same result in QED.
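The last step is an algebraic identity in the A-B relations, which can be verified for arbitrary values ($\omega_{mk}$ and $|D_{mk}|^2$ below are illustrative, constants are rounded SI values):

```python
import numpy as np

hbar, c, eps0 = 1.0546e-34, 2.998e8, 8.854e-12  # rounded SI constants
omega = 2.47e15       # transition frequency (illustrative)
D2 = (8.48e-30) ** 2  # |D_mk|^2, illustrative dipole matrix element squared

B = np.pi * D2 / (3.0 * hbar ** 2 * eps0)          # B coefficient
A = hbar * omega ** 3 / (np.pi ** 2 * c ** 3) * B  # A = (hbar w^3 / pi^2 c^3) B

# spontaneous rate formula quoted above
R_spon = omega ** 3 * D2 / (3.0 * np.pi * c ** 3 * hbar * eps0)
rel_err = abs(A - R_spon) / R_spon
```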

Selection rules

Goal is to evaluate $\mathbf{D}_{mk}$.

Hydrogenic atom
  • electric dipole operator is spin-independent
  • work in uncoupled basis and ignore spin
\begin{equation*}
\comm{\hat{L}_z}{\hat{z}} = 0
\end{equation*}

Which implies

\begin{equation*}
\mel{n' \ell' m'}{\comm{\hat{L}_z}{\hat{z}}}{n \ell m} = 0
\end{equation*}

Which implies either of the following is true

  • $\mel{n' \ell' m'}{\hat{z}}{n \ell m} = 0$
  • $\Delta m = m' - m = 0$

This is called a selection rule.

Doing the same for $\hat{x} \pm i \hat{y}$, we get

\begin{equation*}
\comm{\hat{L}_z}{\hat{x} \pm i \hat{y}} = \pm \hbar (\hat{x} \pm i \hat{y})
\end{equation*}

Considering the matrix elements, we get

\begin{equation*}
\mel{n' \ell' m'}{\comm{\hat{L}_z}{\hat{x} \pm i \hat{y}}}{n \ell m} = \pm \hbar \mel{n' \ell' m'}{(\hat{x} \pm i \hat{y})}{n \ell m}
\end{equation*}

which becomes

\begin{equation*}
(m' - m \mp 1) \hbar \mel{n' \ell' m'}{(\hat{x} \pm i \hat{y})}{ n \ell m} = 0
\end{equation*}

so we deduce that

\begin{equation*}
\mel{n' \ell' m'}{(\hat{x} \pm i \hat{y})}{n \ell m} = 0 \quad \text{unless} \quad m' = m \pm 1
\end{equation*}

giving the selection rule $\Delta m = \pm 1$.

We then conclude that the electric dipole transitions are only possible if

\begin{equation*}
\Delta m = 0, \pm 1
\end{equation*}

Which is due to

\begin{equation*}
\mathbf{D}_{mk} = \mel{m^{(0)}}{\mathbf{D}}{k^{(0)}}
\end{equation*}

hence $\mathbf{D}_{mk}$ would be zero unless $\Delta m = 0, \pm 1$.
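These $\Delta m$ rules can be verified directly by integrating the angular parts of the matrix elements over the sphere (a sketch; $Y_0^0$, $Y_1^0$, $Y_1^1$ are hand-coded and the radial factor is omitted since only the angular integral decides the rule):

```python
import numpy as np

# grid over the sphere (phi periodic, so endpoint excluded)
th = np.linspace(0.0, np.pi, 801)
ph = np.linspace(0.0, 2.0 * np.pi, 800, endpoint=False)
TH, PH = np.meshgrid(th, ph, indexing="ij")
dA = np.sin(TH) * (th[1] - th[0]) * (ph[1] - ph[0])  # area element

Y00 = np.full_like(TH, 1.0 / np.sqrt(4.0 * np.pi))
Y10 = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(TH)
Y11 = -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(TH) * np.exp(1j * PH)

def angular(bra, op, ket):
    """Angular part of <bra| op |ket> over the unit sphere."""
    return np.sum(np.conj(bra) * op * ket * dA)

z_hat = np.cos(TH)                      # z / r
xpiy = np.sin(TH) * np.exp(1j * PH)     # (x + i y) / r

m_dm0_z = angular(Y10, z_hat, Y00)      # dm = 0 with z: allowed (= 1/sqrt(3))
m_dm1_z = angular(Y11, z_hat, Y00)      # dm = 1 with z: forbidden (= 0)
m_dm1_x = angular(Y11, xpiy, Y00)       # dm = 1 with x + iy: allowed
```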

Parity Selection Rule
  • Under the parity operation $\mathbf{r} \mapsto - \mathbf{r}$, the electric dipole operator is odd, so its matrix element vanishes unless the initial and final states have opposite parity; the same argument applies to multi-electron atoms
  • See notes :)

Quantum Scattering Theory

Notation

  • $\theta$ is the scattering angle
  • $\phi$ is the azimuthal angle
  • $\mathbf{p}$ is the incident momentum
  • $\mathbf{p}'$ is the scattering momentum (i.e. momentum after being scattered)
  • $\mathbf{k} = \mathbf{p} / \hbar$, thus when we talk about the direction and so on of $\mathbf{k}$ (wave-number), it's equivalent to the angle of the momentum
  • $d \Omega$ is the solid angle
  • Rate of transitions from initial to final plane-wave states

    \begin{equation*}
R = \frac{2 \pi}{\hbar} \left| \mel{\mathbf{k}'}{\hat{V}}{\mathbf{k}} \right|^2 \rho(E_{k'})
\end{equation*}
  • $\rho(E_{k'})$ is the density of the final states: $\rho(E_{k'}) \ d E_{k'}$ is the number of final states with energy in the range $[E_{k'}, E_{k'} + d E_{k'})$
  • $V(\mathbf{r})$ denotes fixed potential (i.e. we say the "potential source" is fixed at some point and $\mathbf{r}$ is then the position of the incident particle, and we treat the potential from the fixed scattering-particle as a regular "external" field)

Setup

  • Beam of particles
  • Each of momentum $\mathbf{p} = \hbar \mathbf{k}$

quantum_scattering.png

The incident flux is the number of incident particles crossing unit area perpendicular to the beam direction per unit time.

The scattered flux is the number of particles scattered into the element of solid angle $d \Omega$ about the direction $\theta$, $\phi$ per unit time per unit solid angle.

The differential cross-section is usually denoted $\frac{d \sigma}{d \Omega}$ and is defined to be the ratio of the scattered flux to the incident flux:

\begin{equation*}
\frac{d \sigma}{d \Omega} = \frac{\text{scattered flux}}{\text{incident flux}}
\end{equation*}

The differential cross-section thus has dimensions of area: $[L]^2$

Total cross-section is then

\begin{equation*}
\sigma_T = \int_{4 \pi} \frac{d \sigma}{d \Omega} d \Omega = \int \int \frac{d \sigma}{d \Omega} \sin \theta \ d \theta d \phi
\end{equation*}

Born approximation

  • Can use time-dependent perturbation theory to approximate the cross-section
  • Assume interaction between particle and scattering centre is localized to the region around $r = 0$
  • Hamiltonian

    \begin{equation*}
\hat{H} = \hat{H}_0 + \hat{V}(\mathbf{r}), \quad \hat{H}_0 = \frac{\hat{p}^2}{2 m}
\end{equation*}

    and treat $\hat{V}(\mathbf{r})$ as perturbation

  • Wave-functions are non-normalizable, therefore we restrict to "potential-well" scenario, as we can take the width of the well as large as we "want"
    • Since we're working in 3D: box-normalization

Density of final states

  • Final-state vector $\mathbf{k}'$ is a point in k-space
  • All $\mathbf{k}'$ form a cubic lattice with lattice spacing $2 \pi / L$ (because of the potential well approx. discretizing the energy)
  • Volume of k-space per lattice point is $(2 \pi / L)^3$
  • # of states in volume element $d^3 k'$ is

    \begin{equation*}
\Bigg( \frac{L}{2 \pi} \Bigg)^3 \ d^3 k' = \Bigg( \frac{L}{2 \pi} \Bigg)^3 k'^2 \ dk' \ d \Omega  
\end{equation*}

    using spherical coordinates

  • Energy is

    \begin{equation*}
  E_{k'} = \frac{\hbar^2 k'^2}{2m}
\end{equation*}

    where $E_{k'}$ is the energy corresponding to wave-vector $\mathbf{k}'$; thus $\rho(E_{k'})$, the density of states per unit energy, satisfies

    \begin{equation*}
\rho(E_{k'}) \ d E_{k'} = \Bigg( \frac{L}{2 \pi} \Bigg)^3 k'^2 \ dk' \ d \Omega  
\end{equation*}

    is the # of states with energy in the desired interval and with $\mathbf{k}'$ pointing into the solid angle $d \Omega$ about the direction $(\theta, \phi)$.

  • Final result for density of states

    \begin{equation*}
\rho(E_{k'}) = \frac{L^3 m k'}{8 \pi^3 \hbar^2} \ d \Omega  
\end{equation*}

Incident flux

  • Box normalization corresponds to one particle per volume $L^3$

    \begin{equation*} 
\text{incident flux} = \frac{|\mathbf{v}|}{L^3} = \frac{\hbar k}{m L^3}
\end{equation*}

Scattered flux

  • By Fermi's Golden Rule

    \begin{equation*}
R = \frac{2 \pi}{ \hbar} | V_{\mathbf{k}', \mathbf{k}} |^2 \frac{L^3}{8 \pi^3} \frac{m k'}{\hbar^2} \ d \Omega  
\end{equation*}
  • Is # of particles scattered into $d \Omega$ per unit time
  • Scattered flux therefore obtained by dividing by $d \Omega$, giving the # of particles per unit time per unit solid angle

Differential Cross-section for Elastic Scattering

  • Can compute the cross-section using scattered flux and incident flux found earlier

    \begin{equation*}
\frac{d \sigma}{d \Omega} = \frac{\text{scattered flux}}{\text{incident flux}} = \frac{m L^3 }{\hbar k} \frac{2 \pi}{\hbar} | V_{\mathbf{k}', \mathbf{k}}|^2 \frac{L^3}{8 \pi^3} \frac{m k'}{\hbar^2}  
\end{equation*}
  • If potential is real, energy conservation implies elastic scattering, i.e.

    \begin{equation*}
k = k'  
\end{equation*}

The Born approximation to the differential cross-section then becomes

\begin{equation*}
\frac{d \sigma}{d \Omega} = \frac{m^2}{4 \pi^2 \hbar^4} L^6 \left| \mel{\mathbf{k}'}{\hat{V}}{\mathbf{k}} \right|^2
\end{equation*}

with

\begin{equation*}
\mel{\mathbf{k}'}{\hat{V}}{\mathbf{k}} = \frac{1}{L^3} \int V(\mathbf{r}) \exp \big( - i \mathbf{K} \cdot \mathbf{r} \big) \ d^3 r, \qquad \mathbf{K} = \mathbf{k}' - \mathbf{k}
\end{equation*}

where $\mathbf{K}$ is called the wave-vector transfer.

(this is just the 3-dimensional Fourier transform of the potential energy function.)

Scattering by Central potentials

  • Can simplify further
  • Work in polar coordinates $\Theta, \Phi$, which refer to the wave-vector transfer $\mathbf{K}$ so that

    \begin{equation*}
\mathbf{K} \cdot \mathbf{r} = K r \cos \Theta  
\end{equation*}
  • Then

    \begin{equation*}
\int V(r) \exp \big( - i \mathbf{K} \cdot \mathbf{r} \big) \ d^3 r = \frac{4 \pi}{ K} \int_0^{\infty} r V(r) \sin (Kr) \ dr
\end{equation*}
  • Born approximation then becomes

    \begin{equation*}
\frac{d \sigma}{d \Omega} = \frac{4 m^2 }{\hbar^4 K^2} \left| \int_0 ^{\infty} r V(r) \sin(Kr) \ dr \right|^2  
\end{equation*}

    which is independent of $\phi$ but depends on the scattering angle, $\theta$, through $K$.

  • Trigonometry gives

    \begin{equation*}
K = 2 k \sin \bigg( \frac{\theta}{2} \bigg) \quad \text{since} \quad k = k'  
\end{equation*}
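As a concrete example (not in the notes): for a screened Coulomb / Yukawa potential $V(r) = V_0 e^{-\mu r}$ the integral has the closed form $\int_0^{\infty} r V_0 e^{-\mu r} \sin(Kr) \, dr = \frac{2 \mu K V_0}{(\mu^2 + K^2)^2}$, which a numerical sketch (illustrative parameters) can confirm:

```python
import numpy as np

V0, mu = 1.0, 2.0    # illustrative potential strength and screening constant
k, theta = 3.0, 1.2  # incident wave number and scattering angle (illustrative)
K = 2.0 * k * np.sin(theta / 2.0)  # wave-vector transfer (elastic, k = k')

r = np.linspace(0.0, 40.0, 400001)
integrand = r * V0 * np.exp(-mu * r) * np.sin(K * r)
dr = r[1] - r[0]
I_numeric = np.sum((integrand[1:] + integrand[:-1]) / 2.0) * dr  # trapezoid

I_exact = 2.0 * mu * K * V0 / (mu ** 2 + K ** 2) ** 2
```

Squaring this and multiplying by $4 m^2 / (\hbar^4 K^2)$ then gives the Born differential cross-section for this potential.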

Quantum Rotator

  • Two particles of mass $m_1$ and $m_2$ separated by fixed distance $r_e$
  • Effective description of the rotational degrees of freedom of a diatomic molecule
  • Choose centre-of-mass as origin: system is completely specified by angles $\theta$ and $\phi$ (since $r_e$ is fixed)
  • Neglecting vibrational degrees of freedom, both $r_1$ and $r_2$ are constant, and $r_e = r_1 + r_2$
  • Since origin is frame of CM:

    \begin{equation*}
m_1 r_1 = m_2 r_2 \implies \frac{r_1}{m_2} = \frac{r_2}{m_1} = \frac{r_e}{m_1 + m_2}
\end{equation*}
  • (Classically) Moment of inertia of the system is:

    \begin{equation*}
I = m_1 r_1^2 + m_2 r_2^2 = \mu r_e^2   
\end{equation*}

    where

    \begin{equation*}
\mu = \frac{m_1 m_2}{m_1 + m_2}   
\end{equation*}
  • Classical mechanics:

    \begin{equation*}
|\mathbf{L}| = I \omega_R   
\end{equation*}

    where $\omega_R$ is the angular velocity of the system. The energy can be expressed as

    \begin{equation*}
H = \frac{1}{2} I \omega_R^2 = \frac{L^2}{2I} = \frac{L^2}{2 \mu r_e^2}
\end{equation*}
  • Correspondence principle:

    \begin{equation*}
\hat{H} = \frac{\hat{L}^2}{2 \mu r_e^2}
\end{equation*}
  • Since we're neglecting vibration, $r$ is fixed, hence the wave function is independent of $r$:

    \begin{equation*}
\hat{H} Y_{\ell}^m(\theta, \phi) = \frac{\ell (\ell + 1) \hbar^2}{2 \mu r_e^2} Y_{\ell}^m (\theta, \phi)
\end{equation*}
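A small numerical illustration of the level structure $E_\ell = \ell(\ell + 1)\hbar^2 / (2 \mu r_e^2)$, using roughly CO-like masses and bond length (the specific values are assumptions, purely illustrative):

```python
import numpy as np

u = 1.66054e-27          # atomic mass unit, kg
hbar = 1.054572e-34      # J s
c = 2.99792458e10        # speed of light, cm/s

m1, m2 = 12.0 * u, 15.995 * u   # roughly C and O masses (illustrative)
r_e = 1.128e-10                 # roughly the CO bond length, m (illustrative)

mu = m1 * m2 / (m1 + m2)        # reduced mass
I = mu * r_e ** 2               # moment of inertia

ell = np.arange(0, 5)
E = ell * (ell + 1) * hbar ** 2 / (2.0 * I)  # rotational levels, J

B_cm = hbar / (4.0 * np.pi * c * I)  # rotational constant in cm^-1
ratio = E[2] / E[1]                  # l(l+1) scaling: expect exactly 3
```

The $\ell(\ell+1)$ scaling means adjacent-level spacings grow linearly in $\ell$, which is what gives the evenly spaced lines of a rotational spectrum.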

Two-body scattering

  • So far assumed beam of particle scattered by fixed scattering centre; interaction described by $\hat{V}(\mathbf{r})$
    • Useful for electron-atom scattering, where the atom can be regarded as infinitely heavy
  • Consider two-particle system; turns out it's the same!
  • Hamiltonian with relative separation

    \begin{equation*}
\hat{H} = - \frac{\hbar^2}{2 m_1 } \nabla_1^2 - \frac{\hbar^2}{2m_2} \nabla_2^2 + V(\mathbf{r}_1 - \mathbf{r}_2)
\end{equation*}
  • Centre of mass and relative position vectors

    \begin{equation*}
\mathbf{R} = \frac{m_1 \mathbf{r}_1 + m_2 \mathbf{r}_2}{m_1 + m_2} \quad \text{and} \quad \mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2
\end{equation*}
  • Then

    \begin{equation*}
\mathbf{r}_1 = \mathbf{R} + \frac{m_2}{m_1 + m_2} \mathbf{r} \quad \text{and} \quad \mathbf{r}_2 = \mathbf{R} - \frac{m_1}{m_1 + m_2} \mathbf{r}
\end{equation*}
  • Rewrite gradient operators $\nabla_1$ and $\nabla_2$ by

    \begin{equation*}
\frac{\partial }{\partial x_1} = \frac{\partial X}{\partial x_1} \frac{\partial }{\partial X} + \frac{\partial x}{\partial x_1} \frac{\partial }{\partial x} = \frac{m_1}{m_1 + m_2} \frac{\partial }{\partial X} + \frac{\partial }{\partial x}  
\end{equation*}

    where

    \begin{equation*}
\nabla_R = \Bigg( \frac{\partial }{\partial X}, \frac{\partial }{\partial Y}, \frac{\partial }{\partial Z} \Bigg) \quad \text{and} \quad \nabla = \Bigg( \frac{\partial }{\partial x}, \frac{\partial }{\partial y}, \frac{\partial }{\partial z} \Bigg)  
\end{equation*}

    and so on.

To see this just apply the differential operator wrt. $x_1$ to some function $f$:

\begin{equation*}
\frac{\partial f}{\partial x_1} = \frac{\partial X}{\partial x_1} \frac{\partial f}{\partial X} + \frac{\partial x}{\partial x_1} \frac{\partial f}{\partial x}
\end{equation*}
  • Then

    \begin{equation*}
\hat{H} = - \frac{\hbar^2}{2 M}   \nabla_R^2 - \frac{\hbar^2 }{2 \mu} \nabla^2 + V(\mathbf{r})
\end{equation*}

    where

    \begin{equation*}
  M = m_1 + m_2 \quad \text{and} \quad \mu = \frac{m_1 m_2}{M}
\end{equation*}

    with $\mu$ called the reduced mass

  • Can be viewed as

    \begin{equation*}
\hat{H} = \hat{H}_{\text{CM}} + \hat{H}_{\text{rel}}  \quad \text{where} \quad \hat{H}_{\text{CM}} = - \frac{\hbar^2}{2M} \nabla_R^2
\end{equation*}
  • $\hat{H}_{\text{CM}}$ describes free motion (kinetic energy) of centre of mass
  • In CM frame the CM is at rest, hence

    \begin{equation*}
\hat{H} = \hat{H}_{\text{rel} } = - \frac{\hbar^2}{2 \mu} \nabla^2 + V(\mathbf{r})
\end{equation*}

    which is identical in form to the Hamiltonian of a single particle moving in the fixed potential $V(\mathbf{r})$

  • Implies CM cross-section for two-body scattering obtained from solution to single particle of mass $\mu$ scattering from fixed potential

$H_2^+$ Ion and Bonding

Notation

  • $M$ is the mass of a proton
  • $m$ is the mass of an electron
  • $\mu_e$ reduced mass of the electron/two-proton system

    \begin{equation*}
  \mu_e = \frac{m (2M)}{m + 2M} \approx m
\end{equation*}
  • $\mu_{12}$ is the reduced mass of the two-proton system

    \begin{equation*}
\mu_{12} = \frac{M}{2}  
\end{equation*}
  • eigenfunction is called gerade if parity is even, and ungerade if the parity is odd

Stuff

  • Schrödinger equation is

    \begin{equation*}
\Bigg[ - \frac{\hbar^2}{2 \mu_{12}} \nabla_R^2 - \frac{\hbar^2}{2 \mu_e} \nabla_r^2 - \frac{e^2}{(4 \pi \epsilon_0) r_1} - \frac{e^2 }{(4 \pi \epsilon_0) r_2} + \frac{e^2}{(4 \pi \epsilon_0) R} \Bigg] \psi(\mathbf{r}, \mathbf{R}) = E \psi(\mathbf{r}, \mathbf{R})  
\end{equation*}
  • Nuclei massive compared to electrons, thus motion of nuclei much slower than electron
  • Therefore, treat nuclear and electronic motions independently, further treat the nuclei as fixed, i.e. we only care about $\mathbf{R}$

Central Field Approximation

Notation

  • $\ell$ is angular momentum quantum number
  • $m$ is magnetic quantum number
  • $\mu$ is mass

Stuff

Starting with the TISE, we have:

\begin{equation*}
\Bigg[ - \frac{\hbar^2}{2 \mu} \nabla^2 + V(r) \Bigg] u(\mathbf{r}) = E u(\mathbf{r})
\end{equation*}

Rewriting in spherical coordinates, using the Laplacian in spherical coordinates and assuming

\begin{equation*}
u(\mathbf{r}) = R(r) Y_{\ell}^{m}(\theta, \phi)
\end{equation*}

we have

\begin{equation*}
\frac{\hbar^2}{2 \mu} \Bigg[ - \frac{1}{r^2} \frac{\partial}{\partial r} \bigg( r^2 \frac{\partial}{\partial r} \bigg) + \frac{1}{\hbar^2 r^2}\hat{L}^2 \Bigg] R(r) Y_{\ell}^m(\theta, \phi) = \Big( E - V(r) \Big) R(r) Y_{\ell}^m(\theta, \phi)
\end{equation*}

which becomes:

\begin{equation*}
\frac{1}{r^2} \frac{d}{dr} \Bigg( r^2 \frac{dR}{dr} \Bigg) - \frac{\ell (\ell + 1)}{r^2} R + \frac{2 \mu}{\hbar^2} \big[ E - V(r) \big] R = 0
\end{equation*}

Suppose

\begin{equation*}
R(r) = \frac{\chi(r)}{r}
\end{equation*}

then, differentiating,

\begin{equation*}
\frac{dR}{dr} = \frac{d}{dr} \frac{\chi(r)}{r} = \frac{\chi'(r) r - \chi}{r^2}
\end{equation*}

Thus,

\begin{equation*}
\begin{split}
  \frac{1}{r^2} \frac{d}{dr} \Bigg( r^2 \frac{dR}{dr}  \Bigg) &= \frac{1}{r^2} \frac{d}{dr} \Bigg( \chi'(r) r - \chi \Bigg) \\
  &= \frac{1}{r^2}\Bigg( \chi''(r) r + \chi'(r) - \chi'(r) \Bigg) \\
  &= \frac{\chi''(r)}{r}
\end{split}
\end{equation*}

Substituting back into the equation we just arrived at, we obtain

\begin{equation*}
\frac{\chi''(r)}{r} - \frac{\ell(\ell + 1)}{r^2} \frac{\chi}{r} + \frac{2 \mu}{\hbar^2}\Big( E - V(r) \Big) \frac{\chi}{r} = 0
\end{equation*}

Multiplying by $r$

\begin{equation*}
\chi'' - \frac{\ell(\ell + 1)}{r^2} \chi + \frac{2 \mu}{\hbar^2} \Big( E - V(r) \Big) \chi = 0
\end{equation*}

Which we can rewrite as

\begin{equation*}
- \frac{\hbar^2}{2 \mu} \chi'' + \Bigg( V(r) + \frac{\hbar^2}{2 \mu} \frac{\ell (\ell + 1)}{r^2} \Bigg) \chi = E \chi
\end{equation*}

Therefore we can identify an effective potential:

\begin{equation*}
V_{\text{eff}} = V(r) + \frac{\hbar^2}{2 \mu} \frac{\ell (\ell + 1)}{r^2}
\end{equation*}
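The radial equation with this effective potential can be solved numerically; a finite-difference sketch (not from the notes) in atomic units ($\hbar = \mu = e = 4\pi\varepsilon_0 = 1$) for the Coulomb potential $V(r) = -1/r$ with $\ell = 0$, where the exact ground-state energy is $-1/2$ hartree:

```python
import numpy as np

ell = 0
N, r_max = 1500, 40.0
dr = r_max / (N + 1)
r = dr * np.arange(1, N + 1)   # interior points; chi(0) = chi(r_max) = 0

V_eff = -1.0 / r + ell * (ell + 1) / (2.0 * r ** 2)

# -(1/2) chi'' + V_eff chi = E chi, 3-point stencil for chi''
H = np.diag(1.0 / dr ** 2 + V_eff)
off = np.full(N - 1, -0.5 / dr ** 2)
H += np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]  # ground-state energy, expect about -0.5
```

Repeating with $\ell > 0$ shows the centrifugal term pushing levels up, as the effective potential suggests.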

Example: charged nucleus

Consider the problem of an atom or ion containing a nucleus with charge $Ze$ and $N$ electrons.

We assume the nucleus to be infinitely heavy and that only the following interactions are present:

  • Coulomb interaction between each electron and the nucleus
  • Mutual electronic repulsion

Then the Hamiltonian is

\begin{equation*}
\hat{H} = \sum_{i=1}^{N} \Bigg( \frac{\hat{p}_i^2}{2m} - \frac{Ze^2}{(4 \pi \varepsilon_0) r_i} \Bigg) + \sum_{i > j = 1}^{N} \frac{e^2}{(4 \pi \varepsilon_0) r_{ij}}
\end{equation*}

Presence of electron-electron interaction terms (2nd sum) means that the Schödinger equation is not separable.

Therefore we introduce the Central Field Approximation, an example of an independent particle model , in which we assume that each electron moves in an effective potential $V_{\text{eff}}(r)$, which incorporates the nuclear attraction and the average effect of the repulsive interaction with the other $(N - 1)$ electrons.

We expect this effective potential to have the following properties:

\begin{equation*}
\lim_{r \to 0} V_{\text{eff}}(r) = - \frac{Ze^2}{(4 \pi \varepsilon_0) r} \quad \text{and} \quad \lim_{r \to \infty} V_{\text{eff}}(r) = - \frac{[Z - (N - 1)] e^2}{(4 \pi \varepsilon_0) r}
\end{equation*}
  • $r \to 0$ means the nucleus dominates
  • $r \to \infty$ means that we can treat (nucleus) + (the other $N - 1$ electrons) as a point charge

Variational Method

  • Especially useful for estimating the ground-state energy of a system
  • This is closely (if not "exactly") related to the Rayleigh quotient from linear algebra

Main problem is guessing a trial function $\psi_T$:

  • Recognize properties of the system and observe that $\psi_T$ ought to have the same properties (e.g. symmetry, etc.)

Useful identity:

\begin{equation*}
\int r e^{-ar} \ dr = - \frac{\partial}{\partial a} \int e^{- ar} \ dr
\end{equation*}

Consider a system described by a Hamiltonian $\hat{H}$, with a complete orthonormal set of eigenstates $\{ \ket{n} \}$ and corresponding energy eigenvalues $\{ E_n \}$ ordered in increasing value

\begin{equation*}
E_1 \le E_2 \le E_3 \le \dots
\end{equation*}

We observe that

\begin{equation*}
\langle E \rangle = \frac{\mel{\psi}{\hat{H}}{\psi}}{\bra{\psi}\ket{\psi}} = \frac{\sum_{n} |c_n|^2 E_n}{\sum_{n} |c_n|^2} \ge \frac{\sum_n |c_n|^2 E_1}{\sum_n |c_n|^2} = E_1
\end{equation*}

since $E_n \ge E_1$ for all $n > 1$. Thus we have the inequality

\begin{equation*}
\langle E \rangle \ge E_1
\end{equation*}

Thus, we can estimate $E_1$ as follows:

  1. We choose a trial state $\ket{\psi_T}$ which depends on one or more parameters $\alpha_1, \dots, \alpha_r$.
  2. Calculate

    \begin{equation*}
 E(\alpha_1, \alpha_2, \dots, \alpha_r) = \frac{\mel{\psi_T}{\hat{H}}{\psi_T}}{\bra{\psi_T}\ket{\psi_T}}
\end{equation*}
  3. Minimize $E(\alpha_1, \alpha_2, \dots, \alpha_r)$ wrt. the variational parameters $\alpha_1, \dots, \alpha_r$, by solving

    \begin{equation*}
 \frac{\partial E(\alpha_1, \dots, \alpha_r)}{\partial \alpha_i} = 0, \quad i = 1, 2, \dots, r
\end{equation*}

The resulting minimum value of $E(\alpha_1, \dots, \alpha_r)$ is then our best estimate for the ground-state energy of the given trial function.
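A classic worked example (an addition, not from the notes): the hydrogen atom in atomic units (exact $E_1 = -1/2$) with a Gaussian trial function $\psi_T(r) = e^{-\alpha r^2}$, for which the standard result is $E(\alpha) = \frac{3\alpha}{2} - 2\sqrt{2\alpha/\pi}$ with minimum $E_{\min} = -4/(3\pi) \approx -0.424$, above the true ground state as the theorem requires:

```python
import numpy as np

def E(alpha):
    """Variational energy for the Gaussian trial psi_T = exp(-alpha r^2)
    in hydrogen (atomic units): <T> = 3a/2, <V> = -2 sqrt(2a/pi)."""
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

alphas = np.linspace(1e-4, 2.0, 200001)
Evals = E(alphas)
i_min = np.argmin(Evals)
alpha_min, E_min = alphas[i_min], Evals[i_min]

alpha_exact = 8.0 / (9.0 * np.pi)   # analytic minimiser
E_exact = -4.0 / (3.0 * np.pi)      # about -0.424 > -0.5 = E_1
```

The Gaussian misses the exponential cusp of the true $1s$ wavefunction, which is why the bound is about 15% above $-1/2$.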

Hidden Variables, EPR Paradox, Bell's Inequality

  • Hidden variable theories suppose that there exist parameters which fully determine the values of the observables, and that QM is an incomplete theory where the wavefunction simply reflects the probability distribution wrt. these parameters

EPR thought experiment

  • Two measuring devices (Alice and Bob)
  • A pair of electrons is sent from the middle, where conservation of angular momentum (total spin $s = 0$) requires:
    • Alice to receive $\ket{- \frac{1}{2}}$ or $\ket{\frac{1}{2}}$
    • Bob receives the state which Alice does not
  • Distance between measuring devices is too large to exchange any information between the two measuring devices (even at the speed of light)

Consider the following cases:

  • Both measure the spin along the z-direction: Alice's measurement completely determines Bob's outcome, since total spin is conserved
  • Alice measures along the z-direction, Bob along the x-direction: Alice's measurement does not determine Bob's outcome; rather, both spin-half states are equally likely for Bob

The state of the "electron" measured by Bob will be correlated to the direction in which Alice makes the measurement!

Bell's Inequality

Notation

  • $\sigma_i$ spin component in direction $\mathbf{n}_i$ of the particle travelling to $A$, e.g. $(\sigma_1, \sigma_2, \sigma_3) = (- \ + \ -)$ refers to taking on a specific configuration (remember we can only measure one of these directions)
  • $\tau_i$ spin component in direction $\mathbf{n}_i$ of the particle travelling to $B$
  • $f(\sigma_1, \sigma_2, \sigma_3, \tau_1, \tau_2, \tau_3)$ is the fraction of particle-pairs belonging to the group $(\sigma_1, \sigma_2, \sigma_3, \tau_1, \tau_2, \tau_3)$
  • $P_{++}(\mathbf{n}_1, \mathbf{n}_2)$ denotes the probability of measuring $\sigma_1 = +$ and $\tau_2 = +$ together

Stuff

Since we're assuming that the spin-half pair is produced in a singlet state, this tells us immediately that

\begin{equation*}
f(\sigma_1, \sigma_2, \sigma_3, \tau_1, \tau_2, \tau_3) = 0 \quad \text{unless} \quad \sigma_i = - \tau_i, \quad i = 1, 2, 3
\end{equation*}

Suppose then that we're measuring $\sigma_1$ and $\tau_2$, we marginalize out all the other possible states to obtain the distribution:

\begin{equation*}
P_{++}(\mathbf{n}_1, \mathbf{n}_2) = \sum_{\sigma_2, \sigma_3} \sum_{\tau_1, \tau_3} f( + \ \sigma_2 \ \sigma_3,\ \tau_1 \ + \ \tau_3)
\end{equation*}

Observe that

\begin{equation*}
f( + \ \sigma_2 \ \sigma_3,\ \tau_1 \ + \ \tau_3) = 
\begin{cases}
  f( + \ \sigma_2 \ \sigma_3,\ \tau_1 \ + \ \tau_3) & \text{if } \tau_1 = \ -, \sigma_2 = - \\
  0 & \text{otherwise}
\end{cases}
\end{equation*}

Thus,

\begin{equation*}
P_{++}(\mathbf{n}_1, \mathbf{n}_2) = f( +, -, +; -, +, -) + f( +, - ,- ; -, +, + )
\end{equation*}

Doing the same for the pairs $\mathbf{n}_2, \mathbf{n}_3$ and $\mathbf{n}_1, \mathbf{n}_3$, and using that every $f \ge 0$, we arrive at Bell's inequality for measurements made on two spin-half particles in directions $\mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_3$:

\begin{equation*}
P_{ ++ } (\mathbf{n}_1, \mathbf{n}_2) \le P_{ ++ } (\mathbf{n}_2, \mathbf{n}_3) + P_{ ++ } (\mathbf{n}_1, \mathbf{n}_3)
\end{equation*}

which defines a requirement that must be satisfied by any realistic local theory:

Components possess objective properties which exist independently of any measurements by observers, and the result of a measurement of a property at point $B$ cannot depend on an event at point $A$ sufficiently far from $B$ that information about the event at $A$, even travelling at the speed of light, could not reach $B$ until after the measurement has taken place.

Theories meeting these requirements are called realistic local theories.
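QM itself violates this inequality: for the singlet state, the standard result is $P_{++}(\mathbf{n}_i, \mathbf{n}_j) = \frac{1}{2} \sin^2(\theta_{ij}/2)$, where $\theta_{ij}$ is the angle between the two directions. A numerical sketch with three coplanar directions at $0$, $\pi/2$, $\pi/4$ (an illustrative choice):

```python
import numpy as np

def P_pp(theta_ij):
    """Singlet-state probability of ++ outcomes along directions separated
    by angle theta_ij (standard QM result, quoted rather than derived)."""
    return 0.5 * np.sin(theta_ij / 2.0) ** 2

a1, a2, a3 = 0.0, np.pi / 2.0, np.pi / 4.0  # coplanar directions

lhs = P_pp(abs(a1 - a2))                      # P++(n1, n2) = 1/4
rhs = P_pp(abs(a2 - a3)) + P_pp(abs(a1 - a3))  # about 0.146
bell_violated = lhs > rhs                      # True: QM breaks the bound
```

So no realistic local theory can reproduce the singlet correlations for these directions.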

Quantum communication

Notation

  • $\ket{j} \otimes \ket{\chi} = \ket{j} \ket{\chi} = \ket{j \chi}$, i.e. these all denote the same thing
  • Convention to represent these two states

    \begin{equation*}
\ket{+} \leftrightarrow \ket{0} \longrightarrow 
\begin{pmatrix}
  1 \\ 0
\end{pmatrix}, \quad
\ket{-} \leftrightarrow \ket{1} \longrightarrow
\begin{pmatrix}
  0 \\ 1
\end{pmatrix}
\end{equation*}

    which is often parametrized in polar angles

    \begin{equation*}
\ket{\psi} = \cos \frac{\theta}{2} \ket{+} + e^{i \phi} \sin \frac{\theta}{2} \ket{-}
\end{equation*}

    with $\theta \in [0, \pi]$ and $\phi \in [0, 2 \pi]$.

  • Bell's states:

    \begin{equation*}
\begin{split}
  \ket{\Phi_{AB}^{\pm}} &= \frac{1}{\sqrt{2}} \Big( \ket{+_A} \ket{ +_B } \pm \ket{-_A} \ket{-_B} \Big) \\
  \ket{\Psi_{AB}^{\pm}} &= \frac{1}{\sqrt{2}} \Big( \ket{ +_A } \ket{ -_B } \pm \ket{-_A} \ket{ +_B} \Big)
\end{split}
\end{equation*}

Secure communication

An entangled state between systems $A$ and $B$ implies the wavefunction of the combined $A$ and $B$ systems cannot be written as a tensor-product of independent wavefunctions for each system, i.e.

\begin{equation*}
\Psi_{A, B} \ne \psi_A \otimes \psi_B
\end{equation*}

The Bloch sphere is just the parametrization of a single-qubit state in polar angles in 3d:

\begin{equation*}
\ket{\psi} = \cos \frac{\theta}{2} \ket{+} + e^{i \phi} \sin \frac{\theta}{2} \ket{-}
\end{equation*}

with $\theta \in [0, \pi]$ and $\phi \in [0, 2 \pi]$.

A frequently used complete set of two-qubit states in which both qubits are entangled is the set of Bell states,

\begin{equation*}
\begin{split}
  \ket{\Phi^{\pm}} &= \frac{1}{\sqrt{2}} \big( \ket{+ \ +} \pm \ket{- \ -} \big) \\
  \ket{\Psi^{\pm}} &= \frac{1}{\sqrt{2}} \big( \ket{ + \ -} \pm \ket{- \ +} \big)
\end{split}
\end{equation*}
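In the $\ket{+} \to (1, 0)^T$, $\ket{-} \to (0, 1)^T$ representation introduced above, the Bell states can be built with Kronecker products and checked for orthonormality and for entanglement (Schmidt rank 2 via the singular values of the reshaped state vector):

```python
import numpy as np

plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

phi_p = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2)
phi_m = (np.kron(plus, plus) - np.kron(minus, minus)) / np.sqrt(2)
psi_p = (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2)
psi_m = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)

bell = np.array([phi_p, phi_m, psi_p, psi_m])
gram = bell @ bell.T  # should be the 4x4 identity (orthonormal set)

# Schmidt decomposition: two equal singular values of 1/sqrt(2) mean
# the state cannot be written as a tensor product (maximally entangled)
sv = np.linalg.svd(psi_m.reshape(2, 2), compute_uv=False)
```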

No-cloning theorem

It is impossible to create an identical copy of an arbitrary unknown quantum state.

Formally, let $\ket{\chi} \in \mathcal{H}$ where $\mathcal{H}$ denotes a Hilbert space.

Suppose we're in some initial state $\ket{j}$, then initially we have

\begin{equation*}
\ket{j} \otimes \ket{\chi} = \ket{j} \ket{\chi} = \ket{j \chi} \in \mathcal{H} \otimes \mathcal{H}
\end{equation*}

Then the question is: can we turn $\ket{j}$ into a duplicate state of $\ket{\chi}$?

  1. Observing $\ket{\chi}$ will collapse the state, thus corrupting the state we want to copy
  2. Can control $\hat{H}$ of the system and evolve $\hat{H}$ with unitary time evolution operator $\hat{U}(t)$, where by unitary operator we mean that it preserves the norm and coherence.

Answer is no.

Suppose $\exists \hat{U}: \mathcal{H} \otimes \mathcal{H} \to \mathcal{H} \otimes \mathcal{H}$ on $\mathcal{H} \otimes \mathcal{H}$ such that

\begin{equation*}
\begin{split}
  \hat{U} \ket{j} \ket{\chi} &= \exp \bigg( - i \Theta (j, \chi) \bigg) \ket{\chi} \ket{\chi} \\
  \hat{U} \ket{j} \ket{\phi} &= \exp \bigg( - i \Theta(j, \phi) \bigg) \ket{\phi} \ket{\phi}
\end{split}
\end{equation*}

where $\chi$ and $\phi$ are two arbitrary states in $\mathcal{H}$.

$\Theta(j, \chi)$ is a phase which in general can depend on the states on which $\hat{U}$ is acting. Then we have

\begin{equation*}
\begin{split}
  \bra{\chi}\ket{\phi} = \bra{\chi} \bra{j} \hat{U}^\dagger \hat{U} \ket{j} \ket{\phi} &= \exp \big[ i \big( - \Theta(j, \phi) + \Theta(j, \chi) \big) \big] \bra{\chi} \bra{\chi}\ket{\phi} \ket{\phi} \\
  &= \exp \big[ i \big( - \Theta(j, \phi) + \Theta(j, \chi) \big) \big] \bra{\chi}\ket{\phi}^2
\end{split}
\end{equation*}

This equation leads to the condition

\begin{equation*}
| \bra{\chi}\ket{\phi} | = | \bra{\chi}\ket{\phi}|^2 \quad \implies \quad | \bra{\chi}\ket{\phi} | = 0 \text{ or } 1
\end{equation*}

I.e. states are either orthogonal or the same.

Quantum teleportation of a qubit

  1. Two spin-half particles are prepared in the Bell state

    \begin{equation*}
 \ket{\Psi_{AB}^-} = \frac{1}{\sqrt{2}} \Big[ \ket{+_A} \ket{-_B} - \ket{-_A} \ket{+_B} \Big]
\end{equation*}
  2. Establish a quantum channel by giving one of the electrons to Alice ($A$) and one to Bob ($B$); due to the EPR correlations we know that if Alice makes a measurement, she can use a classical channel to tell Bob which state she measured, which tells Bob that his electron is in the opposite state.
  3. New particle given to Alice (Bob also knows that $\ket{\chi_{A'}}$ also takes this form)

    \begin{equation*}
\ket{\chi_{A'}} = c \ket{+_{A'}} + d \ket{-_{A'}}   
\end{equation*}
  4. Combine with the EPR states (referring to the ones Alice and Bob received beforehand)

    \begin{equation*}
\begin{split}
  \ket{\Phi_{A' A B}} &= \frac{c}{\sqrt{2}} \ket{+_{A'}} \ket{\Psi_{AB}^-} + \frac{d}{\sqrt{2}} \ket{-_{A'}} \ket{\Psi_{AB}^-} \\
  &= \frac{c}{2} \Big[ \ket{+_{A'}} \ket{+_A} \ket{-_B} - \ket{ +_{A'}} \ket{-_A} \ket{ +_B } \Big] \\
  & \ + \frac{d}{2} \Big[ \ket{-_{A'}} \ket{+_{A}} \ket{-_B} - \ket{-_{A'}} \ket{-_A} \ket{+_B} \Big]
\end{split}
\end{equation*}

    which we can rewrite in terms of the Bell-basis:

    \begin{equation*}
\begin{split}
   \ket{\Phi_{A' A B}} =& \frac{1}{2} \Big[ \ket{\Psi_{A' A}^-} \big( - c \ket{ +_B} - d \ket{ -_B} \big) \\
   & + \ket{\Psi_{A' A}^+ } \big( - c \ket{+_B} + d \ket{-_B} \big) \\
   & + \ket{\Phi_{A' A}^-} \big( c \ket{-_B} + d \ket{+_B} \big) \\
   & + \ket{\Phi_{A' A}^+} \big( c \ket{-_B} - d \ket{+_B} \big) \Big]
\end{split}
\end{equation*}
  5. Alice makes a measurement, in the Bell basis, of the two-particle system consisting of $A$ and $A'$ (this can in fact be performed physically)
    • Collapses $\Phi_{A' A B}$ to one of the $\ket{\Psi_{A'A}^\pm}$ or $\ket{\Phi_{A' A}^\pm}$
    • Alice now knows which of the states $\ket{\Psi_{A'A}^\pm}$ or $\ket{\Phi_{A' A}^\pm}$ her system is in
    • Breaks entanglement with $B$ and forces $B$ to take on the "remainder" of the system, i.e. one of the

      \begin{equation*}
- c \ket{+_B} \pm d \ket{-_B} \quad \text{OR} \quad c \ket{-_B} \pm d \ket{+_B}
\end{equation*}
  6. Alice sends Bob, over the classical channel, which state $\ket{\Phi_{A' AB}}$ collapsed to
    • Since Bob knew the expression for $\ket{\chi_{A'}}$, by learning which state $\ket{\Phi_{A' A B}}$ is in, he can fully determine what state his own particle is in, i.e. the coefficients of $\ket{-_B}$ and $\ket{+_B}$
    • This tells Bob which unitary transformation he needs to perform to get his own particle into the state which $\ket{\chi_{A'}}$ was originally in!
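The whole protocol can be verified numerically; here is a minimal numpy sketch (my own, with the basis convention $\ket{+} \to (1, 0)$, $\ket{-} \to (0, 1)$), projecting the $A'A$ pair onto each Bell state and inspecting Bob's leftover qubit:

```python
import numpy as np

# Basis convention (an assumption of this sketch): |+> -> (1,0), |-> -> (0,1)
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])

def kron(*vs):
    """Tensor product of several state vectors."""
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# State to teleport: |chi_{A'}> = c|+> + d|->
c, d = 0.6, 0.8
chi = c * plus + d * minus

# Shared Bell state |Psi^-_{AB}>
psi_minus = (kron(plus, minus) - kron(minus, plus)) / np.sqrt(2)

# Combined state |Phi_{A'AB}>, qubit ordering (A', A, B);
# reshape so that rows index the A'A pair and columns index B
phi = np.kron(chi, psi_minus).reshape(4, 2)

# Bell basis on the A'A pair
bell = {
    "Psi-": (kron(plus, minus) - kron(minus, plus)) / np.sqrt(2),
    "Psi+": (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2),
    "Phi-": (kron(plus, plus) - kron(minus, minus)) / np.sqrt(2),
    "Phi+": (kron(plus, plus) + kron(minus, minus)) / np.sqrt(2),
}

# Projecting A'A onto each Bell state leaves Bob's (unnormalised) qubit;
# in every branch it is |chi> up to one of four known unitaries.
for name, b in bell.items():
    bob = b @ phi
    print(name, np.round(bob / np.linalg.norm(bob), 3))
```

In each branch the magnitudes of Bob's amplitudes are $(|c|, |d|)$ or $(|d|, |c|)$, matching the four bracketed states in the Bell-basis expansion above.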

Superdense coding

Superdense coding is a procedure that utilises an entangled state shared between two participants so that one participant can transmit two classical bits of information by sending just one qubit.
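A minimal numpy sketch of why this works (my own illustration, in the computational basis): Alice's four local operations send the shared Bell pair to the four mutually orthogonal Bell states, so Bob can decode two bits with a single joint measurement.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2); Alice holds the first qubit
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Alice encodes two classical bits with a local operation on her qubit alone
encodings = {
    (0, 0): I2,
    (0, 1): X,
    (1, 0): Z,
    (1, 1): Z @ X,
}
states = {bits: np.kron(U, I2) @ phi_plus for bits, U in encodings.items()}

# The four resulting states are mutually orthogonal (the Bell basis),
# so Bob can distinguish them perfectly with one joint measurement
for b1 in states:
    for b2 in states:
        overlap = abs(states[b1] @ states[b2])
        assert np.isclose(overlap, 1.0 if b1 == b2 else 0.0)
print("four orthogonal states -> 2 bits per transmitted qubit")
```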

Q & A

DONE Is there a correspondance between "unitary transformation / operator" and unitary matrices (when representing the operator as a matrix)?

Check out the definition of a

TODO Not quite sure how the representations are used in the Superdense coding step-by-step procedure in the notes

Quantum computing

A quantum register is simply the entire state vector.

Usually we write it in the following form:

\begin{equation*}
\sum_{c, t} a_{ct} \ket{c}_n \ket{t}_m
\end{equation*}

where:

  • $c$ are called control or input register (does not necessarily make sense, it's just a term)
  • $t$ are called the target or output register (does not necessarily make sense, it's just a term)
  • $n$ and $m$ are the number of bits labelling the states $\ket{c}_n$ and $\ket{t}_m$, respectively

Example: $\ket{c}_2$ is a control ket specified by a binary number of length 2, i.e. a linear combination of the states $\ket{00}$, $\ket{10}$, $\ket{01}$, and $\ket{11}$.
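To make the notation concrete, here is a small numpy sketch (my own; the usual binary-string-to-Kronecker-product convention is assumed):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def ket(bits):
    """Build |b_1 b_2 ...> as a Kronecker product of single-qubit kets."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, ket1 if b == "1" else ket0)
    return v

# |c>_2 as an equal superposition of |00>, |01>, |10> and |11>
c_reg = sum(ket(b) for b in ("00", "01", "10", "11")) / 2.0
print(c_reg)        # all four amplitudes equal 1/2

# One term of sum_{c,t} a_ct |c>_n |t>_m with n = 2, m = 1: the state |10>|1>
full = np.kron(ket("10"), ket("1"))
print(full.size)    # the register lives in a 2^(n+m) = 8 dimensional space
```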

Algorithms

Notation

  • $\hat{O}_f$ is called the oracle
  • Quantum register :

    \begin{equation*}
\sum_{c, t} a_{ct} \ket{c}_n \ket{t}_m
\end{equation*}

Deutsch's algorithm

\begin{equation*}
  f : \left\{ 0, 1 \right\} \to \left\{ 0, 1 \right\}
\end{equation*}

We want to determine whether

\begin{equation*}
f(0) = f(1) \quad \text{or} \quad f(0) \ne f(1)
\end{equation*}

The oracle acts as

\begin{equation*}
\hat{O}_f \ket{c} \ket{t} = \ket{c} \ket{t \oplus f(c)}
\end{equation*}

where $\oplus$ acts in $\{ 0, 1 \}$, thus

\begin{equation*}
t \oplus f(c) \equiv t + f(c) \mod 2
\end{equation*}
The circuit for Deutsch's algorithm is

\begin{equation*}
\big( \hat{H}_d \otimes \hat{I} \big) \hat{O}_{f_i} \big( \hat{H}_d \otimes \hat{H}_d \big) \big( \hat{X} \otimes \hat{X} \big) \ket{0} \ket{0}
\end{equation*}

where

\begin{equation*}
\hat{X} = 
\begin{pmatrix}
  0 & 1  \\
  1 & 0
\end{pmatrix},
\quad
\hat{H}_d = \frac{1}{\sqrt{2}}
\begin{pmatrix}
  1 & 1 \\
  1 & - 1
\end{pmatrix}
\end{equation*}

and

\begin{equation*}
\ket{0} \rightarrow 
\begin{pmatrix}
  1 \\ 0
\end{pmatrix}, \quad 
\ket{1} \rightarrow
\begin{pmatrix}
  0 \\ 1
\end{pmatrix}
\end{equation*}

Applying the operator above, we get

\begin{equation*}
\begin{split}
  & \frac{1}{2 \sqrt{2}} \ket{0} \Big[ \ket{f_i(0)} - \ket{1 + f_i(0)} - \ket{f_i(1)} + \ket{1 + f_i(1)} \Big] \\
  + & \frac{1}{2 \sqrt{2}} \ket{1} \Big[ \ket{f_i(0)} - \ket{1 + f_i(0)} + \ket{f_i(1)} - \ket{1 + f_i(1)} \Big]
\end{split}
\end{equation*}

Thus, for $f_i(0) \ne f_i(1)$ this leads to

\begin{equation*}
\frac{1}{\sqrt{2}} \ket{0} \Big[ \ket{f_i(0)} - \ket{1 + f_i(0)} \Big]
\end{equation*}

and for $f_i(0) = f_i(1)$, we have

\begin{equation*}
\frac{1}{\sqrt{2}} \ket{1} \Big[ \ket{f_i(0)} - \ket{1 + f_i(0)} \Big]
\end{equation*}

i.e. just by looking at the control register $c$, we get the answer!
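The circuit above can be simulated directly (my own numpy sketch; the index convention $\ket{c}\ket{t} \mapsto 2c + t$ is an assumption). The control qubit reads $\ket{1}$ when $f_i$ is constant and $\ket{0}$ when $f_i(0) \ne f_i(1)$, as derived:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def oracle(f):
    """O_f |c>|t> = |c>|t XOR f(c)>, built as a 4x4 permutation matrix."""
    O = np.zeros((4, 4))
    for c in range(2):
        for t in range(2):
            O[2 * c + (t ^ f(c)), 2 * c + t] = 1.0
    return O

def deutsch(f):
    state = np.zeros(4); state[0] = 1.0           # |0>|0>
    state = np.kron(X, X) @ state                 # -> |1>|1>
    state = np.kron(H, H) @ state
    state = oracle(f) @ state
    state = np.kron(H, I2) @ state
    p0 = state[0] ** 2 + state[1] ** 2            # P(control measured as |0>)
    return 0 if np.isclose(p0, 1.0) else 1

# control reads 1 for constant f, 0 for f(0) != f(1)
print(deutsch(lambda x: 0), deutsch(lambda x: x))
```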

Grover's algorithm

  • Search algorithm
  • Reduces the number of oracle queries to $O(\sqrt{n})$ for a search over $n$ items (vs. $O(n)$ classically)
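The amplitude dynamics can be simulated classically (my own sketch; the oracle is modelled as a sign flip on the marked index, and the diffusion step as inversion about the mean):

```python
import numpy as np

def grover(n_items, marked):
    """Simulate Grover iterations on the amplitude vector."""
    state = np.ones(n_items) / np.sqrt(n_items)   # uniform superposition
    n_iter = int(np.pi / 4 * np.sqrt(n_items))    # ~optimal iteration count
    for _ in range(n_iter):
        state[marked] *= -1                       # oracle: flip marked amplitude
        state = 2 * state.mean() - state          # diffusion: invert about mean
    return state

state = grover(64, marked=13)
# After ~(pi/4)*sqrt(64) = 6 iterations the marked item dominates
print(np.argmax(state ** 2), round(state[13] ** 2, 3))
```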

Cohen-Tannoudji - Quantum Mechanics

1. Waves and particles

Equations

Planck-Einstein relations
\begin{equation*}
\begin{split}
  E &= h \nu = \hbar \omega \\
  \mathbf{p} &= \hbar \mathbf{k}
\end{split}
\end{equation*}

where $\mathbf{k}$ is the wave vector s.t. $|\mathbf{k}| = \frac{2 \pi}{\lambda}$.

de Broglie
\begin{equation*}
  \lambda = \frac{2 \pi}{| \mathbf{k} |} = \frac{h}{| \mathbf{p} |}
\end{equation*}
Eigenstates

With each eigenvalue $a$ is associated an eigenstate, i.e. an eigenfunction $\psi_a(\mathbf{r})$ s.t. $\psi(\mathbf{r}, t_0) = \psi_a ( \mathbf{r} )$ for all $t_0$, where $t_0$ is the time the measurement is performed; i.e. a measurement on this state will always yield the same $a$.

Schrödinger equation
\begin{equation*}
  i \hbar \frac{\partial}{\partial t}\psi(\mathbf{r}, t) = - \frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}, t) + V(\mathbf{r}, t) \psi (\mathbf{r}, t)
\end{equation*}

where $\nabla^2$ is the Laplacian operator $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$.

  • Free particle

    No force acting on the particle → $V(\mathbf{r}, t) = 0$

    \begin{equation*}
  i \hbar \frac{\partial}{\partial t}\psi(\mathbf{r}, t) = - \frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}, t)
\end{equation*}

    which has the solution

    \begin{equation*}
  \psi(\mathbf{r}, t) = A \exp \big( i [ \mathbf{k} \cdot \mathbf{r} - \omega(\mathbf{k}) t ] \big)
\end{equation*}

    on the condition that $\mathbf{k}$ and $\omega$ satisfy the relation:

    \begin{equation*}
  \omega = \frac{\hbar \mathbf{k}^2}{2m}
\end{equation*}

    According to de Broglie we have:

    \begin{equation*}
  E = \frac{\mathbf{p}^2}{2m}
\end{equation*}

    which is the same as in the classical case (no potential → only kinetic → get the above).

    A plane wave of this type represents a particle whose probability of presence is uniform throughout all space :

    \begin{equation*}
  | \psi(\mathbf{r}, t) |^2 = |A|^2
\end{equation*}
    • Form of the superposition

      Principle of superposition tells us that every lin. comb. of plane waves satisfying the relation for $\omega$ specified above will also be a solution of the free-particle Schrödinger equation.

      This superposition can be written:

      \begin{equation*}
  \psi(\mathbf{r}, t) = \frac{1}{(2 \pi)^{3/2}} \int g(\mathbf{k}) e^{i [\mathbf{k} \cdot \mathbf{r} - \omega(\mathbf{k}) t]} d^3 k
\end{equation*}

      where $g(\mathbf{k})$ represents the coefficients, and we integrate over all possible $\mathbf{k}$ ($d^3 k = dk_x \ dk_y \ dk_z$).

  • Time-independent potential

    In the case of a time-independent potential, $V(\mathbf{r}, t) = V(\mathbf{r})$, we have:

    \begin{equation*}
  i \hbar \frac{\partial}{\partial t} \psi(\mathbf{r}, t) = - \frac{\hbar^2}{2m} \boldsymbol{\Delta} \psi(\mathbf{r}, t) + V(\mathbf{r}) \psi(\mathbf{r}, t)
\end{equation*}

    where $\boldsymbol{\Delta} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$ is the Laplacian operator .

    • Separation of variables. Stationary states
      • Separation of variables

        By making the assumption that $\psi(\mathbf{r}, t)$ can be separated as follows:

        \begin{equation*}
  \psi(\mathbf{r}, t) = \varphi(\mathbf{r}) \chi(t)
\end{equation*}

        We can rearrange the Schrödinger equation as follows:

        \begin{equation*}
  \frac{i \hbar}{\chi(t)} \frac{d \chi(t)}{dt} = \frac{1}{\varphi (\mathbf{r})} \Big[ - \frac{\hbar^2}{2m} \boldsymbol{\Delta} \varphi(\mathbf{r}) \Big] + V(\mathbf{r})
\end{equation*}

        Where LHS only depends on $t$ while RHS only depends on $\mathbf{r}$, which is only true if both sides are equal to a constant.

        Setting LHS to $\hbar \omega$ (just treating $\omega$ as some arbitrary constant at this point), we obtain a solution for LHS:

        \begin{equation*}
  \chi(t) = A e^{-i \omega t}
\end{equation*}

        TODO Figure out why we can set it to $\hbar \omega$ (presumably any constant with dimensions of energy works, and the notation anticipates $E = \hbar \omega$).

        Doing the same for the RHS, we get the final solution for the full wave function:

        \begin{equation*}
  \psi(\mathbf{r}, t) = \varphi(\mathbf{r}) e^{-i \omega t}
\end{equation*}

        where we've let $A = 1$ (which is fine as long as we incorporate the constant $A$ into $\varphi(\mathbf{r})$ ).

        The time and space variables are said to be separated .

      • Stationary States

        The time-independent wave function makes it so that the Schrödinger Equation only involves a single angular frequency $\omega$. Thus, according to the Planck-Einstein relation, a stationary state is a state with a well-defined energy $E = \hbar \omega$ (energy eigenstate).

        We can therefore write the time-independent Schrödinger Equation as:

        \begin{equation*}
  \Bigg[- \frac{\hbar^2}{2m} \boldsymbol{\Delta} + V( \mathbf{r} ) \Bigg] \varphi(\mathbf{r}) = E \varphi(\mathbf{r})
\end{equation*}

        or:

        \begin{equation*}
  H \varphi(\mathbf{r}) = E \varphi (\mathbf{r})
\end{equation*}

        where $H$ is the differential operator :

        \begin{equation*}
  H  = - \frac{\hbar^2}{2m} \boldsymbol{\Delta} + V( \mathbf{r} )
\end{equation*}

        Which is a linear operator.

        Thus $H \varphi(\mathbf{r}) = E \varphi (\mathbf{r})$ tells us that $H$ applied to the "eigenfunction" $\varphi(\mathbf{r})$ (analogous to eigenvector) yields the same function, but multiplied by a constant "eigenvalue" $E$.

        The allowed energies are therefore the eigenvalues of the operator $H$.

        I'm not sure about the remark about the eigenvalues being the only allowed energies..

      • Superposition of stationary states

        We can then distinguish between the various possible values of the energy $E$ (and the corresponding eigenfunctions $\varphi(\mathbf{r})$ ), by labeling them with an index $n$ :

        \begin{equation*}
  H \varphi_n(\mathbf{r}) = E_n \varphi_n(\mathbf{r})
\end{equation*}

        and the stationary states of the particle have as wave functions:

        \begin{equation*}
  \psi_n(\mathbf{r}, t) = \varphi_n(\mathbf{r}) e^{-i E_n t / \hbar}
\end{equation*}

        And since the Schrödinger equation is linear (as $H$ is a linear operator ), we have a series of other solutions of the form:

        \begin{equation*}
  \psi(\mathbf{r}, t) = \sum_n c_n \varphi_n(\mathbf{r})e^{- i E_n t / \hbar}
\end{equation*}
  • Requirements

    A system composed of only one particle → total probability of finding the particle anywhere in space, at time $t$, is equal to $1$ :

    \begin{equation*}
  \int dP(\mathbf{r}, t) = 1
\end{equation*}

    We therefore require that the wave-function $\psi(\mathbf{r}, t)$ must be square-integrable :

    \begin{equation*}
  \int |\psi(\mathbf{r}, t)|^2 d^3 r \quad \text{is finite}
\end{equation*}

    This is NOT true for the simplest solution to the Schrödinger equation:

    \begin{equation*}
  \psi(\mathbf{r}, t) = A \exp \Big( i ( k_0 x - \omega_0 t ) \Big)
\end{equation*}

    Where we have

    \begin{equation*}
  |\psi(\mathbf{r}, t)|^2 = |A|^2
\end{equation*}

    And thus,

    \begin{equation*}
  \int_{-\infty}^\infty |\psi(\mathbf{r}, t)|^2 d^3 r = \infty
\end{equation*}

    Hence it's NOT square-integrable.

    They then say: "Therefore, rigorously, it cannot represent a physical state of the particle. On the other hand, a superposition of plane waves like [this one] can be square-integrable."
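That a superposition of plane waves can be square-integrable is easy to check numerically: build $\psi(x, 0)$ from a Gaussian $g(k)$ (my own sketch; 1D, parameters arbitrary, integrals as Riemann sums):

```python
import numpy as np

# Gaussian coefficient function g(k) centred at k0 (parameters are my choice)
k0, dk_width = 5.0, 1.0
k = np.linspace(k0 - 8, k0 + 8, 2048)
g = np.exp(-((k - k0) ** 2) / (2 * dk_width ** 2))

# psi(x, 0) = (1/sqrt(2 pi)) * integral g(k) e^{ikx} dk, as a Riemann sum
x = np.linspace(-20.0, 20.0, 1024)
dk = k[1] - k[0]
psi = (g * np.exp(1j * np.outer(x, k))).sum(axis=1) * dk / np.sqrt(2 * np.pi)

# The total probability integral is finite, unlike for a single plane wave,
# so this superposition IS square-integrable.
dx = x[1] - x[0]
norm = (np.abs(psi) ** 2).sum() * dx
print(norm)   # by Parseval this equals ∫|g(k)|^2 dk = sqrt(pi) ≈ 1.772
```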

Heisenberg
  • Deduction

    Here we only consider the 1D case, allowing us to write the wave function as a superposition over all possible $k$ :

    \begin{equation*}
  \psi(x, 0) = \frac{1}{\sqrt{2 \pi}} \int g(k) e^{ikx} dk
\end{equation*}

    Suppose that $|g(k)|$ has the following shape:

    modulus_fourier_transform_of_wave_function.png

    Now let $\alpha(k)$ be a function of $k$ s.t. :

    \begin{equation*}
  g(k) = |g(k)| \ e^{i \alpha(k)}
\end{equation*}

    Further, we assume that $\alpha(k)$ varies sufficiently smoothly within the interval $\Big[ k_0 - \frac{\Delta k}{2}, k_0 + \frac{\Delta k}{2} \Big]$. Then, when $\Delta k$ is sufficiently small, we can approx. $\alpha(k)$ using its tangent / linear approximation about the point $k = k_0$ :

    \begin{equation*}
  \alpha(k) \simeq \alpha(k_0) + (k - k_0) \Big[ \frac{d \alpha}{d k} \Big]_{k = k_0}
\end{equation*}

    which enables us to write the wave-function as:

    \begin{equation*}
  \psi(x, 0) \simeq \frac{e^{i [k_0 x + \alpha(k_0)]}}{\sqrt{2 \pi}} \int_{-\infty}^{+\infty} |g(k)| e^{i(k - k_0)(x - x_0)} dk
\end{equation*}

    where

    \begin{equation*}
  x_0 = - \Big[ \frac{d \alpha}{d k} \Big]_{k=k_0}
\end{equation*}

    I believe this note turned out to mean that the integral is in fact from $-\infty$ to $\infty$.

    I presume that when they write $\int_{-\alpha}^{+ \alpha}$ they mean "integrate from $k_0 - \frac{\Delta k}{2}$ to $k_0 + \frac{\Delta k}{2}$", right? Since the "width" $\Delta k$ "contains" most of the integral.

    This gives us a useful way of studying the variations of $|\psi(x, 0)|$ in terms of $x$.

    • If $|x - x_0|$ is large, i.e. $x$ is far from $x_0$, $|g(k)| e^{i(k - k_0)(x - x_0)}$ will oscillate a very large number of times within the interval $\Delta k$. Due to the high frequency the oscillations will cancel each other out, thus $|\psi(x, 0)| \rightarrow 0$
    • If $|x - x_0|$ is small, i.e. $x$ is close to $x_0$, $|g(k)| e^{i(k - k_0)(x - x_0)}$ will barely oscillate, and so we will end up with $|\psi(x, 0)|$ being a maximum.
    • Relating to momentum

      As seen previously, $\psi(x, 0)$ appears as a linear superposition of the momentum eigenfunctions in which the coefficient of $e^{ikx}$ is $g(k)$. We are thus led to interpret $|g(k)|^2$ (to within a constant factor) as the probability of finding $p = \hbar k$ if one measures, at $t=0$, the momentum of a particle whose state is described by $\psi(x, t)$.

      The possible values of $p$, like those of $x$, form a continuous set, and $|g(k)|^2$ is proportional to a probability density : the probability $\overline{dP}(k)$ of obtaining a value between $\hbar k$ and $\hbar (k + dk)$ is, to within a constant factor, $|g(k)|^2 dk$. More precisely, we can rewrite the formula as:

      \begin{equation*}
  \psi(x, 0) = \frac{1}{\sqrt{2 \pi \hbar}} \int \overline{\psi}(p) e^{ipx / \hbar} dp
\end{equation*}

      we know that $\overline{\psi}(p)$ and $\psi(x, 0)$ satisfy the Bessel-Parseval relation:

      \begin{equation*}
 \int_{-\infty}^{\infty} |\psi(x, 0)|^2 dx = \int_{-\infty}^{\infty} |\overline{\psi}(p)|^2 dp
\end{equation*}

      We then have

      \begin{equation*}
  \overline{dP}(p) = \frac{1}{C} | \overline{\psi}(p) |^2 dp
\end{equation*}

      which is the probability that the measurement of the momentum will yield a result included between $p$ and $p + dp$.

      Then, writing the relation

      Honestly, I'm not seeing this.

      This is based on the argument that $\Delta x \cdot \Delta k \ge 1$ for the $\Delta x$ to be significant, which I don't get.

  • Q & A
    • DONE We're assuming a certain "family" of distribution for k; does this affect our deduction?

      This $g(k)$ represents the coefficients for each of the different wave-functions ("different" meaning parametrized with different $k$, and thus different wavelength / frequency), allowing us to write some arbitrary wave-function as a superposition of an infinite number of plane waves.

      Why should it follow the "distribution" shown above? Why can't $g(k)$ have an arbitrary "distribution", e.g. multiple peaks?

      • Answer

        Yeah, this is weird.

        Let's just wait until we learn about how this relates to the Fourier Transform.

    • DONE What's up with the alphas on the boundary of the integrals?
      • Answer

        It's supposed to be $\infty$ not $\alpha$.

    • DONE Why is it the complex conjugate of the wave equation when writing it as a function of the momentum?
      • Answer

        It's NOT the complex conjugate, it's just a notation to say it's a different function, i.e. $\psi \neq \overline{\psi}$.
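The $\Delta x \cdot \Delta k$ trade-off from the deduction above can at least be checked numerically for a Gaussian packet, for which the RMS widths satisfy $\Delta x \cdot \Delta k = 1/2$ (my own sketch; discretised integrals):

```python
import numpy as np

# Gaussian g(k) of width dk_g -> |psi(x,0)| is a Gaussian of width ~ 1/dk_g
dk_g = 0.7
k = np.linspace(-8.0, 8.0, 2048)
g = np.exp(-(k ** 2) / (2 * dk_g ** 2))

x = np.linspace(-12.0, 12.0, 1024)
psi = (g * np.exp(1j * np.outer(x, k))).sum(axis=1) * (k[1] - k[0])

def rms_width(u, density):
    """RMS width of a (possibly unnormalised) probability density on a grid."""
    du = u[1] - u[0]
    density = density / (density.sum() * du)
    mean = (u * density).sum() * du
    return np.sqrt(((u - mean) ** 2 * density).sum() * du)

delta_k = rms_width(k, np.abs(g) ** 2)
delta_x = rms_width(x, np.abs(psi) ** 2)
print(delta_x * delta_k)   # ~0.5: the Gaussian saturates the uncertainty bound
```

Narrower $g(k)$ (smaller `dk_g`) gives a wider $|\psi(x, 0)|^2$, and the product of the widths stays fixed.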

Wave equation for free particle

Separable solution as a function of $p$ and $E$

\begin{equation*}
\Psi(\mathbf{r}, t) = e^{i(px - Et)/\hbar}
\end{equation*}

or

\begin{equation*}
\Psi(\mathbf{r}, t) = e^{i(kx - \omega t)}
\end{equation*}

Complements

H: Stationary States of particle in one-dimensional square potential "well"
  • Setup

    Here we consider the time-independent Schrödinger Equation with a step-wise constant potential .

    Thus, the time-independent Schrödinger Equation becomes:

    \begin{equation*}
\frac{d^2}{dx^2} \varphi(x) + \frac{2m}{\hbar^2} (E - V) \varphi(x) = 0
\end{equation*}
  • Stationary wave-function in different potentials

    We can solve the TISE with constant potential by substituting the exponential ansatz $\varphi(x) = e^{rx}$ :

    \begin{equation*}
\begin{split}
  \frac{d^2}{dx^2} (e^{rx}) + \frac{2m}{\hbar^2} (E - V) e^{rx} &= 0 \\
  \implies r^2 e^{rx} + \frac{2m}{\hbar^2} (E - V) e^{rx} &= 0 \\ 
  \implies r^2 + \frac{2m}{\hbar^2} (E - V) = 0 \\
\end{split}
\end{equation*}

    which can be solved as a regular quadratic:

    \begin{equation*}
  \Bigg(r - \sqrt{\frac{2m}{\hbar^2}(V - E)} \Bigg) \Bigg(r + \sqrt{\frac{2m}{\hbar^2}(V - E)} \Bigg) = 0
\end{equation*}
    \begin{equation*}
  r = \pm \frac{\sqrt{ 2m (V - E)}}{\hbar}
\end{equation*}

    then whether or not we have a complex solution depends on the sign of $V - E$.

    • E > V - energy of particle is GREATER than potential barrier

      We write ($k$ being defined by the following equation)

      \begin{equation*}
  E - V = \frac{\hbar^2 k^2}{2m} \implies r = \pm ik
\end{equation*}

      Then the stationary solution to the TISE with constant potential can be written

      \begin{equation*}
  \varphi(x) = A e^{i k x} + A' e^{-ikx}
\end{equation*}

      where $A$ and $A'$ are complex constants.

    • E < V - energy of particle is LOWER than potential barrier

      Let $\rho$ be defined by the following equation

      \begin{equation*}
  V - E = \frac{\hbar^2 \rho^2}{2m} \implies r = \pm \rho
\end{equation*}

      and the solution to the TISE with constant potential can be written:

      \begin{equation*}
  \varphi(x) = B e^{\rho x} + B' e^{- \rho x}
\end{equation*}

      where $B$ and $B'$ are complex constants

      Note the different order of $V$ and $E$ on the LHS.

      Also notice that, solving the ODE with the exponential ansatz, the exponents are real rather than imaginary in this case.

    • E = V - energy of particle is equivalent to the potential barrier

      In this case we simply end up with the differential equation

      \begin{equation*}
  \frac{\partial^2}{\partial x^2} \varphi(x) = 0
\end{equation*}

      Which we can simply integrate twice to obtain a linear expression for $\varphi(x)$.

  • Stationary wave-function at potential energy discontinuity
    • Computation outline
      1. Solve the TISE with constant potential by substituting the exponential ansatz $e^{rx}$, leaving you with a quadratic.
      2. Let $k$ be such that $E - V = \frac{\hbar^2 k^2}{2m}$, for convenience
      3. Obtain a general solution for both wave-functions depending on the sign of $E - V$ in the different regions.
      4. Require the wave-function and its derivative to be continuous across the boundaries (discontinuities of the potential), i.e.
      \begin{equation*}
\begin{split}
  \varphi_1(x_0) &= \varphi_2(x_0) \\
  \frac{\partial}{\partial x} \varphi_1(x) |_{x = x_0} &= \frac{\partial}{\partial x} \varphi_2(x) |_{x = x_0}
\end{split}
\end{equation*}
      5. Solve for the coefficients of the general solutions obtained.

      qm_potential_step.png

      In some cases, solving for all coefficients of the different wave-functions is not possible due to not having enough constraints.

      In this case it might make sense to set some coefficient to zero, and instead solve for the ratios , not the coefficients themselves. Solving for ratios reduces the order of the system of equations by 1, and setting one of them to zero reduces it further by 1. In the case where we have a simple potential barrier in 1D, using the above methods to reduce the order allows us to solve the equations for the ratios, providing us with a reflection coefficient (which also implies a transmission coefficient ) and thus the probability of reflecting off the barrier.

    • Potential well

      qm_potential_well.png

      We follow the same procedure as described above, but now we have three wave functions, with boundary conditions:

      \begin{equation*}
  \begin{split}
    \varphi_1(x_1) &= \varphi_2(x_1) \\
    \frac{\partial}{\partial x} \varphi_1(x) |_{x=x_1} &= \frac{\partial}{\partial x} \varphi_2(x) |_{x=x_1}
  \end{split}
\end{equation*}

      for the left side of the potential-well at $x_1$, and for the right side at $x_2$:

      \begin{equation*}
  \begin{split}
    \varphi_3(x_2) &= \varphi_2(x_2) \\
    \frac{\partial}{\partial x} \varphi_3(x) |_{x=x_2} &= \frac{\partial}{\partial x} \varphi_2(x) |_{x=x_2}
  \end{split}
\end{equation*}

      i.e. we simply use the exact same method, but now we have an additional boundary to consider.

      The solution ends up looking as:

      \begin{equation*}
\varphi(x) = \begin{cases}
  A e^{i \frac{p x}{\hbar}} + B e^{-i \frac{px}{\hbar}} \quad \text{where } x < 0 \\
  C e^{i \frac{\bar{p} x}{\hbar}} + D e^{-i \frac{\bar{p} x}{\hbar}} \quad \text{where } x \ge 0 
\end{cases}
\end{equation*}

      where we note that if we have a particle coming from the left, we set the coefficient of the left-traveling wave in the region beyond the step to 0:

      \begin{equation*}
D = 0
\end{equation*}

      We can "think" of each of these terms in the wave-function as a superposition of eigenstates. Viewing it that way, it makes a bit more sense I suppose.

    • Infinite potential well

      Same procedure as the regular potential well, but now the wave-functions $\varphi_1$ and $\varphi_3$ (i.e. the wave-functions outside of the potential well) are equal to zero for all $x$ ! Continuity at the walls then requires

      \begin{equation*}
\varphi_2(x_1) = \varphi_2(x_2) = 0
\end{equation*}

      This leads us to the quantization of energies , where we have multiple solutions satisfying the boundary conditions specified above. If the potential well is of "width" $L$, we end up with the solution

      \begin{equation*}
E_n = \hbar \omega_n = \frac{n^2 \hbar^2 \pi^2}{2m L^2}
\end{equation*}

      Which falls out of the fact that the boundary conditions above correspond to the width of the well being an integer multiple of half the wavelength, i.e. $L = n \lambda / 2$.

      Each of these energy levels has its own eigenfunction $\varphi_n$.
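The quantised energies can be verified numerically by diagonalising a finite-difference approximation of $H$ on a grid with $\varphi = 0$ at the walls (my own sketch, in units $\hbar = m = 1$ with $L = 1$):

```python
import numpy as np

hbar = m = 1.0
L = 1.0
N = 500                        # interior grid points (phi = 0 at the walls)
dx = L / (N + 1)

# Tridiagonal finite-difference approximation of -hbar^2/(2m) d^2/dx^2
D2 = (np.diag(-2.0 * np.ones(N))
      + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx ** 2
H = -hbar ** 2 / (2.0 * m) * D2

E = np.linalg.eigvalsh(H)[:3]
E_exact = np.array([1, 4, 9]) * np.pi ** 2 * hbar ** 2 / (2 * m * L ** 2)
print(np.round(E, 4))          # close to n^2 pi^2 / 2 ≈ 4.9348, 19.7392, 44.4132
```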

2. The mathematical tools of quantum mechanics

Notation

  • $\mathcal{F} \subset L^2$ is the set of all square-integrable functions which are everywhere defined, continuous and infinitely differentiable. Together with the inner product $\langle \varphi, \psi \rangle = \int d^3 r \ \varphi^*(\mathbf{r}) \psi(\mathbf{r})$ this defines a Hilbert space
  • $\psi(\mathbf{r}) \in \mathcal{F}$ denotes a wave function
  • $| \ \rangle$ is a ket or ket vector, which is an element or vector of $\mathcal{E}$ space, e.g. $\ket{\psi}$
  • $\mathcal{E}_\mathbf{r}$ is the state space of a particle, and is a subspace of a Hilbert space. It's defined such that we associate each square-integrable function $\psi(\mathbf{r})$ with a ket vector, i.e. $\psi(\mathbf{r}) \in \mathcal{F} \iff \ket{\psi} \in \mathcal{E}_\mathbf{r}$
  • $\mathcal{E}_x$ is the state space of a (spinless) particle in only one dimension
  • $( \ket{\varphi}, \ket{\psi} )$ is associated with a complex number, which is the scalar product, satisfying the properties of an inner product.
  • $\mathcal{E}^*$ denotes the set of linear functionals defined on the kets $\ket{\psi} \in \mathcal{E}$, which constitutes a vector space, called the dual space of $\mathcal{E}$.
  • $\bra{\chi}$ is a bra or bra vector of the space $\mathcal{E}^*$, which represents a linear functional $\chi$
  • $\bra{\chi}\ket{\psi} = \chi(\ket{\psi})$, i.e. the linear function $\bra{\chi} \in \mathcal{E}^*$ acting on the ket $\ket{\psi} \in \mathcal{E}$
  • $\{ \ket{u_i} \}$ denotes a discrete basis
  • $\{ \ket{w_\alpha} \}$ denotes a continuous basis
  • $P_{\{ u_i \}}$ ($P_{\{ w_\alpha \}}$) denote a projection operator for a discrete (continuous) basis
  • $g_n$ denotes the degeneracy of the eigenvalue $a_n$
  • $\ket{u_n^i}$ denotes the i-th (degenerate) eigenvector corresponding to the eigenvalue $a_n$, if this eigenvalue is non-degenerate then the $i$ in the super-script can be dropped
  • $\mathcal{E}_n$ denotes the eigensubspace of the eigenvalue $a_n$ of $A$

Dirac notation

Overview

The quantum state of any physical system is characterized by a state vector, belonging to a space $\mathcal{E}$ which is the state space of the system.

"Ket" vectors and "bra" vectors
  • Dual space

    A linear functional $\chi$ is a linear operation which associates a complex number with every ket $\ket{\psi}$ :

    \begin{equation*}
  \ket{\psi} \in \mathcal{E} \overset{\chi}{\rightarrow} \chi(\ket{\psi}) \in \mathbb{C}
\end{equation*}

    Linear functionals and linear operators must NOT be confused. In both cases we're dealing with linear operations, but the former associates a complex number with each ket, while the latter associates another ket.

  • Generalized kets

    "Generalized kets" are functions which are not necessarily square-integrable, but whose scalar product with every function of $\mathcal{F}$ exists.

    These cannot, strictly speaking, represent physical states. They are merely intermediaries.

    The counter-examples to the "normal" definition of a ket are in the form of limiting cases, where the result of applying the bra to the ket is well-defined, but it's not square-integrable (it diverges when taking the limit) and thus not in $\mathcal{E}$.

    See the book at p.115 for more information about this.

    But in short, in general, the dual space $\mathcal{E}^*$ and the state space $\mathcal{E}$ are NOT isomorphic, except if $\mathcal{E}$ is finite-dimensional. I.e. the following is true:

    \begin{equation*}
  \ket{\psi} \in \mathcal{E} \implies \bra{\varphi} \in \mathcal{E}^*
\end{equation*}

    But the other way around is NOT true.

Linear operators
  • Overview

    Consider the operations defined by:

    \begin{equation*}
  \ket{\psi}\bra{\varphi}
\end{equation*}

    Choose an arbitrary ket $\ket{\chi}$ and consider:

    \begin{equation*}
  \ket{\psi} \bra{\varphi}\ket{\chi}
\end{equation*}

    We already know that $\bra{\varphi}\ket{\chi} \in \mathbb{C}$ ; consequently, the expression above is a ket, obtained by multiplying $\ket{\psi}$ by the scalar $\bra{\varphi}\ket{\chi}$. Therefore $\ket{\psi}\bra{\varphi}$ applied to some arbitrary ket $\ket{\chi}$ gives another ket, i.e. it's an operator.

    If $\lambda \in \mathbb{C}$ :

    \begin{equation*}
 \begin{split}
  \ket{\psi} \lambda &= \lambda \ket{\psi} \\
  \bra{\psi} \lambda &= \lambda \bra{\psi} \\
  A \lambda \ket{\psi} &= \lambda A \ket{\psi} \quad \text{where A is a linear operator} \\
  \bra{\varphi} \lambda \ket{\psi} &= \lambda \bra{\varphi}\ket{\psi} = \bra{\varphi}\ket{\psi}\lambda
 \end{split}
\end{equation*}
  • Projections

    Let $\ket{\psi}$ be a ket which is normalized to one:

    \begin{equation*}
\bra{\psi}\ket{\psi} = 1
\end{equation*}

    Consider the operator $P_\psi$ defined by:

    \begin{equation*}
P_\psi = \ket{\psi} \bra{\psi}
\end{equation*}

    and apply it to an arbitrary ket $\ket{\varphi}$ :

    \begin{equation*}
P_\psi \ket{\varphi} = \ket{\psi} \bra{\psi}\ket{\varphi}
\end{equation*}

    Which is simply the projection of $\ket{\varphi}$ onto $\ket{\psi}$ : first take the inner product (no need to normalize wrt. $\ket{\psi}$ since $\bra{\psi}\ket{\psi} = 1$), then multiply by $\ket{\psi}$ to get the component.

    We can also project onto a subspace spanned by an orthonormal set $\{ \ket{\psi_i} \}$ by taking the sum over the $P_{\psi_i}$, and using this as an operator (applying each of them to the target ket as above). Thus we get the linear superposition of the components.
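These properties are easy to verify numerically (my own sketch, in a 3-dimensional state space):

```python
import numpy as np

# Normalised ket |psi> and its projector P_psi = |psi><psi|
psi = np.array([1.0, 2.0, 2.0]) / 3.0       # <psi|psi> = 1
P = np.outer(psi, psi.conj())

# Projecting an arbitrary ket |phi> gives its component along |psi>
phi = np.array([1.0, 0.0, 1.0])
print(np.round(P @ phi, 3))                 # = |psi> <psi|phi>

# A projector is idempotent and Hermitian
assert np.allclose(P @ P, P)
assert np.allclose(P, P.conj().T)

# Summing projectors over an orthonormal basis gives the identity (closure)
basis = np.eye(3)
assert np.allclose(sum(np.outer(e, e) for e in basis), np.eye(3))
```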

Hermitian conjugation
  • Linear operator on a bra
    \begin{equation*}
  \big(\bra{\varphi} A \big) \ket{\psi} = \bra{\varphi} \big( A \ket{\psi} \big) = \bra{\varphi} A \ket{\psi}
\end{equation*}
  • Adjoint operator

    "Correspondence" between kets and bras.

    With every linear operator $A$ we associate another linear operator $A^\dagger$ called the adjoint operator (or Hermitian conjugate ) of $A$.

    \begin{equation*}
\ket{\varphi'} = A \ket{\varphi} \iff \bra{\varphi'} = \bra{\varphi} A^\dagger
\end{equation*}

Representations in state space

  • Choosing a representation amounts to choosing an orthonormal basis, either discrete or continuous, in the state space $\mathcal{E}$

For a discrete basis $\{ \ket{u_i} \}$

\begin{equation*}
P_{\{ u_i \}} = \sum_{i} \ket{u_i} \bra{u_i} = \mathbf{1}
\end{equation*}

For a continuous basis $\{ \ket{w_\alpha} \}$

\begin{equation*}
P_{\{ w_\alpha \}} = \int d \alpha \ \ket{w_\alpha} \bra{w_\alpha} = \mathbf{1}
\end{equation*}
Representation of operators

Given a linear operator $A$, we can, in a $\{ \ket{u_i} \}$ or $\{ \ket{w_\alpha} \}$ basis, associate with it a series of numbers defined by

\begin{equation*}
A_{ij} = \bra{u_i} A \ket{u_j}
\end{equation*}

or for a continuous basis

\begin{equation*}
A(\alpha, \alpha') = \bra{w_\alpha} A \ket{w_{\alpha'}}
\end{equation*}

We can then use the closure relation to compute the matrix which represents the operator $AB$ in the $\{ \ket{u_i} \}$ basis:

\begin{equation*}
\begin{split}
  \bra{u_i} AB \ket{u_j} &= \bra{u_i} A \ \mathbf{1} \ B \ket{u_j} \\
  &= \bra{u_i} A P_{\{ u_i \}} B \ket{u_j} \\
  &= \sum_{k} \bra{u_i} A \ket{u_k} \bra{u_k} B \ket{u_j}
\end{split}
\end{equation*}

And equivalently for the continuous basis $\{ \ket{w_\alpha} \}$

Matrix representation of a ket

Problem: we know the components of $\ket{\psi}$ and the matrix elements of the operator $A$ in some representation. How can we compute the components of $\ket{\psi'} = A \ket{\psi}$?

In the $\{ \ket{u_i} \}$ basis, the "coordinates" $c_i'$ of $\ket{\psi'}$ are given by:

\begin{equation*}
c_i' = \bra{u_i}\ket{\psi'} = \bra{u_i} A \ket{\psi}
\end{equation*}

Inserting the closure relation between $A$ and $\ket{\psi}$, we obtain:

\begin{equation*}
\begin{split}
  c_i' &= \bra{u_i} A \ \mathbf{1} \ket{\psi} = \bra{u_i} A P_{\{ u_j \}} \ket{\psi} \\
  &= \sum_j \bra{u_i} A \ket{u_j} \bra{u_j}\ket{\psi} \\
  &= \sum_{j} A_{ij} c_j
\end{split}
\end{equation*}

Or for a continuous basis $\{ \ket{w_\alpha} \}$ we have:

\begin{equation*}
\begin{split}
  c'(\alpha) &= \bra{w_\alpha}\ket{\psi'} = \bra{w_\alpha} A \ket{\psi} \\
  &= \bra{w_\alpha} A \ \mathbf{1} \ket{\psi} \\
  &= \int d \alpha' \ \bra{w_{\alpha}} A \ket{w_{\alpha'}} \bra{w_{\alpha'}}\ket{\psi} \\
  &= \int d \alpha' \ A(\alpha, \alpha') \ c(\alpha')
\end{split}
\end{equation*}
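In the discrete case the computation above is ordinary matrix-vector multiplication; a small numpy sketch (matrix elements and components chosen arbitrarily):

```python
import numpy as np

# Matrix elements A_ij = <u_i|A|u_j> and components c_j = <u_j|psi>
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 3.0]])
c = np.array([1.0, 1.0, 0.0])

# c'_i = sum_j A_ij c_j  (the closure relation inserted between A and |psi>)
c_prime = A @ c
print(c_prime)                       # components of |psi'> = A|psi>

# Likewise (AB)_ij = sum_k A_ik B_kj is ordinary matrix multiplication
B = 2.0 * np.eye(3)
assert np.allclose(A @ B, 2.0 * A)
```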

Eigenvalue equations for observables

$\ket{\psi}$ is said to be an eigenvector (or eigenket ) of the linear operator $A$ if :

\begin{equation*}
A \ket{\psi} = \lambda \ket{\psi}
\end{equation*}

where $\lambda \in \mathbb{C}$. We call the equation above the eigenvalue equation of the linear operator $A$.

We say the eigenvalue $\lambda$ is nondegenerate if and only if the corresponding eigenvector is unique within a constant factor, i.e. when all associated eigenkets are collinear.

If there exist at least two linearly independent kets which are eigenvectors of $A$ with the same eigenvalue, this eigenvalue is said to be degenerate.

To be completely rigorous, one should solve the eigenvalue equation in the space $\mathcal{E}$, i.e. we ought to only consider those eigenvectors $\ket{\psi}$ which have a finite norm.

Unfortunately, we will be obliged to use operators whose eigenkets do not satisfy this condition. Therefore, we shall grant that vectors which are solutions of the eigenvalue equation, but which have infinite norm, can still be treated as (generalized) eigenkets.

Two eigenvectors of a Hermitian operator corresponding to two different eigenvalues are orthogonal.

Consider two eigenvectors $\ket{\psi}$ and $\ket{\varphi}$ of the Hermitian operator $A$.

\begin{equation*}
\begin{split}
  A \ket{\psi} &amp;= \lambda \ket{\psi} \\
  A \ket{\varphi} &amp;= \mu \ket{\varphi}
\end{split}
\end{equation*}

Since $A$ is Hermitian, the second equation can also be written in terms of the corresponding bra:

\begin{equation*}
\bra{\varphi} A = \mu \bra{\varphi}
\end{equation*}

Multiplying the first eigenvalue equation on the left by $\bra{\varphi}$, and the bra equation above on the right by $\ket{\psi}$:

\begin{equation*}
\begin{split}
  \bra{\varphi} A \ket{\psi} &amp;= \lambda \bra{\varphi}\ket{\psi} \\
  \bra{\varphi} A \ket{\psi} &amp;= \mu \bra{\varphi}\ket{\psi}
\end{split}
\end{equation*}

Which implies

\begin{equation*}
(\lambda - \mu) \bra{\varphi}\ket{\psi} = 0
\end{equation*}

Hence, if $\lambda \ne \mu$ we have orthogonality, i.e. $\bra{\varphi}\ket{\psi} = 0$.
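
This is easy to check numerically for a finite-dimensional Hermitian matrix (the matrix below is an arbitrary example):

```python
import numpy as np

# Arbitrary Hermitian matrix: A = A^dagger
A = np.array([[2, 1 - 1j, 0],
              [1 + 1j, 3, 2j],
              [0, -2j, 1]])
assert np.allclose(A, A.conj().T)

# eigh returns real eigenvalues and an orthonormal set of eigenvectors
eigvals, eigvecs = np.linalg.eigh(A)
assert np.allclose(eigvals.imag, 0.0)

# Eigenvectors belonging to different eigenvalues are orthogonal:
gram = eigvecs.conj().T @ eigvecs
assert np.allclose(gram, np.eye(3))
```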

The Hermitian operator $A$ is an observable if the orthonormal system of eigenvectors of $A$, described by

\begin{equation*}
\bra{\psi_n^i}\ket{\psi_{n'}^{i'}} = \delta_{nn'} \delta_{ii'}
\end{equation*}

forms a basis in the state space. That is,

\begin{equation*}
\sum_{n = 1}^{\infty} \sum_{i = 1}^{\infty} \ket{\psi_n^i} \bra{\psi_n^i} = \mathbf{1}
\end{equation*}

Sets of commuting observables

Consider an observable $A$ and a basis of $\mathcal{E}$ composed of eigenvectors $\ket{u_n^i}$ of $A$.

If none of the eigenvalues of $A$ is degenerate, then the various basis vectors of $\mathcal{E}$ can be labelled by the eigenvalue $a_n$, and all the eigensubspaces $\mathcal{E}_n$ are then one-dimensional. That is, there exists a unique basis of $\mathcal{E}$ formed by the eigenvectors of $A$. We then say that the observable $A$ constitutes, by itself, a C.S.C.O.

If, on the other hand, one or several eigenvalues of $A$ are degenerate, then the basis of eigenvectors of $A$ is not unique. We then choose another observable $B$ which commutes with $A$, and construct an orthonormal basis of eigenvectors common to $A$ and $B$. By definition, $A$ and $B$ form a C.S.C.O. if this basis is unique (to within a phase factor for each of the basis vectors), that is, if, to each of the possible pairs of eigenvalues $\{ a_n, b_p \}$, there corresponds only one basis vector.

If we still don't have a C.S.C.O., we can introduce another Hermitian operator $C$ which commutes with $A$ and $B$, and then try to construct unique triples, and so on. This can be performed an arbitrary number of times in an attempt to obtain a C.S.C.O.

A set of observables $A, B, C, \dots$ is called a complete set of commuting observables if

  1. All the observables $A, B, C, \dots$ commute by pairs
  2. Specifying the eigenvalues of all the operators $A, B, C, \dots$ determines a unique (to within a multiplicative factor) common eigenvector

If $\{A, B, C\}$ is a C.S.C.O., the specification of the eigenvalues $a_n, b_p, c_r, \dots$ determines a ket of the corresponding basis (to within a constant factor), which we sometimes denote by $\ket{a_n, b_p, c_r, \dots}$
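
A minimal finite-dimensional sketch (the matrices are arbitrary examples): $A$ alone has a degenerate eigenvalue, but together with a commuting $B$ the pair of eigenvalues labels a joint eigenbasis uniquely, so $\{A, B\}$ is a C.S.C.O.:

```python
import numpy as np

# A has a degenerate eigenvalue (1 appears twice), so A alone is not a C.S.C.O.
A = np.diag([1.0, 1.0, 2.0])
# B commutes with A and acts nontrivially inside the degenerate eigensubspace
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])
assert np.allclose(A @ B, B @ A)      # [A, B] = 0

# B happens to be nondegenerate here, so its eigenbasis is the joint eigenbasis
b_vals, vecs = np.linalg.eigh(B)
pairs = []
for i in range(3):
    v = vecs[:, i]
    a = v.conj() @ A @ v              # eigenvalue of A on this joint basis vector
    pairs.append((round(a.real), round(b_vals[i])))

# Each basis vector carries a distinct label pair (a_n, b_p): {A, B} is a C.S.C.O.
assert len(set(pairs)) == 3
```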

Principles of Quantum Mechanics - Dirac

Notation

  • $\ket{\xi'}$ denotes an eigenket belonging to the eigenvalue $\xi'$ of the dynamical variable or a real linear operator $\xi$
  • If $\xi'$ is an eigenvalue with multiple corresponding eigenkets, then $\ket{\xi' i}$ denotes the $i$-th corresponding eigenket

Definitions

Words

conjugate complex
refers to the complex conjugate of a number
conjugate imaginary
the bra $\bra{\varphi}$ corresponding to the ket $\ket{\varphi}$
commutative operators
$\alpha \beta \ket{\varphi} = \beta \alpha \ket{\varphi}$

Linear Operators

Notation

  • $\Big(\bra{B} \alpha \Big) \ket{A} = \bra{B} \Big( \alpha \ket{A} \Big)$ defines applying an operator to a bra

Theorems

Orthogonality theorem

Two eigenvectors of a real dynamical variable belonging to different eigenvalues are orthogonal .

\begin{equation*}
\xi \ket{\varphi} = \lambda_1 \ket{\varphi} \quad \land \quad \xi \ket{\psi} = \lambda_2 \ket{\psi} \quad \land \quad \lambda_1 \ne \lambda_2 \implies \bra{\varphi}\ket{\psi} = 0
\end{equation*}
Existence of eigenvectors / eigenvalues
  • Simple case

    Assume that the real linear operator $\xi$ satisfies the algebraic equation:

    \begin{equation*}
\phi(\xi) = \xi^n + a_1 \xi^{n-1} + a_2 \xi^{n - 2} + \dots + a_n = 0
\end{equation*}

    which means that the linear operator $\phi(\xi)$ produces the result zero when applied to any ket vector or to any bra vector.

    Further, let the equation above be the simplest algebraic equation that $\xi$ satisfies. Then

    1. The number of eigenvalues of $\xi$ is $n$
    2. There are so many eigenkets of $\xi$ that any ket can be expressed as a sum of such eigenkets.
    • Proof of 2)

      Let $\chi_r(\xi)$ be s.t.

      \begin{equation*}
\phi(\xi) = (\xi - c_r) \chi_r(\xi)
\end{equation*}

      then

      \begin{equation*}
\sum_r \frac{\chi_r (\xi)}{\chi_r (c_r)} - 1 = 0
\end{equation*}

      where the $-1$ stands outside the sum. To see this, substitute $\xi = c_s$ into the left-hand side. Each term of the sum then takes the form:

      \begin{equation*}
\frac{(c_1 - c_s)}{(c_1 - c_r)} \frac{(c_2 - c_s)}{(c_2 - c_r)} \dots \frac{(c_{r-1} - c_s)}{(c_{r-1} - c_r)} \frac{(c_{r+1} - c_s)}{(c_{r+1} - c_r)} \dots \frac{(c_n - c_s)}{(c_n - c_r)}
\end{equation*}

      For $r \ne s$ this product contains the factor $(c_s - c_s) = 0$, so every term except $r = s$ vanishes, while the $r = s$ term equals $1$. Hence the left-hand side vanishes at each of the $n$ distinct values $c_s$; being a polynomial in $\xi$ of degree $n - 1$, it must then vanish identically.

Observables

Assumptions

If the dynamical system is in an eigenstate of a real dynamical variable $\xi$, belonging to the eigenvalue $\xi'$, then a measurement of $\xi$ will certainly give as result the number $\xi'$. Conversely, if the system is in a state such that a measurement of a real dynamical variable $\xi$ is certain to give one particular result (instead of giving one or other of several possible results according to a probability law, as in the general case), then the state is an eigenstate of $\xi$ and the result of the measurement is the eigenvalue of $\xi$ to which this eigenstate belongs.

I.e. we assume eigenstates to be the case when a measurement of a real dynamical variable $\xi$ is certain to give one particular result.

Equations

Q & A

p.34 Eqn. 41

See here. Why do the terms vanish? Answer: with the $-1$ placed outside the sum, each term with $r \ne s$ contains the factor $(c_s - c_s) = 0$ and so vanishes; only the $r = s$ term survives, and it equals $1$.

Quantum Theory for Mathematicians

Notation

2. A First Approach to Classical Mechanics

2.5 Poisson Brackets and Hamiltonian Mechanics

Let $f$ and $g$ be two smooth functions on $\mathbb{R}^{2n}$, where an element of $\mathbb{R}^{2n}$ is thought of as a pair $(\mathbf{x}, \mathbf{p})$, with

  • $\mathbf{x} \in \mathbb{R}^n$ representing the position of a particle
  • $\mathbf{p} \in \mathbb{R}^n$ representing the momentum of a particle

Then the Poisson bracket of $f$ and $g$, denoted $\pb{f}{g}$ is the function on $\mathbb{R}^{2n}$ given by

\begin{equation*}
\pb{f}{g} (\mathbf{x}, \mathbf{p}) = \sum_{j=1}^{n} \bigg( \frac{\partial f}{\partial x_j} \frac{\partial g}{\partial p_j} - \frac{\partial f}{\partial p_j} \frac{\partial g}{\partial x_j} \bigg)
\end{equation*}

For all smooth functions $f$, $g$ and $h$ on $\mathbb{R}^{2n}$ we have the following:

  1. $\pb{f}{g + ch} = \pb{f}{g} + c \pb{f}{h}$ for all $c \in \mathbb{R}$
  2. $\pb{g}{f} = - \pb{f}{g}$
  3. $\pb{f}{gh} = \pb{f}{g} h + g \pb{f}{h}$
  4. Jacobi identity:

    \begin{equation*}
  \pb{f}{\pb{g}{h}} + \pb{h}{\pb{f}{g}} + \pb{g}{\pb{h}{f}} = 0
\end{equation*}

The position and momentum functions satisfy the following Poisson bracket relations:

\begin{equation*}
\begin{split}
  \pb{x_j}{x_k} &amp;= 0 \\
  \pb{p_j}{p_k} &amp;= 0 \\
  \pb{x_j}{p_k} &amp;= \delta_{jk}
\end{split}
\end{equation*}
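
These canonical relations, together with antisymmetry and the Jacobi identity, can be verified symbolically; a minimal sympy sketch for $n = 2$ (the test polynomials are arbitrary):

```python
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')
X, P = [x1, x2], [p1, p2]

def poisson(f, g):
    """Poisson bracket {f, g} on R^{2n} with n = 2."""
    return sum(sp.diff(f, X[j]) * sp.diff(g, P[j])
               - sp.diff(f, P[j]) * sp.diff(g, X[j]) for j in range(2))

# Canonical relations
assert poisson(x1, x2) == 0
assert poisson(p1, p2) == 0
assert sp.simplify(poisson(x1, p1)) == 1
assert poisson(x1, p2) == 0

# Antisymmetry and the Jacobi identity for sample polynomials
f, g, h = x1 * p2, x2**2 + p1, p1 * p2 * x1
assert sp.expand(poisson(f, g) + poisson(g, f)) == 0
assert sp.expand(poisson(f, poisson(g, h))
                 + poisson(h, poisson(f, g))
                 + poisson(g, poisson(h, f))) == 0
```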

If a particle in $\mathbb{R}^n$ has the usual sort of energy function (kinetic energy plus potential energy), we have

\begin{equation*}
H(\mathbf{x}, \mathbf{p}) = \frac{1}{2m} \sum_{j=1}^{n} p_j^2 + V(\mathbf{x})
\end{equation*}

With this Hamiltonian, and with $p_j = m \dot{x}_j$ as usual, we can write Newton's laws as:

\begin{equation*}
\begin{split}
  \frac{d x_j}{dt} &amp;= \frac{\partial H}{\partial p_j} \\
  \frac{d p_j}{dt} &amp;= - \frac{\partial H}{\partial x_j}
\end{split}
\end{equation*}

We refer to these equations as Hamilton's equations.

If $\big( \mathbf{x}(t), \mathbf{p}(t) \big)$ is a solution of Hamilton's equations, then for any function $f$ on $\mathbb{R}^{2n}$, we have

\begin{equation*}
\frac{d}{dt} f \big( \mathbf{x}(t), \mathbf{p}(t) \big) = \pb{f}{H} \big( \mathbf{x}(t), \mathbf{p}(t) \big)
\end{equation*}

Call a smooth function $f$ on $\mathbb{R}^{2n}$ a conserved quantity if $f \big( \mathbf{x}(t), \mathbf{p}(t) \big)$ is independent of $t$ for each solution $\big( \mathbf{x}(t), \mathbf{p}(t) \big)$ of Hamilton's equations.

Then $f$ is a conserved quantity if and only if

\begin{equation*}
\pb{f}{H} = 0
\end{equation*}

In particular, the Hamiltonian $H$ is a conserved quantity.
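
As a sketch (not from the text), we can verify the criterion $\pb{f}{H} = 0$ symbolically: in $\mathbb{R}^2$ with an arbitrary central potential, the angular momentum $L = x p_y - y p_x$ is a conserved quantity, and so is $H$ itself:

```python
import sympy as sp

x, y, px, py, m = sp.symbols('x y p_x p_y m')
V = sp.Function('V')                     # arbitrary central potential V(x^2 + y^2)

def poisson(f, g, coords=((x, px), (y, py))):
    """Poisson bracket {f, g} for two degrees of freedom."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in coords)

H = (px**2 + py**2) / (2 * m) + V(x**2 + y**2)
L = x * py - y * px                      # angular momentum about the origin

# {L, H} = 0: angular momentum is conserved for any central potential
assert sp.simplify(poisson(L, H)) == 0
# {H, H} = 0: the Hamiltonian is always conserved
assert sp.simplify(poisson(H, H)) == 0
```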

Solving Hamilton's equations on $\mathbb{R}^{2n}$ gives rise to a flow on $\mathbb{R}^{2n}$, that is, a family $\Phi_t$ of diffeomorphisms of $\mathbb{R}^{2n}$, where $\Phi_t(\mathbf{x}, \mathbf{p})$ is equal to the solution at time $t$ of Hamilton's equations with initial conditions $(\mathbf{x}, \mathbf{p})$.

Since it is possible (depending on the choice of potential function $V$) that a particle can escape to infinity in finite time, the maps $\Phi_t$ are not necessarily defined on all of $\mathbb{R}^{2n}$, but only on some subset thereof.

If $\Phi_t$ is defined on all of $\mathbb{R}^{2n}$ we say it's complete.

The flow associated with Hamilton's equations, for an arbitrary Hamiltonian function $H$, preserves the $(2n)$-dimensional volume measure

\begin{equation*}
dx_1 dx_2 \dots dx_n dp_1 dp_2 \dots dp_n
\end{equation*}

What this means, more precisely, is that if a measurable set $E$ is contained in the domain of $\Phi_t$ for some $t \in \mathbb{R}$, then the volume of $\Phi_t(E)$ is equal to the volume of $E$.

3. A First Approach to Quantum Mechanics

3.2 A Few Words About Operators and Their Adjoints

  • Linear operator $A: \mathbf{H} \to \mathbf{H}$ is bounded if

    \begin{equation*}
\exists C > 0 : \norm{A \psi} \le C \norm{\psi}, \qquad \forall \psi \in \mathbf{H}
\end{equation*}

For any bounded linear operator $A: \mathbf{H} \to \mathbf{H}$ there is a unique bounded operator $A^*$, called the adjoint of $A$, such that

\begin{equation*}
\left\langle \phi, A \psi \right\rangle = \left\langle A^* \phi, \psi \right\rangle, \quad \forall \phi, \psi \in \mathbf{H}
\end{equation*}

The existence of $A^*$ follows from Riesz Theorem.

We say $A$ is self-adjoint if $A = A^*$.

Further, if $A$ is a linear operator defined on all of $\mathbf{H}$ and having the property that

\begin{equation*}
\left\langle \phi, A \psi \right\rangle = \left\langle A \phi, \psi \right\rangle \quad \forall \phi, \psi \in \mathbf{H}
\end{equation*}

then $A$ is automatically bounded.

This means that an unbounded operator cannot be defined on the entire $\mathbf{H}$!

An unbounded operator $A$ on $\mathbf{H}$ is a linear map from a dense subspace $\text{Dom}(A) \subset \mathbf{H}$ into $\mathbf{H}$.

Calling $A$ "unbounded" really means "not necessarily bounded": nothing in the definition prevents us from having $\text{Dom}(A) = \mathbf{H}$ with $A$ bounded.

For an unbounded operator $A$ on $\mathbf{H}$, the adjoint $A^*$ of $A$ is defined as follows:

A vector $\phi \in \mathbf{H}$ belongs to the domain $\text{Dom}(A^*)$ of $A^*$ if the linear functional

\begin{equation*}
\left\langle \phi, A \cdot \right\rangle
\end{equation*}

defined on $\text{Dom}(A)$, is bounded.

For $\phi \in \text{Dom}(A^*)$, let $A^* \phi$ be the unique vector $\chi$ such that

\begin{equation*}
\left\langle \chi, \psi \right\rangle = \left\langle \phi, A \psi \right\rangle, \quad \forall \psi \in \text{Dom}(A)
\end{equation*}

Since $\left\langle \phi, A \cdot \right\rangle$ is bounded and $\text{Dom}(A)$ is, by definition of an unbounded operator, dense, the BLT theorem tells us that $\left\langle \phi, A \cdot \right\rangle$ has a unique bounded extension to all of $\mathbf{H}$.

Further, Riesz theorem then guarantees the existence and uniqueness of $\chi$, the corresponding vector such that

\begin{equation*}
\left\langle \chi, \psi \right\rangle = \left\langle \phi, A \psi \right\rangle, \quad \forall \psi \in \text{Dom}(A)
\end{equation*}

Thus, the adjoint of an unbounded operator $A$ is a linear operator defined on $\text{Dom}(A^*)$.

An unbounded operator $A$ on $\mathbf{H}$ is symmetric if

\begin{equation*}
\left\langle \phi, A \psi \right\rangle = \left\langle A \phi, \psi \right\rangle, \quad \forall \phi, \psi \in \text{Dom}(A)
\end{equation*}

The operator $A$ is self-adjoint if

\begin{equation*}
\text{Dom}(A^*) = \text{Dom}(A) \quad \text{and} \quad A^* \phi = A \phi, \quad \forall \phi \in \text{Dom}(A)
\end{equation*}

Finally, $A$ is essentially self-adjoint if the closure in $\mathbf{H} \times \mathbf{H}$ of the graph of $A$ is the graph of a self-adjoint operator.

3.6 Axioms of Quantum Mechanics: Operators and Measurements

The state of the system is represented by a unit vector $\psi$ in an appropriate Hilbert space $\mathbf{H}$.

If $\psi_1$ and $\psi_2$ are two unit vectors in $\mathbf{H}$ with $\psi_2 = c \psi_1$ for some constant $c \in \mathbb{C}$, then $\psi_1$ and $\psi_2$ represent the same physical state.

There is a more general notion of a "mixed state", which we will consider later.

To each real-valued function $f$ on the classical phase space there is associated a self-adjoint operator $\hat{f}$ on the quantum Hilbert space.

"Quantum Hilbert space" simply means "the Hilbert space associated with a given quantum system".

If a quantum system is in a state described by a unit vector $\psi \in \mathbf{H}$, the probability distribution for the measurement of some observable $f$ satisfies

\begin{equation*}
\mathbb{E}[f^m] = \left\langle \psi, \big( \hat{f} \big)^m \psi \right\rangle
\end{equation*}

In particular, the expectation value for a measurement of $f$ is given by

\begin{equation*}
\left\langle \psi, \hat{f} \psi \right\rangle
\end{equation*}

Suppose a quantum system is initially in a state $\psi$ and that a measurement of an observable $f$ is performed.

If the result of the measurement is the number $\lambda \in \mathbb{R}$, then immediately after the measurement, the system will be in a state $\psi'$ that satisfies

\begin{equation*}
\hat{f} \psi' = \lambda \psi'
\end{equation*}

The passage from $\psi$ to $\psi'$ is called the collapse of the wave function. Here $\hat{f}$ is the self-adjoint operator associated with $f$ by Axiom 2.

The time-evolution of the wave function $\psi$ in a quantum system is given by the Schrödinger equation,

\begin{equation*}
i \hbar \frac{d \psi}{d t} = \hat{H} \psi
\end{equation*}

Here $\hat{H}$ is the operator corresponding to the classical Hamiltonian $H$ by means of Axiom 2.

3.8 The Heisenberg Picture

In the Heisenberg picture, each self-adjoint operator $A$ evolves in time according to the operator-valued differential equation

\begin{equation*}
\frac{d A(t)}{dt} = \frac{1}{i \hbar} \comm{A(t)}{\hat{H}}
\end{equation*}

where $\hat{H}$ is the Hamiltonian operator of the system and $\comm{\cdot}{\cdot}$ is the commutator, given by

\begin{equation*}
\comm{A}{B} = AB - BA
\end{equation*}
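
As a numerical sketch (with arbitrary $2 \times 2$ matrices), we can check that $A(t) = e^{i \hat{H} t / \hbar} A \, e^{-i \hat{H} t / \hbar}$ satisfies the Heisenberg equation, comparing a finite-difference derivative with the commutator:

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, 2.0]])     # arbitrary Hamiltonian
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])    # arbitrary observable at t = 0

lam, U = np.linalg.eigh(H)

def A(t):
    """Heisenberg-picture operator A(t) = e^{iHt/hbar} A0 e^{-iHt/hbar}."""
    Ut = U @ np.diag(np.exp(-1j * lam * t / hbar)) @ U.conj().T
    return Ut.conj().T @ A0 @ Ut

# Check dA/dt = (1/(i hbar)) [A(t), H] via a central finite difference at t = 0.3
t, eps = 0.3, 1e-6
lhs = (A(t + eps) - A(t - eps)) / (2 * eps)
rhs = (A(t) @ H - H @ A(t)) / (1j * hbar)
assert np.allclose(lhs, rhs, atol=1e-6)
```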

4. The Free Schrödinger Equation

Notation

  • $\omega(k) = \frac{k^2 \hbar}{2m}$

4.2 Solution as a convolution

"Free" means that there is no force acting on the particle, so that we may take the potential $V$ to be identically zero.

Thus, the free Schrödinger equation is

\begin{equation*}
\frac{\partial \psi}{\partial t} = \frac{i\hbar}{2m} \frac{\partial^2 \psi}{\partial x^2}
\end{equation*}

subject to an initial condition of the form

\begin{equation*}
\psi(x, 0) = \psi_0(x)
\end{equation*}

Suppose that $\psi_0$ is a "nice" function, for example, a Schwartz function.

Let $\hat{\psi}_0$ denote the Fourier transform of $\psi_0$ and define $\psi(x, t)$ by

\begin{equation*}
\psi(x, t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \hat{\psi}_0(k) e^{i \big(kx - \omega(k) t\big)} \ dk
\end{equation*}

where $\omega(k)$ is defined by

\begin{equation*}
\omega(k) = \frac{\hbar k^2}{2m}
\end{equation*}

Then $\psi(x, t)$ solves the free Schrödinger equation with initial condition $\psi_0$.
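
This solution formula can be implemented directly with the FFT: multiply the Fourier transform of $\psi_0$ by $e^{-i \omega(k) t}$ and transform back. A numpy sketch (the grid sizes and Gaussian initial packet are arbitrary choices):

```python
import numpy as np

hbar, m = 1.0, 1.0
N, Lbox = 1024, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # angular wavenumbers

# Gaussian initial wave packet with mean momentum hbar*k0
k0 = 2.0
psi0 = np.exp(-x**2) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)      # normalize in L^2

def evolve(psi, t):
    """Apply the Fourier multiplier e^{-i omega(k) t}, omega(k) = hbar k^2 / 2m."""
    omega = hbar * k**2 / (2 * m)
    return np.fft.ifft(np.exp(-1j * omega * t) * np.fft.fft(psi))

psi_t = evolve(psi0, t=1.0)
norm_t = np.sum(np.abs(psi_t)**2) * dx
assert abs(norm_t - 1.0) < 1e-10           # unitary evolution preserves the norm

mean_x = np.sum(x * np.abs(psi_t)**2) * dx
assert abs(mean_x - hbar * k0 / m * 1.0) < 1e-6   # packet moves at the group velocity
```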

5. A Particle in a Square Well

6. Perspectives on the Spectral Theorem

Notation

  • $X$ denotes the position operator given by

    \begin{equation*}
\big( X \psi \big)(x) = x \psi(x)
\end{equation*}

    acting on $\mathbf{H} = L^2 (\mathbb{R})$

  • $A$ is a self-adjoint operator
  • Borel set $E$ of $\mathbb{R}$
  • $V_E$ denotes the closed span of all eigenvectors of $A$ with eigenvalues in $E$
    • In cases where $A$ does not have a true orthonormal basis of eigenvectors, $V_E$ is called a spectral subspace
  • $P_E$ is the orthogonal projection onto $V_E$
  • For any unit vector $\psi$, we have

    \begin{equation*}
\text{prob}_{\psi}(A \in E) = \left\langle \psi, P_E \psi \right\rangle
\end{equation*}
  • Indicator function

    \begin{equation*}
\big( 1_E \psi \big)(x) = 
\begin{cases}
  \psi(x) & \text{if } x \in E \\
  0 & \text{otherwise}
\end{cases}
\end{equation*}

Goals of Spectral Theory

  • Recall that if the eigenvalues are distinct and $\psi$ decomposes as

    \begin{equation*}
\psi = \sum_{j} c_j e_j
\end{equation*}

    the probability of observing the value $\lambda_j$ will be $|c_j|^2$, since $P_{\left\{ \lambda_j \right\}}$ is just the projection onto $e_j$.

  • In cases where $A$ does not have a true orthonormal basis of eigenvectors, we would like the spectral theorem to provide a family of projection operators $P_E$
    • One for each Borel subset $E \subset \mathbb{R}$
    • Will allow us to define probabilities as in "standard" case above
  • Call these projection operators spectral projections and the associated subspaces $V_E$ spectral subspaces.
  • Intuitively, $V_E$ may be thought of as the closed span of all the generalized eigenvectors with eigenvalues in $E$.

Position operator

  • Has no true eigenvectors, i.e. no eigenvectors that are actually in $\mathbf{H}$
  • If we think that "generalized eigenvectors" for $X$ are the distributions given by

    \begin{equation*}
\delta (x - \lambda), \quad \lambda \in \mathbb{R}
\end{equation*}

    then one might guess that the spectral subspace $V_E$ should consist of those functions that are "supported" on $E$, i.e. a superposition of the "functions" $\delta(x - \lambda)$ with $\lambda \in E$ should define a function supported on $E$.

  • Spectral projection $P_E$ is then orthogonal projection onto $V_E$:

    \begin{equation*}
P_E \psi = 1_E \psi
\end{equation*}

    then

    \begin{equation*}
\text{prob}_{\psi}(X \in E) = \left\langle \psi, P_E \psi \right\rangle = \int_E |\psi(x)|^2 \ dx
\end{equation*}
  • The functional calculus of $X$
    • If $f(\lambda) = \lambda^m$ then we should have $f(X) \equiv X^m$
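
A numerical sketch of this probability rule for the position operator (the state below is an arbitrary normalized Gaussian):

```python
import numpy as np

# Normalized Gaussian wavefunction on a grid (arbitrary example state)
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)     # unit L^2 norm

# Spectral projection for E = [0, 1]: P_E psi = 1_E psi
E = (x >= 0) & (x <= 1)
P_E_psi = np.where(E, psi, 0.0)

# P_E is a projection: applying it twice changes nothing
assert np.allclose(np.where(E, P_E_psi, 0.0), P_E_psi)

# prob_psi(X in E) = <psi, P_E psi> = int_E |psi(x)|^2 dx
prob = np.sum(psi * P_E_psi) * dx
assert abs(prob - 0.4214) < 2e-3           # = erf(1)/2 for this Gaussian
```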

7. Spectral Theorem for Bounded Self-Adjoint Operators

Notation

  • $\mathbf{H}$ is a separable complex Hilbert space
  • Operator norm of $A$ on $\mathbf{H}$ is

    \begin{equation*}
||A|| := \sup_{\psi \in \mathbf{H} \setminus \left\{ 0 \right\}} \frac{||A \psi||}{||\psi||}
\end{equation*}

  • Banach space of bounded operators on $\mathbf{H}$, wrt. operator norm is denoted $\mathcal{B}(\mathbf{H})$.
  • $\rho(A)$ denotes the resolvent set of $A$
  • $\sigma(A)$ denotes the spectrum of $A$
  • $\mu^A$ denotes the projection-valued measure associated with the self-adjoint operator $A$
  • For any projection-valued measure $\mu$ and $\psi \in \mathbf{H}$, we have an ordinary (positive) real-valued measure $\mu_{\psi}$ given by

    \begin{equation*}
\mu_{\psi} (E) = \left\langle \psi, \mu(E) \psi \right\rangle
\end{equation*}
  • $Q_f: \mathbf{H} \to \mathbb{C}$ is a map defined by

    \begin{equation*}
Q_f (\psi) = \int_X f \ d \mu_{\psi} = \left\langle \psi, \bigg( \int_X f \ d \mu \bigg) \psi \right\rangle
\end{equation*}
  • Spectral subspace for each Borel set $E \subset \mathbb{R}$

    \begin{equation*}
V_E = \text{Range} \big( \mu^A (E) \big)
\end{equation*}

    of $\mathbf{H}$

  • $\{ e_j( \cdot ) \}_{j = 1}^\infty$ defines a simultaneous orthonormal basis for a family $\{ \mathbf{H}_{\lambda}, \lambda \in X \}$ of separable Hilbert spaces

Properties of Bounded Operators

  • Linear operator $A$ on $\mathbf{H}$ is said to be bounded if the operator norm of $A$

    \begin{equation*}
||A|| := \sup_{\psi \in \mathbf{H} \setminus \left\{ 0 \right\}} \frac{||A \psi||}{||\psi||}
\end{equation*}

    is finite.

  • Space of bounded operators on $\mathbf{H}$ forms a Banach space under the operator norm, and we have the inequality

    \begin{equation*}
||AB|| \le ||A|| \ ||B||
\end{equation*}

    for all bounded operators $A$ and $B$.

For $A \in \mathcal{B}(\mathbf{H})$, the resolvent set of $A$, denoted $\rho(A)$ is the set of all $\lambda \in \mathbb{C}$ such that the operator $\big( A - \lambda I \big)$ has a bounded inverse.

The spectrum of $A$, denoted by $\sigma(A)$, is the complement in $\mathbb{C}$ of the resolvent set.

For $\lambda$ in the resolvent set of $A$, the operator $\big( A - \lambda I \big)^{-1}$ is called the resolvent of $A$ at $\lambda$.

Alternatively, the resolvent set of $A$ can be described as the set of $\lambda \in \mathbb{C}$ for which $\big( A - \lambda I \big)$ is one-to-one and onto.

For all $A \in \mathcal{B}(\mathbf{H})$, the following results hold.

  1. The spectrum $\sigma(A)$ of $A$ is a closed, bounded and nonempty subset of $\mathbb{C}$.
  2. If $|\lambda| &gt; ||A||$, then $\lambda$ is in the resolvent set of $A$

Point 2 in proposition:hall13-quant-7.5 establishes that $\sigma(A)$ is bounded if $A$ is bounded.

Suppose $A \in \mathcal{B}(\mathbf{H})$ satisfies $||A|| &lt; 1$.

Then the operator $\big( I - A \big)$ is invertible, with the inverse given by the following convergent series in $\mathcal{B}(\mathbf{H})$:

\begin{equation*}
\big( I - A \big)^{-1} = I + A + A^2 + A^3 + \dots
\end{equation*}
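
A quick numerical check of the Neumann series (the matrix is a random example scaled so that $||A|| < 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A *= 0.9 / np.linalg.norm(A, 2)       # scale so the operator (spectral) norm is 0.9
I = np.eye(4)

# Neumann series: (I - A)^{-1} = I + A + A^2 + A^3 + ...
S, term = np.zeros((4, 4)), I.copy()
for _ in range(500):
    S += term
    term = term @ A

assert np.allclose(S, np.linalg.inv(I - A))
```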

For all $A \in \mathcal{B}(\mathbf{H})$, we have

\begin{equation*}
\big[ \text{Range}(A) \big]^{\perp} = \text{ker}(A^*)
\end{equation*}

Spectral Theorem for Bounded Self-Adjoint Operators

Given a bounded self-adjoint operator $A$, we hope to associate with each Borel set $E \subset \sigma(A)$ a closed subspace $V_E$ of $\mathbf{H}$, where we think intuitively that $V_E$ is the closed span of the generalized eigenvectors for $A$ with eigenvalues in $E$.

We would expect the following properties of these subspaces:

  1. $V_{\sigma(A)} = \mathbf{H}$ and $V_{\emptyset} = \left\{ 0 \right\}$
    • Captures idea that generalized eigenvectors should span $\mathbf{H}$
  2. If $E$ and $F$ are disjoint, then $V_E \perp V_F$
    • Generalized eigenvectors ought to have some sort of orthogonality for distinct eigenvalues (even if not actually in $\mathbf{H}$)
  3. For any $E$ and $F$, $V_{E \cap F} = V_E \cap V_F$
  4. If $E_1, E_2, \dots$ are disjoint and $E = \cup_j E_j$, then

    \begin{equation*}
V_E = \bigoplus_j V_{E_j}
\end{equation*}
  5. For any $E$, $V_E$ is invariant under $A$.
  6. If $E \subset [\lambda_0 - \varepsilon, \lambda_0 + \varepsilon]$ and $\psi \in V_E$, then

    \begin{equation*}
|| (A - \lambda_0 I) \psi || \le \varepsilon ||\psi||
\end{equation*}
Projection-Valued measures

For any closed subspace $V \subset \mathbf{H}$, there exists a unique bounded operator $P$ such that

\begin{equation*}
P v = 
\begin{cases}
  v & \text{if } v \in V \\
  0 & \text{if } v \in V^{\perp}
\end{cases}
\end{equation*}

where $V^{\perp}$ is the orthogonal complement.

This operator is called the orthogonal projection onto $V$ and it satisfies

\begin{equation*}
P^2 = P \quad \text{and} \quad P^* = P
\end{equation*}

i.e. it is idempotent and self-adjoint.

One also has the properties

\begin{equation*}
\left\langle Px, (y - Py) \right\rangle = \left\langle (x - Px), Py \right\rangle = 0
\end{equation*}

or equivalently,

\begin{equation*}
\left\langle x, Py \right\rangle = \left\langle Px, Py \right\rangle = \left\langle Px, y \right\rangle
\end{equation*}

Conversely, if $P$ is any bounded operator on $\mathbf{H}$ satisfying $P^2 = P$ and $P^* = P$, then $P$ is the orthogonal projection onto a closed subspace $V$, where

\begin{equation*}
V = \text{range}(P)
\end{equation*}

  • It is convenient to describe closed subspaces of $\mathbf{H}$ in terms of the associated orthogonal projection operators
  • The projection operators express the first four properties of the spectral subspaces; those properties are similar to those of a measure, so we use the term projection-valued measure
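
A finite-dimensional sketch: the orthogonal projection onto the column span of a matrix $V$ can be written $P = V (V^* V)^{-1} V^*$ (the columns below are arbitrary examples), and it satisfies $P^2 = P$ and $P^* = P$:

```python
import numpy as np

# Orthogonal projection onto span of the (independent) columns of V:
# P = V (V^* V)^{-1} V^*
V = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [2.0, -1.0]])
P = V @ np.linalg.inv(V.conj().T @ V) @ V.conj().T

# Idempotent and self-adjoint
assert np.allclose(P @ P, P)
assert np.allclose(P, P.conj().T)

# P fixes vectors in the subspace and kills vectors orthogonal to it
v = V @ np.array([2.0, -3.0])          # a vector in range(V)
assert np.allclose(P @ v, v)
w = np.array([1.0, -1.0, 1.0, 1.0])
w = w - P @ w                          # component orthogonal to the subspace
assert np.allclose(P @ w, 0.0)
```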

Let $X$ be a set and $\Omega$ a $\sigma \text{-algebra}$ in $X$.

A map $\mu : \Omega \to \mathcal{B}(\mathbf{H})$ is called a projection-valued measure if the following properties are satisfied:

  1. For each $E \in \Omega$, $\mu(E)$ is an orthogonal projection
  2. $\mu(\emptyset) = 0$ and $\mu(X) = I$
  3. If $E_1, E_2, \dots \in \Omega$ are disjoint, then for all $v \in \mathbf{H}$, we have

    \begin{equation*}
\mu \Bigg( \bigcup_{j = 1}^\infty E_j \Bigg) v = \sum_{j=1}^{\infty} \mu(E_j) v
\end{equation*}

    where the convergence of the sum is in the norm-topology on $\mathbf{H}$.

  4. For all $E_1, E_2 \in \Omega$, we have $\mu(E_1 \cap E_2) = \mu(E_1) \mu(E_2)$

Properties 2 and 4 of a projection-valued measure tell us that if $E_1$ and $E_2$ are disjoint, then

\begin{equation*}
\mu(E_1) \mu(E_2) = 0
\end{equation*}

from which it follows that the range of $\mu(E_1)$ and the range of $\mu(E_2)$ are perpendicular.

Let $\Omega$ be a $\sigma \text{-algebra}$ in a set $X$ and let $\mu: \Omega \to \mathcal{B}(\mathbf{H})$ be a projection-valued measure.

Then there exists a unique linear map, denoted

\begin{equation*}
f \mapsto \int_X f \ d \mu
\end{equation*}

from the space of bounded, measurable, complex-valued functions on $X$ into $\mathcal{B}(\mathbf{H})$ with the property that

\begin{equation*}
\left\langle \psi, \bigg( \int_X f \ d\mu \bigg) \psi \right\rangle = \int_X f \ d \mu_{\psi}
\end{equation*}

for all $f$ and all $\psi \in \mathbf{H}$.

This integral has the following properties:

  1. For all $E \in \Omega$, we have

    \begin{equation*}
\int_X 1_E \ d\mu = \mu(E)
\end{equation*}

    In particular, the integral of the constant function $1$ is $I$.

  2. For all $f$, we have

    \begin{equation*}
\norm{\int_X f \ d \mu} \le \sup_{\lambda \in X} | f(\lambda)|
\end{equation*}
  3. Integration is multiplicative: For all $f$ and $g$, we have

    \begin{equation*}
\int_X fg \ d \mu = \bigg( \int_X f \ d \mu \bigg) \bigg( \int_X g \ d \mu \bigg)
\end{equation*}
  4. For all $f$, we have

    \begin{equation*}
\int_X \bar{f} \  d \mu = \bigg( \int_X f \ d \mu \bigg)^*
\end{equation*}

    In particular, if $f$ is real-valued, then $\int_X f \ d \mu$ is self-adjoint.

By Property 1 and linearity, integration wrt. $\mu$ has the expected behavior on simple functions. It then follows from Property 2 that the integral of an arbitrary bounded measurable function $f$ can be computed as follows:

  1. Take a sequence $s_n$ of simple functions converging uniformly to $f$
  2. The integral of $f$ is then the limit, in the norm-topology, of the integrals of the $s_n$.
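
A finite-dimensional sketch: for a Hermitian matrix with spectral decomposition $A = \sum_j \lambda_j P_j$, the projection-valued measure is $\mu(E) = \sum_{\lambda_j \in E} P_j$, and integration against it reproduces the functional calculus $\int f \, d\mu = f(A)$ (the matrix is an arbitrary example):

```python
import numpy as np

# Hermitian A with distinct eigenvalues; P_j are the rank-one eigenprojections
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
lam, U = np.linalg.eigh(A)
P = [np.outer(U[:, j], U[:, j].conj()) for j in range(3)]

def mu(E):
    """Projection-valued measure of a set E (given as a predicate on R)."""
    return sum((P[j] for j in range(3) if E(lam[j])), np.zeros((3, 3)))

def integral(f):
    """int f dmu = sum_j f(lambda_j) P_j."""
    return sum(f(lam[j]) * P[j] for j in range(3))

assert np.allclose(mu(lambda l: True), np.eye(3))        # mu(X) = I
assert np.allclose(integral(lambda l: l), A)             # int lambda dmu = A
assert np.allclose(integral(lambda l: l**2), A @ A)      # multiplicativity
```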

A quadratic form on a Hilbert space $\mathbf{H}$ is a map $Q: \mathbf{H} \to \mathbb{C}$ with the following properties:

  1. $Q(\lambda \psi) = |\lambda|^2 Q(\psi)$ for all $\psi \in \mathbf{H}$ and $\lambda \in \mathbb{C}$
  2. the map $L: \mathbf{H} \times \mathbf{H} \to \mathbb{C}$ defined by

    \begin{equation*}
\begin{split}
  L(\phi, \psi) =&amp; \frac{1}{2} \big[ Q(\phi + \psi) - Q(\phi) - Q(\psi) \big] \\
  &amp; - \frac{i}{2} \big[ Q(\phi + i \psi) - Q(\phi) - Q(i \psi) \big]
\end{split}
\end{equation*}

is a sesquilinear form.

A quadratic form $Q$ is bounded if there exists a constant $C$ such that

\begin{equation*}
\left| Q(\phi) \right| \le C \norm{\phi}^2, \quad \forall \phi \in \mathbf{H}
\end{equation*}

The smallest such constant $C$ is the norm of $Q$.

If $Q$ is a bounded quadratic form on $\mathbf{H}$, there is a unique $A \in \mathcal{B}(\mathbf{H})$ such that

\begin{equation*}
Q(\psi) = \left\langle \psi, A \psi \right\rangle, \quad \forall \psi \in \mathbf{H}
\end{equation*}

If $Q(\psi)$ belongs to $\mathbb{R}$ for all $\psi \in \mathbf{H}$, then the operator $A$ is self-adjoint.

Spectral Theorem for Bounded Self-Adjoint Operators: direct integral approach

Notation
  • $\mu$ is a $\sigma \text{-finite}$ measure on a $\sigma \text{-algebra}$ $\Omega$ of sets in $X$
  • For each $\lambda \in X$ we have a separable Hilbert space $\mathbf{H}_{\lambda}$ with inner product $\left\langle \cdot, \cdot \right\rangle_{\lambda}$
  • Elements of the direct integral are called sections $s$
Stuff

There are several benefits to this approach compared to the simpler "multiplication operator" approach.

  1. The set $X$ and the function $h$ become canonical:
    • $X = \sigma(A)$
    • $h(\lambda) = \lambda$
  2. The direct integral carries with it a notion of generalized eigenvectors / kets, since the space $\mathbf{H}_{\lambda}$ can be thought of as the space of generalized eigenvectors with eigenvalue $\lambda$.
  3. A simple way to classify self-adjoint operators up to unitary equivalence: two self-adjoint operators are unitarily equivalent if and only if their direct integral representations are equivalent in a natural sense.

Elements of the direct integral are called sections $s$, which are functions on $X$ with values in the union of the $\mathbf{H}_{\lambda}$, with property

\begin{equation*}
s(\lambda) \in \mathbf{H}_{\lambda} \quad \forall \lambda \in X
\end{equation*}

We define the norm of a section $s$ by the formula

\begin{equation*}
\norm{s}^2 = \int_X \left\langle s(\lambda), s(\lambda) \right\rangle_{\lambda} \ d \mu(\lambda)
\end{equation*}

provided that the integral on the RHS is finite.

The inner product between two sections $s_1$ and $s_2$ (with finite norm) should then be given by the formula

\begin{equation*}
\left\langle s_1, s_2 \right\rangle := \int_X \left\langle s_1(\lambda), s_2(\lambda) \right\rangle \ d \mu(\lambda)
\end{equation*}

This is very reminiscent of sections of a vector bundle in differential geometry:

  • $\mathbf{H}_{\lambda}$ is the fibre at each point $\lambda$ in the mfd.
  • $X$ is the mfd.

First we slightly alter the concept of an orthonormal basis. We say a family $\{ e_j \}$ of vectors is an orthonormal basis for a Hilbert space $\mathbf{H}$ if

\begin{equation*}
\left\langle e_j, e_k \right\rangle = 0, \quad j \ne k
\end{equation*}

and

\begin{equation*}
\norm{e_j} = 1 \text{ or } 0
\end{equation*}

This just means that we allow some of the vectors in our basis to be zero.

We define a simultaneous orthonormal basis for a family $\{ \mathbf{H}_{\lambda}, \lambda \in X \}$ of separable Hilbert spaces to be a collection $\{ e_j( \cdot ) \}_{j = 1}^\infty$ of sections with the property that

\begin{equation*}
\left\{ e_j(\lambda) \right\}_{j = 1}^\infty \text{ is a basis for } \mathbf{H}_{\lambda}, \quad \forall \lambda \in X
\end{equation*}

Provided that the function $\lambda \mapsto \dim \mathbf{H}_{\lambda}$ is a measurable function from $X$ into $[0, \infty]$, it is possible to choose a simultaneous orthonormal basis $\{ e_j(\cdot) \}$ such that

\begin{equation*}
\left\langle e_j(\lambda), e_k(\lambda) \right\rangle
\end{equation*}

is measurable for all $j$ and $k$.

Choosing a simultaneous orthonormal basis with the property that the function

\begin{equation*}
\lambda \mapsto \dim \mathbf{H}_{\lambda}
\end{equation*}

is a measurable function from $X$ into $[0, \infty]$, we can define a section to be measurable if the function

\begin{equation*}
\lambda \mapsto \left\langle e_j(\lambda), s(\lambda) \right\rangle_{\lambda}
\end{equation*}

is a measurable complex-valued function for each $j$. This also means that each $e_j$ is itself a measurable section.

We refer to such a choice of simultaneous orthonormal basis as a measurability structure on the collection $\{ \mathbf{H}_{\lambda}, \lambda \in X \}$.

Given two measurable sections $s_1$ and $s_2$, the function

\begin{equation*}
\lambda \mapsto \left\langle s_1(\lambda), s_2(\lambda) \right\rangle_{\lambda} = \sum_{j=1}^{\infty} \left\langle s_1(\lambda), e_j(\lambda) \right\rangle_{\lambda} \left\langle e_j(\lambda), s_2(\lambda) \right\rangle_{\lambda}
\end{equation*}

is also measurable.

Suppose the following structures are given:

  1. a $\sigma \text{-finite}$ measure space $(X, \Omega, \mu)$
  2. a collection $\{ \mathbf{H}_{\lambda} \}_{\lambda \in X}$ of separable Hilbert spaces for which the dimension function is measurable
  3. a measurability structure on $\{ \mathbf{H}_{\lambda} \}_{\lambda \in X}$

Then the direct integral of $\mathbf{H}_{\lambda}$ wrt. $\mu$, denoted

\begin{equation*}
\int_X^{\oplus} \mathbf{H}_{\lambda} \ d \mu(\lambda)
\end{equation*}

is the space of equivalence classes of almost-everywhere-equal measurable sections $s$ for which

\begin{equation*}
\norm{s}^2 := \int_X \left\langle s(\lambda), s(\lambda) \right\rangle_{\lambda} \ d \mu(\lambda) < \infty
\end{equation*}

The inner product $\left\langle s_1, s_2 \right\rangle$ of two sections $s_1$ and $s_2$ is given by the formula

\begin{equation*}
\left\langle s_1, s_2 \right\rangle := \int_X \left\langle s_1(\lambda), s_2(\lambda) \right\rangle_{\lambda} \ d \mu(\lambda)
\end{equation*}
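When $X$ is a finite set with counting measure, the direct integral reduces to an ordinary direct sum, which makes the norm and inner-product formulas easy to check numerically. A minimal sketch — the fibre dimensions and section values below are made up for illustration:

```python
import numpy as np

# Hypothetical finite example: X = {0, 1, 2} with counting measure, and
# fibres H_0 = C^1, H_1 = C^2, H_2 = C^3 (the dimension may vary with lambda).
# A section s assigns to each lambda a vector s(lambda) in H_lambda.
s1 = [np.array([1.0]), np.array([0.0, 2.0]), np.array([1.0, 1.0, 0.0])]
s2 = [np.array([2.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0, 3.0])]
weights = [1.0, 1.0, 1.0]  # mu({lambda}) for the counting measure

def inner(a, b, w=weights):
    """<s1, s2> = sum over lambda of <s1(lambda), s2(lambda)>_lambda * mu({lambda})."""
    return sum(wi * np.vdot(ai, bi) for wi, ai, bi in zip(w, a, b))

norm_sq = inner(s1, s1).real  # ||s||^2 as the fibre-wise "integral"
print(norm_sq)      # 1 + 4 + 2 = 7
print(inner(s1, s2).real)  # 2 + 0 + 3 = ... fibre-wise sums
```

For a genuinely continuous $X$ the sum becomes the integral $\int_X \langle s_1(\lambda), s_2(\lambda) \rangle_\lambda \, d\mu(\lambda)$, but the structure is the same.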

8. Spectral Theorem for Bounded Self-Adjoint Operators: Proofs

Notation

  • $A \in \mathcal{B}(\mathbf{H})$ with spectral radius

    \begin{equation*}
R(A) := \sup_{\lambda \in \sigma(A)} \left| \lambda \right|
\end{equation*}

Stage 1: Continuous Functional Calculus

Stage 2: An Operator-Valued Riesz Representation Theorem

Let $X$ be a compact metric space and let $\mathcal{C}(X; \mathbb{R})$ denote the space of continuous, real-valued functions on $X$.

Suppose $\Lambda: \mathcal{C}(X ; \mathbb{R}) \to \mathbb{R}$ is a linear functional with the property that $\Lambda(f)$ is non-negative whenever $f$ is non-negative.

Then there exists a unique (real-valued, positive) measure $\mu$ on the Borel sigma-algebra in $X$ for which

\begin{equation*}
\Lambda(f) = \int_X f \ d \mu, \qquad \forall f \in \mathcal{C}(X ; \mathbb{R})
\end{equation*}

Observe that $\mu$ is a finite measure, with

\begin{equation*}
\mu(X) = \Lambda(\mathbf{1})
\end{equation*}

where $\mathbf{1}$ is the constant function equal to $1$.

Practice problems

"Point-potential"

We have a potential of the form

\begin{equation*}
V(x) = - \alpha \delta(x)
\end{equation*}

which gives us the TISE:

\begin{equation*}
- \frac{\hbar^2}{2m} \frac{d^2}{dx^2} \varphi(x) - \alpha \delta(x) \varphi(x) = E \varphi(x)
\end{equation*}

Then we impose the following conditions on the solution $\varphi(x)$ :

  • continuity at $x = 0$, i.e. require $\underset{x \rightarrow 0^-}{\lim} \varphi(x) = \underset{x \rightarrow 0^+}{\lim} \varphi(x)$
  • $\int_{-\infty}^\infty | \varphi(x) |^2 dx = 1$

The continuity restriction does NOT necessarily mean that the derivative itself is continuous!

If we then integrate both sides of the TISE above over the interval $(- \varepsilon, \varepsilon)$ and take the limit $\varepsilon \to 0$:

\begin{equation*}
\underset{\varepsilon \to 0}{\lim} \int_{-\varepsilon}^\varepsilon \Big( - \frac{\hbar^2}{2m} \frac{d^2}{dx^2} \varphi(x) - \alpha \delta(x) \varphi(x) \Big) dx = E \underset{\varepsilon \to 0}{\lim} \int_{- \varepsilon}^\varepsilon \varphi(x) dx
\end{equation*}

which gives us

\begin{equation*}
- \frac{\hbar^2}{2m} \Big( \underset{x \to 0^+}{\lim} \frac{d}{dx} \varphi(x) - \underset{x \to 0^-}{\lim} \frac{d}{dx} \varphi(x) \Big) - \alpha \varphi(0) = E \int_{-\varepsilon}^\varepsilon \varphi(x) dx
\end{equation*}

And since $\int_{-\varepsilon}^\varepsilon \varphi(x) dx \to 0$ as $\varepsilon \to 0$, we get:

\begin{equation*}
\Big( \underset{x \to 0^+}{\lim} \frac{d}{dx} \varphi(x) - \underset{x \to 0^-}{\lim} \frac{d}{dx} \varphi(x) \Big) = - \frac{2m}{\hbar^2} \alpha \varphi(0)
\end{equation*}

Then we compute the derivatives and take the limits, giving us something better to work with!
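As a sanity check of the jump condition: in units $\hbar = m = \alpha = 1$ the known bound state is $\varphi(x) = \sqrt{\kappa}\, e^{-\kappa |x|}$ with $\kappa = m \alpha / \hbar^2 = 1$. A quick numerical sketch verifying that its derivative jump at $x = 0$ equals $- \frac{2m \alpha}{\hbar^2} \varphi(0)$:

```python
import numpy as np

# Units with hbar = m = alpha = 1, so kappa = m*alpha/hbar^2 = 1.
# Known bound state of V(x) = -alpha*delta(x): phi(x) = sqrt(kappa)*exp(-kappa*|x|).
kappa = 1.0
phi = lambda x: np.sqrt(kappa) * np.exp(-kappa * abs(x))

eps = 1e-6
# One-sided numerical derivatives approaching x = 0 from either side:
d_plus = (phi(2 * eps) - phi(eps)) / eps
d_minus = (phi(-eps) - phi(-2 * eps)) / eps
jump = d_plus - d_minus

# Jump condition: phi'(0+) - phi'(0-) = -(2*m*alpha/hbar^2) * phi(0) = -2*kappa*phi(0)
expected = -2 * kappa * phi(0)
print(jump, expected)  # both approximately -2
```

The agreement confirms that the delta potential forces a kink in $\varphi$ at the origin even though $\varphi$ itself is continuous.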

Harmonic Oscillator

Consider a potential energy of the form $V(x) = \frac{1}{2} m \omega_0^2 x^2$. The classical energy is then:

\begin{equation*}
E = \frac{p^2}{2m} + \frac{1}{2} m \omega_0^2 x^2
\end{equation*}

Using the Hamiltonian operator $H$ and the momentum operator $p = - i \hbar \frac{\partial}{\partial x}$, we can then write the Schrödinger equation, letting $\lambda = \frac{2mE}{\hbar^2}$ and $\alpha = \frac{m \omega_0}{\hbar}$ for convenience, as:

\begin{equation*}
- \frac{\partial^2}{\partial x^2} \varphi(x) + \alpha^2 x^2 \varphi(x) = \lambda \varphi(x)
\end{equation*}

or equivalently,

\begin{equation*}
\frac{\partial^2}{\partial x^2} \varphi(x) + (\lambda - \alpha^2 x^2) \varphi(x) = 0
\end{equation*}

The goal is then to find solutions $\varphi(x)$ that are physical and well-behaved for all $x$.

One approach is to perform the following:

  1. Obtain solution for limiting behavior
  2. Perform power series expansion

Our requirements for the solution:

  • single valued over the region (well-behaved)

First consider when $|x|$ is large, so that $\alpha^2 x^2$ dominates and $\lambda$ is negligible by comparison. Then

\begin{equation*}
\frac{\partial^2}{\partial x^2} \varphi(x) = \alpha^2 x^2 \varphi(x)
\end{equation*}

which we can guess has solutions of the form

\begin{equation*}
\varphi(x) = e^{\pm \frac{\alpha}{2}x^2 }
\end{equation*}

to which we then note that $e^{+ \frac{\alpha}{2} x^2}$ is not satisfactory, since it diverges (and we restrict our attention to square-integrable functions only). Hence,

\begin{equation*}
\varphi(x) = e^{- \frac{\alpha}{2}x^2}
\end{equation*}
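A quick finite-difference sanity check (with $\alpha = 1$ chosen arbitrarily) that the Gaussian satisfies $\varphi'' = (\alpha^2 x^2 - \alpha) \varphi$ exactly, which reduces to $\varphi'' \approx \alpha^2 x^2 \varphi$ once $|x|$ is large:

```python
import numpy as np

# For phi(x) = exp(-alpha*x^2/2), differentiating twice by hand gives
# phi'' = (alpha^2*x^2 - alpha)*phi, so phi''/phi -> alpha^2*x^2 as |x| grows.
alpha = 1.0
phi = lambda x: np.exp(-alpha * x**2 / 2)

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation to f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 2.0
ratio = second_derivative(phi, x) / phi(x)
print(ratio, alpha**2 * x**2 - alpha)  # ~3.0 vs 3.0
```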

Why does it make sense to consider the limit behavior?

Due to the continuity imposed on the solution / wave function $\varphi(x)$, we know that the solution in the limiting regime is related to the solution across the entire domain of $\varphi(x)$.

From this solution we can then derive more solutions using reduction of order, i.e. we multiply our previous solution by some arbitrary function $f(x)$, which gives us

\begin{equation*}
\varphi(x) = e^{- \frac{\alpha}{2} x^2} f(x)
\end{equation*}

Taking the derivatives and substituting into the original differential equation we get:

\begin{equation*}
f'' - 2 \alpha x f' + (\lambda - \alpha) f = 0
\end{equation*}

To make the connection with a "special" function in a few steps, let's make the substitution $\xi = \sqrt{\alpha} x$ and replace the function $f(x)$ with $H(\xi)$ to get the following equation:

\begin{equation*}
\frac{d^2}{d \xi^2} H - 2 \xi \frac{d}{d \xi} H + \Big( \frac{\lambda}{\alpha} - 1 \Big) H = 0
\end{equation*}

We can now represent $H(\xi)$ as a power series:

\begin{equation*}
H(\xi) = \sum_{n=0}^\infty a_n \xi^n
\end{equation*}

where we've made the assumption that this series converges for all $\xi$ and that the function is well defined.

Physically, we know the above has to be the case for the solutions to make sense and satisfy our overall constraint that the integral of the probability density must be finite: $\int_{-\infty}^\infty | \varphi(x)|^2 dx = 1$

Differentiating $H(\xi)$ and substituting back into the differential equation we (eventually) obtain the recursion formula:

\begin{equation*}
(n + 1) (n + 2) a_{n + 2} + \Big( \frac{\lambda}{\alpha} - 1 - 2n \Big) a_n = 0
\end{equation*}

And thus we have a solution! BUT we have a serious problem:

  • We assumed the series for $f(x)$ to be convergent for all $x$, which the recursion relation does not guarantee for general values of $\lambda$ (the energy values) and $\alpha$ (the potential-well curvature)
  • The function is square-integrable only if the series expansion of $f(x)$ is truncated, i.e. does not contain an infinite number of terms

Thus, when we require

\begin{equation*}
\int_{-\infty}^\infty | \varphi(x)|^2 dx = 1
\end{equation*}

it can be seen from the recursion relation that the series expansion is finite (and hence converges $\forall x$ ) if

\begin{equation*}
\Big( \frac{\lambda}{\alpha} - 1 - 2n \Big) = 0
\end{equation*}

or, equivalently, the energy values form the following discrete spectrum,

\begin{equation*}
\lambda = (2n + 1) \alpha
\end{equation*}

which, substituting back $\lambda$ and $\alpha$, we can finally write as:

\begin{equation*}
E_n = \Big( n + \frac{1}{2} \Big) h \nu_0
\end{equation*}
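The termination of the series can be checked directly: seeding the recursion with $\lambda / \alpha = 2n + 1$ kills every coefficient beyond $a_n$, and the surviving coefficients are (up to overall scale) those of the Hermite polynomial $H_n$. A small sketch — the helper series_coeffs is ours, not a standard routine:

```python
import numpy as np

def series_coeffs(n, kmax=10):
    """Coefficients a_k from the recursion
       (k+1)(k+2) a_{k+2} = (2k + 1 - lam_over_alpha) a_k
    with lam_over_alpha = 2n + 1 (the quantization condition).
    Seed the parity matching n; the series then terminates at k = n."""
    lam_over_alpha = 2 * n + 1
    a = np.zeros(kmax)
    a[n % 2] = 1.0  # seed a_0 (even n) or a_1 (odd n)
    for k in range(kmax - 2):
        a[k + 2] = (2 * k + 1 - lam_over_alpha) * a[k] / ((k + 1) * (k + 2))
    return a

coeffs = series_coeffs(2)
print(coeffs[:4])  # [ 1.  0. -2.  0.] -> H proportional to 1 - 2*xi^2, i.e. H_2 up to scale
```

Any $\lambda / \alpha$ not of the form $2n + 1$ leaves the recursion running forever, reproducing the divergence problem described above.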

Short problems

Eigenvalue expansion of in angular momentum basis

The wavefunction of a particle at $t = 0$ is known to have the form

\begin{equation*}
u(r, \theta, \phi) = A R(r) f(\theta) \cos 2 \phi
\end{equation*}

where $f$ is an unknown function of $\theta$.

What can be predicted about the results of measuring:

  1. the z-component of angular momentum?
  2. the square of the angular momentum?

Hint: expand $u(r, \theta, \phi)$ in eigenfunctions of $\hat{L}_z$, which are of the form $\exp (im \phi)$, where $m = 0, \pm 1, \pm 2, \dots$.

Answer

Observe that

\begin{equation*}
\cos 2 \phi = \frac{1}{2} \Big[ e^{2 i \phi} + e^{- 2 i \phi} \Big]
\end{equation*}

Thus,

\begin{equation*}
u(r, \theta, \phi) = \frac{1}{2} A R(r) f(\theta) \Big[ e^{2 i \phi} + e^{- 2 i \phi} \Big]
\end{equation*}

Thus, when measuring $\hat{L}_z$, we obtain either $2 \hbar$ ($m = 2$) or $-2 \hbar$ ($m = -2$) with equal probability.

We can't really say anything about $\hat{L}^2$ without knowing more about $f(\theta)$, other than that $\ell \ge 2$, since $m = \pm 2$.
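The measurement probabilities follow directly from the expansion coefficients of $\cos 2\phi$ in the $e^{im\phi}$ basis; a minimal sketch:

```python
# cos(2*phi) = (exp(2i*phi) + exp(-2i*phi)) / 2: an equal-weight superposition
# of the m = +2 and m = -2 eigenfunctions of L_z.
amplitudes = {2: 0.5, -2: 0.5}  # coefficients of exp(i*m*phi)

# Born rule: P(m) = |c_m|^2 / sum_m' |c_m'|^2
norm = sum(abs(c)**2 for c in amplitudes.values())
probs = {m: abs(c)**2 / norm for m, c in amplitudes.items()}
print(probs)  # {2: 0.5, -2: 0.5} -> L_z = +2*hbar or -2*hbar, equally likely
```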

Constructing matrix for angular component of a system with $s = 1$

Construct the $3 \times 3$ matrix which represents the Cartesian component in the z-direction of the angular momentum for a system with $s = 1$.

Answer

For $s = 1$, $m \in \{ -1, 0, 1 \}$, and the matrix-elements of $\hat{S}_z$ are given by

\begin{equation*}
\mel{s, m'}{\hat{S}_z}{s, m} = m \hbar \delta_{m', m}
\end{equation*}

Thus, we have

\begin{equation*}
\hat{S}_z \longrightarrow \hbar
\begin{bmatrix}
  1 & 0 & 0 \\
  0 & 0 & 0 \\
  0 & 0 & -1
\end{bmatrix}
\end{equation*}

where we use $\longrightarrow$ instead of $=$ to indicate that $\hat{S}_z$ only corresponds to this matrix in this specific representation / space (Cartesian coordinates in this case).
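The matrix elements $m \hbar \delta_{m', m}$ translate directly into code (units $\hbar = 1$):

```python
import numpy as np

hbar = 1.0  # work in units hbar = 1
ms = [1, 0, -1]  # conventional basis ordering m = +1, 0, -1 for s = 1

# <s, m' | S_z | s, m> = m * hbar * delta_{m', m}: diagonal in the |s, m> basis
Sz = np.array([[m * hbar if mp == m else 0.0 for m in ms] for mp in ms])
print(Sz)
# [[ 1.  0.  0.]
#  [ 0.  0.  0.]
#  [ 0.  0. -1.]]
```

Since $\hat{S}_z$ is diagonal in its own eigenbasis, the same construction with the $\hat{S}_\pm$ matrix elements would give the off-diagonal components $\hat{S}_x$ and $\hat{S}_y$.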

Q & A

Simplification of potential well problem due to direction of travel

Footnotes:

1

It is true that the Hilbert space $L^2$ and its dual space are isomorphic; however, we have taken the wave function space $\mathcal{F}$ to be a subspace of $L^2$, which explains why $\mathcal{F}^*$ is "larger" than $\mathcal{F}$.