Differential Equations
Spectral method
A class of techniques for numerically solving certain differential equations. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (e.g. a Fourier series, which is a sum of sinusoids) and then choose the coefficients in the sum so as to satisfy the differential equation as well as possible.
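Not from the original notes, just a minimal sketch of the idea: solve the periodic problem $u'' = f$ on $[0, 2\pi)$ by picking Fourier coefficients mode by mode, so that $(ik)^2 \hat{u}_k = \hat{f}_k$. The right-hand side, grid size and zero-mean assumption are my own choices for illustration.

```python
# Minimal spectral-method sketch: solve u''(x) = f(x), periodic on [0, 2*pi),
# by choosing Fourier coefficients so the ODE holds mode by mode.
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
f = np.cos(3 * x)                          # example RHS (zero mean, assumed)

k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
f_hat = np.fft.fft(f)
u_hat = np.zeros_like(f_hat)
nonzero = k != 0                           # k = 0 mode fixed by zero mean
u_hat[nonzero] = -f_hat[nonzero] / k[nonzero] ** 2   # (ik)^2 = -k^2
u = np.fft.ifft(u_hat).real

print(np.abs(u - (-np.cos(3 * x) / 9)).max())   # exact solution: -cos(3x)/9
```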
Series solutions
Definitions
Suppose we have a second-order differential equation of the form

$$P(x) y'' + Q(x) y' + R(x) y = 0$$

where $P$, $Q$ and $R$ are polynomials.
We then rewrite the equation as

$$y'' + p(x) y' + q(x) y = 0, \qquad p(x) = \frac{Q(x)}{P(x)}, \quad q(x) = \frac{R(x)}{P(x)}$$

Then we say the point $x_0$ is
- an ordinary point if the functions $p$ and $q$ are analytic at $x_0$
- a singular point if the functions aren't analytic at $x_0$
Convergence
Have a look at p. 248 in "Elementary Differential Equations and Boundary Value Problems". The two pages that follow summarize a lot of different methods which can be used to determine convergence of a power series.
Systems of ODEs
Overview
The main idea here is to transform some n-th order ODE into a system of $n$
1st order ODEs, which we can then solve using "normal" linear algebra!
Procedure
Consider an arbitrary n-th order ODE

$$y^{(n)} = F\big(t, y, y', \dots, y^{(n-1)}\big)$$

- Change dependent variables to $x_1, \dots, x_n$: $x_1 = y,\ x_2 = y',\ \dots,\ x_n = y^{(n-1)}$
- Take derivatives of the new variables: $x_1' = x_2,\ \dots,\ x_{n-1}' = x_n,\ x_n' = F(t, x_1, \dots, x_n)$
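A minimal sketch of this transformation in code (the toy equation $y'' + y = 0$ is my own choice, rewritten as $x_1' = x_2$, $x_2' = -x_1$):

```python
# Sketch: the 2nd order ODE y'' + y = 0 as a first-order system,
# integrated with scipy; initial conditions chosen so y(t) = cos(t).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x            # x1 = y, x2 = y'
    return [x2, -x1]      # x1' = x2, x2' = -x1

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 10.0, 5)
print(sol.sol(t)[0] - np.cos(t))   # residual vs the exact solution
```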
Homogeneous
Fundamental matrices
Suppose that $\mathbf{x}^{(1)}(t), \dots, \mathbf{x}^{(n)}(t)$ form a fundamental set of solutions for the equation

$$\mathbf{x}' = \mathbf{P}(t) \mathbf{x}$$

on some interval $\alpha < t < \beta$. Then the matrix $\boldsymbol{\Psi}(t)$, whose columns are the vectors $\mathbf{x}^{(1)}(t), \dots, \mathbf{x}^{(n)}(t)$, is said to be a fundamental matrix for the system.
Note that the fundamental matrix is nonsingular / invertible since the solutions are lin. indep.
We reserve the notation $\boldsymbol{\Phi}(t)$ for the fundamental matrices satisfying

$$\boldsymbol{\Phi}(t_0) = \mathbf{I}$$

i.e. it's just a fundamental matrix parametrised in such a way that our initial conditions give us the identity matrix.
The solution of an IVP of the form

$$\mathbf{x}' = \mathbf{P}(t) \mathbf{x}, \qquad \mathbf{x}(t_0) = \mathbf{x}^0$$

can then be written

$$\mathbf{x}(t) = \boldsymbol{\Psi}(t) \boldsymbol{\Psi}^{-1}(t_0) \mathbf{x}^0$$

which can also be written

$$\mathbf{x}(t) = \boldsymbol{\Phi}(t) \mathbf{x}^0$$

Finally, we note that $\boldsymbol{\Phi}(t) = \boldsymbol{\Psi}(t) \boldsymbol{\Psi}^{-1}(t_0)$.
The exponential of a matrix is given by

$$e^{\mathbf{A} t} = \sum_{n=0}^{\infty} \frac{(\mathbf{A} t)^n}{n!}$$

and we note that it satisfies differential equations of the form $\mathbf{x}' = \mathbf{A} \mathbf{x}$, since

$$\frac{d}{dt} e^{\mathbf{A} t} = \mathbf{A} e^{\mathbf{A} t}$$

Hence, we can write the solution of the above differential equation as $\mathbf{x}(t) = e^{\mathbf{A} t} \mathbf{x}^0$.
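A quick numerical check of this (the matrix $\mathbf{A}$ below is an assumed example, not from the notes); here $e^{\mathbf{A}t}$ plays the role of $\boldsymbol{\Phi}(t)$ with $t_0 = 0$:

```python
# Sketch: x(t) = expm(A t) @ x0 solves x' = A x, x(0) = x0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # example matrix, eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])

for t in (0.0, 0.5, 1.0):
    print(t, expm(A * t) @ x0)    # expm(A*0) is the identity, as Phi(0) = I
```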
Repeating eigenvalues
Suppose we have a repeated eigenvalue $\lambda$ (of algebraic multiplicity 2), with the found solution corresponding to $\lambda$ being

$$\mathbf{x}^{(1)}(t) = \boldsymbol{\xi} e^{\lambda t}$$

where $\boldsymbol{\xi}$ satisfies $(\mathbf{A} - \lambda \mathbf{I}) \boldsymbol{\xi} = \mathbf{0}$.
We then assume the other solution to be of the form

$$\mathbf{x}^{(2)}(t) = \boldsymbol{\xi} t e^{\lambda t} + \boldsymbol{\eta} e^{\lambda t}$$

where $\boldsymbol{\eta}$ is determined by the equation

$$(\mathbf{A} - \lambda \mathbf{I}) \boldsymbol{\eta} = \boldsymbol{\xi}$$

Multiplying both sides by $(\mathbf{A} - \lambda \mathbf{I})$ we get

$$(\mathbf{A} - \lambda \mathbf{I})^2 \boldsymbol{\eta} = \mathbf{0}$$

since $(\mathbf{A} - \lambda \mathbf{I}) \boldsymbol{\xi} = \mathbf{0}$ by definition. Solving the above equation for $\boldsymbol{\eta}$ and substituting that solution into the expression for $\mathbf{x}^{(2)}(t)$, we get the second solution corresponding to the repeated eigenvalue.
We call the vector $\boldsymbol{\eta}$ the generalized eigenvector of $\mathbf{A}$ corresponding to $\lambda$.
Non-homogeneous
We want to solve ODE systems of the form

$$\mathbf{x}' = \mathbf{P}(t) \mathbf{x} + \mathbf{g}(t)$$

which has a general solution of the form $\mathbf{x} = \mathbf{x}_h + \mathbf{x}_p$, where $\mathbf{x}_h$ is the general solution to the corresponding homogeneous ODE system and $\mathbf{x}_p$ is a particular solution.
We discuss three methods of solving this:
Diagonalisation
We assume the corresponding homogeneous system has been solved, with eigenvalues $\lambda_1, \dots, \lambda_n$ and eigenvectors $\boldsymbol{\xi}^{(1)}, \dots, \boldsymbol{\xi}^{(n)}$, and then introduce the change of variables $\mathbf{x} = \mathbf{T} \mathbf{y}$, where the columns of $\mathbf{T}$ are the eigenvectors:

$$\mathbf{y}' = \mathbf{D} \mathbf{y} + \mathbf{T}^{-1} \mathbf{g}(t)$$

which gives us a system of $n$ decoupled equations:

$$y_k' = \lambda_k y_k + h_k(t), \qquad k = 1, \dots, n$$

and finally we transform back to the original variables: $\mathbf{x} = \mathbf{T} \mathbf{y}$.
- Example
Consider
Since the system is linear and non-homogeneous, the general solution must be of the form
where
denotes the particular solution.
The homogeneous solution can be found as follows:
When
: if
, hence we choose
When
: if
, hence we choose
Thus, the general homogeneous solution is:
To find the particular solution, we make the change of variables
:
With the new variables the ODE looks as follows:
Which in terms of
is
Which, by the use of undetermined coefficients, we can solve as
Doing "some" algebra, we eventually get
Then, finally, substituting back into the equation for
, and we would get the final result.
Undetermined coefficients
This method only works if the coefficient matrix $\mathbf{P}(t) = \mathbf{A}$ for some constant matrix $\mathbf{A}$, and the components of $\mathbf{g}(t)$ are polynomial, exponential, or sinusoidal functions, or sums or products of these.
Variation of parameters
This is the most general way of doing this.
Assume that a fundamental matrix $\boldsymbol{\Psi}(t)$ for the corresponding homogeneous system

$$\mathbf{x}' = \mathbf{P}(t) \mathbf{x}$$

has been found. We can then use the method of variation of parameters to construct a particular solution, and hence the general solution, of the non-homogeneous system.
The general solution to the homogeneous system is $\boldsymbol{\Psi}(t) \mathbf{c}$; we seek a solution to the non-homogeneous system by replacing the constant vector $\mathbf{c}$ by a vector function $\mathbf{u}(t)$. Thus we assume that

$$\mathbf{x} = \boldsymbol{\Psi}(t) \mathbf{u}(t)$$

is a solution, where $\mathbf{u}(t)$ is a vector function to be found.
Upon differentiating $\mathbf{x}$ and substituting into the ODE we get

$$\boldsymbol{\Psi}'(t) \mathbf{u}(t) + \boldsymbol{\Psi}(t) \mathbf{u}'(t) = \mathbf{P}(t) \boldsymbol{\Psi}(t) \mathbf{u}(t) + \mathbf{g}(t)$$

Since $\boldsymbol{\Psi}$ is a fundamental matrix, $\boldsymbol{\Psi}'(t) = \mathbf{P}(t) \boldsymbol{\Psi}(t)$; hence the above expression reduces to

$$\boldsymbol{\Psi}(t) \mathbf{u}'(t) = \mathbf{g}(t)$$

$\boldsymbol{\Psi}(t)$ is nonsingular (i.e. invertible) on any interval where $\mathbf{P}$ is continuous; hence $\boldsymbol{\Psi}^{-1}(t)$ exists, and therefore

$$\mathbf{u}'(t) = \boldsymbol{\Psi}^{-1}(t) \mathbf{g}(t)$$

Thus for $\mathbf{u}(t)$ we can select any vector from the class of vectors which satisfy the previous equation. These vectors are determined only up to an arbitrary additive constant vector; therefore, we denote $\mathbf{u}(t)$ by

$$\mathbf{u}(t) = \int \boldsymbol{\Psi}^{-1}(t) \mathbf{g}(t)\, dt + \mathbf{c}$$

where the constant vector $\mathbf{c}$ is arbitrary.
Finally, this gives us the general solution for a non-homogeneous system:

$$\mathbf{x} = \boldsymbol{\Psi}(t) \mathbf{c} + \boldsymbol{\Psi}(t) \int \boldsymbol{\Psi}^{-1}(t) \mathbf{g}(t)\, dt$$
Dynamical systems
In $\mathbb{R}^n$, an arbitrary autonomous dynamical system can be written as

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$$

for some smooth $\mathbf{f}$; for the 2D case:

$$\frac{dx}{dt} = F(x, y), \qquad \frac{dy}{dt} = G(x, y)$$

which in matrix notation we write $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$.
Notation
- $\mathbf{x}^0$ denotes a critical point
- $\mathbf{f}$ denotes the RHS of the autonomous system
- $\boldsymbol{\phi}(t)$ denotes a specific solution
Theorems
The critical point $\mathbf{x} = \mathbf{0}$ of the linear system

$$\mathbf{x}' = \mathbf{A} \mathbf{x}$$

where we suppose $\mathbf{A}$ has eigenvalues $\lambda_1, \lambda_2$, is:
- asymptotically stable if $\lambda_1, \lambda_2$ are real and negative or have negative real part
- stable, but not asymptotically stable, if $\lambda_1, \lambda_2$ are pure imaginary
- unstable if $\lambda_1, \lambda_2$ are real and either is positive, or if they have positive real part
Stability
The points $\mathbf{x}^0$ such that $\mathbf{f}(\mathbf{x}^0) = \mathbf{0}$ are called critical points of the autonomous system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$.
Let $\mathbf{x}^0$ be a critical point of the system.
$\mathbf{x}^0$ is said to be stable if for any $\varepsilon > 0$ there exists a $\delta > 0$ such that every solution $\boldsymbol{\phi}(t)$ with $\| \boldsymbol{\phi}(0) - \mathbf{x}^0 \| < \delta$ satisfies $\| \boldsymbol{\phi}(t) - \mathbf{x}^0 \| < \varepsilon$ for all $t \ge 0$.
I.e. if for some solution $\boldsymbol{\phi}(t)$ we parametrise it such that it starts within $\delta$ of $\mathbf{x}^0$, and it then stays close to $\mathbf{x}^0$ for all $t \ge 0$, we say the critical point is stable.
$\mathbf{x}^0$ is said to be asymptotically stable if it's stable and there exists a $\delta_0 > 0$ s.t. if a solution $\boldsymbol{\phi}(t)$ satisfies $\| \boldsymbol{\phi}(0) - \mathbf{x}^0 \| < \delta_0$, then

$$\lim_{t \to \infty} \boldsymbol{\phi}(t) = \mathbf{x}^0$$

i.e. if we start near $\mathbf{x}^0$, the limiting behaviour converges to $\mathbf{x}^0$. Note that this is stronger than just being stable.
A critical point which is NOT stable is, of course, unstable.
Intuitively what we're saying here is that:
- for any $\varepsilon > 0$ we can find a trajectory (i.e. a solution with a specific initial condition, thus we can find some initial condition) such that the entire trajectory stays within $\varepsilon$ of the critical point for all $t \ge 0$.
When we say a critical point is isolated, we mean that there are no other critical points "nearby".
In the case of a linear system $\mathbf{x}' = \mathbf{A} \mathbf{x}$, we can check this by solving the equation $\mathbf{A} \mathbf{x} = \mathbf{0}$: if the solution set is a line or a plane (i.e. $\mathbf{A}$ is singular), rather than the single vector $\mathbf{x} = \mathbf{0}$, the critical point is not isolated.
Suppose that we have the system

$$\mathbf{x}' = \mathbf{A} \mathbf{x} + \mathbf{g}(\mathbf{x})$$

and that $\mathbf{x} = \mathbf{0}$ is an isolated critical point of the system. We also assume that $\det \mathbf{A} \neq 0$, so that $\mathbf{x} = \mathbf{0}$ is also an isolated critical point of the linear system $\mathbf{x}' = \mathbf{A} \mathbf{x}$. If

$$\lim_{\mathbf{x} \to \mathbf{0}} \frac{\| \mathbf{g}(\mathbf{x}) \|}{\| \mathbf{x} \|} = 0$$

that is, $\| \mathbf{g}(\mathbf{x}) \|$ is small in comparison to $\| \mathbf{x} \|$ near the critical point $\mathbf{x} = \mathbf{0}$, we say the system is a locally linear system in the neighborhood of the critical point $\mathbf{x} = \mathbf{0}$.
Linearized system
In the case where we have a locally linear system, we can approximate the system near isolated critical points by instead considering the Jacobian of the system. That is, if we have the dynamical system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$ with a critical point at $\mathbf{x}^0$, we can use the linear approximation of $\mathbf{f}$ near $\mathbf{x}^0$:

$$\mathbf{f}(\mathbf{x}) \approx \mathbf{f}(\mathbf{x}^0) + \mathbf{J}(\mathbf{x}^0) (\mathbf{x} - \mathbf{x}^0) = \mathbf{J}(\mathbf{x}^0) (\mathbf{x} - \mathbf{x}^0)$$

where $\mathbf{J}$ denotes the Jacobian of $\mathbf{f}$, which is the matrix of partial derivatives $J_{ij} = \partial f_i / \partial x_j$. Substituting back into the ODE and letting $\mathbf{u} = \mathbf{x} - \mathbf{x}^0$, we can rewrite this as

$$\dot{\mathbf{u}} = \mathbf{J}(\mathbf{x}^0) \mathbf{u}$$

which, since $\mathbf{J}(\mathbf{x}^0)$ is constant given $\mathbf{x}^0$, gives us a linear ODE wrt. $\mathbf{u}$, hence the name linearization of the dynamical system.
Hopefully this is a simpler expression than what we started out with.
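A minimal numerical sketch of this linearization (the damped pendulum $x' = y$, $y' = -\sin x - 0.5 y$ and the finite-difference Jacobian are my own example choices):

```python
# Sketch: linearize a damped pendulum at the critical point (0, 0) and
# classify it by the eigenvalues of the Jacobian.
import numpy as np

def f(v):
    x, y = v
    return np.array([y, -np.sin(x) - 0.5 * y])

def jacobian(f, v, h=1e-6):
    # forward-difference approximation, one column per coordinate
    n = len(v)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(v + e) - f(v)) / h
    return J

J = jacobian(f, np.array([0.0, 0.0]))
print(np.linalg.eigvals(J))   # complex with negative real part -> spiral sink
```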
General procedure
- Obtain critical points, i.e. solve $\mathbf{f}(\mathbf{x}) = \mathbf{0}$
- If non-linear and locally linear, compute the Jacobian and use it as a linear approximation to the non-linear system; otherwise: do nothing.
- Inspect the eigenvalues of the (locally) linear system to determine the behavior of the system
- Compute eigenvalues and eigenvectors to obtain the solution of the (locally) linear system
- Consider the asymptotic behavior of the different terms in the general solution as $t \to \infty$,
which provides insight into the phase diagram / surface. If non-linear, first do this for the linear approximation, then for the non-linear system.
Points of interest
The following section is a very short summary of Ch. 9.1 [Boyce, 2013]. Look here for a deeper look into what's going on, AND some nice pictures!
Here we consider 2D systems of 1st order linear homogeneous equations with constant coefficients:

$$\mathbf{x}' = \mathbf{A} \mathbf{x}$$

where $\mathbf{A}$ is a constant $2 \times 2$ matrix and $\mathbf{x}$ is two-dimensional. Suppose $\lambda_1, \lambda_2$ are the eigenvalues of the matrix $\mathbf{A}$, which give us the exponentials $e^{\lambda_1 t}, e^{\lambda_2 t}$ for the solution $\mathbf{x}(t)$.
We have multiple different cases which we can analyse:
- $\lambda_1, \lambda_2$ real, unequal, of the same sign:
  - Exponential decay ($\lambda_1 < \lambda_2 < 0$): node or nodal sink
  - Exponential growth ($0 < \lambda_2 < \lambda_1$): node or nodal source
- $\lambda_1, \lambda_2$ real, of opposite signs: saddle point
- $\lambda_1 = \lambda_2$ (real and equal):
  - two independent eigenvectors: proper node (sometimes a star point)
  - one independent eigenvector: improper or degenerate node
- $\lambda_{1,2} = a \pm i b$ with $a \neq 0$: spiral point
  - spiral sink refers to a decaying spiral point ($a < 0$)
  - spiral source refers to a growing spiral point ($a > 0$)
- $\lambda_{1,2} = \pm i b$ pure imaginary: center
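A small sketch that mechanises the table above (my own helper, ignoring the borderline repeated/zero-eigenvalue cases):

```python
# Sketch: classify the critical point of x' = A x from the eigenvalues of A
# (2D case); repeated and zero eigenvalues are not handled here.
import numpy as np

def classify(A):
    l1, l2 = np.linalg.eigvals(A)
    if abs(l1.imag) > 1e-12:                  # complex pair a +- ib
        if abs(l1.real) < 1e-12:
            return "center"
        return "spiral sink" if l1.real < 0 else "spiral source"
    l1, l2 = sorted((l1.real, l2.real))
    if l1 < 0 < l2:
        return "saddle point"
    return "nodal sink" if l2 < 0 else "nodal source"

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
print(classify(np.array([[-2.0, 0.0], [0.0, -1.0]])))  # nodal sink
```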
Basin of attraction
This denotes the set of all points $P$ in the xy-plane s.t. the trajectory passing through $P$ approaches the critical point as $t \to \infty$.
A trajectory that bounds a basin of attraction is called a separatrix.
Limit cycle
Limit cycles are periodic solutions such that at least one other non-closed trajectory asymptotes to them as $t \to \infty$ and / or $t \to -\infty$.
Let $F$ and $G$ have continuous first partial derivatives in a simply connected domain $D$.
If $F_x + G_y$ has the same sign in the entire $D$, then there are no closed trajectories in $D$.
Consider the system

$$\frac{dx}{dt} = F(x, y), \qquad \frac{dy}{dt} = G(x, y)$$

Let $F$ and $G$ have continuous first partial derivatives in a domain $D$.
Let $D_1$ be a bounded subdomain of $D$ and let $R = D_1 \cup \partial D_1$.
Suppose $R$ contains no critical points.
If there exists a trajectory that enters $R$ and remains in $R$ for all later times, then either
- the solution is periodic (closed trajectory)
- OR it spirals towards a closed trajectory
Hence, there exists a closed trajectory in $R$.
Where
- $\phi$ denotes a particular trajectory
- $\partial D_1$ denotes the boundary of $D_1$
Useful stuff
- Example
Say we have the following system:
where
, has periodic solutions corresponding to the zeroes of
.
What is the direction of the motion on the closed trajectories in the phase plane?
Using the identities above, we can rewrite the system as
Which tells us that the trajectories are moving in a counter-clockwise direction.
Lyapunov's Second Method
The goal is to obtain information about the stability or instability of the system without explicitly obtaining the solutions of the system. This method allows us to do exactly that through the construction of a suitable auxiliary function, called the Lyapunov function.
For the 2D-case, we consider such a function of the following form:

$$\dot{V}(x, y) = V_x(x, y) F(x, y) + V_y(x, y) G(x, y)$$

where $F$ and $G$ are as given in the autonomous system definition.
We choose the notation above because $\dot{V}$ can be identified as the rate of change of $V$ along the trajectory of the system that passes through the point $(x, y)$. That is, if $x = \phi(t), y = \psi(t)$ is a solution of the system, then

$$\frac{dV}{dt} = V_x \frac{dx}{dt} + V_y \frac{dy}{dt} = V_x F + V_y G$$
Suppose that the autonomous system has an isolated critical point at the origin.
If there exists a continuous and positive definite function $V$ which also has continuous first partial derivatives:
- if $\dot{V}$ is negative definite on some domain $D$ containing the origin, then the origin is an asymptotically stable critical point
- if $\dot{V}$ is negative semidefinite, then the origin is a stable critical point
Suppose that the autonomous system has an isolated critical point at the origin.
Let $V$ be a function that is continuous and has continuous first partial derivatives.
Suppose that $V(0, 0) = 0$ and that in every neighborhood of the origin there is at least one point at which $V$ is positive (negative). If there exists a domain $D$ containing the origin such that the function $\dot{V}$ is positive definite (negative definite) on $D$, then the origin is an unstable critical point.
Let the origin be an isolated critical point of the autonomous system. Let the function $V$ be continuous and have continuous first partial derivatives. If there is a bounded domain $D_K$ containing the origin, in which $V < K$ for some positive $K$, $V$ is positive definite, and $\dot{V}$ is negative definite, then every solution of the system that starts at a point in $D_K$ approaches the origin as $t \to \infty$.
We're not told how to construct such a function. In the case of a physical system, the actual energy function would work, but in general it's more trial-and-error.
Lyapunov function
Lyapunov functions are scalar functions which can be used to prove stability of an equilibrium of an ODE.
Lyapunov's Second Method (alternative)
Notation
Definitions
Let $V : \mathbb{R}^n \to \mathbb{R}$ be a continuous scalar function.
$V$ is a Lyapunov-candidate-function if it's a locally positive-definite function, i.e.

$$V(\mathbf{0}) = 0, \qquad V(\mathbf{x}) > 0 \quad \text{for all } \mathbf{x} \in U \setminus \{\mathbf{0}\}$$

with $U$ being a neighborhood region around $\mathbf{x} = \mathbf{0}$.
Further, let
be a critical or equilibrium point of the autonomous system
And let
Then, we say that the Lyapunov-candidate-function
is a Lyapunov function of the system if and only if
where
denotes a specific trajectory of the system.
For a given autonomous system, if there exists a Lyapunov function $V$ in some neighborhood $U$ of some critical / equilibrium point, then the system is stable (in a Lyapunov sense).
Further, if
is negative-definite, i.e. we have a strict inequality
then the system is asymptotically stable about the critical / equilibrium point
.
When talking about a critical / equilibrium point in the context of a Lyapunov function, we have to make sure that the critical point corresponds to $\mathbf{x} = \mathbf{0}$. This can easily be achieved by "centering" the system about the original critical point $\mathbf{x}^*$, i.e. creating some new system in $\mathbf{y} = \mathbf{x} - \mathbf{x}^*$, which will then have the corresponding critical point at $\mathbf{y} = \mathbf{0}$.
Observe that this in no way affects the qualitative analysis of the critical point.
Let
be a continuous scalar function.
is a Lyapunov-candidate-function if it's a locally positive-definite function, i.e.
with
being a neighborhood region around
.
Let
be a critical / equilibrium point of the autonomous system
And let
be the time-derivative of the Lyapunov-candidate-function
.
Let
be a locally positive definite function (a candidate Lyapunov function) and let
be its derivative wrt. time along the trajectories of the system.
If
is locally negative semi-definite, then
is called a Lyapunov function of the system.
If there exists a Lyapunov function
of a system, then
is a stable equilibrium point in the sense of Lyapunov.
If in addition
and
for some
, i.e.
is locally negative definite , then
is asymptotically stable.
Remember, we can always re-parametrize a system to be centered around a critical point, and come to an equivalent analysis of the system about the critical point, since we're simply "adding" a constant.
Proof
First we want to prove that if
is a Lyapunov function then
is a stable critical point.
Suppose
is given. We need to find
such that for all
, it follows that
.
Then let
, where
is the radius of a ball describing the "neighborhood" where we know that
is a Lyapunov candidate function, and define
Since
is continuous, the above
is well-defined and positive.
Choose
satisfying
such that for all
. Such a choise is always possible, again because of the continuity of
.
Now, consider any
such that
and thus
and let
be the resulting trajectory. Observe that
is non-increasing, that is
which results in
.
We will now show that this implies that
, for a trajectory defined such that
and thus
as described above.
Suppose there exists
such that
, then since we're assuming
is such that
as described above, then clearly we also have
Further, by continuity (more specifically, the IVT) we must have that at an earlier time
we had
But since
, we clearly have
due to
. But the above implies that
Which is a contradiction, since
implying that
.
Now we prove the theorem for asymptotically stable solutions! As stated in the theorem, we're now assuming
to be locally negative-definite, that is,
We then need to show that
which by continuity of
, implies that
.
Since
is strictly decreasing and
we know that
And we then want to show that
. We do this by contradiction.
Suppose that
. Let the set
be defined as
and let
be a ball inside
of radius
, that is,
Suppose
is a trajectory of the system that starts at
.
We know that
is decreasing monotonically to
and
for all
. Therefore,
, since
which is defined as all elements for which
(and to drive the point home, we just established that the
).
In the first part of the proof, we established that
We can define the largest derivative of
as
Clearly,
since
is locally negative-definite. Observe that,
which implies that
, resulting in a contradiction established by the
, hence
.
Remember that in the last step, where we say suppose "there exists a time such that ...", we've already assumed that the initial point of our trajectory, i.e. at $t = 0$, was within a distance $\delta$ of the critical point! Thus the trajectory would first have to cross the "boundary" defined by the corresponding level set of $V$.
Therefore, intuitively, we can think of the Lyapunov function as providing a "boundary" which the solution will not cross, given that it starts within this "boundary"!
Side-notes
- When reading about this subject, people will often refer to Lyapunov stable or Lyapunov asymptotically stable, which is exactly the same as what we defined to be stable solutions.
Examples of finding Lyapunov functions
- Quadratic
The function

$$V(x, y) = a x^2 + b x y + c y^2$$

is positive definite if and only if

$$a > 0 \quad \text{and} \quad 4ac - b^2 > 0$$

and is negative definite if and only if

$$a < 0 \quad \text{and} \quad 4ac - b^2 > 0$$
- Example problem
Show that the critical point
of the autonomous system
is asymptotically stable.
We try to construct a Lyapunov function of the form
then
thus,
We observe that if we choose
, and
and
to be positive numbers, then
is negative-definite and
is positive-definite. Hence, the critical point
is asymptotically stable.
Partial Differential Equations
Consider the differential equation
with the boundary conditions
We call this a two-point boundary value problem.
This is in contrast to what we're used to, namely initial value problems, where the restrictions are on the initial value of the solution rather than on its values at the boundaries of the domain.
Eigenvalues and eigenfunctions
Consider the problem consisting of the differential equation
together with the boundary conditions
By extension of terminology from linear algebra, we call each value of the constant $\lambda$ for which the problem has a nontrivial (not everywhere zero) solution an eigenvalue, and the corresponding solution an eigenfunction.
Specific functions
Heat equation
Models heat distribution in a rod of length $L$:

$$\alpha^2 u_{xx} = u_t, \qquad 0 < x < L,\ t > 0$$

with boundary conditions:

$$u(0, t) = u(L, t) = 0, \qquad t > 0$$

and initial condition:

$$u(x, 0) = f(x), \qquad 0 \le x \le L$$
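A minimal sketch of the separated-variables solution, evaluated numerically; the initial condition $f(x) = x(L - x)$ and all parameter values are my own choices:

```python
# Sketch: truncated Fourier sine-series solution of a^2 u_xx = u_t with
# u(0,t) = u(L,t) = 0 and f(x) = x (L - x).
import numpy as np
from scipy.integrate import quad

L, a, terms = 1.0, 1.0, 40

def c(n):
    # c_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx
    return 2.0 / L * quad(lambda s: s * (L - s) * np.sin(n * np.pi * s / L),
                          0.0, L)[0]

def u(x, t):
    return sum(c(n) * np.sin(n * np.pi * x / L)
               * np.exp(-((n * np.pi * a / L) ** 2) * t)
               for n in range(1, terms + 1))

x = np.linspace(0.0, L, 5)
print(u(x, 0.0))   # ~ x (L - x): the series reproduces f at t = 0
print(u(x, 0.1))   # decays towards 0 as t grows
```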
Wave equation
Models the vertical displacement $u(x, t)$ wrt. horizontal position $x$ and time $t$:

$$a^2 u_{xx} = u_{tt}$$

where $a^2 = T / \rho$, $T$ being the tension in the string, and $\rho$ being the mass per unit length of the string material.
It is reasonable to have the boundary conditions:

$$u(0, t) = u(L, t) = 0, \qquad t \ge 0$$

i.e. the ends of the string are fixed.
And the initial conditions:

$$u(x, 0) = f(x), \qquad u_t(x, 0) = g(x), \qquad 0 \le x \le L$$

i.e. we have some initial displacement $f(x)$ and initial (vertical) velocity $g(x)$. To keep our boundary conditions wrt. $x$ specified above consistent:

$$f(0) = f(L) = 0, \qquad g(0) = g(L) = 0$$
Laplace equation
$$u_{xx} + u_{yy} = 0 \quad \text{or} \quad \nabla^2 u = 0$$
Since we're dealing with multi-dimensional space, we have a couple of different ways to specify the boundary conditions:
- Dirichlet problem : the boundary of the surface takes on specific values, i.e. we have a function
which specifies the values on the "edges" of the surface - Neumann problem : the values of the normal derivative are prescribed on the boundary
We don't have any initial conditions in this case, as there's no time-dependence.
- Laplace's equation in cylindrical coordinates
Let
Laplace's equation becomes:
Consider BCs:
Using separation of variables:
we can rewrite as:
Using
,
, hence
Thus, if
, solutions are periodic.
The radial equation can be written in the SL-form
Thus,
: vanish at the origin 
: unbounded as 
Finally, making the change of variables
and writing
, we have
Which is the Bessel equation, and we have the general solution:
Imposing the boundary conditions:

as
needs to be bounded
and
for 
as
(due to log)
Hence, we require
Plotting these Bessel functions, we conclude that there exists a countably infinite number of eigenvalues.
We now superpose the solutions:
to write the general solution as
The constants
and
are found from the remaining BC,
by projection on
and
.
Orthogonality: the Bessel functions
satisfy
Which has the Sturm-Liouville form, with
provided that
The functions
satisfy the above at
, where the contribution at
vanishes,
and at
we have
since
and
and
are bounded.
We conclude
for
. We can therefore identify the constants
and
by projection:
Boundary Value Problems and Sturm-Liouville Theory
Notation
![$L[y] = - [p(x) y']' + q(x) y$](../../assets/latex/differential_equations_ffa3173a593a95d2d82ccb2156f377dee8b5f737.png)
are the ordered eigenvalues
are the corresponding normalized eigenfunctions
and equivalent for 
Sturm-Liouville BVP
Equation of the form:
OR equivalently
with boundary conditions:
where we require the boundary value problem to be regular, i.e.:
and
are continuous on the interval ![$[0, 1]$](../../assets/latex/differential_equations_68c8fa38d960e53d4308cbf1e65d04c66a554817.png)
and
for all ![$x \in [0, 1]$](../../assets/latex/differential_equations_1abcf6a6ba0996351c652ec55b2a137f25774cfc.png)
and the boundary conditions to be separated, i.e.:
- each involves only one of the boundary points
- these are the most general boundary conditions you can have for a 2nd order differential equation
Lagrange's Identity
Let $u$ and $v$ be functions having continuous second derivatives on the interval $[0, 1]$; then

$$\int_0^1 \big( L[u] v - u L[v] \big)\, dx = \Big[ -p(x) \big( u'(x) v(x) - u(x) v'(x) \big) \Big]_0^1$$

where $L$ is given by $L[y] = -[p(x) y']' + q(x) y$.
In the case of a Sturm-Liouville problem, we observe that Lagrange's identity gives us the boundary term above, which we can expand into contributions at $x = 1$ and $x = 0$.
Now, using the boundary conditions from the Sturm-Liouville problem, and assuming the coefficients of $u'$ and $v'$ in the boundary conditions are nonzero, both boundary contributions cancel and we get

$$\int_0^1 \big( L[u] v - u L[v] \big)\, dx = 0$$

And if either of those coefficients is zero, then $u$ and $v$ themselves vanish at that endpoint, and the statement still holds.
I.e. Lagrange's identity under the Sturm-Liouville boundary conditions equals zero.
If
are solutions to a boundary value problem of the form def:sturm-lioville-problem, then
or in the form of an inner product
If a singular boundary value problem of the form def:sturm-lioville-problem satisfies thm:lagranges-identity-sturm-liouville-boundary-conditions then we say the problem is self-adjoint. More specifically,
for an n-th order operator
subject to
linear homogeneous boundary conditions at the endpoints is self-adjoint provided that
E.g. for 4th order we can have
plus suitable BCs.
Most results from the 2nd order problems extends to the n-th order problems.
Some theorems
All the eigenvalues of the Sturm-Liouville problem are real.
If $\phi_1$ and $\phi_2$ are two eigenfunctions of the Sturm-Liouville problem corresponding to the eigenvalues $\lambda_1$ and $\lambda_2$, respectively, and if $\lambda_1 \neq \lambda_2$, then

$$\int_0^1 r(x) \phi_1(x) \phi_2(x)\, dx = 0$$

That is, the eigenfunctions $\phi_1, \phi_2$ are orthogonal to each other wrt. the weight function $r(x)$.
We note that
and
, with
, satisfy the differential equations
and
respectively. If we let
and
, and substitute
and
into Lagrange's Identity with Sturm-Liouville boundary conditions, we get
which implies that
where
represents the inner-product wrt.
.
The above theorem has the further consequence that, if some function $f$ is continuous on $[0, 1]$, then we can expand $f$ as a linear combination of the eigenfunctions of the Sturm-Liouville equation!
Let
be the normalized eigenfunctions of the Sturm-Liouville problem.
Let
and
be piecewise continuous on
. Then the series
whose coefficients
are given by
converges to
at each point in the open interval
.
The eigenvalues of the Sturm-Liouville problem are all simple; that is, to each eigenvalue there corresponds only one linearly independent eigenfunction.
Further, the eigenvalues form an infinite sequence and can be ordered according to increasing magnitude so that
Moreover,
as
.
Non-homogenous Sturm-Liouville BVP
"Derivation"
Consider the BVP consisting of the non-homogeneous differential equation
where
is a given constant and
is a given function on
, and the boundary conditions are as in the homogeneous Sturm-Liouville problem.
To find a solution to this non-homogeneous case, we're going to start by obtaining the solution for the corresponding homogeneous system, i.e.
where we let
be the eigenvalues and
be the eigenfunctions of this differential equation. Suppose
i.e. we write the solution as a linear combination of the eigenfunctions.
In the homogenous case we would now obtain the coefficients
by
which we're allowed to do as a result of the orthogonality of the eigenfunctions wrt.
.
The problem here is that we actually don't know the eigenfunctions
yet, hence we need a different approach.
We now notice that such an expansion always satisfies the boundary conditions of the problem, since each eigenfunction does! Therefore we only need to determine the coefficients such that the differential equation is also satisfied. We start by substituting our expansion of the solution into the LHS of the differential equation and simplifying, using that each eigenfunction satisfies the homogeneous SL-problem; here we have assumed that we can interchange the operations of summation and differentiation.
Observing that the weight function
occurs in all terms except
. We therefore decide to rewrite the nonhomogenous term
as
, i.e.
If the function
satisfies the conditions in thm:boyce-11.2.4, we can expand it in the eigenfunctions:
where
Now, substituting this back into our differential equation we get
Dividing by
, we get
Collecting terms of the same
we get
Now, for this to be true for all
, we need each of the
terms to equal zero. To make our life super-simple, we assume
for all
, then
and thus,
And remember, we already have an expression for the expansion coefficients of the non-homogeneous term, and we know the eigenvalues and eigenfunctions from the corresponding homogeneous problem.
If
for some
then we have two possible cases:
and thus there exists no solution of the form we just described
in which case
, thus we can only solve the problem if
is orthogonal to
; in this case we have an infinite number of solutions since the m-th coefficient can be arbitrary.
Summary
with boundary conditions:
Expanding
, which means that we can rewrite the diff. eqn. as
We have the solution
where
and
are the eigenvalues and eigenfunctions of the corresponding homogeneous problem, and
If
for some
we have:
in which case there exists no solution of the form described above
in which case
; in this case we have an infinite number of solutions since the m-th coefficient can be arbitrary
Where we have made the following assumptions:
- Can rewrite the nonhomogenous part as
![$f(x) = r(x) \ [ f(x) / r(x) ]$](../../assets/latex/differential_equations_64231872871ce79f27540405ec60e365398313d1.png)
- Can expand
using the eigenfunctions
wrt.
, which requires
and
to be continuous on the domain ![$x \in [0, 1]$](../../assets/latex/differential_equations_1abcf6a6ba0996351c652ec55b2a137f25774cfc.png)
Inhomogenous BCs Sturm-Liouville BVP
Derivation (sort-of-ish)
Consider we have at our hands a Sturm-Liouville problem of the sort:
for any
, satisfying the boundary conditions
This is just a specific Sturm-Liouville problem we will use to motivate how to handle these inhomogenous boundary-conditions.
Suppose we've already obtained the solutions for the above SL-problem using separation of variables, and ended up with:
as the eigenfunctions, with the general solution:
where
such that we satisfy the
BC given above.
Then, suddenly, the examiner desires you to solve the system for a slightly different set of BCs:
What an arse, right?! Nonetheless, motivated by the potential of showing that you are not fooled by such simple trickeries, you assume a solution of the form:
where
is just as you found previously for the homogeneous BCs.
Eigenfunctions (i.e. a countably infinite number of orthogonal solutions) arise only in the case of homogeneous BCs, as we then still have Lagrange's identity being satisfied. For inhomogeneous BCs it's not satisfied, and we're no longer working with a Sturm-Liouville problem.
Nonetheless, we're still solving differential equations, and so we still have the principle of superposition available to work with.
Why would you do that? Well, setting up the new BCs:
Now, we can then quickly observe that we have some quite major simplifications:
If we then go on to use separation of variables for
too, we have
Substituting into the simplified BCs we just obtained:
Here it becomes quite apparent that this can only work if
, and if we simply include this constant factor into our
, we're left with the satisfactory simple expressions:
This is neat and all, but we're not quite ready to throw gang-signs in front of the examiner in celebration quite yet. Our expression for the additional
has now been reduced to
Substituting this into the differential equation (as we still of course have to satisfy this), we get
where the t-dependent factor has vanished due to our previous reasoning (magic!). Recalling that
, the general solution is simply
Substituting into the BCs from before:
where the last BC
gives us
thus,
Great! We still have to satisfy the initial-values, i.e. the BCs for
. We observe that they have now become:
where the last t-dependent BC stays the same due to
being independent of
. For the first BC, the implication arises from the fact that we cannot alter
any further to accommodate the change to the complete solution, hence we alter
. With this alteration, we end up with the complete and final solution to the inhomogenous boundary-condition problem
where
At this point it's fine to throw gang-signs and walk out.
Observe that
is kept outside of the sum. It's a tiny tid-bit that might be forgotten in all of this mess: we're adding a function to the general solution to the "complete" PDE, that is
NOT something like this
.
I sort of did that right away... but quickly realized I was being, uhm, not-so-clever. Though I guess we could actually do it if we knew the function in question
to be square-integrable! But yeah, I was still being not-so-clever.
Example
Suppose we we're presented with a modified wave equation of the following form:
for any
, satisfying the boundary conditions
and we found the general solution to be
where
is as given above.
Now, one might wonder: what would the solution be if we suddenly decided on a set of non-homogeneous boundary conditions:
To deal with non-homogenous boundary conditions, we look for a time independent solution
solving the boundary problem
Once
is known, we determine the solution to the modified wave equation using
where
satisfies the same modified equation with different initial conditions, but homogeneous boundary conditions. Indeed,
To determine
, we solve the second order ODE with constant coefficients. Since
, its general solution is given by
We fix the arbitrary constants using the given boundary conditions
Thus, the general solution is given by
By construction, the remaining
is as in the previous section, but with the coefficients
satisfying
Thus, the very final solution is given by
with
as given above.
Singular Sturm-Liouville Boundary Value Problems
We use the term singular Sturm-Liouville problem to refer to a certain class of boundary value problems for the differential equation

$$L[y] = \lambda r(x) y, \qquad 0 < x < 1$$

in which the functions $p$, $q$ and $r$ satisfy the same conditions as in the "regular" Sturm-Liouville problem on the open interval $(0, 1)$, but at least one of the functions fails to satisfy them at one or both of the boundary points.
Discussion
Here we're curious about the following questions:
- What type of boundary conditions can be allowed in a singular Sturm-Liouville problem?
- What's the same as in the regular Sturm-Liouville problem?
- Are the eigenvalues real?
- Are the eigenfunctions orthogonal?
- Can a given function be expanded as a series of eigenfunctions?
These questions are answered by studying Lagrange's identity again.
To be concrete we assume
is a singular point while
is not.
Since the boundary value problem is singular at
we choose
and consider the improper integral
If we assume the boundary conditions at
are still satisfied, then we end up with
Taking the limit as
:
Therefore, if we have
we have the same properties as we did for the regular Sturm-Liouville problem!
General
In fact, it turns out that the Sturm-Liouville operator
on the space of functions satisfying the boundary conditions of the singular Sturm-Liouville problem
, is a self-adjoint operator, i.e. it satisfies the Lagrange's identity on this space!
And this, as we have seen indications of above, is a sufficient property for us to construct a space of solutions to the differential equation with these eigenfunctions as a basis.
A bit more information
These lecture notes provide a bit more general approach to the case of singular Sturm-Liouville problems: http://www.iitg.ernet.in/physics/fac/charu/courses/ph402/SturmLiouville.pdf.
Frobenius method
Frobenius method refers to the method where we assume the solution of a differential equation can be expanded as a (generalized) power series:

$$y(x) = \sum_{n=0}^{\infty} a_n x^{n + r}$$

Substituting this into the differential equation, we can quite often derive recurrence relations by grouping powers of $x$ and solving for the coefficients $a_n$.
Honours Differential Equations
Equations / Theorems
Reduction of order
Go-to way
If we have a repeated root $r$ of the characteristic equation, then the general solution is

$$y(t) = c_1 e^{r t} + c_2 t e^{r t}$$

where we've taken the linear combination of the "original" solution $e^{rt}$ and the one obtained from reduction of order, $t e^{rt}$.
General
If we know a solution $y_1$ to an ODE, we can reduce the order by 1 by assuming another solution $y_2$ of the form

$$y_2(t) = v(t)\, y_1(t)$$

where $v(t)$ is some arbitrary function which we can find from the linear system obtained from $y_2$, $y_2'$ and $y_2''$. Solving this system gives us (in the repeated-root case)

$$y_2(t) = (c_1 + c_2 t)\, y_1(t)$$

i.e. we multiply our first solution by a first-order polynomial to obtain a second solution.
Integrating factor
Assume $y = e^{rt}$ to be a solution; then in a 2nd order homogeneous ODE with constant coefficients we get the characteristic equation

$$a r^2 + b r + c = 0$$

which means we can obtain a solution by simply solving this quadratic.
This can also be performed for higher order homogeneous ODEs.
Repeated roots
What if we have a root of algebraic multiplicity greater than one?
If the algebraic multiplicity is 2, then we can use reduction of order and multiply the first solution for this root by $t$.
Factorization of higher order polynomials (Long division of polynomials)
Suppose we want to compute the following
- Divide the highest order term of the polynomial by the highest order term of the divisor, i.e. divide
by
, which gives 
- Multiply the
from the above through by the divisor
, thus giving us 
- We subtract this from the corresponding terms of the polynomial, i.e.

- Divide this remainder by the highest order term of the divisor again, repeating this process until we can no longer divide by the highest order term of the divisor.
Existence and uniqueness theorem
Consider the IVP
where
,
, and
are continuous on an open interval
that contains the point
. There is exactly one solution
of this problem, and the solution exists throughout the interval
.
Exact differential equation
Let the functions
and
, where subscripts denote the partial derivatives, be continuous in the rectangular region
. Then
is an exact differential equation in
if and only if
at each point of
. That is, there exists a function
such that
if and only if
and
satisfy the equality above.
Variation of Parameters
Suppose we have the n-th order linear differential equation
Suppose we have found the solutions to the corresponding homogeneous diff. eqn.
With the method of variation of parameters we seek a particular solution of the form
Since we have
functions
to determine we have to specify
conditions.
One of these conditions is clearly that we need to satisfy the non-homogeneous diff. eqn. above. Then the remaining
other conditions are chosen to make the computations as simple as possible.
Taking the first partial derivative wrt.
of
we get (using the product rule)
We can hardly expect it to be simpler to determine
if we have to solve diff. eqns. of higher order than what we started out with; hence we try to suppress the terms that lead to higher derivatives of
by imposing the following condition
which we can do since we're just looking for some arbitrary functions
. The expression for
then reduces to
Continuing this process for the derivatives
we obtain our
conditions:
giving us the expression for the m-th derivative of
to be
Finally, imposing that
has to satisfy the original non-homogenous diff. eqn. we take the derivative of
and substitute back into the equation. Doing this, and grouping terms involving each of
together with their derivatives, most of the terms drop out due to
being a solution to the homogeneous diff. eqn., yielding
Together with the previous
conditions, we end up with a system of linear equations
(note the
at the end!).
The sufficient condition for the existence of a solution of the system of equations is that the determinant of coefficients is nonzero for each value of
. However, this is guaranteed since
form a fundamental set of solutions for the homogeneous eqn.
In fact, using Cramer's rule we can write the solution of the system of equations in the form
where
is the determinant obtained from
by replacing the m-th column by
. This gives us the particular solution
where
is arbitrary.
Suppose you have a non-homogenous differential equation of the form
and we want to find the general solution, i.e. a lin. comb. of the solutions to the corresponding homogeneous diff. eqn. and a particular solution. Suppose that we find the solution of the homogeneous diff. eqn. to be
The idea in the method of variation of parameters is that we replace the constants
and
in the homogenous solution with functions
and
which we want to produce solutions to the non-homogeneous diff. eqn. that are independent of the solutions obtained for the homogeneous diff. eqn.
Substituting
into the non-homogenous diff. eqn. we eventually get
which we can solve to obtain our general solution for the non-homogeneous diff. eqn.
If the functions
and
are continuous on an open interval
, and if the functions
and
are a fundamental set of solutions of the homogeneous differential equation corresponding to the non-homogeneous differential equation of the form
then a particular solution is
where
is any conveniently chosen point in
. The general solution is
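As a concrete check of the method (the equation $y'' + y = \sec t$ and the fundamental set $\cos t, \sin t$ are my own standard example, not from the notes):

```python
# Sketch: variation of parameters for y'' + y = sec(t), with fundamental
# solutions y1 = cos(t), y2 = sin(t); their Wronskian is W = 1.
import sympy as sp

t = sp.symbols('t')
y1, y2, g = sp.cos(t), sp.sin(t), sp.sec(t)
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))

u1 = sp.integrate(-y2 * g / W, t)   # u1' = -y2 g / W
u2 = sp.integrate(y1 * g / W, t)    # u2' =  y1 g / W
Y = sp.simplify(u1 * y1 + u2 * y2)
print(Y)                            # cos(t)*log(cos(t)) + t*sin(t)
```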
Parseval's Theorem
The square norm
of a periodic function with convergent Fourier series
satisfies
By definition and linearity,
For the last equality, we just need to remember that
Indeed,
We also have a Parseval's Identity for Sturm-Liouville problems:
where the set
is determined as
and
are the corresponding eigenfunctions of the SL-problem.
The proof is a simple use of the orthogonality of the eigenfunctions.
Binomial Theorem
Let
be a real number and
a positive integer. Further, define
Then, if
,
which converges for
.
This is due to the Maclaurin expansion (Taylor expansion about $x = 0$).
Definitions
Words
- bifurcation
- the appearance of a new solution at a certain parameter value
Wronskian
For a set of solutions $y_1, \dots, y_n$, we have

$$W(y_1, \dots, y_n)(x) = \det \begin{pmatrix} y_1 & \cdots & y_n \\ y_1' & \cdots & y_n' \\ \vdots & & \vdots \\ y_1^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}$$

i.e. the Wronskian is just the determinant of the matrix of solutions (and their derivatives), which we can then use to tell whether or not the solutions are linearly independent: if $W \neq 0$
we have lin. indep. solutions and thus a unique solution for all initial conditions.
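A tiny symbolic check of this (the solutions $e^t, e^{-t}$ of $y'' - y = 0$ are my own example):

```python
# Sketch: Wronskian of y1 = exp(t), y2 = exp(-t), computed as the
# determinant of the solution matrix.
import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.exp(t), sp.exp(-t)
W = sp.Matrix([[y1, y2],
               [sp.diff(y1, t), sp.diff(y2, t)]]).det()
print(sp.simplify(W))   # -2: nonzero everywhere -> linearly independent
```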
Why?
If you look at the Wronskian again, it's the determinant of the matrix representing the system of linear equations used to solve for a specific vector of initial conditions (if you set the matrix we're taking the determinant of above equal to some
which is the initial conditions).
The determinant of this system of linear equations then tells if it's possible to assign coefficients to these different solutions
s.t. we can satisfy any initial conditions, or more specific, there exists a unique solution for each initial condition!
- if
then the system of linear equations is an invertible matrix, and thus we have a unique solution for each 
- if
we do NOT have a unique solution for each initial condition 
Thus we're providing an answer to the question: do the linear combinations of $y_1, \dots, y_n$ include all possible solutions?
Properties
- If
then
for all ![$x \in [\alpha, \beta]$](../../assets/latex/differential_equations_1994f890f501056112a9489a992c228cca78a71b.png)
- Any solution
can be written as a lin. comb. of any fundamental set of solutions 
- The solutions
form a fundamental set iff they are linearly independent
Proof of property 1
If
somewhere,
everywhere for
Assume
at
Consider
which can be written
with
Hence,
such that (*) holds.
Thus, we can construct
which satisfies the ODE at
Proof of property 2
We want to show that
can satisfy any
Linearly independent →
and since
, by property 1 we can solve it.
Homogeneous n-th order linear ODEs
Solutions
to homogeneous ODEs forms a vector space
Non-homogeneous n-th order linear ODEs
Solution
- Solve the corresponding homogeneous linear ODE
- Find special case solution for the non-homogeneous
- General solution is then the linear combination of these
Power series
Laplace transform
An integral transform is a relation of the form

$$F(s) = \int_{\alpha}^{\beta} K(s, t) f(t)\, dt$$

where $K(s, t)$ is a given function, called the kernel of the transformation, and the limits of integration $\alpha$ and $\beta$ are also given.
Suppose that $f$ is piecewise continuous on the interval $0 \le t \le A$ for any positive $A$, and that

$$|f(t)| \le K e^{a t} \quad \text{when } t \ge M$$

In this inequality, $K$, $a$, and $M$ are real constants, $K$ and $M$ necessarily positive.
Then the Laplace transform of $f$, denoted $\mathcal{L}\{f(t)\}$ or by $F(s)$, is defined

$$\mathcal{L}\{f(t)\} = F(s) = \int_0^{\infty} e^{-s t} f(t)\, dt$$

and exists for $s > a$.
Under these restrictions, for a function $f$ which increases faster than $e^{a t}$ the integral does not converge, and thus the Laplace transform is not defined for such $f$.
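sympy can compute these directly; a couple of standard transforms as a sanity check (the example functions are arbitrary choices of mine):

```python
# Sketch: a few Laplace transforms computed symbolically.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
for f in (1, sp.exp(2 * t), sp.sin(3 * t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', F)   # 1/s, 1/(s - 2), 3/(s**2 + 9)
```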
Properties
- Identities
Let
, for
- Linear operator
The Laplace transform is a linear operator:
- Uniqueness of transform
We can use the Laplace transform to solve a differential equation, and by the uniqueness of the transform, recover the solution of the differential equation from its transform.
- Inverse is linear operator
Frequently, a Laplace transform
is expressible as a sum of several terms
Suppose that
. Then the function
has the Laplace transform
. By the uniqueness property stated previously, no other continuous function
has the same transform. Thus
that is, the inverse Laplace transform is also a linear operator.
Solution to IVP
Suppose that functions
are continuous and that
is piecewise continuous on any interval
. Suppose further that
Then
exists for
and is given by
We only prove it for the first derivative case, since this can then be expanded for the nth order derivative.
Consider the integral
whose limit as
, if it exists, is the Laplace transform of
.
Suppose
is piecewise continuous, we can partition the interval and write the integral as follows
Integrating each term on the RHS by parts yields
since
is continuous, the contributions of the integrated terms at
cancel, and we can combine the integrals into a single integral, again due to continuity:
Now finally letting
(since that's what we do in the Laplace transform), we note that
And since
, we have
; consequently
Finally giving us
Typical ones
Fourier Series
Given a periodic function $f$ with period $2L$, it can be expressed as a Fourier series

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos \frac{n \pi x}{L} + b_n \sin \frac{n \pi x}{L} \right)$$

where the coefficients are given by

$$a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos \frac{n \pi x}{L}\, dx, \qquad b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin \frac{n \pi x}{L}\, dx$$
Suppose $f$ and $f'$ are piecewise continuous for $-L \le x < L$.
Suppose $f$ is defined outside the interval so that it is periodic with period $2L$. Then the Fourier series of $f$ converges to $f(x)$ where $f$ is continuous, and to

$$\frac{f(x+) + f(x-)}{2}$$

where $f$ is discontinuous.
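A numerical sketch of the coefficient formulas (the period-$2L$ square wave is my own example; its known sine coefficients are $b_n = 4/(n\pi)$ for odd $n$, $0$ for even $n$):

```python
# Sketch: Fourier sine coefficients of a square wave, f = -1 on (-L, 0)
# and +1 on (0, L), splitting the integral at the jump.
import numpy as np
from scipy.integrate import quad

L = 1.0

def b(n):
    w = lambda x: np.sin(n * np.pi * x / L)
    return (quad(lambda x: -w(x), -L, 0.0)[0] + quad(w, 0.0, L)[0]) / L

print([round(b(n), 4) for n in (1, 2, 3)])   # ~ [1.2732, 0.0, 0.4244]
```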
Heaviside function
Which has a discontinuity at
.
Convolution integral
If $f$ and $g$ are piecewise continuous functions on $[0, \infty)$, then the convolution integral of $f$ and $g$ is

$$(f * g)(t) = \int_0^t f(t - \tau)\, g(\tau)\, d\tau$$
Laplace transform
Phase plane
A phase plane is a visual display of certain characteristics of certain kinds of differential equations: a coordinate plane whose axes are the values of the two state variables, e.g. $(x, y)$.
It's the two-dimensional case of the general n-dimensional phase space.
Trajectory
A trajectory
of a system
simply refers to a solution for some specific initial conditions
, i.e.
Critical points
Points corresponding to constant solutions, or equilibrium solutions, of the equation

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$$

are often called critical points.
That is, points $\mathbf{x}^0$ such that

$$\mathbf{f}(\mathbf{x}^0) = \mathbf{0}$$

There always exists a coordinate transformation $\mathbf{u} = \mathbf{x} - \mathbf{x}^0$ such that the new system

$$\dot{\mathbf{u}} = \mathbf{f}(\mathbf{u} + \mathbf{x}^0)$$

has an equilibrium solution or critical point at the origin.
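For systems where solving $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ by hand is tedious, a root finder works; a sketch with a Lotka-Volterra-like system of my own choosing:

```python
# Sketch: locate critical points of x' = x - x*y, y' = -y + x*y numerically.
import numpy as np
from scipy.optimize import fsolve

def f(v):
    x, y = v
    return [x - x * y, -y + x * y]

for guess in ([0.1, 0.1], [0.9, 1.1]):
    print(fsolve(f, guess))   # converges to (0, 0) and (1, 1)
```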
Special differential equations / solutions
Legendre equation
The Legendre equation is given below:

$$(1 - x^2) y'' - 2 x y' + \alpha (\alpha + 1) y = 0$$

We only need to consider $\alpha > -1$, since if $\alpha \le -1$ we can reparametrise the equation to be of the form above anyway.
In the case where we assume $\alpha = n$ is a non-negative integer, we note that for even $n$ the even series becomes a terminating series rather than an infinite series, and if $n$ is odd the odd series is a terminating series, i.e. in both cases we get a polynomial of order $n$!
These polynomials, corresponding to a solution of the Legendre equation for a specific integer $n$, we call Legendre polynomials, denoted $P_n(x)$.
Or the general Legendre equation :
Which gives rise to the polynomials defined
where
are the corresponding Legendre polynomial.
Moreover, using Rodrigues formula
If we let
we can use the relationship between
to obtain the coefficient
Series Solutions
The sum of the even terms is given by:
and the sum of the odd terms is given by:
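A quick numerical sanity check on these polynomials (the orders chosen below are arbitrary):

```python
# Sketch: evaluate Legendre polynomials and check orthogonality on [-1, 1].
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

# P_2(x) = (3 x^2 - 1) / 2
print(eval_legendre(2, 0.5), (3 * 0.25 - 1) / 2)

# orthogonality: integral of P_m P_n over [-1, 1] is 0 for m != n
print(quad(lambda x: eval_legendre(2, x) * eval_legendre(3, x), -1, 1)[0])
```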
TODO Legendre polynomials and Laplace equation using spherical coordinates
Van der Pol oscillator
$$\frac{d^2 x}{dt^2} - \mu (1 - x^2) \frac{dx}{dt} + x = 0$$

where $x$ is the position coordinate, which is a function of time $t$, and $\mu$ is a scalar parameter indicating the nonlinearity and the strength of the damping.
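A sketch of integrating it as a first-order system ($\mu = 1$ and the initial condition are my own choices); trajectories approach the famous limit cycle of amplitude roughly 2:

```python
# Sketch: Van der Pol oscillator as a first-order system.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def vdp(t, v):
    x, y = v                          # y = x'
    return [y, mu * (1 - x**2) * y - x]

sol = solve_ivp(vdp, (0.0, 30.0), [0.1, 0.0], max_step=0.05)
print(sol.y[0].min(), sol.y[0].max())   # amplitude settles near +-2
```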
Hermite polynomials
We are talking about the physicists' Hermite polynomials, given by

$$H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}$$

These are an orthogonal polynomial sequence, and they arise from the differential equation

$$y'' - 2 x y' + 2 n y = 0$$

These polynomials are orthogonal wrt. the weight function $w(x) = e^{-x^2}$, i.e. we have

$$\int_{-\infty}^{\infty} H_m(x) H_n(x)\, e^{-x^2}\, dx = 0, \qquad m \neq n$$

Furthermore, the Hermite polynomials form an orthogonal basis of the Hilbert space of functions satisfying

$$\int_{-\infty}^{\infty} |f(x)|^2\, e^{-x^2}\, dx < \infty$$

in which the inner product is given by the integral including the Gaussian weight defined previously:

$$\langle f, g \rangle = \int_{-\infty}^{\infty} f(x)\, \overline{g(x)}\, e^{-x^2}\, dx$$
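A quick numerical check of this orthogonality (the orders below are arbitrary; the norm of $H_n$ is $2^n n! \sqrt{\pi}$):

```python
# Sketch: physicists' Hermite polynomials are orthogonal wrt. exp(-x^2).
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite

inner = lambda m, n: quad(
    lambda x: eval_hermite(m, x) * eval_hermite(n, x) * np.exp(-x**2),
    -np.inf, np.inf)[0]

print(inner(2, 3))   # ~ 0 (orthogonal)
print(inner(3, 3))   # 2^3 * 3! * sqrt(pi) ~ 85.1
```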
Spherical harmonics
In this section we're looking to tackle the spherical harmonics once and for all.
If we have a look at the Laplace equation in spherical coordinates, we have
If we then use separation of variables, and consider solutions of the form:
We get the following differential equations:
Further, assuming
, we can rewrite the second differential equation as
for some number
.
Solving for 
First we take a look at the differential equation involving
,
Which we can rewrite as
making the substitution
This is fine, since in spherical coordinates we have
, i.e.
is always defined.
Thus,
Substituting back into the differential equation, we get
which has the general solution
where
solves
or rather,
RECOGNIZE THIS YO?! How about trying to substitute this into the differential equation involving
?
Solving for 
The differential equation with
can be written
which has the solution
Further, we have the boundary condition
, which implies
.
Solving for 
We have the the ODE
which we can write out to get
Making the substitution
we have
And,
Substituting back into the ODE from above:
Observing that
We can write the above equation as
Whiiiich, we recognize as the general Legendre differential equation!
where we're only lacking the
, for which we have to turn to the differential equation involving
.
With
obtained from Solving for
, we can rewrite the differential equation involving
as
For which we know that the solutions are the corresponding associated Legendré polynomials
which are obtained by considering the series solution of the equation above, i.e. assuming:
where
is as above. Substituting into the differential equation above, and finding the recurrence-relation for
we recover the Legendre polynomials mentioned above.
Bringing
and
together
Now, finally bringing this together with the solution obtained for
, we have the spherical harmonics
where
is simply the normalization constant such that the eigenfunctions form a basis, which turns out to be
Thus, the final expression for the spherical harmonics is
Combining everything
Finally, combining the radial solution and the spherical harmonics into the final expression for our solutions:
where we have set
to ensure regularity at
, and not have the following occur:
Boundary conditions
- Establish that
must be periodic (since we're working in spherical coordinates) whose period evenly divides
, which implies
and
is a linear combination of 
- Using what we observed in 1. and noting that
is "regular" at the poles of the sphere, where
, we have a boundary condition for
(which is a Sturm-Liouville problem) forcing the parameter
to be of the form 
- Making the change of variables
transforms this equation into the general Legendre equation, whose solution is a multiple of the associated Legendre Polynomial. Finally, the equation for
has solutions of the form
- If we want the solution to be "regular" throughout
, we have
.
When we say regular, we mean that the solution
has the property
Proper description
A priori
is a complex number, BUT because
must be a periodic function whose period evenly divides
,
is necessarily an integer and
is a linear combination of the complex exponentials
.
Further, the solution function
is regular at the poles of the sphere, where
. Imposing this regularity in the solution
of the second equation at the boundary of the domain is a Sturm-Liouville problem that forces the parameter
to be of the form
Furthermore, a change of variables
transforms this equation into the (general) Legendre equation, whose solution is a multiple of the associated Legendre polynomial
.
Finally, the equation for
has solutions of the form
And if we require
solution to be regular throughout
forces
.
Solutions
Here, the solution is assumed to have the special form
. For a given value of
, there are
independent solutions of this form, one for each integer
. These angular solutions are a product of trigonometric functions, here represented as a complex exponential, and associated Legendre polynomials:
which fulfill
The general solution to the Laplace's equation in a ball centered at the origin is a linear combination of the spherical harmonic functions multiplied by the appropriate scale factor
,
where the
are constants; the factors
are known as solid harmonics.
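scipy provides these directly; a tiny sketch evaluating a couple of them (the sample angles are arbitrary; note scipy's convention takes the azimuthal angle first):

```python
# Sketch: evaluate spherical harmonics Y_l^m and check Y_0^0 = 1/(2 sqrt(pi)).
import numpy as np
from scipy.special import sph_harm

theta, phi = 0.3, 1.1     # theta: azimuthal, phi: polar (scipy convention)
print(sph_harm(0, 0, theta, phi))   # (0.28209...+0j) = 1/(2 sqrt(pi))
print(sph_harm(1, 2, theta, phi))   # Y_2^1 at this point (complex value)
```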
Q & A
DONE Why do we actually require the
to be an integer?
In solving for
we observe that
Thus, if we require
to be true, then we need
to "go full circle" every
, hence
.
If it does NOT go full circle, then it would be discontinuous.
DONE We have TWO explanations for why we need
, which one is true?
- Question
Here we state that
has to be "because it's a Sturm-Liouville problem". I don't understand why it being a Sturm-Liouville problem necessarily imposes this restriction.
Ah! Maybe it's because Sturm-Liouville problems require the limiting behavior of the boundary conditions to satisfy Lagrange's identity being zero!
While here, we simply solve the differential equation involving
, and
just falls out from this solution, and as far as I can tell, we're not actually saying anything about the boundary conditions in this case.
- Answer
From what I can understand, the eigenvalue
simply falls out from
as we found, and
being of this form does not seem to be related in any way to boundary conditions.
What is related to the boundary conditions is the fact that the constant must be an integer.
Bessel equation
This can be solved using the Frobenius method.
Doing so, we end up with the Bessel function of the 1st kind, $J_\nu(x)$, and the Bessel function of the 2nd kind, $Y_\nu(x)$.
Thus, the general solution will be the linear combination:

$$y(x) = c_1 J_\nu(x) + c_2 Y_\nu(x)$$

where we have the following limiting behavior: $J_\nu(x)$ stays bounded as $x \to 0$, while $Y_\nu(x)$ diverges as $x \to 0$.
Note: quite often we want the solution to be "regular" / nicely behaved as $x \to 0$, thus we often end up setting $c_2 = 0$ in the expression for $y(x)$.
Problems
10.2 Fourier Series
Ex. 10.2.1
Assume that there is a Fourier series converging to the function
defined by
with the property that
i.e. it's periodic with a period of 4.
Find the components of the Fourier series.
Solution
See p. 601 in (Boyce, 2013) for the full solution.
10.5 Separation of variables; Heat conduction in a Rod
Find the solution of the heat conduction problem
Solution
We suppose
And the boundary conditions are satisfied by
First we note that with
above we get the following
which gives
for some constant
, which gives us the system
The solution to the first equation simply gives us
Considering the boundary conditions, we get
thus we get the eigenfunctions and eigenvalues for
:
Substituting these eigenvalues into the differential equation with
:
which can be solved by use of integrating factors, thus
Finally, combining both into the eigenfunctions for the entire solution
Which gives us the general solution:
and solving for the coefficients using the initial conditions, i.e. when
Hence the final solution is
is some specific solution to the non-homogeneous equation.