Sturm-Liouville theory

In mathematics and its applications, a Sturm-Liouville problem, named after Charles-François Sturm (1803-1855) and Joseph Liouville (1809-1882), is a second-order linear differential equation of the form
 <math>{d\over dx}\left(p(x){dy\over dx}\right)+q(x)y=\lambda w(x)y,</math> (1)
often together with specified boundary values of y and dy/dx. The value of λ is not specified by the problem; finding the values of λ for which there exist solutions satisfying the boundary conditions is part of the problem. The function w(x) is the "weight" or "density" function.
The solutions are eigenfunctions of a Hermitian differential operator in some function space defined by boundary conditions.
Sturm-Liouville theory is important in applied mathematics, where S-L problems occur very commonly, particularly when dealing with linear partial differential equations which are separable.
Sturm-Liouville theorem
The Sturm-Liouville theorem states:
 The eigenvalues <math>\lambda_n</math> of a regular Sturm-Liouville problem (one in which p(x) is differentiable, q(x) and w(x) are continuous, and p(x) > 0 and w(x) > 0 over the interval) are real and well ordered, such that
 <math>\lambda_1 < \lambda_2 < \lambda_3 < \cdots < \lambda_n < \cdots \to \infty </math>.
 Corresponding to each eigenvalue <math>\lambda_n</math> is an eigenfunction <math>y_n(x)</math>, unique up to a normalization constant.
 The eigenfunctions are mutually orthogonal and satisfy the orthogonality relation
 <math> \int_{a}^{b}y_n(x)y_m(x)w(x)\,dx = 0, \quad m \ne n, </math> where <math>w(x)</math> is the weighting function.
 If the set of eigenfunctions satisfies the relation
 <math> \int_{a}^{b}y_n(x)y_m(x)w(x)\,dx = \delta_{mn},</math> then it is said to form an orthonormal set.
 The eigenvalues of the Sturm-Liouville problem are given by the Rayleigh quotient:
 <math> \lambda_n = \frac{\left. p\, y_n(x)\, y_n'(x) \right|_a^b - \int_a^b \left( p\, y_n'(x)^2 - q\, y_n(x)^2 \right) dx}{\int_a^b y_n(x)^2\, w(x)\, dx} </math>
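The Rayleigh quotient can be checked numerically. The sketch below (an illustration, not part of the theorem) uses the simple problem y'' = λy on [0, π] with y(0) = y(π) = 0, i.e. p = 1, q = 0, w = 1, whose eigenfunctions are sin(nx) with eigenvalues −n²:

```python
import numpy as np

# Numerical check of the Rayleigh quotient for the simple problem
# y'' = lambda*y on [0, pi], y(0) = y(pi) = 0 (p = 1, q = 0, w = 1),
# whose eigenfunctions are y_n = sin(n x) with eigenvalues -n^2.
a, b = 0.0, np.pi
x = np.linspace(a, b, 20001)

def trapezoid(f):
    """Trapezoidal-rule integral of samples f over the grid x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def rayleigh(n, p=1.0, q=0.0, w=1.0):
    y = np.sin(n * x)
    dy = n * np.cos(n * x)
    boundary = p * y[-1] * dy[-1] - p * y[0] * dy[0]  # p*y*y' from a to b
    return (boundary - trapezoid(p * dy**2 - q * y**2)) / trapezoid(w * y**2)

print(rayleigh(1))  # close to -1
print(rayleigh(3))  # close to -9
```

The boundary term vanishes here because the eigenfunctions are zero at both endpoints; for other boundary conditions it contributes.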
Sturm-Liouville form
The differential equation
 <math>{d\over dx}(p(x){d\over dx}y(x))+q(x)y(x)=\lambda w(x)y(x)</math>
is said to be in Sturm-Liouville form. The function w is known as the weight function. Every second-order linear ordinary differential equation can be recast in this form by multiplying both sides of the equation by an appropriate "exponential multiplier" (although the same is not true of second-order partial differential equations, or if y is a vector).
Examples
The Legendre equation,
 <math>(1-x^2)y''-2xy'+\nu(\nu+1)y=0\;\!</math>
can easily be put into Sturm-Liouville form, since <math>{d\over dx}(1-x^2)=-2x</math>, so the Legendre equation is equivalent to
 <math>((1-x^2)y')'+\nu(\nu+1)y=0\;\!</math>
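This equivalence can be verified symbolically; the snippet below (a small check, not part of the original derivation) expands the Sturm-Liouville form back into the first two terms of the Legendre equation:

```python
import sympy as sp

# Verify that ((1 - x^2) y')' expands to (1 - x^2) y'' - 2 x y',
# the first two terms of the Legendre equation.
x = sp.symbols('x')
y = sp.Function('y')
sl_form = sp.diff((1 - x**2) * sp.diff(y(x), x), x)
legendre_terms = (1 - x**2) * sp.diff(y(x), x, 2) - 2 * x * sp.diff(y(x), x)
assert sp.simplify(sl_form - legendre_terms) == 0
```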
A less straightforward example is the differential equation
 <math>x^3y''-xy'+2y=0\,</math>
Divide throughout by x^{3}:
 <math>y''-{x\over x^3}y'+{2\over x^3}y=0</math>
Multiplying throughout by the integrating factor
 <math>e^{\int -x/x^3\,dx}=e^{\int -1/x^2\, dx}=e^{1/x}</math>
gives
 <math>e^{1/x}y''-{e^{1/x} \over x^2} y'+ {2 e^{1/x} \over x^3} y = 0,</math>
which can easily be put into Sturm-Liouville form since
 <math>{d\over dx} e^{1/x} = -{e^{1/x} \over x^2}, </math>
so the differential equation is equivalent to
 <math>(e^{1/x}y')'+{2 e^{1/x} \over x^3} y =0.</math>
In general, given a differential equation
 <math>P(x)y''+Q(x)y'+R(x)y=0,\,</math>
dividing by P(x), multiplying through by the integrating factor
 <math>e^{\int Q(x)/P(x)\,dx},</math>
and collecting terms yields the Sturm-Liouville form.
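This recipe can be automated with a computer algebra system. The helper below is a hypothetical function written for illustration (the name `to_sturm_liouville` is not from the text); it computes the integrating factor p and the remaining coefficient for the worked example above:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def to_sturm_liouville(P, Q, R):
    """Convert P y'' + Q y' + R y = 0 into (p y')' + q y = 0.

    Returns (p, q), where p = exp(integral of Q/P) is the
    integrating factor and q = (R/P) * p.
    """
    p = sp.exp(sp.integrate(sp.cancel(Q / P), x))
    q = sp.simplify(R / P * p)
    return sp.simplify(p), q

# The example above: x^3 y'' - x y' + 2 y = 0
p, q = to_sturm_liouville(x**3, -x, 2)
print(p)  # exp(1/x)
print(q)  # 2*exp(1/x)/x**3
```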
Sturm-Liouville differential operators
The map
 <math>L u ={d\over dx}\left(p(x){du\over dx}\right)+q(x)u</math>
can be viewed as a linear operator mapping a function u to another function Lu. We may study this linear operator in the context of functional analysis. If we put w=1 in equation (1), it can be written as
 <math>L u = \lambda u. \,</math>
This is precisely the eigenvalue problem; that is, we seek the eigenvalues λ and eigenvectors u of the operator L. Strictly speaking, we must also include the boundary conditions. Say we study the problem over the interval [0,1] and impose the boundary conditions u(0) = u(1) = 0.
The importance of eigenvalue problems stems from the fact that they may help us to solve the associated inhomogeneous problem
 <math>L u = f \,</math> in the interval (0,1)
 <math>u = 0 \,</math> at 0 and 1.
Here, f is some function in L^{2}. If a solution u exists and is unique, we may write it as
 <math>u = A f \,</math>
because the mapping from f to u must be linear. Now observe that finding eigenvectors and eigenvalues of A is essentially the same as finding eigenvectors and eigenvalues of L. Indeed, if u is an eigenvector of L with eigenvalue λ it must be that u is also an eigenvector of A with eigenvalue 1/λ.
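This reciprocal relationship can be illustrated numerically. The sketch below uses an assumed finite-difference discretization (not from the text) of L u = u'' on (0, 1) with u(0) = u(1) = 0, and compares the spectra of L and of A = L⁻¹:

```python
import numpy as np

# Discretize L u = u'' on (0, 1) with u(0) = u(1) = 0 using central
# differences on n interior points, then compare eigenvalues of L and L^-1.
n = 200
h = 1.0 / (n + 1)
L = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
A = np.linalg.inv(L)

lam_L = np.sort(np.linalg.eigvalsh(L))  # eigenvalues of L (all negative here)
lam_A = np.sort(np.linalg.eigvalsh(A))  # eigenvalues of A

# Eigenvalues of A are the reciprocals of the eigenvalues of L.
print(np.allclose(np.sort(1.0 / lam_L), lam_A))  # True
# The least-negative eigenvalue of L approximates -pi^2.
print(lam_L[-1])
```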
Some highly technical details
Under some assumptions on L, the map A will be continuous from L^{2} to the Sobolev space H^{2} of "twice differentiable" L^{2} functions (differentiability must be understood in terms of Sobolev spaces.) This is for instance the case if p is in H^{1}, q is in L^{2}, p ≤ c for some negative constant c, and q ≥ 0. However, this is not a necessary condition: there are other L which make A continuous.
Here we use three very important theorems:
 H^{2} is a subset of L^{2}; if B is the open unit ball in H^{2} then the closure of B in L^{2} is compact.
 Hence the map A regarded as a linear map from L^{2} to L^{2} is a compact linear map. (See the spectral theorem.)
 All Hermitian compact linear maps have an orthonormal basis of eigenvectors; the eigenvalues form a sequence tending to zero.
The terminology is not essential here; the important conclusion is that A has an orthonormal basis of eigenvectors.
Useful consequences of the preceding technicalities
If we can find the eigenvectors of L, that is, find the solutions u_{k} of
 <math> L u_{k} = \lambda_k u_k \,</math> in (0, 1)
 <math> u_k = 0 \,</math> at 0 and 1,
along with the eigenvalues λ_{k}, we can attempt to solve the problem
 <math> L u = f \,</math> in (0,1)
 <math> u = 0 \,</math> at 0 and 1.
Indeed, from the technical property that the eigenvectors form an orthonormal basis and from Fourier series, we see that any solution u and data f can be written as
 <math> u = \sum_k a_k u_k, \,</math>
 <math> f = \sum_k b_k u_k. \,</math>
If we take the liberty of exchanging the summation sign and the operator L (which can be justified in Sobolev spaces) we obtain:
 <math> \sum_k \lambda_k a_k u_k = \sum_k b_k u_k. \,</math>
Another theorem about Fourier series tells us that the representation of a function as a Fourier series is unique. Hence, we obtain
 <math> a_k = \frac{1}{\lambda_k} b_k </math> (2)
That is, given f (or equivalently its Fourier coefficients b_{k}) we may compute the Fourier coefficients a_{k} of u, which is almost as good as computing u directly. Also, as noted above, the coefficients 1/λ_{k} converge to zero; hence (again by Fourier series) the vector u=∑a_{k}u_{k} is well defined as long as f=∑b_{k}u_{k} is well defined.
When implemented on a computer, this is the spectral method.
Example
We wish to find a function u(x) which solves the following Sturm-Liouville problem:
 <math> L u = \frac{d^2u}{dx^2} = \lambda u,</math>
where the unknowns are λ and u(x). As above, we must add boundary conditions; we take, for example,
 <math> u(0) = u(\pi) = 0. \, </math>
Observe that if k is any positive integer, then the function
 <math> u(x) = \sin kx \, </math>
is a solution with eigenvalue λ = −k^{2}. We know that the solutions of a Sturm-Liouville problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis. Since orthogonal bases are always maximal (by definition), we conclude that the S-L problem in this case has no other eigenvectors.
Given the preceding, let us now solve the inhomogeneous problem
 <math>L u = x, \quad x\in(0,\pi)</math>
with the same boundary conditions. In this case, we must expand f(x)=x in a Fourier sine series. The reader may check, either by integrating ∫ x sin(kx) dx or by consulting a table of Fourier transforms, that we thus obtain
 <math>L u =\sum_{k=1}^{\infty}\frac{2(-1)^{k+1}}{k}\sin kx.</math>
This particular Fourier series is troublesome because of its poor convergence properties. It is not clear a priori whether the series converges pointwise. However, since the Fourier coefficients are square-summable, the series converges in L^{2}, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability and, at jump points (the function x, considered as a periodic function, has a jump at π), converge to the average of the left and right limits (see convergence of Fourier series).
Therefore, by using formula (2), we obtain that the solution is
 <math>u=\sum_{k=1}^{\infty}\frac{2(-1)^k}{k^3}\sin kx.</math>
In this case, we could have found the answer using antidifferentiation: u=(x^3 − π^2 x)/6, whose Fourier series agrees with the solution we found. The antidifferentiation technique is no longer useful in most cases when the differential equation involves many variables.
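Formula (2) translates directly into a short computation. The sketch below sums the series for u and compares it against the closed form (x^3 − π^2 x)/6:

```python
import numpy as np

# Spectral solution of u'' = x on (0, pi), u(0) = u(pi) = 0:
# expand f(x) = x in sine coefficients b_k, divide by lambda_k = -k^2
# (formula (2)), and sum the resulting series for u.
x = np.linspace(0.0, np.pi, 1001)
u = np.zeros_like(x)
for k in range(1, 2001):
    b_k = 2.0 * (-1) ** (k + 1) / k   # sine coefficients of f(x) = x
    a_k = b_k / (-k**2)               # a_k = b_k / lambda_k
    u += a_k * np.sin(k * x)

exact = (x**3 - np.pi**2 * x) / 6.0
print(np.max(np.abs(u - exact)))  # small truncation error
```

Because the coefficients of u decay like 1/k^3, the truncated sum converges quickly even though the series for f itself converges slowly.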
Application to normal modes
Suppose we are interested in the modes of vibration of a thin membrane held in a rectangular frame, 0 < x < L_{1}, 0 < y < L_{2}. The equation of motion for the membrane's vertical displacement W(x, y, t) is given by the wave equation:
 <math>\frac{\partial^2W}{\partial x^2}+\frac{\partial^2W}{\partial y^2} = \frac{1}{c^2}\frac{\partial^2W}{\partial t^2}.</math>
The equation is separable (substituting W = X(x) × Y(y) × T(t)), and the normal mode solutions that have harmonic time dependence and satisfy the boundary conditions W = 0 at x = 0, L_{1} and y = 0, L_{2} are given by
 <math>W_{mn}(x,y,t) = A_{mn}\sin\left(\frac{m\pi x}{L_1}\right)\sin\left(\frac{n\pi y}{L_2}\right)\cos\left(\omega_{mn}t\right)</math>
where m and n are positive integers, A_{mn} is an arbitrary constant, and
 <math>\omega^2_{mn} = c^2 \left(\frac{m^2\pi^2}{L_1^2}+\frac{n^2\pi^2}{L_2^2}\right).</math>
Since the eigenfunctions W_{mn} form a basis, an arbitrary initial displacement can be decomposed into a sum of these modes, each of which vibrates at its individual frequency <math>\omega_{mn}</math>. Infinite sums are also valid, as long as they converge.
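The mode frequencies follow directly from the last formula; a minimal sketch (with a unit square and c = 1 assumed for the printed example):

```python
import numpy as np

# Mode frequencies omega_mn of the rectangular membrane, from
# omega_mn^2 = c^2 * (m^2 pi^2 / L1^2 + n^2 pi^2 / L2^2).
def omega(m, n, c=1.0, L1=1.0, L2=1.0):
    return c * np.pi * np.sqrt((m / L1) ** 2 + (n / L2) ** 2)

# Fundamental mode of a unit square with c = 1: omega_11 = pi * sqrt(2)
print(omega(1, 1))
```

Note that, unlike a vibrating string, the overtone frequencies of a membrane are not integer multiples of the fundamental.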
See also: normal mode.