There will be, as you may have guessed, lots of problems.
Actually, there won't be as many as last year, but they'll seem like a
lot as they'll be considerably harder. The structure and organization
of the course will be (approximately!):

30% of grade Homework.

30% Take-Home Midterm Exam.

30% Take-Home Final Exam.

10% Research/Computing project.

In more detail, Homework is Homework, the Exams are Homework (fancied up a
bit and with more stringent rules) and the Research Project is
described below. These figures are only approximate. I may make
homework worth a little more or less, but this is about right.

The way my grading scheme typically works is that if you get below a
50 and have not religiously done (well!) and handed in your homework,
you fail (U). If you get less than a 60 and have not religiously
handed in your homework, you get an (S). If you get 60 or more you
get a G or E of some sort and ``pass''. If you have religiously done
your homework, but have somehow managed to end up less than a 60 or
(worse) 50 you may make sorrowful and wounded noises and perhaps get a
G- or S, respectively. If you have not done and handed in your
homework on time or have not followed the rules with respect to your
homework, don't bother me about your grade - it will likely be bad
and you will deserve it. Note that if you get as little as 80 percent
of your homework credit, you will only need 40 percent of your
exam credit to get at least a G-.

The Rules

The RULES are very serious. In previous years, certain students
have reportedly betrayed the trust inherent in the rules. This has
led to calls that the rules for this course (and all graduate courses
here) be stringently tightened. I would prefer NOT to see this
happen, as I think that the rules optimize the learning process as they
stand and minimize Mickey Mouse interactions as well, but IF I get any
hint of misbehavior (verified or not) I'll tighten things up very,
very quickly and we'll all have to work harder and be more frustrated
while working. So, PLEASE! Follow the spirit as well as the letter
of the rules. You are here to learn your chosen profession, and it
has never been truer that choosing an easy path is ultimately cheating
yourself. Besides, it will show up on prelims!

Rules:

You may collaborate with your classmates in any
permutation on the homework. In fact, I encourage you to
work in groups, as you will probably all learn more that way.
However, you must each work out and individually write up all the solutions even if you ``figure them out'' (and they are thus
all nearly the same) within a group. Working them out and writing
them up from scratch without looking (once you get the idea of the
solution) provides invaluable learning reinforcement. Unless the
assignment is a computer or mathematica project, I want handwritten solutions neatly done on white (laserprinter)
paper.

You may not get worked out solutions to specific
problems from more advanced graduate students, the (any) solution
manual (if you can find it) or anyplace else. Doing so obviously defeats
the whole point of the homework in the first place.

You may ask me, your grader, more advanced students,
other faculty, personal friends, or your household pets for help or
tutoring on particular problems, as long as no worked-out solutions
to the assigned problems are present.

You may (indeed must) use the library and all
available non-human resources to help solve the problems. I don't
even care if you find the solution somewhere and copy it
verbatim provided that you understand it afterwards (which is the
goal), cite your source, and provided that you do not use
the solution manual for Jackson problems (which exists, floating
around somewhere) or Arfken (which may or may not exist); see the
second item above.

You may NOT collaborate with each other or get outside
human help on the take home exam problems. They are to be done
alone. There will be a time limit (typically 24 hours total working
time) on the take home exams, spread out over four days or so.

You may still use the library and non-human resources
on the take home exam problems, provided that (as usual) you don't
look up the answer to e.g. - a Jackson problem in a Jackson solution
set of one sort or another. Since solutions to difficult problems may
well be in the literature, this is an invaluable aid! Last semester,
the answer to 6.19c (for example) was the verbatim contents of a paper
by Brill and Goodman (as noted in both text and the lectures at the
appropriate time). I recommend immediately obtaining a copy of any
paper I refer to in lecture, as it may well figure prominently in an
exam question...

Research Project: I'm offering you instead the following
assignment, with several choices. You may prepare any of:

A set of lecture notes on a topic, relevant to the material
we will cover, that interests you. If you select this option you may
be asked to present the lecture for that topic in my place, time
permitting.

A review paper on a mathematical physics topic, relevant to
the material we will cover (or not cover!), that interests you.
Typically, in the past, students going into (e.g.) FEL have prepared
review papers on the electromechanism of the FEL. That is, relevance
to your future research is indicated but not mandated.

A computer demonstration or simulation of some important
electrodynamical principle or system. Possible projects here include
solving the Poisson and inhomogeneous Helmholtz equation numerically,
evaluating and plotting radiation patterns and cross-sections for
complicated but interesting time dependent charge density
distributions, etc. Resources here include Mathematica, maple,
SuperMongo, Numerical Recipes, and more. Obviously now is not the
time to learn to program; presumably you are all competent in f77 or c
or both. Better late than never, if not, and this is a possible
learning project.

If you choose to do a project, it is due TWO WEEKS before the
last class, so don't blow it off until the
end. It is strongly recommended that you clear the topic with me, too.

I will grade you on: doing a decent job (good algebra), picking an
interesting topic (somewhat subjective, but I can't help it and that's
why I want to talk to you about it ahead of time), adequate
preparation (enough algebra), adequate documentation (where did you
find the algebra), organization, and Visual Aids (pictures or
interactive demos are sometimes worth a thousand equations). Those of
you who do numerical calculations (applying the algebra) must also
write it up and (ideally) submit some nifty graphics, if possible.

I'm not going to grade you particularly brutally on this -- it is
supposed to be fun as well as educational. However, if you do a
miserable job on the project, it doesn't count. If you do a decent job
(evidence of more than 20 hours of work) you get your ten percent of
your total grade (which works out to maybe a third-of-a-grade credit
and may promote you from, say, a G+ to an E-).

I will usually be available for questions after class. It is best to
make appointments to see me via e-mail. My third department job is
managing the computer network (teaching this is my second and doing
research is my first) so I'm usually on the computer and always
insanely busy. However, I will nearly always try to answer
questions if/when you catch me. That doesn't mean that I will know
the answers, of course ...

Our grader is Matt Sexton, Room 203A, 660-2566; he is also generally
available for help with the coursework. He has tentatively set ``office
hours'' at 2:30 to 3:30 Tuesday afternoons (plus runover allowance).
He will answer questions too, when he can. Otherwise he'll look busy
and say that he needs to think about it and come bug me. I, in turn,
will look puzzled and say I'll think about it and spend all night in
the library trying to figure it out so I can tell him and he can tell
you. I find (e.g.) Jackson problems just as hard as you do -- and
I've done a bunch of them a bunch of times.

I welcome feedback and suggestions at any time during the year. I
would prefer to hear constructive suggestions early so that I have
time to implement them this semester.

I will TRY to put a complete set of lecture notes, printed out like
this, up on the Web in both PS and html/gif form.
Exemplary problems from other sources may also be included.

Lecture 1

General, Nth Order Linear Homogeneous ODE's

  \sum_{k=0}^{n} a_k(x) \frac{d^k y}{dx^k} = 0   (1)

If we divide out the leading coefficient a_n(x), this
becomes:

  \frac{d^n y}{dx^n} + \sum_{k=0}^{n-1} q_k(x) \frac{d^k y}{dx^k} = 0,
  \qquad q_k(x) = \frac{a_k(x)}{a_n(x)}   (2)

(making the leading order linearity obvious). Let x be complex -
most general abelian algebra. Then a point x_0 is:

ordinary if the q_k(x) are all analytic at x_0.

singular if not:

regular singular if the (x - x_0)^{n-k} q_k(x) are all
analytic, so that q_k has no more than an (n-k)th order pole.

irregular point or essential singularity (otherwise).

Fuchs's Theorem

At an ordinary point x_0 there are n linearly independent
solutions.

The Taylor series for these,
y(x) = \sum_{k=0}^{\infty} c_k (x - x_0)^k,
converges at least up to the nearest singular point.

At a regular singular point x_0, there is at least one
solution of the form:

  y(x) = (x - x_0)^{\alpha} \sum_{k=0}^{\infty} c_k (x - x_0)^k   (3)

where \alpha can be any complex number. \alpha is called the indicial
exponent. This expansion, too, converges at least to the nearest
singular point.

Example: Bessel (Differential) Equation

  x^2 y'' + x y' + (x^2 - m^2) y = 0   (4)

or

  y'' + \frac{1}{x} y' + \left( 1 - \frac{m^2}{x^2} \right) y = 0   (5)

so (obviously) x = 0 is a regular singular point.

Solutions: Frobenius Method

...follows directly from Fuchs's Theorem:

If you want to expand around x_0, change variables so
that you are expanding around 0: x \to x - x_0 or (if
x_0 = \infty) x \to 1/x. This set of coordinates defines the radius
of convergence of the solution derived.

Decide which case/kind of point x = 0 is:

Ordinary: Use
y = \sum_{k=0}^{\infty} c_k x^k.

Regular singular: Use
y = x^{\alpha} \sum_{k=0}^{\infty} c_k x^k with c_0 \neq 0.

Irregular singular, essential: Use Famous Mathematician Method
(i.e. - look it up if you can find it) or Give Up.

Substitute into ODE, also expanding any coefficients
(e.g. -
q(x) = \sum_j q_j x^j). Match up
coefficients of each power of x.

Take especial care with the first few coefficients (usually 2,
for our purposes).

After that, write a recurrence relation and hence the whole
solution. Make pithy observations, if any exist to be made.

For the regular singular point case, the lowest remaining power
of x determines \alpha via the indicial equation.

Examples

Airy's Equation

  y'' - x y = 0   (6)

Note that x = 0 is ordinary, no singularities but x = \infty.

  y = \sum_{k=0}^{\infty} c_k x^k   (7)

  y'' = \sum_{k=2}^{\infty} k (k-1) c_k x^{k-2}   (8)

  \sum_{k=2}^{\infty} k (k-1) c_k x^{k-2} - \sum_{k=0}^{\infty} c_k x^{k+1} = 0   (9)

or

  \sum_{k=0}^{\infty} \left[ (k+2)(k+1) c_{k+2} - c_{k-1} \right] x^k = 0
  \qquad (c_{-1} \equiv 0)   (10)

Examine the coefficients of:

x^0: 2 c_2 = 0,
or c_2 = 0.

x^1: 6 c_3 - c_0 = 0

x^2: 12 c_4 - c_1 = 0

or

  c_{k+2} = \frac{c_{k-1}}{(k+2)(k+1)} \qquad (k \geq 1)   (11)

We can now reconstruct the entire solution:

  y(x) = c_0 \left( 1 + \frac{x^3}{3 \cdot 2} + \frac{x^6}{6 \cdot 5 \cdot 3 \cdot 2} + \cdots \right)
       + c_1 \left( x + \frac{x^4}{4 \cdot 3} + \frac{x^7}{7 \cdot 6 \cdot 4 \cdot 3} + \cdots \right)   (12)

Remarks:

Two linearly independent solutions with coefficients c_0
and c_1.

Converges everywhere (ratio test for |x| < \infty or Fuchs's Theorem).

Ai(x) and Bi(x) (Airy functions/Airy integrals) correspond
to two particular choices for c_0, c_1. These are not too important
in E&M, but are important in quantum theory.
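The recursion is easy to check numerically. Here is a minimal sketch (in Python for brevity, rather than the f77/c assumed elsewhere in this course; the helper name is mine) that builds the series coefficients from c_{k+2} = c_{k-1}/((k+2)(k+1)) and verifies that the truncated series satisfies y'' - x y = 0 to truncation error:

```python
import numpy as np

def airy_series_coeffs(c0, c1, nterms=40):
    """Coefficients c_k of y = sum_k c_k x^k solving y'' - x y = 0,
    built from the recursion (k+2)(k+1) c_{k+2} = c_{k-1}."""
    c = np.zeros(nterms)
    c[0], c[1] = c0, c1            # the two free parameters
    # c[2] = 0 is forced by the x^0 coefficient equation
    for k in range(1, nterms - 2):
        c[k + 2] = c[k - 1] / ((k + 2) * (k + 1))
    return c

c = airy_series_coeffs(1.0, 0.0)
x = np.linspace(-1.0, 1.0, 11)
y = np.polynomial.polynomial.polyval(x, c)
ypp = np.polynomial.polynomial.polyval(
    x, np.polynomial.polynomial.polyder(c, 2))
residual = np.max(np.abs(ypp - x * y))   # limited by the truncated tail
print(residual)
```

The residual is limited only by the dropped tail of the series (and roundoff), consistent with the everywhere-convergence noted above.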

Legendre Equation:

  (1 - x^2) y'' - 2 x y' + l(l+1) y = 0   (13)

x = 0 is ordinary. x = \pm 1 are regular singular points. Start
with x = 0. Using same expansions for y, y', y'' as above:

  \sum_{k=0}^{\infty} \left[ (k+2)(k+1) c_{k+2} - k(k-1) c_k - 2 k c_k + l(l+1) c_k \right] x^k = 0   (14)

With a little work (and treating the first two carefully) we find that

  (k+2)(k+1) c_{k+2} = \left[ k(k+1) - l(l+1) \right] c_k   (15)

or

  c_{k+2} = \frac{k(k+1) - l(l+1)}{(k+2)(k+1)} c_k   (16)

Remarks:

The solution is supposed to exist on the interval (-1, 1) (right up
to the nearest singular points). If one examines the limit of the
ratio of two successive terms (the ratio test) one finds that:

  \lim_{k \to \infty} \left| \frac{c_{k+2} x^{k+2}}{c_k x^k} \right| = |x|^2   (17)

This indeed does converge for
|x| < 1, but the end
point behavior is ambiguous.

If l is an integer, one of the two independent series
terminates, giving a finite polynomial solution that is defined at the
end points. The other (as it turns out, see e.g. - Courant and
Hilbert for discussion) diverges and must be rejected. We can tabulate
these solutions for each l (including some optional normalization
so that P_l(1) = 1):

  l = 0: \quad y = c_0   (18)

  l = 1: \quad y = c_1 x   (19)

  l = 2: \quad y = c_0 (1 - 3 x^2)   (20)

  l = 3: \quad y = c_1 \left( x - \frac{5}{3} x^3 \right)   (21)

  l = 4: \quad y = c_0 \left( 1 - 10 x^2 + \frac{35}{3} x^4 \right)   (22)

or in more familiar form we have the Legendre Polynomials:

  P_0(x) = 1   (23)

  P_1(x) = x   (24)

  P_2(x) = \frac{1}{2} (3 x^2 - 1)   (25)

  P_3(x) = \frac{1}{2} (5 x^3 - 3 x)   (26)

  P_4(x) = \frac{1}{8} (35 x^4 - 30 x^2 + 3)   (27)

Another solution exists around the points x = \pm 1 with a
radius of convergence of 1. We'll look at this later, maybe.
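The termination of the series for integer l can be watched directly. A short Python sketch (the function name is mine) seeds the recursion c_{k+2} = [k(k+1) - l(l+1)] c_k / [(k+2)(k+1)] with the series of the same parity as l and normalizes so that P_l(1) = 1:

```python
import numpy as np

def legendre_coeffs(l):
    """Power-series coefficients (low to high) of P_l(x), built from
    the recursion c_{k+2} = [k(k+1) - l(l+1)] c_k / [(k+2)(k+1)].
    The series with the same parity as l terminates at degree l."""
    c = np.zeros(l + 1)
    c[l % 2] = 1.0                       # seed the terminating series
    for k in range(l % 2, l - 1, 2):
        c[k + 2] = (k * (k + 1) - l * (l + 1)) / ((k + 2) * (k + 1)) * c[k]
    # optional normalization so that P_l(1) = 1
    return c / np.polynomial.polynomial.polyval(1.0, c)

print(legendre_coeffs(2))   # coefficients of (3x^2 - 1)/2
```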

Bessel's Equation (with m = 0):

  y'' + \frac{1}{x} y' + y = 0   (28)

x = 0 is a regular singular point, so we try:

  y = \sum_{k=0}^{\infty} c_k x^{k + \alpha}   (29)

  y' = \sum_{k=0}^{\infty} (k + \alpha) c_k x^{k + \alpha - 1}   (30)

  y'' = \sum_{k=0}^{\infty} (k + \alpha)(k + \alpha - 1) c_k x^{k + \alpha - 2}   (31)

Substituting these into the ODE we obtain:

  \sum_{k=0}^{\infty} (k + \alpha)^2 c_k x^{k + \alpha - 2}
  + \sum_{k=0}^{\infty} c_k x^{k + \alpha} = 0   (32)

The lowest power of x in these sums is x^{\alpha - 2} (for k = 0):

  \alpha^2 c_0 = 0 \quad \Rightarrow \quad \alpha = 0   (33)

where the latter follows because we insist that c_0 \neq 0 (or we
would be starting the series somewhere else). With \alpha = 0, we look
at the x^{\alpha - 1} power of x:

  (1 + \alpha)^2 c_1 = c_1 = 0   (34)

Finally, for general k \geq 2:

  (k + \alpha)^2 c_k + c_{k-2} = k^2 c_k + c_{k-2} = 0   (35)

which is the recursion relation:

  c_k = - \frac{c_{k-2}}{k^2}   (36)

and we see that all odd terms vanish.

Reconstructing the series is now easy:

  y(x) = c_0 \left( 1 - \frac{x^2}{2^2} + \frac{x^4}{2^2 \cdot 4^2}
       - \frac{x^6}{2^2 \cdot 4^2 \cdot 6^2} + \cdots \right)   (37)

and we identify the part in parentheses with:

  J_0(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k!)^2} \left( \frac{x}{2} \right)^{2k}   (38)

We will shortly solve the general case for m \neq 0. Or, I should
say, you will for homework. The general theory of solutions around a
regular singular point suggests that we should try a second solution
of the form:

  y_2(x) = J_0(x) \ln x + \sum_{k=0}^{\infty} d_k x^k   (39)

(which has a logarithmic singularity at x = 0!) and substitute to
try to solve for the d_k or proceed with the Wronskian method or
proceed according to Wyld, Section 4.3. Eventually we'll probably do
one of these things, but if we don't you can always look it up.
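The J_0 series converges quickly and is easy to sum directly from the recursion c_k = -c_{k-2}/k^2 with c_0 = 1 and all odd coefficients zero. A Python sketch (function name mine); as a sanity check it reproduces the well-known first zero of J_0 near x = 2.40483:

```python
def j0_series(x, nterms=30):
    """J_0(x) from the Frobenius series: c_0 = 1, c_k = -c_{k-2}/k^2
    (odd coefficients vanish), i.e. sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    c, total = 1.0, 1.0
    for k in range(2, 2 * nterms, 2):
        c = -c / k**2              # the recursion relation
        total += c * x**k
    return total

print(j0_series(0.0))                  # J_0(0) = 1
print(j0_series(2.404825557695773))    # first zero of J_0: ~0
```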

Orthogonal Functions, Representations

Suppose we are given (by God, if you like) an interval [a, b], a
``weight function'' (or density function) w(x) \geq 0, and a set of
reasonably smooth linearly independent functions u_n(x)
defined on the entire interval. These functions are orthogonal
if:

  \int_a^b u_n^*(x) u_m(x) w(x) \, dx = 0 \qquad (n \neq m)   (40)

Let:

  N_n = \int_a^b |u_n(x)|^2 w(x) \, dx   (41)

then

  \int_a^b u_n^*(x) u_m(x) w(x) \, dx = N_n \delta_{nm}   (42)

and we can always construct an orthonormal set:

  \phi_n(x) = \frac{u_n(x)}{\sqrt{N_n}}   (43)

such that

  \int_a^b \phi_n^*(x) \phi_m(x) w(x) \, dx = \delta_{nm}   (44)

We could also (if we wished) absorb \sqrt{w(x)} into the orthogonal
representation, but it is not always useful to do so. In fact, it is
not always useful to normalize to unity - sometimes we will use an
n-dependent normalization to help us cancel or improve a factor that
appears elsewhere in our algebraic travails.

Note that orthogonal (or orthonormal) functions are very useful
in physics! They are the basis of functional analysis, both in
quantum mechanics and in the DE's of the rest of physics as well.
Hence our formalization of the process:

Notation: Let
\langle f | g \rangle = \int_a^b f^*(x) g(x) w(x) \, dx,
| g \rangle = g(x),
\langle f | = f^*(x), etc. Then we can notationally formalize
the relationship between functional orthogonality and the
underlying vector space with its suitably defined norm.

That is, suppose f(x) is a piecewise continuous function on
[a, b] and suppose further that the \phi_n are orthogonal and
complete. Then:

  f(x) = \sum_n c_n \phi_n(x)   (45)

where

  c_n = \langle \phi_n | f \rangle   (46)

so that

  f(x) = \sum_n \langle \phi_n | f \rangle \phi_n(x)   (47)

is a consistent statement for general f(x) only if

  \langle \phi_n | \phi_m \rangle = \delta_{nm}   (48)

In these equations we repeatedly write down series sums over a
(possibly and indeed usually infinite) set of functions. We must,
therefore, address the issue of whether, and how, these sums
converge. Note that we were pretty slack in our requirements on
f(x) - it only had to be piecewise continuous, for example, and we
said nothing about how it behaved at the points a and b themselves.

Clearly we cannot converge uniformly (at each and every point) since
at certain points the ``function'' really isn't - whether or not you
like to think that it has two values at the discontinuities, it
clearly approaches different values from the two sides in a limiting
sense that makes its value at the limit point hard to uniquely
define. It turns out that the kind of convergence that we can expect
is at least:

  \lim_{N \to \infty} \int_a^b \left| f(x) - \sum_{n=1}^{N} c_n \phi_n(x) \right|^2 w(x) \, dx = 0   (49)

This is called convergence in the mean since the mean
square error measured in this way goes to zero. This is a wee bit
weaker than plain old ``convergence'' or ``uniform convergence'', but
for a continuous, smooth, well-behaved function the convergence
is effectively uniform.

Note the limit N \to \infty (Cauchy criterion for convergence) inherent
in this statement. This limit must exist for us to be able to talk
about the sum with a straight face instead of a smirk. This is
because in the real world we cannot do infinite sums. If we make a
finite approximation:

  f_N(x) = \sum_{n=1}^{N} c_n \phi_n(x)   (50)

then a very important conclusion that can be shown (we won't show it)
is that the c_n's that minimize the least squares error (the
integral above) do not depend on N and are in fact

  c_n = \langle \phi_n | f \rangle   (51)

This is not true if the \phi_n are not orthogonal. For
example, if they are x^n, so the sum is a power series, adding each
term requires that one rearrange all the c_n to obtain the new
``best fit''. You can check this for homework, too.
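The N-independence is easy to see numerically. The sketch below (Python; helper name mine) does a weighted least-squares fit of e^x on [-1, 1] in two bases. Gauss-Legendre nodes are used so that the discrete inner product reproduces the exact one for the polynomials involved; in the orthogonal (Legendre) basis the first four coefficients do not move when the basis grows, while in the monomial basis they all get rearranged:

```python
import numpy as np
from numpy.polynomial import legendre

# Weighted least-squares fit of f(x) = e^x on [-1, 1].
x, wts = legendre.leggauss(50)          # Gauss-Legendre nodes and weights
f = np.exp(x)

def lsq_coeffs(design_cols):
    """Coefficients minimizing sum_i w_i (f_i - model_i)^2."""
    A = np.sqrt(wts)[:, None] * np.column_stack(design_cols)
    return np.linalg.lstsq(A, np.sqrt(wts) * f, rcond=None)[0]

# Orthogonal (Legendre) basis: adding terms leaves earlier ones alone.
P = [legendre.Legendre.basis(n)(x) for n in range(7)]
c3 = lsq_coeffs(P[:4])
c6 = lsq_coeffs(P[:7])
print(np.max(np.abs(c6[:4] - c3)))      # tiny: essentially roundoff

# Monomial basis: adding a term rearranges ALL earlier coefficients.
M = [x**n for n in range(7)]
d3 = lsq_coeffs(M[:4])
d6 = lsq_coeffs(M[:7])
print(np.max(np.abs(d6[:4] - d3)))      # the "best fit" shifted
```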

In the meantime, we can understand the fundamental idea by multiplying
out the convergence relation and noting that (since
c_n = \langle \phi_n | f \rangle):

  \int_a^b \left| f - \sum_{n=1}^{N} c_n \phi_n \right|^2 w \, dx
    = \langle f | f \rangle - \sum_{n=1}^{N} |c_n|^2 \geq 0   (52)

Rearranging, we get Bessel's Inequality:

  \sum_{n=1}^{N} |c_n|^2 \leq \langle f | f \rangle   (53)

If and only if the representation is complete, we obtain

  \sum_{n=1}^{\infty} |c_n|^2 = \langle f | f \rangle   (54)

This is known as Parseval's Theorem.
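As a concrete illustration (a standard example, not one worked in the text): for f(x) = x on (-\pi, \pi) the Fourier coefficients are b_n = 2(-1)^{n+1}/n, and Parseval's theorem reads 2\pi^2/3 = \sum_n 4/n^2 (equivalently \sum_n 1/n^2 = \pi^2/6). A Python check:

```python
import numpy as np

# Parseval check for f(x) = x on (-pi, pi): b_n = 2 (-1)^(n+1) / n, so
#   (1/pi) * integral(f^2) = sum_n b_n^2,  i.e.  2 pi^2 / 3 = sum_n 4/n^2.
n = np.arange(1, 200001)
b = 2.0 * (-1.0) ** (n + 1) / n
lhs = (2.0 / 3.0) * np.pi**2            # (1/pi) * int_{-pi}^{pi} x^2 dx
rhs = np.sum(b**2)                      # partial Parseval sum
print(lhs, rhs)                         # rhs approaches lhs from below
```

Every partial sum sits below the left-hand side, which is Bessel's inequality in action; the gap closes like 4/N.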

Let us examine all this in the context of a well-known example.
Consider a Fourier Series for on the interval
:

(55)

Thus the functions are
,
with . We can normalize these functions easily enough:

(56)

(57)

With these definitions,

(58)

(59)

(60)

where the first relation works for too.

If we use only the functions, they are orthogonal but not
(by themselves) complete. The expansion may only poorly approximate
the function and

(61)

Closure and Completeness

Now we can state the closure relation that is directly and
intimately connected to completeness. Assume that an orthonormal set
has w(x) = 1 (or that we have absorbed the weight factor into the
normalized functions). Then:

  f(x) = \sum_n c_n \phi_n(x)   (62)

  c_n = \int_a^b \phi_n^*(x') f(x') \, dx'   (63)

But:

  f(x) = \int_a^b \delta(x - x') f(x') \, dx'   (64)

(by definition of the Dirac delta function!) Thus:

  f(x) = \int_a^b \left[ \sum_n \phi_n(x) \phi_n^*(x') \right] f(x') \, dx'   (65)

  \sum_n \phi_n(x) \phi_n^*(x') = \delta(x - x')   (66)

together make the closure relation. It is in this sense that we can
write the identity as the sum of unit vector projections:

  \sum_n | \phi_n \rangle \langle \phi_n | = 1   (67)

To continue using Fourier representations as an example, consider the
normalized function:

  \phi_n(x) = \frac{1}{\sqrt{2\pi}} e^{inx}   (68)

on the interval (-\pi, \pi) with n = 0, \pm 1, \pm 2, \ldots. We can read off of the
above that:

  \sum_{n=-\infty}^{\infty} \phi_n(x) \phi_n^*(x') = \delta(x - x')   (69)

  \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} e^{in(x - x')} = \delta(x - x')   (70)

which is a crucial relation in physics. At some point in the
near future I will probably lecture you to tears on how \delta(x)
is not a function but a distribution, or an integral operator,
and how you can get into real trouble treating it as a function.
However, you have probably heard this lecture four or five times
already, so we'll pass it by for now.
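The truncated closure sum (1/2\pi) \sum_{|n| \leq N} e^{in(x-x')} already acts like \delta(x - x') when integrated against a smooth function. A Python sketch (function name mine) on the interval (-\pi, \pi); since the test function here is band-limited, the truncated sum reproduces f(x_0) essentially exactly:

```python
import numpy as np

def closure_sum(u, N):
    """Truncated closure sum (1/2pi) sum_{n=-N}^{N} e^{i n u}, real form."""
    n = np.arange(1, N + 1)
    return (1.0 + 2.0 * np.cos(np.outer(u, n)).sum(axis=1)) / (2.0 * np.pi)

# Integrate the truncated "delta" against a smooth 2pi-periodic function;
# the periodic rectangle rule is exact for trigonometric polynomials.
M = 2000
xp = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = np.cos(3 * xp) + 0.5
x0 = 0.7
K = closure_sum(x0 - xp, N=5)
approx = (K * f).sum() * (2 * np.pi / M)
print(approx, np.cos(3 * x0) + 0.5)     # the two agree
```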

Gram-Schmidt Orthogonalization

In many cases one can obtain a linearly independent set of functions
\{\psi_n\}, but find upon examination that they are (alas!) not
orthogonal. For example, \psi_n = x^n. Fortunately, it is always
possible to orthogonalize the set of functions via a simple
procedure. Let us define \{\psi_n\} to be the set of non-orthogonal
but linearly independent functions. We will take suitable linear
combinations of them to generate an orthogonal set which we will call
\{\phi_n\}. Finally, we can normalize this orthogonal set any
way we like to form an orthonormal representation \{\varphi_n\}.

To understand the Gram-Schmidt procedure, it is easiest to consider it
for ordinary Cartesian vectors. Suppose \vec{A} and \vec{B} are
two non-orthogonal, but linearly independent vectors that span a
two-dimensional plane as drawn below:

In this figure, we see that we can systematically construct \vec{B}'
by projecting \vec{B} onto \hat{A} and subtracting
its \hat{A}-directed component from \vec{B} itself. What is left
is necessarily orthogonal to \vec{A}. Algebraically:

  \hat{A} = \frac{\vec{A}}{|\vec{A}|}   (71)

We want to find \lambda such that
\hat{A} \cdot \vec{B}' = 0. We
try

  \vec{B}' = \vec{B} + \lambda \hat{A}   (72)

Putting this into the condition,

  \hat{A} \cdot \vec{B} + \lambda \hat{A} \cdot \hat{A} = 0   (73)

or

  \lambda = - \hat{A} \cdot \vec{B}   (74)

so that

  \vec{B}' = \vec{B} - (\hat{A} \cdot \vec{B}) \hat{A}   (75)

exactly as we expected from the figure.

This procedure works just as well for sequential operations and
with functional ``vectors'' instead of real space vectors. Pick any
function \psi_0. Let

  \phi_0 = \psi_0   (76)

and go ahead and normalize it:

  \varphi_0 = \frac{\phi_0}{\sqrt{\langle \phi_0 | \phi_0 \rangle}}   (77)

(I won't explicitly include the normalization step in the future, but
you'll see how/where it occurs). Assume

  \phi_1 = \psi_1 + a_{10} \varphi_0   (78)

The condition
\langle \varphi_0 | \phi_1 \rangle = 0 leads to

  a_{10} = - \langle \varphi_0 | \psi_1 \rangle   (79)

(since
\varphi_0 is normalized) and

  \phi_1 = \psi_1 - \langle \varphi_0 | \psi_1 \rangle \varphi_0   (80)

We can now normalize this into
\varphi_1.

We then try to make

  \phi_2 = \psi_2 + a_{20} \varphi_0 + a_{21} \varphi_1   (81)

such that

  \langle \varphi_0 | \phi_2 \rangle = 0   (82)

  \langle \varphi_1 | \phi_2 \rangle = 0   (83)

These are two equations, there are two unknowns, and everything is
hunky-dory if \psi_2
is linearly independent of the first two
(true by hypothesis and revealed even if untrue!). So iterate until
you run out of linearly independent functions to orthogonalize or you
get bored.

There is a clever example in both Arfken and Wyld where they show that
if you take \psi_n = x^n and apply this procedure on the interval
[-1, 1] with w(x) = 1, you obtain the (gasp!) Legendre
Polynomials! In fact, if one varies the interval and the weight
function, one can obtain all the known orthogonal polynomials in
this manner!
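You can watch the Legendre polynomials emerge from the procedure in a few lines. A Python sketch (helper names mine) using numpy's Polynomial class, with the inner product \langle p | q \rangle = \int_{-1}^{1} p(x) q(x) \, dx (i.e. w(x) = 1):

```python
import numpy as np
from numpy.polynomial import Polynomial

def inner(p, q):
    """<p|q> = integral of p(x) q(x) on [-1, 1] with weight w(x) = 1."""
    P = (p * q).integ()
    return P(1.0) - P(-1.0)

def gram_schmidt(nmax):
    """Orthogonalize psi_n = x^n; returns the orthogonal set phi_n."""
    phis = []
    for n in range(nmax + 1):
        phi = Polynomial.basis(n)        # psi_n = x^n
        for prev in phis:                # subtract earlier projections
            phi = phi - (inner(prev, phi) / inner(prev, prev)) * prev
        phis.append(phi)
    return phis

for phi in gram_schmidt(3):
    print(phi.coef)    # 1; x; x^2 - 1/3; x^3 - (3/5) x
```

Up to normalization these are P_0 through P_3; dividing each phi_n by its value at x = 1 gives the conventional P_l(1) = 1 normalization.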

Insert Table

These are quite useful for both expansions and
numerical integration (quadrature).

The Sturm-Liouville Theorem

Suppose one has a general 2nd order linear homogeneous ODE:

  a(x) y'' + b(x) y' + c(x) y + \lambda d(x) y = 0   (84)

where we assume only that the coefficient functions are analytic on (a, b)
(although a regular singularity at a or b is ok). We also insist
that a(x) \neq 0 on the interior - all interior points must be
``ordinary''. This sort of 2OLHODE occurs very, very frequently in
physics, and hence in this course. It can always (by suitable
algebraic manipulations) be put in the self adjoint or Sturm-Liouville form:

  \frac{d}{dx} \left[ p(x) \frac{dy}{dx} \right] + q(x) y + \lambda w(x) y = 0   (85)

Once it is in this form, we can easily show that it has a really nifty
property. Let us define the linear operator L such
that:

  L y = \frac{d}{dx} \left[ p(x) \frac{dy}{dx} \right] + q(x) y   (86)

  \int_a^b v^* (L u) \, dx = \int_a^b v^* \left\{ \frac{d}{dx} \left[ p \frac{du}{dx} \right] + q u \right\} dx   (87)

(etc.) where u and v are (almost) arbitrary twice
differentiable functions! We can integrate by parts twice:

  \int_a^b v^* (L u) \, dx
    = \left[ p \left( v^* \frac{du}{dx} - \frac{dv^*}{dx} u \right) \right]_a^b
    + \int_a^b (L v)^* u \, dx   (88)

(where the q u terms require no work, and the terms with first
derivatives of both u and v cancel between the two integrations by
parts). If we assume only the very weak
condition that u and v are well-enough-behaved on the boundaries
so that

  \left[ p v^* \frac{du}{dx} \right]_a^b = 0   (89)

  \left[ p \frac{dv^*}{dx} u \right]_a^b = 0   (90)

(etc.) the boundary term vanishes and

  \int_a^b v^* (L u) \, dx = \int_a^b (L v)^* u \, dx   (91)

Thus

  \langle v | L u \rangle = \langle L v | u \rangle   (92)

With this observation in hand, we can easily proceed to prove 2/3 of
the Sturm-Liouville Theorem for solutions to nearly general
(self-adjoint) 2OLHODE's.

We assume (here as above):

p(x), q(x), w(x) real and analytic in (a, b)

p(x) \neq 0 and w(x) \neq 0 on (a, b)

Since w(x) doesn't change sign, we can always
arrange for it to be positive, so w(x) > 0 on (a, b).

That y satisfies ``suitable'' homogeneous B.C.'s at a and
b. We require that if y is a solution, so is C y for any constant C.
Almost any homogeneous BC works; any BC that works is homogeneous.
Exceptions will be clear in the context of physics.

Then solutions to the S-L 2OLHODE are a discrete set of eigenfunctions
(y_n) and corresponding eigenvalues (\lambda_n) where:

the \lambda_n are all real! (Self-adjoint property)

the y_n's are all orthogonal,
\langle y_n | y_m \rangle = 0 if
\lambda_n \neq \lambda_m.

Note that sometimes
\lambda_n = \lambda_m for distinct n \neq m, so you
might think that the y_n's don't have to be orthogonal. However, they
do have to be linearly independent (or they are not really distinct
solutions: their Wronskian vanishes!). So, we can GSO them to
orthogonalize the
\lambda-degenerate subspace. So for all practical purposes, after a
bit of work, all the y_n are orthogonal or can be made so even if
their eigenvalues are the same.

The set of y_n are complete!

In summary, given a set of y_n's that solve a 2OLHODE, we can
always write them as a complete orthonormal set \{\varphi_n\}.

This is a really useful theorem. It immediately implies the
orthogonality of all the common ODE solution sets and orthogonal
function sets in use in physics, e.g.
y'' + \lambda y = 0, with its
well-known Fourier solutions on the interval (0, L) with Dirichlet
boundary conditions y(0) = 0, y(L) = 0:
y_n(x) = \sin(n \pi x / L) with
\lambda_n = (n \pi / L)^2. Whew!
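A quick numerical illustration (a sketch, not part of the course material proper): discretizing -y'' = \lambda y on (0, 1) with Dirichlet boundary conditions by centered differences gives a real symmetric matrix, the discrete analogue of the self-adjoint operator, so the eigenvalues come out real and the eigenvectors orthogonal, with \lambda_n approaching (n \pi)^2:

```python
import numpy as np

# Discretize -y'' = lambda y on (0, 1) with y(0) = y(1) = 0.  The centered
# finite-difference operator is a real symmetric (Hermitian) matrix.
M = 500
h = 1.0 / (M + 1)
main = 2.0 * np.ones(M) / h**2
off = -1.0 * np.ones(M - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam, Y = np.linalg.eigh(A)     # real eigenvalues, orthonormal eigenvectors
print(lam[:3])                 # approaches pi^2 * [1, 4, 9]
print(np.max(np.abs(Y.T @ Y - np.eye(M))))   # orthonormality check
```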

It is easy and instructive to prove the first two properties predicted
by the SL theorem. The Hermitian property above implies:

  \langle v | L u \rangle = \langle L v | u \rangle   (93)

  \int_a^b v^* (L u) \, dx = \int_a^b (L v)^* u \, dx   (94)

for any u, v that satisfy the B.C.'s (not necessarily the
ODE). Note well that there is no w(x) in these equations. So we
define:

  \langle v | L | u \rangle = \int_a^b v^* (L u) \, dx   (95)

in analogy with the braket notation. In this shorthand, Hermitian
means:

  \langle v | L | u \rangle = \langle u | L | v \rangle^*   (96)

and we can conclude that
\langle u | L | u \rangle is real.

NOW, let y_n and y_m be any two solutions
corresponding to \lambda_n and \lambda_m. Then:

  L y_n = - \lambda_n w y_n, \qquad L y_m = - \lambda_m w y_m   (97)

or (subtracting)

  \langle y_m | L | y_n \rangle - \langle y_n | L | y_m \rangle^*
    = (\lambda_m^* - \lambda_n) \int_a^b y_m^* y_n w \, dx = 0   (98)

THUS

  (m = n): \quad (\lambda_n^* - \lambda_n) \int_a^b |y_n|^2 w \, dx = 0   (99)

(and we conclude that the \lambda_n are real).

  (m \neq n): \quad (\lambda_m - \lambda_n) \int_a^b y_m^* y_n w \, dx = 0   (100)

(and we conclude that the y_n are all orthogonal).

It turns out to be quite difficult and involved to prove completeness.
One basically has to show closure of one sort or another, and closure
is not immediately obvious. It has long since been proven, however,
and is shown in serious books like Hilbert and Courant if you
want/need to look over the proof some day.

From this day forth, then, I will assume that you just know that the
solutions to nearly every 2OLHODE that we treat in this course (and
the rest of your courses) form an orthogonal (appropriately normalized, in
practice) basis, out of which it is perfectly clear that general
solutions can be built via superposition.

It is time now for a short interlude, first on tensors (to get the
trivia out of the way), then on curvilinear coordinates (to get you to
where you can appreciate separation of variables in the Big Three
coordinate systems) and we'll hop on back to ODE's, this time in the
context of PDE's and ``real physics''.
