I'm teaching a differential equations class and hoping to give a reason for the Frobenius series method beyond simply "we guess these solutions". Now, for the Euler equation
$$t^n x^{(n)}(t) + a_{n - 1} t^{n - 1} x^{(n - 1)}(t) + \dots + a_0 x(t) = 0$$
there is a good, easy explanation for why the fundamental solutions have the form $x(t) = t^r$, where $r$ solves the indicial equation and repeated roots are handled by multiplying by powers of $\ln t$: just make the change of variables $s = \ln t$, verify that it turns each $t^k x^{(k)}(t)$ into a constant-coefficient linear combination of the derivatives $x^{(j)}(s)$, and copy down the solutions of the resulting constant-coefficient linear equation: $s^k e^{rs}$, with $k$ less than the multiplicity of $r$ as a root of the characteristic polynomial, which here turns out to be exactly the indicial polynomial.
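To make the second-order case concrete (a standard computation, spelled out here for reference): with $s = \ln t$ and dots denoting $d/ds$, the chain rule gives $t\,x'(t) = \dot x$ and $t^2 x''(t) = \ddot x - \dot x$, so
$$t^2 x'' + a_1 t x' + a_0 x = 0 \quad\text{becomes}\quad \ddot x + (a_1 - 1)\,\dot x + a_0 x = 0,$$
whose characteristic polynomial $r^2 + (a_1 - 1) r + a_0 = r(r-1) + a_1 r + a_0$ is exactly the indicial polynomial.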

But there isn't an apparent generalization of this analogy to arbitrary differential equations with regular singular points,
$$t^n x^{(n)}(t) + a_{n - 1}(t) t^{n - 1} x^{(n - 1)}(t) + \dots + a_0(t) x(t) = 0,$$
with the $a_i(t)$ analytic around $t = 0$. You can make the same change of variables and render the equation non-singular, but:

- the equation will still have variable coefficients;

- even if you obtained the solutions as power series, the substitution $s = \ln t$ would turn them into series in $\ln t$, which is not desirable;

- when two roots of the indicial polynomial differ by an integer, one of the solutions won't even be of the desired form;

- when there's a repeated root, the second solution looks like
$$x_2(t) = x_1(t) \ln t + t^r v(t),$$
where $v(t)$ is some other power series, which is not what you'd get from the change of variables in any obvious way.

Now, it is true that there is the following relationship between $x_1(t)$ and $v(t)$:
$$x_1(t) = \sum_{i = 0}^\infty b_i(r)\, t^{i + r}, \qquad
v(t) = \sum_{i = 0}^\infty b_i'(r)\, t^{i},$$
where we differentiate the coefficients with respect to $r$, considered somehow as a continuous variable.
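For reference, here is the bookkeeping behind that (the standard Frobenius computation, sketched): if the $b_i(r)$ are chosen to satisfy the recurrence for $i \ge 1$, then applying the differential operator $L$ to $x(t, r) = \sum_i b_i(r)\, t^{i+r}$ leaves only the lowest-order term,
$$L\bigl[x(t, r)\bigr] = b_0\, I(r)\, t^r,$$
where $I$ is the indicial polynomial. Differentiating in $r$ gives $L\bigl[\partial_r x(t, r)\bigr] = b_0 \bigl(I'(r) + I(r) \ln t\bigr) t^r$, which vanishes when $r_0$ is a double root; and
$$\partial_r x(t, r) = \ln t \sum_i b_i(r)\, t^{i+r} + \sum_i b_i'(r)\, t^{i+r} = x_1(t) \ln t + t^r v(t)$$
at $r = r_0$, which is exactly the form above.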

So, my question:

Is there some connection, via a transform, change of variables, or approximation, that produces the Frobenius method by analogy with non-singular equations? Perhaps just when the roots of the indicial polynomial do not differ by integers?

2 Answers

> Is there some connection [...] that produces the Frobenius method by analogy with non-singular equations?

When solving a differential equation by power series, you assume that your solution can be expressed as a power series (which is quite a reasonable assumption, since analytic functions can be expressed that way). You substitute the power series into the equation and find expressions for the coefficients.
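As a minimal sketch of this procedure in Python with SymPy (my illustration; the test equation $y'' + y = 0$ and the truncation order are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x')
N = 8                                # truncation order (my choice)
a = sp.symbols(f'a0:{N}')            # unknown coefficients a0..a7

# Truncated power-series ansatz y = sum a_n x^n, tried on y'' + y = 0
y = sum(a[n] * x**n for n in range(N))
residual = sp.expand(sp.diff(y, x, 2) + y)

# Setting each power's coefficient to zero gives a linear recurrence
eqs = [residual.coeff(x, n) for n in range(N - 2)]
sol = sp.solve(eqs, a[2:])

# Everything is determined by a0 and a1; the recurrence
# a_{n+2} = -a_n / ((n+1)(n+2)) reproduces a0*cos(x) + a1*sin(x)
print(y.subs(sol))
```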

This works quite well when the variable coefficients of the derivatives are "nice" functions. But when they have singularities, there's a problem with this approach: when you substitute your power series and simplify, you arrive at something along the lines of

$$(\text{some expression involving } n)\cdot a_n = 0$$

and you find that you cannot make the expression in the parentheses equal $0$ for any value of $n$, which implies that it is $a_n$ that must be $0$. But this gives us just a finite power series (that is, a polynomial); or, worse, all the coefficients are zero, leaving only the trivial solution $y = 0$. That is correct, but not of much interest.
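To see the failure concretely (the particular equation is my choice of illustration): the Euler equation $2t^2 x'' + 3t x' - x = 0$ has actual solutions $t^{1/2}$ and $t^{-1}$, but substituting a plain series $x = \sum_{n \ge 0} a_n t^n$ gives
$$\sum_{n=0}^\infty \bigl(2n(n-1) + 3n - 1\bigr)\, a_n t^n = \sum_{n=0}^\infty (2n - 1)(n + 1)\, a_n t^n = 0,$$
and since $(2n-1)(n+1) \neq 0$ for every integer $n \ge 0$, all the $a_n$ must vanish: only the trivial solution survives.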

The reason we cannot get a nice power series in this case is that our assumption was wrong: what mathematics is trying to tell us here is that a solution cannot be expressed as a plain power series when it has a singularity, because then negative powers of $x$ are involved (which means division by $0$ at $x=0$), or fractional powers (roots, which are undefined for some arguments), or even arbitrary real or complex powers.

Enter the Frobenius method.

This can be fixed easily by assuming something more: that our power series is multiplied by $x^r$, where $r$ can be any real or complex exponent. This factor accounts for the non-natural powers of $x$ in the solution. So we take the ansatz (trial solution):

$$y(x) = x^r \sum_{n=0}^\infty a_n x^n$$

which we can rewrite more simply as:

$$y(x) = \sum_{n=0}^\infty a_n x^{r+n}$$

When you substitute this trial function into your equation, you get the indicial equation, from which you can determine the value(s) of the exponent $r$.
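A quick symbolic check of this step, sketched in SymPy (the choice of Bessel's equation as the test case is mine, not part of the question):

```python
import sympy as sp

x, r, nu, a0 = sp.symbols('x r nu a0')

# Lowest-order term of the Frobenius ansatz y = x^r * (a0 + a1*x + ...);
# only this term matters for the indicial equation.
y = a0 * x**r

# Bessel's equation: x^2 y'' + x y' + (x^2 - nu^2) y = 0
lhs = x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x) + (x**2 - nu**2) * y

# The x-free part of lhs/x^r is the indicial polynomial; the x^2 term
# feeds the recurrence for the higher coefficients.
indicial = sp.expand(lhs / x**r)
print(indicial)                           # a0*(r**2 - nu**2) + a0*x**2
print(sp.solve(indicial.coeff(x, 0), r))  # [-nu, nu]
```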

Edit: I have added answers to your follow-up questions below, to clarify things somewhat more.

> Why should multiplying by this weird power help?

As I said, some functions are not "nice": they have singularities (e.g., division by zero, discontinuities, etc.). Such functions cannot be expressed as a simple power series like

$$y = \sum_{n=0}^\infty a_n x^n$$

because the exponents of their powers are not natural numbers. The exponents can be negative (division), fractional (root extraction), or even arbitrary real or complex numbers. So we need to modify our power series to account for such "weird" exponents. We do that by adding some other constant $r$ (which may itself be negative) to the exponent $n$:

$$y = \sum_{n=0}^\infty a_n x^{n+r}$$

While $n$ ranges only over the natural numbers, $r$ can now be any number (even a complex number), and we have to determine which number it is (sometimes there is more than one answer). That's what the indicial equation is for.

But before that, notice something: after adding $r$ to the exponent, this is no longer a standard power series (its exponents can now be non-natural). But we can use the rules for exponents to extract the $r$th power of $x$:

$$y = \sum_{n=0}^\infty a_n x^n x^r$$

and then move it out of the sum, since it is a common factor of all the terms:

$$y = x^r \sum_{n=0}^\infty a_n x^n$$

and we have our original power series back, but now multiplied by some power of $x$. And this answers the question of why we multiply the trial series by $x^r$.

> If a power series should occur, is there any relationship with the series solutions to constant-coefficient equations?

Yes, there is. For constant-coefficient equations (taking the second-order case with distinct roots), the solution has the form:

$$ y = C_1 e^{r_1 x} + C_2 e^{r_2 x}$$

where $r_1$ and $r_2$ are the roots of the "characteristic equation". You could get the same answer with a plain power series, since constant coefficients are "nice" (analytic) functions too. You'd find a pattern in the coefficients of the power series, with factorials in it, which indicates the exponential function, and this is indeed the solution. With some pattern-matching skill, you can recover the original function (the exponential in this case) from its power series. Been there, done that; it is perfectly possible.
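For instance, in the first-order case (chosen to keep the pattern short): substituting $y = \sum_{n \ge 0} a_n x^n$ into $y' = ky$ gives $(n+1)\, a_{n+1} = k\, a_n$, hence
$$a_n = \frac{k^n}{n!}\, a_0 \qquad\Longrightarrow\qquad y = a_0 \sum_{n=0}^\infty \frac{(kx)^n}{n!} = a_0\, e^{kx},$$
with the factorials signalling the exponential exactly as described.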

In the case of a variable-coefficient equation such as the Cauchy-Euler (or equidimensional) equation, we are told that the solution has the form:

$$y = C_1 x^{r_1} + C_2 x^{r_2}$$

which doesn't look like the exponentials we are familiar with from constant-coefficient equations. But note that you can rewrite $x$ as $e^{\ln x}$, and when you substitute this into the solution pattern you get:

$$y = C_1 (e^{\ln x})^{r_1} + C_2 (e^{\ln x})^{r_2}$$

and using the laws of exponents we see that:

$$y = C_1 e^{r_1 \ln x} + C_2 e^{r_2 \ln x}$$

and this form might look more familiar, since it is a sum of exponentials again, but now in $\ln x$ instead of $x$ directly.

> The ansatz approach is certainly motivational, but it's not intuitive and gets less satisfying the more you learn.

Well, intuition is also something you learn. This might not feel intuitive to you yet, but as you gain experience and learn some patterns, you start to notice them when they appear again and again in your equations, and then they become intuitive.

> These questions can't be answered by ansatz, which students always interpret as "guess and check", which it sort of is.

It's not much "guessing", it's rather an "educated guess", or "intelligent heuristics". You don't guess just anything ─ you use what you know for your guess. For example, when you have simple equation like this:

$$y' = k y$$

you can think along these lines: "What function $y$ is proportional to its own derivative?" There is essentially only one such function: the exponential, $y = e^x$. But we want a function whose derivative is proportional to the function itself with a particular constant of proportionality, namely $k$. If you know that $\left(e^{kx}\right)' = k e^{kx}$, then you can guess the correct solution right away. But if you don't, you can leave a constant "to be determined later" in your trial solution, substitute it into the equation, and see what happens and what value the constant must take for the equation to be satisfied.

It happens that the exponential function solves every constant-coefficient linear differential equation, and this is not a coincidence: the exponential is an eigenfunction (or proper function) of differentiation. This means that when you differentiate it, it comes out untouched, only multiplied by a constant (which is then the eigenvalue; in the case above it was $k$). So it is not much of a "guess" to try the exponential on other similar equations and see if it still fits. If for some reason it doesn't fit, you lose nothing: mathematics will tell you that your assumption was wrong, and sometimes you can see what needs to be improved to get the correct answer. For example, you may need some power of $e^x$ (which is $e^{kx}$, as in our example above), or you may need to multiply by $x$, or some power of it (like $x^r$ in our example with Frobenius series).

The same goes for variable-coefficient equations like the Cauchy-Euler equation. There a power of $x$ multiplies each derivative, and that power has the same degree as the order of the derivative. Recall that when we differentiate a power of $x$, its exponent drops by 1 and comes down as a constant multiplier. But the variable coefficient is also a power of $x$, so multiplying the derivative by it raises the power back up; and since the degree of that coefficient equals the order of the derivative, whatever came down in differentiation goes back up to the same level. So we get the same power of $x$ everywhere (a common factor!), each multiplied by some constant, and we can try to choose things so that the whole equation collapses to $0$. Substituting $x^r$ into the equation as a trial function isn't a bad idea at all in this case. And, as you can see, you can deduce what function will work from the form of the equation, by observing what the derivatives do to it and how you can make the whole equation cancel.
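Carried out for the second-order Cauchy-Euler equation (with generic constants $a$ and $b$): substituting the trial function $y = x^r$,
$$x^2 (x^r)'' + a x (x^r)' + b x^r = \bigl(r(r-1) + ar + b\bigr)\, x^r,$$
so every term carries the same power $x^r$, and the trial function is a solution exactly when $r$ is a root of $r(r-1) + ar + b = 0$.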

The ansatz approach is certainly motivational, but it's not intuitive and gets less satisfying the more you learn. Why should multiplying by this weird power help? If a power series should occur, is there any relationship with the series solutions to constant-coefficient equations? These questions can't be answered by ansatz, which students always interpret as "guess and check", which it sort of is. Using series on CC equations is also an ansatz, but at least we can hope to stop the buck there.
– Ryan Reich, Oct 30 '13 at 14:24

Yes, it's a pity that books and courses rarely explain such things. So let me explain a little more: I have edited my answer above and added some more explanation for your questions.
– SasQ, Oct 30 '13 at 21:01

This is really a heroic effort you have evolving here. Unfortunately, it feels like you haven't reached the core of my conceptual issue. I think the best point you make (which, to be clear, is something I did already know) is that, for constant-coefficient equations, the basis solutions are actually eigenfunctions of the corresponding differential operator. Is there a similar explanation that applies directly to regular singular equations and yields Frobenius series?
– Ryan Reich, Oct 31 '13 at 3:05


There is a relation between these equations and the representation theory of the so-called generalized Weyl algebras of Bavula, or at least with their category $\mathcal O$. I've never seen this connection in print; but if you look into the papers that describe Verma modules and so on for these algebras, with the sort of computation that Frobenius's method makes you do in mind, you should immediately see the resemblance. (This only really covers the case where the coefficients are polynomials, not arbitrary analytic functions.)
– Mariano Suárez-Alvarez♦, Oct 31 '13 at 3:18

As you note, Ryan, it is quite sensible to try power functions for the Euler equation. Now the general equation with regular singular points is an analytic deformation of the Euler equation (near each of those singular points), in that its coefficients are analytic functions which, when evaluated at 0, give you an Euler equation. It is therefore very natural to look for solutions which are deformations of the solutions of the Euler equation. That is precisely what the Frobenius method does.

And it works wonders. For example, the indicial equation of the general equation is precisely the same as that of the corresponding Euler equation, so the behaviour of solutions is qualitatively the same as for the Euler equation.
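In the notation of the question, for the second-order case: writing the equation as $t^2 x'' + a_1(t)\, t x' + a_0(t)\, x = 0$ with $a_0, a_1$ analytic at $0$, the indicial equation is
$$r(r-1) + a_1(0)\, r + a_0(0) = 0,$$
which is exactly the indicial equation of the "frozen" Euler equation $t^2 x'' + a_1(0)\, t x' + a_0(0)\, x = 0$ obtained by evaluating the coefficients at $0$.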

One does the same sort of thing when implementing the cool idea of «variation of constants».
– Mariano Suárez-Alvarez♦, Oct 30 '13 at 22:06

Thanks, Mariano, for the "deformation" perspective. You have a good point that variation of parameters (or constants) explains this, and for that matter most things, though it does not necessarily explain the specific nature of Frobenius series: that the Euler equation's solutions get multiplied by analytic functions, not to mention the technicality when roots differ by an integer. It's a unifying perspective, but only in the way that Gaussian elimination is a unifying perspective on linear algebra.
– Ryan Reich, Oct 31 '13 at 3:01

Well, that the solutions of the undeformed equation change in that way when you deform it should be everyone's first guess! If that were not the case, we'd have something rather weird on our hands.
– Mariano Suárez-Alvarez♦, Oct 31 '13 at 3:14

@Ryan, the semi-weird second solution in the case of repeated roots comes from a standard idea: if you know a solution $u$, look for solutions of the form $uv$ and see what equation $v$ must satisfy. This works for linear equations for which you know one solution, and the subsidiary equation you get for $v$ has order one less.
– Mariano Suárez-Alvarez♦, Oct 31 '13 at 4:51
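A sketch of that reduction-of-order computation in the second-order case (notation mine): if $u$ solves $y'' + P y' + Q y = 0$ and we set $y = uv$, then
$$y'' + P y' + Q y = u v'' + (2u' + P u)\, v' + \underbrace{(u'' + P u' + Q u)}_{=\,0}\, v,$$
so $w = v'$ satisfies the first-order equation $u w' + (2u' + P u)\, w = 0$, giving $v' = C u^{-2} e^{-\int P}$. In the repeated-root Frobenius case, integrating this is what produces the $\ln t$ in the second solution.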