As there are many different notations for Bessel functions, can you please clarify what $I_2$ and $K_2$ are?
–
timur May 20 '14 at 21:28

They are the solutions of $$ x^2\frac{d^2 w}{dx^2}+x\frac{dw}{dx}-(x^2+4)w=0, $$ with $I_2(x)=\frac{x^2}{8}\left(1+\ldots\right)$ as $x\to 0$, and $K_2$ is the solution such that $K_2(x)\approx \exp(-x)\sqrt{\frac{\pi}{2x}}$ for $x$ large and positive. See the DLMF.
–
Jung Wen Chen May 21 '14 at 8:28
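These normalizations are easy to check numerically. The following is a quick sketch using SciPy's `scipy.special` Bessel routines (sample points and tolerances are my choices): the leading small-argument term of $I_2$ and the stated large-$x$ asymptotics of $K_2$.

```python
import numpy as np
from scipy.special import iv, kv

# Small arguments: I_2(x) ~ (x/2)^2 / 2! = x^2/8 (DLMF 10.25.2 with nu = 2).
x = 1e-3
assert abs(iv(2, x) / (x**2 / 8) - 1) < 1e-6

# Large arguments: K_2(x) ~ exp(-x) * sqrt(pi/(2x)) (DLMF 10.25.3);
# the relative error decays like 15/(8x), hence the loose tolerance.
x = 50.0
assert abs(kv(2, x) / (np.exp(-x) * np.sqrt(np.pi / (2 * x))) - 1) < 0.05
print("ok")
```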

3 Answers

This discussion makes me even firmer in my opinion that introducing fancy notation for special functions, and compiling long lists of related formulae in reference books, does more harm than good. We would know much more about these functions, and be more at ease with them, if everybody had to start from the basics and work from the bare definitions every time they needed to establish some property.

Just write your expression as
$$
I(ra)^2\int_r^1\left[-\frac{d}{ds}\frac{K(sa)}{I(sa)}\right]\,ds=\int_r^1\frac{I(ra)^2}{I(sa)^2} a(I'(sa)K(sa)-I(sa)K'(sa))\,ds
$$
Now notice that while finding the solutions of a second order ODE is in general next to impossible, finding their Wronskian is trivial. In particular, for this case we have $ a(I'(sa)K(sa)-I(sa)K'(sa))=\frac{c}s$ where $c$ is some positive number. Once we understand that, all that remains is to establish that $\frac{I(ra)}{I(sa)}$ is decreasing in $a>0$ for $r<s$, which immediately follows from the fact that $I$ is the sum of a Taylor series with non-negative coefficients.
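To make the Wronskian remark concrete: for the modified Bessel equation one has $I_\nu'(x)K_\nu(x)-I_\nu(x)K_\nu'(x)=1/x$ (DLMF 10.28.2), so $c=1$ here. A numerical sketch with SciPy (order $\nu=2$ as in the question; the sample points and grid are my choices) confirming both this and the monotonicity of the ratio:

```python
import numpy as np
from scipy.special import iv, ivp, kv, kvp  # ivp/kvp are the derivatives

nu = 2  # the order appearing in the question

# a * (I'(sa)K(sa) - I(sa)K'(sa)) = a * 1/(sa) = 1/s, i.e. c = 1.
for a in (0.5, 2.0, 7.0):
    for s in (0.3, 0.8, 1.0):
        x = s * a
        w = a * (ivp(nu, x) * kv(nu, x) - iv(nu, x) * kvp(nu, x))
        assert abs(w - 1.0 / s) < 1e-12

# I(ra)/I(sa) decreases in a when r < s (Taylor series with
# non-negative coefficients); check on a grid.
r, s = 0.4, 0.9
aa = np.linspace(0.1, 20.0, 200)
ratio = iv(nu, r * aa) / iv(nu, s * aa)
assert np.all(np.diff(ratio) < 0)
print("ok")
```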

Here is an example of what not to do, compared with the simple and natural (as always, once you know it) accepted answer.

We consider the function
$$
f:=(x,r)\to\frac{K_{2}(xr)I_{2}(x)-K_{2}(x)I_{2}(rx)}{I_{2}(x)}I_{2}(rx)
$$
for $x>0$ and $0<r<1$. Introducing
$$
g:=x\to\frac{K_{2}(x)}{I_{2}(x)},\quad g^{\prime}(x)=-\frac{1}{xI_{2}(x)^{2}}<0,
$$
we find
$$
f:=(x,r)\to\frac{g(x)-g(rx)}{rxg^{\prime}(rx)}.
$$
This form has the advantage of involving only one function (and its derivative).
Since $g$ is decreasing and $r<1$, $f>0.$ We have
$$
f=I_{2}(rx)K_{2}(rx)-g(x)I_{2}(rx)^{2}
$$
and both $r\to I_{2}(rx)K_{2}(rx)$ and $r\to-I_{2}(rx)^{2}$ are
decreasing therefore $\partial_{r}f<x\left(I_{2}K_{2}\right)^{\prime}(rx).$
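All three expressions for $f$, and the formula for $g'$, can be cross-checked numerically. A sketch with SciPy (the helper names `g` and `gp` are mine):

```python
from scipy.special import iv, ivp, kv, kvp

nu = 2

def g(x):
    """g = K_2 / I_2."""
    return kv(nu, x) / iv(nu, x)

def gp(x):
    """g' by the quotient rule; equals -1/(x*I_2(x)^2) by the Wronskian."""
    return (kvp(nu, x) * iv(nu, x) - kv(nu, x) * ivp(nu, x)) / iv(nu, x) ** 2

for x in (0.5, 2.0, 6.0):
    assert abs(gp(x) + 1.0 / (x * iv(nu, x) ** 2)) < 1e-12 * abs(gp(x))

# the three equivalent forms of f at a sample point
x, r = 3.0, 0.6
f1 = (kv(nu, r * x) * iv(nu, x) - kv(nu, x) * iv(nu, r * x)) / iv(nu, x) * iv(nu, r * x)
f2 = (g(x) - g(r * x)) / (r * x * gp(r * x))
f3 = iv(nu, r * x) * kv(nu, r * x) - g(x) * iv(nu, r * x) ** 2
assert max(abs(f1 - f2), abs(f1 - f3)) < 1e-12
print("ok")
```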

Now, to simplify, let us use approximations of these special functions.
There holds
$$
\left(I_{2}K_{2}\right)^{\prime}<-\frac{1}{3x^{2}}\mbox{ for }x>4,
$$
(this is a terrible bound when $x$ is small, and not a sharp bound
when $x$ is large, since $\left(I_{2}K_{2}\right)^{\prime}\approx-\frac{1}{2x^{2}}$
for large $x$), therefore
$$
\partial_{r}f<-\frac{1}{3r^{2}x}\mbox{ for }x>\frac{4}{r}
$$
Secondly, $x\to-g^{\prime}(x)\exp(2x)$ is decreasing. Therefore
\begin{eqnarray*}
-g^{\prime}(x)\exp(2x) & < & -g^{\prime}(rx)\exp(2rx)\\
\frac{g^{\prime}(x)}{g^{\prime}(rx)} & < & \exp(2x(r-1))
\end{eqnarray*}
for all positive $x$. This is a sharp bound only for large $x$.
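Using the Wronskian identity, $-g'(x)e^{2x}=e^{2x}/(xI_2(x)^2)=1/\bigl(x\,\mathrm{ive}(x)^2\bigr)$ in terms of SciPy's exponentially scaled `ive`, which makes the monotonicity claim easy to probe numerically (a sketch on a finite grid of my choosing, not a proof):

```python
import numpy as np
from scipy.special import ive  # ive(nu, x) = exp(-x) * iv(nu, x)

# -g'(x) * exp(2x) = exp(2x) / (x * I_2(x)^2) = 1 / (x * ive(2, x)**2)
xx = np.linspace(0.05, 50.0, 2000)
h = 1.0 / (xx * ive(2, xx) ** 2)
assert np.all(np.diff(h) < 0)  # decreasing on the sampled grid
# consistent with the large-x limit -g'(x)*exp(2x) -> 2*pi from above
assert h[-1] > 2 * np.pi
print("ok")
```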
We now compute
$$
\partial_{x}f=\frac{r}{x}\left(\partial_{r}f+\frac{1}{r^{2}}\frac{g^{\prime}(x)}{g^{\prime}(rx)}\right)<\frac{1}{xr}\left(-\frac{1}{3x}+\exp(2x\left(r-1\right))\right)\mbox{ when }x>\frac{4}{r}
$$
This gives
$$
\partial_{x}f<0 \mbox{ for } x>\max\left(\frac{\log\left(\frac{2}{3}(1-r)\right)}{r-1}, \frac{4}{r}\right).
$$
Less brutal approximations (of functions of one variable) would improve the result. Only half of the decay of $\partial_{r}f$
was used, for example.
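As a sanity check of the conclusion, one can evaluate $\partial_x f$ by a central finite difference past the stated threshold. A numerical sketch (the choice $r=1/2$, the grid, and the step size are mine):

```python
import numpy as np
from scipy.special import iv, kv

def f(x, r, nu=2):
    return (kv(nu, r * x) * iv(nu, x) - kv(nu, x) * iv(nu, r * x)) / iv(nu, x) * iv(nu, r * x)

r = 0.5
x0 = max(np.log((2.0 / 3.0) * (1 - r)) / (r - 1), 4.0 / r)  # threshold from the bound
h = 1e-5
for x in np.linspace(x0 + 0.1, 40.0, 50):
    dfdx = (f(x + h, r) - f(x - h, r)) / (2 * h)
    assert dfdx < 0  # f is indeed decreasing in x beyond the threshold
print("ok")
```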

This function is quite interesting, as it reaches its maximum at $a=0$. This can be seen without difficulty by computing the first and second derivatives. The function is

$$B(a,r)=\frac{K_2(ar)I_2(a)-I_2(ar)K_2(a)}{I_2(a)}I_2(ar)$$

that gives

$$B(0,r)=\frac{1}{4}(1-r^4)>0$$

$$B'(0,r)=0$$

$$B''(0,r)=-\frac{1}{12}r^2(1-r^2)^2<0$$

for the given interval. This means that the function is decreasing for small $a>0$. The question is whether there are other points where the first derivative becomes zero, changing the monotonicity of the curve. The first derivative is an involved expression in terms of $I_1,\,I_2,\,I_3$ and $K_1,\,K_2,\,K_3$, not reproduced here.

Now we notice that, on the given intervals, $I_2$ is monotonically increasing and never zero, and $K_2$ is monotonically decreasing and never zero; the same is true for $I_1,\ I_3$ and $K_1,\ K_3$. All these functions are positive. So it is not difficult to see that the first derivative is monotonic and never hits zero again, being a balance of functions that never reach zero unless $a=0$ and that behave monotonically: the $K_i$ are decreasing and the $I_i$ increasing. We also note that

$$\lim_{a\rightarrow\infty}B'(a,r)=0$$

which can be proved using the asymptotic formulas for these Bessel functions. Now, combining the monotonicity and positivity of these Bessel functions, starting from $0$ and asymptotically returning to $0$ at large argument, the derivative can reach one extremum and never cross zero again. This can also be seen with a simple plot (not reproduced here).
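A numerical sketch supporting these observations (SciPy again; the grid, $r$, and tolerances are my choices, and a finite grid is of course not a proof):

```python
import numpy as np
from scipy.special import iv, kv

def B(a, r, nu=2):
    return (kv(nu, a * r) * iv(nu, a) - iv(nu, a * r) * kv(nu, a)) / iv(nu, a) * iv(nu, a * r)

r = 0.7

# small-argument limit B(0, r) = (1 - r^4)/4
assert abs(B(1e-3, r) - (1 - r**4) / 4) < 1e-4

# B decreases in a on the grid, i.e. B' keeps one sign for a > 0
aa = np.linspace(0.05, 30.0, 500)
vals = B(aa, r)
assert np.all(np.diff(vals) < 0)

# and the decrements flatten out, consistent with B'(a) -> 0
assert abs(vals[-1] - vals[-2]) < 1e-3
print("ok")
```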

I understand that $I_2$ and $K_2$ do not become zero, but why does this imply $B'$ does not become zero?
–
Michael Renardy May 7 '12 at 11:37

I have improved the answer. You are doing a balance of positive terms in the first derivative. This balance can only be zero at zero, where the maximum is reached. Then the $I$'s increase exponentially while the $K$'s decay exponentially.
–
Jon May 7 '12 at 13:38

I still cannot follow. $B'$ has positive and negative contributions. You have to show that the negative ones outweigh the positive ones, and I do not see how you get this from the signs of $I$ and $K$ and their derivatives alone. I tried to check your expressions for $B'$; I could not get them to agree with $B'$ or with each other. Maybe there are typos. Your second expression seems to have an unbalanced parenthesis.
–
Michael Renardy May 7 '12 at 14:34

Ok, I finally found out where you are missing the parenthesis. I still do not follow your argument beyond the positivity and monotonicity of the $I$'s and $K$'s.
–
Michael Renardy May 7 '12 at 15:00

I should have said that the derivative does not change sign and so does not cross zero again. I will try to expand the answer to make this evident.
–
Jon May 8 '12 at 7:54