I am in an introductory real analysis course. Our professor claims that one of the problems in our text likely cannot be solved as stated, so he assigned us a much simpler problem instead (one provable directly from the standard multivariable analogue, which assumes the function is continuously differentiable). I began working on the original problem to try to figure out why it cannot be solved as asked.

Note: I am not terribly proficient with LaTeX, so please forgive any abuses of notation. Likely, I am abusing notation due to my troubles in generating the proper formulas.

My professor says that he doesn't see where the author was going with this, and he doesn't believe convexity to be enough. He believes a stronger condition, such as continuity of the derivative, is required.

So, I set off to work. Here is what I came up with so far:

If $\displaystyle p=q$ then the solution follows trivially, so assume $\displaystyle p\neq q$.

Because $\displaystyle [p,q]$ is an interval, we may assume without loss of generality that $\displaystyle n=1$.

Given $\displaystyle \epsilon>0$, choose $\displaystyle \delta_x$ for each $\displaystyle x \in [p,q]$ as shown to exist in the claim above.
The collection $\displaystyle \mathcal{C}=\left\{U_{\delta_x}(x): x \in [p,q]\right\}$ is an open cover of $\displaystyle [p,q]$. By compactness, there exists a finite subcover.
Choose one such subcover $\displaystyle \mathcal{C}^*=\left\{ U_{\delta_{x_i}}(x_i)\right\}_{i=1}^k$, indexed so that $\displaystyle 1 \le i < j \le k$ implies $\displaystyle |x_i-p| \le |x_j-p|$.

Next, if any ball $\displaystyle U_{\delta_{x_i}}(x_i) \subset U_{\delta_{x_j}}(x_j)$ with $\displaystyle i \neq j$, then remove $\displaystyle U_{\delta_{x_i}}(x_i)$ from $\displaystyle \mathcal{C}^*$.
(Here is a good example of abuse of notation, and I apologize. I am not sure what other symbols are available to use in place of this to show the refinement of the finite open cover.)

What should be left is an open cover with the following properties:
$\displaystyle U_{\delta_{x_i}}(x_i) \cap U_{\delta_{x_{i+1}}}(x_{i+1}) \neq \emptyset$ for $\displaystyle 1 \le i < k$, with $\displaystyle p \in U_{\delta_{x_1}}(x_1)$ and $\displaystyle q \in U_{\delta_{x_k}}(x_k)$.
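To make the chain construction concrete, here is a small sketch (my own illustration; `chain_cover` is a made-up name, and the balls are the open intervals $\displaystyle (x_i-\delta_{x_i},\, x_i+\delta_{x_i})$ with $\displaystyle n=1$) of greedily extracting such a chain from a finite cover:

```python
def chain_cover(balls, p, q):
    """balls: list of (center, radius) open intervals covering [p, q].
    Greedily extract a sub-chain whose consecutive members overlap,
    whose first member contains p and whose last member contains q."""
    chain, reach = [], p
    while reach < q:
        # Among the balls strictly containing the current point, take
        # the one extending farthest to the right. A ball nested inside
        # another is never selected, mirroring the refinement step above.
        containing = [b for b in balls if b[0] - b[1] < reach < b[0] + b[1]]
        if not containing:
            raise ValueError("not an open cover of [p, q]")
        best = max(containing, key=lambda b: b[0] + b[1])
        chain.append(best)
        reach = best[0] + best[1]  # right endpoint of the chosen ball
    return chain
```

Each chosen ball contains a point of the previous one (its left endpoint lies left of the previous right endpoint), so consecutive members overlap as required.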

Essentially, I am determining midpoints between consecutive points in my partition of $\displaystyle [p,q]$. By construction,
$\displaystyle \sum_{i=1}^k\left(t_{2i-1}+t_{2i}\right)=1$ and each $\displaystyle 0\le t_\alpha \le 1$.
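Here is a minimal sketch of how I am computing the weights (simplified to one weight $\displaystyle t_i$ per center rather than the paired $\displaystyle t_{2i-1}, t_{2i}$ above; `convex_weights` is my own name): each weight is the fraction of $\displaystyle [p,q]$ lying closest to the corresponding center, so the weights are nonnegative and sum to 1.

```python
def convex_weights(centers, p, q):
    """Cut [p, q] at the midpoints of consecutive centers; the weight
    t_i is the fraction of [p, q] lying closest to the i-th center."""
    cuts = [p] + [(a + b) / 2 for a, b in zip(centers, centers[1:])] + [q]
    return [(cuts[i + 1] - cuts[i]) / (q - p) for i in range(len(centers))]
```

For example, centers $0.2, 0.5, 0.9$ on $[0,1]$ give cut points $0, 0.35, 0.7, 1$ and weights $0.35, 0.35, 0.3$.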

Finally, we have a convex combination of derivative matrices which, when multiplied by $\displaystyle (q-p)$, lies within $\displaystyle \epsilon$ of $\displaystyle f(q)-f(p)$. So we can find derivative matrices that are close to the average derivative.
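As a sanity check on that estimate, here is a toy numerical example (entirely my own choice: $\displaystyle f(x)=x^3$ on $\displaystyle [0,1]$, equal weights at midpoint samples): a convex combination of derivative values, multiplied by $\displaystyle (q-p)$, lands close to $\displaystyle f(q)-f(p)$.

```python
def f(x):
    return x ** 3

def df(x):
    return 3 * x ** 2  # the derivative of f, known in closed form here

def mean_derivative(p, q, k):
    """Convex combination (equal weights 1/k) of derivative values
    sampled at the midpoints of k equal subintervals of [p, q]."""
    centers = [p + (i + 0.5) * (q - p) / k for i in range(k)]
    weights = [1.0 / k] * k  # nonnegative and summing to 1
    return sum(w * df(x) for w, x in zip(weights, centers))
```

With $k=1000$ the combination times $(q-p)$ differs from $f(q)-f(p)=1$ by far less than any reasonable $\epsilon$, and the value itself is close to the "average derivative" $\frac{f(q)-f(p)}{q-p}$.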

It is clear that the limit exists and equals $\displaystyle \frac{f(q)-f(p)}{|q-p|}$. However, because our set of derivatives $\displaystyle S$ need not be closed, this limit is not necessarily contained in $\displaystyle S$.

This is the point I was able to get to on my own. Now, I want to try to understand under what circumstances such a limit would be contained in the set. It is possible that it requires continuity of the derivative, although I am hoping that I will see something else. Personally, my intuition is telling me that the author was correct, and convexity is enough, but I am not quite grasping the final argument.

Here is what I gather (described rather informally, as I have not yet figured out the mathematics required to prove my claim, nor even to state it precisely):

When $\displaystyle \epsilon$ is small and $\displaystyle \delta_x$ is large, the derivative matrix at $\displaystyle x$ adequately approximates the change in the function over a large distance. Therefore, locally, the larger $\displaystyle \delta_x$ is, the more linear the function appears. Linearity implies constancy of the derivative, and a constant derivative is trivially continuous. Therefore, as $\displaystyle \epsilon$ approaches zero, the convex combination tends to weight more heavily the derivatives at points whose neighborhoods are nearly linear, and less heavily the derivatives at points whose neighborhoods are highly non-linear. So, I would like to say that the convexity of the set of derivatives somehow implies Riemann integrability, allowing me to use the Fundamental Theorem of Calculus to recover the average derivative.

Now, I also know that I can use uniform approximations of the derivative to obtain a sequence of continuous functions that converges pointwise to my derivative, and then use Baire's theorem to show that my derivative has a dense set of continuity points. However, I am not sure how to proceed. We have not yet gotten to Lebesgue measure, so I don't yet feel comfortable using it.
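For reference, the fact I have in mind is the standard result about Baire class 1 functions (this statement is not from our text; I am quoting it from memory):

```latex
\textbf{Theorem.} If $g_n : [p,q] \to \mathbb{R}^m$ are continuous and
$g_n \to g$ pointwise on $[p,q]$, then the set of continuity points of
$g$ is a dense $G_\delta$ subset of $[p,q]$.
```

In particular, writing $\displaystyle f'(x) = \lim_{n\to\infty} n\left(f\!\left(x + \tfrac{1}{n}\right) - f(x)\right)$ exhibits the derivative as a pointwise limit of continuous functions, which is why it has a dense set of continuity points.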

Would anyone have any suggestions on how I might proceed from here? Either proving that convexity of the set of derivatives is a sufficient condition for the general analogue of the Mean Value Theorem, or, if that is not possible, providing insight into why it is not?

Apr 28th 2011, 07:32 AM

SlipEternal

One more idea occurred to me:

Is there a way that I can show that for two points $\displaystyle x_i,x_j$, $\displaystyle \operatorname{dist}(\partial S,(Df)_{x_i})<\operatorname{dist}(\partial S,(Df)_{x_j})\Rightarrow \delta_{x_i}<\delta_{x_j}$?
Here $\displaystyle \operatorname{dist}(A,a)=\operatorname{dist}(a,A)=\inf \{d(a,b): b \in A\}$.
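For a finite sample of the set, the infimum in this definition is just a minimum; a minimal sketch (with `dist` my own function name, using absolute difference for $\displaystyle d$ in one dimension):

```python
def dist(a, A):
    """Distance from the point a to the set A, per the definition above:
    inf{ d(a, b) : b in A }. A is a finite sample here, so the infimum
    is attained as a minimum; d is the absolute difference."""
    return min(abs(a - b) for b in A)
```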

May 2nd 2011, 04:38 AM

SlipEternal

I think I figured it out. The professor said that he didn't think my method above would yield the answer, so either we were missing something simple, or something incredibly complex. Because I would have no chance of solving something complex without a lot more math, I looked for something simple. How about this:

Yeah, that was it. I needed a little more than just that, such as justification that if $\displaystyle T \notin S$ then there is a unit vector $\displaystyle u \in \mathbb{R}^m$ such that $\displaystyle \left<u,f'(x)(q-p)\right> > \left<u,T(q-p)\right>$ for all $\displaystyle x \in [p,q]$; otherwise, $\displaystyle T \in S$. But that justification should be provable if I spend the time working through the algebra, so my professor is giving me full credit for it. Thank you all anyway if you tried to help me out!
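For reference, the algebraic fact I believe is needed here is the separating hyperplane theorem (standard statement; applying it to $\displaystyle S$ is my own reading, not the text's):

```latex
\textbf{Theorem.} Let $C \subseteq \mathbb{R}^m$ be a closed convex set
and let $v \notin C$. Then there exist a unit vector $u \in \mathbb{R}^m$
and a constant $c$ such that
\[
  \langle u, v \rangle > c \geq \langle u, w \rangle
  \quad \text{for all } w \in C.
\]
```

Applied with $C$ the closure of the convex hull of $\displaystyle \{f'(x)(q-p) : x \in [p,q]\}$ and $v = T(q-p)$, this produces the unit vector $u$ (after replacing $u$ by $-u$ if needed to match the direction of the inequality above).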