It is easy to see that $\phi$ has a global minimum $m$ on $\mathbb R$, and that $m$
is attained at one of the values $b_1, \ldots, b_n$. In fact, $\phi$ is affine on each interval
$[b_k,b_{k+1}]$ (and on the outer intervals $]-\infty,b_1]$ and $[b_n,+\infty[$ ).
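The claim can be sanity-checked numerically; the coefficients below are illustrative, not from the original problem. Since $\phi$ is piecewise affine, it suffices to evaluate it at the breakpoints:

```python
# Sketch: phi(x) = sum_k a_k * |x - b_k| is piecewise affine, so its
# global minimum is attained at one of the breakpoints b_k.
# The coefficients below are made up for illustration.

def phi(x, a, b):
    return sum(ak * abs(x - bk) for ak, bk in zip(a, b))

a = [2.0, 1.0, 3.0, 1.5]   # weights a_k > 0
b = [-1.0, 0.5, 2.0, 4.0]  # breakpoints b_k

# Global minimum over the breakpoints:
m = min(phi(bk, a, b) for bk in b)

# Sampling many other points never beats the breakpoint minimum:
samples = [-5 + 0.01 * i for i in range(1001)]
assert all(phi(x, a, b) >= m - 1e-9 for x in samples)
print(m)  # → 10.5
```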

I say that "$m$ is the global minimum of $\phi$" can be proved "using only the triangle inequality" if there are numbers $a'_1, a'_2, \ldots, a'_n$
with $0 \leq a'_k \leq a_k$ (for $1 \leq k \leq n$) and constants $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$, each equal to $-1$ or $+1$, such that

You may consider the more general case where $x \in \mathbb R^n$, or where $x$ lies in a reflexive Banach space (if you know some functional analysis); this function attains a minimum because $\phi$ is lower semi-continuous and convex.
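The convexity claim in this comment can be sanity-checked numerically via the defining inequality $\phi(tx+(1-t)y) \le t\phi(x)+(1-t)\phi(y)$; the coefficients below are again illustrative:

```python
# Numerical sanity check (illustrative coefficients): phi is convex,
# i.e. phi(t*x + (1-t)*y) <= t*phi(x) + (1-t)*phi(y).  Together with
# lower semi-continuity (and coercivity here), this guarantees that
# the minimum is attained.
import random

def phi(x, a, b):
    return sum(ak * abs(x - bk) for ak, bk in zip(a, b))

a = [2.0, 1.0, 3.0]
b = [-1.0, 0.5, 2.0]

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.random()
    lhs = phi(t * x + (1 - t) * y, a, b)
    rhs = t * phi(x, a, b) + (1 - t) * phi(y, a, b)
    assert lhs <= rhs + 1e-9
print("convexity check passed")
```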
– checkmath Mar 27 '12 at 16:38

It seems that if you had such a method, then you could use it in any Banach space, which is not possible (the space has to be reflexive)!
– checkmath Mar 27 '12 at 16:45

1 Answer

Yes, the triangle inequality always suffices, and you can explicitly construct the $a_k'$ and $\epsilon_k$. They are severely constrained by the fact that the two inequalities in your last line must be equalities at the value $b_j$ at which $\phi$ takes the value $m$. For the first inequality to be an equality at $b_j$, we must have $a'_k=a_k$ for all $k\ne j$. For the second inequality to be an equality at $b_j$, all the terms must have the same sign, and this determines $\epsilon_k$ for all $k\ne j$ up to an overall sign; we can choose that sign such that all terms are non-negative. (I'm assuming that the $b_k$ are distinct, since otherwise we could just combine terms with the same $b_k$.)

This leaves only $a'_j$ and $\epsilon_j$ to be determined. Your first sum condition determines the value of $\epsilon_ja'_j$, and since $\epsilon_j=\pm1$ and $a'_j\ge0$, this determines $a'_j$ and $\epsilon_j$ (unless $a'_j=0$, in which case $\epsilon_j$ can be chosen arbitrarily). The resulting $a'_j$ is the absolute value of the slope at $b_j$ of the sum of all terms except the $j$-th, so the fact that adding the $j$-th term creates a minimum at $b_j$ implies $a'_j\le a_j$ as required. Since the values thus determined make all terms in the second inequality non-negative, the first sum condition and the last equality in the last line imply the second sum condition.
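The construction described above can be sketched numerically (illustrative coefficients; the question's two sum conditions are not quoted in full here, so I take them to be $\sum_k \epsilon_k a'_k = 0$ and $-\sum_k \epsilon_k a'_k b_k = m$, which is one reading of "first" and "second" sum condition):

```python
# Sketch of the explicit construction of a'_k and eps_k (illustrative
# data; the sum conditions are assumed to be sum_k eps_k*a'_k = 0 and
# -sum_k eps_k*a'_k*b_k = m, which is one reading of the question).

def phi(x, a, b):
    return sum(ak * abs(x - bk) for ak, bk in zip(a, b))

def sign(t):
    return 1 if t > 0 else -1

a = [2.0, 1.0, 3.0, 1.5]
b = [-1.0, 0.5, 2.0, 4.0]
n = len(a)

# Minimizing breakpoint b_j and minimum value m:
j = min(range(n), key=lambda k: phi(b[k], a, b))
m = phi(b[j], a, b)

# For k != j: a'_k = a_k and eps_k = sign(b_j - b_k), so that
# eps_k * a'_k * (b_j - b_k) >= 0 (equality in the triangle step at b_j).
eps = [sign(b[j] - bk) if k != j else 0 for k, bk in enumerate(b)]
s = sum(eps[k] * a[k] for k in range(n) if k != j)  # slope of the other terms at b_j

# eps_j * a'_j is forced by the zero-slope condition sum_k eps_k * a'_k = 0:
a_prime = list(a)
a_prime[j] = abs(s)
eps[j] = -sign(s) if s != 0 else 1

assert 0 <= a_prime[j] <= a[j]                                   # required constraint
assert abs(sum(eps[k] * a_prime[k] for k in range(n))) < 1e-9    # zero slope
assert abs(-sum(eps[k] * a_prime[k] * b[k] for k in range(n)) - m) < 1e-9  # constant = m

# The affine minorant sum_k eps_k * a'_k * (x - b_k) is then the
# constant m, which certifies the minimum via the triangle inequality:
for x in [-3.0, 0.0, 1.0, 2.5, 5.0]:
    bound = sum(eps[k] * a_prime[k] * (x - b[k]) for k in range(n))
    assert phi(x, a, b) >= bound - 1e-9
```

The chain $\phi(x) = \sum_k a_k|x-b_k| \ge \sum_k a'_k|x-b_k| \ge \sum_k \epsilon_k a'_k(x-b_k) = m$ is exactly the "triangle inequality only" certificate, since $a'_k \le a_k$ and $\epsilon_k(x-b_k) \le |x-b_k|$.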

Actually the second sum condition should probably have $\pm m$, since both signs would make the last line work out; then the choice of making all terms positive wouldn't be required and either of the overall signs for the $\epsilon_k$ would yield an admissible set of values.

[Edit in response to comment:]

I think the result holds just as well in several variables. Basically what you're doing is finding the affine function that represents the sum of all terms but the $j$-th in a neighbourhood of $b_j$, and replacing the $j$-th term by its negative so that the sum is identically zero in a neighbourhood of $b_j$; then, as you leave that neighbourhood, the sign changes can only increase the function value. You can do something analogous in several variables: again replace the sum of all the functions that vanish at the minimum by the negative of the affine function that represents the sum of the non-vanishing terms in a neighbourhood of the minimum, so that the sum vanishes in that neighbourhood. In this case the neighbourhood is bounded by hyperplanes on which individual affine functions vanish, and the sign change on crossing one of these hyperplanes can again only increase the value. As in the one-dimensional case, the replacement term is less than or equal in absolute value to the sum of the terms it replaces, since otherwise the minimum wouldn't be a minimum.

I am completely convinced by the "necessity-uniqueness" part in your reasoning, but not so much by the "sufficiency" part. In particular, your claim that "the values thus determined make all terms in the second inequality non-negative" looks dubious to me, since the terms are $\varepsilon_ka'_k(x-b_k)$, and the sign depends on the position of $x$ with respect to $b_k$.
– Ewan Delanoy Mar 28 '12 at 7:40

@Ewan: I should have said more precisely "make all terms in the second inequality non-negative for $x=b_j$" -- the $\epsilon_k$ were chosen in the first paragraph to make this so, and this is enough to deduce the second sum condition.
– joriki Mar 28 '12 at 8:03

I think I see your point now: you have an affine function whose leading coefficient is zero and which also vanishes at $x=b_j$, so that function is identically zero.
– Ewan Delanoy Mar 28 '12 at 10:13

By the way, do you know if the result still holds in several variables? In that context, affine functions would be replaced by functions of the form (linear form) + (constant). To avoid degenerate cases, we can assume that the linear parts span the whole dual space.
– Ewan Delanoy Mar 28 '12 at 10:16