Let $f:[0,1]\times[0,1]\to\mathbb{R}$ be a continuous function. Define
$$
M_x:=\max\limits_{0\leq y\leq 1} f(x,y), \qquad m_y:=\min\limits_{0\leq x\leq 1} f(x,y).
$$
Is there a useful set of assumptions under which one can conclude that
$$
\inf\limits_{0\leq x\leq 1}M_x = \sup\limits_{0\leq y\leq 1} m_y ?
$$
As the example $f(x,y)=(x-y)^2$ shows, this is not true in general. On the other hand, I believe (though I haven't checked the details carefully) that it does hold provided that, for each fixed $x$, the maximum of $f(x,y)$ is attained at a *unique* $y$, and that, for each fixed $y$, the minimum of $f(x,y)$ is attained at a *unique* $x$. I would be grateful for any reference that discusses this question in detail.
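For what it's worth, the counterexample is easy to check numerically on a grid (a sanity check only, not a proof; the grid size is arbitrary):

```python
import numpy as np

# Grid check of the counterexample f(x, y) = (x - y)^2 on [0, 1]^2.
grid = np.linspace(0.0, 1.0, 1001)
f = (grid[:, None] - grid[None, :]) ** 2  # f[i, j] = f(x_i, y_j)

M = f.max(axis=1)  # M_x = max_y f(x, y) for each x
m = f.min(axis=0)  # m_y = min_x f(x, y) for each y

print(M.min())  # inf_x M_x = 0.25, attained at x = 1/2
print(m.max())  # sup_y m_y = 0.0, since f vanishes on the diagonal x = y
```

Here $M_x = \max\{x^2,(1-x)^2\}$, so $\inf_x M_x = 1/4 > 0 = \sup_y m_y$, confirming the gap.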

1 Answer

There are many results of this type, and in general they go under the name of minimax theorems. Often some sort of convexity/concavity assumption is made, as in Sion's minimax theorem.
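As a quick illustration of the convex-concave setting (my own example, not from the original post): $f(x,y)=x^2-y^2$ is convex in $x$ and concave in $y$ on $[0,1]^2$, so Sion's theorem gives equality, which a grid check confirms:

```python
import numpy as np

# f(x, y) = x^2 - y^2: convex in x, concave in y on [0, 1]^2,
# so the minimax and maximin values coincide (saddle point at the origin).
grid = np.linspace(0.0, 1.0, 1001)
f = grid[:, None] ** 2 - grid[None, :] ** 2  # f[i, j] = x_i^2 - y_j^2

minimax = f.max(axis=1).min()  # inf_x max_y f(x, y)
maximin = f.min(axis=0).max()  # sup_y inf_x f(x, y)
print(minimax, maximin)  # both equal 0.0
```

Here $\max_y f(x,y) = x^2$ and $\min_x f(x,y) = -y^2$, so both sides equal $0$ at the saddle point $(0,0)$.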

You are correct that uniqueness of optimizers gives you equality when $f$ is continuous with domain $[0,1]\times [0,1]$. However, this can fail if the domain of $f$ is different. For example, let $f$ be the arc-length metric on the unit circle. Then $M_x = \pi$ for all $x$ and $m_y = 0$ for all $y$; the optima are attained uniquely at the antipodal point $y = -x$ and at $x = y$, respectively, yet $\pi = \inf_x M_x \neq \sup_y m_y = 0$.
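The circle example can also be verified on a grid (parametrizing the circle by angle; the grid size is chosen even so that each point's antipode lies on the grid):

```python
import numpy as np

# Arc-length metric on the unit circle, parametrized by angle in [0, 2*pi).
n = 1000
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
f = np.minimum(d, 2 * np.pi - d)  # f[i, j] = arc-length distance

M = f.max(axis=1)  # max_y f(x, y): roughly pi for every x (the antipode)
m = f.min(axis=0)  # min_x f(x, y): 0 for every y (take x = y)
print(M.min(), m.max())  # approximately pi vs 0.0
```

Despite the uniqueness of both optimizers, the minimax gap here is the full diameter $\pi$ of the space.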