Here $p < 1$ is a constant which may become very small, and $1 \leq t \leq n$.

Any upper bound on $S$ would be useful. Clearly $S \leq \log n$, but for small $t$ much better bounds should be available.

If you use the Lagrange multiplier approach to constrained optimization, you get equations of the form
$$
p^{x_{i+1}} = p^{x_i} - \frac{\lambda}{(x_1 + \dots + x_i)^2}
$$
for some $\lambda > 0$. (This ignores the boundary constraint $x \geq 0$.)

This gives a recursion determining all $\hat x_i$ in terms of $\hat x_1$ and $\lambda$. Experiments show that the sequence $\hat x_i$ increases and converges to some constant $y$, and this convergence is rapid.

If you could show that the convergence was rapid enough, then you would essentially know that the critical situation is the one in which all the $x_i$ are equal. It is quite easy to bound $S$ in that case.
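The recursion above is easy to explore numerically. The sketch below iterates $p^{x_{i+1}} = p^{x_i} - \lambda/(x_1+\dots+x_i)^2$; the starting value $x_1 = 1$, the multiplier $\lambda = 0.05$, and $p = 0.5$ are hypothetical choices for illustration, not values from the question.

```python
import math

def lagrange_sequence(x1, lam, p, n):
    """Iterate the stationarity recursion
    p**x_{i+1} = p**x_i - lam / (x_1 + ... + x_i)**2."""
    xs = [x1]
    s = x1                  # running partial sum x_1 + ... + x_i
    q = p ** x1             # current value of p**x_i
    for _ in range(n - 1):
        q -= lam / s ** 2
        if q <= 0:
            break           # the recursion left the feasible region
        x_next = math.log(q) / math.log(p)  # valid since 0 < p < 1
        xs.append(x_next)
        s += x_next
    return xs

# Hypothetical values of x_1, lambda, p (not from the question):
xs = lagrange_sequence(1.0, 0.05, 0.5, 50)
```

With these values the sequence is increasing and the successive differences shrink quickly, consistent with the rapid convergence observed in the experiments.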

I would begin by asking whether you really need to solve this particular formulation! Also, I guess you are choosing $n$ and $p$ so that a feasible solution exists? Because if $p$ is really small, then there might be no solution.
– Suvrit, Feb 11 '12 at 0:02

I would like reassurance that p < 1 for the domain of interest. Gerhard "Ask Me About System Design" Paseman, 2012.02.10
– Gerhard Paseman, Feb 11 '12 at 3:46

Assuming p < 1, my feeling is that optimal values occur near x_1=x_2=...=x_T=1, where T is floor(t). You might investigate that region first before relaxing the constraint on x_T. Gerhard "Ask Me About System Design" Paseman, 2012.02.10
– Gerhard Paseman, Feb 11 '12 at 3:50

3 Answers

Calculus is a tough discipline. No wonder our students don't get it. On the other hand, analysis is an easy subject (so easy that we find it unnecessary to teach it to our students). So, let's do analysis instead of calculus. The key difference between the two is that in analysis we do not care much about constant factors. We'll get the answer up to a factor of $4$.

Let $S(x)$, $x=(x_1,x_2,\dots)$ be the objective function as written. Define $y$ by $y_r=1$ if $r\le t$, $rp^{y_r}=tp$ if $r\ge t$. Obviously, $x_r\ge y_r$ for every $r$, so $S(x)\le S(y)$. Our aim will be to show that there exists $x$ satisfying all the restrictions such that $S(x)\ge S(y)/4$.

Note that $S(y)=\sum_{r\le t}+\sum_{t< r< t+1}+\sum_{r\ge t+1}=S_1+S_2+S_3$.
If $S_1\ge S(y)/4$, we can just take $x_r=y_r$ for $r\le t$ and $x_r=+\infty$ for $r>t$.
If $S_2\ge S(y)/4$, we can take $x_1=y_k$ where $k\in(t,t+1)$ (there is at most one integer in that interval) and $x_r=+\infty$ for $r>1$.
The only remaining option is $S_3\ge S(y)/2$. In this case we note that
$$
\sum_{r\ge t+1}p^{2y_r}=\sum_{r\ge t+1}\frac{(tp)^2}{r^2}\le tp^2\le tp
$$
so if we take $x_r=2y_{r+k-1}$ where $k$ is the least integer exceeding $t+1$, we'll get $S(x)\ge S_3/2\ge S(y)/4$.
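The displayed inequality is easy to sanity-check numerically. In the sketch below, $t=3.5$, $p=0.3$, $n=10^4$ are hypothetical parameters chosen only for illustration; it uses $p^{y_r}=tp/r$ from the definition of $y$.

```python
import math

# Hypothetical parameters, chosen only for illustration (not from the question):
t, p, n = 3.5, 0.3, 10_000
tp = t * p
k = math.ceil(t + 1)  # least integer exceeding t + 1 (t is non-integral here)

# For r >= t, y_r solves r * p**y_r = t*p, so p**y_r = t*p/r and
# p**(2*y_r) = (t*p/r)**2.
tail = sum((tp / r) ** 2 for r in range(k, n + 1))

# The chain of inequalities from the answer:
assert tail <= t * p ** 2 <= tp
```

The first inequality comes from $\sum_{r\ge t+1} r^{-2} \le 1/t$, and the second from $p\le 1$.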

If you want a more explicit formula, then again, algebra is a hard subject but analysis is an easy one. Start by noticing that $S(y)$ is larger than $T(y)=\sum_r \frac 1{ry_r}$ but less than $2T(y)$ because $y_1+\dots+y_{2r}>y_1+\dots+y_{2r-1}\ge ry_r$ for each $r$.
Thus, we are interested in estimating
$$
\sum_{r\le t}\frac 1r+\sum_{r\ge t}\frac {1}{r(1+\frac{\log(r/t)}{\log(1/p)})}
$$
The behavior of the first sum is clear. The second sum is comparable to its first term, corresponding to the least integer $k\ge t$, plus the integral
$$
\int_k^n \frac {dr}{r(1+\frac{\log(r/t)}{\log(1/p)})}=\int_{\log(k/t)}^{\log(n/t)}\frac {ds}{(1+\frac{s}{\log(1/p)})}
$$
which has a (rather ugly) explicit formula.
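For the record, here is a sketch of that evaluation: writing $L=\log(1/p)$ and substituting $u=1+s/L$, the antiderivative of $\frac1{1+s/L}$ is $L\log\left(1+\frac sL\right)$, so
$$
\int_{\log(k/t)}^{\log(n/t)}\frac {ds}{1+\frac{s}{L}}
= L\,\log\frac{1+\frac{\log(n/t)}{L}}{1+\frac{\log(k/t)}{L}},
\qquad L=\log\frac 1p.
$$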

At last, let us discuss the guess that the maximizer consists of approximately equal entries. The equal entries would give the value about $\frac{\log n}{1+\frac{\log(n/t)}{\log(1/p)}}$, which tends to $\log(1/p)$ as $n\to\infty$ and $t$ and $p$ stay fixed. However, the integral above has asymptotics $\log(1/p)\log\log n$, which is a bit larger for really large $n$.
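This gap is visible numerically. The sketch below uses $t=2$, $p=\frac12$ as hypothetical parameters and compares the equal-entries value $\frac{\log n}{1+\log(n/t)/\log(1/p)}$ with the second sum $\sum_{r\ge t}\frac{1}{r(1+\log(r/t)/\log(1/p))}$ from above.

```python
import math

# Hypothetical parameters for illustration: t = 2, p = 1/2.
t, p = 2, 0.5
L = math.log(1 / p)

def equal_entries_value(n):
    # Value of the objective when all x_r are (approximately) equal.
    return math.log(n) / (1 + math.log(n / t) / L)

def tail_sum(n):
    # The sum over r >= t of 1/(r * (1 + log(r/t)/log(1/p))).
    return sum(1 / (r * (1 + math.log(r / t) / L)) for r in range(t, n + 1))

n = 10**6
```

At $n=10^6$ the equal-entries value is already close to its limit $\log(1/p)\approx 0.69$, while the sum, growing like $\log(1/p)\log\log n$, is noticeably larger.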

This is just a bit of data following Gerhard's and Suvrit's observations.
This is a graph of the maximum of $S$, $S_{\max}$, for $n=2$, plotted not against the $(x_1,x_2)$ that achieve the max but against the $t$ and $p$ values, with $1 \le t \le 2 = n$ and $0.05 \le p \le \frac{1}{2}$.
Throughout this $(t,p)$ range, $S_{\max} \le 1.5$.
The numerical procedure I used to search for the maximum did not converge when both $t$ and $p$ were small, which is why the graph plummets in that region.
For example, for $(t,p)=(1.25,0.1)$, $S_{\max} = 1.38431$, achieved at $(x_1,x_2)=(1,1.60206)$.
But at $(t,p)=(1.27,0.1)$, I don't see convergence.
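As a cross-check on the $n=2$ numbers, here is a small grid-search sketch. The formulation it assumes (maximize $\frac1{x_1}+\frac1{x_1+x_2}$ subject to $x_1,x_2\ge 1$ and $p^{x_1}+p^{x_2}\le tp$) is my reading, inferred from the quoted optimum $(x_1,x_2)=(1,1.60206)$, not a formulation confirmed by the question.

```python
import math

def s_max_n2(t, p, steps=4000):
    """Grid search for the max of 1/x1 + 1/(x1 + x2) subject to
    x1, x2 >= 1 and p**x1 + p**x2 <= t*p (assumed formulation).
    For a fixed x1, the objective is maximized by the smallest feasible x2,
    i.e. by making the budget constraint tight."""
    best = 0.0
    budget = t * p
    for i in range(steps + 1):
        x1 = 1 + 4 * i / steps        # scan x1 over [1, 5]
        rem = budget - p ** x1
        if rem <= 0:
            continue                  # p**x1 alone exhausts the budget
        x2 = max(1.0, math.log(rem) / math.log(p))
        best = max(best, 1 / x1 + 1 / (x1 + x2))
    return best
```

Under that assumption, `s_max_n2(1.25, 0.1)` reproduces the reported value $S_{\max}=1.38431$.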