
The Ornstein-Uhlenbeck process \( {X={(X_t)}_{t\in[0,\infty)}} \) on \( {\mathbb{R}^n} \) is the solution of the stochastic differential equation

\[ dX_t=\sqrt{2}dB_t-X_tdt \]
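In dimension one, the process can be simulated exactly (no Euler scheme is needed), since the transition over a time step \( {h} \) is Gaussian, as computed below. A minimal sketch, in which the function names are ours:

```python
# Exact simulation of the one-dimensional Ornstein-Uhlenbeck process
# dX_t = sqrt(2) dB_t - X_t dt: over a time step h, the transition is
# Gaussian with mean e^{-h} x and variance 1 - e^{-2h} (see below).
import numpy as np

def ou_path(x0, h, n_steps, seed=0):
    """Sample X_0, X_h, ..., X_{n_steps * h} exactly."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    c, s = np.exp(-h), np.sqrt(1 - np.exp(-2 * h))
    for i in range(n_steps):
        x[i + 1] = c * x[i] + s * rng.standard_normal()
    return x

path = ou_path(x0=3.0, h=0.01, n_steps=1000)
print(path[-1])  # for large times, X_t is approximately standard Gaussian
```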

where \( {{(B_t)}_{t\in[0,\infty)}} \) is a standard Brownian motion on \( {\mathbb{R}^n} \). Since the diffusion coefficient is constant and the drift is affine, it follows that \( {X} \) is a Gaussian process. The computation of the mean and of the variance of \( {X_t} \) conditional on \( {\{X_0=x\}} \) yields

\[ X_t\mid\{X_0=x\}\sim\mathcal{N}\bigl(e^{-t}x,(1-e^{-2t})I_n\bigr). \]

The explicit law of the process allows explicit computations: for instance, for any \( {s,t\geq0} \), conditionally on \( {\{X_0=x\}} \),

\[ \mathrm{Cov}(X_s,X_t)=e^{-|t-s|}(1-e^{-2\min(s,t)}). \]
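This formula can be derived from the variation of constants formula and the Itô isometry: the stochastic differential equation is solved by

\[ X_t=e^{-t}X_0+\sqrt{2}\int_0^t\!e^{-(t-u)}\,dB_u, \]

so that, for \( {0\leq s\leq t} \),

\[ \mathrm{Cov}(X_s,X_t) =2e^{-(s+t)}\int_0^s\!e^{2u}\,du =e^{-(t-s)}\bigl(1-e^{-2s}\bigr). \]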

For any bounded and measurable \( {f:\mathbb{R}^n\rightarrow\mathbb{R}} \), any \( {x\in\mathbb{R}^n} \) and \( {t\in[0,+\infty)} \), we set

\[ P_t(f)(x)=\mathbb{E}(f(X_t)\mid X_0=x). \]

We have \( {P_t(\mathbf{1}_A)(x)=\mathbb{P}(X_t\in A\mid X_0=x)} \). The Gaussian nature of the process gives the Mehler formula: for any suitable \( {f} \),

\[ P_t(f)(x)=\int\!f\bigl(e^{-t}x+\sqrt{1-e^{-2t}}\,y\bigr)\,\gamma_n(dy), \]

where \( {\gamma_n=\mathcal{N}(0,I_n)} \) is the standard Gaussian measure on \( {\mathbb{R}^n} \). The family \( {{(P_t)}_{t\in[0,\infty)}} \) is a semigroup of linear operators acting on continuous and bounded functions, in the sense that

\[ P_0=id, \quad \forall s,t\geq0, \quad P_t\circ P_s = P_{t+s}. \]
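As a sanity check, the semigroup property can be read on the Mehler formula: for independent standard Gaussian vectors \( {Y} \) and \( {Z} \), the vector \( {e^{-s}\bigl(e^{-t}x+\sqrt{1-e^{-2t}}\,Y\bigr)+\sqrt{1-e^{-2s}}\,Z} \) is Gaussian with mean \( {e^{-(s+t)}x} \) and covariance \( {\bigl(e^{-2s}(1-e^{-2t})+1-e^{-2s}\bigr)I_n=\bigl(1-e^{-2(s+t)}\bigr)I_n} \), which is exactly the Mehler representation of \( {P_{s+t}} \).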

These operators are Markov operators, in the sense that for any \( {t\in[0,\infty)} \),

\[ f\geq0\Rightarrow P_t(f)\geq0 \quad\text{and}\quad P_t(\mathbf{1})=\mathbf{1}. \]

The infinitesimal generator of the semigroup is the linear differential operator \( {A} \) given by

\[ A(f)(x)=\Delta f(x)-x\cdot\nabla f(x), \]

in the sense that \( {\partial_tP_t(f)=P_t(Af)=A(P_tf)} \).

The operator \( {A} \) (and \( {P_t} \) for any \( {t\geq0} \)) is symmetric in \( {L^2(\gamma_n)} \); in other words, an integration by parts formula holds, meaning that for any smooth enough \( {f} \) and \( {g} \),

\[ -\int\!fAg\,d\gamma_n =\int\!\nabla f\cdot\nabla g\,d\gamma_n. \]
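Indeed, since \( {A(g)=e^{|x|^2/2}\,\mathrm{div}\bigl(e^{-|x|^2/2}\nabla g\bigr)} \) and since the Lebesgue density of \( {\gamma_n} \) is proportional to \( {e^{-|x|^2/2}} \), the classical integration by parts on \( {\mathbb{R}^n} \) gives

\[ -\int\!fAg\,d\gamma_n =-\frac{1}{(2\pi)^{n/2}}\int\!f\,\mathrm{div}\bigl(e^{-|x|^2/2}\nabla g\bigr)\,dx =\int\!\nabla f\cdot\nabla g\,d\gamma_n. \]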

If \( {X_0} \) has density \( {f_0} \) with respect to \( {\gamma_n} \) then \( {X_t} \) also has a density with respect to \( {\gamma_n} \), given by \( {f_t=P_tf_0} \) thanks to the symmetry of \( {P_t} \). If \( {g} \) is the Lebesgue density of \( {\gamma_n} \), then \( {g_t=f_tg} \) is the Lebesgue density of \( {X_t} \). The evolution of \( {g_t} \) with respect to \( {t} \) is described by the Fokker-Planck equation, the dual of the Kolmogorov backward equation \( {\partial_tP_t(f)=A(P_tf)} \),

\[ \partial_tg_t=\Delta g_t+\mathrm{div}(xg_t). \]
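In particular, the Lebesgue density \( {g\propto e^{-|x|^2/2}} \) of \( {\gamma_n} \) is a stationary solution: since \( {\nabla g=-xg} \),

\[ \Delta g+\mathrm{div}(xg)=\mathrm{div}(\nabla g+xg)=0, \]

which expresses the invariance of \( {\gamma_n} \) for the process.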

If \( {\mu} \) and \( {\nu} \) are probability measures on \( {\mathbb{R}^n} \) with \( {\nu\ll\mu} \) then the Kullback-Leibler divergence or relative entropy of \( {\nu} \) with respect to \( {\mu} \) is defined by

\[ H(\nu\mid\mu)=\int\!\frac{d\nu}{d\mu}\log\frac{d\nu}{d\mu}\,d\mu=\int\!\log\frac{d\nu}{d\mu}\,d\nu. \]

In the case where \( {\mu} \) is a Boltzmann-Gibbs measure with Lebesgue density \( {g(x)=e^{-V(x)}} \), the quantity \( {H(\nu\mid\mu)} \) becomes a Helmholtz free energy, in the sense that

\[ H(\nu\mid\mu)=\int\!V\,d\nu-S(\nu) \]

where the first term in the right hand side is the mean energy of \( {\nu} \) while the second term is the Boltzmann-Shannon entropy

\[ S(\nu)=-\int\!fg\log(fg)\,dx, \]

where \( {fg} \), the product of \( {f=\frac{d\nu}{d\mu}} \) with the Lebesgue density \( {g=e^{-V}} \) of \( {\mu} \), is the Lebesgue density of \( {\nu} \).
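For example, if \( {\nu=\mathcal{N}(m,I_n)} \) and \( {\mu=\gamma_n} \), then \( {\log\frac{d\nu}{d\mu}(x)=m\cdot x-\frac{|m|^2}{2}} \), and therefore

\[ H(\mathcal{N}(m,I_n)\mid\gamma_n) =\int\!\Bigl(m\cdot x-\frac{|m|^2}{2}\Bigr)\,d\nu(x) =|m|^2-\frac{|m|^2}{2} =\frac{|m|^2}{2}. \]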

Suppose that the law \( {\mu_0} \) of \( {X_0} \) has density \( {f_0} \) with respect to \( {\gamma_n} \). Then the law \( {\mu_t} \) of \( {X_t} \) has density \( {f_t=P_tf_0} \) with respect to \( {\gamma_n=\mu_\infty} \). The free energy decays along time: using the evolution equation \( {\partial_tf_t=Af_t} \) and the integration by parts,

\[ \frac{d}{dt}H(\mu_t\mid\gamma_n) =\int\!(1+\log f_t)Af_t\,d\gamma_n =-\int\!\frac{|\nabla f_t|^2}{f_t}\,d\gamma_n =:-J(\mu_t\mid\gamma_n), \]

where \( {J(\cdot\mid\gamma_n)} \) is the Fisher information relative to \( {\gamma_n} \). Moreover, the commutation \( {\nabla P_t(f)=e^{-t}P_t(\nabla f)} \), which follows from the Mehler formula, yields the exponential decay \( {J(\mu_t\mid\gamma_n)\leq e^{-2t}J(\mu_0\mid\gamma_n)} \).

In particular we get \begin{align*} H(\mu_0\mid\gamma_n) =-\int_0^\infty\!\frac{d}{dt}H(\mu_t\mid\gamma_n)\,dt =\int_0^\infty\!J(\mu_t\mid\gamma_n)\,dt \leq J(\mu_0\mid\gamma_n)\int_0^\infty\!e^{-2t}\,dt =\frac{1}{2}J(\mu_0\mid\gamma_n). \end{align*} This inequality is known as a logarithmic Sobolev inequality: for any probability measure \( {\nu\ll\gamma_n} \),

\[ H(\nu\mid\gamma_n)\leq\frac{1}{2}J(\nu\mid\gamma_n). \]

Combined with \( {\frac{d}{dt}H(\mu_t\mid\gamma_n)=-J(\mu_t\mid\gamma_n)} \), it gives \( {\frac{d}{dt}H(\mu_t\mid\gamma_n)\leq-2H(\mu_t\mid\gamma_n)} \), hence, by Grönwall's lemma, the exponential decay

\[ H(\mu_t\mid\gamma_n)\leq e^{-2t}H(\mu_0\mid\gamma_n). \]

Since both sides are equal at \( {t=0} \), taking the derivative at time \( {t=0} \) allows one to recover the logarithmic Sobolev inequality from this exponential decay!
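The Gaussian example above shows that the decay rate is sharp: if \( {\mu_0=\mathcal{N}(m,I_n)} \) then \( {\mu_t=\mathcal{N}(e^{-t}m,I_n)} \), since the variance \( {e^{-2t}+1-e^{-2t}=1} \) is preserved, and thus

\[ H(\mu_t\mid\gamma_n)=\frac{e^{-2t}|m|^2}{2}=e^{-2t}H(\mu_0\mid\gamma_n), \]

while \( {J(\mu_0\mid\gamma_n)=|m|^2=2H(\mu_0\mid\gamma_n)} \): such measures achieve equality in the logarithmic Sobolev inequality.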

Hypercontractivity. For any \( {t\in[0,\infty)} \) and any \( {p\in[1,\infty]} \), Mehler’s formula shows immediately that \( {P_t} \) can be extended into a linear operator on \( {L^p(\gamma_n)} \). In fact \( {P_t} \) is always a contraction: by Jensen's inequality, \( {\Vert P_t(f)\Vert_p\leq\Vert f\Vert_p} \). Hypercontractivity states that \( {P_t} \) even improves integrability: if

\[ p(t)=1+(p-1)e^{2t} \]

then \( {\Vert P_t(f)\Vert_{p(t)}\leq\Vert f\Vert_p} \). Following Gross, for \( {f\geq0} \) with \( {f_t=P_t(f)} \), a direct computation gives

\[ \frac{d}{dt}\log\Vert f_t\Vert_{p(t)} =\frac{p'(t)}{p(t)^2}\int\!h_t^{p(t)}\log\bigl(h_t^{p(t)}\bigr)\,d\gamma_n +\int\!h_t^{p(t)-1}Ah_t\,d\gamma_n \]

where \( {h_t=f_t/\Vert f_t\Vert_{p(t)}} \). Now the logarithmic Sobolev inequality and the integration by parts give, for any \( {h\geq0} \) such that \( {h^p} \) is a probability density with respect to \( {\gamma_n} \),

\[ \int\!h^p\log\bigl(h^p\bigr)\,d\gamma_n \leq-\frac{p^2}{2(p-1)}\int\!h^{p-1}Ah\,d\gamma_n. \]

Since \( {p'(t)=2(p(t)-1)} \), plugging this bound into the previous identity shows that \( {t\mapsto\Vert f_t\Vert_{p(t)}} \) is non-increasing, which is the announced hypercontractivity. The exponent \( {p(t)} \) is optimal: for \( {f(x)=e^{\lambda x_1}} \), the Mehler formula gives

\[ \frac{\Vert P_t(f)\Vert_q}{\Vert f\Vert_p} =\exp\Bigl(\frac{\lambda^2}{2}\bigl(1-e^{-2t}+qe^{-2t}-p\bigr)\Bigr), \]

a quantity which tends to \( {+\infty} \) as \( {|\lambda|\rightarrow\infty} \) as soon as \( {q>p(t)=1+(p-1)e^{2t}} \).

The proof shows that conversely, from the hypercontractive statement, one can extract the logarithmic Sobolev inequality by taking the derivative at \( {t=0} \).
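Here is a small numerical sanity check of the hypercontractive bound, via the Mehler formula in dimension one and Gauss-Hermite quadrature; this is only a sketch, and the function names are ours:

```python
# Numerical sanity check of hypercontractivity for the Ornstein-Uhlenbeck
# semigroup: ||P_t f||_{p(t)} <= ||f||_p with p(t) = 1 + (p-1) e^{2t}.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, weights = hermegauss(80)        # quadrature for the weight e^{-x^2/2}
weights /= np.sqrt(2 * np.pi)          # normalize into the Gaussian law gamma_1

def gamma_integral(g):
    """Integral of g against the standard Gaussian measure gamma_1."""
    return float(np.sum(weights * g(nodes)))

def mehler(t, f):
    """Mehler's formula: P_t f(x) = E[f(e^{-t} x + sqrt(1 - e^{-2t}) Y)]."""
    c, s = np.exp(-t), np.sqrt(1 - np.exp(-2 * t))
    return lambda x: np.array([gamma_integral(lambda y: f(c * xi + s * y))
                               for xi in np.atleast_1d(x)])

def lp_norm(f, p):
    """L^p(gamma_1) norm of f."""
    return gamma_integral(lambda x: np.abs(f(x)) ** p) ** (1.0 / p)

f = lambda x: np.exp(x)                # exponential test function, f >= 0
p, t = 2.0, 0.5
p_t = 1 + (p - 1) * np.exp(2 * t)      # critical exponent p(t)
print(lp_norm(mehler(t, f), p_t), "<=", lp_norm(f, p))
# exponential functions saturate the bound: both sides are close to e
```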

Polynomials. The set of polynomials \( {\mathbb{R}[X_1,\ldots,X_n]} \) is dense in \( {L^2(\gamma_n)} \). To see it, let us take \( {f\in L^2(\gamma_n)} \); then the Laplace transform \( {\varphi_\mu} \) of the signed measure \( {\mu(dx)=f(x)\gamma_n(dx)} \) is finite on \( {\mathbb{R}^n} \) since for any \( {\theta\in\mathbb{R}^n} \), by the Cauchy-Schwarz inequality,

\[ |\varphi_\mu(\theta)| =\Bigl|\int\!e^{\theta\cdot x}f(x)\,\gamma_n(dx)\Bigr| \leq\Vert f\Vert_{L^2(\gamma_n)}\Bigl(\int\!e^{2\theta\cdot x}\,\gamma_n(dx)\Bigr)^{1/2} =\Vert f\Vert_{L^2(\gamma_n)}e^{|\theta|^2}, \]

and if \( {f\perp\mathbb{R}[X_1,\ldots,X_n]} \) in \( {L^2(\gamma_n)} \), then the derivatives of any order of \( {\varphi_\mu} \) vanish at \( {0} \), and since \( {\varphi_\mu} \) is analytic, we get \( {\varphi_\mu\equiv0} \), then \( {\mu=0} \), and then \( {f=0} \) in \( {L^2(\gamma_n)} \).

Hermite polynomials. Hermite’s polynomials \( {{(H_k)}_{k\in\mathbb{N}}} \) are the orthogonal polynomials obtained using the Gram-Schmidt algorithm in \( {L^2(\gamma_1)} \) from the canonical basis of \( {\mathbb{R}[X]} \). They are normalized in such a way that the coefficient of the term of highest degree in \( {H_k} \) is \( {1} \) for any \( {k\geq0} \). We find

\[ H_0(x)=1,\quad H_1(x)=x,\quad H_2(x)=x^2-1,\quad\ldots \]

It can be checked that Hermite’s polynomials \( {{(H_k)}_{k\geq0}} \) satisfy the generating series identity

\[ \sum_{k=0}^\infty H_k(x)\frac{t^k}{k!}=e^{tx-\frac{t^2}{2}}, \]

so that, for any \( {s,t\in\mathbb{R}} \),

\[ \sum_{j,k=0}^\infty\frac{s^jt^k}{j!k!}\int\!H_jH_k\,d\gamma_1 =\int\!e^{sx-\frac{s^2}{2}}e^{tx-\frac{t^2}{2}}\,\gamma_1(dx) =e^{st}=\sum_{k=0}^\infty\frac{(st)^k}{k!}, \]

which gives \( {\int\!H_jH_k\,d\gamma_1=k!\,\mathbf{1}_{j=k}} \), and in particular \( {\Vert H_k\Vert_2^2=k!} \), by identifying the series coefficients. It follows that \( {{(H_k/\sqrt{k!})}_{k\in\mathbb{N}}} \) is an orthonormal basis of the Hilbert space \( {L^2(\gamma_1)} \), thanks to the density of polynomials.
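Numerically, these orthogonality relations can be checked with NumPy, whose "HermiteE" family coincides with the monic Hermite polynomials above; this is only a sketch:

```python
# Check int H_j H_k d(gamma_1) = k! delta_{jk} for the monic (probabilists')
# Hermite polynomials, exposed by NumPy as the "HermiteE" basis.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval, hermegauss

x, w = hermegauss(40)       # nodes/weights for the weight e^{-x^2/2}
w /= np.sqrt(2 * np.pi)     # normalize into gamma_1

for j in range(6):
    for k in range(6):
        Hj = hermeval(x, [0] * j + [1])   # evaluates H_j at the nodes
        Hk = hermeval(x, [0] * k + [1])
        dot = np.sum(w * Hj * Hk)         # <H_j, H_k> in L^2(gamma_1)
        target = factorial(k) if j == k else 0.0
        assert abs(dot - target) < 1e-6, (j, k)
print("orthogonality relations verified up to degree 5")
```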

One can check that \( {AH_k=-kH_k} \), and thus \( {P_t(H_k)=e^{-kt}H_k} \), for any \( {k\in\mathbb{N}} \): the Hermite polynomials diagonalize the dynamics. The gap between the first eigenvalue \( {0} \) and the second eigenvalue \( {-1} \) of \( {A} \) is of length \( {1} \). This spectral gap produces the exponential convergence. More generally, the semigroup preserves the spectral decomposition: if \( {f\perp\mathrm{Vect}\{H_0,H_1,\ldots,H_{k-1}\}} \) in \( {L^2(\gamma_1)} \) then \( {P_t(f)\perp\mathrm{Vect}\{H_0,H_1,\ldots,H_{k-1}\}} \) for any \( {t\geq0} \), and

\[ \Vert P_t(f)\Vert_{L^2(\gamma_1)}\leq e^{-kt}\Vert f\Vert_{L^2(\gamma_1)}. \]
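For instance, for \( {H_2(x)=x^2-1} \),

\[ A(H_2)(x)=(x^2-1)''-x(x^2-1)'=2-2x^2=-2H_2(x), \]

so that \( {P_t(H_2)=e^{-2t}H_2} \), in line with the general formula.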

We recognize, up to normalization, the density of the eigenvalues of the Gaussian Unitary Ensemble (GUE), namely the eigenvalues of a Gaussian \( {n\times n} \) Hermitian random matrix whose Lebesgue density on \( {\mathbb{R}^{n+n^2-n}=\mathbb{R}^{n^2}} \) (\( {n} \) real diagonal entries and \( {n^2-n} \) real parameters for the off-diagonal entries) is proportional to

\[ H\mapsto e^{-\frac{1}{2}\mathrm{Tr}(H^2)}. \]

Notes. By pure provocation, we used the Cauchy-Schwarz inequality only once. We learned about the link with the GUE during a talk by Satya Majumdar.