Let $N(t)$ be a Poisson process with intensity $\lambda$ and define the centered (compensated) process $N_0(t):= N(t)-\lambda t$. A stochastic integral can be properly defined with respect to $N_0(t)$ (but not with respect to $N(t)$).

Let $f$ be a nice function; for concreteness, say $f(s)=s^2$. I want to calculate the distribution (and also the covariances in the time domain) of the process

$$X(t) := \int_0^t s^2 \;dN_0(s)$$

I have tried the following argument:

Consider the partial sums

$$S_n = \sum_{i=0}^{n-1} t_i^2 \;(N_0(t_{i+1}) - N_0(t_i)),$$

where $0=t_0 < t_1 < \dots < t_n=t$ is some partition of the interval $[0,t]$.

Then $\mathbb E[S_n] = 0$ and $Var[S_n] = \sum_{i=0}^{n-1} t_i^4 \lambda (t_{i+1} - t_i)$, using the independence of increments of the process $N_0(t)$.

Now, in analogy to the argument that an integral with respect to Brownian motion is again normally distributed, I tried to calculate the characteristic function of $S_n$, hoping that by taking its limit I would get something I can interpret. Using the fact that the sum of two independent Poisson random variables (which applies here by the independence of increments) is again Poisson, I am left with the problem of how to deal with the prefactor $t_i^2$, since the distribution of $t_i^2 (N_0(t_{i+1}) - N_0(t_i))$ is nothing particularly nice, as far as I can tell.

Questions:

Can one properly finish my argument, or is there a nicer way to calculate the above stochastic integral within $L^2$-theory without referring to Itô calculus?
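As a sanity check on the moment computations above, the partial sums $S_n$ can be simulated directly, since the increments of $N_0$ are independent centered Poisson variables. Below is a minimal NumPy sketch (the values $\lambda=2$, $t=1$ and the partition size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

lam, t, n = 2.0, 1.0, 100            # intensity, horizon, partition size
t_grid = np.linspace(0.0, t, n + 1)
dt = np.diff(t_grid)                 # increment lengths t_{i+1} - t_i

# Independent increments of N_0: Poisson(lam*dt) minus its mean lam*dt.
n_paths = 100_000
increments = rng.poisson(lam * dt, size=(n_paths, n)) - lam * dt

# Partial sums S_n = sum_i t_i^2 * (N_0(t_{i+1}) - N_0(t_i)).
S = increments @ t_grid[:-1] ** 2

print("empirical mean:", S.mean())    # should be close to 0
print("empirical var :", S.var())
print("formula var   :", np.sum(t_grid[:-1] ** 4 * lam * dt))
```

The empirical variance should match the Riemann sum $\sum_i t_i^4\,\lambda\,(t_{i+1}-t_i)$ exactly up to Monte Carlo error, since the formula holds for every partition, not just in the limit.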

1 Answer

The stochastic integrals which interest you can be properly defined with respect to $N=(N(t))_{t\geqslant0}$ as well, for example, for every deterministic function $u$,
$$
X(t)=\int_0^tu(s)\mathrm dN(s)=\sum_{k=1}^{N(t)}u(T_k),
$$
where $(T_k)_{k\geqslant1}$ enumerates the jump times of the process $N$. Conditionally on $N(t)=n$, the set $\{T_k\mid1\leqslant k\leqslant n\}$ is distributed like $\{U_k\mid1\leqslant k\leqslant n\}$, where $(U_k)_k$ is an i.i.d. sample uniformly distributed on $(0,t)$. Thus,
$$
E[X(t)\mid N(t)]=N(t)\int_0^tu(s)\frac{\mathrm ds}t,
$$
and
$$
E[X(t)]=E[N(t)]\int_0^tu(s)\frac{\mathrm ds}t=\lambda\int_0^tu(s)\mathrm ds.
$$
More generally, for every $a$,
$$
E[\mathrm e^{aX(t)}\mid N(t)]=\left(\int_0^t\mathrm e^{au(s)}\frac{\mathrm ds}t\right)^{N(t)},
$$
hence
$$
E[\mathrm e^{aX(t)}]=\exp\left(\lambda\int_0^t(\mathrm e^{au(s)}-1)\mathrm ds\right).
$$
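This closed form is easy to check by Monte Carlo, using the representation $X(t)=\sum_{k\leqslant N(t)}u(T_k)$ with conditionally i.i.d. uniform jump times. A minimal sketch for the question's $u(s)=s^2$ (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, t, a = 2.0, 1.0, 0.3
u = lambda x: x ** 2                 # integrand from the question

# Given N(t) = n, the unordered jump times are i.i.d. uniform on (0, t),
# so X(t) = sum_{k <= N(t)} u(T_k) can be simulated directly.
n_paths = 400_000
counts = rng.poisson(lam * t, size=n_paths)
jumps = rng.uniform(0.0, t, size=counts.sum())
path_id = np.repeat(np.arange(n_paths), counts)
X = np.bincount(path_id, weights=u(jumps), minlength=n_paths)

# Closed form: exp(lam * int_0^t (exp(a*u(s)) - 1) ds), via the trapezoid rule.
s = np.linspace(0.0, t, 4001)
g = np.exp(a * u(s)) - 1.0
mgf_formula = np.exp(lam * np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(s)))

print("Monte Carlo E[exp(aX)]:", np.exp(a * X).mean())
print("closed form           :", mgf_formula)
```

The `bincount` trick sums $u(T_k)$ over the jumps belonging to each simulated path without an explicit Python loop.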
This, together with the fact that the increments of $X=(X(t))_t$ are independent, fully determines the joint distribution of the process $X$. Now,
$$
X_0(t)=\int_0^tu(s)\mathrm dN_0(s)
$$
is simply
$$
X_0(t)=X(t)-\lambda\int_0^tu(s)\mathrm ds,
$$
hence the joint distribution of the process $X_0=(X_0(t))_t$ is fully determined by the fact that its increments are independent and by the identity
$$
E[\mathrm e^{aX_0(t)}]=\exp\left(\lambda\int_0^t(\mathrm e^{au(s)}-1-au(s))\mathrm ds\right).
$$
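Specializing to the question's $u(s)=s^2$ and expanding the exponent in powers of $a$ (a worked instance of the last identity, not part of the original answer):
$$
\log E[\mathrm e^{aX_0(t)}]=\lambda\int_0^t(\mathrm e^{as^2}-1-as^2)\,\mathrm ds=\sum_{m\geqslant2}\frac{a^m}{m!}\cdot\frac{\lambda t^{2m+1}}{2m+1},
$$
so the $m$-th cumulant of $X_0(t)$ is $\lambda t^{2m+1}/(2m+1)$ for every $m\geqslant2$; in particular, $\mathrm{var}(X_0(t))=\lambda t^5/5$.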
For example, for every $0\leqslant s\leqslant t$, since $X_0(t)-X_0(s)$ is centered and independent of $X_0(s)$,
$$
\mathrm{cov}(X_0(s),X_0(t))=\mathrm{var}(X_0(s))=E[X_0(s)^2]=\lambda\int_0^su(x)^2\mathrm dx.
$$
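This covariance identity can also be verified by simulation; a minimal sketch for the question's $u(x)=x^2$, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, s, t = 2.0, 0.5, 1.0
u = lambda x: x ** 2

# Simulate paths: conditionally on N(t), jump times are i.i.d. uniform on (0, t).
n_paths = 400_000
counts = rng.poisson(lam * t, size=n_paths)
jumps = rng.uniform(0.0, t, size=counts.sum())
path_id = np.repeat(np.arange(n_paths), counts)

def X0(level):
    # X_0(level) = sum of u(T_k) over jumps T_k <= level, minus the compensator.
    mask = jumps <= level
    raw = np.bincount(path_id[mask], weights=u(jumps[mask]), minlength=n_paths)
    return raw - lam * level ** 3 / 3    # lam * int_0^level x^2 dx

Xs, Xt = X0(s), X0(t)
emp = np.mean(Xs * Xt)                   # both factors are centered
formula = lam * s ** 5 / 5               # lam * int_0^s u(x)^2 dx with u(x) = x^2
print("empirical cov:", emp)
print("formula cov  :", formula)
```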