Compensators of Stopping Times

The previous post introduced the concept of the compensator of a process, which is known to exist for all locally integrable semimartingales. In this post, I'll look at the very special case of compensators of processes consisting of a single jump of unit size.

Definition 1 Let $\tau$ be a stopping time. The compensator of $\tau$ is defined to be the compensator of the process $1_{[\tau,\infty)}$.

So, the compensator A of $\tau$ is the unique predictable FV process such that $A_0=0$ and $1_{[\tau,\infty)}-A$ is a local martingale. Compensators of stopping times are sufficiently special that we can give an accurate description of how they behave. For example, if $\tau$ is predictable, then its compensator is just $1_{[\tau,\infty)}$. If, on the other hand, $\tau$ is totally inaccessible and almost surely finite then, as we will see below, its compensator, A, continuously increases to a value $A_\infty$ which has the exponential distribution with parameter 1.

However, compensators of stopping times are sufficiently general to be able to describe the compensator of any cadlag adapted process X with locally integrable variation. We can break X down into a continuous part plus a sum over its jumps,

$\displaystyle X=X^c+\sum_n\Delta X_{\tau_n}1_{[\tau_n,\infty)}.\qquad{\rm (1)}$

Here, $\tau_n$ are disjoint stopping times such that the union of their graphs $[\tau_n]$ contains all the jump times of X. That they are disjoint just means that $\tau_m\ne\tau_n$ whenever $\tau_n<\infty$, for any $m\ne n$. As was shown in an earlier post, not only is such a sequence of stopping times guaranteed to exist, but each of the times can be chosen to be either predictable or totally inaccessible. As the first term, $X^c$, on the right hand side of (1) is a continuous FV process, it is by definition equal to its own compensator. So, the compensator of X is equal to $X^c$ plus the sum of the compensators of $\Delta X_{\tau_n}1_{[\tau_n,\infty)}$. This reduces compensators of locally integrable FV processes to those of processes consisting of a single jump at either a predictable or a totally inaccessible time.

A useful alternative to Definition 1 for the compensator of a process with a single jump is as follows.

Lemma 2 Let $X=U1_{[\tau,\infty)}$ for a stopping time $\tau$ and nonnegative integrable $\mathcal{F}_\tau$-measurable random variable U. Then, its compensator is the unique right-continuous, predictable and increasing process A with $A_0=0$ and such that

$\displaystyle \mathbb{E}\left[\int_0^\infty\xi\,dA\right]=\mathbb{E}\left[1_{\{\tau<\infty\}}\xi_\tau U\right]$

for all nonnegative predictable processes $\xi$.

The compensator of a process with a single jump at a predictable stopping time is easily described: it is itself a process with a single jump at the same predictable time.

Lemma 3 Let $\tau$ be a predictable stopping time and U be an integrable and $\mathcal{F}_\tau$-measurable random variable. Then, the compensator of $X=U1_{[\tau,\infty)}$ is $A=\bar{U}1_{[\tau,\infty)}$, where $\bar{U}=\mathbb{E}[U\mid\mathcal{F}_{\tau-}]$.

Proof: First, $\bar{U}$ is integrable and $\mathcal{F}_{\tau-}$-measurable with $\mathbb{E}[U-\bar{U}\mid\mathcal{F}_{\tau-}]=0$. This implies that $X-A=(U-\bar{U})1_{[\tau,\infty)}$ is a martingale (this does not require predictability of $\tau$).

It just remains to be shown that A is predictable. As $\tau$ is predictable, $1_{[\tau,\infty)}$ is a predictable process. Also, as $\bar{U}$ is $\mathcal{F}_{\tau-}$-measurable, there exists a predictable process $\xi$ with $\xi_\tau=\bar{U}$. Then, $A=\xi^\tau1_{[\tau,\infty)}$ is a product of predictable processes and, hence, is predictable.

More generally, the compensator of a process consisting of a single jump can be expressed in terms of the compensator corresponding to the jump time. This enables the compensator of the process X in (1) to be obtained from the compensators of the stopping times $\tau_n$.

Lemma 4 Let $\tau$ be a stopping time with compensator A, and U be an integrable $\mathcal{F}_\tau$-measurable random variable.

Then, the compensator of $X=U1_{[\tau,\infty)}$ is $B=\int\xi\,dA$, where $\xi$ is any predictable process with $\xi_\tau=\mathbb{E}[U\mid\mathcal{F}_{\tau-}]$ whenever $\tau<\infty$.

Proof: Set $\bar{U}=\mathbb{E}[U\mid\mathcal{F}_{\tau-}]$ and $B=\int\xi\,dA$. We can suppose that $\xi$ and U are both nonnegative by, if necessary, applying the result separately to the positive and negative parts of $\xi$ and U. So, B is increasing and, by Lemma 2, for any nonnegative predictable process $\alpha$,

$\displaystyle \mathbb{E}\left[\int_0^\infty\alpha\,dB\right]=\mathbb{E}\left[\int_0^\infty\alpha\xi\,dA\right]=\mathbb{E}\left[1_{\{\tau<\infty\}}\alpha_\tau\xi_\tau\right]=\mathbb{E}\left[1_{\{\tau<\infty\}}\alpha_\tau U\right].$

Taking $\alpha=1$ shows that $\xi$ is A-integrable. Also, B is predictable, so Lemma 2 shows that B is the compensator of X.

It remains to describe the compensator of a process with a single jump at a totally inaccessible stopping time. In this case, the process is quasi-left-continuous and, in contrast with Lemma 3, the compensator will be continuous.

Lemma 5 Suppose that $X=U1_{[\tau,\infty)}$ where $\tau$ is a totally inaccessible stopping time and U is an integrable $\mathcal{F}_\tau$-measurable random variable. Then, the compensator A of X is the unique continuous FV process such that $A_0=0$ and

$\displaystyle M=\left(1+\lambda U1_{[\tau,\infty)}\right)e^{-\lambda A}\qquad{\rm (2)}$

is a local martingale, for each $\lambda\in\mathbb{C}$.

Proof: First, as $\tau$ is totally inaccessible, X is quasi-left-continuous and, therefore, its compensator A is continuous. Denoting the expression in (2) by M, integration by parts gives

$\displaystyle M=1+\lambda\int M_{-}\,d(X-A).$

This follows either from the standard integration by parts formula for Lebesgue-Stieltjes integration or by applying Itô's formula and noting that the quadratic variation terms drop out, as the processes are all of finite variation. So, M is the Doléans exponential of $\lambda(X-A)$. If A is the compensator of X then $X-A$ is a local martingale so, by preservation of the local martingale property under stochastic integration, M is also a local martingale.
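To spell this out, the identity $dM=\lambda M_{-}\,d(X-A)$ can be checked piecewise, before, at and after the jump time (a sketch, using the continuity of A and $\Delta X_\tau=U$):

```latex
% Piecewise verification of dM = \lambda M_- d(X - A) for continuous A:
\begin{align*}
  t<\tau:&\quad M_t=e^{-\lambda A_t},
    \quad dM_t=-\lambda M_t\,dA_t=\lambda M_{t-}\,d(X-A)_t
    \quad(\text{here }dX_t=0),\\
  t=\tau:&\quad \Delta M_\tau
    =(1+\lambda U)e^{-\lambda A_\tau}-e^{-\lambda A_\tau}
    =\lambda M_{\tau-}\,\Delta X_\tau,\\
  t>\tau:&\quad M_t=(1+\lambda U)e^{-\lambda A_t},
    \quad dM_t=-\lambda M_t\,dA_t=\lambda M_{t-}\,d(X-A)_t.
\end{align*}
```

Since A is continuous, $M_{t-}$ may replace $M_t$ in the $dA$ integrals, which is what makes the three cases combine into the single stochastic integral above.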

Conversely, if A is a continuous FV process with $A_0=0$ making M a local martingale then, choosing any nonzero and imaginary $\lambda$, M will be nonzero and $X-A=\lambda^{-1}\int M_{-}^{-1}\,dM$. So $X-A$ is a local martingale and, hence, A is the compensator of X.

A simple consequence of this result is that we can say precisely how the compensator of a totally inaccessible stopping time is distributed at infinity. Note that, as $1_{[\tau,\infty)}$ is constant on the interval $[\tau,\infty)$, the same holds for A. So, $A_\infty$ can be replaced by $A_\tau$ in the result below.

Corollary 6 Let $\tau$ be a totally inaccessible stopping time with $\tau<\infty$ almost surely. If A is the compensator of $\tau$, then $A_\infty$ has the exponential distribution with parameter 1.

Proof: Choose any $\lambda\in\mathbb{C}$ with nonnegative real part. Then, the process M defined by (2), with U set to 1, is bounded by $1+|\lambda|$. So, it is a uniformly integrable martingale with $M_0=1$ and, as $\tau$ is almost surely finite, $M_\infty=(1+\lambda)e^{-\lambda A_\infty}$. Hence,

$\displaystyle \mathbb{E}\left[e^{-\lambda A_\infty}\right]=\frac{1}{1+\lambda}.$

This is just the moment generating function of the exponential distribution of rate 1.
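Corollary 6 is easy to test by simulation. The following is a minimal sketch, using an assumed concrete hazard (the choice $\Lambda(t)=t^2$ is purely for illustration): if $\mathbb{P}(\tau>t)=e^{-\Lambda(t)}$ then the compensator is $A_t=\Lambda(t\wedge\tau)$, and $A_\infty=\Lambda(\tau)$ should be exponential with parameter 1.

```python
import math
import random

# Monte Carlo check of Corollary 6 (a sketch, with the assumed cumulative
# hazard Lambda(t) = t^2): sample tau with P(tau > t) = exp(-t^2), so the
# compensator is A_t = Lambda(t ∧ tau) and A_inf = Lambda(tau) = tau^2
# should have the exponential distribution with parameter 1.
random.seed(42)
n = 200_000
a_inf = []
for _ in range(n):
    u = 1.0 - random.random()           # uniform on (0, 1]
    tau = math.sqrt(-math.log(u))       # inverts P(tau > t) = exp(-t^2)
    a_inf.append(tau ** 2)              # A_inf = Lambda(tau)

mean = sum(a_inf) / n                   # should be close to 1
tail = sum(a > 1.0 for a in a_inf) / n  # should be close to exp(-1)
```

The sample mean and the tail probability $\mathbb{P}(A_\infty>1)$ come out close to $1$ and $e^{-1}\approx0.37$, as the exponential law requires.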

More generally, in the case where the stopping time can be infinite, the following lemma gives the possible joint distributions of $1_{\{\tau=\infty\}}$ and $A_\infty$.

Lemma 7 Let $\tau$ be a totally inaccessible stopping time with compensator A. If Z is an exponentially distributed random variable with parameter 1, independently of $\mathcal{F}_\infty$, then

$\displaystyle B=1_{\{\tau<\infty\}}A_\infty+1_{\{\tau=\infty\}}\left(A_\infty+Z\right)$

has the exponential distribution with parameter 1.

Proof: As in the proof of the previous corollary, choose any $\lambda\in\mathbb{C}$ with nonnegative real part. Then, the process M defined by (2), with U set to 1, is bounded by $1+|\lambda|$. So, it is a uniformly integrable martingale with $M_0=1$ and $M_\infty=(1+\lambda1_{\{\tau<\infty\}})e^{-\lambda A_\infty}$. Setting $B=1_{\{\tau<\infty\}}A_\infty+1_{\{\tau=\infty\}}(A_\infty+Z)$, the condition that Z is exponentially distributed with rate 1 independently of $\mathcal{F}_\infty$ gives,

$\displaystyle (1+\lambda)\mathbb{E}\left[e^{-\lambda B}\right]=(1+\lambda)\mathbb{E}\left[1_{\{\tau<\infty\}}e^{-\lambda A_\infty}\right]+\mathbb{E}\left[1_{\{\tau=\infty\}}e^{-\lambda A_\infty}\right]=\mathbb{E}[M_\infty]=1.$

This is the moment generating function for an exponentially distributed random variable of rate 1.

Conversely, suppose that we are given a probability space and random variables B, Z and W, with B taking values in $(0,\infty]$, such that B, W, Z are independent, and such that Z and W are exponentially distributed of rate 1. Then, we can define a random time by $\tau=W$ for $W\le B$ and $\tau=\infty$ for $W>B$. Also define the continuous increasing process $A_t=t\wedge\tau\wedge B$. With respect to the (complete) natural filtration generated by $1_{[\tau,\infty)}$ and A, it can be seen that A is the compensator of $\tau$, with $A_\infty=W\wedge B$. So, Lemma 7 does describe the most general joint distribution of $1_{\{\tau=\infty\}}$ and $A_\infty$. This can be considered in terms of time-changes: every totally inaccessible stopping time can be considered as a time change of an exponentially distributed time. This, however, can be understood in the more general setting of time changes of counting processes and Poisson processes, which I will show in a later post.
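Such a converse construction can be sketched numerically. Here B is given an arbitrary positive distribution (lognormal, an assumption purely for illustration), W and Z are independent rate-1 exponentials, and the time is taken to be finite (equal to W) exactly when $W\le B$; the combined variable of Lemma 7 should then again be exponential with rate 1.

```python
import math
import random

# Sketch of a converse construction for Lemma 7: B is an arbitrary
# positive random variable (lognormal here, purely for illustration),
# and W, Z are rate-1 exponentials with B, W, Z independent.  Taking
# tau = W on {W <= B} and tau = infinity otherwise, the terminal
# compensator value is A_inf = W ∧ B, and the combined variable
#   1_{tau<inf} A_inf + 1_{tau=inf} (A_inf + Z)
# equals W when W <= B and B + Z otherwise.
random.seed(1)
n = 200_000
combined = []
for _ in range(n):
    b = math.exp(random.gauss(0.0, 1.0))  # arbitrary positive B
    w = random.expovariate(1.0)
    z = random.expovariate(1.0)
    combined.append(w if w <= b else b + z)

mean = sum(combined) / n                   # should be close to 1
tail = sum(c > 1.0 for c in combined) / n  # should be close to exp(-1)
```

By memorylessness of W, conditioning on $W>B$ and adding the independent exponential Z reproduces the law of W beyond B, which is why the combined sample is exponential regardless of the distribution chosen for B.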

Classification of Stopping Times

Recall that stopping times come in several flavours, such as predictable, accessible and totally inaccessible. We now show that this classification can be expressed in terms of the compensator. We say that A is a pure jump process if, almost surely, $A_t=\sum_{s\le t}\Delta A_s$ for each time t.

Lemma 8 Let $\tau$ be a stopping time with compensator A. Then,

$\tau$ is predictable if and only if A is (almost surely) a pure jump process with jumps only of size 1.

$\tau$ is accessible if and only if A is (almost surely) a pure jump process.

$\tau$ is totally inaccessible if and only if A is (almost surely) continuous.

Proof: Set $X=1_{[\tau,\infty)}$ throughout this proof.

First, suppose that $\tau$ is predictable. Then, X is predictable and $X-X=0$ is constant, hence a martingale. So, the compensator of X is X itself, which consists of a single jump of size 1. Conversely, suppose that the compensator A is a pure jump process with jumps only of size 1. Let $\sigma$ be the first time at which the predictable process A hits 1. This is a predictable stopping time, $\Delta A_\sigma=1$ whenever $\sigma<\infty$, and $A_t=0$ for all $t<\sigma$. Lemma 2 of the previous post gives

$\displaystyle \Delta A_\sigma=\mathbb{E}\left[\Delta X_\sigma\mid\mathcal{F}_{\sigma-}\right]=\mathbb{P}\left(\tau=\sigma<\infty\mid\mathcal{F}_{\sigma-}\right),$

where the final equality follows from the fact that $\Delta X_\sigma=1_{\{\tau=\sigma<\infty\}}$. Furthermore, by definition, $\Delta A_\sigma=1$ if and only if $\sigma<\infty$. So, almost surely, $\tau=\sigma$ whenever $\sigma$ is finite.

where the final equality follows from the fact that whenever . Furthermore, by definition, if and only if . So, almost surely, whenever is finite. Similarly, using the fact that when , and the fact that is a martingale,

On the event , we have already shown that and, hence, . So, the equality above shows that has zero probability. Therefore, is predictable.

Now, suppose that $\tau$ is accessible. Then,

$\displaystyle X=1_{[\tau,\infty)}=\sum_n1_{\{\tau=\tau_n\}}1_{[\tau_n,\infty)},$

where $\tau_n$ are predictable stopping times with disjoint graphs such that $[\tau]\subseteq\bigcup_n[\tau_n]$. By Lemma 3, each of the terms inside the summation has a compensator equal to a process consisting of a single jump, so A is a pure jump process. Conversely, if A is a pure jump process and $\sigma$ is any totally inaccessible stopping time then we have

$\displaystyle X=1_{\{\tau=\sigma\}}1_{[\sigma,\infty)}+1_{\{\tau\ne\sigma\}}1_{[\tau,\infty)}.$

As $\sigma$ is totally inaccessible, the first term on the right hand side has a continuous compensator, which can only be a pure jump process if it is almost surely zero. So, $\mathbb{P}(\tau=\sigma<\infty)=0$ for any totally inaccessible stopping time $\sigma$, showing that $\tau$ is accessible.

Proof: First, as A is predictable, Lemma 2 of the previous post gives $\Delta A_\sigma=\mathbb{P}(\tau=\sigma<\infty\mid\mathcal{F}_{\sigma-})$ for every predictable stopping time $\sigma$; in particular, if A is continuous then $\mathbb{P}(\tau=\sigma<\infty)=0$ for all predictable $\sigma$ and $\tau$ is totally inaccessible. Next, for each nonnegative rational number q, let $\sigma_q$ be the first time at which $A\ge q$. This is a predictable stopping time and, whenever $\Delta A_t\ne0$, the nonempty interval $(A_{t-},A_t]$ must contain a rational number q, so $t=\sigma_q$. Therefore, $\{\Delta A\ne0\}$ is contained in $\bigcup_q[\sigma_q]$. So, if $\tau$ is totally inaccessible then $\Delta A_{\sigma_q}=\mathbb{P}(\tau=\sigma_q<\infty\mid\mathcal{F}_{\sigma_q-})=0$ for each q and, almost surely, A is continuous.

Example

Finally, for this post, I'll give an example of the convergence of approximations to the compensator of a stopping time, demonstrating the convergence stated in theorems 10 and 11 of the previous post. This will also demonstrate the result that $A_\infty$ is exponentially distributed when $\tau$ is totally inaccessible, and show how convergence in probability can fail when $\tau$ is not totally inaccessible.

Let $(\Omega,\mathcal{F},\mathbb{P})$ be any complete probability space on which we have defined a uniformly distributed random variable $\tau\colon\Omega\to[0,1]$. Define the process $X=1_{[\tau,\infty)}$, and let $\{\mathcal{F}_t\}_{t\ge0}$ be the smallest complete filtration to which X is adapted. That is, $\mathcal{F}_t$ is generated by $\{X_s\colon s\le t\}$ together with the sets of zero probability. This is also the smallest (complete) filtration with respect to which $\tau$ is a stopping time. It can be seen that $\tau$ is totally inaccessible, although this also follows from the fact that its compensator, which we will compute, is continuous. For any times $s\le t\le1$ we have

$\displaystyle \mathbb{E}\left[X_t-X_s\mid\mathcal{F}_s\right]=1_{\{\tau>s\}}\frac{t-s}{1-s}.$

Note that, summed along a partition, the right hand side is just the Riemann-sum approximation to the integral of $1_{\{s<\tau\}}(1-s)^{-1}$, so we have convergence, uniformly in t (almost surely),

$\displaystyle \sum_k\mathbb{E}\left[X_{t_{k+1}\wedge t}-X_{t_k\wedge t}\mid\mathcal{F}_{t_k}\right]\rightarrow\int_0^{t\wedge\tau}\frac{ds}{1-s}=-\log(1-t\wedge\tau).$

Theorem 10 of the previous post tells us that the limit $A_t=-\log(1-t\wedge\tau)$ is the compensator of X. Furthermore, as $\tau$ has the uniform distribution on [0,1], $A_\infty=-\log(1-\tau)$ is exponentially distributed, as was guaranteed by Corollary 6.
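Both claims are easy to check numerically. A minimal sketch (the value $\tau=0.8$ in the first part is an arbitrary choice for illustration): the Riemann sums converge to $-\log(1-\tau)$, and for uniform $\tau$ the terminal value $-\log(1-\tau)$ has mean 1, consistent with the exponential law.

```python
import math
import random

# Sketch of this example.  First, the Riemann-sum approximation
#   sum_k 1_{tau > t_k} (t_{k+1} - t_k)/(1 - t_k),  t_k = k/N,
# converges to the compensator value -log(1 - tau) (tau = 0.8 here,
# an arbitrary illustrative choice).  Second, for tau uniform on
# [0, 1], A_inf = -log(1 - tau) is exponential with rate 1.
def discrete_compensator(tau, big_n):
    return sum(
        (1 / big_n) / (1 - k / big_n)
        for k in range(big_n)
        if k / big_n < tau
    )

approx = discrete_compensator(0.8, 100_000)
exact = -math.log(1 - 0.8)     # approx should be close to this

random.seed(11)
n = 200_000
mean = sum(-math.log(1 - random.random()) for _ in range(n)) / n  # close to 1
```

The left Riemann sum slightly underestimates the integral of the increasing integrand $(1-s)^{-1}$, but the error is of order $1/N$ here.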

We will now look at the case where $\tau$ is predictable, showing how convergence in probability can fail. This demonstrates that it was necessary, instead, to use weak convergence in the statement of Theorem 11. As we will show, it is possible for the distribution of the approximation to converge to the wrong limiting distribution. In fact, we will use the same underlying probability space and random time $\tau$ as above. The only difference is that the filtration will be enlarged so as to make $\tau$ predictable.

Let $m_n$ be a sequence of positive integers increasing to infinity. To keep the calculations simple, I'll suppose that $m_n$ divides $m_{n+1}$ for each n. Then let $\tau_n$ be the largest multiple of $m_n^{-1}$ which is less than or equal to $\tau$,

$\displaystyle \tau_n=m_n^{-1}\lfloor m_n\tau\rfloor,$

where $\lfloor\cdot\rfloor$ is the floor function. Then, $\tau_n<\tau$ (almost surely) and $\tau-\tau_n\le m_n^{-1}$. So, $\tau_n$ almost surely increases to $\tau$ from below. Next, let $\{\mathcal{F}_t\}$ be the smallest complete filtration such that the processes $1_{[\tau_n,\infty)}$ are adapted. Equivalently, $\{\mathcal{F}_t\}$ is the smallest complete filtration so that $\tau_1,\tau_2,\ldots$ are stopping times. With respect to this filtration, the times $\tau_n$ announce $\tau$, showing that $\tau$ is predictable. Its compensator is $1_{[\tau,\infty)}$.
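The announcing property is easy to check numerically; a small sketch (the divisor chain $m_n=2^n$ is an arbitrary illustrative choice):

```python
import math
import random

# Sketch of the announcing sequence: tau_n = floor(m_n * tau) / m_n for
# an increasing divisor chain m_n (here m_n = 2^n, an arbitrary choice
# satisfying m_n | m_{n+1}).  The sequence increases and approaches tau
# strictly from below, announcing tau in the enlarged filtration.
random.seed(3)
tau = random.random()
m = [2 ** j for j in range(1, 25)]
tau_n = [math.floor(mj * tau) / mj for mj in m]

strictly_below = all(t < tau for t in tau_n)                 # tau_n < tau
increasing = all(a <= b for a, b in zip(tau_n, tau_n[1:]))   # m_n | m_{n+1}
gap = tau - tau_n[-1]                                        # at most 1/m_n
```

Divisibility of consecutive $m_n$ is what makes the sequence monotone: each grid refines the previous one, so the floors can only move up towards $\tau$.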

Now, consider a positive integer N which is a multiple of $m_n$ and divides $m_{n+1}$, and compute the approximation, $A^N$, to the compensator of X on the partition $t_k=k/N$, where $k=0,1,\ldots,N$. The terms of this approximation are the conditional probabilities

$\displaystyle \mathbb{E}\left[X_{t_{k+1}}-X_{t_k}\,\middle|\,\mathcal{F}_{t_k}\right]=\mathbb{P}\left(t_k<\tau\le t_{k+1}\,\middle|\,\mathcal{F}_{t_k}\right).$

Note that if $\tau_n>t_k$, then we also have $\tau_n\ge t_{k+1}$ (using the fact that N is a multiple of $m_n$) and the conditional probability on the right hand side is zero. So, only the terms with $\tau_n\le t_k<\tau$ contribute,

$\displaystyle A^N_\tau=\sum_{\tau_n\le t_k<\tau}\mathbb{P}\left(t_k<\tau\le t_{k+1}\,\middle|\,\mathcal{F}_{t_k}\right).$

Now, conditional on $\mathcal{F}_{t_k}$, on the event $\{\tau_n\le t_k<\tau\}\cap\{\tau\ge t_k+m_{n+1}^{-1}\}$, the stopping time $\tau$ is uniformly distributed over $[t_k+m_{n+1}^{-1},\tau_n+m_n^{-1})$; here, $t_k$ is a multiple of $m_{n+1}^{-1}$, using the fact that N divides $m_{n+1}$, and $t_{k+1}\le\tau_n+m_n^{-1}$, as both are multiples of $N^{-1}$. In this case,

$\displaystyle \mathbb{P}\left(t_k<\tau\le t_{k+1}\,\middle|\,\mathcal{F}_{t_k}\right)=\frac{N^{-1}-m_{n+1}^{-1}}{\tau_n+m_n^{-1}-m_{n+1}^{-1}-t_k}.$

As $\tau$ has probability $N/m_{n+1}$ of lying in one of the intervals $(t_k,t_k+m_{n+1}^{-1})$, this identity holds for all k such that $\tau_n\le t_k<\tau$, outside a set of probability $N/m_{n+1}$. We can then calculate $A^N_\tau$,

$\displaystyle A^N_\tau=\sum_{\tau_n\le t_k<\tau}\frac{N^{-1}-m_{n+1}^{-1}}{\tau_n+m_n^{-1}-m_{n+1}^{-1}-t_k}\approx\int_{\tau_n}^{\tau}\frac{ds}{\tau_n+m_n^{-1}-s}=-\log\left(1-m_n(\tau-\tau_n)\right),$

outside a set of probability $N/m_{n+1}$. Here, we have assumed that $N/m_n$ and $m_{n+1}/N$ are large, so that the approximation holds in the limit. For example, if $m_n=(n!)^2$ then, taking $N=(n+1)m_n$, we see that $N/m_n=m_{n+1}/N=n+1$ tends to infinity, as n goes to infinity. Then, in the limit,

$\displaystyle A^N_\tau+\log\left(1-m_n(\tau-\tau_n)\right)\rightarrow0$

in probability. As $\tau$ has the uniform distribution on [0,1], the fractional part $m_n(\tau-\tau_n)$ is also uniform on [0,1], and we see that $A^N_\tau$ tends to an exponential distribution of rate 1 as n goes to infinity. It cannot converge in probability to $A_\tau$, which just takes the constant value 1. This might seem surprising but, in the limit, $A^N_\tau$ converges to the wrong distribution! However, it can be seen that it is also asymptotically independent of $\tau$. As its expected value is 1, this is equivalent to $A^N_\tau$ converging to 1 in the weak topology on $L^1$.
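This limiting behaviour can be checked directly; a sketch (with illustrative values of $m_n$ and the sample size): the fractional part $m_n(\tau-\tau_n)$ is uniform, so $-\log(1-m_n(\tau-\tau_n))$ has mean 1 and is nearly uncorrelated with $\tau$, consistent with an exponential limit that is independent of $\tau$.

```python
import math
import random

# Check of the limiting value: for tau uniform on [0, 1] and
# tau_n = floor(m_n * tau)/m_n, the variable m_n (tau - tau_n) is again
# uniform on [0, 1], so -log(1 - m_n (tau - tau_n)) is exponential with
# rate 1 and nearly uncorrelated with tau (m_n = 10_000 is illustrative).
random.seed(5)
m_n = 10_000
n = 200_000
taus, limits = [], []
for _ in range(n):
    tau = random.random()
    u = m_n * tau - math.floor(m_n * tau)   # fractional part, uniform
    taus.append(tau)
    limits.append(-math.log(1 - u))

mean = sum(limits) / n                      # should be close to 1
cov = (sum(t * l for t, l in zip(taus, limits)) / n
       - (sum(taus) / n) * mean)            # should be close to 0
```

The near-zero covariance is the finite-sample reflection of the asymptotic independence noted above, and the mean being 1 matches weak-$L^1$ convergence to the constant value 1.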