Consider the gradient operator ∇ acting on scalar functions f : Rn → R; the gradient of a scalar function is a vector field v = ∇f : Rn → Rn. The divergence operator div, acting on vector fields to produce scalar fields, is the adjoint operator to ∇. The Laplace operator Δ is then the composition of the divergence and gradient operators:

Δ = div ∘ ∇,

acting on scalar functions to produce scalar functions. Note that A = −Δ is a positive operator, whereas Δ is a dissipative operator.
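As a quick numerical illustration (not from the article), one can check on a grid that composing a finite-difference divergence with a finite-difference gradient reproduces the Laplacian, and that −Δ is positive in the sense that ⟨f, −Δf⟩ ≈ ∫|∇f|² ≥ 0. The test function f(x, y) = exp(−(x² + y²)) is an arbitrary choice.

```python
# Sketch: Laplacian as div ∘ grad, via central differences on a 2-D grid.
import numpy as np

x = np.linspace(-3, 3, 401)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2))

# gradient: scalar field -> vector field
gx, gy = np.gradient(f, h, h)

# divergence: vector field -> scalar field
div_grad_f = np.gradient(gx, h, axis=0) + np.gradient(gy, h, axis=1)

# closed-form Laplacian of f for comparison: (4(x^2 + y^2) - 4) f
laplacian_f = (4 * (X**2 + Y**2) - 4) * f

interior = (slice(5, -5), slice(5, -5))  # avoid boundary artifacts
err = np.max(np.abs(div_grad_f[interior] - laplacian_f[interior]))
print(err)  # small discretisation error

# <f, -Laplacian f> approximates the Dirichlet energy, which is positive
dirichlet = -(f * div_grad_f).sum() * h * h
print(dirichlet)  # positive, consistent with -Δ being a positive operator
```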

The divergence operator δ (to be more precise, δn, since it depends on the dimension n) is now defined to be the adjoint of ∇ in the Hilbert space sense, in the Hilbert space L2(Rn, B(Rn), γn; R). In other words, δ acts on a vector field v : Rn → Rn to give a scalar function δv : Rn → R, and satisfies the formula

∫Rn ∇f(x) · v(x) dγn(x) = ∫Rn f(x) δv(x) dγn(x).

On the left, the product is the pointwise Euclidean dot product of two vector fields; on the right, it is just the pointwise multiplication of two functions. Using integration by parts, one can check that δ acts on a vector field v with components vi, i = 1, ..., n, as follows:

δv(x) = ∑i=1,…,n ( xi vi(x) − ∂vi/∂xi(x) ).

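The adjoint relation can be sanity-checked by Monte Carlo (a sketch, not from the article): sampling from the standard Gaussian measure γ2 on R2 and comparing ⟨∇f, v⟩ with ⟨f, δv⟩, where δv = ∑i (xi vi − ∂vi/∂xi). The particular f and v below are arbitrary smooth choices.

```python
# Monte-Carlo check of <grad f, v>_{L^2(gamma)} = <f, delta v>_{L^2(gamma)}
# for the standard Gaussian measure on R^2.
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal((2, 1_000_000))  # samples from gamma_2

# f(x) = sin(x1 + 2 x2), so grad f = (cos(x1 + 2 x2), 2 cos(x1 + 2 x2))
f = np.sin(x1 + 2 * x2)
fx1 = np.cos(x1 + 2 * x2)
fx2 = 2 * np.cos(x1 + 2 * x2)

# v(x) = (cos(x2), x1 * x2), with d v1/d x1 = 0 and d v2/d x2 = x1
v1, v2 = np.cos(x2), x1 * x2
delta_v = (x1 * v1 + x2 * v2) - (0 + x1)

lhs = np.mean(fx1 * v1 + fx2 * v2)   # <grad f, v>
rhs = np.mean(f * delta_v)           # <f, delta v>
print(lhs, rhs)  # agree up to Monte-Carlo error
```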
Consider now an abstract Wiener space E with Cameron–Martin Hilbert space H and Wiener measure γ. Let D denote the Malliavin derivative. The Malliavin derivative D is an unbounded operator from L2(E, γ; R) into L2(E, γ; H) – in some sense, it measures “how random” a function on E is. The domain of D is not the whole of L2(E, γ; R), but is a dense linear subspace, the Watanabe–Sobolev space, often denoted by D1,2 (once differentiable in the sense of Malliavin, with derivative in L2).

Again, δ is defined to be the adjoint of the gradient operator (in this case, the Malliavin derivative plays the role of the gradient operator). The operator δ is also known as the Skorokhod integral, which is an anticipating stochastic integral; it is this set-up that gives rise to the slogan “stochastic integrals are divergences”. δ satisfies the identity

E[F δ(v)] = E[⟨DF, v⟩H]

for all F in D1,2 and all v in the domain of δ.
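This duality between D and δ can also be checked numerically in the simplest setting (a sketch, not from the article): on classical Wiener space with a deterministic integrand v = h, the Skorokhod integral δ(h) reduces to the Wiener integral ∫ h dW, and for F = f(W1) one has ⟨DF, h⟩H = f′(W1) ∫01 h(t) dt. The choices h(t) = t and f = sin below are arbitrary; both sides then equal ½ E[cos(W1)] = ½ e^{−1/2}.

```python
# Monte-Carlo check of E[F * delta(h)] = E[<DF, h>_H] on Wiener space,
# with deterministic h(t) = t and F = sin(W_1).
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 100
dt = 1.0 / n_steps
t = (np.arange(n_steps) + 0.5) * dt        # midpoints of the time grid

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W1 = dW.sum(axis=1)                        # W_1 for each path
delta_h = (t * dW).sum(axis=1)             # Wiener integral of h(t) = t

F = np.sin(W1)
lhs = np.mean(F * delta_h)                 # E[F * delta(h)]
rhs = np.mean(np.cos(W1)) * 0.5            # E[f'(W_1)] * integral of h
print(lhs, rhs)  # both close to 0.5 * exp(-1/2)
```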