Moment invariants to convolution

Some radiometric degradations of images, such as wrong focus, camera and/or scene motion, camera vibrations, and imaging through a turbulent medium such as the atmosphere or water, can be described by a combination of convolution and additive noise

g(x,y) = (f ∗ h)(x,y) + n(x,y),

where f(x,y) is an image function, h(x,y) is the point spread function (PSF) of the degradation, and n(x,y) is additive random noise. If we impose no limitation on the PSF, invariants to such a degradation cannot exist; on the other hand, the more constraints the PSF satisfies, the more invariants can be constructed. All the above degradations can be described by convolution with a centrosymmetric PSF, i.e. h(x,y) = h(-x,-y). In this case, we can construct the invariants by the following recursive formula.

If (p+q) is even, then

C(p,q) = 0.

If (p+q) is odd, then

C(p,q) = μpq − (1/μ00) Σn Σm (p choose n)(q choose m) C(p−n, q−m) μnm,

where μpq denotes the central moment of order (p+q) and the sum runs over 0 ≤ n ≤ p, 0 ≤ m ≤ q with 0 < n+m < p+q.
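A minimal NumPy sketch of this recursion may make it concrete. The function names are ours, the image is any 2-D grayscale array, and the invariance below assumes an energy-normalised (unit-sum) centrosymmetric PSF; for clarity the moments are recomputed at every recursive call rather than cached.

```python
import numpy as np
from math import comb

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D grayscale image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def blur_invariant(img, p, q):
    """C(p,q): invariant to convolution with a centrosymmetric,
    unit-sum PSF, computed by the recursion above."""
    if (p + q) % 2 == 0:
        return 0.0                      # even orders vanish by definition
    total = 0.0
    for n in range(p + 1):
        for m in range(q + 1):
            if 0 < n + m < p + q:       # strictly lower-order terms
                total += (comb(p, n) * comb(q, m)
                          * blur_invariant(img, p - n, q - m)
                          * central_moment(img, n, m))
    return central_moment(img, p, q) - total / central_moment(img, 0, 0)
```

For the third order the sum contributes nothing (it only involves first-order central moments, which are zero), so C(p,q) reduces to μpq itself; the first non-trivial correction appears at the fifth order, e.g. C(5,0) = μ50 − 10 μ30 μ20 / μ00.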

The blur invariants have their counterparts in the Fourier domain: the tangent of the phase of the Fourier spectrum of the image function is a convolution invariant.
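This Fourier-domain property can be checked numerically. The sketch below (our own construction) builds a PSF that is centrosymmetric under the wrap-around symmetry of the DFT, so its spectrum H is real; multiplying F by a real H can at most flip the sign of a spectral value, and the tangent of the phase, i.e. the ratio imag/real, is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((32, 32))

N = 32
y, x = np.mgrid[:N, :N]
dx, dy = np.minimum(x, N - x), np.minimum(y, N - y)  # wrap-around distances
h = np.exp(-(dx ** 2 + dy ** 2) / 4.0)               # h(x,y) = h(-x,-y)
h /= h.sum()

F = np.fft.fft2(f)
H = np.fft.fft2(h)                 # real up to rounding error
G = F * H                          # spectrum of the circular convolution

mask = (np.abs(H) > 1e-6) & (np.abs(F.real) > 1e-2)  # avoid near-0/0 cases
tan_f = F.imag / F.real
tan_g = G.imag / G.real            # equals tan_f wherever H is nonzero
```

The phase itself is not invariant (it may jump by π wherever H is negative), which is exactly why the tangent, insensitive to that jump, is the quantity to use.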

If we know that the PSF satisfies some additional constraint, we can construct further invariants. This is the case, e.g., for an N-fold rotationally symmetric, circularly symmetric, or Gaussian PSF.

Sometimes the convolution degradation is combined with a geometric degradation. In such cases we can use combined rotation-and-blur invariants or combined affine-and-blur invariants.

To demonstrate the performance of the invariants described above, we apply them to the problem of template matching in a blurred and noisy scene. The experiment was inspired by a remote sensing application: we perform the matching by means of our blur invariants, without any prior de-blurring. We ran the experiment on real satellite images, but the blur and noise were introduced artificially, which gives us the possibility to control their amount and to evaluate the results quantitatively. We used invariants normalized by a power of μ00 to image contrast and magnitude, so that they have roughly the same range of values regardless of p and q.

The matching algorithm itself is straightforward. We search the blurred image g(x,y) and, for each possible position of the template, calculate the Euclidean distance in the space of blur invariants between the template and the corresponding window of g(x,y). The matching position of the template is determined by the minimum distance. The only user-defined parameter of the algorithm is the maximum order r of the invariants used. In this experiment we used r = 7, which means we applied 18 invariants from the 3rd to the 7th order. Three templates of size 48×48 were extracted from a 512×512 high-resolution SPOT image (City of Plzeň)

and contain significant local landmarks: a road crossing, an apartment block, and the confluence of two rivers.
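The exhaustive search over template positions can be sketched as follows. This is only an illustration, not the experiment's implementation: instead of the 18 invariants up to order 7 used above, it takes just the four third-order central moments (which, as noted, are themselves blur invariants for a centrosymmetric PSF), and the normalizing exponent (p+q)/2 + 1 of μ00 is one common choice, assumed here for the sake of the example.

```python
import numpy as np

def invariant_features(win):
    """Third-order central moments of a window, normalized by a power
    of mu00 so all features share roughly the same range."""
    y, x = np.mgrid[:win.shape[0], :win.shape[1]].astype(float)
    m00 = win.sum()
    xc, yc = (x * win).sum() / m00, (y * win).sum() / m00
    mu = lambda p, q: ((x - xc) ** p * (y - yc) ** q * win).sum()
    # exponent (p+q)/2 + 1 = 2.5 for order 3 (an assumed normalization)
    return np.array([mu(3, 0), mu(2, 1), mu(1, 2), mu(0, 3)]) / m00 ** 2.5

def match_template(scene, template):
    """Slide the template over the scene; return the position with the
    minimum Euclidean distance in the invariant feature space."""
    t = invariant_features(template)
    th, tw = template.shape
    best, best_pos = np.inf, None
    for i in range(scene.shape[0] - th + 1):
        for j in range(scene.shape[1] - tw + 1):
            d = np.linalg.norm(invariant_features(scene[i:i + th, j:j + tw]) - t)
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos
```

In practice one would extend `invariant_features` to the full set of blur invariants up to the chosen order r; the search loop itself is unchanged.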

To simulate acquisition by another sensor with a lower spatial resolution, the image was blurred by Gaussian masks of sizes from 3×3 to 21×21 and corrupted by Gaussian white noise with standard deviation ranging from 0 to 40. In each blurred and noisy frame, the algorithm attempted to localize the templates. As one can expect, the success rate of the matching depends on the amount of blur and on the noise level.
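The degradation step can be reproduced along these lines. The sigma of the Gaussian mask and the edge-replicating border handling are our assumptions; the source only specifies the mask sizes and the noise standard deviations.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Square size x size Gaussian mask, normalized to unit sum."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(img, size, sigma, noise_std, seed=0):
    """Blur by a Gaussian mask ('same'-size convolution with replicated
    edges, an assumed border policy), then add Gaussian white noise."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(size):
        for j in range(size):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    rng = np.random.default_rng(seed)
    return out + rng.normal(0.0, noise_std, img.shape)
```

Sweeping `size` over 3, 5, …, 21 and `noise_std` over 0–40 then yields a grid of degraded frames on which the matching success rate can be tabulated.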

An example of a frame in which all three templates were localized correctly

The results are summarized here

The boundary effect is an important limitation of this method. Due to the blurring, the pixels lying near the boundary of the template inside the image are affected by pixels lying outside the template. Since the invariants are calculated from a bounded area of the image, where the blurring is not exactly a convolution, they are no longer strictly invariant, which may lead to mismatches.