Many image processing operators covered in this topic are based on, or derived from, more fundamental image processing operators that are in wide use. A survey of these operators is presented in this topic to create a foundation on which subsequent topics build.

For the purpose of this topic, the image is a matrix of spatially discrete image values. The matrix may be two- or three-dimensional. Higher dimensions are possible, but images of more than four dimensions are rarely found. Three-dimensional images may be either volumetric images or time sequences of two-dimensional images (stacks). Four-dimensional images are generally time sequences of three-dimensional images. Depending on the modality, the matrix elements may be arranged on an isotropic grid (i.e., the distance to neighbors is the same in all directions) or anisotropically. In volumetric image modalities such as computed tomography and magnetic resonance imaging, the distance to the axial neighbors is often much larger than the distance to the neighbors in the main image plane. Although the image values are known only on the discrete coordinates, images are generally represented as if the image values extend halfway to the nearest neighbor. When images are displayed, the image values are represented on a gray scale or in color shades. Image representation is covered in more detail in topic 13.
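The effect of an anisotropic grid can be sketched in a few lines of NumPy. The spacing values below are hypothetical but typical of computed tomography, where the axial spacing exceeds the in-plane spacing; the helper function `physical_distance` is introduced here for illustration only.

```python
import numpy as np

# A volumetric image as a three-dimensional matrix of discrete image values.
# Shape is (slices, rows, columns); the values are placeholders here.
volume = np.zeros((40, 512, 512), dtype=np.int16)

# Hypothetical voxel spacing in millimeters: the axial spacing (between
# slices) is much larger than the in-plane spacing, so the grid is
# anisotropic.
spacing_mm = np.array([3.0, 0.5, 0.5])  # (axial, row, column)

def physical_distance(idx_a, idx_b, spacing):
    """Euclidean distance in mm between two voxel indices on the grid."""
    return float(np.linalg.norm((np.asarray(idx_a) - np.asarray(idx_b)) * spacing))

# Axial neighbors are 3.0 mm apart; in-plane neighbors only 0.5 mm.
print(physical_distance((0, 0, 0), (1, 0, 0), spacing_mm))  # 3.0
print(physical_distance((0, 0, 0), (0, 1, 0), spacing_mm))  # 0.5
```

Operators that assume an isotropic grid (distance-based filters, for example) must scale index offsets by the per-axis spacing in this way, or the axial direction will be weighted incorrectly.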

Since computer memory is limited, the image values themselves are also discrete. It is very common to allocate 8 bits for one image element (termed a pixel, or in volumetric images, a voxel). Eight bits allow for 2⁸ = 256 discrete values in any pixel. In-between values are rounded to the nearest allowable 8-bit value. Sometimes, image modalities provide a higher bit depth. Some digital cameras provide 10 or 12 bits/pixel. Most computed tomography devices provide 12 bits/pixel. High-quality scanners used to digitize x-ray film provide up to 16 bits/pixel. Some image processing operations yield fractional values, and floating-point storage for the pixel value is useful (although few image processing software packages support floating-point data). However, even floating-point values have limited precision, and rounding errors need to be taken into account.
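A minimal sketch of this quantization, assuming measured intensities normalized to [0, 1]: the two in-between values below are chosen so that they collapse to the same 8-bit level but remain distinct at 12 bits.

```python
import numpy as np

def quantize(image, bits=8):
    """Round intensities in [0, 1] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits  # 8 bits -> 256 values (0 ... 255)
    return np.rint(np.clip(image, 0.0, 1.0) * (levels - 1)).astype(np.uint16)

# Two nearly equal measured values: indistinguishable at 8 bits.
measured = np.array([0.0, 0.5, 0.50196, 1.0])
print(quantize(measured).tolist())           # [0, 128, 128, 255]
print(quantize(measured, bits=12).tolist())  # 12-bit levels stay distinct
```

The rounding step is exactly the source of digitization error mentioned below: once two measured values map to the same integer level, the difference between them is irrecoverable.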

The image formation process also introduces errors, with the consequence that the pixel value deviates from an ideal but inaccessible image value. Often, it is sufficient to consider Gaussian blurring and additive noise to model the image formation errors. A certain degree of blurring can be assumed when a small feature in the object (an idealized point source) is represented by a broader intensity peak in an image. An exaggerated example would be the digital photograph of a back-illuminated pinhole with an out-of-focus lens. The two-dimensional intensity function that is the image of an idealized point source, called a point-spread function, provides information on the level of detail that an image modality can provide. Moreover, the sensor elements and subsequent amplifiers introduce some noise. Often, this noise is sufficiently well described as an independent deviation of each pixel value from an idealized (but inaccessible) value by a small random displacement ε, whereby the displacements have zero mean and a Gaussian distribution (additive Gaussian noise). In addition, the rounding of the (usually analog) measured value to an integer image value introduces digitization noise.
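This image formation model can be simulated end to end: a Gaussian point-spread function applied to an idealized point source, followed by additive zero-mean Gaussian noise and digitization. The sketch below uses only NumPy, with a separable Gaussian convolution written out by hand; the kernel radius and noise level are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible simulation

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel sampled on integer offsets."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian blur: convolve rows, then columns (a simple PSF model)."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

# Idealized point source: one bright pixel on a dark background.
ideal = np.zeros((21, 21))
ideal[10, 10] = 255.0

# Image formation: Gaussian blurring (point-spread function), then
# additive zero-mean Gaussian noise, then rounding to integer values
# (digitization noise).
observed = blur(ideal, sigma=2.0)
observed += rng.normal(0.0, 2.0, observed.shape)
observed = np.clip(np.rint(observed), 0, 255).astype(np.uint8)

# The single-pixel source is now a broad, noisy intensity peak whose
# maximum is far below the original value of 255.
print(ideal.max(), observed.max())
```

The observed peak is the point-spread function scaled by the source intensity, which is why measuring the image of a point source (the pinhole example above) characterizes the detail a modality can resolve.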