Digital Image Processing, Part 1

CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION
In the design and analysis of image processing systems, it is convenient and often necessary mathematically to characterize the image to be processed. There are two basic mathematical characterizations of interest: deterministic and statistical. In deterministic image representation, a mathematical image function is defined and point properties of the image are considered. For a statistical image representation, the image is specified by average properties. The following sections develop the deterministic and statistical characterization of continuous images. Although the analysis is presented in the context of visual images, many of the results can be extended to general two-dimensional...

TWO-DIMENSIONAL SYSTEMS 5
values required to match a unit amount of narrowband light at wavelength λ . In a
multispectral imaging system, the image field observed is modeled as a spectrally
weighted integral of the image light function. The ith spectral image field is then
given as
F_i(x, y, t) = \int_0^{\infty} C(x, y, t, \lambda) S_i(\lambda) \, d\lambda \qquad (1.1-5)
where S_i(λ) is the spectral response of the ith sensor.
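As a numerical illustration of Eq. 1.1-5, the spectral integral can be approximated by quadrature. The Gaussian sensor response and the flat light spectrum below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch of Eq. 1.1-5: the i-th spectral image field is a spectrally
# weighted integral of the image light function C(x, y, t, lambda).
# The Gaussian sensor response and flat spectrum are assumptions.

wavelengths = np.linspace(400e-9, 700e-9, 301)  # visible band, meters

def sensor_response(lam, center=550e-9, width=40e-9):
    """Hypothetical spectral response S_i(lambda) of the i-th sensor."""
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

def light_spectrum(lam):
    """Hypothetical image light function C at a fixed (x, y, t)."""
    return np.ones_like(lam)  # equal-energy (flat) spectrum

# F_i(x, y, t) = integral over wavelength of C * S_i (Eq. 1.1-5),
# approximated by a Riemann sum on the wavelength grid.
dlam = wavelengths[1] - wavelengths[0]
F_i = np.sum(light_spectrum(wavelengths) * sensor_response(wavelengths)) * dlam
```

For a flat spectrum, the integral reduces to the area under the sensor response curve.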
For notational simplicity, a single image function F(x, y, t) is selected to represent an image field in a physical imaging system. For a monochrome imaging system, the image function F(x, y, t) nominally denotes the image luminance, or some converted or corrupted physical representation of the luminance, whereas in a color imaging system, F(x, y, t) signifies one of the tristimulus values, or some function of the tristimulus value. The image function F(x, y, t) is also used to denote general three-dimensional fields, such as the time-varying noise of an image scanner.
In correspondence with the standard definition for one-dimensional time signals,
the time average of an image function at a given point (x, y) is defined as
\langle F(x, y, t) \rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} F(x, y, t) L(t) \, dt \qquad (1.1-6)
where L(t) is a time-weighting function. Similarly, the average image brightness at a
given time is given by the spatial average,
\langle F(x, y, t) \rangle_S = \lim_{\substack{L_x \to \infty \\ L_y \to \infty}} \frac{1}{4 L_x L_y} \int_{-L_x}^{L_x} \int_{-L_y}^{L_y} F(x, y, t) \, dx \, dy \qquad (1.1-7)
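Both averages can be approximated numerically on a large but finite window. The image function used here (a constant brightness plus a zero-mean temporal oscillation) and the uniform weighting L(t) = 1 are illustrative assumptions:

```python
import numpy as np

# Numerical sketch of the time average (Eq. 1.1-6) and the spatial
# average (Eq. 1.1-7) on large but finite windows.

def F(x, y, t):
    # Independent of (x, y) for simplicity; the 0 * (x + y) term just
    # broadcasts the result to the shape of the spatial arguments.
    return 2.0 + np.sin(t) + 0.0 * (x + y)

# Time average at a fixed point (x, y) = (0, 0), with L(t) = 1.
T = 1000.0
t = np.linspace(-T, T, 200_001)
dt = t[1] - t[0]
time_avg = np.sum(F(0.0, 0.0, t)) * dt / (2 * T)

# Spatial average at a fixed time t = 0.
L = 50.0
x = np.linspace(-L, L, 501)
y = np.linspace(-L, L, 501)
X, Y = np.meshgrid(x, y)
dx = x[1] - x[0]
dy = y[1] - y[0]
space_avg = np.sum(F(X, Y, 0.0)) * dx * dy / (4 * L * L)
```

Both averages recover the constant brightness 2.0, since the oscillation averages to zero over a long window.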
In many imaging systems, such as image projection devices, the image does not change with time, and the time variable may be dropped from the image function. For other types of systems, such as motion pictures, the image function is time sampled. It is also possible to convert the spatial variation into time variation, as in television, by an image scanning process. In the subsequent discussion, the time variable is dropped from the image field notation unless specifically required.
1.2. TWO-DIMENSIONAL SYSTEMS
A two-dimensional system, in its most general form, is simply a mapping of some input set of two-dimensional functions F1(x, y), F2(x, y), ..., FN(x, y) to a set of output two-dimensional functions G1(x, y), G2(x, y), ..., GM(x, y), where (−∞ < x, y < ∞) denotes the independent, continuous spatial variables of the functions. This mapping may be represented by the operators O_m{ · } for m = 1, 2, ..., M, which relate the input to the output set of functions by the set of equations

FIGURE 1.2-3. Graphical example of two-dimensional convolution.
denotes the convolution operation. The convolution integral is symmetric in the
sense that
G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x - \xi, y - \eta) H(\xi, \eta) \, d\xi \, d\eta \qquad (1.2-13)
Figure 1.2-3 provides a visualization of the convolution process. In Figure 1.2-3a and b, the input function F(x, y) and impulse response are plotted in the dummy coordinate system (ξ, η). Next, in Figures 1.2-3c and d, the coordinates of the impulse response are reversed, and the impulse response is offset by the spatial values (x, y). In Figure 1.2-3e, the integrand product of the convolution integral of Eq. 1.2-12 is shown as a crosshatched region. The integral over this region is the value of G(x, y) at the offset coordinate (x, y). The complete function G(x, y) could, in effect, be computed by sequentially scanning the reversed, offset impulse response across the input function and simultaneously integrating the overlapped region.
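A discrete sketch of this scan-and-integrate view of convolution, using small illustrative arrays, also confirms the symmetry of Eq. 1.2-13:

```python
import numpy as np

# Discrete sketch of 2-D convolution (Eqs. 1.2-12 and 1.2-13).
# The small input F and impulse response H are illustrative arrays.

def conv2d_full(f, h):
    """Direct 'full' 2-D convolution: g[m, n] = sum_{i,j} f[i, j] h[m-i, n-j]."""
    fr, fc = f.shape
    hr, hc = h.shape
    g = np.zeros((fr + hr - 1, fc + hc - 1))
    for i in range(fr):
        for j in range(fc):
            # Each input sample deposits a shifted, scaled copy of h,
            # equivalent to scanning the reversed h across f.
            g[i:i + hr, j:j + hc] += f[i, j] * h
    return g

F = np.array([[1.0, 2.0], [3.0, 4.0]])
H = np.array([[0.0, 1.0], [1.0, 0.0]])

G1 = conv2d_full(F, H)  # F convolved with H (Eq. 1.2-12)
G2 = conv2d_full(H, F)  # H convolved with F (Eq. 1.2-13)
```

The two orderings produce identical outputs, reflecting the symmetry of the convolution integral.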
1.2.3. Differential Operators
Edge detection in images is commonly accomplished by performing a spatial differentiation of the image field followed by a thresholding operation to determine points of steep amplitude change. Horizontal and vertical spatial derivatives are defined as
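The derivative definitions themselves fall outside this excerpt; as a hedged sketch, simple first differences can stand in for the horizontal and vertical derivatives, followed by thresholding of the gradient magnitude. The synthetic image and the threshold value are illustrative assumptions:

```python
import numpy as np

# Sketch of edge detection by spatial differentiation followed by
# thresholding. First differences stand in for the text's (elided)
# derivative definitions; image and threshold are assumptions.

# Synthetic image: dark left half, bright right half (one vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# First-difference approximations to the spatial derivatives.
dx = np.zeros_like(image)
dy = np.zeros_like(image)
dx[:, :-1] = image[:, 1:] - image[:, :-1]  # horizontal derivative
dy[:-1, :] = image[1:, :] - image[:-1, :]  # vertical derivative

# Gradient magnitude, then a threshold marks points of steep change.
magnitude = np.hypot(dx, dy)
edges = magnitude > 0.5
```

Only the column where the amplitude jumps survives the threshold, marking the vertical edge.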

IMAGE STOCHASTIC CHARACTERIZATION 19
As a simplifying assumption, the Markov process is often assumed to be of separable form with an autocovariance function
K(\tau_x, \tau_y) = C \exp\{ -\alpha_x |\tau_x| - \alpha_y |\tau_y| \} \qquad (1.4-18)
The power spectrum of this process is
W(\omega_x, \omega_y) = \frac{4 \alpha_x \alpha_y C}{(\alpha_x^2 + \omega_x^2)(\alpha_y^2 + \omega_y^2)} \qquad (1.4-19)
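The correspondence between Eqs. 1.4-18 and 1.4-19 can be checked numerically: because the covariance is separable and even, its two-dimensional Fourier transform factors into two one-dimensional cosine integrals. The parameter values here are illustrative:

```python
import numpy as np

# Numerical check that the power spectrum of Eq. 1.4-19 is the Fourier
# transform of the separable covariance of Eq. 1.4-18.

C = 1.0
alpha_x, alpha_y = 2.0, 3.0
omega_x, omega_y = 1.5, 0.7

def W_analytic(wx, wy):
    """Eq. 1.4-19."""
    return 4 * alpha_x * alpha_y * C / ((alpha_x**2 + wx**2) * (alpha_y**2 + wy**2))

tau = np.linspace(-40.0, 40.0, 400_001)
dtau = tau[1] - tau[0]

# 1-D transforms of exp(-alpha |tau|); the sine parts vanish by symmetry,
# leaving cosine integrals approximated by Riemann sums.
Wx = np.sum(np.exp(-alpha_x * np.abs(tau)) * np.cos(omega_x * tau)) * dtau
Wy = np.sum(np.exp(-alpha_y * np.abs(tau)) * np.cos(omega_y * tau)) * dtau
W_numeric = C * Wx * Wy
```

Each one-dimensional factor evaluates to 2α/(α² + ω²), whose product reproduces Eq. 1.4-19.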
In the discussion of the deterministic characteristics of an image, both time and
space averages of the image function have been defined. An ensemble average has
also been defined for the statistical image characterization. A question of interest is:
What is the relationship between the spatial-time averages and the ensemble averages? The answer is that for certain stochastic processes, which are called ergodic
processes, the spatial-time averages and the ensemble averages are equal. Proof of
the ergodicity of a process in the general case is often difficult; it usually suffices to
determine second-order ergodicity in which the first- and second-order space-time
averages are equal to the first- and second-order ensemble averages.
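A small simulation illustrates (but does not prove) ergodic behavior: for a stationary Gaussian white-noise field, the first- and second-order spatial averages of one large realization approach the corresponding ensemble averages. The field parameters are illustrative assumptions:

```python
import numpy as np

# Illustration of second-order ergodic behavior for a stationary
# Gaussian white-noise field: spatial averages of one realization
# approximate ensemble averages. Parameters are assumptions.

rng = np.random.default_rng(0)
mean, std = 5.0, 2.0

# Spatial averages over one large realization.
field = rng.normal(mean, std, size=(1000, 1000))
spatial_mean = field.mean()
spatial_var = field.var()

# Ensemble averages: many independent realizations at one fixed pixel.
ensemble = rng.normal(mean, std, size=1_000_000)
ensemble_mean = ensemble.mean()
ensemble_var = ensemble.var()
```

Both pairs of averages agree to within sampling error, as ergodicity predicts for this field.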
Often, the probability density or moments of a stochastic image field are known
at the input to a system, and it is desired to determine the corresponding information
at the system output. If the system transfer function is algebraic in nature, the output
probability density can be determined in terms of the input probability density by a
probability density transformation. For example, let the system output be related to
the system input by
G(x, y, t) = O_F\{ F(x, y, t) \} \qquad (1.4-20)
where O_F{ · } is a monotonic operator on F(x, y). The probability density of the output field is then
p\{G; x, y, t\} = \frac{p\{F; x, y, t\}}{\left| \, dO_F\{F(x, y, t)\} / dF \, \right|} \qquad (1.4-21)
The extension to higher-order probability densities is straightforward, but often
cumbersome.
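As a sketch of Eq. 1.4-21, take the monotonic operator O_F{F} = exp(F) with a standard normal input; the predicted output density can then be checked against a histogram of transformed samples. The operator, input density, and comparison interval are all illustrative choices:

```python
import numpy as np

# Sketch of the density transformation of Eq. 1.4-21 for the monotonic
# operator O_F{F} = exp(F) applied to a standard normal input.

rng = np.random.default_rng(1)
f_samples = rng.normal(0.0, 1.0, size=1_000_000)
g_samples = np.exp(f_samples)  # monotonic operator O_F{F} = exp(F)

def p_f(f):
    """Standard normal input density."""
    return np.exp(-0.5 * f**2) / np.sqrt(2 * np.pi)

def p_g(g):
    """Eq. 1.4-21: p(g) = p(f) / |dO_F{f}/df| evaluated at f = ln g."""
    f = np.log(g)
    return p_f(f) / np.abs(np.exp(f))  # dO_F/df = exp(f) = g

# Empirical density of the output on [0.5, 2.0] versus the prediction.
counts, edges = np.histogram(g_samples, bins=50, range=(0.5, 2.0))
width = edges[1] - edges[0]
hist = counts / (f_samples.size * width)  # density w.r.t. all samples
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - p_g(centers)))
```

The predicted density is the log-normal density, and the histogram matches it to within sampling error.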
The moments of the output of a system can be obtained directly from knowledge
of the output probability density, or in certain cases, indirectly in terms of the system
operator. For example, if the system operator is additive linear, the mean of the system output is