Medical Image Processing 1 (MIP1)
Image Signal Analysis and Processing (ISAP)
SS’09

Camera Systems
Dr. Pierre Elbischger
School of Medical Information Technology
Carinthia University of Applied Sciences

Lighting Technology (Overview)
• Reflected (1) vs. transmitted (2) light:
  (1) light source positioned in front of the object
  (2) light source positioned behind the object
• Bright field (1) vs. dark field (2): specular reflections of surface patches
  parallel to the optical axis either reach the lens aperture (bright field) or
  do not reach it (dark field).
• Coaxial light illuminates the object frontally. This can be realized by means
  of prisms or so-called ring lights mounted around the camera’s objective. Ring
  lights usually consist of several LED elements.
[Figure: Different lighting technologies [1]]

05.05.2009 Dr. Pierre Elbischger - Camera Systems - MIP1/ISAP'SS09

Lighting Technology (Examples)

[Figures: Bright field [1], Dark field [1], Coaxial ring light [1]]
• Using dark field lighting, the situation is inverted: flat regions appear dark,
  while stampings, scratches and grooves are seen clearly.
• By means of coaxial lighting, unwanted shadows can be avoided.
Optics

Besides the lighting, the imaging optics or camera objective plays an important role
in the image acquisition process. Depending on the scene, an appropriate objective
must be chosen. In the following, relevant objective characteristics as well as
deducible parameters are introduced.

Parameters of a C-Mount Pentax objective [1]:
  manufacturer            Pentax
  type designation        TV Lens H1214-M
  description             manually controlled high-resolution objective
                          for 2-megapixel C- and CS-Mount cameras
  interface, sensor size  C-Mount, 1/2”
  focal length            12mm
  aperture range          F 1.4 – 16
  aperture control        manual, fixable
  focus control           manual, fixable
  angle of field          horiz. ~ 29.8°
  min. object distance    0.25m

Objective Interfaces
In general, two standardized objective mounts are distinguished: C-Mount and
CS-Mount. Both have a diameter of one inch (25.4mm) and a thread pitch of 1/32 inch
(32 turns per inch). The flange back (the distance between the objective’s contact
surface and the image plane; German: Auflagemaß) is 17.5mm for C-Mount and 12.5mm
for CS-Mount.
[Figure: C-/CS-Mount flange backs and their possible combination with an AVT
CS-Mount camera [1]]
By adding a 5mm spacer ring, a C-Mount objective may also be used with a CS-Mount
camera. In contrast, a CS-Mount objective can never be used with a C-Mount camera.
Besides, several more adapters exist that allow other objectives to be combined
with C-Mount cameras.

Formats
• The format describes the dimensions of the image sensor within a camera. The data
  sheet of an objective states the largest sensor size that can be used – sensor
  chips of equal or smaller size may be combined with it. Combining a large-format
  objective with a smaller image sensor is often even beneficial, because the more
  distorted outer area of the lens then contributes nothing to the image formation.
• Attention: the quoted diagonal values don’t match the effective sensor size.
  Rather, they are a relic of the tube cameras and correspond to the exterior
  diameter of the camera tube.
[Figure: Common image sensor sizes for the C-Mount standard [1]]

The pinhole camera (1)
• The image is formed by light rays issued from the scene facing the box.
• If the pinhole were reduced to a point (theoretically), exactly one ray of light
  from each scene point would pass the pinhole and contribute to a single point in
  the image.
• A pinhole of finite size collects light from a cone of rays, subtending a finite
  solid angle, in each point on the image plane, and results in blurry images.
• The object points are related to the image points by a perspective projection.
[Figure: pinhole camera with optical axis, pinhole and image plane]

The pinhole camera (2)
• The larger the pinhole, the brighter the image, but a large pinhole gives blurry
  images.
• Shrinking the pinhole produces sharper images, but reduces the amount of light
  reaching the image plane and may introduce diffraction effects.
• Diffraction occurs if the diameter of the aperture is on the order of the
  wavelength of the light used.
[Figure: Diffraction pattern of a small circular pinhole]

Thin lens
• For two reasons the pinhole is replaced by a lens in most cameras:
  – to gather light,
  – to keep the image sharp while gathering light from a large area.
• Similar to the pinhole camera, the lens leads to a perspective projection of the
  object onto the image plane.

Refraction
Reflection and refraction at the interface between two homogeneous media with
indices of refraction n1 and n2 are described by Snell’s law. Considering
first-order (paraxial) geometry, where the angles between all light rays going
through a lens and the normal to the refractive surface of the lens are small, one
can use the paraxial refraction equation as an approximation of Descartes’ law.

Thin-Lens Equation (Focal Length)
When choosing an objective, the required focal length depends on the given object
distance g, the object size G and the image size B. For the object size G applies
(for common 4:3 sensor formats; analogously for the image size B):

  G = Gwidth   if Gwidth ≥ 1.33 · Gheight
  G = Gheight  otherwise

[Figure: path of rays through a thin lens(1)]

From the intercept theorem, the magnification V may be deduced(1):

  1/V = G/B = g/b = f/(b − f),

leading to the thin-lens equation of Descartes:

  1/f = 1/g + 1/b.

Inserting and transforming these equations, one obtains the needed formula for the
focal length:

  f = g · B / (G + B).

(1) Approximation: the thin-lens model is taken as a basis, i.e. the paraxial
refraction equation holds.

[Figure: How focal length affects photograph composition: by adjusting the camera’s
distance from the main subject while changing the focal length, the main subject
can remain the same size, while another subject at a different distance changes
size.]
Focal Length (Example)

Example: The human eye region (50x35mm²) shall be captured for a biometric access
control (iris recognition) at a distance of 500mm, using a 1/4” image sensor
(3.2x2.4mm²).

given values:
  G = 50mm
  B = 3.2mm
  g = 500mm
sought value:
  f
solution:
  f = g · B / (G + B) = 500 · 3.2 / (50 + 3.2) = 30.07mm

To ensure that the entire object is contained in the image, an objective with a
slightly wider angle (smaller focal length) is chosen. Besides, there exist
standardized focal length graduations from which the best matching one is taken:
  f = 4.2, 6, 8, 9, 12, 16, 25, 35, 50, … [mm].
In the given example f = 25mm would do a good job.
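The example can be reproduced in a few lines (the standard focal-length list is the one given above; choosing the largest standard value not exceeding f is an illustrative policy matching "slightly wider angle"):

```python
# Sketch of the focal-length calculation: f = g*B / (G + B), then pick a standard
# focal length below it so the whole object stays in the image.
STANDARD_FOCAL_LENGTHS_MM = [4.2, 6, 8, 9, 12, 16, 25, 35, 50]

def required_focal_length(g_mm, G_mm, B_mm):
    """Thin-lens focal length f = g*B / (G + B)."""
    return g_mm * B_mm / (G_mm + B_mm)

def pick_standard_focal_length(f_mm):
    """Largest standard focal length <= f (slightly wider angle of view)."""
    candidates = [f for f in STANDARD_FOCAL_LENGTHS_MM if f <= f_mm]
    return max(candidates) if candidates else min(STANDARD_FOCAL_LENGTHS_MM)

f = required_focal_length(g_mm=500, G_mm=50, B_mm=3.2)
print(round(f, 2))                    # ~30.08 (the slide truncates to 30.07)
print(pick_standard_focal_length(f))  # 25
```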
Focusing
Adjusting the focus of an objective corresponds to changing the image distance b.
When focusing an object far away (g → ∞), the image distance b converges to the
focal length f. When focusing an object quite close to the objective, the shortening
of g is limited by the minimum object distance MOD (see next slides).
[Figure: Principle of focusing]
As with the aperture, the focus of an objective can be adjusted manually or by a
motor, whereas some objectives have no adjustable focus at all.

Minimum Object Distance
The minimum object distance MOD is the minimum reasonable distance between the front
lens and the object to be captured. For object distances smaller than the MOD, it is
no longer possible to bring the object into focus. The MOD is given by the limit
stop of the objective’s focusing mechanism and can be varied by adding spacer rings
(sr) between objective and camera.

The equation for calculating the needed spacer ring size d for a given arrangement
can be deduced from the thin-lens equation 1/f = 1/g + 1/b (see before):

  MOD = g_min,      b = f · MOD / (MOD − f),
  b_sr = b + d,     MOD_sr = g_sr,min = b_sr · f / (b_sr − f),

  d = b_sr − b = f · MOD_sr / (MOD_sr − f) − f · MOD / (MOD − f).

The new magnification V_sr is given by (with 1/V = G/B = g/b = f/(b − f)):

  V_sr = b_sr / g_sr = (b − f)/f + d/f   ⇒   V_sr,max = V + d/f,   V_sr,min = d/f.

For g = ∞, b converges to the focal length f and the first term in the equation
vanishes, which gives V_sr,min = d/f.

Due to the assumed thin-lens model, in general these equations will give estimates
rather than exact values.
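A sketch of the spacer-ring calculation (thin-lens estimate, so results are approximate; the numeric values below are hypothetical, not from the slides):

```python
# Spacer-ring size d = b_sr - b needed to shrink the MOD, from the thin-lens model.
# All lengths in mm; names mirror the slide's symbols.
def image_distance(f, g):
    """b = f*g / (g - f) from the thin-lens equation 1/f = 1/g + 1/b."""
    return f * g / (g - f)

def spacer_ring_size(f, mod, mod_sr):
    """d = b_sr - b required to reduce the MOD from `mod` to `mod_sr`."""
    return image_distance(f, mod_sr) - image_distance(f, mod)

# Hypothetical numbers: a 25mm objective with MOD = 250mm, target MOD_sr = 100mm.
d = spacer_ring_size(f=25, mod=250, mod_sr=100)
print(round(d, 2))  # ~5.56mm spacer ring
```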
Depth of Field (1)
In optics, particularly as it relates to film and photography, the depth of field
(DOF) is the portion of a scene that appears sharp in the image. Although a lens can
precisely focus at only one distance, the decrease in sharpness is gradual on either
side of the focused distance, so that within the DOF the blurring is imperceptible
under normal viewing conditions.

[Figure: Effect of aperture on blur and DOF. The points in focus (2) project points
onto the image plane (5), but points at different distances (1 and 3) project
blurred images, or circles of confusion. Decreasing the aperture size (4) reduces
the size of the blur circles for points not in the focused plane, so that the
blurring becomes imperceptible and all points are within the DOF.]

An ideal point is projected as a circle of confusion into the image (point spread
function). Using digital matrix cameras (e.g. CCD cameras), the diameter ∅ of the
circle of confusion shouldn’t be larger than the size of a single pixel element,
otherwise image information is lost:

  ∅ = min(pixel width, pixel height).

Depth of Field (2) - The f-number
In optics, the f-number N (sometimes called focal ratio, f-ratio or relative
aperture) of an optical system expresses the diameter of the entrance pupil D in
terms of the effective focal length f of the lens; in simpler terms, the f-number is
the focal length divided by the aperture diameter:

  N = f / D.

The amount of light captured by a lens is proportional to the area of the aperture
A. Modern lenses use a standard f-stop scale, which is an approximately geometric
sequence of numbers (index n = 0, 1, 2, …) that corresponds to the sequence of
powers of √2 ≈ 1.414: f/1, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22,
f/32, f/45, f/64, f/90, f/128, etc. The values of the ratios are rounded to these
particular conventional numbers to make them easy to remember and write down. Thus,
two consecutive steps in the f-number sequence correspond to a reduction in light
intensity by a factor of two (half the area A).

Reducing the aperture size (increasing the f-number)
  – reduces the amount of light passing the aperture,
  – may require an increase of the exposure time,
  – increases the depth of field.
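The f-stop scale and its light-halving property can be sketched as follows (an illustrative generator; the rounded conventional values are the ones listed above):

```python
# The f-stop scale as powers of sqrt(2), and the per-stop halving of captured light.
import math

def f_number(n):
    """n-th f-stop: N = (sqrt 2)^n, i.e. 1, 1.4, 2, 2.8, 4, ..."""
    return math.sqrt(2) ** n

def relative_light(n):
    """Aperture area ~ (f/N)^2, so each full stop halves the captured light."""
    return 1.0 / (2 ** n)

print([round(f_number(n), 2) for n in range(6)])  # [1.0, 1.41, 2.0, 2.83, 4.0, 5.66]
print(relative_light(3))                          # 0.125 -> three stops = 1/8 light
```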
[Figure: sequence of f-numbers]

Depth of Field (3)
The pixel size can either be calculated from the pixel count and the sensor
dimensions, or it can be taken from the sensor’s data sheet. A typical pixel size of
a modern camera sensor is 5x5µm².

The front (l_f) and back (l_b) depth-of-field limits along the optical axis can be
calculated as follows:

  l_f,b = g / (1 ± ∅ · N · (g − f) / f²).

For the depth-of-field range r applies:

  r = l_b − l_f = g / (1 − ∅ · N · (g − f) / f²) − g / (1 + ∅ · N · (g − f) / f²).

According to these equations, the focal length f as well as the f-number N influence
the depth of field. In order to increase the depth of field, a higher f-number can
be chosen (smaller aperture). For g >> f the focal length has an approximately
quadratic influence and, thus, it is useful to use a wide-angle objective (smaller
focal length) to increase the depth of field.

[Figures: the same scene captured at f-number 5.6 and at f-number 22]
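A sketch of the DOF-limit formulas above (all lengths in mm; the setup values f = 12mm, g = 500mm and ∅ = 0.005mm are illustrative assumptions, while the two f-numbers match the example figures):

```python
# Depth-of-field limits l_{f,b} = g / (1 ± c*N*(g-f)/f^2), with c = circle of
# confusion (pixel size) and N = f-number. Thin-lens estimate.
def dof_limits(g, f, N, c):
    """Return (front limit, back limit) of the depth of field."""
    x = c * N * (g - f) / f**2
    l_front = g / (1 + x)
    l_back = g / (1 - x) if x < 1 else float("inf")  # beyond hyperfocal distance
    return l_front, l_back

for N in (5.6, 22):
    lf, lb = dof_limits(g=500, f=12, N=N, c=0.005)
    print(f"N={N}: sharp from {lf:.0f}mm to {lb:.0f}mm, range {lb - lf:.0f}mm")
```

As the slide states, raising the f-number from 5.6 to 22 widens the range considerably.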
Angle of Field

The specification of the angle of field Θ is redundant to the specification of the
focal length f and the sensor format Bmax:

  Θ = 2 · arctan(Bmax / (2f)).

Even though the information is redundant, the angle of field is often added to the
data sheet, since it easily allows assessing the kind or type of an objective.
Commonly one distinguishes wide-angle (Θ > 50°), standard (Θ ≈ 50°) and telephoto
lenses (Θ < 50°).

Example: For the format of a classical (analog) consumer camera (24x36mm², diagonal
43.2mm) one obtains a focal length f = 50mm for the standard objective. For a 1/4”
sensor, a focal length of f = 5mm is obtained for the same angle of field. In both
cases the image diagonal was used in the calculation.
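A minimal sketch of the angle-of-field relation (the 1/4” diagonal of 4.0mm follows from the 3.2x2.4mm² format given earlier):

```python
# Angle of field: theta = 2*arctan(B_max / (2f)), B_max = image diagonal here.
import math

def angle_of_field_deg(B_max_mm, f_mm):
    """Full angle of field in degrees for sensor extent B_max and focal length f."""
    return 2 * math.degrees(math.atan(B_max_mm / (2 * f_mm)))

print(round(angle_of_field_deg(43.2, 50), 1))  # analog full frame, f = 50mm: ~46.7°
print(round(angle_of_field_deg(4.0, 5), 1))    # 1/4" sensor, f = 5mm: ~43.6°
```

Both values lie near the ~50° "standard" angle, as the example claims.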
Extended depth of field (EDoF)

Image processing algorithms can be used to create one image of an extended depth of
field from a series of images that are partially out of focus.
[Figures: image series of out-of-focus images (large microscopy magnification);
extended depth-of-field image]

Real lens - Aberrations
• A more realistic model of simple optical systems is the thick lens.
• Simple lenses suffer from a number of aberrations.

Spherical aberration
The paraxial refraction equation was only an approximation (the first-order term of
a Taylor expansion). Using higher-order terms one can show that rays striking the
interface farther from the optical axis are focused closer to the interface (plane
of least confusion).

Distortion
This effect is due to the fact that different areas of a lens have slightly
different focal lengths.

Chromatic aberration
The index of refraction of a transparent medium depends on the wavelength (color)
of the incident light rays.

Compound lenses
• Aberrations can be minimized by aligning several simple lenses with well-chosen
  shapes and refraction indices, separated by appropriate stops.
• These compound lenses can still be modeled by the thick-lens equation.

Vignetting
• The complex compound (correction) lenses still suffer from one more defect
  relevant to image processing, called vignetting.
• Vignetting is caused by light beams emanating from object points located off axis
  that are partially blocked by the various apertures positioned inside the lens to
  limit aberrations. This phenomenon causes the irradiance to drop in the image
  periphery.
• For other reasons, irradiance is also proportional to cos⁴(α) and falls off as
  the light rays deviate from the optical axis. In typical situations this term can
  be neglected.
[Figure: vignetting effect in a two-lens system]
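The cos⁴ falloff mentioned above, as a minimal sketch (α is the off-axis angle of the light ray):

```python
# Natural irradiance falloff E(alpha)/E(0) = cos^4(alpha) for off-axis rays.
import math

def relative_irradiance(alpha_deg):
    """Relative irradiance cos^4(alpha) at off-axis angle alpha (degrees)."""
    return math.cos(math.radians(alpha_deg)) ** 4

print([round(relative_irradiance(a), 3) for a in (0, 10, 20, 30)])
```

Even at 30° off axis the term only reaches ~0.56, which is why it can usually be neglected for small fields of view.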
[Figure: vignetting as it appears in images]

CCD cameras
• A CCD sensor is placed in the image plane of the optical system.
• The CCD sensor consists of a regular (matrix) array of photodiodes that collect
  electrons created by the photoelectric effect as light shines onto the photodiode.
• The integrated charge in a photodiode cell is almost linear in time and light
  intensity.
• Under conditions where a CCD is exposed to very high-intensity illumination, it
  is possible to exhaust the storage capacity of the CCD wells, a condition known
  as blooming. When this occurs, excess charge overflows into adjacent CCD
  photodiode wells, resulting in a corrupted image near the blooming site. The
  preferred blooming direction is the read-out direction of the CCD sensor.

CCD cameras (read-out architectures)

• Frame-transfer CCD: uses a two-part sensor in which one half of the parallel
  array is used as a storage region and is protected from light by a light-tight
  mask.
• Interline-transfer CCD: incorporates charge transfer channels called interline
  masks. These are immediately adjacent to each photodiode, so that the accumulated
  charge can be rapidly shifted into the channels after image acquisition has been
  completed. The very rapid image acquisition virtually eliminates image smear.
• Full-frame CCD: the accumulated charge must be shifted vertically row by row into
  the serial output register, and for each row the serial output register must be
  shifted horizontally to read out each individual pixel (progressive scan). A
  disadvantage of full frame is charge smearing, caused by light falling on the
  sensor while the accumulated charge signal is being transferred to the readout
  register. A mechanical shutter is used to avoid smearing.
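The blooming behaviour described above can be illustrated with a toy model (a one-dimensional row of wells with a hypothetical capacity; purely illustrative, not a sensor simulation):

```python
# Toy model of CCD blooming: each well holds at most `capacity` electrons; excess
# charge spills into the next well along the (assumed) read-out direction.
def expose(intensities, capacity=100):
    """Accumulate charge per well; overflow cascades into following wells."""
    wells = [0] * len(intensities)
    for i, q in enumerate(intensities):
        wells[i] += q
        j = i
        while wells[j] > capacity and j + 1 < len(wells):
            overflow = wells[j] - capacity
            wells[j] = capacity
            wells[j + 1] += overflow
            j += 1
        wells[j] = min(wells[j], capacity)  # charge lost at the array edge
    return wells

# One overexposed well (250 > capacity) corrupts its neighbours down-stream.
print(expose([40, 250, 30, 10]))  # [40, 100, 100, 90]
```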
Image digitization

• To create a digitized image f(n,m), we need to convert the continuous sensed data
  f(x,y) into digital form.
• Digitizing the coordinate values is called sampling (performed by the CCD chip).
• Digitizing the amplitude values is called quantization.

Sampling / Resolution

[Figures: the same image at successively coarser sampling / resolution]
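A minimal sketch of the two digitization steps on a 1-D toy signal (the gray range and values are illustrative assumptions):

```python
# Sampling picks every k-th sensed value (coordinate digitization); quantization
# maps amplitudes in [0, v_max] onto a given number of discrete gray levels.
def sample(signal, step):
    """Keep every `step`-th value (spatial sampling)."""
    return signal[::step]

def quantize(values, levels, v_max=255):
    """Snap each amplitude to the nearest of `levels` evenly spaced gray levels."""
    q = v_max / (levels - 1)
    return [round(round(v / q) * q) for v in values]

signal = [0, 40, 80, 120, 160, 200, 240, 255]
print(sample(signal, 2))           # [0, 80, 160, 240]
print(quantize(signal, levels=2))  # [0, 0, 0, 0, 255, 255, 255, 255]
```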
Resolution

• Considering the resolution, one has to distinguish the sensor resolution
  Rcamera [pixel] and the spatial resolution Rspatial [mm/pixel] at the object,
  which are related by:

    Rcamera = G' / Rspatial,

  where G' is the field of view [mm]. It consists of the object size G, a tolerance
  in the object position t and a safety value sf (usually 1.2).
• The spatial resolution is the size of the smallest resolvable feature s [mm]
  divided by the number of pixels npx [px] that it occupies in the image:

    Rspatial = s / npx.

• With pixel-accurate calculations npx = 1px; applying sub-pixel methods (using
  interpolation, area moments, …) results in npx = 1/sub-pixel-factor. The
  following table shows typical values for various applications.
  application         method                             sub-pixel-factor      sub-pixel-factor
                                                         (ideal conditions)    (in practice)
  camera calibration  determining the center of gravity  10-20                 4-8
  object measurement  touching edges                     6-10                  2-6
  segmentation        binary-large-object analysis       1                     0.3
  pattern matching    gray-value correlation             6-10                  2-6

Realizable sub-pixel factors, sorted by method [1]

Resolution: Example 1
For the given stamping part, the marked distance between two borehole midpoints
shall be measured.

given values:
  G = 60mm (object width)
  t = 10mm (max. positioning error)
  sf = 20mm (safety)
  s = 0.02mm (measuring accuracy)
  method: determination of the circles’ centers of gravity,
  sub-pixel-factor (table): 4, npx = 1px/4 = 0.25px
sought value:
  Rcamera (camera resolution)
solution:
  G' = G + t + sf = 60mm + 10mm + 20mm = 90mm
  Rspatial = s / npx = 0.02mm / 0.25px = 80µm/px
  Rcamera = G' / Rspatial = 90mm / (80µm/px) = 1125px

A standard resolution that is available is 1280x960px².
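Example 1 can be sketched as (symbols as on the slide; the helper function itself is illustrative):

```python
# Camera resolution from field of view and spatial resolution:
#   G' = G + t + sf,  Rspatial = s / npx,  Rcamera = G' / Rspatial.
def camera_resolution(G, t, sf, s, subpixel_factor):
    """Return (field of view G' [mm], Rspatial [mm/px], Rcamera [px])."""
    G_prime = G + t + sf              # field of view [mm]
    n_px = 1.0 / subpixel_factor      # pixels occupied by the smallest feature
    R_spatial = s / n_px              # [mm/px]
    R_camera = G_prime / R_spatial    # [px]
    return G_prime, R_spatial, R_camera

G_prime, R_spatial, R_camera = camera_resolution(G=60, t=10, sf=20,
                                                 s=0.02, subpixel_factor=4)
print(G_prime, R_spatial, round(R_camera))  # 90 0.08 1125
```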
Resolution: Example 2
In Example 1, an ideal objective with an arbitrary focal length f that realizes
G' = 90mm was assumed. Actually, a standard objective with focal length f* has to
be chosen, which leads to a different field of view G* according to the objective’s
reproduction scale:

  V* = B / G* = f* / (g − f*).

What is an appropriate objective for the example above?

additionally given values:
  g = 200mm (defined by the setup)
  B = 6.4mm (1/2” camera with a sensor resolution of 1280x960px²)
sought values:
  f, f* (focal length)
  s* (measuring accuracy)
solution:
  f = g · B / (G' + B) = 200 · 6.4mm² / (90mm + 6.4mm) = 13.27mm

To ensure that the object lies completely within the image, a focal length
f* = 12mm is chosen. Finally, one has to check whether the reached camera
resolution is still sufficient to provide an accuracy of s = 0.02mm:

  G* = B / V* = B · (g − f*) / f* = 6.4 · (200 − 12) / 12 = 100.26mm

  s* = G* · npx / Rcamera = 100.26 · 0.25 / 1280 = 0.0195mm

Since s* ≤ s = 0.02mm, the accuracy requirement is met.
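Example 2 can be checked numerically (the slide’s printed intermediate values are slightly truncated, so the last digits differ marginally):

```python
# Verify Example 2: ideal focal length, actual field of view, reached accuracy.
def ideal_focal_length(g, B, G_prime):
    """f = g*B / (G' + B)."""
    return g * B / (G_prime + B)

def field_of_view(B, g, f_star):
    """G* = B*(g - f*)/f*, from V* = B/G* = f*/(g - f*)."""
    return B * (g - f_star) / f_star

def reached_accuracy(G_star, n_px, R_camera):
    """s* = G* * npx / Rcamera."""
    return G_star * n_px / R_camera

f = ideal_focal_length(g=200, B=6.4, G_prime=90)     # ~13.28mm
G_star = field_of_view(B=6.4, g=200, f_star=12)      # ~100.27mm
s_star = reached_accuracy(G_star, n_px=0.25, R_camera=1280)
print(round(f, 2), round(G_star, 2), round(s_star, 4))
print(s_star <= 0.02)  # accuracy requirement met
```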
Quantization

The upper left image has 256 gray levels. In each following image the quantization
resolution is halved.
Literature

[1] Azad, P., Gockel, T., and Dillmann, R. (2007): Computer Vision – Das Praxisbuch.
[2] Sonka, M., Hlavac, V., and Boyle, R. (2008): Image Processing, Analysis, and
    Machine Vision.