A New Linear Calibration Method for Paracatadioptric Cameras
Bertrand VANDEPORTAELE, Michel CATTOEN, Philippe MARTHON, Pierre GURDJOS
IRIT and LEN7, ENSEEIHT, 2 rue Camichel, 31071 TOULOUSE, FRANCE
bvdp@enseeiht.fr
Abstract
We propose a new calibration method for paracatadioptric cameras using one image of at least three observed lines. It builds on the method of Geyer & Daniilidis [2] but has two main advantages. First, a geometric distance is used to compute the camera parameters instead of an algebraic distance. Second, it allows us to deal with lines that are projected to straight lines or to circular arcs in a unified manner. We provide a geometric interpretation of the algorithm: the line images are first projected to a virtual paraboloid, then planes are fitted to these projections, and their intersection finally provides the camera parameters. Thanks to this new formulation, the method is also able to deal very efficiently with outliers. We compare results with the existing methods from Geyer & Daniilidis [2] and from Barreto & Araujo [3].
1. Introduction
A paracatadioptric camera consists of a paraboloidal mirror and a telecentric lens which performs an orthographic projection onto an imaging sensor. It has a single viewpoint. Paracatadioptric cameras are widely used because of their panoramic field of view. Compared with other central catadioptric cameras using hyperboloidal mirrors, it is simpler to align their optical and reflective parts. The book [6] gives more information about this kind of camera.

In this paper, we propose a new linear method with a closed-form solution for the calibration of such cameras using the images of at least three lines. We assume that the line images are observed in a perfect image plane where skew is equal to 0 (the lines and columns of the image plane are orthogonal) and that pixels are square. As proposed by Geyer in [2], a preliminary rectification can be applied in order to obtain the line images in such a plane. This rectification only depends on the CCD sensor and is not affected by translations of the mirror or changes of focal distance.

Under these conditions, there is a simple linear formulation of the problem which allows us to estimate the camera parameters (the combined focal length and the 2D position of the mirror). This formulation unifies linear and circular line images, and thus provides better accuracy than the methods using only circles in the case of line images which tend to be straight lines. Linear line images are common, for example when the axis of symmetry of the camera's mirror is vertical in a building interior containing many vertical lines.

We first present existing calibration methods and recall the geometry of the camera. We then propose our calibration method and show how to deal efficiently with robustness. We finally give comparative results with the methods from Barreto [3] and Geyer [2].
2 Paracatadioptric camera calibration
In this paper, we compare our method with Geyer's and Barreto's, because they have a closed-form solution based on one image of some unconstrained scene lines.

Geyer & Daniilidis proposed in [2] a linear calibration method based on a constraint on the circular line images. After rectification of the image to obtain zero skew and square pixels, they compute the intersection of spheres whose equators are the line images, using an algebraic distance. The intersection point corresponds to the calibration parameters.

Barreto & Araujo's algorithm, presented in [3], computes a closed-form solution in two steps. Considering circle pairs, the principal point is first estimated as the point minimizing the distances to a set of lines. Each of these lines is spanned by the two common points of one circle pair, other than the circular points. Then, the image of the absolute conic is fitted to the images of the circular points of all planes, whose vanishing lines are the polars of the principal point with respect to the different circles. The sought camera parameters are directly encoded by the elements of the image of the absolute conic.

Here is a brief review of some other calibration methods: Geyer previously proposed in [11] a calibration method based on 2 sets of parallel lines. Ying & Zha propose a robust method to estimate the central catadioptric
0-7695-2521-0/06/$20.00 (c) 2006 IEEE
camera's parameters in [7], using a two-step Hough transform. Vasseur & Mouaddib propose in [5] a method based on iterative non-linear optimization. Aliaga, in [8], proposes a non-central camera model calibration based on many images of a known calibration pattern. Kang proposes in [9] a self-calibration algorithm based on the tracking of scene points. Sturm & Ramalingam propose a generic concept for camera calibration in [10].
3 The paracatadioptric projection of lines
Let $R$ be an orthonormal coordinate system expressed in pixel units. The origin and the axes $x$ and $y$ of $R$ are aligned with the columns ($u$) and lines ($v$) of the perfect image plane. Let $Q_r$ be the paraboloid modelling the shape of the mirror. Its revolution axis $z$ is perpendicular to the image plane. The position of its focus $V$ (which is the viewpoint) is located at $(u_0, v_0)$ in the image plane. Let $h$ be the combined focal. $Q_r$ is given by Eq. 1.

$$Q_r:\quad z = -h + \frac{(x - u_0)^2 + (y - v_0)^2}{4h} \qquad (1)$$
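As a concrete illustration of this model, the projection through $Q_r$ can be simulated numerically: a scene point defines a ray through $V$, the ray is intersected with the paraboloid of Eq. 1, and the intersection is projected orthographically along $z$. The sketch below is our own minimal implementation, not the paper's code (function names are ours); for a unit direction $(d_x, d_y, d_z)$ from $V$, solving the ray/paraboloid intersection in closed form gives the image point $(u_0, v_0) + \frac{2h}{1 - d_z}\,(d_x, d_y)$.

```python
import numpy as np

def project_paracatadioptric(P, u0, v0, h):
    """Project 3D points through the mirror Q_r of Eq. 1.

    Points are given in a frame whose axes are aligned with the image
    and whose origin is chosen so that the viewpoint V is (u0, v0, 0).
    Each point defines a ray through V; the ray is intersected with the
    paraboloid and the intersection is dropped orthographically along z.
    """
    d = P - np.array([u0, v0, 0.0])               # ray directions through V
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    t = 2.0 * h / (1.0 - d[:, 2])                 # distance to the mirror
    return np.array([u0, v0]) + t[:, None] * d[:, :2]

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three 2D points."""
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p1 - c)
```

Projecting several points of a 3D line and fitting the circle through three of their images lets one check Eq. 3 numerically: $r_c^2 - (u_c - u_0)^2 - (v_c - v_0)^2$ comes out as $4h^2$.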
[Figure 1: diagram showing the perfect image plane $R$ with axes $x$, $y$, $z$ and point $(u_0, v_0)$, the paraboloid $Q_r$ with viewpoint $V$, the space lines $D_1$, $D_2$, $D_3$ with their images $d_1$, $d_2$, $d_3$, and the planes $\pi_1$, $\pi_2$.]
Figure 1. Paracatadioptric line projections.
Let $D_i$ be a line in space. Let the plane $\pi_i$ be defined by the line $D_i$ in space and $V$. The image of the line $D_i$ is the orthographic projection of the intersection of $\pi_i$ with $Q_r$ onto the image plane. In general, it is a circle defined by its center $(cu_i, cv_i)$ and its radius $r_i$ ($D_1$ is projected on $d_1$ in Fig. 1). However, the lines which are coplanar with $z$ are projected to straight lines in the image plane ($D_2$ is projected on $d_2$ in Fig. 1). The lines which contain $V$ are projected to a single point ($D_3$ is projected on $d_3$ in Fig. 1).

The planes $\pi_i$ intersect the horizon of the fronto-parallel plane ($z = 0$) in lines passing through $V$, and $Q_r$ intersects the same plane in a circle of radius $2h$. Thus, line images intersect this circle antipodally. Let a circular line image be parameterized by its center $(u_c, v_c)$ and its radius $r_c$. Geyer showed in [2] that the line image defined by Eq. 2 must satisfy Eq. 3. For a line (given by Eq. 4) to intersect a circle antipodally, it has to pass through the circle's center. So linear line images pass through $(u_0, v_0)$ (Eq. 5).

$$(x - u_c)^2 + (y - v_c)^2 = r_c^2 \qquad (2)$$

$$4h^2 = R^2 - D^2 = r_c^2 - (u_c - u_0)^2 - (v_c - v_0)^2 \qquad (3)$$

$$kx + ly + m = 0 \qquad (4)$$

$$ku_0 + lv_0 + m = 0 \qquad (5)$$
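The antipodal-intersection property behind Eq. 3 can also be checked numerically: a line-image circle satisfying Eq. 3 has its squared radius equal to $4h^2$ plus the squared distance between centers, so its radical line with the circle of radius $2h$ centered at $(u_0, v_0)$ passes through $(u_0, v_0)$ itself, and the two circles meet in diametrically opposite points. A short sketch (our own helper, using the standard two-circle intersection construction; it assumes the circles do intersect):

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two intersecting circles in 2D."""
    delta = c2 - c1
    d = np.linalg.norm(delta)
    a = (d**2 + r1**2 - r2**2) / (2.0 * d)  # signed distance c1 -> radical line
    e = delta / d
    e_perp = np.array([-e[1], e[0]])
    m = c1 + a * e                          # midpoint of the two intersections
    s = np.sqrt(r1**2 - a**2)
    return m + s * e_perp, m - s * e_perp
```

For any circle whose radius obeys Eq. 3, the quantity `a` evaluates to 0, so the two intersection points are $c_1 \pm 2h\,e_\perp$: antipodal on the horizon circle.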
4 The new calibration method
Let $Q_v$ be a virtual paraboloid defined by the canonical Eq. 6. We propose to back-project the line images to $Q_v$ by orthographic projection along the axis $z$. For circular line images, developing Eq. 2 and replacing $x^2 + y^2$ by $z$ from Eq. 6 leads to the equation of a plane (Eq. 7), whose parameters are given by Eq. 8. For linear line images, projecting the line defined by Eq. 4 to $Q_v$ also leads to the equation of a plane, whose coefficients are given by Eq. 9.

$$Q_v:\quad z = x^2 + y^2 \qquad (6)$$

$$(a\ b\ c\ d)\,(x\ y\ z\ 1)^T = 0 \qquad (7)$$

$$(a\ b\ c\ d) = \left(-2u_c\ \ -2v_c\ \ 1\ \ u_c^2 + v_c^2 - r_c^2\right) \qquad (8)$$

$$(a\ b\ c\ d) = (k\ \ l\ \ 0\ \ m) \qquad (9)$$

So each line image is projected to a curve on the surface of $Q_v$ which lies in a plane. Instead of detecting lines and circles in the image plane, we propose to project their points to $Q_v$ and then to fit a plane on each set of projected points. With this detection method, linear and circular line images are unified, as they are detected in the same way. To avoid bias in the estimation of the circles and lines in the image plane, it is necessary to weight the image points. Let $r_j^2 = x_j^2 + y_j^2$, $(x_j, y_j)$ being a point in the image plane. As $Q_v$ is defined by $z = r_j^2$, its derivative at $(x_j, y_j)$ along the radius is equal to $2r_j$, which is proportional to $r_j$. Thus, the residuals $e_j$ are weighted by $r_j$ to estimate the plane parameters $\widehat{(a\,b\,c\,d)}$, as shown in Eq. 10, whose solution is obtained by SVD.

$$\widehat{(a\,b\,c\,d)} = \arg\min_{(a\,b\,c\,d)} \sum_j (r_j\, e_j)^2 \qquad (10)$$

Let us now see how the constraint for line images is formulated for these planes. The circle parameters $(u_c, v_c, r_c)$ corresponding to a plane $(a\ b\ c\ d)$ are given by Eq. 11. Replacing $u_c$, $v_c$ and $r_c$ in Eq. 3 leads to Eq. 12. A very interesting property is that this equation is also valid for linear line images, $c$ being equal to $0$ (merging Eq. 9 and Eq. 5). All the planes intersect in a single point, located at $(u_0,\ v_0,\ u_0^2 + v_0^2 + 4h^2)$. The intersection corresponding to three line images is obtained by computing $x = N^{-1}D$ from the system of linear equations of Eq. 13, where $N$ and $D$ are concatenations of the plane parameters.

In the case of an over-constrained system, when more than 3 line images are available, it is necessary to normalize the plane equations such that $\|(a\ b\ c)\| = 1$. The $x$ which minimizes the sum of squared geometric distances to the different planes is then computed by Singular Value Decomposition of $(N\ \ -D)$.

$$r_c^2 = \frac{a^2 + b^2 - 4cd}{4c^2}, \quad u_c = \frac{-a}{2c}, \quad v_c = \frac{-b}{2c} \qquad (11)$$

$$(a\ b\ c)\left(-u_0\ \ -v_0\ \ -u_0^2 - v_0^2 - 4h^2\right)^T = d \qquad (12)$$

$$x = \left(-u_0\ \ -v_0\ \ -u_0^2 - v_0^2 - 4h^2\right)^T, \quad Nx = D \qquad (13)$$

Using our new formulation and the normalized planes, we are able to minimize linearly a geometric error when we compute the intersection of the planes. In contrast, because of its sphere-intersection formulation, the distance minimized by Geyer is algebraic (it is in fact the power of a point with respect to a sphere instead of the distance to it), and thus introduces some bias [12]. Moreover, the linear line images are a degenerate case in Geyer's method, as they correspond to spheres with infinite radius.

To guarantee the uniqueness of the solution of Eq. 13, the different rows of $N$ must be linearly independent, i.e. $\det(N) \neq 0$. Each row of $N$ corresponds to the normal of a plane, so the 3 normals must not lie in a single plane. Going back to the circular line image equations, this means that the centers of the 3 circles must not be aligned. It also means that calibration cannot be achieved using 3 linear line images, as the corresponding planes all have their normals in the $z = 0$ plane. These cases generally correspond to parallel lines in space.

The proposed calibration algorithm is well suited to robust estimation, as we show in [14]. The potential line images are detected using robust fitting of planes on the contour points projected to $Q_v$. This allows the different lines and circles present in the image to be detected. As seen previously, the fitted planes corresponding to line images should intersect in a point. A RANSAC scheme is applied to check triplets of planes and find the inliers. Each row of the system of equations is multiplied by the number of pixels of the corresponding line image, to give more weight to the line images having a large support in the image. Because of the specificity of the proposed method, the computation times are much lower than those of the general method of Ying & Zha [7].
5 Results
We have developed a benchmark for Matlab allowing us to test the proposed method and compare it with those from Geyer and from Barreto. Various parameters can be set up. The sensor is parameterized by the width and height of the image, the combined focal $h$ and the image center $(u_0, v_0)$. The number of line images, the number of points on each line image, and the number of times the calibration is run for each noise level can be modified. Line images can be synthesized in two different ways. In the uniform case (example in Figure 2.a), uniformly distributed normals of the planes $\pi_i$ are randomly selected for the line images, while in the non-uniform case a bimodal repartition is obtained by selecting a normally distributed repartition of the normals around the vertical and horizontal axes, half the normals belonging to each type, in order to obtain data more similar to the real calibration image of Figure 2.b. Some random points are then sampled on an angular portion $\theta$ of the image using a uniform distribution and are contaminated with Gaussian noise of varying standard deviation. The point coordinates can be normalized for all the presented algorithms (zero mean and $\sqrt{2}$ standard deviation). This especially improves the results of our algorithm (it tends to center the virtual paraboloid and to bring the planes from which the calibration parameters are computed closer to the real ones), while not changing Geyer's and Barreto's results much. Either Geyer's circle detector or our own can be used for Barreto's calibration method.

[Figure 2: (a) a synthesized image with h = 120, u0 = 320, v0 = 240, 640×480 pixels, 6 circles, 20 points per circle, uniform repartition; (b) a real calibration image.]
Figure 2. (a) Example of line images used for simulation with a uniform repartition. (b) A non-uniform repartition in a real calibration.

We present in Figure 3 some results of the same kind as in [2] and [3], the field of view being 180° ($h = 120$, $4h$ = height of the image). Due to limited space, we show only a few comparative results which we think are representative. The parameters for all the following tests are: 500 calibrations for each noise level, from 0.5 to 6 pixels of standard deviation; the calibrations are performed on sets of 6 line images, each containing 10 points, with $\theta = 180°$; normalization is applied to the set of points; Barreto's method uses Geyer's circle detector.

The first (resp. second) row of Figure 3 shows the RMS error for the focal (resp. the image center $(u_0, v_0)$). Our method is denoted "proposed" on these graphs. The first column shows the RMS error as a function of the Gaussian noise standard deviation in the case of a uniform repartition of the normals (UR). The second column corresponds to a non-uniform repartition (NUR).

[Figure 3: focal and center RMS errors (pixels) versus noise standard deviation for UR and NUR, versus the number of lines, and versus the focal length.]
Figure 3. RMS errors for different tests (see the text for details).

Erroneous calibration results are automatically discarded and thus do not affect the RMS curves. In this experimental case, our method outperforms the two others, especially in the non-uniform case. The third column shows the RMS error for our method as a function of the number of lines used for the calibration (3, 5 and 20), using (UR). The last column shows the RMS error for the focal and the image center as a function of the combined focal for our method, using (UR) and one pixel std. dev. Gaussian noise. When only 3 lines are used for the calibration, all the methods provide approximately the same results.

Using the real image of Figure 2.b, our method outperforms the others, as the distances between the reconstructed line images (whose planes $\pi_i$ pass through $V$) and the image points are smaller. This tendency has been confirmed on more than 30 images using different focal lengths and image center positions.
6. Conclusions
We have presented a new calibration method for paracatadioptric cameras which provides accurate results and has a closed-form solution whose computation is simple. We have compared it with the two main other similar methods and have demonstrated the improvement in accuracy. We provide the benchmark code [13] to allow readers to run their own tests in various other conditions. This method has been successfully used in a robust algorithm to detect paracatadioptric line images very efficiently using an uncalibrated camera [14].
References
[1] C. Geyer and K. Daniilidis, "Catadioptric Projective Geometry", IJCV, 45(3), pp. 223-243, 2001.
[2] C. Geyer and K. Daniilidis, "Paracatadioptric Camera Calibration", PAMI, vol. 24, no. 5, pp. 687-695, 2002.
[3] J. P. Barreto and H. Araujo, "Paracatadioptric Camera Calibration Using Lines", ICCV 2003, Nice, France, October 2003.
[4] J. P. Barreto, "General central projection systems, modeling, calibration and visual servoing", PhD Thesis, October 2003.
[5] P. Vasseur and E. M. Mouaddib, "Central Catadioptric Line Detection", BMVC, Kingston, September 2004.
[6] R. Benosman and S. B. Kang, Panoramic Vision, Springer, 2001.
[7] X. Ying and H. Zha, "Simultaneously Calibrating Catadioptric Camera and Detecting Line Features Using Hough Transform", IROS 2005, Alberta, Canada, August 2005.
[8] D. Aliaga, "Accurate Catadioptric Calibration for Real-time Pose Estimation of Room-size Environments", CVPR, pp. 127-134, 2001.
[9] S. B. Kang, "Catadioptric self-calibration", CVPR, pp. 201-207, 2000.
[10] P. Sturm and S. Ramalingam, "A Generic Concept for Camera Calibration", ECCV, Prague, Czech Republic, vol. 2, pp. 1-13, May 2004.
[11] C. Geyer and K. Daniilidis, "Catadioptric camera calibration", ICCV 1999, Greece, pp. 398-404, 1999.
[12] Z. Zhang, "Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting", Technical Report No. 2676, October 1995.
[13] B. Vandeportaele and P. Gurdjos, "Paracatadioptric camera calibration benchmark for Matlab", http://www.enseeiht.fr/~bvdp/calib, 2006.
[14] B. Vandeportaele, M. Cattoen and P. Marthon, "A Fast Detector of Line Images Acquired by an Uncalibrated Paracatadioptric Camera", Proc. 18th International Conference on Pattern Recognition, Hong Kong, 2006.
