Camera calibration methods are commonly evaluated using cumulative reprojection error metrics on disparate one-dimensional datasets. To evaluate the calibration of cameras in two-dimensional arrays, assessments must be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods was evaluated on the camera parameters that affect view projection the most.

As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion match the intrinsic and extrinsic parameter estimation accuracy of a standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.
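For readers unfamiliar with the evaluation metric mentioned above, the following is a minimal sketch of how a per-view mean reprojection error can be computed for a pinhole camera model. The function names and the synthetic data are illustrative assumptions, not part of the dataset or the evaluated methods:

```python
# Hypothetical sketch: mean reprojection error for a pinhole camera.
# K (intrinsics), R (rotation), t (translation) follow standard notation;
# the numeric values below are synthetic, for illustration only.
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 world points into the image with intrinsics K and pose (R, t)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # camera -> homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

def mean_reprojection_error(points_3d, observed_2d, K, R, t):
    """Mean Euclidean distance (in pixels) between observed and reprojected points."""
    reprojected = project(points_3d, K, R, t)
    return float(np.mean(np.linalg.norm(reprojected - observed_2d, axis=1)))

# Sanity check: reprojecting with the true parameters gives ~zero error.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
pts = np.random.default_rng(0).uniform(-1.0, 1.0, (20, 3))
obs = project(pts, K, R, t)
err = mean_reprojection_error(pts, obs, K, R, t)
```

In an actual evaluation, `observed_2d` would come from detected checkerboard corners and `(K, R, t)` from the calibration method under test.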

About Dataset

The main dataset contains images of 17 checkerboard configurations taken from 15 coplanar viewpoints, plus two additional "checkerboard-less" scene image sets. The intended use of the dataset is to facilitate evaluation and research of multi-camera calibration methods.

Capture was performed with three Canon EOS M cameras mounted in a rigid vertical stack on a calibrated dolly. For each scene configuration, the camera triplet was translated horizontally to 5 pre-set positions, taking images at each position, yielding 15 viewpoints in total. Position measurement data is included in the dataset archive.
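The capture geometry above can be thought of as a 3×5 viewpoint grid. The following sketch illustrates this indexing; the naming scheme is an assumption for illustration, not the dataset's actual file layout:

```python
# Hypothetical sketch: indexing the 15 viewpoints as a grid of
# 3 vertically stacked cameras x 5 horizontal dolly positions.
# The (camera, position) labels are assumed, not the dataset's own naming.
NUM_CAMERAS = 3      # rigid vertical stack
NUM_POSITIONS = 5    # pre-set horizontal dolly stops

viewpoints = [(cam, pos)
              for cam in range(NUM_CAMERAS)
              for pos in range(NUM_POSITIONS)]
```

Each `(cam, pos)` pair identifies one of the 15 coplanar viewpoints; the measured dolly positions included in the archive supply the horizontal spacing.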

Licence

This dataset may be used for academic and research purposes. If you use this dataset, please cite: