Figure 1: overview of the truck: (a) lidar and camera; (b) “Front” and “Sky” lidar sensors cones of view
Since we chose the truck (and more specifically the GPS antenna position) as the center of the truck coordinate system, all data have to be transferred into this specific frame before finally being expressed in the world coordinate system. We now briefly present the calibration methods used to place all devices in this coordinate system.
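In practice, this chaining of frames amounts to composing rigid transforms. The following minimal sketch (in Python; the frame names and numeric offsets are placeholder assumptions, not the actual calibration results) shows how a lidar point would be carried through the camera, truck (GPS antenna) and world frames:

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results (identity rotations, made-up offsets):
T_cam_from_lidar = rigid(np.eye(3), np.array([0.5, 0.0, 0.2]))     # lidar -> camera
T_truck_from_cam = rigid(np.eye(3), np.array([0.0, -1.2, 2.0]))    # camera -> truck (GPS antenna)
T_world_from_truck = rigid(np.eye(3), np.array([10.0, 5.0, 0.0]))  # truck -> world (from GPS)

# A lidar impact is expressed in world coordinates by chaining the transforms:
p_lidar = np.array([3.0, 0.1, 0.0, 1.0])   # homogeneous point in the lidar frame
p_world = T_world_from_truck @ T_truck_from_cam @ T_cam_from_lidar @ p_lidar
print(p_world[:3])   # -> [13.5, 3.9, 2.2]
```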
2.2 Camera Calibration
We present in this section a two-step camera calibration: first, the camera is roughly oriented so as to be aligned with the truck's main axis; then, it is finely calibrated using a dedicated Matlab toolbox.
2.2.1 Camera Rough Alignment We designed a calibration site presenting many parallel longitudinal and transversal lines, together with marks for positioning the vehicle wheels. A preliminary camera orientation is performed in order to align it with the truck's main axis: it consists of a dynamic process allowing a rough setting of the pitch, roll and yaw. Pitch is set so that the vehicle's hood is not seen. Roll is set to zero using the mean orientation of the transversal lines. Yaw is set so that the vanishing point of the longitudinal lines has a u-coordinate equal to that of the principal point, i.e. the projection of the camera center in the image (cf. figure 2).
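As an illustration of the yaw criterion, one can estimate the vanishing point of the longitudinal lines and compare its u-coordinate with that of the principal point. The sketch below is a loose interpretation, assuming the lines have already been detected and expressed as ax + by + c = 0; the coefficients and the principal point value are made up for the example:

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of image lines given as (a, b, c), ax + by + c = 0."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    rhs = np.array([-c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return vp   # (u, v) in pixels

# Made-up longitudinal line coefficients, as if detected in the image:
lines = [(0.020, 1.0, -400.0), (-0.015, 1.0, -380.0), (0.005, 1.0, -390.0)]
u_vp, v_vp = vanishing_point(lines)
u_pp = 320.0   # principal point u-coordinate (placeholder value)
print("residual to drive to zero while adjusting yaw:", u_vp - u_pp)
```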
Figure 2: configuration tool output image; instructions are displayed at the top left of the image
2.2.2 Fine Calibration We mainly focus here on extrinsic calibration, i.e. the camera position and orientation with respect to the truck. We used Jean-Yves Bouguet's Matlab camera calibration toolbox¹, which relies on Zhang's calibration algorithm (Zhang, 2000) and returns the calibration grid positions in the camera coordinate system (cf. figure 3).
Intrinsic parameters, though quite important for any image processing algorithm, are not critical in the road description process. Indeed, the goal of this first stage is to define the camera position and orientation in the GPS antenna coordinate system, which is done using our calibration site.
¹ http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
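For readers without Matlab, a comparable per-image extrinsic computation can be sketched with OpenCV: detecting the grid corners and solving for the grid pose in the camera frame. This is only an assumed equivalent of the toolbox's extrinsic output, not the authors' actual pipeline; the grid size, square size, intrinsics and file name are placeholders:

```python
import numpy as np
import cv2

# Assumed intrinsics (from a prior intrinsic calibration); all values are placeholders:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)   # distortion neglected for this sketch

# 3D grid corners in the grid's own frame (hypothetical 6x4 grid, 0.1 m squares):
objp = np.zeros((6 * 4, 3), np.float32)
objp[:, :2] = np.mgrid[0:6, 0:4].T.reshape(-1, 2) * 0.1

img = cv2.imread("grid_view.png")   # hypothetical calibration image
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (6, 4))
    if found:
        # Pose of the calibration grid in the camera coordinate system:
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        print("grid -> camera rotation:\n", R, "\ntranslation:", tvec.ravel())
```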
Figure 3: calibration grid positions (extrinsic parameters, camera-centered)

2.3 Lidar Calibration

In order to compute the lidar positions and orientations in the GPS coordinate system, we decided to determine their positions with respect to the camera, and then to transfer these positions using the camera extrinsic calibration results.

Different approaches for lidar calibration have been developed. Antone and Friedman implemented a method where only lidar range data are required, but which is based on the design of a specific calibration object (Antone and Friedman, 2007). They claim that registration to any camera can further be processed by applying a pattern on this object. (Mählisch et al., 2006) developed a calibration method for a multi-beam lidar sensor with respect to a camera, which has to be sensitive to the lidar's spectral emission band. Alignment is then performed while viewing a wall from different orientations, through a reprojection distance minimization. Huang presented an algorithm for multi-plane lidar calibration using geometric constraints on the calibration grid plane (Huang and Barth, 2008). We chose the approach of Zhang and Pless (Zhang and Pless, 2004), a two-step algorithm based on a geometric constraint relative to the normal of the calibration grid plane. This method uses a linear determination of the pose parameters, further refined by a non-linear optimization (generally performed with a Levenberg-Marquardt algorithm).

In our experiments, we use about 15 images in which the calibration grid is seen in both the lidar and camera views. The poses of the calibration grid with respect to the camera have previously been determined in section 2.2.2, and we manually select the grid area in the lidar scan (cf. figure 4). The Zhang and Pless two-pass algorithm is then run on the collected data for the three front lidar range sensors.

Figure 4: lidar data and corresponding image used for calibration: (a) the calibration grid is manually selected in the scan; (b) corresponding image
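To make the Zhang and Pless constraint concrete: a grid point P expressed in the lidar frame and mapped into the camera frame as RP + t must lie on the grid plane (n, d) estimated by the camera calibration, i.e. n·(RP + t) = d. The sketch below simulates such data and runs only the non-linear (Levenberg-Marquardt) refinement stage; the linear initialization of the actual two-step method is replaced here by a zero initial guess, which is an assumption of the sketch:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Hypothetical ground-truth lidar -> camera transform, used only to simulate data:
R_true = Rotation.from_rotvec([0.05, -0.1, 0.02]).as_matrix()
t_true = np.array([0.4, -0.1, 0.3])

# Simulate 15 grid poses: each yields the grid plane (unit normal n, offset d)
# in camera coordinates plus lidar points lying on that plane.
poses = []
for _ in range(15):
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    d = 2.0 + rng.uniform(0.0, 2.0)
    u = np.cross(n, [1.0, 0.0, 0.0]); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    ab = rng.uniform(-0.5, 0.5, (20, 2))
    pts_cam = d * n + ab @ np.stack([u, v])   # points on the grid plane
    pts_lidar = (pts_cam - t_true) @ R_true   # same points in the lidar frame
    poses.append((n, d, pts_lidar))

def residuals(x):
    # Plane constraint: a lidar point P mapped to the camera frame as R P + t
    # must satisfy n . (R P + t) = d for the pose it was observed in.
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    return np.concatenate([(pts @ R.T + t) @ n - d for n, d, pts in poses])

# Non-linear (Levenberg-Marquardt) refinement; the real method seeds this with
# a linear estimate, replaced here by a zero initial guess:
sol = least_squares(residuals, np.zeros(6), method="lm")
print("recovered t:", sol.x[3:], "true t:", t_true)
```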
As an intermediate result of this stage, we can reproject the lidar impacts on the grid, as can be seen in figure 5.
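Such a reprojection can be obtained by projecting the selected lidar impacts through the estimated lidar-to-camera transform and the camera intrinsics, for instance with OpenCV's projectPoints; all numeric values and the image name below are placeholders:

```python
import numpy as np
import cv2

# Placeholder calibration outputs: lidar -> camera pose and camera intrinsics:
rvec = np.array([0.05, -0.1, 0.02])   # Rodrigues rotation, lidar -> camera
tvec = np.array([0.4, -0.1, 0.3])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Lidar impacts selected on the grid, in the lidar frame (placeholder values):
pts_lidar = np.array([[-0.3, 0.0, 2.0], [0.0, 0.0, 2.1], [0.3, 0.0, 2.2]])

uv, _ = cv2.projectPoints(pts_lidar, rvec, tvec, K, np.zeros(5))
img = cv2.imread("grid_view.png")   # hypothetical calibration image
if img is not None:
    for u, v in uv.reshape(-1, 2):
        cv2.circle(img, (int(u), int(v)), 3, (0, 0, 255), -1)   # mark each impact in red
    cv2.imwrite("grid_reprojection.png", img)
```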
The final step in the calibration scheme consists in placing all calibrated lidars in the GPS antenna coordinate system, in order to have all sensors in the same reference frame. As these sensors are 2D lidars, the vehicle displacement provides a full scan of the surrounding 3D world. Using the RT Maps acquisition platform, all data