The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B5. Beijing 2008
scan affect the accuracy of the computed orientation parameters. The method is useful when a transformation of laser scans is required.
There are other kinds of methods that compute transformation parameters for adjacent, overlapping laser scan data, depending on the survey instrument configuration, the object surface, and the measurement area. In particular, image-based registration methods are used frequently (Dold and Brenner, 2006). For laser scanning while moving, terrestrial laser and GPS/IMU sensors have been integrated (Talaya, 2004); in this system, all scanning data are oriented directly and correctly in the reference frame. An independent-model triangulation method combines laser scanner data in a way similar to aerial triangulation (Scaioni, 2002). In addition, many other methods exist in the literature for 3D modelling and for combining laser scanning data in a reference coordinate system (Scaioni, 2005; Zhi and Tang, 2002).
In this study, the combination of laser scanner data by means of one image was investigated. An image covering the adjacent laser scan areas was taken, and registration was performed with points selected on both the image and the scans. This article is organised as follows: our approach is explained in section 2, experiments in section 3, and conclusion and future work in section 4, followed by the acknowledgement.
2. OUR WORK
In our work, the registration of adjacent scans into a common coordinate system by means of one image was investigated. The image was taken so that it covers both adjacent scans, and the scans were registered using conjugate points selected on the scans and on the image. The steps of our approach are as follows:
• The projection center coordinates and rotation angles of the image relative to scan 1 (S1) and scan 2 (S2) are calculated.
• The rotation parameters are applied to each scan (S1 and S2), so that the coordinate axes of each scan become parallel to the image coordinate axes.
• The translation vector between the scans is calculated as the difference of the projection center coordinates obtained for each scan.
These steps are explained in the following subsections.
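The steps above can be sketched in code. This is a minimal sketch, assuming the orientation of the image relative to each scan (rotation matrix R and projection center C) has already been estimated as described in the following subsections; the function name and interface are illustrative, not taken from the original system:

```python
import numpy as np

def register_scans(points_s1, points_s2, R1, C1, R2, C2):
    """Register scan 2 into the (rotated) frame of scan 1.

    points_s1, points_s2 : (N, 3) arrays of scan points as row vectors.
    R1, R2 : 3x3 rotation matrices of the image relative to each scan.
    C1, C2 : projection-center coordinates of the image in each scan's frame.

    Rotating each scan by its R makes both scans' axes parallel to the
    image axes; the translation between the scans is then the difference
    of the projection-center coordinates expressed in the rotated frames.
    """
    s1_rot = points_s1 @ R1.T      # scan 1 axes -> parallel to image axes
    s2_rot = points_s2 @ R2.T      # scan 2 axes -> parallel to image axes
    t = (R1 @ C1) - (R2 @ C2)      # translation vector between the scans
    return s1_rot, s2_rot + t
```

After this step both scans share a common coordinate system whose axes are parallel to the image coordinate axes.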
2.1. Camera location estimation
Camera location estimation is the problem of determining the position of a calibrated camera from 3D reference points and their images (Figure 1). It is also called space resection in the photogrammetry and computer vision communities. The main part of the problem is determining the distances from the image projection center to the object reference points. Solving the problem requires at least 3 conjugate points on the object and in the image. For any two of the three points, equation (1) can be written as below:
p_ij(x_i, x_j) = x_i^2 + x_j^2 + c_ij x_i x_j − d_ij^2 = 0    (1)
c_ij = −2 cos θ_ij
where x_i and x_j are the unknown distances from the i-th and j-th object points to the projection center. The other parameters in the equation are known: θ_ij is the angle between the i and j directions at the projection center C, and d_ij is obtained from the object coordinates of the points.
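As a sketch, the known coefficients of equation (1) can be computed from the calibrated image measurements and the object coordinates. The function name and interface here are illustrative assumptions, not from the original text:

```python
import numpy as np

def p3p_coefficients(img_i, img_j, obj_i, obj_j, c):
    """Compute c_ij = -2 cos(theta_ij) and d_ij for equation (1).

    img_i, img_j : image coordinates (x, y), reduced to the principal point.
    obj_i, obj_j : 3D object coordinates of the corresponding points.
    c            : focal length of the calibrated camera.
    """
    # Direction vectors from the projection center through the image points
    # (image plane at distance c in front of the center).
    u = np.array([img_i[0], img_i[1], -c], dtype=float)
    v = np.array([img_j[0], img_j[1], -c], dtype=float)
    cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    c_ij = -2.0 * cos_theta
    # d_ij is the known distance between the two object points.
    d_ij = np.linalg.norm(np.asarray(obj_i, float) - np.asarray(obj_j, float))
    return c_ij, d_ij
```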
For the pairs of the three points, three polynomials are written as below:

p_12(x_1, x_2) = x_1^2 + x_2^2 + c_12 x_1 x_2 − d_12^2
p_13(x_1, x_3) = x_1^2 + x_3^2 + c_13 x_1 x_3 − d_13^2    (2)
p_23(x_2, x_3) = x_2^2 + x_3^2 + c_23 x_2 x_3 − d_23^2
These equations are nonlinear; the Sylvester resultant method is applied to eliminate x_3 and x_2. Afterwards a quartic polynomial g(x_1) in x_1 is obtained:

g(x_1) = a_4 x_1^4 + a_3 x_1^3 + a_2 x_1^2 + a_1 x_1 + a_0 = 0    (3)
This equation is solved by the singular value decomposition (SVD) method (Zhi and Tang, 2002). The solution from three points is ambiguous; with four points there is no ambiguity. For four points, three g(x_1) functions are obtained, and SVD is applied to these functions using the Maple software. After x_1, x_2, x_3 and x_4 are obtained, the camera location (X_0, Y_0, Z_0) is calculated in the object coordinate system.
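The text solves equation (3) via SVD in Maple. As an alternative sketch (a substitute for, not a reproduction of, the authors' procedure), the quartic in x_1 can also be solved with a standard polynomial root finder, keeping only the physically meaningful roots:

```python
import numpy as np

def solve_quartic_distance(a4, a3, a2, a1, a0):
    """Solve a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0 for x_1.

    Only real, positive roots are kept, since x_1 is a distance from
    the projection center to an object point.
    """
    roots = np.roots([a4, a3, a2, a1, a0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(r for r in real if r > 0)
```

With three points several positive roots may survive, which is the ambiguity noted above; the fourth point is what singles out the correct solution.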
Figure 1. Image and object points.
2.2. Rotation angles between the image and the laser scanner data
The rotation angles between the image and each laser scanner data set are calculated by the collinearity equations (equation 4). The number of equations depends on how many points are selected on the image and on each scan.
x = x_0 − c · [r_11(X − X_0) + r_21(Y − Y_0) + r_31(Z − Z_0)] / [r_13(X − X_0) + r_23(Y − Y_0) + r_33(Z − Z_0)]
                                                                                                                    (4)
y = y_0 − c · [r_12(X − X_0) + r_22(Y − Y_0) + r_32(Z − Z_0)] / [r_13(X − X_0) + r_23(Y − Y_0) + r_33(Z − Z_0)]
where;
x, y : image point coordinates,
x_0, y_0 : image principal point coordinates,
c : focal length,
r : elements of the rotation matrix related to ω, φ, κ,
X, Y, Z : object system coordinates,
X_0, Y_0, Z_0 : coordinates of the projection center in the object system.
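A minimal sketch of equation (4) as it would be evaluated inside a least-squares adjustment. The term indices follow the numerators and denominators shown above; the ω, φ, κ rotation ordering is one common photogrammetric convention, assumed here since the paper does not spell out its exact ordering:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R(omega, phi, kappa) built as Rx @ Ry @ Rz
    (an assumed ordering; conventions vary between textbooks)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def collinearity(obj, params, x0=0.0, y0=0.0, c=50.0):
    """Project one object point (X, Y, Z) with equation (4).

    params = (omega, phi, kappa, X0, Y0, Z0); r[i-1, j-1] is r_ij.
    """
    omega, phi, kappa, X0, Y0, Z0 = params
    r = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = obj[0] - X0, obj[1] - Y0, obj[2] - Z0
    den = r[0, 2] * dX + r[1, 2] * dY + r[2, 2] * dZ          # r_13, r_23, r_33
    x = x0 - c * (r[0, 0] * dX + r[1, 0] * dY + r[2, 0] * dZ) / den
    y = y0 - c * (r[0, 1] * dX + r[1, 1] * dY + r[2, 1] * dZ) / den
    return x, y
```

In the adjustment, each selected point contributes one (x, y) pair of these equations, and the residuals against the measured image coordinates are minimized over (ω, φ, κ, X_0, Y_0, Z_0).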
Initial values of the unknowns (ω, φ, κ, X_0, Y_0, Z_0) in equation 4 are required. The initial values of X_0, Y_0, Z_0 are taken as the projection center coordinates computed in section 2.1. Initial values of the rotation