The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B4. Beijing 2008
the line. The constraint in Equation 4 indicates that these three
vectors are coplanar, and can be introduced for all the
intermediate points along image space linear features.
(V1 × V2)^T · V3 = 0        (4)
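Assuming V1 and V2 connect the perspective centre to the two points defining the line and V3 is the image-space vector to an intermediate point (consistent with the text above, but the vector roles are an interpretation), the coplanarity condition in Equation 4 can be sketched numerically:

```python
import numpy as np

# Hypothetical vectors in a common coordinate system (illustrative values).
# V1, V2: vectors from the perspective centre to the line's defining points.
V1 = np.array([1.0, 0.0, -1.0])
V2 = np.array([0.0, 1.0, -1.0])

# V3: vector to an intermediate point, here constructed inside the plane
# spanned by V1 and V2 so the constraint is satisfied.
V3 = 0.3 * V1 + 0.7 * V2

# Equation 4: the scalar triple product of coplanar vectors vanishes.
residual = np.dot(np.cross(V1, V2), V3)
print(abs(residual) < 1e-9)  # True: the three vectors are coplanar
```

In a bundle adjustment, this residual is the condition equation written for every intermediate point measured along the image-space line.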
Point-based Incorporation of Linear Features
Another technique is presented here for the incorporation of
linear features for photogrammetric georeferencing. This
technique uses a point-based approach in which a line is defined
in image space by selecting any two points along it in the
overlapping imagery (Figure 4). Then, the corresponding line
is extracted from LiDAR data using the procedure described in
section 2.2.1, in which the extracted LiDAR-derived line is also
represented by two points. One should note that none of the
endpoints, whether in image space or object space, are required
to be conjugate points (Aldelgawy et al., 2008). The only
requirement is that the selected points should be along the same
line. This approach is based on restricting the weight matrix of
the points in the line direction. Consequently, the behaviour of
these points will be fixed in all directions except for the line
direction. This means that the points are free to move only
along the line, which is considered as a constraint. The
collinearity equations are used as the mathematical model.
Figure 4: Point-based incorporation of linear features.
In this work, the weight restriction is performed in the image
space, using a 2×2 weight matrix in which the weights of the
points along the linear features are set to zero. For this
procedure, a minimum of two non-coplanar line segments is
needed (Habib, 2004). Having outlined the methodologies for
the various georeferencing techniques, the remainder of the
paper will focus on experimental results and analysis of the
different methods.
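The weight-restriction idea can be sketched as follows (the rotation-based construction is an assumption for illustration, not a detail taken from the paper): rotate the image-space coordinates so that one axis aligns with the line direction, assign zero weight along that axis, and rotate back.

```python
import numpy as np

def restricted_weight_matrix(theta, w=1.0):
    """2x2 image-space weight matrix with zero weight along a line
    whose direction makes angle `theta` with the image x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])   # rotates the line direction onto the x-axis
    W_local = np.diag([0.0, w])       # zero weight along the line, w across it
    return R.T @ W_local @ R          # express the weights in the image frame

# A point on a line inclined 30 degrees to the image x-axis.
theta = np.deg2rad(30.0)
P = restricted_weight_matrix(theta)
d = np.array([np.cos(theta), np.sin(theta)])  # unit vector along the line

print(np.allclose(P @ d, 0.0))  # True: motion along the line is unpenalized
```

With such a weight matrix, the point behaves as fixed across the line but is free to slide along it, which is exactly the constraint described above.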
3. EXPERIMENTAL RESULTS
Experimental work was conducted to validate the feasibility and
applicability of the above approaches, and to compare the
performance of each method. A bundle adjustment was
performed using overlapping photogrammetric and LiDAR data
captured over the University of Calgary campus. Nine photos in
three strips were used. The photos were captured by an RC30
frame analogue camera, with an average flying height of 770m,
and a focal length of 153.33mm. The photos were then digitally
scanned at 12 microns resolution, obtaining a 6cm GSD. Based
on these specifications, the expected photogrammetric
horizontal accuracy is around 0.09m, and the expected vertical
accuracy is about 0.30m (assuming an image measurement accuracy of 1
pixel). Ten LiDAR strips were captured in two flight missions
over the study area (six strips in the first day and four strips in
the second day), with an Optech 3100 sensor. The data were
captured at a flying height of 1000m for the first flight mission
and 1400m for the second. The LiDAR system provided a
0.75m ground point spacing, and a vertical accuracy of 15cm
for both flight missions. The horizontal accuracy for the first
flight mission is 50cm, and 70cm for the second.
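The 6 cm GSD quoted above can be sanity-checked from the flight parameters (a back-of-the-envelope sketch; the derivation of the horizontal and vertical accuracy figures additionally depends on the block geometry and is not reproduced here):

```python
# Back-of-the-envelope check of the ground sampling distance (GSD),
# using the flight parameters stated in the text.
flying_height = 770.0    # m (average flying height)
focal_length = 0.15333   # m (153.33 mm)
pixel_size = 12e-6       # m (12 micron scanning resolution)

scale = flying_height / focal_length   # image scale number, ~5022
gsd = scale * pixel_size               # ground sampling distance

print(round(gsd, 3))  # ~0.06 m, i.e. the 6 cm GSD quoted above
```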
The experiment was conducted by applying all the alternatives
mentioned above using control points, control patches, and
control lines. The number of control points was 24, the number
of control lines was 50, and the number of control patches was
42. In the experiments using control patches and control lines,
the number of tie points was 48. The comparative performance
of the introduced methodologies was evaluated through
quantitative and qualitative analyses. The following section, 3.1,
provides a quantitative analysis on the experimental work
performed using mean, standard deviation, and RMSE values,
while Section 3.2 provides a qualitative analysis using
orthoimage generation.
3.1 Quantitative Analysis
The quantitative analysis is performed for the three sources of
control information as per the following sub-sections.
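For a given coordinate, the three reported statistics are linked by the identity RMSE² = mean² + std² (with the population form of the standard deviation). A minimal sketch with synthetic check-point residuals (the values are illustrative, not the paper's):

```python
import numpy as np

# Synthetic check-point residuals in one coordinate (illustrative, metres).
residuals = np.array([0.08, -0.05, 0.11, 0.02, -0.09, 0.06])

mean = residuals.mean()
std = residuals.std(ddof=0)              # population form, so RMSE^2 = mean^2 + std^2
rmse = np.sqrt(np.mean(residuals ** 2))

print(np.isclose(rmse ** 2, mean ** 2 + std ** 2))  # True
```

The mean exposes systematic bias, the standard deviation exposes noise, and the RMSE combines both, which is why the two are reported separately in the analysis below.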
3.1.1 Georeferencing Results Using GCPs: Out of the 24
independently collected GPS-surveyed points, 8 points are used
as the ground control points, while the remaining 16 are used as
check points. The results are summarized in the second column
of Table 1. With a pixel size of 12 microns and an image
measurement accuracy of 1 pixel, the expected horizontal
accuracy is around 0.09m, while the expected vertical accuracy
is around 0.30m. From Table 1, it can be seen that the expected
accuracies match closely with the results computed in this
experiment (RMSE_X, RMSE_Y, RMSE_Z).
3.1.2 Georeferencing Results Using Areal Features:
The results from the georeferencing of the imagery using
LiDAR-derived planar features are presented in columns 3 and
4 of Table 1. A relatively large amount of bias is present in the
results (Mean_ΔX, Mean_ΔY, Mean_ΔZ), which is not present in the
results from Section 3.1.1. This is because a bias was observed
between the LiDAR reference frame and the used GPS
coordinate system. Moreover, a bias in the LiDAR system
parameters is suspected as well. However, the error magnitudes
(σ_X, σ_Y, σ_Z) are reasonable. The horizontal standard deviation is
similar to the results from Section 3.1.1, while the vertical
standard deviation is improved compared to Section 3.1.1
results. A possible reason for this is that many more areal
control features were used in comparison to the number of
ground control points used in Section 3.1.1. That is, the
improved vertical accuracy may be due to the higher
redundancy. This bias value has affected the final values of the
root mean square error (RMSE_X, RMSE_Y, RMSE_Z, RMSE_Total),
which are larger than those presented in the second column of
Table 1. The two methods of incorporating areal features yield
similar results. However, it was observed from the experiments
that the weight restriction method is more sensitive to blunders
than the coplanarity method. This can be explained by the fact
that blunders in the planar patches will affect the estimated
plane parameters, which might cause singularities. In the
coplanarity method, on the other hand, planar patches are
incorporated in the bundle adjustment by forcing the point
patches to lie on the plane defined by three photogrammetric
vertices. In other words, each point in the segmented patch
provides one condition equation. The high redundancy
promotes higher robustness against possible blunders.
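The per-point condition described above can be sketched as a scalar triple product: each LiDAR point must lie on the plane through the three photogrammetric vertices, i.e. the signed volume of the tetrahedron they form must vanish (the coordinates below are illustrative):

```python
import numpy as np

def point_on_plane_condition(p, a, b, c):
    """Scalar condition forcing point p onto the plane through a, b, c:
    the scalar triple product (6x the signed tetrahedron volume)."""
    return np.dot(np.cross(b - a, c - a), p - a)

# Three photogrammetric vertices defining a patch plane (illustrative).
a = np.array([0.0, 0.0, 10.0])
b = np.array([5.0, 0.0, 10.0])
c = np.array([0.0, 5.0, 10.0])

on_plane = np.array([2.0, 3.0, 10.0])   # LiDAR point lying on the plane
off_plane = np.array([2.0, 3.0, 10.4])  # LiDAR point 0.4 m off the plane

print(point_on_plane_condition(on_plane, a, b, c))   # 0.0
print(point_on_plane_condition(off_plane, a, b, c))  # nonzero residual
```

Since every segmented patch typically contains many LiDAR points, each contributing one such equation, the resulting redundancy is what makes the coplanarity method robust against individual blunders.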