One should note that the points selected in the imagery and in
the LiDAR patch need not be conjugate (Figure 2). In order to
compensate for the non-correspondence between the vertices
defined in the imagery and the vertices in the control patch, we
will restrict the weight of the selected points from the LiDAR
control patch, along the plane direction. The weight restriction
procedure is performed as follows. First, a local coordinate
system (UVW) with the U and V axes aligned along the plane
direction is defined. The relationship between the original
coordinate system (XYZ) and the local coordinate system (UVW)
is defined by the rotation matrix R. The rotation matrix is
defined using the orientation of the normal to the planar patch,
which is derived through a plane fitting procedure using all
points of the control LiDAR patch. The original weight matrix, $P_{XYZ}$, is defined as the inverse of the variance-covariance matrix $\Sigma_{XYZ}$, which depends on the accuracy specification of the LiDAR data. Using the law of error propagation, the weight matrix of the points in the local coordinate system ($P_{UVW}$) can be derived according to Equation 1, where $P_{XYZ}$ is the weight matrix in the object coordinate system defined by the LiDAR data and $P_{UVW}$ is the weight matrix in the patch coordinate system. Then, the weight matrix is modified according to Equation 2 by assigning a zero value to the weights along the planar patch (the $U$ and $V$ directions) while retaining the weight $P_W$ across the plane, to obtain a new weight matrix $P'_{UVW}$ in the plane coordinate system. Finally, the modified weight matrix $P'_{XYZ}$ in the original coordinate system can be derived according to Equation 3.
$$P_{UVW} = R \, P_{XYZ} \, R^{T} \qquad (1)$$

$$P'_{UVW} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & P_{W} \end{bmatrix} \qquad (2)$$

$$P'_{XYZ} = R^{T} \, P'_{UVW} \, R \qquad (3)$$
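To make the procedure concrete, the following is a minimal sketch of Equations 1 through 3 in Python with NumPy; the function name and the plane-fitting step (an SVD of the centred patch points) are illustrative choices, not part of the original formulation.

import numpy as np

def restrict_patch_weight(patch_points, sigma_xyz):
    """Derive the modified weight matrix P'_XYZ for a LiDAR control patch.

    patch_points: (n, 3) array of LiDAR points on the planar patch.
    sigma_xyz:    (3, 3) variance-covariance matrix of a LiDAR point.
    """
    # Plane fit: the right singular vectors of the centred point cloud
    # give the local axes; the last one (smallest variance) is the normal.
    centred = patch_points - patch_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    r = vt                                   # rows: U, V, W axes; W is the patch normal

    # Equation 1: propagate the weight matrix into the patch system.
    p_xyz = np.linalg.inv(sigma_xyz)
    p_uvw = r @ p_xyz @ r.T

    # Equation 2: zero the in-plane (U, V) weights, keep the weight
    # across the plane (W direction).
    p_uvw_mod = np.zeros((3, 3))
    p_uvw_mod[2, 2] = p_uvw[2, 2]

    # Equation 3: rotate the modified weight matrix back to XYZ.
    return r.T @ p_uvw_mod @ r

Note that for isotropic noise, sigma_xyz = sigma**2 * np.eye(3), the result penalizes only displacements along the patch normal, which is the intended behaviour.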
Next, a point-based solution using a regular bundle adjustment procedure with the modified weight matrix ($P'_{XYZ}$) is applied to georeference the involved imagery. It is important to mention that, in order to de-correlate the estimated parameters in the bundle adjustment procedure, one should use control planar patches with varying slope and orientation. A sketch of how these weights enter the adjustment is given after Figure 2.
Figure 2: Point-based incorporation of planar patches in
photogrammetry.
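As a hedged illustration of how the modified weights enter the point-based solution, the following sketch adds one control point's contribution to the normal equations, under the assumption that each control point's object coordinates are carried as weighted pseudo-observations (the usual mechanism for weighted control); all names are illustrative.

import numpy as np

def add_control_point(big_n, c, point_obs, point_est, p_mod, idx):
    # Pseudo-observation of the point's object coordinates: the design
    # matrix with respect to its own coordinates is the identity, so the
    # contribution to the normal equations is the weight matrix itself.
    w = point_obs - point_est                 # (3,) misclosure vector
    big_n[idx:idx + 3, idx:idx + 3] += p_mod  # N += I^T P' I
    c[idx:idx + 3] += p_mod @ w               # c += I^T P' w

Because $P'_{XYZ}$ has rank one (weight only across the plane), the point remains free to slide along the patch, which is exactly what compensates for the non-correspondence of the selected vertices.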
2.2 Indirect Georeferencing Using LiDAR Lines
Similar to LiDAR-derived areal features, LiDAR-derived linear
features can be used as control information for the
georeferencing of the photogrammetric data. This section
outlines two methods for the integration of LiDAR linear
control features in a photogrammetric triangulation procedure.
2.2.1 Extraction of LiDAR Lines: Similar to the extraction
of areal features, the extraction of linear features from a LiDAR
point cloud is performed using a program developed for this purpose (Figure 3).
Once LiDAR patches are extracted (Section 2.1.1),
neighbouring planar patches are identified and intersected to
produce infinite straight-line segments. Then, the LiDAR points
in the segmented patches that are within a certain distance from
the infinite lines are projected onto the lines. The most extreme
projected points along the infinite lines are chosen as the line
endpoints. This procedure is repeated until all the LiDAR linear
features are extracted. Once these features are extracted from
the LiDAR data, the next step is the incorporation of these
features in photogrammetric georeferencing.
Figure 3: Linear features extracted through planar patch intersection.
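The extraction procedure can be sketched as follows, assuming each patch has already been fitted with a plane $n \cdot x = d$; the function name and the max_dist threshold are illustrative, not taken from the paper.

import numpy as np

def patch_intersection_line(n1, d1, n2, d2, points, max_dist):
    # Direction of the intersection line: perpendicular to both normals.
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)

    # A point on both planes: solve the two plane equations plus a third
    # equation pinning the component along the line direction to zero.
    a = np.vstack([n1, n2, direction])
    p0 = np.linalg.solve(a, np.array([d1, d2, 0.0]))

    # Keep only the segmented-patch points close to the infinite line.
    rel = points - p0
    t = rel @ direction                      # parameter along the line
    dist = np.linalg.norm(rel - np.outer(t, direction), axis=1)
    t_near = t[dist < max_dist]

    # The most extreme projections define the line end points.
    return p0 + t_near.min() * direction, p0 + t_near.max() * direction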
2.2.2 Incorporation of Linear Features for Image
Georeferencing: This section presents the two approaches used
for incorporating linear features extracted from LiDAR for the
georeferencing of photogrammetric data. The first approach is
the coplanarity-based incorporation of linear features, while the
second one is the point-based incorporation of linear features,
where restrictions are imposed on the weight matrix. The
mathematical models for these approaches are provided in
detail in the following sub-sections.
Coplanarity-based Incorporation of Linear Features
The coplanarity-based incorporation of linear features was presented by Habib et al. (2004). This technique defines a line in
object space by its two end points. These two points in the
object space are extracted from the LiDAR data using the
previously mentioned procedure. In the image space, the line is
defined by a group of intermediate points. Each of the
intermediate points satisfies the coplanarity constraint shown in
Equation 4. In Equation 4, $\vec{V}_1$ is the vector from the perspective centre to the first LiDAR end point of the line, $\vec{V}_2$ is the vector from the perspective centre to the second LiDAR end point of the line, and $\vec{V}_3$ is the vector from the perspective centre to any intermediate image point on the line.
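Equation 4 itself falls outside this excerpt, but the constraint it refers to requires the three vectors to lie in a common plane, which can be written as the scalar triple product $(\vec{V}_1 \times \vec{V}_2) \cdot \vec{V}_3 = 0$. A minimal sketch, assuming all vectors are expressed in the same (object) frame:

import numpy as np

def coplanarity_residual(pc, end_point_1, end_point_2, intermediate_obj):
    # V1, V2: perspective centre to the LiDAR line end points;
    # V3: perspective centre to the intermediate image point, expressed
    # in the object frame. The residual vanishes when all three vectors
    # lie in the plane defined by the image line and the centre.
    v1 = end_point_1 - pc
    v2 = end_point_2 - pc
    v3 = intermediate_obj - pc
    return np.dot(np.cross(v1, v2), v3)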