due to perturbations in the flight trajectory. However, the line
orientation along the scene can be used as an approximation.
(2) Bundle Adjustment Using Control Lines:
In this case, the error ellipse expansion, or weight restriction,
can be applied in either image or object space. When expanding
the error ellipse, or restricting the weight matrix, in image space
(Figure 4.c), the object line will be represented by its end points
with their variance-covariance matrices defined by the procedure used for providing the control lines. On the other hand, the variance-covariance matrices of the image lines are expanded, or the weight matrices are restricted, to compensate for the fact
that the end points of the image lines are not conjugate to those
defining the object line. It should be noted that this approach is
not appropriate for scenes captured by line cameras since the
image line orientation cannot be rigorously defined.
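To make the image-space option concrete, the sketch below is a minimal NumPy illustration (the function names, end-point inputs, and accuracy values are assumptions for illustration, not an implementation from the paper). It builds the inflated 2x2 variance-covariance matrix, or the corresponding restricted weight matrix, of an image-line end point so that discrepancies along the image-line direction receive essentially zero weight while the across-line accuracy is preserved.

```python
import numpy as np

def expanded_image_covariance(p_start, p_end, sigma_px=0.5, sigma_along=1.0e3):
    """Inflate the 2x2 covariance of an image-line end point along the line.

    p_start, p_end : measured (x, y) end points of the image line.
    sigma_px       : expected measurement accuracy across the line.
    sigma_along    : artificially large standard deviation along the line,
                     so discrepancies in that direction get ~zero weight.
    (All names and default values are illustrative assumptions.)
    """
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    u = d / np.linalg.norm(d)          # unit vector along the image line
    n = np.array([-u[1], u[0]])        # unit vector across the image line
    R = np.column_stack([u, n])        # rotation from (along, across) to (x, y)
    return R @ np.diag([sigma_along**2, sigma_px**2]) @ R.T

def restricted_image_weight(p_start, p_end, sigma_px=0.5):
    """Weight-restriction alternative: zero weight along the image line."""
    d = np.asarray(p_end, float) - np.asarray(p_start, float)
    u = d / np.linalg.norm(d)
    n = np.array([-u[1], u[0]])
    R = np.column_stack([u, n])
    return R @ np.diag([0.0, 1.0 / sigma_px**2]) @ R.T
```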
When expanding the error ellipse, or restricting the weight
matrix, in object space (Figure 4.d), the image lines will be
represented by non-conjugate end points whose variance-
covariance matrices are defined by the expected accuracy of the
image coordinate measurements. To compensate for the fact that
these points are not conjugate to each other, the selected end
points will be assigned different identification codes. On the
other hand, the object line will be defined by a number of points
whose variance-covariance matrices are expanded, or weight
matrices are restricted. If we have m images, the number of
these points will be 2m since every line is defined by two points.
It should be noted that this approach can be used for scenes
captured by frame or line cameras since it does not require the
expansion of the variance-covariance matrix, or the restriction
of the weight matrix, in image space.
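The object-space option can be sketched in the same way (again a hedged NumPy illustration with assumed names and accuracy values, not the authors' code): the 3x3 variance-covariance matrix of an object-line point is inflated along the line direction, so the non-conjugate points are free to slide along the control line while staying constrained across it.

```python
import numpy as np

def expanded_object_covariance(P_start, P_end, sigma_xyz=0.05, sigma_along=1.0e3):
    """Inflate the 3x3 covariance of an object-line point along the line.

    P_start, P_end : 3-D points defining the control line (e.g., from LiDAR).
    sigma_xyz      : expected accuracy of the control line across its direction.
    sigma_along    : artificially large standard deviation along the line, so a
                     non-conjugate point is free to slide along the control line.
    (All names and default values are illustrative assumptions.)
    """
    d = np.asarray(P_end, float) - np.asarray(P_start, float)
    u = d / np.linalg.norm(d)                        # along-line unit vector
    helper = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = np.cross(u, helper); v /= np.linalg.norm(v)  # first across-line direction
    w = np.cross(u, v)                               # second across-line direction
    R = np.column_stack([u, v, w])
    return R @ np.diag([sigma_along**2, sigma_xyz**2, sigma_xyz**2]) @ R.T
```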
Figure 4: Variance expansion and weight restriction options of
line end points for their incorporation in point-based bundle
adjustment procedures
3. INCORPORATION OF AREAL FEATURES IN
PHOTOGRAMMETRY
The approaches used to incorporate areal features extracted
from LiDAR data in photogrammetric triangulation are
presented in this section. The first approach is the coplanarity-
based incorporation of areal features, while the second one is
the point-based incorporation of areal features, where we can
either expand the error ellipse or restrict the weight matrix. The
following sub-sections explain these approaches in detail.
3.1. Coplanarity-Based Incorporation of Planar Patches
In this approach, the planar patch is defined by three points, a, b, and c, in image space and a set of LiDAR points in object space (Habib et al., 2007). The points a, b, and c should be observed in at least two overlapping images (Figure 5). The collinearity equations 2 and 3 are used to relate the image space coordinates (x, y) to the object space coordinates (X, Y, Z) for the image points a, b, c. For any LiDAR point P in object space, the following constraint should be satisfied:

$$
V = \begin{vmatrix} X_P & Y_P & Z_P & 1 \\ X_A & Y_A & Z_A & 1 \\ X_B & Y_B & Z_B & 1 \\ X_C & Y_C & Z_C & 1 \end{vmatrix}
  = \begin{vmatrix} X_P - X_A & Y_P - Y_A & Z_P - Z_A \\ X_B - X_A & Y_B - Y_A & Z_B - Z_A \\ X_C - X_A & Y_C - Y_A & Z_C - Z_A \end{vmatrix} = 0 \qquad (10)
$$

where $(X, Y, Z)_{A, B, C}$ are the object space coordinates of the image points a, b, c, and $(X_P, Y_P, Z_P)$ are the object space coordinates of any ground point P = 1 to n, where n is the number of extracted LiDAR points in the areal feature.
The above constraint in equation 10 is used as the mathematical
model for incorporating LiDAR points into the
photogrammetric triangulation. In physical terms, this constraint
means that the normal distance between any LiDAR point P and
the corresponding photogrammetric surface consisting of the
three points A, B, and C should be zero. In other words, the volume of the tetrahedron formed by the four points is zero since these points belong to the same plane. This constraint is applied to all LiDAR points comprising this surface patch.
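For illustration, the short sketch below is a minimal NumPy rendering of equation 10 under assumed inputs (not the authors' implementation). It evaluates the constraint for a single LiDAR point using the reduced 3x3 determinant, which is proportional to the signed volume of the tetrahedron P-A-B-C.

```python
import numpy as np

def coplanarity_residual(P, A, B, C):
    """Residual of equation 10 for one LiDAR point P on the patch.

    A, B, C : object space coordinates of the points imaged as a, b, c
              (unknowns of the bundle adjustment).
    P       : one LiDAR point belonging to the same planar patch.
    Returns the 3x3 determinant of equation 10, i.e. six times the signed
    volume of the tetrahedron P-A-B-C; it is zero when P lies in the plane.
    """
    P, A, B, C = (np.asarray(x, float) for x in (P, A, B, C))
    M = np.vstack([P - A, B - A, C - A])
    return np.linalg.det(M)

# The constraint is written once for every LiDAR point of the patch:
# residuals = [coplanarity_residual(P, A, B, C) for P in lidar_points]
```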
Figure 5: Coplanarity-based incorporation of planar patches
3.2. Point-Based Incorporation of Planar Patches
A new approach to incorporate the planar patches using a point-
based technique is presented here. In this case, conjugate patch
vertices are defined in at least two overlapping images. Then, an
equal number of patch vertices is defined in object space.
Correspondence between points in image space and object space
is not necessary (Figure 6). The mathematical model used is the regular collinearity equations 2 and 3. To compensate for the
fact that there is no correspondence of points between image
and object spaces, we expand the error ellipse, or restrict the
weight matrix, in object space. Variance expansion, or weight
restriction, in image space is not applicable since all patches in
image space belong to the same 2-D plane (i.e. the image itself).
Two alternatives of the point-based technique are used. The first one relies on expanding the error ellipse, i.e., the variance-covariance matrix, along the areal feature, while the second one
relies on restricting the weight matrix along the areal feature.
Since the error expansion, or weight restriction, is done only in
object space, this approach is valid for both frame and line
cameras.
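As a rough illustration of the weight-restriction alternative, the sketch below (a hedged NumPy example with an assumed plane-fitting step and accuracy value, not the authors' implementation) confines the weight matrix of an object-space patch vertex to the patch normal, leaving the two in-plane directions unconstrained.

```python
import numpy as np

def restricted_patch_weight(patch_points, sigma_n=0.05):
    """Weight matrix of an object-space patch vertex, restricted to the normal.

    patch_points : (n, 3) array of LiDAR points defining the planar patch.
    sigma_n      : expected accuracy of the LiDAR patch along its normal.
    The returned 3x3 weight is zero in the two in-plane directions, so the
    vertex may move freely within the patch, and carries full weight along
    the normal, which ties the photogrammetric surface to the LiDAR patch.
    (Names and the default value are illustrative assumptions.)
    """
    pts = np.asarray(patch_points, float)
    centered = pts - pts.mean(axis=0)
    # Plane normal = direction of smallest spread of the LiDAR points.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    n = Vt[-1]
    return np.outer(n, n) / sigma_n**2
```

The variance-expansion alternative is the counterpart of this weight matrix: a small variance along the estimated normal and artificially large variances in the two in-plane directions.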