The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B5. Beijing 2008
with white feature marks are measured by total station. The planimetric and height precision of these control points is better than 0.015 m. Figure 4 shows the distribution of the ground feature points. The 33 triangle-shaped points are used as control points, while the other 31 square-shaped points serve as check points for the aerial triangulation experiments of section 4.
[Scatter plot: x-axis 7550 to 8250, y-axis 3100 to 3800; triangles denote control points, squares denote check points.]
Figure 4. Distribution of control and check points
3. PHOTOGRAMMETRIC PROCESSING OF LOW
ALTITUDE IMAGE SEQUENCES
3.1 Image matching
Image matching is the prerequisite for aerial triangulation and 3D information extraction, and it is also one of the core steps of geometric processing. Camera orientations usually vary with time between adjacent images, because the attitude of the stabilising platform is not highly accurate. For example, a 3-degree variation in pitch or roll angle results in a change of about ±5% in the forward and side overlap. In extreme circumstances, the actual overlap between adjacent images may deviate by 10% from the predefined overlap, i.e. it may be 70% to 90% if the predefined forward overlap is 80%, or 20% to 40% if the predefined side overlap is 30%. Variation in the yaw angle also causes overlap problems. As shown in figure 5, the above-mentioned deficiencies distinctly worsen the image matching problem. Finding a set of overall optimal conjugate points, and thereby estimating the orientation variation between stereo image pairs, is therefore the key technology for matching low altitude image sequences.
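The ±5% figure can be checked with a small-angle calculation. As a rough sketch (the 35 mm focal length and 36 mm sensor width below are assumed values for illustration, not given in the text), the footprint shift caused by a tilt of θ is about H·tan θ on the ground, while the footprint width is H·w/f, so the overlap change is roughly tan(θ)·f/w, independent of flying height:

```python
import math

def overlap_shift_fraction(tilt_deg, focal_mm, sensor_mm):
    """Approximate overlap change caused by a platform tilt, expressed
    as a fraction of the image footprint (flat-terrain, small-angle
    sketch; independent of the flying height)."""
    return math.tan(math.radians(tilt_deg)) * focal_mm / sensor_mm

# Hypothetical 35 mm lens on a 36 mm-wide frame: a 3-degree tilt
# changes the overlap by about 5%, as stated in the text.
print(f"{overlap_shift_fraction(3.0, 35.0, 36.0):.1%}")
```

The same relation explains why a 3-degree attitude error is enough to push an 80% predefined overlap anywhere between roughly 75% and 85%, and twice that in the extreme case.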
Figure 5. Sketch map of images acquired by the low altitude remote sensing system in extreme circumstances
Image matching comprises three steps: feature point extraction, overall matching and fine matching. Firstly, feature points are automatically extracted from all images with the Harris corner detector (Harris, 1988). In this process, each image is divided into regular grids of 90 × 90 pixels, and in each grid cell one feature point is extracted whose interest value is the maximum of the cell and larger than a predefined threshold. For the Kodak Pro SLR camera, with an image format of 4500 × 3000 pixels, at most 1650 feature points can be extracted from each image.
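A minimal sketch of this grid-based extraction, assuming a plain-NumPy Harris response (the window half-size, threshold and helper names are illustrative, not the authors' implementation):

```python
import numpy as np

def harris_response(img, k=0.04, r=2):
    """Harris corner response (Harris, 1988) from central-difference
    gradients and a simple box window of half-size r."""
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # Box-filter sum over a (2r+1) x (2r+1) window, edge-padded.
        p = np.pad(a, r, mode="edge")
        s = np.zeros_like(a)
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                s += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return s

    sxx, syy, sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

def grid_features(response, grid=90, threshold=1e-4):
    """One feature per full grid cell: the cell maximum, kept only if
    its interest value exceeds the threshold."""
    pts = []
    for y0 in range(0, response.shape[0] - grid + 1, grid):
        for x0 in range(0, response.shape[1] - grid + 1, grid):
            cell = response[y0:y0 + grid, x0:x0 + grid]
            iy, ix = np.unravel_index(np.argmax(cell), cell.shape)
            if cell[iy, ix] > threshold:
                pts.append((y0 + iy, x0 + ix))
    return pts
```

With full 90-pixel cells, a 4500 × 3000 image yields 50 × 33 = 1650 cells, which is consistent with the "at most 1650 feature points" stated in the text.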
In the overall matching process, the searching range in the right image for the conjugate of a feature point in the left image is usually 40% of the image format. That means that if the predefined forward overlap is 70%, the searching range of the conjugate point in the right image runs from 50% to 90% in the x-direction and from 80% to 120% in the y-direction. As shown in figure 6, the black rectangle represents the left image of a stereo pair, and the purple rectangle represents the right image with 70% predefined forward overlap. The searching range runs from 50% (blue rectangle in figure 6a) to 90% (dashed red rectangle in figure 6a) in the x-direction, and from 80% (blue rectangle in figure 6b) to 120% (dashed red rectangle in figure 6b) in the y-direction. Changes of the yaw (kappa) angle in the range of ±15 degrees are also considered (blue and dashed red rectangles in figure 6c). This strategy ensures that images with large orientation changes, or containing objects with large parallax such as tall buildings and high mountains, can still be matched successfully. Note, however, that the larger the searching range, the more mismatches occur. Therefore only feature points with strong interest values in the left image are used to find overall conjugate points in the right image. Once 5 or more overall optimum conjugate points are successfully matched, relative orientation can be performed to determine the overlaps and rotation angles between the stereo pairs. Of course, gross mismatches should be detected and removed while calculating the relative orientation parameters.
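The 40%-of-format search window can be sketched as the nominal overlap prediction plus a ±20% slack in each direction (the flat along-track shift used for the prediction, and all names, are illustrative simplifications):

```python
def search_window(x_left, y_left, width, height, overlap=0.7, slack=0.2):
    """Predict the conjugate of a left-image feature in the right image
    and return the rectangular search range around it, i.e. 40% of the
    format per axis (overlap varying between overlap - slack and
    overlap + slack)."""
    # Nominal prediction: the right image is displaced along-track by
    # (1 - overlap) of the format width; y is unchanged in the ideal case.
    x_pred = x_left - (1.0 - overlap) * width
    y_pred = y_left
    return ((x_pred - slack * width, x_pred + slack * width),
            (y_pred - slack * height, y_pred + slack * height))
```

For a 4500 × 3000 image with 70% predefined forward overlap, a feature at the image centre is searched over 1800 pixels in x and 1200 pixels in y, matching the 50% to 90% and 80% to 120% ranges in the text.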
Figure 6. Searching range of the conjugate point in the right image: (a) x-direction, (b) y-direction, (c) yaw rotation
Fine matching can be performed once the relative orientation parameters are available. A pyramid strategy is always used for fine matching. This process is somewhat similar to traditional image matching. The overlap between the images of an adjacent stereo pair, determined above, is used to predict the position of the possible conjugate of a given feature point. Then precise least squares image matching is used to find the conjugate point in a local searching area around the predicted position, along the epipolar line determined by the relative orientation parameters. After more conjugate points have been matched, the relative orientation process is invoked again to refine the orientation parameters, and gross mismatches are detected and removed during this process. Then precise matching is performed once more. Usually, the set of matched points becomes stable within three iterations.
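The iterate-and-reject loop might be sketched as follows; for brevity a simple translation between the matched point sets stands in for the full relative orientation, and the RMS-based rejection threshold is an assumed stand-in for the blunder test described in the text:

```python
import numpy as np

def reject_gross_errors(left, right, max_iter=3, k=3.0):
    """Iteratively estimate a model between matched point sets (here a
    plain translation, as a stand-in for relative orientation) and
    discard matches whose residual exceeds k times the RMS of the
    inliers. Stops when the inlier set is stable, typically within
    three iterations as noted in the text."""
    keep = np.ones(len(left), dtype=bool)
    for _ in range(max_iter):
        shift = (right[keep] - left[keep]).mean(axis=0)
        res = np.linalg.norm(right - left - shift, axis=1)
        rms = np.sqrt((res[keep] ** 2).mean())
        new_keep = res <= k * max(rms, 1e-9)
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return keep, shift
```

In the real pipeline the model refit on the surviving matches is the refined relative orientation, and the surviving conjugate points seed the next round of precise matching.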
Note that, for convenience of linking all stereo models automatically, the image points matched in one stereo pair should be transferred to the next stereo pair. This can be achieved by replacing the feature point of the left image with the matched conjugate point in the same grid cell from the former stereo pair.
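The transfer step might look like the following sketch (the dictionary keyed by grid cell is an illustrative data structure, not the authors' implementation): conjugates found in the right image of pair i overwrite the grid features extracted in that image before it becomes the left image of pair i+1, so tie points run through consecutive models.

```python
def transfer_points(grid_features, matched_conjugates, grid=90):
    """Carry conjugate points from the previous stereo pair into the
    next one: each conjugate found in the right image replaces the
    feature originally extracted in the same grid cell of that image."""
    features = dict(grid_features)        # (cell_row, cell_col) -> (y, x)
    for (y, x) in matched_conjugates:     # conjugates in the right image
        features[(int(y) // grid, int(x) // grid)] = (y, x)
    return features
```

Because the replacement happens per grid cell, the one-point-per-cell property of the extraction step is preserved while the transferred points link adjacent models.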