3.2 Relative orientation and model link 
Relative orientation is used to determine the orientation parameters of the right image of a stereo pair while those of the left image are held fixed; the orientation parameters of the first image are usually set to zero. At the same time, mismatches that do not obey the coplanarity condition enforced by relative orientation can be detected and removed, so in most cases relative orientation is used in conjunction with image matching. Images acquired by non-metric cameras usually show significant lens distortion, so the lens distortion parameters calibrated in advance should be applied in the relative orientation process.
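For reference, the coplanarity condition used here can be written in its standard determinant form (the notation below is the usual textbook one and is not quoted from this paper). With baseline components (B_X, B_Y, B_Z) and image-ray components (u_i, v_i, w_i) obtained by rotating the distortion-corrected image coordinates (x_i - \Delta x_i, y_i - \Delta y_i, -f) of the left (i = 1) and right (i = 2) image into the model coordinate system, a conjugate point pair has to satisfy

\[
F \;=\;
\begin{vmatrix}
B_X & B_Y & B_Z \\
u_1 & v_1 & w_1 \\
u_2 & v_2 & w_2
\end{vmatrix}
\;=\; 0 .
\]

Point pairs whose residual F remains significantly different from zero after the adjustment are treated as mismatches and removed.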
All models in one strip can be linked automatically by determining the baseline lengths from pass points derived from the image matching process. If a set of images contains more than one strip, the orientation parameters of every strip except the first have to be transformed using common points between adjacent strips. This is done by computing the seven parameters (three translations, three rotations and one scale) between two strips from the model coordinates of their common points.
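The seven-parameter transformation referred to above is the standard 3D similarity (Helmert) transformation; in the usual notation (not quoted from this paper),

\[
\mathbf{X}_2 \;=\; \lambda\,\mathbf{R}(\varphi,\omega,\kappa)\,\mathbf{X}_1 + \mathbf{T},
\]

where \mathbf{X}_1 and \mathbf{X}_2 are the model coordinates of a common point in the two adjacent strips, \lambda is the scale factor, \mathbf{R} is the rotation matrix built from the three rotation angles, and \mathbf{T} = (T_X, T_Y, T_Z)^\mathrm{T} is the translation vector. At least seven coordinate observations, e.g. three non-collinear common points, are needed to solve for the seven parameters.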
3.3 Aerial Triangulation 
After model link there are inevitably discrepancies among the camera parameters and the model coordinates of conjugate points, so a free network bundle adjustment is performed to eliminate these discrepancies. The absolute orientation parameters (usually seven parameters) can then be calculated easily from the model coordinates and the world coordinates of the ground control points. Initial values of the unknowns in aerial triangulation are obtained by applying this absolute orientation to the results of the free network adjustment.
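As an illustration of this step, the sketch below shows one standard closed-form way, the Umeyama/Horn SVD solution, to compute the seven similarity parameters from corresponding model and ground (or strip-to-strip) coordinates. It is not necessarily the solution used by the authors, and the function and variable names are illustrative.

    import numpy as np

    def estimate_similarity(model_pts, ground_pts):
        """Closed-form seven-parameter (3D similarity) transform such that
        ground ~ s * R @ model + t, following the Umeyama/Horn SVD solution.

        model_pts, ground_pts : (N, 3) arrays of corresponding points,
                                N >= 3 and not all collinear.
        Returns scale s, rotation matrix R (3 x 3) and translation t (3,).
        """
        m = np.asarray(model_pts, dtype=float)
        g = np.asarray(ground_pts, dtype=float)

        # Centroids and centred coordinates
        mc, gc = m.mean(axis=0), g.mean(axis=0)
        dm, dg = m - mc, g - gc

        # Cross-covariance between the two point sets and its SVD
        H = dm.T @ dg
        U, S, Vt = np.linalg.svd(H)

        # Guard against an improper rotation (reflection)
        D = np.eye(3)
        if np.linalg.det(Vt.T @ U.T) < 0:
            D[2, 2] = -1.0

        R = Vt.T @ D @ U.T                                # rotation
        s = np.trace(np.diag(S) @ D) / (dm ** 2).sum()    # scale
        t = gc - s * R @ mc                               # translation
        return s, R, t

The rotation angles used as initial values can be extracted from R afterwards, and the same routine can serve for the strip-to-strip transformation of Section 3.2.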
Aerial triangulation with ground control points is the key to geometric precision analysis and a prerequisite for 3D information extraction. The collinearity equations are still the basic mathematical model of aerial triangulation. Because the camera used for data acquisition is non-metric, its interior orientation parameters change slightly over time, and the pre-calibrated interior parameters will not be exactly the same as those valid at the moment of data acquisition. A self-calibration strategy is therefore expected to be used in aerial triangulation. Furthermore, as in traditional film-based photogrammetry, there are also systematic errors in the image coordinates acquired by digital cameras (Cramer, 2007). High-order correction polynomials such as the 44-parameter Gruen model are therefore introduced as unknowns in aerial triangulation. However, self-calibration parameters of lens distortion are often closely correlated with these additional systematic-error parameters, so the two sets of parameters should not be treated as unknowns at the same time in the bundle adjustment.
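In the usual notation (again not quoted from this paper), the self-calibrating bundle adjustment is based on the extended collinearity equations

\[
x - x_0 + \Delta x = -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},
\qquad
y - y_0 + \Delta y = -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)},
\]

where (x_0, y_0, f) are the interior orientation parameters treated as unknowns for self-calibration, (X_S, Y_S, Z_S) is the projection centre, a_i, b_i, c_i are the elements of the rotation matrix, and \Delta x, \Delta y collect the additional corrections, i.e. either the lens distortion terms or the polynomial terms of the Gruen model, but, as noted above, not both as unknowns simultaneously.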
The forward and side overlaps of low altitude image sequences are both higher than those of traditional photogrammetry, so a given ground point usually has several corresponding image points and the geometric model of low altitude image sequences is stronger than that of traditional photogrammetry. Another advantage is that the larger number of redundant observations increases the precision and reliability of the aerial triangulation, so its precision is also superior to that of traditional photogrammetry.
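As a rough illustration of this geometric strength (the comparison figures here are illustrative and not taken from the paper): with forward overlap p a ground point is imaged in approximately 1/(1 - p) consecutive images of a strip, so

\[
n_{\text{fwd}} \approx \frac{1}{1-p} = 5 \ \text{for}\ p = 0.8
\qquad\text{versus}\qquad
n_{\text{fwd}} \approx 2\text{--}3 \ \text{for a conventional}\ p = 0.6,
\]

and a comparably high side overlap makes the point visible from several adjacent strips as well, which further multiplies the number of rays per point and hence the redundancy.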
4. EXPERIMENTS AND RESULTS 
Experimental results of the proposed approaches are discussed in this section. Low altitude image sequences and ground control points are used as sources of information. Results of image matching, of aerial triangulation and of the generation of digital photogrammetric products such as DSM, DOM and DLG are presented in the following.
4.1 Image matching 
With the proposed low altitude image matching approach, about 600 to 800 conjugate points are successfully matched in each image pair. Figure 7 shows the matched conjugate points of one stereo pair. As can be seen, the conjugate points are distributed over the whole overlapping area. These conjugate points are sufficient for aerial triangulation, and they can also be used to analyze the forward and side overlaps and the relative rotation angles between adjacent images. The mean forward overlap between adjacent images agrees well with the predefined 80%: the maximum forward overlap between adjacent image pairs is about 85% and the minimum is about 75%. The maximum side overlap between images of adjacent strips is about 80% and the minimum is about 70%. Compared with the predefined 80% forward overlap and 75% side overlap, the maximum overlap variation is about 5% in both directions. The maximum orientation variation between adjacent images of the same strip is usually less than 5 degrees.
Figure 7. Matched conjugate points of a stereo pair 
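A minimal sketch of how such an overlap estimate can be derived from the matched points is given below. It assumes roughly nadir, rotation-free images of identical width; the function and variable names are illustrative and not taken from the paper.

    import numpy as np

    def estimate_forward_overlap(pts_left, pts_right, image_width):
        """Rough forward-overlap estimate from matched conjugate points.

        pts_left, pts_right : (N, 2) arrays of matched pixel coordinates (x, y)
        in two consecutive, roughly aligned nadir images of identical width.
        The dominant x-shift between the images approximates the base, so
        overlap ~ 1 - |shift| / image_width.
        """
        dx = (np.asarray(pts_right, dtype=float)[:, 0]
              - np.asarray(pts_left, dtype=float)[:, 0])
        shift = np.median(dx)          # median is robust to remaining mismatches
        return 1.0 - abs(shift) / float(image_width)

Using the median rather than the mean keeps the estimate robust against the few remaining mismatches; the same idea applied to images of adjacent strips gives the side overlap.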
However, the rotation angles between images that belong to different strips are usually larger than those between images of the same strip. As shown in Figure 8, the rotation angle (kappa) between two images of adjacent strips is about 9 degrees, and it can sometimes reach about 15 degrees. As can be seen in Figure 8, although the number of conjugate points matched by the proposed algorithm is smaller than in Figure 7, almost all of them are correct correspondences. These conjugate points are vital for aerial triangulation because they link different strips together.
Figure 8. Matched conjugate points of images of adjacent strips