[Garbled column fragments from the preceding discussion of the Fort Hood data set: a figure captioned "Fort Hood Data", photographs taken at 1650 meters, and a table of image-transfer RMS values of approximately 0.074-0.075 mm for Models 2 and 3.]
Edward M. Mikhail
2.4.5 Purdue EE Building Data. The Purdue EE building data set consists of three convergent photographs taken
from a parking garage approximately 30-50 meters away from the Electrical Engineering Building on the Purdue
University Campus; see Figure 4. Two pictures were taken from the top level of the garage (43 and 42), while one was
taken from ground level (48). Photographs were taken with a 75mm hand-held camera, and the diapositives were
scanned at 15 micrometers. The results for image transfer to photograph 48 are shown in Table 3.
Figure 4. Purdue EE Building Data, Frames 43, 42, 48, from left to right
2.4.6 Discussion of Image Transfer Results. Results from these three data sets show that, provided degenerate
cases do not exist, Models 2 and 3 converge and give the same results as Model 4. Thus, for the
remainder of the paper, results from Models 2-4 will be combined and reported as if they were a single model. Models 2-4 are
rigorous in the sense that they all linearize with respect to the observations, unlike Model 1, which may give
unpredictable results.
As for the F-matrix technique, the variant that uses all three F matrices and their associated constraints is less
susceptible to noise, as shown clearly by comparing the last two rows of Table 1 for 25 micrometers of noise.
Although not as robust a technique as collinearity, F-matrix techniques have the benefit of being applicable to those
parts of the scene where points can be observed on two images only.

Model     x RMS (pixels)   y RMS (pixels)
Model 1        1.48             1.40
Model 2        1.43             1.37
Model 3        1.43             1.37
Model 4        1.43             1.37
3 F's          1.29             1.95
2 F's          1.24             2.01

Table 3. Image Transfer Experiments with Purdue EE Data, 8 control points, 8 check points

3 OBJECT RECONSTRUCTION

Section 2 described techniques to first establish the relationship between
the image coordinates on three photographs. Once the image-to-image
relationship is established, the introduction of known 3D control points
allows us to solve for the 3D projective transformation matrix. This 4 by 4 matrix is used to transform object points
from model to ground coordinates, and similarly to transform the camera transformation matrices from relative to
absolute. Therefore, ground coordinates of new points observed on two or more photographs can be computed, and the
physical camera parameters can be estimated from the absolute camera transformation matrix.
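The transform step described above reduces to multiplying homogeneous point coordinates by a 4 by 4 matrix and dividing by the resulting fourth (w) coordinate. A minimal numpy sketch (the function name is an illustrative choice, not from the paper):

```python
import numpy as np

def apply_projective(T, pts):
    """Apply a 4x4 projective transform T to an Nx3 point array
    and dehomogenize (divide by the fourth, w, coordinate)."""
    pts = np.asarray(pts, float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # N x 4 homogeneous
    out = homo @ T.T                                 # transformed points
    return out[:, :3] / out[:, 3:4]                  # back to 3D
```

Since the paper defines H as mapping ground space to model space, computing ground coordinates from model coordinates in this sketch would apply `np.linalg.inv(H)`.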
3.1 3D Projective Transformation Matrix
For the general case of uncalibrated cameras, the model coordinates computed using relative camera transformation
matrices are in a 3D non-conformal system that requires a 3D projective transformation to obtain ground coordinates in
a 3D conformal system. After the 3D model coordinates have been computed for all of the points, the next step is to
compute the 15 elements of the non-singular 4x4 projective transformation matrix, H. Since three equations per point
can be written, a minimum of 5 points is required to solve for the 15 elements of H. (Note that the (4,4) element of H is
set to unity.) With more than 5 points, a linear least squares solution is applied. The 3D projective transformation H is
from projective ground space to projective model space. The relations for coordinates and transformation matrices are
given by:
\lambda \, [\,x \;\; y \;\; z \;\; 1\,]^{T} = H \, [\,X \;\; Y \;\; Z \;\; 1\,]^{T}    (12a)
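The linear least-squares solution for the 15 elements of H can be sketched as a 3D analogue of the direct linear transformation: eliminating the projective scale factor, each correspondence yields three equations, and fixing the (4,4) element to unity moves one known term to the right-hand side. A minimal numpy illustration (function and variable names are assumptions, not from the paper), taking H to map homogeneous ground coordinates to model coordinates as stated in the text:

```python
import numpy as np

def estimate_H(ground, model):
    """Linear least-squares estimate of the 4x4 projective transform H
    (ground -> model) with H[3,3] fixed to 1.
    ground, model: Nx3 arrays of corresponding 3D points, N >= 5."""
    ground = np.asarray(ground, float)
    model = np.asarray(model, float)
    A, b = [], []
    for (X, Y, Z), m in zip(ground, model):
        g = [X, Y, Z, 1.0]
        for i in range(3):                  # one equation per coordinate
            row = [0.0] * 15
            row[4 * i:4 * i + 4] = g        # row i of H
            row[12:15] = [-m[i] * X, -m[i] * Y, -m[i] * Z]  # fourth row of H
            A.append(row)
            b.append(m[i])                  # from the H[3,3] = 1 constraint
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1.0).reshape(4, 4)
```

With N points the design matrix is 3N by 15, so N = 5 gives a unique solution and N > 5 an overdetermined least-squares fit, consistent with the minimum point count stated above.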
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 589