The relief displacement of a window is different in different images, so a fixed window cannot be used directly for matching. We therefore set up hypothesis planes at a series of depths in object space, as shown in Figure 4; projecting the master window through a hypothesis plane may also correct the tilt of the matching window. The AvgNCC is calculated at each hypothesis depth, and the maximum AvgNCC indicates the correct depth with respect to the wall of the LOD 2 model (in this example, AvgNCC = 0.9 at depth -1.2 m against AvgNCC = 0.6 at depth 2 m).

Figure 4. Multiple image matching in different depths (the master window and the corrected images for matching in different depths)

2.3 Multiple Image Matching of Structural Lines in Different Directions

There are three strategies of matching for a line. The first one is endpoint matching, which matches the two endpoints of a line, as shown in Figure 5(a); a 3D line can then be generated by multiple image matching of the two endpoints, provided each endpoint is a distinct point. The second strategy is line matching, in which the matching window of a line is designed so that it can cover the whole line. The last one is edge matching, which splits a line into a set of edge points and matches the edges.
Comparing
line matching and edge matching, the former can only handle 3D lines
that are parallel to a wall, since its hypothesis plane is parallel to the
wall; the latter is suitable for 3D lines in different directions.
Figure 5(d) shows two examples of structural lines in different
directions.
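To make the depth-hypothesis search concrete, a minimal Python sketch is given below. It is illustrative rather than the authors' implementation: the windows are assumed to be grey-level arrays of equal size, and slave_samplers stands for hypothetical helpers that resample a tilt-corrected window from each slave image at a given depth.

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation of two equal-sized grey-level windows.
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def best_depth(master_win, slave_samplers, depths):
        # Score every hypothesis plane (a depth offset from the LOD-2 wall)
        # by the NCC averaged over all slave images, and return the depth
        # with the maximum AvgNCC.
        scores = [np.mean([ncc(master_win, sample(d)) for sample in slave_samplers])
                  for d in depths]
        i = int(np.argmax(scores))
        return depths[i], scores[i]

    # Depth hypotheses from -2 m to +2 m in 0.05 m steps, as used in Section 3.2.
    depths = np.arange(-2.0, 2.0001, 0.05)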
2.4 3D Line Fitting
For endpoint matching and line matching, we can directly generate 3D lines in object space. However, for edge-point matching, a number of 3D points are generated after the multiple image matching, and a 3D line has to be fitted to these points. There are two major steps in the line fitting. In the first step, we use Random Sample Consensus (RANSAC) to obtain the collinear points in object space; the advantage of RANSAC is that it removes the outliers from the 3D line fitting. We iteratively and randomly select two points to calculate the line parameters, i.e. the direction and starting point of a line, and then find the maximum cluster in parameter space, which represents the collinear points. In the second step, we use a least-squares adjustment to calculate the optimal line. Figure 6 is an example of 3D line fitting: the red circles are the 3D points from matching, and the blue line is the extracted line.
Figure 6. An example of 3D line fitting from 3D points
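The two steps can be sketched in Python as follows. This is a minimal sketch, not the paper's implementation: instead of clustering the candidate parameters in parameter space, it uses the common RANSAC inlier count to select the dominant collinear set, which serves the same purpose, and the least-squares step is the standard total least-squares fit through the inliers.

    import numpy as np

    def ransac_line(points, n_iter=1000, tol=0.05, seed=0):
        # Step 1: randomly sample two points, form a candidate line, and
        # keep the candidate supported by the most points within tol metres.
        rng = np.random.default_rng(seed)
        best = np.zeros(len(points), dtype=bool)
        for _ in range(n_iter):
            i, j = rng.choice(len(points), size=2, replace=False)
            d = points[j] - points[i]
            n = np.linalg.norm(d)
            if n < 1e-9:
                continue                  # degenerate sample
            d = d / n
            v = points - points[i]
            # Perpendicular distance of every point to the candidate line.
            dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
            inliers = dist < tol
            if inliers.sum() > best.sum():
                best = inliers
        return best

    def lsq_line(points):
        # Step 2: the optimal line passes through the centroid of the
        # collinear points along their dominant singular vector.
        c = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - c, full_matrices=False)
        return c, vt[0]                   # point on line, unit direction

    # Usage: inliers = ransac_line(pts); origin, direction = lsq_line(pts[inliers])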
3. EXPERIMENTAL RESULTS
The test data are multiple close-range images taken by a Nikon
D2X camera. The target is a façade of a building. The average
image scale is about 1:3000, and the base-to-depth ratio between
two camera stations is about 1:10. The LOD 2 building model
was generated from 1:5000 aerial images; the estimated accuracy
of the building model is about 30 cm. Table 1 lists the related
information of the test images.
Table 1. Related information of the test images
Date: 2011/3/29
Camera: Nikon D2X
Number of images: 33
Image size (pixels): 4288 × 2848
CCD size (µm): 5.5
Focal length (mm): 18
Spatial resolution (mm): 6
Overlap (%): 90
3.1 Orientation Modelling
The automatic image matching generated 2515 tie points.
Among these tie points, 2166 are intersections of two rays and
the remaining points are intersections of three or more rays.
Figure 7(a) shows four images with the matched points. The
control and check points were collected by a total station.
The numbers of control and check points are 4 and 26,
respectively. The mean errors of the check points in the three
directions are 3.3, 5.2 and 2.6 cm, and the RMSEs of the check
points in the three directions are 4.1, 4.5 and 1.8 cm, respectively.
As the Y direction is the look direction of the camera, the error
in the Y direction is larger than in the other directions. Figure 7(b)
shows the distribution of the camera stations; the yellow points in
Figure 7(b) are the object points intersected from the tie points.
Figure 7. Results of orientation modelling: (a) four images with matched tie points; (b) perspective view of the camera stations and 3D points
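For reference, the mean error and RMSE reported above follow the usual per-axis definitions; a minimal sketch, with illustrative array names:

    import numpy as np

    def check_point_errors(measured, reference):
        # measured, reference: N x 3 arrays of check-point coordinates (m).
        residuals = measured - reference
        mean_error = residuals.mean(axis=0)            # signed bias per axis
        rmse = np.sqrt((residuals ** 2).mean(axis=0))  # dispersion per axis
        return mean_error, rmse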
3.2 Comparison of Line-based and Endpoints Matching
In order to compare line-based matching and endpoint matching
for linear features, we selected a window as the target area for the
comparison. The target area is about 3.5 m by 3.5 m in object
space. A close-range image is selected as the master image, and
the linear features on the master image are manually digitized at
the boundaries of the window. The green lines in Figure 8(a) are
the digitized lines; Figure 8(a) also shows the shape of the target
window in object space. Each digitized line on the master image
is ray-traced to the wall of the LOD-2 building model and then
back-projected onto the other images to find the slave images; six
slave images are automatically selected out of the 32 images. The
depth for the AvgNCC ranges from -2 m to +2 m with a step of
0.05 m, and the matching window is 0.21 m by 0.21 m.
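The master-to-slave transfer just described amounts to a ray-plane intersection followed by a reprojection. The sketch below uses a generic pinhole model in place of the collinearity equations; the intrinsic matrix K, rotation R, camera centre C, and the wall plane (p0, n) are illustrative assumptions, not the paper's notation.

    import numpy as np

    def pixel_ray(K, R, C, uv):
        # Ray from camera centre C through pixel uv (pinhole model).
        # K: 3x3 intrinsics, R: world-to-camera rotation, C: camera centre.
        d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
        return C, d / np.linalg.norm(d)

    def intersect_wall(origin, direction, p0, n):
        # Intersect the viewing ray with the LOD-2 wall plane through p0
        # with unit normal n, giving a 3D point on the hypothesis surface.
        t = np.dot(p0 - origin, n) / np.dot(direction, n)
        return origin + t * direction

    def back_project(K, R, C, X):
        # Project object point X into another image; images in which the
        # projection falls inside the frame are taken as slave images.
        x = K @ (R @ (X - C))
        return x[:2] / x[2]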
Figure 8(b) is the perspective view of the lines matched in object
space by endpoint matching. The straight lines on the window are
deformed after the matching: endpoint matching considers only
the vertices of a line, and this lack of information distorts the 3D
lines, especially when a vertex of a line is occluded by other
objects. Figure 8(c) is the perspective view of the lines matched
in object space by line matching. The results of line-based
matching are better than those of endpoint matching, and the
shape of the extracted lines is more regular. A few incorrect lines
located at the bottom of the window are caused by self-occlusion.