object-based method that starts from a set of hypothesis planes in object space. These planes are back-projected to the different images, and each plane is filled with the gray values from the corresponding image. Finally, a similarity index is calculated to find the best hypothesis plane. Comparing the two methods, the first is a sequential process while the second is a simultaneous one. Hence, this research adopts the second method for multiple image matching.
In order to generate reasonable hypothesis planes in object space, we use a LOD 2 building model to provide the initial location of the facade structure. In addition, facade features are available from line extraction. The object-based multiple image matching starts by selecting a feature in the master image. The line-of-sight of the selected feature is then intersected with the LOD 2 building model to derive an intersection point, which serves as the initial location of the facade structure. A number of rectangles at different depths are then generated around this initial location. These rectangles are back-projected to the images, and the gray values are resampled, producing a set of corrected image chips for further processing. Figure 3 illustrates the idea of object-based matching. C1 to C6 denote camera stations. The blue rectangles indicate the hypothesis planes, while the red and green lines are the lines-of-sight. The hypothesis planes lie along the line-of-sight of the master image.
Figure 3. Illustration of object-based matching.
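As a concrete sketch of this step, the following Python fragment (all names are hypothetical) intersects the line-of-sight of a selected master-image feature with the LOD 2 wall plane, places hypothesis rectangles at several depths along the ray, and then back-projects one rectangle into an image to resample a corrected chip. The helpers project (collinearity equations with known orientation) and sample (gray-value interpolation) are assumptions for illustration, not part of the paper.

import numpy as np

def generate_hypothesis_centres(ray_origin, ray_dir, wall_plane, depth_offsets):
    # Intersect the line-of-sight with the LOD 2 wall plane:
    # find t such that (ray_origin + t * ray_dir - p0) . n = 0.
    p0, n = wall_plane                      # point on plane, unit normal
    t = np.dot(p0 - ray_origin, n) / np.dot(ray_dir, n)
    initial = ray_origin + t * ray_dir      # initial facade location
    # Shift the centre along the line-of-sight (ray_dir is a unit vector)
    # for each hypothesised depth.
    return [initial + d * ray_dir for d in depth_offsets]

def resample_corrected_chip(centre, half_size, normal, project, sample, size=32):
    # Span the hypothesis rectangle with two in-plane axes.
    u = np.cross(normal, [0.0, 0.0, 1.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    chip = np.empty((size, size))
    steps = np.linspace(-half_size, half_size, size)
    for r, a in enumerate(steps):
        for c, b in enumerate(steps):
            # Back-project each object-space grid point to the image
            # and resample its gray value.
            chip[r, c] = sample(project(centre + a * v + b * u))
    return chip                             # one corrected image chip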
The next step calculates the matching scores from the corrected image chips. The matching score is based on normalized cross correlation (NCC) (Schenk, 1999). A number of NCCs are calculated between the master and slave images at a certain depth. The average NCC (AvgNCC) at that depth is then obtained by Equation (1). The AvgNCC indicates the similarity between the corrected image chips at a certain depth. Different AvgNCCs are obtained by changing the depth along the line-of-sight. Finally, the depth with the maximum correlation is chosen as the best hypothesis plane.
$$ AvgNCC_{depth} = \frac{\sum_{i=1}^{n} NCC\left(I_{Master}, I_{Slave(i)}\right)}{n} \qquad (1) $$

where $I_{Master}$ and $I_{Slave(i)}$ are the master and slave corrected image chips, $n$ is the number of slave images, and $AvgNCC$ is the average NCC.
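The following minimal sketch, assuming the corrected chips have already been resampled as above, evaluates Equation (1): ncc computes the normalized cross correlation of two chips, and best_depth averages it over the n slave chips at every hypothesised depth and keeps the maximum AvgNCC. Function and variable names are illustrative.

import numpy as np

def ncc(a, b):
    # Normalized cross correlation of two equally sized image chips.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def best_depth(master_chips, slave_chips, depths):
    # master_chips[d]: corrected master chip at depth index d;
    # slave_chips[d]:  list of n corrected slave chips at that depth.
    avg_ncc = [np.mean([ncc(m, s) for s in slaves])
               for m, slaves in zip(master_chips, slave_chips)]
    k = int(np.argmax(avg_ncc))             # maximum AvgNCC = best plane
    return depths[k], avg_ncc[k]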
Figure 4 is an example of multiple image matching. Figure 4(a) shows the 5 original images. Due to the relief displacement of facade structures, these images look different from the different views. The red box indicates the master window for matching. Different depths are used to generate the hypothesis planes and the corrected image chips in object space, as shown in Figure 4(b). The corrected image chips compensate for the tilt displacement of the images. The AvgNCC is then calculated at different depths, as shown in Figure 4(c). In this example, the maximum AvgNCC is located at a depth of -1.2 m, i.e., behind the wall of the LOD 2 building model.
Figure 4. An example of multiple image matching: (a) original images; (b) corrected image chips at different depths; (c) average NCC (correlation) at different depths.

Figure 5. Different matching methods for a line: (a) endpoint matching; (b) line matching; (c) edge matching.
There are three possible ways to match a line. The first performs matching only on the two endpoints of the line; the matching area is shown in Figure 5(a). A 3D line can then be reconstructed from the 3D endpoints. Since multiple image matching is a time-consuming process, endpoint matching saves computation time; however, the endpoints should be well defined and free of occlusion. The second strategy performs matching on the line itself; the matching window is shown in Figure 5(b). Its advantage is that it covers the whole gray-value profile of the line. The last one is edge matching, in which the line is divided into a set of edge points and every point on the edge is matched. Figure 5(c) shows the idea of edge matching.
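A short sketch of edge matching, reusing the per-point depth search described above: match_point is an assumed helper that runs the object-based matching for a single master-image point and returns its 3D coordinates; endpoint matching (Figure 5(a)) is simply the special case n_points = 2.

import numpy as np

def match_line_by_edges(p_start, p_end, n_points, match_point):
    # Divide the 2D image line into a set of edge points ...
    ts = np.linspace(0.0, 1.0, n_points)
    edge_points = [(1.0 - t) * np.asarray(p_start, float)
                   + t * np.asarray(p_end, float) for t in ts]
    # ... and match every point; a 3D line is fitted through the results.
    return [match_point(p) for p in edge_points]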