edges should coincide well with a strong image line. However, in practice there is often a small offset and an angle between a model edge and its corresponding image line. The optimal angle and distance threshold values depend on the quality of the exterior orientation parameters and the focal length.
2. A best match is chosen from all candidates according to either the collinearity of the candidates or the candidate's length (see the purple lines in Figure 5); a minimal sketch of both steps is given after this list. It is common that a strong line is split into multiple parts by occlusions or shadows. If a number of Hough line segments belong to the same line, we set this line as the best match. Otherwise, the longest candidate is chosen as the best match.
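The two steps above can be illustrated by the following minimal sketch. It assumes each model edge or Hough line is stored as a pair of 2D endpoints; the threshold values and helper names (ANGLE_MAX_DEG, DIST_MAX_PX, _group_collinear, ...) are hypothetical and only stand in for the angle/distance filtering and the collinearity/length-based selection described above.

```python
import numpy as np

# Hypothetical thresholds; in practice they depend on the quality of the
# exterior orientation parameters and the focal length (step 1).
ANGLE_MAX_DEG = 5.0
DIST_MAX_PX = 10.0

def _angle_deg(seg_a, seg_b):
    """Acute angle in degrees between two 2D segments (pairs of endpoints)."""
    da = np.asarray(seg_a[1], float) - np.asarray(seg_a[0], float)
    db = np.asarray(seg_b[1], float) - np.asarray(seg_b[0], float)
    c = abs(da @ db) / (np.linalg.norm(da) * np.linalg.norm(db))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

def _dist_to_line(p, seg):
    """Perpendicular distance from point p to the infinite line through seg."""
    a, b = np.asarray(seg[0], float), np.asarray(seg[1], float)
    d, v = b - a, np.asarray(p, float) - a
    return abs(d[0] * v[1] - d[1] * v[0]) / np.linalg.norm(d)

def _length(seg):
    return np.linalg.norm(np.asarray(seg[1], float) - np.asarray(seg[0], float))

def _group_collinear(segments, ang_tol=2.0, dist_tol=3.0):
    """Greedily group segments lying on (roughly) the same infinite line."""
    groups = []
    for s in segments:
        for g in groups:
            if _angle_deg(s, g[0]) < ang_tol and _dist_to_line(s[0], g[0]) < dist_tol:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

def best_match(model_edge, hough_lines):
    """Return the best matching Hough line for one projected model edge."""
    mid = 0.5 * (np.asarray(model_edge[0], float) + np.asarray(model_edge[1], float))
    # Step 1: keep candidates that are nearly parallel and close to the edge.
    candidates = [h for h in hough_lines
                  if _angle_deg(model_edge, h) < ANGLE_MAX_DEG
                  and _dist_to_line(mid, h) < DIST_MAX_PX]
    if not candidates:
        return None
    # Step 2: several collinear candidates indicate one strong line split by
    # occlusion or shadow; prefer that line, otherwise take the longest candidate.
    largest = max(_group_collinear(candidates), key=len)
    if len(largest) > 1:
        # Span the split line by its two farthest-apart endpoints.
        pts = [np.asarray(p, float) for s in largest for p in s]
        i, j = max(((i, j) for i in range(len(pts)) for j in range(i + 1, len(pts))),
                   key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
        return (pts[i], pts[j])
    return max(candidates, key=_length)
```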
Figure 5: Matching model edges with image lines (blue: model edges' projection in the image; red: Hough lines; green: candidates; purple: the best matches)
No spatial index is established in the image space to speed up the comparison, because the search space is already localized to a single building facade, which includes only dozens of edges and Hough lines.
A limitation of this matching method is that it can hardly determine the correct corresponding edge if too many similar line features lie within the search range. Simply comparing the geometric properties of position, direction and length is not sufficient in this case. For example, the eaves in Figure 5 result in many significant lines, which are all parallel and close to the wall's upper boundary edges. These eave lines can be distinguished if the eave is also reconstructed and included in the facade model, but ambiguity caused by pure color patterns remains difficult to resolve.
5.3 The refinement strategy 
After matching, most model edges should be associated with a best-matched image line. These model edges are updated by projecting them onto their best-matched image lines. Some model edges, however, do not match any image line. If no change is made to such an edge while its previous or next edge is changed, strange shapes like sharp corners and self-intersections may be generated. Therefore, interpolations of the angle and distance changes from the previous and next edges are applied to the edges without matched image lines. With these refinement strategies, an original model is updated to be consistent with the geometry extracted from the images, and the model's geometric validity and general shape are also maintained.
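A minimal sketch of this update step is given below, assuming each projected edge is stored as a pair of 2D endpoints. Snapping a matched edge onto its image line is an orthogonal projection of its endpoints; the neighbour handling in refine_unmatched_edge is only one possible reading of the interpolation described above, not the authors' exact formulation.

```python
import numpy as np

def _project_to_line(p, line):
    """Orthogonal projection of 2D point p onto the infinite line through `line`."""
    a, b = np.asarray(line[0], float), np.asarray(line[1], float)
    d = b - a
    t = (np.asarray(p, float) - a) @ d / (d @ d)
    return a + t * d

def refine_matched_edge(edge, matched_line):
    """Snap a model edge onto its best-matched image line."""
    return (_project_to_line(edge[0], matched_line),
            _project_to_line(edge[1], matched_line))

def _angle(edge):
    d = np.asarray(edge[1], float) - np.asarray(edge[0], float)
    return np.arctan2(d[1], d[0])

def refine_unmatched_edge(edge, prev_old, prev_new, next_old, next_new):
    """Rotate and shift an unmatched edge by the mean angle and position change
    observed on its previous and next edges, so that sharp corners and
    self-intersections are not introduced between updated and untouched edges."""
    dang = 0.5 * ((_angle(prev_new) - _angle(prev_old)) +
                  (_angle(next_new) - _angle(next_old)))
    dpos = 0.5 * ((np.mean(np.asarray(prev_new, float), axis=0) -
                   np.mean(np.asarray(prev_old, float), axis=0)) +
                  (np.mean(np.asarray(next_new, float), axis=0) -
                   np.mean(np.asarray(next_old, float), axis=0)))
    c, s = np.cos(dang), np.sin(dang)
    R = np.array([[c, -s], [s, c]])
    mid = np.mean(np.asarray(edge, float), axis=0)
    return tuple(R @ (np.asarray(p, float) - mid) + mid + dpos for p in edge)
```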
Finally, the refined model edges in image space need to be transferred back to model space. Because the model edges are only moved on their original 3D planes, which are known, the collinearity equations are used again to calculate the new 3D positions of all the modified model vertices.
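This back-projection can be written as a ray-plane intersection derived from the collinearity equations. The sketch below assumes a pinhole model with calibration matrix K and an exterior orientation given by a rotation matrix R (object to camera) and projection centre C; the facade plane is given by a point and a normal taken from the original model. These names are illustrative, not the paper's notation.

```python
import numpy as np

def refined_vertex_to_3d(uv, K, R, C, plane_point, plane_normal):
    """Back-project a refined image vertex onto the edge's known 3D plane.

    uv           : (u, v) pixel coordinates of the refined vertex
    K            : 3x3 camera calibration matrix
    R, C         : rotation (object -> camera) and projection centre
    plane_point, plane_normal : the original 3D plane the edge moves on
    """
    # Viewing ray direction in object space (collinearity equations in vector form).
    d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Intersect X = C + t * d with the plane (X - plane_point) . n = 0.
    n = np.asarray(plane_normal, float)
    t = ((np.asarray(plane_point, float) - np.asarray(C, float)) @ n) / (d @ n)
    return np.asarray(C, float) + t * d
```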
6 TEST CASES 
In this section, three data sets are used to test the presented refinement method. The building models are produced with the reconstruction approach introduced in Section 3. All the images are originally provided as Cycloramas. The central perspective conversion and the exterior orientation calculation follow the processes explained in Section 4.
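For readers unfamiliar with panoramic imagery, the sketch below shows a generic equirectangular-to-perspective rendering; it is only an illustration under that assumption, not necessarily the exact procedure of Section 4. Each output pixel is turned into a viewing ray, rotated by a chosen yaw and pitch, and looked up in the panorama by its longitude and latitude.

```python
import numpy as np

def pano_to_perspective(pano, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Render a central-perspective view from an equirectangular panorama."""
    ph, pw = pano.shape[:2]
    f = 0.5 * out_w / np.tan(0.5 * np.radians(fov_deg))   # focal length in pixels
    u, v = np.meshgrid(np.arange(out_w) - 0.5 * out_w,
                       np.arange(out_h) - 0.5 * out_h)
    rays = np.stack([u, v, np.full_like(u, f)], axis=-1)  # one ray per output pixel
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T                              # rotate rays to the chosen view
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(rays[..., 1] / np.linalg.norm(rays, axis=-1))
    px = ((lon / (2 * np.pi) + 0.5) * (pw - 1)).astype(int)
    py = ((lat / np.pi + 0.5) * (ph - 1)).astype(int)
    return pano[py, px]                                    # nearest-neighbour lookup
```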
6.1 The restaurant house 
The inconsistencies between the model edges and image lines in Figure 6 are mainly due to the inaccurate exterior orientation of the image. It is difficult to pick an image point accurately by manual operation, and picking the corresponding point in a laser point cloud is also a difficult job. Automated texturing of building facade models is desired in the context of our research, and the quality of the exterior orientation is a key issue for the texturing result. Even a minor inaccuracy in the exterior orientation parameters can lead to a poor texture result, as shown in Figure 7(a). Applying our refinement method, several model edges are linked with their matched image lines (see Figure 6(b)) and are updated accordingly. The texture result is significantly improved as shown in Figure 7(b), with the sky's background color removed. However, the middle top part of the facade model is still not refined, because this image part is too blurred to yield a Hough line.
Figure 6: Matching model edges with image lines for refining the 
restaurant house’s model 
6.2 The town hall 
The upper boundary of the town hall in Figure 8(a) contains a lot of tiny details, which are well recorded by laser scanning and modeled as sawtooth edges in the building facade model. Instead of adjusting the outline generation parameters in the reconstruction stage, we can also use the presented image-based refinement to smooth the model outline. Figure 8(b) shows the matching step. The model's upper edges are successfully matched to the strong lines, which actually come from the eave. In this example
Thank you.