deviation. Differences in the principal point location are smaller than 2.0 times the standard deviation.
3. IMAGE ORIENTATION 
3.1. The method for image orientation 
The method for image orientation aims at full automation and is 
described in (Heuvel, 2002). The camera is assumed to be 
calibrated, in this case by the method outlined in section 2. The 
method relies on automated straight line extraction and 
vanishing point detection, and results in a model coordinate 
system that is aligned with the building. The building has to 
fulfil the following requirements for the method to be 
successful: 
- Parallel and perpendicular straight object edges
- Coplanarity of the edges in the façades
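The core of the approach is the estimation of vanishing points from the extracted straight lines. As a minimal sketch (not the implementation of Heuvel, 2002), the vanishing point of one family of parallel object edges can be estimated as the least-squares intersection of the corresponding image line segments; the function names and example coordinates below are illustrative only.

import numpy as np

def line_through(p, q):
    # Homogeneous representation of the image line through pixels p and q
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    # Least-squares vanishing point v of a family of segments, minimising
    # sum_i (l_i . v)^2 with |v| = 1, i.e. the right singular vector of the
    # stacked line matrix with the smallest singular value.
    L = np.array([line_through(p, q) for p, q in segments])
    L = L / np.linalg.norm(L, axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]
    return v / v[2] if abs(v[2]) > 1e-12 else v  # inhomogeneous if finite

# Hypothetical segments of two roughly parallel horizontal building edges
print(vanishing_point([((10, 100), (200, 110)), ((15, 300), (210, 308))]))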
Successful orientation can require a few manual measurements to reliably resolve ambiguities inherent in the vanishing point detection and in the repeating and symmetric structures present in most buildings. Furthermore, the manually measured points reduce the computational burden considerably and can be used to guarantee the required overlap of at least one point between consecutive models, which is needed to transfer the scale from model to model.
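How a single shared point can transfer scale between consecutive models is illustrated by the small sketch below. It assumes, as in classical strip triangulation, that the two models share the projection centre of their common image; the coordinates used here are hypothetical.

import numpy as np

def scale_transfer(P_a, P_b, O_a, O_b):
    # Scale factor that brings model B to the scale of model A: the ratio of
    # the tie point's distances to the shared projection centre in each model.
    P_a, P_b, O_a, O_b = map(np.asarray, (P_a, P_b, O_a, O_b))
    return np.linalg.norm(P_a - O_a) / np.linalg.norm(P_b - O_b)

# Hypothetical coordinates of the same tie point in the two models
s = scale_transfer(P_a=[4.0, 2.0, 1.0], P_b=[8.0, 4.0, 2.0],
                   O_a=[0.0, 0.0, 0.0], O_b=[0.0, 0.0, 0.0])
print(s)  # 0.5: coordinates of model B are multiplied by 0.5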
The semi-automatic method for relative orientation outlined above was successfully applied to four images of the CIPA reference data set (Figure 1). Figure 4 shows two views of the resulting approximate reconstruction. The relative scale of consecutive models was determined using a manually measured point on each corner of the building. In effect, this method results in an approximate and partial reconstruction of the building. The fully automatic relative orientation of two of the four images is described in the next section.
3.2. Image orientation using the CIPA data set 
Two characteristics of the CIPA data set images prevent the method for automated relative orientation from being successful in all cases. The first is the considerable difference in image scale between images, caused by the obliqueness of the selected images relative to the façades as well as by the large differences in object-to-image distance (see image 16 in Figure 1). Secondly, the repeating structures, in the form of the many identical windows, make the detection of correspondences ambiguous. To some extent it is possible to adapt the parameters to these characteristics. In the example presented here, straight lines were extracted for images 3 and 6 with the minimum line length set to 40 pixels, and the maximum distance between two lines for deciding on their intersection set to 10 pixels (Figure 5 on the next page). When these parameters were set to 30 and 5 pixels respectively, a correct solution could only be found with two additional manual point measurements. The reason lies in the symmetry of the building: the long façades (on the left in image 6 and on the right in image 3) are erroneously matched when many lines in these façades are available. A longer minimum line length (40 instead of 30 pixels) avoids this.
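The two thresholds discussed above (minimum line length and maximum intersection distance) can be illustrated with a short sketch. This is not the authors' code; the brute-force pairing and the function names are illustrative only.

import itertools
import numpy as np

def seg_length(seg):
    (x1, y1), (x2, y2) = seg
    return np.hypot(x2 - x1, y2 - y1)

def line_intersection(seg_a, seg_b):
    # Intersection of the infinite lines through the two segments (or None)
    la = np.cross([*seg_a[0], 1.0], [*seg_a[1], 1.0])
    lb = np.cross([*seg_b[0], 1.0], [*seg_b[1], 1.0])
    p = np.cross(la, lb)
    return None if abs(p[2]) < 1e-12 else p[:2] / p[2]

def dist_to_segment(p, seg):
    a, b = np.asarray(seg[0], float), np.asarray(seg[1], float)
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * (b - a)))

def create_points(segments, min_length=40.0, max_dist=10.0):
    # Keep only segments of at least min_length pixels, then create a point
    # wherever two segments intersect within max_dist pixels of both of them.
    kept = [s for s in segments if seg_length(s) >= min_length]
    points = []
    for sa, sb in itertools.combinations(kept, 2):
        p = line_intersection(sa, sb)
        if p is not None and dist_to_segment(p, sa) <= max_dist \
                and dist_to_segment(p, sb) <= max_dist:
            points.append(p)
    return points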
Table 4 presents some statistics of the experiment. The table demonstrates that the computational burden inherent in the method is reduced to manageable proportions. The number of possible correspondences is reduced considerably (from 204 × 214 = 43,656 to 2378) by checking characteristics of the intersections of the image lines, such as the orientation of the lines in object space, which is available from the vanishing point detection. The correspondence hypotheses are clustered based on a statistical coplanarity test for each combination of two correspondences. Not all combinations are tested: two correspondences with different façade orientations are not combined (# potential tests in Table 4). Furthermore, a number of tests can be excluded because of an unlikely relative position of the two images; for instance, the angle between the relative position vector and the vertical is required to be close to 90 degrees (threshold set to 10 degrees). The clustering results in 3706 clusters of correspondences. For each cluster an overall adjustment is set up. The correspondence with the largest rejected statistical test is removed from its cluster and the adjustment is repeated.
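The testing strategy of the last paragraph can be summarised in a schematic sketch. The function overall_adjustment, the cluster data structure, and plausible_baseline are hypothetical stand-ins for the statistical machinery of the method.

import numpy as np

def plausible_baseline(base_direction, max_tilt_deg=10.0):
    # Reject relative positions whose baseline deviates by more than
    # max_tilt_deg from the horizontal, i.e. from 90 degrees to the vertical.
    b = np.asarray(base_direction, float)
    b = b / np.linalg.norm(b)
    angle_to_vertical = np.degrees(np.arccos(abs(np.dot(b, [0.0, 0.0, 1.0]))))
    return abs(angle_to_vertical - 90.0) <= max_tilt_deg

def clean_cluster(cluster, overall_adjustment):
    # Repeat the overall adjustment, each time removing the correspondence
    # with the largest rejected test statistic, until all tests are accepted.
    while cluster:
        ratios = overall_adjustment(cluster)  # test value / critical value
        worst = int(np.argmax(ratios))
        if ratios[worst] <= 1.0:
            break
        cluster = cluster[:worst] + cluster[worst + 1:]
    return cluster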
Figure 4: Two views of the approximate reconstruction from the four images using the semi-automatic method for orientation
Parameter                              Value
Minimum line length                    40 pixels
Maximum distance for point creation    10 pixels
# created points image 1 / image 2     204 / 214
# correspondence hypotheses            2378
# potential tests                      1,182,214
# computed tests                       213,991 (18.1%)
# accepted tests                       26,063 (3.2%)
# clusters                             3706
  Maximum # correspondences            27
# clusters after testing               97
  Maximum # correspondences            22
Test (ratio with critical value)       3.65
Table 4: Statistics of the correct solution for the automatic relative orientation of images 3 and 6.