panoramic sphere part to an assumed plane. First, we construct two lines by connecting the image acquisition point (the perspective center) with the leftmost and rightmost model vertices. The angles between these two lines and the north direction define the longitude boundaries of the region of interest (ROI). In practice we widen the ROI by 100 pixels to both the left and the right, because the image acquisition positions provided by GPS are not fully reliable. The principal point is set on the sphere equator, at the middle longitude between the two boundaries. Assuming the perspective center coincides in both perspectives, the pixels inside the ROI are converted from panoramic perspective to central perspective according to the following equations:
\alpha = \frac{x_p - x_0}{r}    (1)

\beta = \frac{y_p - y_0}{r}    (2)

\tan \alpha = \frac{x_c - x_0}{f}    (3)

\tan \beta = \frac{(y_c - y_0) \cos \alpha}{f}    (4)
where (x_p, y_p) is the pixel coordinate in the panoramic perspective; (x_c, y_c) is the pixel coordinate in the central perspective; (x_0, y_0) is the principal point; r is the angle resolution; \alpha and \beta represent the longitude and latitude of the pixel on the panoramic sphere; f is the distance from the panoramic sphere center to the assumed plane, which can also be seen as the focal length of the converted central perspective image. With equations (1) to (4) the unique relation between (x_p, y_p) and (x_c, y_c) can be determined.
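The mapping is direct to implement. Below is a minimal sketch in Python, assuming r is given in pixels per radian and that both images share the principal point (x_0, y_0); neither assumption is stated explicitly in the text. In practice the mapping would be evaluated per output pixel (or inverted) to resample the ROI.

    import numpy as np

    def panoramic_to_central(xp, yp, x0, y0, r, f):
        # Equations (1) and (2): longitude and latitude on the panoramic sphere
        alpha = (xp - x0) / r
        beta = (yp - y0) / r
        # Equations (3) and (4), solved for the central-perspective pixel
        xc = x0 + f * np.tan(alpha)
        yc = y0 + f * np.tan(beta) / np.cos(alpha)
        return xc, yc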
4.2 Spatial resection

In order to obtain a unique solution for the six unknown exterior orientation parameters, observations of at least three image control points must be available, forming six collinearity equations. Figure 3 illustrates the interface for selecting tie points from a laser point cloud and an image. In this implementation at least four tie pairs must be selected, with one pair for error checking. If more than four pairs are selected, a least squares adjustment is performed to obtain better results.

Figure 3: Selecting tie points for spatial resection
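The paper adjusts the collinearity equations directly; as an illustrative stand-in, the same exterior orientation can be estimated with OpenCV's iterative PnP solver, which likewise fits all tie pairs in a least squares sense. All coordinates and the interior orientation below are placeholder values:

    import cv2
    import numpy as np

    # Five tie pairs: 3D points picked from the laser point cloud and
    # their observed image pixels (placeholder values).
    object_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                           [10.0, 0.0, 8.0], [0.0, 0.0, 8.0],
                           [5.0, 2.0, 4.0]])
    image_pts = np.array([[102.0, 410.0], [615.0, 402.0],
                          [608.0, 88.0], [110.0, 95.0],
                          [355.0, 250.0]])

    f, cx, cy = 800.0, 320.0, 240.0  # assumed interior orientation
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])

    # Iterative least squares pose estimation over all correspondences
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the exterior orientation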
5 MODEL REFINEMENT

5.1 Extraction of significant lines from images

The Canny edge detector (Canny, 1986) is used for initial line extraction (see Figure 4(a) and Figure 4(b)). Here two threshold parameters must be specified for edge linking and for finding initial segments of strong edges. Thresholds set too high can miss important information; thresholds set too low will falsely identify irrelevant information as important. It is difficult to give a generic threshold that works well on all images. Therefore, in addition to the conventional Canny algorithm, we apply a histogram analysis to the image gradients in order to specify the threshold values adaptively, as sketched below. However, factors such as illumination, material, and occlusions still result in many irrelevant edges. On the other hand, some desired edges may not be extracted due to the nature of the images: for example, the outline of a wall whose color is very similar to the surrounding environment will not be detected, and outlines inside shadow areas can hardly be extracted either.
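The exact histogram rule is not given in the text; a common variant, shown here as an assumption, derives the two hysteresis thresholds from percentiles of the gradient magnitude histogram (low_pct and high_pct are illustrative parameters):

    import cv2
    import numpy as np

    def adaptive_canny(gray, low_pct=70.0, high_pct=90.0):
        # Image gradients (Sobel), as in the standard Canny pipeline
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy)
        # Derive the hysteresis thresholds from the gradient histogram
        low = float(np.percentile(mag, low_pct))
        high = float(np.percentile(mag, high_pct))
        return cv2.Canny(gray, low, high)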
Strong line features are further extracted from the Canny edges by the Hough transform (see Figure 4(c)). Because the number of edges resulting from the previous step is unpredictable, many irrelevant Hough line segments may also be generated. To minimize the number of these noise lines, instead of adjusting the thresholds of the Hough transform, we sort all Hough line segments by their length and keep only a certain number of the longest ones (see the sketch after Figure 4). This is based on the assumption that building outlines are more or less the most significant edges in an image. The limitations of this assumption were anticipated before applying it in practice: for example, large and vivid patterns on a wall's surface can result in more significant line features than the wall edges.
Figure 4: Extracting significant lines from an image: (a) raw image, (b) Canny edges, (c) Hough lines
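Length-based filtering is straightforward; a sketch using OpenCV's probabilistic Hough transform, where keep and the Hough parameters are assumed values not taken from the paper:

    import cv2
    import numpy as np

    def longest_hough_lines(edges, keep=30):
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=50, minLineLength=40, maxLineGap=5)
        if lines is None:
            return []
        segments = [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2)
        # Sort by segment length, longest first, and keep the top `keep`
        segments.sort(key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]),
                      reverse=True)
        return segments[:keep]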
5.2 Matching model edges with image lines

To match model edges with image lines, both must be located in the same space: either the 3D model space or the 2D image space. We have chosen the latter, because projecting objects from 3D to 2D is much easier than the other way around. With the exterior orientation parameters calculated by spatial resection and the focal length, model edges can be projected into image space according to the collinearity equations (see the blue lines in Figure 5), as sketched below.
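For reference, one model edge can be projected with the exterior orientation (R, tvec) from the resection sketch above; the sign convention is the computer vision form of the collinearity equations and is an assumption, not necessarily the paper's:

    import numpy as np

    def project_edge(P1, P2, R, tvec, f, cx, cy):
        # Project both 3D endpoints through the collinearity equations
        def project(P):
            Xc = R @ P + tvec.ravel()  # object -> camera coordinates
            return (cx + f * Xc[0] / Xc[2], cy + f * Xc[1] / Xc[2])
        (x1, y1), (x2, y2) = project(P1), project(P2)
        return (x1, y1, x2, y2)  # 2D segment in image space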
Assuming a relatively accurate exterior orientation and focal length are available, the best matching image Hough line for a model edge is determined in two stages:
1. Candidates for the best matching image line are filtered by their parallelism and distance with respect to the model edge (see the green lines in Figure 5, and the sketch below). In other words, the angle between a candidate and the model edge should be smaller than a threshold (5 degrees, for example), and their distance should also be smaller than a threshold (half a meter, for example). Note that the actual distance threshold is in pixels, "projected" from a 3D distance on the wall plane. If the exterior orientation and focal length are perfect, most model
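A sketch of this stage-1 test; max_dist_px stands in for the half-meter threshold after projection to pixels (that conversion is omitted here), and both thresholds are the example values from the text:

    import numpy as np

    def line_angle(seg):
        # Orientation of a 2D segment (x1, y1, x2, y2) in degrees, in [0, 180)
        x1, y1, x2, y2 = seg
        return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

    def point_line_distance(pt, seg):
        # Perpendicular distance from a point to the infinite line through seg
        x1, y1, x2, y2 = seg
        n = np.hypot(x2 - x1, y2 - y1)
        return abs((x2 - x1) * (y1 - pt[1]) - (x1 - pt[0]) * (y2 - y1)) / n

    def candidate_lines(model_edge, segments, max_angle=5.0, max_dist_px=25.0):
        ref = line_angle(model_edge)
        candidates = []
        for seg in segments:
            d = abs(line_angle(seg) - ref)
            d = min(d, 180.0 - d)  # wrap the orientation difference
            mid = ((seg[0] + seg[2]) / 2.0, (seg[1] + seg[3]) / 2.0)
            if d <= max_angle and point_line_distance(mid, model_edge) <= max_dist_px:
                candidates.append(seg)
        return candidates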