shadows and occluding objects (e.g. cars, trees, lamp posts) has 
to be reduced, if not completely eliminated. 
The issues discussed in this paper refer to the process of 
extracting 3D line features to complete the required 3D model. 
Details on the 3D reconstruction procedure to obtain the rough 
(topologically structured) model can be found in (Zlatanova &
van den Heuvel, 2001) and (Vermeij & Zlatanova, 2001). 
Details on the topological organisation of the 3D model in a
relational DBMS are given in (Zlatanova, 2001). 
2. THE APPROACH 
Since the UbiCom system aims at serving a walking person, the
focus is mainly on 3D features visible from street
level, i.e. details on facades and on the terrain surface. This
paper focuses on the reconstruction of details on facades. Our 
approach to extract 3D line features is based on two 
assumptions: 1) the rough 3D geometry of the buildings of interest
is available (e.g. Figure 1), and 2) the orientation parameters of
the images are known. The 3D rough model can be obtained 
following different approaches: automatic (Suveg &
Vosselman, 2002) or semi-automatic (Vermeij & Zlatanova,
2001) 3D reconstruction procedures, or by extruding footprints of
buildings from topographic maps (e.g. in ArcView, ESRI). In 
order to meet the decimetre-accuracy requirement of the
UbiCom project, we have manually reconstructed the 3D
facades within the test area using the commercial software
PhotoModeller (Zlatanova & van den Heuvel, 2001). These
facades, through which knowledge of the "depth" of the 3D line
features is introduced, support the 3D line feature extraction.
    
Figure 2: Rough 3D model, i.e. walls represented as rectangles 
The interior and exterior orientation parameters of the images 
have to be available as well. In our case, we use the parameters 
obtained in the process of manual reconstruction, i.e. obtained 
by an integrated least-squares adjustment of all 
photogrammetric measurements in close-range and aerial 
images (Zlatanova & van den Heuvel, 2001). 
The procedure for 3D line extraction can be separated into the 
following general steps: edge detection, projection of edges onto
the rough 3D model and back projection onto the next image,
edge matching, and computation of the end points of the 
matched 3D edges. 
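For orientation, a minimal sketch of how these steps could be chained is given below. The step functions are passed in as parameters and every name is hypothetical; the sketch only fixes the data flow between the four steps, not any actual implementation.

from typing import Callable, Sequence

def extract_3d_lines(images: Sequence,
                     cameras: Sequence,
                     facade_plane,
                     detect_edges: Callable,
                     transfer_edges: Callable,
                     match_edges: Callable,
                     triangulate: Callable):
    """Chain the four steps for one facade and one image pair.

    All step functions are supplied by the caller, so this sketch only
    illustrates how the results of one step feed the next."""
    # 1) edge detection in every image that shows the facade
    edges = [detect_edges(img) for img in images]
    # 2) project the edges of image 1 via the facade plane onto image 2
    projected = transfer_edges(edges[0], cameras[0], cameras[1], facade_plane)
    # 3) match projected and detected edges in image 2
    pairs = match_edges(projected, edges[1])
    # 4) compute the 3D end points of the matched edges
    return [triangulate(e1, e2, cameras[0], cameras[1]) for e1, e2 in pairs]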
Edge detection: The edge detection utilises the line-growing 
algorithm proposed in (Foerstner, 1994), i.e. edges (straight 
lines) are extracted by grouping adjacent pixels with similar 
gradient directions and fitting a line through them. After 
calculating the gradients, the line-growing algorithm selects the 
pixel with the strongest gradient as a starting pixel (the normal 
of the edge through this pixel is determined by the grey value 
gradient). Then, if a pixel is eight-connected to an already
classified pixel and has a gradient perpendicular to the edge,
it is added to the area that describes the line. The
direction and the position of the edge are re-computed using the 
first and the second moments of the pixels in the edge area. The 
process continues until no more pixels can be added to the edge. 
This algorithm is performed on all the images that contain the 
facade of interest. The outlines of the façade (available from the 
3D rough model) are used to restrict the search area to only 
those edges that represent features on the facades. Only edges 
that fall within the area enclosed by the borders of the facade 
are considered for further processing. 
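The line-growing step could be sketched as follows. This is a minimal illustration in the spirit of the grouping-and-fitting scheme described above, not Foerstner's original implementation: the function name, the SciPy-based gradients and all thresholds are assumptions, and the line is fitted once at the end rather than re-computed incrementally as pixels are added.

import numpy as np
from scipy import ndimage

def grow_lines(gray, grad_thresh=20.0, angle_tol_deg=15.0, min_pixels=20):
    """Greedy line-growing edge detection: group 8-connected pixels with
    similar gradient directions and fit a straight line through them."""
    gray = gray.astype(float)
    # Image gradients; the gradient direction is the normal of the edge.
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)

    unused = mag > grad_thresh                    # candidate edge pixels
    tol = np.deg2rad(angle_tol_deg)
    lines = []

    while unused.any():
        # Seed: the unused pixel with the strongest gradient.
        seed = np.unravel_index(np.argmax(mag * unused), mag.shape)
        seed_ang = ang[seed]
        unused[seed] = False
        region, frontier = [seed], [seed]

        # Grow over 8-connected pixels with a similar gradient direction.
        while frontier:
            r, c = frontier.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < gray.shape[0] and 0 <= cc < gray.shape[1]
                            and unused[rr, cc]):
                        diff = np.abs(np.angle(np.exp(1j * (ang[rr, cc] - seed_ang))))
                        if diff < tol:
                            unused[rr, cc] = False
                            region.append((rr, cc))
                            frontier.append((rr, cc))

        if len(region) >= min_pixels:
            # Line fit from first and second moments of the pixel region:
            # centroid plus the dominant eigenvector of the scatter matrix.
            pts = np.array(region, dtype=float)   # (row, col) coordinates
            centroid = pts.mean(axis=0)
            cov = np.cov((pts - centroid).T)
            evals, evecs = np.linalg.eigh(cov)
            direction = evecs[:, np.argmax(evals)]
            t = (pts - centroid) @ direction
            lines.append((centroid + t.min() * direction,
                          centroid + t.max() * direction))

    return lines

The restriction to the facade area then amounts to discarding every detected edge whose end points fall outside the polygon defined by the projected facade outline.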
  
  
Figure 3: Knowledge-based edge projection
Edge projection on sequential images: Next, all the selected 
edges from the first image are projected onto the second image 
by applying an intermediate projection onto the facade in 3D
space. This is to say that the rays passing through the end-points 
of an edge (of image 1) and projection centre 1 intersect
the 3D plane of the facade in two 3D points that give the
position of the edge in 3D space. This edge is back projected
onto the second image, i.e. the rays passing through the 3D
end-points of the edge and projection centre 2 are intersected 
with image plane 2 (see Figure 3). Thus, image 2 already
contains two sets of edges, i.e. projected and detected ones.
In general, the two sets contain different numbers of edges, with
slightly different positions and lengths that can vary
considerably. The systematic shift in position is influenced
by the accuracy of the facade and the quality of the exterior 
orientation of the images, while the length of the detected edges 
depends on the parameters set for the edge detection.
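As an illustration of this projection step, the sketch below transfers the end points of a detected edge from image 1 to image 2 via a ray-plane intersection with the facade. A simple pinhole model (K, R, C) is assumed for the interior and exterior orientation, and all function and parameter names are hypothetical placeholders, not the paper's implementation.

import numpy as np

def pixel_to_ray(x, K, R, C):
    """World-space ray (origin, unit direction) through pixel x = (u, v) for a
    pinhole camera with calibration K, rotation R (world -> camera) and
    projection centre C."""
    d_cam = np.linalg.inv(K) @ np.array([x[0], x[1], 1.0])
    d_world = R.T @ d_cam
    return C, d_world / np.linalg.norm(d_world)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Intersect a ray with the facade plane given by a point and a normal."""
    t = plane_normal @ (plane_point - origin) / (plane_normal @ direction)
    return origin + t * direction

def project(X, K, R, C):
    """Project a 3D point into an image with the same pinhole model."""
    x = K @ (R @ (X - C))
    return x[:2] / x[2]

def transfer_edge(edge_px, cam1, cam2, plane):
    """Transfer a detected 2D edge (two end points) from image 1 to image 2:
    rays from projection centre 1 through the end points are intersected with
    the facade plane, and the resulting 3D points are re-projected into
    image 2.  cam = (K, R, C), plane = (point, normal)."""
    K1, R1, C1 = cam1
    K2, R2, C2 = cam2
    p0, n = plane
    pts3d = [intersect_plane(*pixel_to_ray(x, K1, R1, C1), p0, n)
             for x in edge_px]
    return [project(X, K2, R2, C2) for X in pts3d], pts3d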
Edge matching: To match the projected and detected edges, we 
apply four constraints. The first one is related to the distance 
between projected and detected edges. A search algorithm looks 
for matching candidates within an area of interest (buffer) 
defined as a rectangle around the projected edge. The second 
constraint takes into account the number of endpoints (one or 
two) of a detected edge that are located within the buffer. The 
detected edges from the second image that have at least one 
endpoint falling in the buffer are considered as candidates. The 
third criterion filters the candidates with respect to the angle 
between detected and projected edges. The fourth and last 
constraint refers to the length of the two matched edges, i.e. the 
difference between the two lengths should not be greater than a 
reasonable threshold. Among all the candidates, the edge that 
matches best is selected. Note that an edge from image 1 may
be matched with more than one edge from image 2. 
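A minimal sketch of such a matching test is given below. The rectangular buffer is approximated by a point-to-segment distance, the thresholds are illustrative placeholders rather than values from the paper, and, unlike the procedure described above (which may keep several matches per edge), the sketch simply retains the best-scoring candidate.

import numpy as np

def edge_angle(e):
    """Orientation of a 2D edge (pair of end points), folded into [0, pi)."""
    d = np.asarray(e[1], float) - np.asarray(e[0], float)
    return np.arctan2(d[1], d[0]) % np.pi

def point_to_segment(p, a, b):
    """Distance from point p to the segment a-b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def match_edges(projected, detected, buffer_px=10.0,
                max_angle_deg=10.0, max_len_ratio=0.5):
    """Match projected edges to detected edges using the four constraints."""
    matches = []
    for proj in projected:
        len_p = np.linalg.norm(np.asarray(proj[1], float) - np.asarray(proj[0], float))
        best, best_score = None, None
        for det in detected:
            # 1) + 2) at least one end point of the detected edge must lie
            #    within the buffer around the projected edge (approximated
            #    here by a point-to-segment distance test).
            d_min = min(point_to_segment(p, proj[0], proj[1]) for p in det)
            if d_min > buffer_px:
                continue
            # 3) the two edges must be nearly parallel.
            da = abs(edge_angle(proj) - edge_angle(det))
            da = min(da, np.pi - da)
            if da > np.deg2rad(max_angle_deg):
                continue
            # 4) the lengths must not differ by more than a threshold.
            len_d = np.linalg.norm(np.asarray(det[1], float) - np.asarray(det[0], float))
            if abs(len_d - len_p) > max_len_ratio * len_p:
                continue
            # Keep only the best-scoring candidate in this sketch.
            score = d_min / buffer_px + da / np.deg2rad(max_angle_deg)
            if best_score is None or score < best_score:
                best, best_score = det, score
        if best is not None:
            matches.append((proj, best))
    return matches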