Full text: XVIIth ISPRS Congress (Part B3)

  
Figure 2c. In order to find all the humps, we segment the 
gray-value image to form contour lines. In this step, the 
interval between adjacent contours is a key parameter: to 
detect all humps, the interval must always be smaller than 
the lowest hump height in a given scene. In the contour 
image, humps are characterized by closed boundaries 
(see Figure 3b). 
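The contour-segmentation step above can be sketched as follows. This is a minimal illustration in Python on a small synthetic DEM; the `contour_bands` helper, the grid, and the 1 m interval are hypothetical choices for illustration, not details from the paper:

```python
import numpy as np

def contour_bands(dem, interval):
    """Quantize a gray-value DEM into elevation bands; the boundaries
    between bands act as contour lines at the given interval."""
    # Each cell gets the index of the contour band it falls into.
    return np.floor((dem - dem.min()) / interval).astype(int)

# A hypothetical flat terrain with a 3 m hump (e.g. a building).
dem = np.zeros((8, 8))
dem[2:6, 2:6] = 3.0

# The interval must stay below the lowest hump height (3 m here);
# otherwise a low hump produces no closed contour of its own.
bands = contour_bands(dem, interval=1.0)
```

With this interval the hump cells land in a higher band than the surrounding terrain, so the hump appears as a closed band boundary in `bands`.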
3.2.4 Eliminating non-hump boundaries and redundant 
hump boundaries 
In Figure 3b some non-hump boundaries as well as 
redundant boundaries can be seen. To eliminate the non- 
hump boundaries, two generic properties are used. 
Closure property: a hump boundary is always closed. 
Length property: a hump boundary should be neither too 
short nor too long. Redundant boundaries are eliminated 
by choosing the outermost boundary. 
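The two generic properties can be sketched as a simple filter. The representation of a boundary as a list of points and the `filter_hump_boundaries` name are assumptions for illustration:

```python
def filter_hump_boundaries(boundaries, min_len, max_len):
    """Keep only boundaries that satisfy the two generic properties:
    closure (first point equals last point) and plausible length."""
    kept = []
    for b in boundaries:
        closed = b[0] == b[-1]                  # closure property
        ok_len = min_len <= len(b) <= max_len   # length property
        if closed and ok_len:
            kept.append(b)
    return kept

# Hypothetical boundaries: one closed and plausible, one open
# (a non-hump boundary), one closed but too short.
good = [(0, 0), (0, 5), (5, 5), (5, 0), (0, 0)]
open_b = [(0, 0), (0, 5), (5, 5)]
tiny = [(1, 1), (1, 2), (1, 1)]
humps = filter_hump_boundaries([good, open_b, tiny], min_len=4, max_len=100)
```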
3.2.5 Eliminating blunders 
After all bright clusters in a gray-value DEM 
image are determined, they must be examined for 
blunders, such as high peaks caused by wrong matching, 
and bunkers. Shape operators may be useful to detect 
some blunders. A simple example of a shape operator is 
the ratio of the length and width of a hump. For more 
complicated cases, central moments may be used [Bian, 
1988]. For instance, the second- and third-order central 
moments describe the shape of an object and its 
symmetry. For bunkers, an elevation operator may be 
implemented to check all detected humps: if the gray 
value (elevation) inside a hump is lower than its 
surroundings, then it is not a hump, but a bunker. After 
all blunders have been eliminated (Figure 3c), the 
remaining humps are stored, together with shape 
information, such as average height, length, width, and 
volume. 
3.3 Grouping of 3D edges 
All 3D edges are now grouped into humps based 
on their locations, under the condition that all edges in 
one group belong to one hump. The number of groups is 
identical to the number of humps. Edges which do not 
belong to any hump are grouped into an extra class: 
topographic surface edges. 
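A minimal sketch of this grouping, assuming humps are represented by axis-aligned bounding boxes and edges by lists of 3D points; both representations and the centroid test are assumptions for illustration:

```python
def group_edges(edges, hump_boxes):
    """Assign each 3D edge to the hump whose bounding box contains the
    edge's centroid; unassigned edges go into the extra class of
    topographic surface edges."""
    groups = {i: [] for i in range(len(hump_boxes))}
    surface = []  # extra class: topographic surface edges
    for edge in edges:
        cx = sum(p[0] for p in edge) / len(edge)
        cy = sum(p[1] for p in edge) / len(edge)
        for i, (xmin, ymin, xmax, ymax) in enumerate(hump_boxes):
            if xmin <= cx <= xmax and ymin <= cy <= ymax:
                groups[i].append(edge)
                break
        else:
            surface.append(edge)
    return groups, surface

# One hypothetical hump box; one edge inside it, one far away.
hump_boxes = [(0.0, 0.0, 4.0, 4.0)]
roof_edge = [(1.0, 1.0, 8.0), (2.0, 2.0, 8.0)]
far_edge = [(10.0, 10.0, 0.0), (11.0, 11.0, 0.0)]
groups, surface = group_edges([roof_edge, far_edge], hump_boxes)
```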
3.4 Segmentation and Classification of 3D edges 
In this step hump edges are segmented into 
horizontal and vertical edges, and further horizontal 
edges are classified into edges on the topographic surface 
or above it. 
3.4.1 Classifying 3D edges into horizontal and vertical 
edges 
In 3D space, an edge can be a 3D curve. For 
such an edge, some segments may be horizontal while 
others are vertical. Horizontal edges are composed of 
horizontal edge segments, and vertical edges of vertical 
edge segments. To obtain the segments, every point of a 
3D edge is classified as a horizontal or a vertical point 
based on an angle defined by the following formula: 
angle = arctan((z_{i+1} - z_i) / d_xy) 
where z_i and z_{i+1} are the elevation values of the two 
adjacent points p_i and p_{i+1}, and d_xy is the distance 
between the two points in the horizontal plane. If the angle 
is greater than a threshold, the point p_i is classified as 
vertical. After all points of an edge have been classified, 
horizontal and vertical edges are generated by simply 
connecting adjacent points of the same class. 
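The point classification described above can be sketched as follows. The 45-degree threshold and the `classify_points` helper are hypothetical choices; the paper does not specify a threshold value:

```python
import math

def classify_points(edge, threshold_deg=45.0):
    """Label each edge point 'horizontal' or 'vertical' from the slope
    angle between it and the next point along the edge."""
    labels = []
    for (x1, y1, z1), (x2, y2, z2) in zip(edge, edge[1:]):
        d_xy = math.hypot(x2 - x1, y2 - y1)          # horizontal distance
        angle = math.degrees(math.atan2(abs(z2 - z1), d_xy))
        labels.append('vertical' if angle > threshold_deg else 'horizontal')
    return labels

# A hypothetical edge: a flat roof run followed by a vertical drop.
edge = [(0, 0, 10), (1, 0, 10), (2, 0, 10), (2, 0, 0)]
labels = classify_points(edge)
```

Connecting adjacent points with the same label then yields the horizontal and vertical edge segments.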
3.4.2 Classifying horizontal edges belonging to the 
topographic surface 
To classify the horizontal edges of a hump as edges on 
the topographic surface or above it, it is first necessary 
to find the minimum elevation of the edge points of the 
hump. Once the minimum elevation is found, each 
horizontal edge is classified as on the topographic surface 
or above it according to its average elevation. 
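A minimal sketch of this classification, assuming a small elevation tolerance around the minimum; the tolerance value and helper name are assumptions, not from the paper:

```python
def classify_horizontal_edges(edges, tolerance=0.5):
    """Split a hump's horizontal edges into on-surface / above-surface
    by comparing each edge's mean elevation with the hump's minimum
    edge-point elevation."""
    z_min = min(z for edge in edges for (_, _, z) in edge)
    on, above = [], []
    for edge in edges:
        z_mean = sum(z for (_, _, z) in edge) / len(edge)
        (on if z_mean <= z_min + tolerance else above).append(edge)
    return on, above

# Hypothetical hump: a ground-level footprint edge and a roof edge.
ground = [(0, 0, 100.0), (5, 0, 100.2)]
roof = [(0, 0, 110.0), (5, 0, 110.0)]
on, above = classify_horizontal_edges([ground, roof])
```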
4. EXPERIMENTAL RESULTS 
We tested our approach on several stereo pairs of 
urban area image patches. 
4.1 Source Data 
The image patches used in the experiments were 
selected from aerial images (model 193/195) of The Ohio 
State University campus, a good example of a typical 
urban scene. The scale of the photographs from which 
the digital images were digitized is about 1:4000. The 
experiment was performed on images with a 2k x 2k 
resolution; each pixel in the images represents a 
44 cm x 44 cm square on the ground. For the experiment, 
two image patches of size 512 x 512 were selected. 
Figure 2a shows the two image patches used in the 
experiment. The matched edges are shown in Figure 2b, 
and a DEM surface generated from the matched edges is 
shown in Figure 2c. The two figures in Figure 2c are two 
different viewing angles of the same DEM surface. The 
DEM surface was generated using Intergraph's modeler 
software. We recognize from Figure 2c that the buildings 
are distorted by the interpolation process. 
4.2 Experimental results 
Figure 3a is the gray-value DEM image for the 
DEM in Figure 2c. In this image some bright clusters are 
recognizable, which indicate potential humps. Comparing 
this figure with Figure 2a, we see that areas with 
buildings are obviously brighter than their surroundings. 
Figure 3b shows a contour image of Figure 3a. 