XIXth Congress (Part B3/2)

Edward M. Mikhail 
belong to the same class, and hence to the same object, or not. However, objects such as building rooftops are not always homogeneous in material, and the hyper-spectral data is usually of a significantly lower resolution than that of PAN images. For these reasons, we have decided, at least at present, to use DEM and spectral sensors to provide cues for the presence of buildings but to use PAN images for accurate delineation. Figure 21 shows a block diagram of our approach. The left-most column denotes the multi-view system described above. If DEM data is available, object cues are extracted from it and supplied to MVS where this information can be used to aid in the process of hypothesis formation and selection. Similarly, HYDICE data is analyzed to produce thematic maps which again aid in the process of hypothesis formation and selection for MVS. These processes are described in some detail next.

Figure 21. Multi-Sensor Information Integration
3.3.2 DEM Supported MVS coor 
The DEM for the Ft. Hood site, corresponding to the area shown earlier in Figure 18, is shown in Figure 22 (displayed 
intensity is proportional to elevation.) Note that while the building areas are clearly visible in the DEM, their boundaries 
are not smooth and not highly accurate. These characteristics prevent direct extraction of buildings from DEM images 
but clearly can help cue the presence of 3-D objects.

Figure 22. DEM corresponding to Figure 18
Figure 23. Lines near DEM cues

The building regions in a DEM are characterized as being higher than the surround. However, simple thresholding of the DEM is not sufficient, as height variations of the magnitude of a single-story building can occur even in very flat terrain sites. Our approach is to convolve the image with a Laplacian-of-Gaussian filter that smoothes the image and locates the object boundaries by the positive-valued regions bounded by
the zero-crossings in the convolution output. Object cues are used in several ways and at different stages of the hypothesis formation and validation processes; they can be used to significantly reduce the number of hypotheses that are formed by only considering line segments that are within or near the cue regions. The 3-D location of a line segment in the 2-D PAN images is not known. To determine whether a line segment is near a DEM cue region, we project the line onto the cue image at a range of heights and determine if the projected line intersects a cue region. Figure 19 earlier showed the line segments detected in the image of Figure 18; Figure 23 shows the lines that lie near the DEM cues. As can be seen, the number of lines is reduced drastically (81.5%) by this filtering, without losing any of the lines needed for forming building hypotheses. This not only results in a significant reduction in computational complexity, but many false hypotheses are eliminated, allowing us to be more liberal in hypothesis formation and thus to include hypotheses that may have been missed otherwise. We also use these cues to help select and verify promising hypotheses, or conversely, to help disregard hypotheses that may not correspond to objects. Just as poor hypotheses can be discarded because they lack DEM support, the ones that have large support see their confidence increase during the verification stage. In this stage, the selected hypotheses are analyzed to verify the presence of shadow evidence and wall
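The Laplacian-of-Gaussian cueing step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes SciPy's `gaussian_laplace` as the filter, and the function name, the sigma value, and the synthetic flat-terrain DEM with one raised block are all our own choices.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label

def dem_building_cues(dem, sigma=3.0):
    """Cue elevated regions: convolve the DEM with a Laplacian-of-Gaussian
    and keep the positive-valued regions bounded by the zero-crossings."""
    # SciPy's LoG responds negatively over a raised blob, so negate the
    # output to make elevated regions come out positive.
    response = -gaussian_laplace(np.asarray(dem, dtype=float), sigma)
    cue_mask = response > 0
    regions, n_regions = label(cue_mask)  # connected cue regions
    return cue_mask, regions, n_regions

# Synthetic DEM: flat terrain with a single 10 x 10 pixel, 5 m "building".
dem = np.zeros((64, 64))
dem[27:37, 27:37] = 5.0
mask, regions, n_regions = dem_building_cues(dem)
# Pixels just inside the building edge are cued; far-off flat terrain,
# where the LoG response is exactly zero, is not.
```

Note that, as the text observes, a plain height threshold on `dem` would also fire on gentle terrain slopes of building-story magnitude; the zero-crossing criterion instead responds to local elevation structure.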
  
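The height-sweep test for whether a 2-D line segment lies near a DEM cue region can be sketched as below. The excerpt does not spell out the sensor model, so `project` here is a hypothetical stand-in that maps an image point to cue-image coordinates at a candidate height; the orthographic toy projection and the height range are illustrative assumptions only.

```python
import numpy as np

def line_near_dem_cue(seg_img, project, cue_mask, heights):
    """Project an image line segment into the cue image at each candidate
    height; return True if any projection crosses a cue region.
    `project(pt, h)` maps an image point to cue-image (row, col) at
    height h -- a stand-in for the actual sensor model."""
    p0, p1 = np.asarray(seg_img, dtype=float)
    for h in heights:
        a = np.asarray(project(p0, h))
        b = np.asarray(project(p1, h))
        # Sample the projected segment densely and test against the mask.
        for t in np.linspace(0.0, 1.0, 50):
            r, c = np.round(a + t * (b - a)).astype(int)
            if (0 <= r < cue_mask.shape[0] and 0 <= c < cue_mask.shape[1]
                    and cue_mask[r, c]):
                return True
    return False

# Toy setup: cue region in rows/cols 20-40 of a 64 x 64 mask, and a
# hypothetical projection whose shift grows with height.
cue = np.zeros((64, 64), dtype=bool)
cue[20:40, 20:40] = True
proj = lambda pt, h: pt + 0.1 * h
near = line_near_dem_cue([(0, 25), (10, 25)], proj, cue, range(0, 200, 10))
far = line_near_dem_cue([(50, 0), (60, 0)], proj, cue, range(0, 3))
```

Segments that fail this test at every candidate height can be dropped before hypothesis formation, which is the mechanism behind the 81.5% line reduction reported in the text.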
602 International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 