(e.g. in the lower third of the image), the worst segments are still bad. We then eliminate these inconsistent road segments from the set of good matches and recompute the extended streets.
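
This elimination step amounts to a filter-and-recompute pass over the matched segments. The sketch below only illustrates that idea, assuming a simple segment record, a per-segment DEM consistency score, and a recompute_extended_streets routine; these names are hypothetical placeholders, not the interfaces of the actual system.

    from typing import Callable, Dict, List, NamedTuple


    class RoadSegment(NamedTuple):
        seg_id: int
        polyline: list  # vertices of the matched road segment (placeholder)


    def prune_inconsistent_segments(segments: List[RoadSegment],
                                    consistency: Dict[int, float],
                                    threshold: float = 0.5) -> List[RoadSegment]:
        """Drop matched segments whose DEM consistency score falls below a threshold."""
        return [s for s in segments if consistency.get(s.seg_id, 0.0) >= threshold]


    def refine_street_grid(segments: List[RoadSegment],
                           consistency: Dict[int, float],
                           recompute_extended_streets: Callable[[List[RoadSegment]], object]):
        """Eliminate inconsistent segments, then rebuild the extended streets from the good matches."""
        good = prune_inconsistent_segments(segments, consistency)
        return recompute_extended_streets(good)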
  
     
Figure 30. Initial DEM Consistency Measure
Figure 31. After using DEM
This system has been run on several sites, one of which is over Washington, DC. The final results of the extracted street grid are shown in Figure 32, projected onto an orthophoto corresponding to the extracted DEM. The orthophoto covers most of the area of only 3 of the images used for the extraction, so some extracted roads are displayed off the image. The colors are used to indicate one (arbitrarily) selected street in black, with its intersecting streets in white and all the others in gray. The time for the initial verification (approximately 2000 intersections) was roughly 90 minutes (covering five 2000×2000 images). The refinement using the same 5 images and testing about 3200 road segment triples (some are tested in multiple images) took about 500 minutes. After all the refinement steps, approximately 63 km of streets are extracted.

Figure 32. Extended streets combined on one oblique view

A detailed analysis of these results shows one common error: road segments that are misplaced by the width of the road (i.e. the left side of the model matches the right side of the actual road and the right side of the model matches some other structure parallel to the road). These errors are caused by weak boundaries for the road itself and stronger edges in features parallel to the road. Exact measures of quality are not available, but false negatives are approximately 20% of the total, with placement errors in about 30% of the individual road segments.
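
As a rough check, the reported timings imply a per-item cost on the order of a few seconds; the snippet below simply divides the quoted durations by the quoted counts (all figures are the approximate values stated above).

    # Back-of-the-envelope per-item costs from the reported timings.
    verification_minutes = 90
    intersections = 2000
    refinement_minutes = 500
    segment_triples = 3200

    print(f"~{verification_minutes * 60 / intersections:.1f} s per intersection verified")   # ~2.7 s
    print(f"~{refinement_minutes * 60 / segment_triples:.1f} s per road segment triple")     # ~9.4 s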
4 VISUALIZATION 
All the activities in the MURI project culminate in a database to be used for a variety of applications. The primary objective here is to develop a 3D Visualization Environment that is suitable for rapidly creating and displaying 3D virtual worlds from the database in order to promote data understanding and rapid model editing. Some of the expected benefits include: (1) an improved model for multi-sensor data visualization; (2) enhanced identification and correction of errors in 3D models and terrain data; (3) model verification; (4) change detection; (5) battle damage assessment; (6) high-fidelity extraction of 3D models in urban areas; (7) support for data understanding through multi-source data fusion; (8) projected textures that improve automated extraction algorithms; (9) effective handling of occlusion and foreshortening problems; and (10) generation of ortho-rectified imagery or any camera view in real time.
The system being developed is called the Visualization Environment Supporting Photogrammetry and Exploitation Research (VESPER). The basic elements of photogrammetry are integrated with 3D visualization technology. These include precise camera calibration, position and orientation, overlapping images, and image-to-ground transformation. The methods presented enable the understanding of multiple overlapping images. Image-to-ground transformation is accomplished through careful application of projective textures. A Digital Projector with accurate camera information allows imagery to be projected onto terrain and feature surfaces (Figure 33). This method encourages the use of multiple overlapping images in VR. We present results that demonstrate the ability of this process to efficiently produce photospecific VR. Current photospecific visualization tools lack native support for precise camera models.
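
The core of the Digital Projector idea is standard projective texturing: each terrain or feature vertex is mapped through the calibrated camera into image coordinates, which then serve as texture coordinates for that image. The sketch below illustrates only that mapping; the 3x4 projection matrix, vertex array, and image dimensions are illustrative assumptions, not VESPER's actual interfaces.

    import numpy as np


    def project_to_texture_coords(P: np.ndarray, vertices_xyz: np.ndarray,
                                  image_w: int, image_h: int) -> np.ndarray:
        """Project Nx3 world-space vertices through a 3x4 camera matrix P and
        return Nx2 texture coordinates normalized to [0, 1]."""
        n = vertices_xyz.shape[0]
        homog = np.hstack([vertices_xyz, np.ones((n, 1))])   # N x 4 homogeneous points
        proj = (P @ homog.T).T                               # N x 3 image-space points
        uv = proj[:, :2] / proj[:, 2:3]                      # pixel coordinates
        return uv / np.array([image_w, image_h])             # normalized texture coords


    if __name__ == "__main__":
        # Arbitrary example camera and two terrain vertices.
        P = np.array([[1000.0, 0.0, 512.0, 0.0],
                      [0.0, 1000.0, 384.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0]])
        verts = np.array([[0.0, 0.0, 10.0], [5.0, 2.0, 12.0]])
        print(project_to_texture_coords(P, verts, 1024, 768))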
The fusion of multiple image sources in an interactive visualization environment demonstrates the benefits of bringing 
the rigors of photogrammetry to computer graphics. The methods presented show promise in allowing the development 
  