[Two plots: vertical and longitudinal acceleration plotted over time]
Figure 3: Accelerations during human walking 
Different types of hard or soft ground cause different step patterns, and these patterns change completely with a slope greater than 10°. Taking the wrong time intervals results in a wrong number of steps, which can cause travelled-distance errors of 10 m or more in dead-reckoning mode (Ladetto et al., 2000). Ladetto and Merminod (2002) present a combination of GPS, INS, electronic compass and barometer in a small wearable system that is used in a pedestrian navigation system for the blind. The system is capable of detecting several walking patterns, including movement up and down stairs. It can track the person's path even indoors, using several navigation algorithms that compare the outputs of all sensors and apply different strategies in case of sensor outages or conflicting information.
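To make the role of these time intervals concrete, the following is a minimal sketch of threshold-based step counting on a vertical acceleration signal; the sampling rate, peak threshold, minimum step interval and mean step length are illustrative assumptions, not parameters of the cited systems.

```python
def count_steps(acc_vertical, fs=100.0, min_step_interval=0.3, threshold=1.5):
    """Count steps as peaks in the vertical acceleration signal.

    acc_vertical      -- sequence of vertical acceleration values [m/s^2], gravity removed
    fs                -- sampling rate [Hz]
    min_step_interval -- shortest plausible time between two steps [s] (assumed)
    threshold         -- minimum peak height accepted as a step [m/s^2] (assumed)
    """
    min_gap = int(min_step_interval * fs)   # samples that must separate two steps
    steps, last_peak = 0, -min_gap
    for i in range(1, len(acc_vertical) - 1):
        a = acc_vertical[i]
        is_peak = a > threshold and a >= acc_vertical[i - 1] and a >= acc_vertical[i + 1]
        if is_peak and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps


def dead_reckoned_distance(steps, mean_step_length=0.7):
    """Travelled distance as step count times an assumed mean step length [m]."""
    return steps * mean_step_length
```

A too short minimum interval lets one foot strike produce two counted peaks, a too long one swallows fast steps; either error propagates directly into the dead-reckoned distance.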
A wealth of research deals with the fusion of several sensors in order to overcome their individual weaknesses. Vision-based methods, for example, are used in combination with INS to improve head-motion tracking accuracy (Ribo et al., 2002; You et al., 1999), with the computer vision algorithms providing information on low-frequency movements and the INS covering fast movements.
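As a minimal illustration of this division of labour, the single-axis complementary filter below blends a drifting gyro integral with occasional vision-based orientation fixes; the class name, the blending weight and the single-axis simplification are assumptions for illustration and do not reproduce the fusion schemes of the cited papers.

```python
class ComplementaryOrientationFilter:
    """Single-axis fusion of fast gyro rates (INS) with slow vision estimates.

    Integrating the gyro tracks fast head motion but drifts; each vision
    measurement pulls the estimate back towards a drift-free reference.
    """

    def __init__(self, vision_weight=0.02):
        self.angle = 0.0                    # fused orientation estimate [rad]
        self.vision_weight = vision_weight  # how strongly a vision fix corrects drift

    def ins_update(self, gyro_rate, dt):
        """High-rate path: integrate the angular rate over the time step dt."""
        self.angle += gyro_rate * dt
        return self.angle

    def vision_update(self, vision_angle):
        """Low-rate path: blend towards the absolute, vision-based orientation."""
        w = self.vision_weight
        self.angle = (1.0 - w) * self.angle + w * vision_angle
        return self.angle
```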
3.1 Occlusion 
An unprocessed overlay of virtual objects on a video stream or on a head-mounted display will not result in a realistic impression of the fusion of real and virtual scene. Without further processing, virtual objects behind a real object such as a building will be displayed in front of that building instead of being occluded by it.
In order to solve the occlusion problem in computer graphics, we can use depth information (z-buffer) of the objects to be displayed. A matrix stores, for each pixel, the distance from the projection centre to the object models. The object with the smaller distance to the projection centre occludes the one with the greater distance.
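A minimal sketch of this principle, assuming a simple per-pixel fragment list rather than a full rendering pipeline; the function name and data layout are illustrative.

```python
import numpy as np

def resolve_visibility(fragments, width, height):
    """Resolve occlusion between objects with a z-buffer.

    fragments -- iterable of (x, y, depth, color) tuples, where depth is the
                 distance from the projection centre to the object at that pixel.
    Returns the colour buffer and the depth buffer; at each pixel the fragment
    with the smaller distance wins.
    """
    depth_buffer = np.full((height, width), np.inf)
    color_buffer = np.zeros((height, width, 3), dtype=np.uint8)
    for x, y, depth, color in fragments:
        if depth < depth_buffer[y, x]:   # closer fragment occludes the stored one
            depth_buffer[y, x] = depth
            color_buffer[y, x] = color
    return color_buffer, depth_buffer
```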
In augmented reality, the depth information of the augmented virtual objects can be calculated because the geometry of these objects is accessible. For the real objects in the video stream, however, there is in principle no information about the geometry, so additional data acquisition is necessary in order to generate depth information for the real objects.
[Schema: the depth information of the virtual scene (e.g. the virtual water surface) and the depth information of the real scene are compared per pixel; where real > virtual the colour of the virtual objects is used, where virtual > real the video image is used, and the results are combined into the output frame.]
Figure 4: Schema of occlusion processing 
Other possibilities are the use of image processing (Simon and Berger, 1999) or probabilistic models of form and position (Fuhrmann et al., 1999). The virtual objects that are used to provide the depth information about the real objects are not displayed; they are called "phantoms" (Grohs and Maestri, 2002, Fuhrmann et al., 1999). The occlusion solution used in this work is shown in figure 4. Here the depth information (the phantoms) is extracted from a digital elevation model derived from laser scanning or, where available, from building models. These models are used to continuously recalculate the depth information according to the camera movements. The phantoms used for the depth information are not visible; they are replaced by pixels from the video image. The depth information of the real and the virtual scene is compared pixel by pixel: at a given pixel, a real object that is more distant from the projection centre than a virtual object is occluded by the virtual one, and vice versa.
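The comparison of figure 4 can be sketched per pixel as follows, assuming the phantom depth and the virtual depth have already been rendered into arrays of the same size as the video frame; the array names and the hard per-pixel selection (no blending at the boundaries) are assumptions for illustration.

```python
import numpy as np

def compose_augmented_frame(video_frame, virtual_color, virtual_depth, phantom_depth):
    """Per-pixel occlusion handling between real and virtual scene.

    video_frame   -- H x W x 3 camera image
    virtual_color -- H x W x 3 rendering of the virtual objects (e.g. a water surface)
    virtual_depth -- H x W depth of the virtual scene (np.inf where nothing virtual)
    phantom_depth -- H x W depth of the real scene, rendered from the invisible
                     phantoms (DEM or building models)
    """
    # a virtual object is shown only where it is closer to the projection
    # centre than the real scene; everywhere else the video pixel remains
    virtual_wins = virtual_depth < phantom_depth
    out = video_frame.copy()
    out[virtual_wins] = virtual_color[virtual_wins]
    return out
```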
4 AR FOR DISASTER RELIEF: HIGH WATER
4.1 Motivation 
In 2002, 30% of all damaging events, 42% of the fatalities, 50% of the economic losses and 37% of the insured losses worldwide were due to high water (Rückversicherungs-Gesellschaft, 2002). Apart from storms, high water causes the most damage of all natural disasters. Disaster management