Figure 5. 10,000 animated characters in the test VE
  
  
  
  
  
  
Name        Processor                  Memory   Graphics Hardware
Test PC#1   Intel Pentium 4, 1.8 GHz   256 MB   NVidia GeForce2 Go 400, 32 MB
Test PC#2   Intel Pentium 4, 2.4 GHz   512 MB   NVidia GeForce4 Go 460, 64 MB
Test PC#3   Intel Pentium 4, 3.2 GHz   2 GB     NVidia GeForceFX 5900, 128 MB

Table 1. Configuration of Test PCs
  
  
  
  
  
  
  
  
  
Test PC     # of Soldiers in Viewing Frustum   Frame Rate
Test PC#1   3000                               4 fps
Test PC#1   380                                20 fps
Test PC#1   110                                30 fps
Test PC#1   1100                               10 fps
Test PC#2   300                                30 fps
Test PC#2   800                                15 fps
Test PC#2   1970                               6 fps
Test PC#3   1700                               20 fps

Table 2. Rendering performance results

2.3 Sound
Let us imagine a scenario in an immersive VR system: the user walks through the geo-specific VE and wants to learn the name of the hill he is facing. One option is to display text that shows the name. Although this informs the user, the displayed text destroys the realism, since in the real world we do not see geographic names written on hills; it is also no different from using a traditional 2D GIS or a paper map. The solution we prefer is a text-to-speech mechanism. Using the GIS import tool, we obtained the coordinates of the geographic locations in the Gazetteer and defined a buffer around each of them, so that, for example, when the user touches a hill with the data glove he hears the name of the hill. We simulated this scenario on desktop VR with a mouse and found it useful. Regarding the general use of sound in the VE, we used a library of wav files containing effects such as wind, a marching group, rain, and various engine sounds. Although these simple sounds contributed to the realism of the environment, a more realistic use of sound is essential; real-world effects such as the Doppler effect may impress the user further.
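
The buffer-triggered announcement described above can be illustrated with a minimal C++ sketch. The GazetteerEntry structure, the speak() placeholder and the sample coordinates are assumptions made for this sketch only, not the system's actual code; it shows only the proximity test against the buffer imported from the GIS tool.

// Illustrative sketch: speak the name of a gazetteer feature when the user's
// touch point falls inside the buffer defined around it (assumed structures).
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct GazetteerEntry {
    std::string name;      // geographic name imported from the GIS tool
    double x, y, z;        // location in VE coordinates
    double bufferRadius;   // buffer distance defined around the location
};

// Placeholder for the text-to-speech or wav playback call.
void speak(const std::string& text) {
    std::cout << "[speech] " << text << '\n';
}

// Called whenever the user touches a point in the VE (data glove or mouse).
void onUserTouch(double ux, double uy, double uz,
                 const std::vector<GazetteerEntry>& gazetteer) {
    for (const auto& e : gazetteer) {
        const double dx = ux - e.x, dy = uy - e.y, dz = uz - e.z;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) <= e.bufferRadius) {
            speak(e.name);          // inside the buffer: announce the name
            return;
        }
    }
}

int main() {
    const std::vector<GazetteerEntry> gazetteer = {
        {"Sample Hill", 1200.0, 340.0, 85.0, 50.0}   // hypothetical entry
    };
    onUserTouch(1210.0, 330.0, 90.0, gazetteer);     // inside the 50 m buffer
    return 0;
}
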
2.4 Conceptual Elements 
2.4.1 Sky and Clouds: Atmospheric rendering is an important step in generating impressive virtual environments. There are many methods for sky and atmosphere rendering, ranging from the use of a single color to very realistic models. The sky color is time and location dependent: it is well known that the sky color near the horizon is not the same as the sky color near the zenith at the same moment, and that the sky around the horizon turns red at sunrise and sunset. The altitude of the sun, the viewing direction, the height of the observer, the conditions of the atmosphere, and the light reflected from the ground are the parameters that affect the color of the sky (Nishita et al., 1996). Rendering the sky according to all of the criteria listed above is a very complex task. To simplify it, we built a library of pre-rendered skyboxes. A skybox is a cube whose inner faces are texture-mapped with five or six pre-rendered images; when the cube is assembled, the inner faces form a seamless scene. Aesthetic skyboxes require artistic work; the easiest way to create them is to use dedicated landscape rendering packages, and skybox image sets can also be obtained on the Internet. Our skybox library contains many consecutive scenes that complete a full day cycle.
  
Figure 6. Skybox 
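
Because the skybox library forms a day loop, selecting the set to display reduces to mapping the simulated time of day onto a library index. The following C++ sketch illustrates one possible mapping; the file-name prefixes and the size of the library are assumptions made for illustration only.

// Illustrative sketch: pick a pre-rendered skybox set from a day-loop
// library according to the simulated time of day (assumed names and size).
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Consecutive skybox image sets covering one full day (here every 3 h).
    const std::vector<std::string> skyboxLibrary = {
        "sky_0000", "sky_0300", "sky_0600", "sky_0900",
        "sky_1200", "sky_1500", "sky_1800", "sky_2100"
    };

    const double simulatedHour = 17.5;   // e.g. 17:30 simulation time

    // Map the hour onto a library index; each set covers 24 / N hours.
    const std::size_t index =
        static_cast<std::size_t>(simulatedHour / 24.0 * skyboxLibrary.size())
        % skyboxLibrary.size();

    // The chosen set supplies the five or six face textures of the cube.
    std::cout << "Loading skybox set: " << skyboxLibrary[index] << '\n';
    return 0;
}
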
2.4.2 The Sun and the Moon: VR applications that render 
real world conditions use the Sun and the Moon as light sources 
and complementary objects of the VE. In both usages, it is 
important to place them into their correct positions in the three- 
dimensional scene. To calculate the positions of the Sun and the 
Moon at a given time and location, some methods use astronomical almanacs and complex equations, which give precise results, while others use simple formulas that yield only rough results. Jean Meeus, a Belgian astronomer, published the book Astronomical Algorithms for computer calculations, which became popular among amateur astronomers and computer programmers (Meeus, 1991). The geocentric positions it provides are accurate to within a few arc-seconds, far finer than the precision of a typical desktop display. In order to
correctly visualize the Sun and the Moon it is necessary to 
calculate angular sizes and locate them on the outer border of 
the limited VE. 
θ = 2 arctan(r / d)                                                  (5)

where   θ = angular size of the celestial body
        r = radius
        d = distance to the Earth.
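
As a worked example of equation (5), the short C++ snippet below computes the angular sizes of the Sun and the Moon from their mean radii and distances; the numerical values are standard mean figures quoted here only for illustration, and both bodies come out near half a degree.

// Worked example of equation (5): angular sizes of the Sun and the Moon,
// computed from mean radii and mean distances (illustrative values only).
#include <cmath>
#include <cstdio>

double angularSize(double radius, double distance) {
    return 2.0 * std::atan(radius / distance);   // equation (5), radians
}

int main() {
    const double RAD2DEG = 180.0 / 3.14159265358979;
    // Sun: mean radius ~696,000 km, mean distance ~149.6 million km.
    std::printf("Sun : %.3f deg\n", angularSize(696000.0, 1.496e8) * RAD2DEG);
    // Moon: mean radius ~1,737 km, mean distance ~384,400 km.
    std::printf("Moon: %.3f deg\n", angularSize(1737.0, 384400.0) * RAD2DEG);
    return 0;
}
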
Rendering of the Moon is quite different, since it is necessary to determine the visible portion of its disc and the bright limb angle, which corresponds to the inclination with respect to the rotation axis.
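
For the visible portion, a standard relation gives the illuminated fraction of the Moon's disc as k = (1 + cos i) / 2, where i is the phase angle (the Sun-Moon-Earth angle). The C++ sketch below illustrates only this step; computing the phase angle itself and the position angle of the bright limb requires the Sun and Moon coordinates from the almanac routines (Meeus, 1991), so here the phase angle is simply taken as an input.

// Illustrative sketch: illuminated fraction of the Moon's disc from a given
// phase angle i, using k = (1 + cos i) / 2. The phase angle is assumed to be
// supplied by the astronomical position routines.
#include <cmath>
#include <cstdio>

double illuminatedFraction(double phaseAngleRad) {
    return (1.0 + std::cos(phaseAngleRad)) / 2.0;
}

int main() {
    const double PI = 3.14159265358979;
    std::printf("New moon      (i = 180 deg): k = %.2f\n", illuminatedFraction(PI));
    std::printf("First quarter (i =  90 deg): k = %.2f\n", illuminatedFraction(PI / 2.0));
    std::printf("Full moon     (i =   0 deg): k = %.2f\n", illuminatedFraction(0.0));
    return 0;
}
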
	        