Grids from numerical models describe the Cloud Liquid Content in %, in 7x7 km² cells, covering an area of 1401x1197 km² (Figure 6). The vertical spacing is not uniform, and this was an obstacle in importing the grid into a volume suitable for volume rendering. The aLMo NWP data set is used as a test for 3D cloud field modeling, and the interpolation of the irregular grid to a regular one was performed in the IDL programming environment, owing to the ease of handling 3D data arrays there.
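The interpolation itself was done in IDL; purely as an illustration of the same column-by-column resampling idea, a minimal Python/NumPy sketch is given below (all array names and sizes are hypothetical and not taken from the aLMo files):

```python
import numpy as np

# Hypothetical model output: a cloud variable on nz irregular vertical levels
# for an ny x nx horizontal grid (7x7 km cells in the aLMo case).
nz, ny, nx = 40, 171, 200
clc = np.random.rand(nz, ny, nx)                                   # cloud variable, e.g. in %
z_model = np.sort(np.random.rand(nz, ny, nx) * 12000.0, axis=0)    # irregular level heights [m]

# Target: a regular vertical axis suitable for a volume-rendering grid.
z_regular = np.linspace(0.0, 12000.0, 64)

# Resample every column from its own irregular levels onto the regular levels.
volume = np.empty((z_regular.size, ny, nx), dtype=np.float32)
for j in range(ny):
    for i in range(nx):
        volume[:, j, i] = np.interp(z_regular, z_model[:, j, i], clc[:, j, i])
```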
At the current stage we started by importing 3D cloud fields from combined ground-based (GB) and satellite (EO) observations into the Volsh volume rendering procedure. The data originate from the GB and EO measurements that took place in April 2002 over the Kloten airport near Zurich. The area covered by the EO measurements is 55x55 km², and approximately 2x2 km² by the GB observations. The 3D cloud field is described by the cloud top height (CTH), the cloud liquid water (DLR product) estimated over the whole grid from the MISR satellite data, and the cloud base height (CBH) information extrapolated over the area covered by the GB measurements.
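The exact gridding of these fields into a volume is not spelled out here; purely as an illustration of one possible construction, the sketch below spreads a column liquid water value uniformly between the (extrapolated) cloud base and the MISR cloud top in every column of a regular grid. All arrays and values are hypothetical placeholders:

```python
import numpy as np

ny, nx, nz = 64, 64, 64
z = np.linspace(0.0, 6000.0, nz)              # regular vertical axis [m]

# Hypothetical 2D inputs on the 55x55 km grid: cloud top height, cloud base
# height (extrapolated from the GB observations) and column liquid water.
cth = np.full((ny, nx), 3000.0)
cbh = np.full((ny, nx), 1500.0)
lwp = np.full((ny, nx), 0.2)                  # column liquid water [kg m^-2]

# Mark the voxels between cloud base and cloud top in each column and
# distribute the column liquid water uniformly over that thickness.
volume = np.zeros((nz, ny, nx), dtype=np.float32)
inside = (z[:, None, None] >= cbh) & (z[:, None, None] <= cth)
thickness = np.maximum(cth - cbh, 1.0)        # avoid division by zero
volume[:] = np.where(inside, lwp / thickness, 0.0)
```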
2.3 Development of rendering methods 
In parallel with the development of the 3D cloud fields we used several techniques for the visualization of 3D cloud volumes. After the creation of these volumes, based on the metaball technique (see Section 2.2), we implemented a hardware-assisted rendering method (Woo, 1999) that is often used in volume rendering. In its simplest form we create planar cross-sections of the volume, oriented towards the viewer, which are blended together in the end. The resulting image is the combination of the colour from all planes, multiplied by their transparency.

Figure 7: Volume rendering with planar slices
Using the example of the previous section we supply a sample rendering in Figure 7, where we can see that the absence of lighting calculations (shadows) in the volume makes it difficult to perceive the depth of the volume. This problem can be solved as in (Dobashi, 2000), where lighting calculations are performed by sorting the planes towards the light source and calculating an attenuation factor for each plane. This factor represents the amount of light that reaches a plane after passing through the planes that lie between it and the light source, and determines its shade level.
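Our implementation uses graphics hardware, but the two ideas can be sketched in a few lines of CPU code. The fragment below is a simplified illustration, not the actual renderer: opacity scaling is arbitrary, and both the viewer and the light are assumed to look along the slicing axis. It first accumulates an attenuation factor per slice from the slices closer to the light, then composites the slices back to front:

```python
import numpy as np

volume = np.random.rand(64, 128, 128).astype(np.float32)   # scalar cloud field in [0, 1]
alpha = np.clip(volume * 0.1, 0.0, 1.0)                     # per-voxel opacity (arbitrary scaling)

# Attenuation: each slice is lit by what survives the slices nearer the light
# (the light is assumed to shine along the slicing axis, entering at slice 0).
transparency = 1.0 - alpha
light = np.concatenate([np.ones((1,) + alpha.shape[1:], dtype=np.float32),
                        np.cumprod(transparency, axis=0)[:-1]], axis=0)
shaded = volume * light                                     # grey colour scaled by incoming light

# Compositing: blend the slices back to front with the 'over' operator,
# i.e. the colour of every plane weighted by its transparency.
image = np.zeros(alpha.shape[1:], dtype=np.float32)
for k in range(alpha.shape[0] - 1, -1, -1):                 # farthest slice first
    image = shaded[k] * alpha[k] + image * (1.0 - alpha[k])
```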
We further improved the interactivity of our rendering method by introducing dynamic levels of detail (LoDs) for the three-dimensional volume. The levels are created by the graphics processor from the original-resolution data, and the OpenGL graphics library handles the display of, and the transition between, the levels. In order to avoid aliasing effects from undersampling the original
volume and from the transitions between LoDs, we tried trilinear interpolation, which improved the image quality of the texture but slowed down the frame rate significantly. In parallel we implemented the import procedure into the Volsh library (see Section 2.1) and tried its rendering capabilities (Figure 8). This procedure is no longer real-time, but the rendering times are quite low; for example, a volume of 256x128x64 voxels is rendered in approximately 3 seconds.
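The trilinear filtering mentioned above is performed by the graphics hardware; the following sketch only spells out, for reference, what the filter computes when the volume is sampled at a fractional position (the function name and test values are ours):

```python
import numpy as np

def sample_trilinear(volume, z, y, x):
    """Sample a (nz, ny, nx) volume at a fractional position by trilinear interpolation."""
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    z1 = min(z0 + 1, volume.shape[0] - 1)
    y1 = min(y0 + 1, volume.shape[1] - 1)
    x1 = min(x0 + 1, volume.shape[2] - 1)
    fz, fy, fx = z - z0, y - y0, x - x0

    # Interpolate along x, then y, then z (eight neighbouring voxels in total).
    c00 = volume[z0, y0, x0] * (1 - fx) + volume[z0, y0, x1] * fx
    c01 = volume[z0, y1, x0] * (1 - fx) + volume[z0, y1, x1] * fx
    c10 = volume[z1, y0, x0] * (1 - fx) + volume[z1, y0, x1] * fx
    c11 = volume[z1, y1, x0] * (1 - fx) + volume[z1, y1, x1] * fx
    c0 = c00 * (1 - fy) + c01 * fy
    c1 = c10 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

vol = np.random.rand(64, 128, 64)
value = sample_trilinear(vol, 10.3, 42.7, 5.5)
```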
Among the advantages of the latter rendering approach we count the option to add shading to the volume, which increases the realism of the resulting image, and the option to adjust, in near real-time, the transfer function that defines the material properties of the medium for every value of the scalar variable. This means that we can interactively change the material properties and focus on a specific range of the scalar values, leaving the outliers transparent. For the animation, we have the option to record key positions and to create the intermediate frames by interpolating between neighbouring key positions, thus limiting the number of commands necessary to render the frame sequence. Finally, the library is based on the Tcl scripting language, which allows the execution of any external program in combination with the rendering procedure. This allows us, for example, to automate the production of animations and to deliver video from the separately rendered frames.

Figure 8: Volume rendering using the 'Volsh' library
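The Volsh transfer-function interface is not reproduced here; the sketch below merely illustrates the principle with a hypothetical lookup table in which every scalar value is mapped to an RGBA entry and values outside the range of interest receive zero opacity, i.e. remain transparent:

```python
import numpy as np

def make_transfer_function(lo, hi, n=256):
    """Build an n-entry RGBA lookup table that highlights scalar values in [lo, hi]."""
    s = np.linspace(0.0, 1.0, n)
    lut = np.zeros((n, 4), dtype=np.float32)
    inside = (s >= lo) & (s <= hi)
    t = np.zeros(n)
    t[inside] = (s[inside] - lo) / max(hi - lo, 1e-6)
    lut[:, 0] = t                               # red ramps up across the range of interest
    lut[:, 2] = 1.0 - t                         # blue ramps down
    lut[inside, 3] = 0.05 + 0.5 * t[inside]     # opacity only inside the range; outliers stay transparent
    return lut

lut = make_transfer_function(0.3, 0.8)
# Applying it to a normalised volume: rgba = lut[(volume * 255).astype(int)]
```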
Among the disadvantages we can mention the absence of perspective projection in the library (only parallel projection is available), and the complexity of manipulating the camera flight path to create key-frames for the animation of volumes.
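The key-frame idea itself is simple; as an illustration (using hypothetical camera positions, not the Volsh camera commands), the intermediate frames can be obtained by blending between neighbouring key positions:

```python
import numpy as np

# Hypothetical key positions of the camera along a flight path (x, y, z).
keys = np.array([[0.0, 0.0, 10.0],
                 [5.0, 2.0, 8.0],
                 [10.0, 5.0, 6.0]])
frames_per_segment = 25

# One camera position per rendered frame, linearly blended between key positions.
path = []
for a, b in zip(keys[:-1], keys[1:]):
    for t in np.linspace(0.0, 1.0, frames_per_segment, endpoint=False):
        path.append((1.0 - t) * a + t * b)
path.append(keys[-1])
path = np.array(path)
```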
A rendering of the whole scene (Figure 9) revealed poor rendering quality, which results from the fact that the input data are the surface points measured from the EO observations and therefore describe a surface rather than a 3D volume.
In order to evaluate the performance of the procedure with 3D volumes we imported the CLC (cloud cover) variable from the alpine local model (aLMo) used at MeteoSwiss (see Section 1) and created a fly-through animation around the rendered volume. A snapshot from this animation is given in Figure 10.
We continued by visualising the interpolated 3D Cloud Liquid Water from the combined EO and GB measurements taken in April 2002. These measurements are a fusion of the MISR Cloud Top Height estimations over Zurich-Kloten and the ground-based CBH estimations taken at the same time with the ground camera system developed at ETH Zurich. The two data sets cover different extents of the area, and the CBH values were extrapolated over the whole extent.
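The extrapolation scheme for the CBH field is not detailed in this section; one simple possibility, shown below purely as an illustration with synthetic sample values, is nearest-neighbour extrapolation of the ground-based samples over the full EO grid:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical ground-based CBH samples (x, y in km, CBH in m) near the camera site.
xy_obs = np.random.rand(50, 2) * 2.0
cbh_obs = 1500.0 + 100.0 * np.random.rand(50)

# Full 55x55 km target grid of the EO (MISR) observations.
gx, gy = np.meshgrid(np.linspace(0.0, 55.0, 220), np.linspace(0.0, 55.0, 220))

# Nearest-neighbour 'extrapolation': every grid cell takes the value of the
# closest ground-based sample, so cells far from the site inherit edge values.
cbh_grid = griddata(xy_obs, cbh_obs, (gx, gy), method='nearest')
```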