field, from a cloud field that is present in all three dimensions, we used the same processing chain both for the cloud bottom field and for the local alpine numerical weather prediction model provided by the Swiss Meteorological Office (MeteoSwiss).
1.3 Related work
The problem of modelling and rendering gaseous phenomena has been an active topic for the past 20 years, either as an open problem in computer graphics at the time (Blinn, 1982) or in more practical approaches in the field of cinema special effects (Reeves, 1983). Point-based rendering techniques have provided a simple, yet flexible and powerful tool for rendering.
Levoy and Whitted extended the use of points further than
rendering smoke or fire, into traditional geometry visualisation
(Levoy, 1985). The last few years have seen an increase in applications demanding visualisation solutions, together with huge developments in computer hardware. In (Rusinkiewicz, 2000) and (Zwicker, 2001) points were used in combination with texture splatting, while Harris (Harris, 2001) used particles and impostors to deliver high frame rates for scenes with many cloud objects. Nishita and others (Nishita, 1996), (Dobashi, 2000) used a particle system to control the metaballs model that composed the cloud objects and, in combination with hardware-accelerated OpenGL programming, achieved impressive results in rendering and animating clouds in near real time (Figure 1).
Figure 1: Clouds modelled using metaballs (from Dobashi et al., 2000)
Alternative methods, such as the textured planar and curved surfaces used by Gardner (Gardner, 1985) and the textures with noise and turbulence functions (Ebert, 1990) derived from the work of Perlin (Perlin, 1985), provided new perspectives, which were, however, difficult to apply to real-world measurements. Volume rendering has undergone many improvements in algorithm speed; the work of Lacroute at Stanford University (Lacroute, 1994), (Volpack URL) and its further development by Schulze (Schulze, 2001) attracted our attention, and part of it was included in our implementation.
2. MODELING AND RENDERING OF CLOUDS
2.1 Initial tests
We started our tests on the ground-based (GB) measurements, creating a triangulation of the cloud bottom height (CBH) surface. Afterwards we used the GB images in combination with the point cloud to determine whether each point belonged to a cloud or not (Figure 3). We first performed an adaptive histogram equalization on the image and then classified the points. The results of this procedure were quite good (Figure 3), but it is not fully automatic, since it demands some input from the user.
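As an illustration of this step, a minimal Python sketch is given below, assuming the GB image and the image coordinates of the projected CBH points are already available; the file names, the CLAHE clip limit and the brightness threshold (the user input mentioned above) are hypothetical.

```python
import numpy as np
from skimage import exposure, io

# Hypothetical inputs: a ground-based (GB) sky image and the pixel
# coordinates (row, col) of each CBH point projected into that image.
image = io.imread("gb_sky_image.png", as_gray=True)           # assumed file
pixel_rc = np.loadtxt("cbh_points_projected.txt", dtype=int)  # assumed file

# Adaptive histogram equalization (CLAHE) to enhance local contrast
# before separating cloud from clear-sky pixels.
equalized = exposure.equalize_adapthist(image, clip_limit=0.03)

# Brightness threshold supplied by the user (the manual input noted above):
# bright pixels are taken as cloud.
threshold = 0.55
cloud_mask = equalized > threshold

# Keep only the CBH points whose projection falls on a cloud pixel.
rows, cols = pixel_rc[:, 0], pixel_rc[:, 1]
cloud_point_idx = np.nonzero(cloud_mask[rows, cols])[0]
```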
The remaining points formed the triangulated model of the CBH surface, which was texture-mapped using the standard OpenGL interface (OpenGL URL) (Woo, 1999), in an attempt to increase the realism of the visualisation. These first results were fair, but further improvement was not intended, since we consider traditional polygonal modelling/rendering an inefficient method for volumetric phenomena. Parts of these first attempts were, however, used in later stages.
Figure 2: Point cloud from ground measurements (left) and its triangulation (right)
Figure 3: Cloud mask applied to GB measurements
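A possible sketch of the triangulation step is shown below; it uses a 2.5D Delaunay triangulation of the horizontal coordinates, which is an assumption (the triangulation method is not specified above), and derives simple planar texture coordinates as a stand-in for the actual image-based texture mapping; the input file name is hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical input: the cloud-classified CBH points as (x, y, z) in metres.
points = np.loadtxt("cbh_cloud_points.txt")    # assumed file
xy, z = points[:, :2], points[:, 2]

# 2.5D triangulation of the cloud-bottom surface: triangulate in the
# horizontal plane and keep the measured height as the third coordinate.
tri = Delaunay(xy)
triangles = tri.simplices                      # (n_tri, 3) vertex indices

# Simple per-vertex texture coordinates: normalized horizontal position.
# A real implementation would derive them from the camera geometry of
# the GB image used as texture.
span = xy.max(axis=0) - xy.min(axis=0)
uv = (xy - xy.min(axis=0)) / span

# 'points', 'triangles' and 'uv' would then be drawn as texture-mapped
# GL_TRIANGLES through the standard OpenGL interface, as described above.
```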
2.2 Development of modelling methods
The point cloud resulting from the first stage was used as data for developing and testing a point-based rendering system using particles. We constructed a simple system which stores positional and colour information for each particle, based on the point measurements of CBH. We also prepared the source code to import wind direction vectors for the animation of 3D clouds from subsequent measurements, the lifetime of each particle and the normal vector direction, which results from the triangulation of the first stage. Some test animations were performed with GB measurements, adding artificial data for wind direction and speed, with variations in particle size and anti-aliasing methods, with satisfying results as far as performance and memory consumption are concerned.
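A minimal sketch of such a particle record and of one animation step is given below; the attribute set follows the description above, while the class layout, the wind vector and all numerical values are artificial, as in our test animations.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Particle:
    position: np.ndarray   # (x, y, z) from a CBH point measurement
    colour: np.ndarray     # RGBA; alpha is used for blending / anti-aliasing
    normal: np.ndarray     # normal direction from the first-stage triangulation
    lifetime: float        # remaining lifetime in seconds

def advect(particles, wind, dt):
    """Move each particle along the wind vector and age it."""
    alive = []
    for p in particles:
        p.position = p.position + wind * dt
        p.lifetime -= dt
        if p.lifetime > 0.0:
            alive.append(p)
    return alive

# One animation step with an artificial wind of 5 m/s towards +x.
wind = np.array([5.0, 0.0, 0.0])
particles = [Particle(position=np.array([0.0, 0.0, 1500.0]),
                      colour=np.array([1.0, 1.0, 1.0, 0.6]),
                      normal=np.array([0.0, 0.0, 1.0]),
                      lifetime=30.0)]
particles = advect(particles, wind, dt=1.0)
```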
In the frame of this work, we first looked into different techniques of cloud modelling and concluded that the main issue is the description of clouds as volumes. These volumes consist of cells, usually aligned in regular grids, in which the values of the desired attributes are stored. We constructed three-dimensional textures from the ground-based measurements. The pixel values control the material transparency and are calculated from the number of point measurements present inside each volume pixel. Volume modelling was tested with the help of Vis5d, an open-source software package used in several weather-related applications and released under the GNU General Public License (Vis5D URL). The positive conclusions are the compression of the volume data and the ease of including subsequent measurements and creating an animation. The negative conclusions are the absence, at that stage, of CTH estimations, which led to an incomplete volume dataset, and an unsatisfying final volume rendering. The first impression was that a completed version of the dataset would bring much better results.
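The construction of such a regular volume from the point measurements can be sketched as follows; the grid resolution, the input file name and the linear scaling of point counts to opacity are illustrative assumptions.

```python
import numpy as np

# Hypothetical input: cloud points (x, y, z) in metres from the GB chain.
points = np.loadtxt("cbh_cloud_points.txt")    # assumed file

# Regular grid: an assumed 64 x 64 x 32 cell volume over the point extent.
shape = (64, 64, 32)
counts, edges = np.histogramdd(points, bins=shape)

# The opacity of each volume pixel grows with the number of measurements
# falling inside it; a simple linear normalization to [0, 1] is used here.
alpha = counts / counts.max() if counts.max() > 0 else counts

# Pack into an RGBA volume (constant white colour, variable alpha), ready
# to be used as a three-dimensional texture or exported for Vis5d.
volume = np.zeros(shape + (4,), dtype=np.float32)
volume[..., :3] = 1.0
volume[..., 3] = alpha.astype(np.float32)
```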
The classification method described in section 2.1 was based only upon the radiometric behaviour (colour channels) of the