2. SURFACE AND VOLUME RENDERING
For the visualization of three dimensional data, two visualization
techniques are usually used: volume rendering and surface
rendering. Images of objects are formed by the interaction of the
object surfaces with light. However, there are some objects whose
surfaces are translucent. For example, fog, glass and similar
objects are translucent, and light rays pass through to the inner
parts of the surface. Such objects cannot be modelled with surface
interactions; instead, they are modelled with the properties of
their inner structures. There are thus two major cases for
modelling objects: modelling with surface interactions, and
modelling with the inner properties of the objects. These two
cases are called surface rendering and volume rendering,
respectively.
2.1. Surface Rendering
Surfaces of real objects are considered continuous functions in
the mathematical sense. Only objects that have regular geometry
can be defined with implicit functions. For example, a sphere, an
ellipsoid or a plane can be defined with implicit continuous
functions, but object surfaces that have irregular geometry cannot
be modelled implicitly. Instead, such irregular surfaces are
modelled explicitly by using small surface elements. This explicit
definition is only an approximation to the real surface. To
achieve a good approximation in surface construction, the points
that define the surface should be chosen according to the surface
characteristics. Therefore, for surface reconstruction, the
geometric shape or boundaries of the objects should first be
found. To find the object boundaries or shapes, image segmentation
techniques are used. After the segmentation process, surface
points are extracted with contour lines in 2D and isosurfaces in
3D. Contour (or isosurface) values are obtained from the
segmentation results. This isovalue can be considered the grey
level value of the boundary pixels of the object to be
reconstructed. Once the isovalue is obtained, the locations of the
surface points whose values are equal to the isovalue can be
computed using interpolation techniques. The extracted points can
then be connected to each other with surface primitives such as
triangles in 2D and tetrahedrons in 3D. After this process, a
polygonal representation of the surface model is obtained. This
final representation can be coloured and illuminated by using
computer graphics techniques.
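The interpolation step described above can be illustrated with a short Python sketch (hypothetical code, not part of the original method): it scans the grid edges of a small 2D grey-level image and returns the points where a chosen isovalue crosses an edge, located by linear interpolation.

```python
# Hypothetical sketch: locating isovalue crossings on a 2D grid by
# linear interpolation. Grid values play the role of grey levels;
# the isovalue is assumed to come from a prior segmentation step.
# Crossings exactly at grid points are ignored for brevity.

def edge_crossings(grid, iso):
    """Return (x, y) points where the contour `iso` crosses grid edges."""
    rows, cols = len(grid), len(grid[0])
    points = []
    # horizontal edges: between (i, j) and (i, j + 1)
    for i in range(rows):
        for j in range(cols - 1):
            v0, v1 = grid[i][j], grid[i][j + 1]
            if (v0 - iso) * (v1 - iso) < 0:   # iso lies strictly between
                t = (iso - v0) / (v1 - v0)    # linear interpolation weight
                points.append((j + t, float(i)))
    # vertical edges: between (i, j) and (i + 1, j)
    for i in range(rows - 1):
        for j in range(cols):
            v0, v1 = grid[i][j], grid[i + 1][j]
            if (v0 - iso) * (v1 - iso) < 0:
                t = (iso - v0) / (v1 - v0)
                points.append((float(j), i + t))
    return points

grid = [[0, 0, 0],
        [0, 10, 0],
        [0, 0, 0]]
print(edge_crossings(grid, 5.0))
```

For this grid the isovalue 5 crosses the four edges around the central pixel, each halfway along the edge; the returned points are the "surface points" that a triangulation step would then connect.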
In order to connect the surface points using primitives,
techniques such as Delaunay triangulation and the marching cubes
algorithm can be used. If the surface points are in irregular
form, then 2D or 3D Delaunay triangulation is more effective; for
structured, i.e. regular, points the marching cubes algorithm is
more effective (Schroeder et al., 1998).
2.1.1. Isosurface Extraction by Contouring (Marching
Cubes Algorithm):
In 2D contouring algorithms, the grey level values of pixels are
treated as scalar values at grid (pixel) locations. A contour
value to be extracted passes either through an exact grid location
or between two grid points. If the contour value lies between two
points, its location on the edge formed by the two points can be
found by linear interpolation of the scalar values (grey levels)
of these points. Once the points on the cell edges are generated,
they can be connected into contours using a few different
approaches. One approach detects an edge intersection (i.e. where
the contour passes through an edge) and then tracks the contour as
it moves across cell boundaries (Schroeder et al., 1998).
Another approach uses a divide and conquer technique treating
cells independently. This algorithm is called “marching
squares” in 2D and “marching cubes” in 3D. The basic assump-
tion of this technique is that a contour can only pass through a
cell in a finite number of ways. A case table is constructed that
enumerates all possible topological states of a cell, given com-
binations of scalar values at the cell points. The number of
topological states depends on the number of cell vertices, and
the number of inside/outside relationships a vertex can have
with respect to the contour value. A vertex is considered inside
a contour if its scalar value is larger than the scalar value of the
contour line. For example, if a cell has four vertices and each
vertex can be either inside or outside the contour, there are
2^4 = 16 possible ways in 2D, and 2^8 = 256 ways in 3D, that the
contour can pass through a cell. It is then important how the
contour passes through the cell; in other words, the topology of
the contour in the cell should be defined. To define the cell
topologies, a case table is constructed. The case table has 16
topological states in 2D and 256 states in 3D. In 3D, by using
symmetry properties, the 256 different states can be represented
with 15 cases. The case table can be indexed by encoding the state
of each vertex. During the search process, every cell is compared
with the case table and its topological state is selected. Once
the proper case is selected, the location of the contour is
calculated using interpolation. Further details can be found in
(Lorensen, 1987; Schroeder et al., 1998).
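The per-cell logic can be sketched in Python for the 2D case, "marching squares" (a hypothetical, simplified implementation; the corner and edge numbering and the particular resolution of the two ambiguous saddle cases are conventions chosen here, not taken from the paper):

```python
# Marching squares for a single cell. Corners are numbered
# 0:(0,0), 1:(1,0), 2:(1,1), 3:(0,1); edge e joins corner e to
# corner (e + 1) % 4. The 16-entry case table lists, for each
# inside/outside pattern, which edge pairs the contour joins.

CORNERS = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

CASES = {
    0: [], 15: [],                       # all outside / all inside
    1: [(3, 0)], 2: [(0, 1)], 4: [(1, 2)], 8: [(2, 3)],
    3: [(3, 1)], 6: [(0, 2)], 12: [(1, 3)], 9: [(2, 0)],
    5: [(3, 0), (1, 2)], 10: [(0, 1), (2, 3)],  # ambiguous saddles
    7: [(3, 2)], 11: [(2, 1)], 13: [(1, 0)], 14: [(0, 3)],
}

def edge_point(values, edge, iso):
    """Interpolate the iso-crossing on the given cell edge."""
    a, b = edge, (edge + 1) % 4
    (x0, y0), (x1, y1) = CORNERS[a], CORNERS[b]
    t = (iso - values[a]) / (values[b] - values[a])
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

def march_cell(values, iso):
    """Return the contour segments for one cell as point pairs."""
    # encode the state of each vertex as one bit of the table index
    index = sum(1 << i for i in range(4) if values[i] >= iso)
    return [(edge_point(values, e0, iso), edge_point(values, e1, iso))
            for e0, e1 in CASES[index]]

print(march_cell([10.0, 0.0, 0.0, 0.0], 5.0))
```

With only corner 0 inside, the table selects case 1 and the contour cuts the two edges adjacent to that corner, each at its midpoint.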
2.1.2. Delaunay Triangulation
The Delaunay triangulation technique builds topology directly from
unstructured points. The points are triangulated to create a
topological structure, consisting of n-dimensional simplices, that
completely bounds the points and the linear combinations of the
points (the so-called convex hull). The result of the
triangulation is a set of triangles in 2D or tetrahedrons in 3D,
depending on the dimension of the input data. The Delaunay
triangulation has the property that the circumsphere of any
n-dimensional simplex contains no input points other than its own
vertices (Schroeder, et al., 1998).
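In 2D this empty-circumcircle property can be checked directly. The following Python sketch (hypothetical helper functions, for illustration only) verifies whether a given triangulation of a point set is Delaunay:

```python
import math

def circumcircle(a, b, c):
    """Circumcenter and radius of triangle abc (standard determinant form)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def is_delaunay(points, triangles):
    """Check the empty-circumcircle property for every triangle."""
    for tri in triangles:
        center, r = circumcircle(*(points[i] for i in tri))
        for k, p in enumerate(points):
            # a point strictly inside some circumcircle violates the property
            if k not in tri and math.hypot(p[0] - center[0],
                                           p[1] - center[1]) < r - 1e-9:
                return False
    return True

# Four points forming an irregular quadrilateral: of the two possible
# diagonal splits, only one satisfies the Delaunay property.
pts = [(0, 0), (3, 0), (3, 2), (0, 1)]
print(is_delaunay(pts, [(0, 1, 3), (1, 2, 3)]))  # Delaunay split
print(is_delaunay(pts, [(0, 1, 2), (0, 2, 3)]))  # other diagonal fails
```

A Delaunay triangulation algorithm in effect chooses, among all possible triangulations, the one for which this check succeeds.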
2.2. Volume Rendering
Volume rendering techniques overcome the problems that isosurface
techniques have with the accurate representation of surfaces.
These problems are related to deciding, for every volume element,
whether or not the surface passes through it; this can produce
spurious surfaces or erroneous holes in surfaces, particularly in
the presence of small or poorly defined features. Volume rendering
techniques do not use intermediate geometrical representations.
Volume rendering involves the following steps: formation of an
RGBA volume from the data, reconstruction of a continuous function
from this discrete data set, and projection of it onto the 2D
viewing plane to produce the output image from the desired point
of view. An RGBA volume is a 3D dataset of four-component vectors,
where the first three components are the R, G, B colour components
and A is the alpha value, i.e., A represents the opacity. An
opacity value of zero means the voxel is totally transparent and a
value of 1 means it is totally opaque. Behind the RGBA volume, an
opaque background is placed. The mapping of the data to opacity
values acts as a classification of the data one is interested in.
Isosurfaces can be shown by mapping the corresponding data values
to almost opaque values and the rest to transparent values. The
appearance of volumes can be improved by using shading techniques
to form the RGB mapping. Opacity can be used to see the interior
of the data volume. These interior regions appear as clouds with
varying density and colour. A big advantage of volume rendering is
that this interior information is not thrown away, so that it
enables
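The projection step can be sketched as front-to-back alpha compositing of the RGBA samples along one viewing ray, with the opaque background added last (a hypothetical, simplified example; straight, non-premultiplied alpha is assumed):

```python
# Front-to-back compositing of RGBA samples along a single ray,
# with an opaque background behind the volume.

def composite_ray(samples, background):
    """samples: (r, g, b, a) tuples front to back; background: (r, g, b)."""
    colour = [0.0, 0.0, 0.0]
    transparency = 1.0           # fraction of light still reaching the eye
    for r, g, b, a in samples:
        for i, c in enumerate((r, g, b)):
            colour[i] += transparency * a * c
        transparency *= (1.0 - a)
        if transparency < 1e-4:  # early termination: the ray is fully blocked
            break
    # whatever light is left comes from the opaque background
    return tuple(colour[i] + transparency * background[i] for i in range(3))

# A fully transparent sample contributes nothing; a fully opaque one
# hides everything behind it, including the background.
print(composite_ray([(1, 0, 0, 0.0), (0, 1, 0, 1.0), (0, 0, 1, 0.5)],
                    (1, 1, 1)))
```

Repeating this for one ray per output pixel yields the projected 2D image; translucent interior voxels accumulate partial colour, which is what produces the cloud-like appearance described above.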