one to look at the 3D dataset as a whole. Its disadvantages are the
difficult interpretation of the cloudy interiors and the long time
needed to perform volume rendering compared to surface rendering
(www.cc.gatech.edu, 2001). In our software, colour and opacity
classification can be done interactively while viewing the resulting
changes in the appearance of the volume. Furthermore, a gradient
opacity function can also be used for classification. Volume
rendering can be implemented by various methods; we used ray casting
in this study.
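As an illustrative sketch of such a classification (assuming VTK's
Python bindings rather than our actual implementation; all
control-point values are example assumptions, not measured ones):

    import vtk

    # Scalar opacity: map density values to opacity (example values).
    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(0, 0.0)       # air: fully transparent
    opacity.AddPoint(500, 0.15)    # soft tissue: semi-transparent
    opacity.AddPoint(1150, 0.85)   # bone: nearly opaque

    # Colour transfer function over the same density range.
    colour = vtk.vtkColorTransferFunction()
    colour.AddRGBPoint(0, 0.0, 0.0, 0.0)
    colour.AddRGBPoint(500, 1.0, 0.5, 0.3)
    colour.AddRGBPoint(1150, 1.0, 1.0, 0.9)

    # Gradient opacity: emphasise voxels where the gradient is
    # large, i.e. tissue boundaries.
    gradient = vtk.vtkPiecewiseFunction()
    gradient.AddPoint(0, 0.0)
    gradient.AddPoint(100, 1.0)

    volume_property = vtk.vtkVolumeProperty()
    volume_property.SetScalarOpacity(opacity)
    volume_property.SetColor(colour)
    volume_property.SetGradientOpacity(gradient)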
2.2.1. Ray Casting
For every pixel in the output image, a ray is sent into the data
volume. Colour and opacity values are obtained by interpolation along
the ray at a predefined sampling interval. The interpolated colours
and opacities are merged with each other and with the background by
compositing in back-to-front order to yield the colour of the pixel.
These compositing calculations are simple linear transformations
(Schroeder et al., 1998). This technique is called composite ray
casting.
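For illustration, a minimal sketch of back-to-front compositing along
a single ray (plain Python/NumPy, not our actual implementation):

    import numpy as np

    def composite_ray(colours, opacities, background):
        # colours:    (n, 3) RGB samples along the ray, back to front
        # opacities:  (n,) sample opacities in [0, 1]
        # background: (3,) RGB background colour
        out = np.asarray(background, dtype=float)
        for c, a in zip(colours, opacities):
            # The classic "over" operator: each sample is linearly
            # blended over everything accumulated behind it.
            out = a * np.asarray(c, dtype=float) + (1.0 - a) * out
        return out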
During ray casting, instead of compositing, the maximum or the
average of the intensity values along the ray can be used to find the
colour of a pixel. If the final pixel colour is assigned the maximum
density value encountered along the ray, the technique is called
maximum intensity projection (MIP). If the final colour of the pixel
is computed by averaging the densities sampled along the ray, it is
called average intensity projection (AIP). Images rendered with MIP
or AIP need some intuitive interpretation, because the locations of
the maximum or average values are not known; it is therefore not
possible to tell which objects lie behind others.
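Continuing the sketch above, MIP and AIP reduce the samples along a
ray to a single value:

    def mip_ray(densities):
        # Maximum intensity projection: the brightest sample wins,
        # regardless of where it lies along the ray.
        return max(densities)

    def aip_ray(densities):
        # Average intensity projection: the mean of all samples,
        # which resembles a conventional X-ray image.
        return sum(densities) / len(densities)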
3. SEGMENTATION OF CT AND MR IMAGES
Segmentation is the process of classifying pixels in an image or
volume. It is one of the most difficult tasks in the visualization
process. For the reconstruction of medical 3D surfaces and volumes,
the boundaries of the tissues of interest should be distinguished
from the others on all image slices. After the boundaries have been
found, the pixels that constitute the tissue can be assigned a
constant grey level value. This constant value represents only this
tissue.
The label values can be used as isocontour values for surface
rendering. For volume rendering, not only the surface properties of
the tissues but also their inner properties are important. For this
reason, we should find opacity values for the individual voxels. In
addition, we need different colours to separate the volume elements
that belong to different tissues. Segmentation results are used for
these purposes too.
In this study, we have used three different segmentation approaches:
interactive histogram thresholding, contour segmentation and manual
segmentation.
3.1. Interactive Histogram Thresholding
The simplest way of image segmentation is thresholding. With this
technique, possible threshold values are found from the image
histogram. The pixels that have values above or below a threshold are
assigned constant values; thus a binary segmented image is obtained.
One can also choose more than one threshold, in which case the values
between the thresholds are replaced with constant label values. In
our software, in addition to histogram analysis, we have provided an
interactive thresholding option: when the user changes the threshold
with a track bar, its effect is seen on the screen synchronously, and
the user can keep adjusting the threshold until he/she decides that
the optimal segmentation has been obtained. After thresholding there
may be many holes and many small areas in the images. To delete the
unwanted small areas, we perform a connectivity analysis (Gonzalez,
1987; Teuber, 1993) in which the areas smaller than an area threshold
are deleted. After the connectivity analysis there might still be
some unwanted pixels on the image, so we have written functions to
delete these areas manually. Once the thresholded segmented regions
have been obtained, we fill or delete the remaining holes using
morphological operators such as erode/dilate. The final segmentation
is recorded as a file.
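A minimal sketch of this pipeline (using scipy.ndimage as a stand-in
for our own routines; the threshold and area values are assumed
examples):

    import numpy as np
    from scipy import ndimage

    def threshold_segment(image, lo=300, hi=1200, min_area=50):
        # Binary segmentation: keep pixels between the two thresholds.
        mask = (image >= lo) & (image <= hi)

        # Connectivity analysis: label the connected regions and
        # delete those smaller than the area threshold.
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))

        # Morphological closing (dilate, then erode) and hole filling.
        keep = ndimage.binary_closing(keep)
        return ndimage.binary_fill_holes(keep)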
3.2. Contour Segmentation
In this method, a possible boundary value of a tissue is selected by
histogram analysis. This value is assumed to be the contour value,
and the image of interest is contoured by tracking this value. After
contouring, small areas can be deleted automatically by connectivity
analysis or manually by hand. After refinement of the contours, we
assign labels to the pixels that are bounded by the contour lines. If
the user does not like the contouring result, he/she can ignore it
and easily make a new segmentation.
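As an illustration (using scikit-image's marching-squares contouring
as a stand-in for our contour-tracking routine; the contour value is
an assumed example):

    import numpy as np
    from skimage import draw, measure

    def contour_segment(image, contour_value=600.0):
        # Track the chosen iso-value through the slice.
        labels = np.zeros(image.shape, dtype=np.uint8)
        for contour in measure.find_contours(image, contour_value):
            # Assign a label to the pixels bounded by the contour.
            rr, cc = draw.polygon(contour[:, 0], contour[:, 1],
                                  image.shape)
            labels[rr, cc] = 1
        return labels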
3.3. Manual Segmentation
With automatic segmentation procedures, some incorrect label
assignments are inevitable, so in the literature manual segmentation
is still said to be the best method. For precise medical
applications, manual segmentation will give the best results. In this
case, the user draws the boundaries of the region of interest with
the mouse pointer and can edit the result during segmentation.
However, manual segmentation is very time consuming: it can take
hours or sometimes days to segment complex MR images manually.
4. EXTERNAL FACE SURFACE CONSTRUCTION WITH DIGITAL
PHOTOGRAMMETRY
In this study, we have written a photogrammetric software module for
external face reconstruction. We have tested the module on small
objects with good results, but because of the speed limitations of
our computer we could not test it on a human head. We took
photographs of the patient's head, wearing a mask that carries the
control points, from multiple stations. We calibrated and oriented
the pictures with the software by bundle block adjustment with 10
additional calibration parameters. After the orientation of the
pictures, we begin the automatic matching procedure to measure the
face surface points. For this purpose, we have implemented adaptive
least squares matching (ALSM) with an epipolar constraint. First, the
area of interest is indicated with a window on all images. Then,
during the matching procedure, our program automatically subdivides
this window into small windows on the master image. For the epipolar
constraint, the user enters minimum and maximum Z values for the area
of interest at the start of the procedure; these values can be
obtained approximately from a priori information. The program then
computes minimum and maximum Z values for each sub-window while
processing it, and by using these Z values and the exterior
orientation parameters of the images, it finds the epipolar lines on
the search images. Then along these lines, a cross-correlation
matching is performed, and the pixel where the correlation is highest
is taken as the approximate match.
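As an illustration of how the Z limits bound the search, a minimal
sketch (assuming a simple dict-based exterior orientation with
rotation matrix R, projection centre X0, focal length c and principal
point (x0, y0); not our actual code):

    import numpy as np

    def project(X, eo):
        # Collinearity equations: object point -> image coordinates.
        d = eo["R"] @ (X - eo["X0"])
        return np.array([eo["x0"] - eo["c"] * d[0] / d[2],
                         eo["y0"] - eo["c"] * d[1] / d[2]])

    def epipolar_segment(px, eo_master, eo_search, z_min, z_max):
        # Direction of the ray through the master pixel, in object
        # space (image vector rotated back by the master camera).
        xc = np.array([px[0] - eo_master["x0"],
                       px[1] - eo_master["y0"],
                       -eo_master["c"]])
        d = eo_master["R"].T @ xc
        endpoints = []
        for z in (z_min, z_max):
            # Intersect the ray with the horizontal plane Z = z and
            # project the object point into the search image.
            t = (z - eo_master["X0"][2]) / d[2]
            endpoints.append(project(eo_master["X0"] + t * d,
                                     eo_search))
        return endpoints  # segment searched by cross-correlation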