5 IMAGE SEQUENCE AND SEGMENTATION
For the test of the shape from luminance algorithm, we used a sequence of video images of a dummy with a wig, recorded in steps of 10° over viewing angles of ±90° relative to a frontal view. For the depth from contours algorithm, we used an analogous sequence of the dummy without the wig (Fig. 6). In both cases, the distance from the head to the camera was 5.95 m. The recording set-up included a fixed Sony XC-77CE high-resolution CCD video camera with a pixel size of 11 µm together with an SGI VideoLab frame grabber. In the first case a 50 mm lens was used, in the second case a 100 mm lens. The dummy was mounted on a swivel chair, thus allowing for the different viewing angles.
The perspective projection, and especially the conversion between the continuous and the discrete image system, require the camera to be calibrated. However, since only pixel accuracy is needed, a lean calibration involving the principal point and some correction terms was applied (Fromherz, 1994b).
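Such a lean projection model can be sketched as follows. The function name, the principal point values, and the example coordinates are illustrative assumptions, not values taken from the paper:

```python
def project_to_pixel(X, Y, Z, f_mm, pixel_mm, cx, cy):
    """Project a camera-frame point (in metres) to discrete pixel coordinates.

    f_mm is the focal length in mm, pixel_mm the pixel size in mm, and
    (cx, cy) the principal point in pixels from the lean calibration.
    """
    x_mm = f_mm * X / Z                 # perspective projection onto the image plane (mm)
    y_mm = f_mm * Y / Z
    col = round(x_mm / pixel_mm + cx)   # continuous -> discrete image system;
    row = round(y_mm / pixel_mm + cy)   # plain rounding, since only pixel
    return col, row                     # accuracy is needed

# Example: 50 mm lens, 11 um pixels, a point 10 cm off-axis at 5.95 m,
# with an assumed principal point of (384, 288)
print(project_to_pixel(0.10, 0.0, 5.95, 50.0, 0.011, 384, 288))  # (460, 288)
```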
The segmentation process turning the image sequence into a sequence of silhouettes is based on a probabilistic image theory and exploits experimental properties of the statistics of local brightness derivatives (Bichsel, 1994). The silhouette in each segmented image is represented by a closed list of vertices.
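One common way to query such a closed vertex list is an even-odd point-in-polygon test on a projected voxel; this is a hypothetical illustration, since the paper does not specify which test is used:

```python
def inside_silhouette(px, py, vertices):
    """Even-odd test: vertices is a closed list of (x, y) silhouette points."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around: the list is closed
        if (y1 > py) != (y2 > py):       # edge straddles the horizontal scan line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside      # each crossing toggles in/out
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_silhouette(5, 5, square))   # True
print(inside_silhouette(15, 5, square))  # False
```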
6 EXPERIMENTS AND RESULTS
6.1 Shape from Luminance
The shape from luminance algorithm was tested on the maximal object volume that resulted from the contours procedure applied to a dummy (Fromherz, 1994a). The angles were limited to views of ±90°, because only these facial parts show a marked luminance distribution. A diffuse illumination ensured that the luminance of the surface elements depended only weakly on the surface normals.
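The effect of diffuse illumination can be illustrated numerically: for a Lambertian surface under perfectly isotropic illumination, the collected irradiance is the same for every normal direction. This sketch is an illustration of that general fact, not a computation from the paper:

```python
import math
import random

def irradiance(normal, n_samples=200000, seed=0):
    """Monte-Carlo irradiance from uniform unit radiance over the full sphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # draw a direction uniformly on the unit sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        cos_t = sum(a * b for a, b in zip(d, normal))
        if cos_t > 0.0:                  # only light from above the surface counts
            total += cos_t
    # sphere average times 4*pi solid angle; the analytic value is pi
    return 4.0 * math.pi * total / n_samples

print(irradiance((0.0, 0.0, 1.0)))   # close to pi, independent of the normal
print(irradiance((1.0, 0.0, 0.0)))   # close to pi as well
```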
When running the algorithm, several problems arise which lead the program to exclude some true surface voxels, i.e. surface elements of the real object which should not be discarded from the voxel cluster. The main reason for this is the experimental fact that the luminance distribution of the surface varies considerably with the viewing angle, owing to deviations from a perfectly diffuse illumination and a perfectly diffuse reflectance model: because of self-shadowing effects the illumination is not perfectly isotropic, and further, physical surfaces may show a non-isotropic luminance distribution even if illuminated perfectly isotropically. Therefore, the values in a set can only be compared up to a threshold. Furthermore, since the gray values of any two subsequent angles in the set are expected to be the most similar ones, these values should only be compared in pairs, i.e. I(k) with I(k+1), I(k+1) with I(k+2), and so on.
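The pairwise comparison up to a threshold can be sketched as follows; the function name and the threshold value are illustrative assumptions:

```python
def is_surface_consistent(gray_values, threshold):
    """gray_values[k] is I(k), the gray value seen under the k-th angle.

    Only neighbouring angles are compared, and only up to a threshold.
    """
    return all(abs(gray_values[k + 1] - gray_values[k]) <= threshold
               for k in range(len(gray_values) - 1))

# a true surface voxel: luminance drifts slowly with the viewing angle
print(is_surface_consistent([120, 124, 127, 125], threshold=8))   # True
# a false voxel: different surface points project onto it in each view
print(is_surface_consistent([120, 180, 95, 140], threshold=8))    # False
```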
Another reason for the faulty exclusion of true surface voxels lies in mirror effects at grazing viewing angles. In every
image, some pixels on the rim of the object belong to object points which mirror the bright background. This effect leads
to unnaturally high gray values of the pixels. These pixels correspond to surface voxels viewed under the outer angles of
their angular subset. This means that the angular subset will contain largely differing gray values. These errors can be
reduced considerably by omitting the outer angles of the angular subset.
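A minimal sketch of this trimming step, assuming a margin of one angle per side (the paper does not state how many outer angles are omitted):

```python
def trim_outer_angles(gray_values, margin=1):
    """Omit `margin` gray values at each end of the angular subset.

    The outermost angles are the grazing views where rim pixels mirror the
    bright background, so they are excluded before the pairwise comparison.
    """
    if len(gray_values) <= 2 * margin:
        return gray_values          # subset too small to trim safely
    return gray_values[margin:len(gray_values) - margin]

# grazing-angle values at both ends mirror the bright background
values = [250, 122, 125, 124, 248]
print(trim_outer_angles(values))   # [122, 125, 124]
```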
A third problem comes from the chosen pixel-voxel resolution. Suppose that a false surface voxel is n voxels away from the real surface, i.e. from the true surface voxel. Now, if we calculate the projections of both these voxels as in Section 3, the difference of the x-coordinates of the resulting pixels gives the location x(n) of the false pixel relative to the true pixel
Fig. 6: Samples of the original image sequence of the dummy without the wig at viewing angles −40°, 0°, +40°.
IAPRS, Vol. 30, Part 5W1, ISPRS Intercommission Workshop "From Pixels to Sequences", Zurich, March 22-24 1995