as a function of the viewing angle α. The distance between the two locations of the false pixel under two succeeding viewing angles is given by D(0°-10°) = x(0°) - x(10°). On the assumption that the voxel spacing is small compared to the distance to the rotation centre, and that the resolution is chosen such that a voxel projects exactly onto one pixel, D(0°-10°) can be approximated by (Fromherz, 1994b)
D(0°-10°) = n sin(10°)    (3)
For n up to 5, this distance lies in the sub-pixel domain. This shows that comparing subsequent luminance pairs I(k), I(k+1) as suggested above is futile, because it amounts to comparing a pixel with itself. This estimate has two immediate consequences. First, the gray values in an angular set must be compared in steps of 20° or 40°, i.e. I(k) with I(k+2), or I(k) with I(k+4), respectively. And, secondly, the voxel resolution has to be doubled, i.e. the voxel spacing has to be halved.
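To make the size of this effect concrete, the following short Python sketch evaluates the approximation of Eq. (3) for a few voxel distances n and angular steps. The function name and the printed table are purely illustrative and are not part of the original implementation.

import math

def projected_shift(n, delta_deg):
    """Approximate shift, in pixels, of a false voxel's projection between
    two views separated by delta_deg degrees, following Eq. (3):
    D = n * sin(delta). Here n is the voxel's distance from the rotation
    centre, measured in voxel units."""
    return n * math.sin(math.radians(delta_deg))

# For n up to 5 the shift at a 10 deg step stays below one pixel; larger
# angular steps (and a finer voxel grid) are needed before the projection
# moves by a full pixel.
for step in (10, 20, 40):
    shifts = ", ".join(f"n={n}: {projected_shift(n, step):.2f}" for n in range(1, 6))
    print(f"step {step:2d} deg -> {shifts}")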
The raised resolution also helps to reduce another problem, which arises from the relation between the physical dimensions of the dummy's head and the image dimensions. At the normal resolution, with sub-sampled images, one pixel corresponds to approximately 4 mm. Thus a human pupil, for instance, is covered horizontally by only about one or two pixels. This is not enough to accurately reconstruct a complicated surface like the eye region of a human head.
The choice of the threshold depends on the average image brightness. For our image set, recorded under diffuse 
illumination, a threshold of 10 to 15 gray levels showed good results. As to the steps in which to compare the gray values 
in an angular set, steps of 40° were found to be the best choice. Finally, it has to be mentioned that certain areas of the
object don't show enough contrast to be processed. The corresponding voxels, according to the sculptor's principle, 
remain unaltered. 
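As an illustration of how such a pair-wise comparison might be organised, the following Python sketch tests one voxel's angular set of gray values against the threshold, comparing I(k) with I(k+4), i.e. in 40° steps. The names and the exact decision rule are our illustrative reading of the procedure, not the authors' implementation; in particular, missing or low-contrast observations are only hinted at by treating them as None.

def luminance_inconsistent(angular_set, step=4, threshold=12):
    """Pair-wise test over one voxel's angular set of gray values.

    angular_set: gray value I(k) seen by the voxel at viewing angle k*10 deg,
                 or None where the voxel is not visible or shows no contrast.
    step:        index offset of the comparison; 4 corresponds to 40 deg.
    threshold:   gray-level difference treated as inconsistent (10-15 here).

    Returns True if the voxel should be removed by the luminance loop, and
    False if it is to remain unaltered (sculptor's principle)."""
    pairs = [(a, b) for a, b in zip(angular_set, angular_set[step:])
             if a is not None and b is not None]
    if not pairs:
        return False  # nothing to compare: leave the voxel unaltered
    return any(abs(a - b) > threshold for a, b in pairs)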
To illustrate the results of the shape from contours algorithm (Fromherz, 1994a), a slice of the voxel cluster at the height 
of the eyes is shown in Fig. 7(left). Fig. 7(right) shows the same slice after 45 iterations of the shape from luminance loop 
when the algorithm stopped because no more voxels were changed. An image of the reconstructed dummy after the 
contours algorithm and after the luminance loop is presented in Fig. 8. 
6.2 Multiple Depth Maps 
The depth from contours algorithm was applied to the image sequence of the dummy. Following the viewing-angle steps, we computed a set of depth maps in steps of 10° over the angular range of ±90°. This set of depth maps already contains enough volume data to produce a full 3D description of the dummy. However, due to the perspective projection and the limited image sequence of only ±90°, the description would not be as accurate on the back of the dummy's head as if a full rotational scan had been used.
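A minimal sketch of how such a set of depth maps could be assembled is given below. The callable depth_from_contours merely stands in for the depth from contours algorithm (it is assumed to map one video frame to a depth map), so only the loop structure, not the algorithm itself, is shown.

def depth_map_set(frames_by_angle, depth_from_contours):
    """Compute one depth map per viewing angle.

    frames_by_angle:     dict mapping a viewing angle in degrees (here -90 to
                         +90 in steps of 10) to the corresponding video frame.
    depth_from_contours: callable frame -> depth map, standing in for the
                         depth from contours algorithm."""
    return {angle: depth_from_contours(frame)
            for angle, frame in sorted(frames_by_angle.items())}

# The +/-90 deg sequence used for the dummy covers 19 viewing directions:
angles = list(range(-90, 91, 10))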
In Fig. 6, examples of three original video frames at the viewing angles 0° and ±40° are displayed. The shaded depth maps of the same viewing directions, computed with the full image sequence of ±90°, are shown in Fig. 9.
  
Fig. 7: Slice of the voxel cluster at eye height, after the contours algorithm (left) and after the luminance algorithm (right).
	        