
180°. This way we increase the chance that the node does not lie in the occluded area in at least one of the planes. The projected node is now intersected with both images in the same way as in Step 4 (Figure 11c). If at least one image says "this node is white", it is set to white. Otherwise, if at least one image says "this node is grey", it is set to grey; only if both images agree that the node is black does it stay black. The intersection with the object in the image is performed in the same way as the intersection of a node with the object's silhouettes in the Shape from Silhouette input images.
6. If the node is set to grey, it is divided into 8 child nodes of the current level + 1, all of which are marked "black".
7. Processing of the current node is finished. If there are more nodes in the current level, set the current node to the next node and go back to Step 4. If all nodes of the current level have been processed, increment the current level and go to Step 3 (a code sketch of this level-by-level loop is given after the list).
8. The final octree model has been built (Figure 11d).
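As a summary of Steps 3-8, the following minimal sketch shows the level-by-level carving loop in Python. The Node class, the merge_votes() voting rule and the classify_in_image() helper (which stands in for the projection and intersection test of Steps 4 and 5, here generalized to an arbitrary list of binarized input images) are illustrative assumptions, not the original implementation.

```python
# Minimal sketch of the level-by-level octree carving loop (Steps 3-8).
# classify_in_image(node, image) is a hypothetical helper that projects the
# node into one binarized input image and returns "white", "grey" or "black".
from dataclasses import dataclass, field

@dataclass
class Node:
    center: tuple               # node centre in world coordinates (mm)
    size: float                 # edge length of the cube (mm)
    color: str = "black"        # "white" = outside, "grey" = on surface, "black" = inside
    children: list = field(default_factory=list)

def merge_votes(votes):
    """Step 5: any image voting "white" wins, otherwise any "grey", otherwise "black"."""
    if "white" in votes:
        return "white"
    if "grey" in votes:
        return "grey"
    return "black"

def subdivide(node):
    """Step 6: split a grey node into 8 black children of half the edge length."""
    cx, cy, cz = node.center
    h = node.size / 4.0         # offset from the parent centre to each child centre
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                node.children.append(Node((cx + dx, cy + dy, cz + dz), node.size / 2.0))
    return node.children

def build_octree(root, images, classify_in_image, max_level):
    """Steps 3-8: refine the octree level by level up to max_level."""
    current = [root]            # the root node starts out "black"
    for level in range(max_level + 1):
        next_level = []
        for node in current:    # Steps 4-5: classify every node of the current level
            node.color = merge_votes([classify_in_image(node, img) for img in images])
            if node.color == "grey" and level < max_level:
                next_level.extend(subdivide(node))   # Step 6
        current = next_level    # Step 7: continue with the next level
    return root                 # Step 8: the final octree model
```

In this sketch, grey leaf nodes at the maximum level approximate the object surface, while black nodes lie entirely inside the reconstructed volume.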
Figure 11. Algorithm overview: (a) binarization of input images (Shape from Silhouette, Shape from Structured Light), (b) initial octree, (c) intersection testing, (d) final model.
5. RESULTS 
Figure 12. Virtual acquisition setup (positions of the camera, laser and turntable in the world coordinate system).
We build input images of size 640 × 480 pixels, in which 1 pixel corresponds to 1 mm in the x-z plane of the world coordinate system.
Having built the camera model and the input images, we can test our 3D modeling algorithms with varying modeling parameters. As a measure of the accuracy of the models, we compare the size (width, height and length) and the volume of each model with the size and the analytical volume of the object.
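As a small illustration of this accuracy measure, the sketch below computes the signed relative volume error in percent. The function name is ours, and the way the model volume is obtained (e.g. by summing the volumes of all occupied octree cubes) is an assumption for illustration; the sphere used in the example is the synthetic object introduced in Section 5.1 below.

```python
# Sketch of the volume-based accuracy measure: signed deviation of the model
# volume from the analytical object volume, in percent.
import math

def relative_volume_error(model_volume_mm3, analytical_volume_mm3):
    """Positive values mean the model is larger than the object."""
    return 100.0 * (model_volume_mm3 - analytical_volume_mm3) / analytical_volume_mm3

# Example with the synthetic sphere of Section 5.1 (r = 200 mm):
sphere_volume = 4.0 / 3.0 * math.pi * 200.0 ** 3        # about 3.351e7 mm^3
print(relative_volume_error(1.0083 * sphere_volume, sphere_volume))   # ~ +0.83
```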
5.1 Synthetic object 
As the synthetic object we create a sphere with radius r = 200 mm, shown in Figure 13a. If we place the center of the sphere at the origin of the world coordinate system (see Figure 12), the sphere will look the same from all possible views. For our virtual acquisition system we can assume that there are neither camera nor light occlusions, and we can construct perfect input images of the sphere (Figure 13b and c) which can be used for any view.
Figure 13. Synthetic sphere (a) and an input image for Shape 
from Silhouette (b) and Shape from Structured Light (c) 
For tests with synthetic objects we can build a model of a virtual camera and laser and create input images in such a way that the images fit perfectly into the camera model. This way we can analyze the accuracy of the constructed models without the impact of camera calibration errors. The parameters and the position of the camera and the laser are arbitrary, so we choose realistic values. We assume a virtual camera with focal length f = 20 mm, placed on the y axis of the world coordinate system, 2000 mm away from its origin (Figure 12). We set the distance between two neighboring sensor elements of the camera to dx = dy = 0.01 mm. The laser is located on the z axis of the world coordinate system, 850 mm away from its origin, and the turntable 250 mm below the x-y plane of the world coordinate system, with its rotational axis identical to the z world axis, as shown in Figure 12.
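To make the virtual setup concrete, the sketch below projects a world point with a pinhole camera that uses the values just stated (f = 20 mm, camera centre 2000 mm away on the y axis, 0.01 mm between sensor elements, 640 × 480 image). The axis conventions and the principal point at the image centre are assumptions for illustration. Note that with these numbers a point in the x-z plane (y = 0) maps to 20/2000 = 0.01 mm on the sensor per world millimetre, i.e. exactly one pixel per millimetre, which matches the resolution of the input images described above.

```python
# Sketch of the virtual pinhole camera with the parameters stated above.
# Axis orientation and principal point (image centre) are assumptions.
FOCAL_LENGTH_MM = 20.0
CAMERA_DISTANCE_MM = 2000.0          # camera centre at (0, 2000, 0) on the y axis
PIXEL_PITCH_MM = 0.01                # distance between neighbouring sensor elements
IMAGE_WIDTH, IMAGE_HEIGHT = 640, 480

def project_point(x, y, z):
    """Project a world point (in mm) into pixel coordinates of the virtual camera,
    whose optical axis points from (0, 2000, 0) towards the world origin."""
    depth = CAMERA_DISTANCE_MM - y   # distance of the point along the optical axis
    if depth <= 0:
        return None                  # point lies behind the camera
    u = FOCAL_LENGTH_MM * x / depth / PIXEL_PITCH_MM + IMAGE_WIDTH / 2.0
    v = FOCAL_LENGTH_MM * z / depth / PIXEL_PITCH_MM + IMAGE_HEIGHT / 2.0
    return u, v

print(project_point(0.0, 0.0, 0.0))      # world origin -> image centre (320.0, 240.0)
print(project_point(100.0, 0.0, 0.0))    # 100 mm along x -> 100 pixels from the centre
```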
Note that the image from Figure 13c cannot be obtained using the laser from Figure 12. Instead, we assume that the complete profile of the sphere is visible, in order to be able to reconstruct the complete object using Shape from Structured Light only. Since the sphere does not contain any cavities, Shape from Silhouette can also reconstruct it completely. Therefore, we can measure the accuracy of each of the methods independently, as well as that of the combined method.
In the first test we build models using 360 views with a constant angle of 1° between two consecutive views, while increasing the octree resolution. It turned out that the Shape from Silhouette method performed best with an octree resolution of 128³, where the approximation error was +0.83% of the actual volume, the