3. OBJECT RECONSTRUCTION 
3.1 Image Segmentation 
In our experiment, the object is rotated and the images are
captured and preprocessed. First, the contour of the real object
must be extracted from the input images. To this end, a
monochromatic background was used to distinguish the object
from the environment. The decision whether a pixel represents
background or object is based on its position in the IHS color
space. Since the blue background is sufficiently homogeneous,
we can easily define a hue domain which is considered
background. Figure 4 shows the original image on the left and
the result of the segmentation on the right.
    
Figure 4. Image segmentation using an IHS color space 
histogram. Original image (left) and the resulting silhouette 
extraction (right). 
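As an illustration of this hue-based classification, the following Python sketch (our own, not taken from the paper) converts RGB pixels to hue and masks everything inside an assumed blue hue domain. The bounds HUE_MIN and HUE_MAX are placeholder values, since the actual domain would be derived from the IHS histogram of the images; the intensity and saturation channels are ignored here.

```python
import numpy as np

# Illustrative hue bounds for the blue background (degrees); in practice
# the domain would be read off the IHS histogram of the input images.
HUE_MIN, HUE_MAX = 200.0, 260.0

def rgb_to_hue(img):
    """Compute the hue channel (in degrees) of an RGB float image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = np.max(img, axis=-1)
    mn = np.min(img, axis=-1)
    delta = np.where(mx - mn == 0, 1e-9, mx - mn)  # avoid division by zero
    hue = np.zeros_like(mx)
    hue = np.where(mx == r, (g - b) / delta % 6, hue)
    hue = np.where(mx == g, (b - r) / delta + 2, hue)
    hue = np.where(mx == b, (r - g) / delta + 4, hue)
    return hue * 60.0

def segment_object(img):
    """Return a boolean silhouette mask: True = object, False = background."""
    hue = rgb_to_hue(img.astype(np.float64) / 255.0)
    background = (hue >= HUE_MIN) & (hue <= HUE_MAX)
    return ~background
```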
3.2 Shape Modelling Using Voxel Carving
When the camera geometry is known, a bounding pyramid can
be constructed for every image. All voxels are projected into
every image; if the image coordinate falls on a background
pixel, the voxel is marked for deletion (voting). The shape is
computed volumetrically by carving away all voxels outside the
projected silhouette cone (see Fig. 5). The intersection of all
silhouette cones from multiple images defines an estimate of the
object geometry called the visual hull. As more views are used,
this technique progressively refines the object model. Finally,
the voxels are purged using a threshold on the number of votes.
    
Figure 5. Voting-based carving of a voxel cube using various
silhouettes under central projection.
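A minimal sketch of this voting scheme, assuming per-image projection functions and the silhouette masks from Section 3.1 (the function and parameter names are ours, not the paper's):

```python
import numpy as np

def carve_voxels(voxel_centers, cameras, silhouettes, vote_threshold):
    """Voting-based voxel carving.

    voxel_centers: (N, 3) array of voxel positions in object space.
    cameras:       list of callables mapping a 3D point to (row, col) pixels.
    silhouettes:   list of boolean masks (True = object), one per camera.
    Returns a boolean mask over the voxels (True = keep).
    """
    votes = np.zeros(len(voxel_centers), dtype=int)
    for project, mask in zip(cameras, silhouettes):
        h, w = mask.shape
        for i, X in enumerate(voxel_centers):
            r, c = (int(round(v)) for v in project(X))
            # A vote marks the voxel for deletion: it projects onto a
            # background pixel (or outside the image) in this view.
            if not (0 <= r < h and 0 <= c < w) or not mask[r, c]:
                votes[i] += 1
    # Purge voxels whose deletion votes reach the threshold.
    return votes < vote_threshold
```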
3.3 Color Image Matching 
In order to refine the model, color image matching was used to
get inside the visual hull and carve away voxels in the critical
areas. The image matching was done using normalized
cross-correlation, which is explained in more detail later in
this chapter. However, the search region in the second image
can be narrowed down to a broad line since the image
orientation is known. Although an object point corresponds to
exactly one image point, the reverse is not true: every object
point along a line of sight may be projected onto the same
pixel. Only by using a second image are we able to derive a
unique point in space. But we can also use this line of sight to
limit the positions in the second image where the object point
may appear. This situation is illustrated in figure 6 and is
known as the epipolar line.
Figure 6. Epipolar geometry. 
We can use this line to limit the search area for image matching,
since matching is a very time-consuming process.
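As a reference for the matching criterion, normalized cross-correlation between two equally sized patches can be sketched as follows (a standard textbook formulation, not necessarily the paper's exact implementation):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches.

    Returns a value in [-1, 1]; values near 1 indicate a good match.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    if denom == 0.0:            # flat, textureless patch: no usable signal
        return 0.0
    return float((a @ b) / denom)
```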
In this work we do not actually calculate the epipolar line;
instead, we trace the pixel of interest back into object space
step by step. For each step, we project this three-dimensional
coordinate into the second image, where we perform the
correlation.
The following equation is used to trace pixels back into object 
space: 
\[
\lambda \begin{pmatrix} x_i - x_0 \\ y_i - y_0 \\ -c \end{pmatrix}
= R \begin{pmatrix} X - X_0 \\ Y - Y_0 \\ Z - Z_0 \end{pmatrix}
\tag{1a}
\]
\[
\Rightarrow \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
= R^{-1}\,\lambda \begin{pmatrix} x_i - x_0 \\ y_i - y_0 \\ -c \end{pmatrix}
+ \begin{pmatrix} X_0 \\ Y_0 \\ Z_0 \end{pmatrix}
\tag{1b}
\]
where \((x_i, y_i)\) is the measured image point, \((x_0, y_0)\) the principal point, \(c\) the principal distance, \(R\) the rotation matrix, \((X_0, Y_0, Z_0)\) the projection centre, and \(\lambda\) the scale factor that parameterizes the position along the line of sight.
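A sketch of this procedure, combining equation (1b) with the ncc() function sketched above; the sampling range of \(\lambda\), the step count, and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def trace_and_match(pixel, R, c, pp, X0, template, image2, project2,
                    lam_range=(0.5, 5.0), steps=100):
    """Trace an image point into object space (eq. 1b) and correlate.

    pixel:    (x_i, y_i) in the first image; pp: principal point (x_0, y_0).
    R, X0:    rotation matrix and projection centre of the first camera.
    c:        principal distance.
    project2: callable mapping an object point to (row, col) in image2.
    Returns the object point with the highest NCC score along the ray.
    """
    ray = np.array([pixel[0] - pp[0], pixel[1] - pp[1], -c], dtype=float)
    h, w = template.shape
    best_score, best_point = -np.inf, None
    for lam in np.linspace(lam_range[0], lam_range[1], steps):
        # Eq. (1b); for a rotation matrix, R^{-1} = R^T.
        X = R.T @ (lam * ray) + X0
        row, col = project2(X)
        r0 = int(round(row)) - h // 2   # patch centred on the projection
        c0 = int(round(col)) - w // 2
        patch = image2[r0:r0 + h, c0:c0 + w]
        if r0 < 0 or c0 < 0 or patch.shape != template.shape:
            continue                    # projected outside the second image
        score = ncc(template, patch)    # ncc() from the sketch above
        if score > best_score:
            best_score, best_point = score, X
    return best_point, best_score
```

The \(\lambda\) values at which the correlation peaks directly identify which voxels along the line of sight survive, which is how the matching carves into the visual hull.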