
3-D localization of reference marks: for each trinocular
view, all visible fiducial marks are located with subpixel
accuracy. A point-matching is then performed over such
points and, wherever a match is found, the back-projected
point is determined. By doing so, we obtain, for each
trinocular view, the 3D camera coordinates of a subset of
the fiducial marks. Some a priori knowledge of the relative
position of the targets in the scene helps in identifying and
labeling them. Once labeled, the reference points can be
matched throughout the different triplets, thus allowing us
to compute the camera motion between them. As described
above, the rigid motion that best overlaps the targets of
different triplets is taken as the relative motion of the
camera system from one triplet to the next. This operation
is carried out for all consecutive pairs of views. Using the
estimated camera motion, the coordinates of all 3D edges
can then be converted into a common reference frame. At
this point, if the camera motion has been accurately
estimated, a simple merging of the 3D edges obtained from
each triplet provides a complete description of the
observed object.
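
The paper takes the best-overlap rigid motion as given without detailing its computation; a standard closed-form choice for the least-squares rigid motion between two sets of matched 3D points is the SVD-based (Kabsch/Umeyama-style) solution sketched below. The Python function name and array layout are illustrative assumptions, not the authors' code.

    import numpy as np

    def rigid_motion(P, Q):
        # Least-squares rigid motion (R, t) with Q_i ~= R @ P_i + t, for
        # matched fiducial-mark coordinates P, Q of shape (N, 3) expressed
        # in the camera frames of two consecutive triplets.
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)    # centroids
        H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation, det = +1
        t = cQ - R @ cP
        return R, t

Chaining the motions estimated for all consecutive pairs of triplets is what brings the 3D edges of every triplet into the common reference frame.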
3-D surface interpolation: in some cases it is highly
desirable to obtain a 3D model whose shape is described
by a closed surface rather than by edges. Moreover, for
applications like image synthesis or virtual reality, 3D
models are needed in which, besides the shape, the
original pictorial information on the surface (texture) is
also recovered. For this reason, the last processing step is
the construction of a surface that, by passing through all
edges, approximates the object surface. The 3D surface is
obtained using an optimized surface interpolation
technique which is, in fact, a discretization of the thin-plate
spline algorithm (Discrete Smooth Interpolation [5]). This
technique allows local discontinuities in the interpolated
surface, while performing a spline-like interpolation on
smooth surface regions. It is therefore particularly suitable
for interpolating 3D shape information, which is typically
characterized by edges and depth discontinuities (i.e. at
object borders).
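
As an illustration of the underlying idea, and not of the authors' implementation of Discrete Smooth Interpolation [5], the following sketch fits a discretized thin-plate (biharmonic) surface to scattered depth constraints on a regular grid; the grid layout, the constraint format and the row-substitution trick are assumptions made here for brevity. DSI would additionally allow local discontinuities, which in this scheme would amount to removing the neighbor links that cross a marked edge.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def interpolate_depth(H, W, known):
        # Fill an H x W depth grid by minimizing ||L z||^2, where L is the
        # 5-point Laplacian (a discretized thin-plate energy), subject to
        # known depths {(row, col): z} rasterized from the 3D edges.
        n = H * W
        idx = lambda r, c: r * W + c
        rows, cols, vals = [], [], []
        for r in range(H):
            for c in range(W):
                nbrs = [(r + dr, c + dc)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= r + dr < H and 0 <= c + dc < W]
                rows += [idx(r, c)] * (len(nbrs) + 1)
                cols += [idx(r, c)] + [idx(*p) for p in nbrs]
                vals += [float(len(nbrs))] + [-1.0] * len(nbrs)
        L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
        A, b = (L.T @ L).tolil(), np.zeros(n)
        for (r, c), z in known.items():            # impose the constraints
            i = idx(r, c)
            A[i, :] = 0.0
            A[i, i] = 1.0
            b[i] = z
        return spla.spsolve(A.tocsr(), b).reshape(H, W)

For the free nodes the solved equations are exactly the stationarity conditions of the thin-plate energy, so the surface passes through the constraints while staying smooth elsewhere.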
The operation of recovering the original pictorial
information (texture) is performed through a back-projection
of the luminance information associated with the original
images (from the original viewpoint to the scene space).
Roughly speaking, the images are projected onto the
interpolated surface much as a "slide projector" would do.
In order to obtain good-quality results from this
texture-mapping operation, particular care must be taken to
compensate for the different illumination conditions at the
different viewpoints of the original images. The quality of
the texture mapping is also affected by the quality of the
camera calibration and camera motion estimation, whose
errors normally cause undesirable misalignments when
overlapping the texture from different viewpoints.
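
A minimal sketch of this "slide projector" idea, assuming a pinhole model described by a 3x4 projection matrix (composed from the calibrated intrinsics and the estimated camera motion) and omitting both occlusion testing and the photometric compensation discussed above:

    import numpy as np

    def sample_texture(points, P, image):
        # Back-project luminance onto the surface: project each 3D point
        # (N, 3, common reference frame) into one original view through
        # the 3x4 matrix P and sample the (H, W) luminance image there.
        Xh = np.hstack([points, np.ones((len(points), 1))])   # homogeneous
        uvw = Xh @ P.T
        u, v = uvw[:, 0] / uvw[:, 2], uvw[:, 1] / uvw[:, 2]
        H, W = image.shape
        tex = np.full(len(points), np.nan)       # NaN = outside this view
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        tex[inside] = image[v[inside].astype(int), u[inside].astype(int)]
        return tex

In practice the values sampled from the different viewpoints must be blended, which is where the illumination compensation above becomes critical.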
5. EXPERIMENTAL RESULTS 
Some examples of application of the system are 
presented in this paper. Two sample objects, a fish- 
shaped hand-crafted object and a toy train engine, have 
been used to test the proposed full-3D reconstruction 
procedure. Each object has been placed on a low-cost 
moving support in front of the trinocular camera system. 
The cameras are placed at the vertices of a triangle in 
order to avoid matching ambiguities and to guarantee 
favorable conditions in the relative epipolar geometry. 
Figures 2 and 5 show, for each object, one of the original 
images taken by the camera system. 
Figure 3 shows, for the object "train", a view of the 3D 
edges localized in one trinocular shot. Thanks to the 
accuracy of the 3D edge reconstruction and camera 
calibration algorithms, this technique achieves a relative
accuracy of 200-300 ppm in the 3D location of the edges.
Figures 4 and 6 show the reconstruction obtained after
merging the 3D edges from all trinocular views. As can be
seen, edges from different triplets merge in a very precise
fashion, which confirms the quality of the camera motion
estimation. The maximum diameter of the bundles of
homologous edges turns out to be smaller than 0.5 mm,
which corresponds to a relative precision of 300-400 ppm.
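(For reference, relative precision in ppm is the ratio
between the residual and the extent over which it is
measured: a 0.5 mm bundle diameter corresponds to
300-400 ppm for a measured extent of roughly 1.25-1.7 m,
since 0.5 mm / 1.7 m ≈ 300 ppm.)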
Figure 7 shows, for the object "fish", a synthetic 
perspective view of the reconstructed surface of the 
object, where the pictorial information has been mapped 
from the original images through texture-mapping. The 
fidelity of the rendering and the sharpness of the 
projected texture prove the good quality of the proposed 
texture-mapping procedure. 
6. CONCLUSIONS 
The experimental results have shown that, in spite of the 
low cost of the system, the achieved level of accuracy is 
quite high. In fact, considering just one trinocular view, the
3D coordinates of visible sharp edges can be computed
with a precision of about 200-300 ppm. When
considering a series of trinocular views for a complete 
reconstruction of the scene, the accuracy remains nearly 
unchanged (300-400 ppm), which emphasizes the quality
of the camera motion estimate. 
Further improvements of the proposed reconstruction 
method are currently under development, especially 
those related to the 3D edge-merging process and the 
problem of "full-3D" interpolation of surfaces of complex 
volumes. In particular, we are focusing on the integration 
of volumetric reconstruction methods and the above 
technique. 
REFERENCES: 
[1] C. Braccini, G. Gambardella, A. Grattarola, S. 
Zappatore: "Motion estimation of rigid bodies: effects 
of the rigidity constraints." EURASIP, 1986. Signal
Processing III: Theories and Applications, pp. 645-648.
[2] T.S. Huang, O.D. Faugeras: "Some properties of the 
E matrix in two-view motion estimation." IEEE Trans.
on Pattern Analysis and Machine Intelligence, Vol.
11, No. 12, Dec. 1989, pp. 1310-1312. 
[3] S. Soatto, R. Frezza, P. Perona: “Motion estimation 
on the Essential Manifold.” In: Computer Vision - 
ECCV '94. Third European Conference on Computer 
Vision. Proceedings, Vol. II, Stockholm, Sweden,
2-6 May 1994, pp. 61-72.
[4] R.Y. Tsai: "A versatile camera calibration technique
    for high-accuracy 3D machine vision metrology using
    off-the-shelf TV cameras and lenses." IEEE Journal of
    Robotics and Automation, Vol. RA-3, No. 4, Aug. 1987,
    pp. 323-344.
[5] J.L. Mallet: "Discrete smooth interpolation." ACM
    Transactions on Graphics, Vol. 8, No. 2, Apr. 1989,
    pp. 121-144.
 
	        