Close-range imaging, long-range vision

The correspondences are transferred between the views with the trifocal tensor. The trilinear relations [7] link a point correspondence in two images to the image coordinates of the same point in the third view; such equations are valid for points as well as for lines. The point transfer could also be performed with the fundamental matrix, but the trifocal tensor allows outliers to be detected and removed and makes it possible to derive the corresponding point directly in the third view; e.g., the transferred point is given by [8], with e_j the epipole [9]. The coordinates in the third view are recovered up to a scale factor.
The tensor is estimated from the point correspondences: each correspondence provides trilinear relations, 4 of them linearly independent, and for each triplet of images a robust estimation based on a RANSAC-type algorithm [Fischler and Bolles, 1981] computes the model (T tensor) from a minimal set of correspondences and keeps only those that support the recovered trifocal geometry. Applying the procedure to every consecutive triplet of images, a set of tensors (T1, T2, ...) is available. Consecutive tensors share two adjacent images, so a correspondence (xa, ya, xb, yb, xc, yc) found with the first tensor is also searched with the following one; this means that the same point is tracked as long as possible through the sequence, and therefore the resulting image correspondences can be imported into the bundle adjustment.
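The robust estimation step mentioned above can be illustrated with a generic RANSAC loop in the spirit of [Fischler and Bolles, 1981]. This is only a sketch of the scheme: the callables fit_minimal and residual are hypothetical placeholders for the minimal trifocal-tensor estimation and the point-transfer error, which are not detailed here.

```python
import numpy as np

def ransac(correspondences, fit_minimal, residual, sample_size,
           threshold, n_iter=500, rng=None):
    """Generic RANSAC loop [Fischler and Bolles, 1981]: repeatedly fit a model
    to a random minimal sample and keep the model with the most inliers.
    `correspondences` is an (n, 6) array of (xa, ya, xb, yb, xc, yc) triplets;
    `fit_minimal` and `residual` stand in for the trifocal-tensor estimation
    from a minimal set and the point-transfer error (placeholders)."""
    rng = np.random.default_rng() if rng is None else rng
    best_model = None
    best_inliers = np.zeros(len(correspondences), dtype=bool)
    for _ in range(n_iter):
        sample = rng.choice(len(correspondences), sample_size, replace=False)
        model = fit_minimal(correspondences[sample])
        if model is None:                     # degenerate minimal sample
            continue
        errors = residual(model, correspondences)
        inliers = errors < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```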
  
Figure 5: Extracted lines with Canny operator (a) and merged segments (b-c, d-e). Aggregated lines classified according to their direction (f, g, h). 4 control points measured manually on the body and used for the adjustment (i).
3.2 Initial approximation of the unknowns
Because of its non-linearity, the bundle adjustment (section 3.3) needs initial approximations for the unknown interior and exterior orientations.
An approach based on vanishing points is used to compute the interior parameters of the camera (principal point and focal length). The vanishing point is the intersection of parallel lines of the object space transformed to image space by the perspective transformation of the camera. Man-made objects are often present in the images, therefore geometric information of the acquired scene can be derived from these features.
The semi-automatic process to determine the approximations of the interior parameters consists of:
- straight lines extraction with the Canny operator (Figure 5, a);
- merging of short segments, taking into account the segments' slope and their distance from the center of the image (Figure 5, b-c, d-e);
- interactive identification of three mutually orthogonal directions;
- classification of the extracted and aggregated lines according to their directions (Figure 5, f, g, h);
- computation of the three vanishing points for each direction [Collins, 1993] (see the sketch after this list). Each line l_i is represented by its homogeneous coordinates (a_i, b_i, c_i); if there are only two lines, their cross product gives the coordinates of the vanishing point; if n lines l_1, l_2, ..., l_n are involved, we get the "best fit" vanishing point forming the matrix L as:
L = \sum_{i=1}^{n} \begin{bmatrix} a_i^2 & a_i b_i & a_i c_i \\ a_i b_i & b_i^2 & b_i c_i \\ a_i c_i & b_i c_i & c_i^2 \end{bmatrix}    [10]
and computing the vanishing point as the eigenvector associated with the smallest eigenvalue;
- determination of the principal point and the focal length of the camera [Caprile and Torre, 1990].
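As a numerical illustration of the vanishing point computation listed above, the following sketch accumulates the matrix L of equation [10] from the homogeneous line coordinates and extracts the eigenvector of its smallest eigenvalue; normalising each line before accumulation is an added assumption, not prescribed by the text.

```python
import numpy as np

def best_fit_vanishing_point(lines):
    """Least-squares vanishing point of a pencil of image lines.
    `lines` is an (n, 3) array of homogeneous line coordinates (a_i, b_i, c_i).
    Builds L = sum_i l_i l_i^T (equation [10]) and returns the eigenvector
    associated with the smallest eigenvalue of L."""
    L = np.zeros((3, 3))
    for l in np.asarray(lines, dtype=float):
        l = l / np.linalg.norm(l)        # normalise (assumed, to balance lines)
        L += np.outer(l, l)
    eigvals, eigvecs = np.linalg.eigh(L) # eigenvalues in ascending order
    vp = eigvecs[:, 0]                   # eigenvector of the smallest eigenvalue
    # return inhomogeneous coordinates when the point is finite
    return vp / vp[2] if abs(vp[2]) > 1e-12 else vp
```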
The approximations of the exterior orientation are instead computed using spatial resection. In our case, 4 object points measured on the human body (Figure 5, i) are used to compute the approximations of the positions of the cameras.
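The spatial resection step could be sketched as below; this substitutes OpenCV's P3P solver for the resection actually used, and assumes the approximate interior parameters from the vanishing point step are already available.

```python
import numpy as np
import cv2

def approximate_pose(object_pts, image_pts, focal, principal_point):
    """Spatial resection sketch: approximate camera pose from 4 control points
    measured on the body (OpenCV's P3P solver used here as a stand-in;
    P3P expects exactly 4 point correspondences)."""
    K = np.array([[focal, 0.0, principal_point[0]],
                  [0.0, focal, principal_point[1]],
                  [0.0, 0.0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_pts, dtype=np.float64),   # 3-D control points
        np.asarray(image_pts, dtype=np.float64),    # measured image points
        K, None, flags=cv2.SOLVEPNP_P3P)
    if not ok:
        raise RuntimeError("spatial resection failed")
    R, _ = cv2.Rodrigues(rvec)                      # rotation matrix
    camera_position = (-R.T @ tvec).ravel()         # projection centre
    return R, camera_position
```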
3.3 Bundle adjustment
Using the process described in section 3.1, a total of 148 correspondences are found in the images of Figure 1 and then imported in the adjustment. The points used for the space resection are imported as control points. Ten additional parameters [Brown, 1971] are used to model systematic errors: a camera constant correction, two principal point coordinate offsets, five parameters modelling the radial and tangential lens distortion and two parameters for an affine scale factor and shear [Beyer, 1992]. In our case, the principal point coordinate
offsets, the parameter for the correction of the camera constant 
and the first term of the radial lens distortion turned out to be 
significant. The theoretical precision of the tie points is σx = 15.5 mm, σy = 9.8 mm, σz = 14.2 mm while the standard
deviation of unit weight a posteriori is 1.8 micron (1/4 of the 
pixel size). The computed camera poses and 3-D coordinates of 
the tie points are shown in Figure 6. 
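The ten additional parameters listed above can be illustrated with a Brown-style correction function; the exact formulation and sign conventions below are assumptions, since the text only names the modelled effects.

```python
def additional_parameter_correction(x, y, ap, c):
    """Brown-style additional parameters, a common 10-term formulation
    (assumed here; the paper only lists the modelled effects):
    dc     : camera constant correction
    x0, y0 : principal point offsets
    k1..k3 : radial distortion,  p1, p2 : tangential (decentring) distortion
    s, a   : affine scale factor and shear.
    x, y are image coordinates, c is the camera constant."""
    dc, x0, y0, k1, k2, k3, p1, p2, s, a = ap
    xb, yb = x - x0, y - y0                      # reduce to the principal point
    r2 = xb**2 + yb**2
    dr = k1 * r2 + k2 * r2**2 + k3 * r2**3       # radial distortion term
    dx = (xb * dr + p1 * (r2 + 2 * xb**2) + 2 * p2 * xb * yb
          + s * xb + a * yb - xb * (dc / c))     # sign convention assumed
    dy = (yb * dr + p2 * (r2 + 2 * yb**2) + 2 * p1 * xb * yb
          - yb * (dc / c))
    return x + dx, y + dy
```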
Figure 6: Recovered camera positions and object points 
4. MATCHING PROCESS AND 
3-D RECONSTRUCTION OF THE HUMAN BODY 
In order to produce a dense and robust set of corresponding 
image points, an automated matching process is used 
[D'Apuzzo, 2002]. It establishes correspondences between 
triplets of images, starting from a few seed points, and is based on the adaptive least squares method. One image serves as template and the others as search images. The matcher searches for the corresponding points in the two search images
independently and at the end of the process, the data sets are 
merged to become triplets of matched points. For the process, 
all consecutive triplets are used. The 3-D coordinates of each 
matched triplet are then computed by forward intersection using 
the results of the orientation process. At the end, all the points 
are joined together to create a unique point cloud. In order to 
reduce the noise in the 3-D data and get a more uniform density 
of the point cloud, a spatial filter is applied: the object space is 
divided into boxes and the points contained in each box are 
replaced by the center of gravity of the box. After the filtering 
process, a uniform 3-D point cloud is obtained, as shown in 
Figure 7. The generation of a surface model from unorganised 
3-D point clouds requires non-standard procedures, which can be found in commercial packages. A standard 2.5D Delaunay triangulation cannot create a correct meshed surface from the obtained 3-D point cloud shown in Figure 7.
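The spatial filter described above could be sketched as follows, assuming cubic boxes of a user-chosen edge length and taking the centre of gravity of the points falling in each box.

```python
import numpy as np

def box_filter(points, box_size):
    """Spatial filter: divide object space into cubic boxes of edge `box_size`
    and replace the points falling in each box by their centre of gravity.
    `points` is an (n, 3) array; returns the thinned, more uniform cloud."""
    pts = np.asarray(points, dtype=float)
    idx = np.floor(pts / box_size).astype(np.int64)   # box index of each point
    # group points that fall into the same box and average them
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_boxes = inverse.max() + 1
    sums = np.zeros((n_boxes, 3))
    counts = np.zeros(n_boxes)
    np.add.at(sums, inverse, pts)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]                     # one centroid per box
```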
 
	        