control information, usually not all the additional parameters 
(APs) are recovered. 
3.2 Matching process 
In order to recover the 3D shape of the static human figure, a dense set of corresponding image points is extracted with an automated matching process [D'Apuzzo, 2003]. The matching establishes correspondences between triplets of images, starting from a few seed points selected manually and distributed over the region of interest. The epipolar geometry recovered in the orientation process is also used to improve the quality of the results. The central image is used as template and the other two (left and right) are used as search images (slaves). The matcher searches for the corresponding points in the two slaves independently and, at the end of the process, the data sets are merged into triplets of matched points. The matching can fail where natural texture is lacking (e.g. uniform colour); the performance of the process is therefore improved with a Wallis filter, which enhances the local contrast (texture) of the images.
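For illustration, a minimal sketch of such a matching step is given below: a template patch around a seed point in the central image is compared by normalized cross-correlation (NCC) with candidate patches sampled along the epipolar line in one slave image. This is a generic formulation with hypothetical names, not the implementation of [D'Apuzzo, 2003].

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_epipolar(template_img, slave_img, pt, F, half=7):
    """Search along the epipolar line l = F @ (x, y, 1) of point `pt`
    (template image) for the best NCC match in `slave_img`.
    Assumes a non-vertical epipolar line; returns (x, y) and the score."""
    x0, y0 = int(round(pt[0])), int(round(pt[1]))
    patch = template_img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    a, b, c = F @ np.array([pt[0], pt[1], 1.0])  # line: a x + b y + c = 0
    h, w = slave_img.shape
    best_score, best_xy = -1.0, None
    for x in range(half, w - half):
        y = int(round(-(a * x + c) / b))
        if half <= y < h - half:
            cand = slave_img[y - half:y + half + 1, x - half:x + half + 1]
            score = ncc(patch, cand)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

In the actual process the two slaves are searched independently and the two results are merged into a triplet; a refinement step (e.g. least-squares matching) can then bring the integer positions to sub-pixel accuracy.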
3.3 3D reconstruction and modeling 
The 3D coordinates of the 2D matched points are afterwards computed by forward intersection, using the results of the orientation process. A spatial filter is also applied to reduce the noise in the 3D data (possible outliers) and to obtain a more uniform density of the point cloud. If the matching process fails, holes can be present in the generated point cloud; therefore a semi-automatic closure of the gaps is performed, using the curvature and density of the surrounding points. Moreover, if small movements of the person occurred during the acquisition, the point cloud of each single triplet can appear misaligned with respect to the others. Therefore a 3D conformal transformation is applied: one triplet is taken as reference and all the others are transformed onto it.
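The forward intersection can be written as a linear least-squares problem: each oriented image contributes two equations relating its 3x4 projection matrix to the homogeneous 3D point. The sketch below is the standard DLT-style formulation (hypothetical names), not necessarily the exact solver used here.

```python
import numpy as np

def forward_intersection(points_2d, proj_mats):
    """Triangulate one 3D point from n >= 2 oriented images.
    points_2d: list of (x, y) image observations of the same point.
    proj_mats: matching list of 3x4 projection matrices P = K [R | t].
    Each view adds two rows: x (P_3 X) - P_1 X = 0 and y (P_3 X) - P_2 X = 0;
    the homogeneous solution X is the null vector of the stacked system."""
    A = []
    for (x, y), P in zip(points_2d, proj_mats):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]              # right singular vector of the smallest singular value
    return X[:3] / X[3]     # de-homogenize to metric coordinates
```

With the three images of a triplet the system is over-determined (six equations for four unknowns), so blunders show up as large intersection residuals.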
Concerning the modeling of the recovered unorganized 3D point cloud, we can (1) generate a polygonal surface with reverse-engineering packages or (2) fit a predefined 3D human model to our 3D data [D'Apuzzo et al., 1999; Ramsis].
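The 3D conformal transformation used above to co-register the triplet clouds (a 7-parameter similarity: scale, rotation, translation) has a closed-form solution from corresponding points. A minimal SVD-based (Procrustes/Horn-type) sketch, with hypothetical names:

```python
import numpy as np

def conformal_3d(src, dst):
    """Estimate s, R, t such that dst ~ s * R @ src + t from two (n, 3)
    arrays of corresponding points (closed-form, SVD-based)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - cs, dst - cd                      # centered clouds
    U, S, Vt = np.linalg.svd(A.T @ B)              # 3x3 correlation matrix
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = Vt.T @ D @ U.T                             # optimal rotation
    s = (S * np.diag(D)).sum() / (A * A).sum()     # optimal scale
    t = cd - s * R @ cs                            # optimal translation
    return s, R, t
```

One triplet cloud is kept as reference and each of the others is mapped onto it with its own (s, R, t), estimated from points in the overlap.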
3.4 Results of the modeling of a static character 
The presented example shows the modeling of a standing person imaged with a Sony F505 digital still camera (Figure 2).

Figure 2: Four (out of 12) images (1200x1600 pixels) used for the 3D static human body reconstruction.
The automatic tie point identification (Section 3.1.1) found more than 150 correspondences, which were imported into the bundle adjustment together with four control points (measured manually on the body) used for the space resection process and the datum definition. At the end of the adjustment, a camera constant of 8.4 mm was estimated, while the position of the principal point was kept fixed in the middle of the images (and compensated by the exterior orientation parameters), as no significant variation of the camera roll was present. Concerning the distortion parameters, only the first parameter of radial distortion (K1) turned out to be significant, while the others were not estimated, as an over-parameterization could lead to a degradation of the results. The final exterior orientation of the images as well as the 3D coordinates of the tie points are shown in Figure 3.
Figure 3: Recovered camera poses and 3D coordinates of the tie points (left). The influence of the APs on the image grid, amplified 3 times (right).
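The effect of the single significant parameter K1 follows the usual one-term radial distortion model, in which a point at radial distance r from the principal point is displaced by K1 r^3. A minimal sketch of the corresponding correction (hypothetical names; the sign convention depends on the bundle package):

```python
import numpy as np

def correct_radial_k1(xy, k1, pp=(0.0, 0.0)):
    """Apply the one-term radial correction x' = x (1 + K1 r^2) to image
    coordinates `xy` (n x 2), reduced to the principal point `pp`.
    With only K1 estimated, this dominates the AP grid deformation
    visualized in Figure 3 (right)."""
    p = np.asarray(xy, dtype=float) - np.asarray(pp)
    r2 = (p ** 2).sum(axis=-1, keepdims=True)   # squared radial distance
    return p * (1.0 + k1 * r2) + np.asarray(pp)
```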
Afterwards, the matching process between 4 triplets of images produced ca 36 000 2D correspondences, which were converted and filtered into a point cloud of ca 34 000 points (Figure 4). The recovered 3D point cloud of the person is computed with a mean accuracy of 2.3 mm in x-y and 3.3 mm in the z direction. The 3D data can then easily be imported into commercial packages for modeling, visualization and animation purposes, or used e.g. for diet management.
Figure 4: 3D point cloud of the human body imaged in Figure 2, before and after the filtering (left). Visualization of the recovered point cloud with pixel intensity (right).
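The spatial filtering that thins the ca 36 000 raw intersections to ca 34 000 points is not specified in detail; a common minimal variant (a sketch under that assumption, not the authors' filter) bins the cloud into voxels, keeps the median point of each sufficiently populated voxel, and thereby rejects isolated outliers while equalizing the density:

```python
import numpy as np

def voxel_median_filter(points, voxel=5.0, min_pts=3):
    """Reduce an (n, 3) cloud to one median point per cubic voxel of edge
    `voxel` (same unit as the points, e.g. mm); voxels holding fewer than
    `min_pts` points are treated as outliers and dropped."""
    idx = np.floor(points / voxel).astype(np.int64)
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    kept = []
    for k in range(len(keys)):
        members = points[inverse == k]
        if len(members) >= min_pts:
            kept.append(np.median(members, axis=0))
    return np.asarray(kept)
```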
4. MODELING A MOVING CHARACTER WITH A FIXED CAMERA
Nowadays it is very common to find image streams acquired with a fixed camera, as in forensic surveillance, movies and sport events. Due to the complicated shape of the human body, a fixed camera imaging a moving character cannot correctly model the whole shape, unless we consider a small part of the body (e.g. head, arm or torso). In particular, in movies we can often see a static camera filming a rotating head. Face modeling and animation has been investigated for 20 years in the graphics community. Due to the symmetric form and geometric properties of the human head, the modeling requires very precise measurements. Apart from laser scanners, most of the single-camera approaches are model-based (requiring fitting and minimization problems), while few methods recover the 3D shape through a camera model. Our solution tries to model the head by regarding the camera as moving around it. It can therefore be considered as a particular case of the previous problem; we only have to assume that the head does not deform during the movement.
An example is presented in Figure 5. The image sequence, found on the Internet and with a resolution of 256x256 pixels, shows a person rotating the head. No camera or scene information is available and, for the processing, we consider the images as acquired by a camera moving around a fixed head.
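The rigidity assumption is what allows this inversion of the kinematics: a head rotated by R_head in front of a fixed camera produces exactly the same image as the motionless head seen by a camera whose rotation is composed with R_head. A minimal sketch (hypothetical names; rotation about the world origin assumed):

```python
import numpy as np

def rot_y(theta):
    """Rotation by `theta` radians about the vertical (y) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def equivalent_camera_pose(R_cam, t_cam, R_head):
    """x ~ K [R_cam | t_cam] (R_head X) = K [R_cam @ R_head | t_cam] X,
    so each frame can be oriented as if the camera itself had moved
    around the motionless head."""
    return R_cam @ R_head, t_cam
```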