Figure 5: A fixed camera imaging a rotating head (16 frames).
Due to the small region of interest (the head) and the very short baseline, the corresponding points for the image orientation are selected manually in the first frame; they are then matched automatically in all the other images with a tracking algorithm based on least squares template matching. For the datum definition and the space resection algorithm, 4 points extracted from a face laser scanner data set are used. Afterwards, a bundle adjustment is used to recover the camera parameters: no additional parameters (APs) are introduced and only the focal length is computed. The recovered epipolar geometry is displayed in Figure 6.
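The tracking step can be illustrated with a small numerical sketch. The routine below (Python with numpy; the names lsm_translation and bilinear are illustrative, not from the paper) refines the position of a template patch by least squares matching restricted to a two-parameter shift, whereas a full implementation would also estimate affine shape and radiometric parameters.

import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of the grey value at float position (x, y);
    assumes (x, y) lies strictly inside the image."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

def lsm_translation(template, image, x0, y0, max_iter=20, tol=1e-3):
    """Refine the position of `template` (patch around a point measured in
    the previous frame) inside `image` (current frame), starting from the
    approximate location (x0, y0). Only a 2-parameter shift is estimated."""
    h, w = template.shape
    gy, gx = np.mgrid[0:h, 0:w]
    # shift of the patch's upper-left corner in the current frame
    shift = np.array([x0 - w // 2, y0 - h // 2], dtype=float)
    for _ in range(max_iter):
        xs, ys = gx + shift[0], gy + shift[1]
        patch = np.array([[bilinear(image, x, y) for x, y in zip(rx, ry)]
                          for rx, ry in zip(xs, ys)])
        r = (template - patch).ravel()              # grey-value residuals
        dpy, dpx = np.gradient(patch)               # gradients = design matrix
        A = np.column_stack([dpx.ravel(), dpy.ravel()])
        dp, *_ = np.linalg.lstsq(A, r, rcond=None)  # least squares correction
        shift += dp
        if np.linalg.norm(dp) < tol:
            break
    # refined centre of the patch in the current frame
    return shift[0] + w // 2, shift[1] + h // 2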
  
Figure 6: Recovered epipolar geometry within an image triplet.
Finally, we applied the matching process described in Section 3.2 to 3 triplets of images to get the 3D point cloud of the head. The results, with the related pixel intensities, are shown in Figure 7.
  
Figure 7: 3D model of the moving head. 
5. MODELING A MOVING CHARACTER WITH A MOVING CAMERA
A moving character imaged with a moving camera represents the most difficult case for the character reconstruction problem from image sequences. The camera can be (1) moved with a mechanical arm or on a small rail track, or (2) stationary but freely rotating on a tripod or on the shoulder of a cameraman. In particular, sport videos are usually acquired with a ‘rotating system’ and often far away from the scene. In these cases, the baseline between the frames is very short and, because of the movements of the character, a standard perspective approach cannot be used, in particular for the 3D modeling of the character. Nevertheless, we recover camera parameters and 3D object information through a camera model, without any model-based adjustment.
5.1 Image acquisition and orientation 
A sequence of 60 images has been digitized from an old videotape, using a Matrox DigiSuite grabber, with a resolution of 720x576 pixels (Figure 8). For the orientation and reconstruction, only 9 images are used (those where the moving character has both feet on the ground). From a quick analysis of the images, we can deduce that mainly a rotation occurs during the video acquisition, while no zooming effects are present. A right-handed coordinate system with the origin in the left corner of the court is set, and some control points are defined knowing the dimensions of the basketball court. Because of the low image quality (interlaced video), the image measurements were performed manually.
    
Figure 8: Moving character filmed with a moving camera. 
All the measurements are imported as weighted observations and used as tie points in the adjustment. At first, for each single frame, DLT and space resection are used to get approximations of the camera parameters. Afterwards, a bundle adjustment is applied to recover all the parameters, using a block-invariant set of APs. We allowed free rotations and very small translations of the camera, weighting the parameters and applying significance tests to analyse the determinability of the APs. The adjustment results (σ0 a posteriori = 1.3 pixel) show a focal length value of 22.7 mm and a pixel aspect ratio of 1.12. The non-unity of the aspect ratio can come from the old video camera or from the frame grabber used in the digitization process. Concerning the lens distortion, only K1 (radial distortion) turned out to be significant, while the other parameters could not be reliably determined. The principal point was kept fixed in the middle of the images and is compensated by the exterior orientation parameters. Figure 9 shows the global distortion effect on the image grid (3 times amplified) as well as the recovered camera positions.
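As an illustration of the approximation step, the sketch below estimates the projection matrix of a single frame from the court control points with the DLT and factors it into approximate interior and exterior parameters. It is only a minimal example under simplified assumptions (no distortion, at least 6 well-distributed points); the function name dlt_approx is hypothetical and the actual adjustment software may use different conventions.

import numpy as np
from scipy.linalg import rq

def dlt_approx(object_pts, image_pts):
    """Approximate camera parameters for one frame from >= 6 control points
    (object_pts in court coordinates, image_pts in pixels) via the DLT."""
    A = []
    for (X, Y, Z), (u, v) in zip(object_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # homogeneous solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    # P = K [R | t]: RQ decomposition of the left 3x3 block gives the
    # calibration matrix K and the rotation R (signs fixed so diag(K) > 0)
    K, R = rq(P[:, :3])
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R
    K /= K[2, 2]
    # projection centre = dehomogenised right null vector of P
    _, _, Vp = np.linalg.svd(P)
    C = Vp[-1][:3] / Vp[-1][3]
    return K, R, C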
5.2 3D reconstruction and modeling 
For man-made objects (e.g. buildings), geometric constraints on the object (e.g. parallelism and orthogonality) can be used to solve the ill-posed problem of 3D reconstruction from a monocular image. In the case of free-form objects (e.g. the human body), a probabilistic approach can be used [Sidenbladh, 2000; Sminchisescu, 2002] or other assumptions must be provided [Remondino et al., 2003]:
1. the perspective collinearity model is simplified into a scaled orthographic projection;
2. the human body is represented in a skeleton form, with a series of joints and connected segments of known relative lengths (a sketch of the resulting depth recovery is given after this list);
3. further constraints on joint depths and segment perpendicularity are applied to obtain more accurate and reliable 3D models.
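Under assumptions 1 and 2, the depth difference between the two joints of a segment follows from its known length and its foreshortened image length; the sign of the difference is left to the constraints of point 3. The following minimal sketch (Python; the names relative_depth and min_scale are illustrative, not the paper's code) shows this relation and the smallest admissible orthographic scale.

import numpy as np

def relative_depth(p1, p2, length, scale):
    """Depth difference |dZ| between the two joints of one skeleton segment
    under a scaled orthographic projection.

    p1, p2 : image coordinates of the joints [pixel]
    length : known (relative) 3D length of the segment
    scale  : orthographic scale factor s [pixel per object unit]
    The sign of dZ (which joint is closer to the camera) is not determined
    by the geometry and must come from the additional constraints."""
    du, dv = np.subtract(p2, p1)
    foreshortened = (du ** 2 + dv ** 2) / scale ** 2
    # clamp small numerical violations of length^2 >= foreshortened
    return np.sqrt(max(length ** 2 - foreshortened, 0.0))

def min_scale(joints_2d, segments, lengths):
    """Smallest admissible scale: each projected segment must not be longer
    than (scale * its 3D length)."""
    s = 0.0
    for (i, j), L in zip(segments, lengths):
        d = np.linalg.norm(np.subtract(joints_2d[j], joints_2d[i]))
        s = max(s, d / L)
    return s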
This reconstruction algorithm is applied to every frame of the 
sequence, given the image coordinates of some joints of the 
human body and the relative lengths of the skeleton segments. 
For each image, a 3D human model is generated but, because of the orthographic projection, the models are no longer in the same reference system. Therefore a 3D conformal transformation is applied using, as common points, the two feet (which are always on the ground) and the head of the character (known height). The object position of the feet is recovered with a 2D projective transformation.
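For illustration only, the 3D conformal (7-parameter similarity) transformation between two consecutive skeleton models could be computed from the three common points with a closed-form solution, as sketched below; the SVD-based estimator and the name conformal_3d are assumptions for this example, not necessarily the procedure used in the paper.

import numpy as np

def conformal_3d(src, dst):
    """Fit dst ≈ s * R @ src + t (scale, rotation, translation) to
    corresponding 3D points, e.g. the two feet and the head of two
    consecutive skeleton models (points must not be collinear)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # rotation from the SVD of the cross-covariance (closed-form solution)
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # avoid reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()             # scale factor
    t = mu_d - s * R @ mu_s                                    # translation
    return s, R, t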