invariance between two planes (basketball court and its image) that undergo a perspective projection [Semple et al., 1952]: the relationship between the two planes is fully specified if the coordinates of at least 4 corresponding points in each of the two projectively related planes are given. The object-space position of the head, on the other hand, is computed as the midpoint between the two feet. The invariance property and the conformal transformation are applied to each 'orthographic' model of the sequence, and the resulting 3D coordinates are then refined using the camera parameters recovered in the orientation process. The final result is presented in Figure 9, together with the reconstructed scene.
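To make the planar invariance concrete, the sketch below estimates the 3x3 homography between the court plane and the image plane from (at least) four point correspondences with the direct linear transformation; the function names and the example coordinates are illustrative and not taken from the paper.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Homography H mapping src_pts to dst_pts from >= 4 correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The 9 entries of H form the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, pt):
    """Transfer a 2D point from one plane to the other through H."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Illustrative example: four court corners (metres) and their image positions (pixels).
court = np.array([[0, 0], [28, 0], [28, 15], [0, 15]], dtype=float)
image = np.array([[102, 480], [610, 465], [590, 210], [130, 220]], dtype=float)
H = estimate_homography(court, image)
foot_on_court = map_point(np.linalg.inv(H), (350, 400))  # image point back to the court plane
```

With exactly four correspondences the solution is exact; additional points turn the null-space computation into a least-squares estimate.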
  
  
  
Figure 9: Influence of APs for the analyzed camera (upper left). 
The camera poses as well as the 3D reconstruction of the 
basketball court and the moving character (other images). 
  
The recovered poses of the moving human can be used for gait analysis or for the animation of virtual characters in movie production.
5.3 Another example
Another sequence, presented in Figure 10, is analyzed. The camera is far away from the scene and is rotating (probably on a tripod) and zooming to follow the moving character. The calibration and orientation process, performed with a self-calibrating bundle adjustment with frame-invariant AP sets, recovered a constantly increasing camera focal length and, again, a pixel aspect ratio different from unity (1.10 ± 4.5e-3).
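A pixel aspect ratio different from one is typically absorbed by the affinity terms of the additional parameter set; the hedged sketch below shows one common Brown-style parameterization (the names b1, b2 are generic and not necessarily those used in the adjustment described here).

```python
import numpy as np

def apply_affinity_aps(xy, b1, b2):
    """Correct image coordinates (reduced to the principal point) with affinity APs.

    b1 : scale difference between the x and y axes; a pixel aspect ratio
         different from unity appears here (roughly aspect ~ 1 + b1).
    b2 : shear (non-orthogonality) between the two axes.
    """
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([x + b1 * x + b2 * y, y])
```

In a frame-invariant formulation, b1 and b2 are shared by all frames of the sequence, while the focal length remains a per-frame unknown to account for the zooming.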
    
Figure 10: Some frames of a video sequence of a basketball 
action. The camera is rotating and zooming. 
Because of the low precision of the image measurements (a priori σ = 2 pixels) and the unfavourable network geometry, the principal point of the camera and the other terms used to model the lens distortion are not computed, as they are very poorly determinable. The final standard deviation resulted in 1.7 pixels, while the RMS of the image coordinate residuals is 38.45 µm in the x direction and 29.08 µm in the y direction.
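For reference, these are the standard quality measures of a bundle adjustment; a minimal sketch of how they are obtained from the image coordinate residuals is given below (array names are illustrative).

```python
import numpy as np

def adjustment_quality(vx, vy, redundancy):
    """RMS of image residuals per axis and a posteriori sigma-0 of unit weight.

    vx, vy     : image coordinate residuals in x and y (e.g. in micrometres),
                 assumed here to be of equal weight.
    redundancy : number of observations minus number of unknowns.
    """
    rms_x = np.sqrt(np.mean(vx ** 2))
    rms_y = np.sqrt(np.mean(vy ** 2))
    v = np.concatenate([vx, vy])
    sigma0 = np.sqrt(np.sum(v ** 2) / redundancy)  # a posteriori std. dev. of unit weight
    return rms_x, rms_y, sigma0
```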
The 3D reconstruction of the moving character is then performed as described in section 5.2. In this case, the orthographic model of each frame could not be transformed into the camera reference system with a conformal transformation. Nevertheless, the recovered 3D models are imported into Maya to animate the reconstructed character and generate new virtual scenes of the analyzed sequence (Figure 11).
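The conformal transformation used in section 5.2 is a 3D similarity (scale, rotation and translation); a minimal SVD-based sketch of how it can be estimated from corresponding points is given below, assuming at least three well-distributed correspondences (this is not necessarily the paper's own implementation).

```python
import numpy as np

def conformal_3d(src, dst):
    """Estimate s, R, t so that dst ~ s * R @ src + t (3D similarity / Helmert)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                # centred coordinates, shape (N, 3)
    U, S, Vt = np.linalg.svd(B.T @ A)            # cross-covariance, Horn/Umeyama solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t
```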
   
   
Figure 11: 3D models of the moving character visualized and 
animated with Maya. 
To improve the visual quality and the realism of the 
reconstructed 3D human skeleton, we fitted a laser scanner 
human body model [Cyberware] to our data (Figure 12). The 
modeling and animation features of Maya software allow a 
semi-automatic fitting of the laser-data polygonal mesh to the 
skeleton model. Inverse kinematics and a skinning process are used, respectively, to animate the model and to bind the polygonal mesh to the skeleton [Learning Maya, 2003; Remondino et al., 2003].
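As a rough illustration of the Maya steps mentioned above (an IK handle for the animation and a skin bind between mesh and skeleton), the hypothetical snippet below uses Maya's Python interface; the joint names, the mesh name and the coordinates are placeholders and do not come from the paper.

```python
# Runs only inside Maya's Python script editor, where maya.cmds is available.
import maya.cmds as cmds

# A minimal leg chain; positions are placeholder values.
cmds.select(clear=True)
hip = cmds.joint(name='hip_jnt', position=(0, 10, 0))
knee = cmds.joint(name='knee_jnt', position=(0, 5, 1))
ankle = cmds.joint(name='ankle_jnt', position=(0, 0, 0))

# Inverse kinematics: an IK handle from the hip to the ankle drives the chain.
ik_handle, effector = cmds.ikHandle(startJoint=hip, endEffector=ankle, solver='ikRPsolver')

# Skinning: bind the scanned polygonal mesh (placeholder name) to the listed joints.
cmds.skinCluster(hip, knee, ankle, 'bodyMesh', toSelectedBones=True)
```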
  
Figure 12: Two examples showing the results of the 3D 
reconstruction and the modeling process. Original frame of the 
sequence (left), reconstructed 3D human skeleton (middle) and 
fitting result, from a slightly different point of view (right). 
6. CONCLUSION 
The photogrammetric analysis of monocular video sequences and the generation of 3D human models were presented.
The image orientation and calibration were successfully achieved with a perspective bundle adjustment, weighting all the parameters and analysing their determinability with statistical tests. The human modeling, in particular from old videos, showed the capability of videogrammetry to provide virtual characters useful for augmented reality applications and person identification, and to generate new scenes involving models of characters who are dead or unavailable for common