Full text: XIXth congress (Part B5,1)

  
Fua, Pascal 
  
  
Figure 1: The layered human model: (a) Model used to animate heads, shown as a wireframe at the top and as a shaded surface at the bottom. (b) Skeleton. (c) Ellipsoidal metaballs used to simulate muscles and fat tissue. (d) Polygonal surface representation of the skin. (e) Shaded rendering.
Using Models: Thus, for both optical and video-based motion capture, we have developed robust model-based fitting techniques to overcome ambiguities inherent in the raw data. To be suitable for animation, models tend to have many degrees of freedom, and our algorithms are designed to handle them. They can also deal with ambiguous, noisy and heterogeneous information sources, such as optical markers, stereo, silhouettes and 2-D feature locations.
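To illustrate the flavor of such model-based fitting, the following is a minimal, hypothetical sketch (not the system described here): it fits the joint angles of a planar two-link "skeleton" to observed marker positions by iteratively reweighted Gauss-Newton, where Huber-style weights down-weight noisy or outlying observations. All function names and the Huber threshold are illustrative assumptions.

```python
import numpy as np

def forward(theta, lengths):
    """Forward kinematics of a planar two-link chain (illustrative model).

    Returns the stacked 2-D positions of the elbow and end effector,
    which play the role of observed optical markers."""
    a1, a2 = theta
    l1, l2 = lengths
    p1 = np.array([l1 * np.cos(a1), l1 * np.sin(a1)])
    p2 = p1 + np.array([l2 * np.cos(a1 + a2), l2 * np.sin(a1 + a2)])
    return np.concatenate([p1, p2])

def robust_fit(observed, lengths, theta0, iters=50, k=0.1):
    """Gauss-Newton with Huber-style reweighting (a generic robust
    least-squares scheme, not the authors' actual algorithm)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        r = forward(theta, lengths) - observed
        # Huber weights: residuals larger than k are down-weighted
        absr = np.maximum(np.abs(r), 1e-12)
        w = np.where(absr <= k, 1.0, k / absr)
        # Numerical Jacobian of the residual w.r.t. the joint angles
        J = np.zeros((len(r), len(theta)))
        eps = 1e-6
        for j in range(len(theta)):
            dt = theta.copy()
            dt[j] += eps
            J[:, j] = (forward(dt, lengths) - forward(theta, lengths)) / eps
        W = np.diag(w)
        # Damped normal equations for the weighted Gauss-Newton step
        step = np.linalg.solve(J.T @ W @ J + 1e-9 * np.eye(len(theta)),
                               -J.T @ W @ r)
        theta += step
        if np.linalg.norm(step) < 1e-10:
            break
    return theta
```

With many degrees of freedom, real systems replace the numerical Jacobian with analytic derivatives along the kinematic chain, but the weighted normal-equation structure is the same.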
In the remainder of this paper, we first introduce the animation models we use and their specific features. We then discuss our approach to skeleton-based motion capture using optical markers. Finally, we present the techniques we have developed for video-based modeling of faces and bodies.
2 ANIMATION MODELS 
The modeling and animation of Virtual Humans have traditionally been decomposed into two subproblems: facial animation and body animation.
In both cases, muscular deformations must be taken into account, but their roles and importance differ. Facial animation primarily involves deformations due to muscular activity. Body animation, on the other hand, is a matter of modeling a hierarchical skeleton with deformable primitives attached to it so as to simulate soft tissues. It is possible to model bodies in motion while ignoring muscular deformations, whereas doing so for facial animation is highly unrealistic.
For heads, we use the facial animation model developed at the University of Geneva and EPFL (Kalra et al., 1992), depicted in Figure 1(a). It can produce the different facial expressions arising from speech and emotions. To simulate muscle actions, we use Rational Free Form Deformations (RFFDs) because they are simple, intuitive, easy to use and computationally inexpensive (Kalra et al., 1992). The muscle design uses a region-based approach: regions of interest are defined and associated with a muscle made of several RFFDs. Deformations are obtained by actuating those muscles to stretch, squash, expand and compress the enclosed facial geometry. For complex expressions, the model can simultaneously render the deformations of various parts of the face.
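The idea behind (rational) free-form deformation can be sketched as follows. This is a minimal 2-D illustration with a degree-2 Bézier control lattice, not the actual RFFD implementation of Kalra et al.; the rational weights let a single control point attract the embedded geometry more or less strongly, which is what makes muscle-like actuation convenient.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{n,i}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def rffd(points, control, weights):
    """Rational free-form deformation of 2-D points (illustrative sketch).

    points  -- (M, 2) array of local coordinates in [0,1]^2 of the box
    control -- (n+1, m+1, 2) array: the (possibly displaced) control lattice
    weights -- (n+1, m+1) rational weights (all 1 -> plain polynomial FFD)
    """
    n, m = control.shape[0] - 1, control.shape[1] - 1
    out = []
    for s, t in points:
        num = np.zeros(2)
        den = 0.0
        for i in range(n + 1):
            for j in range(m + 1):
                b = bernstein(n, i, s) * bernstein(m, j, t) * weights[i, j]
                num += b * control[i, j]
                den += b
        out.append(num / den)  # rational combination of control points
    return np.array(out)
```

With the undisplaced lattice (control[i, j] = (i/n, j/m)) and unit weights, the deformation is the identity; displacing an interior control point drags the surrounding "skin" vertices smoothly, analogous to actuating one muscle region.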
Our body model (Thalmann et al., 1996) is depicted in Figure 1(b,c,d,e). It incorporates a highly effective multi-layered approach for constructing and animating realistic human bodies. Ellipsoidal metaballs are used to simulate the overall behavior of bone, muscle and fat tissue; they are attached to the skeleton and arranged in an anatomically based approximation. Skin construction is a three-step process: First, the implicit surface resulting from the combined influence of the metaballs is automatically sampled along cross-sections (Shen and Thalmann, 1995, Thalmann et al., 1996). Second, the sampled points become control points of a B-spline patch for each body part (limbs, trunk, pelvis, neck). Third, a polygonal surface representation is constructed by tessellating those B-spline patches, seamlessly joining the different skin pieces for final rendering. This simple and intuitive method combines the advantages of implicit, parametric and polygonal surface representations, producing very realistic and robust body deformations.
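The first step above, sampling the implicit surface along cross-sections, can be sketched as follows. This is a hypothetical simplification: the field here is an isotropic Gaussian per metaball (the actual model uses ellipsoidal primitives and its own field functions), and the iso-surface crossing along each outward ray is found by bisection.

```python
import numpy as np

def field(p, centers, radii):
    """Scalar field of a set of metaballs at point p.

    Assumption: isotropic Gaussian falloff per primitive; the real model's
    ellipsoidal metaballs amount to scaling the axes before this sum."""
    f = 0.0
    for c, r in zip(centers, radii):
        f += np.exp(-np.sum((p - c)**2) / r**2)
    return f

def sample_cross_section(z, centers, radii, iso=0.5, n_rays=16, r_max=5.0):
    """Sample one ring of skin points in the plane of height z.

    Rays are cast outward from the skeleton axis; along each ray we bisect
    for the field's iso-value crossing, yielding one cross-section sample."""
    ring = []
    p0 = np.array([0.0, 0.0, z])
    if field(p0, centers, radii) < iso:
        return np.array(ring)  # axis lies outside the surface at this height
    for k in range(n_rays):
        a = 2.0 * np.pi * k / n_rays
        d = np.array([np.cos(a), np.sin(a), 0.0])
        lo, hi = 0.0, r_max
        for _ in range(40):  # bisection: field >= iso at lo, < iso at hi
            mid = 0.5 * (lo + hi)
            if field(p0 + mid * d, centers, radii) >= iso:
                lo = mid
            else:
                hi = mid
        ring.append(p0 + lo * d)
    return np.array(ring)
```

The resulting rings of samples are exactly the kind of per-cross-section point sets that become B-spline control points in the second step.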
3 SKELETON-BASED MOTION CAPTURE 
Our goal is to increase the reliability of an optical motion capture system by taking into account a precise description of the skeleton's mobility and an approximated envelope (Herda et al., 2000). This allows us to accurately predict the 3-D locations of the markers.
  
254 International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B5. Amsterdam 2000. 
  