International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 
FACIAL MOTION ANALYSIS DURING MASTICATION BASED ON FACTORIZATION
Toshio KAWASHIMA*, Masashi TODA*, Yoshinao AOKI* 
Kiwamu SAKAGUCHI**, and Takao KAWASAKI** 
*School of Engineering, **School of Dentistry
Hokkaido University 
Kita-13, Nishi-8, Sapporo, 060-8628, JAPAN 
kawasima@media.eng.hokudai.ac.jp 
Commission V, Working Group 4 
Key words: Mastication, Motion Analysis, Factorization 
Abstract: 
We propose a direct facial motion estimation method based on factorization. With this method, the facial motion of a masticating subject can be measured without markers. The measurement process is divided into two stages: a learning stage and a measurement stage. In the learning stage, we attach markers to a set of measurement points on a subject's face and capture several example facial motion image sequences together with the marker locations. Once the matrix equation is derived from these training data, we can directly estimate the locations of the measurement points from facial images without markers. In this report, we describe the details of the method and discuss the limitations of this approach.
1. INTRODUCTION 
Face and gesture image analysis is an attractive area because the information contained in motion data is essential for communication between humans and machines. Facial motion analysis is also important for medical diagnosis. In dentistry, facial motion around the lips, i.e. perioral motion, is an index of stomatognathic function.
Most studies [1] in dental applications attach markers to the subject's face, because precise measurement requires exact localization of characteristic points. In addition, the subject's head must be fixed to a special chair to prevent perturbation. These restrictions limit the clinical application of facial motion analysis.
In this report, we attempt direct estimation of feature points without attaching any markers to the face. In recent work, Covell [2] proposed an "eigenpoints" approach for locating control points in an unmarked image. The method was applied to morphing and used to match corresponding points of two images.
We follow this approach. Instead of sample face images of subjects, we first measure sample image sequences of facial motion with markers. The sequences and the locations of the marker points are used as training samples. From the relationship between the gray levels of an image and its marker locations, we construct an estimation equation using the SVD (singular value decomposition). The SVD decomposes an observation into an orthonormal basis of observations and potential motion parameters. From the result of the SVD, we form an estimation equation, as sketched below.
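The excerpt does not reproduce the estimation equation itself, so the following Python/NumPy sketch illustrates one plausible form of this eigenpoints-style computation under our own assumptions: gray levels and marker coordinates are stacked into a single training matrix, the SVD supplies the orthonormal basis, and the basis is split back into an image part and a marker part to obtain a linear estimator. The array names, the rank truncation, and the pseudo-inverse step are illustrative choices, not the authors' exact formulation.

import numpy as np

# Coupled-subspace (eigenpoints-style) estimator built with the SVD.
# Assumed shapes (illustrative only):
#   G_train : (n_pixels, n_frames)     vectorized gray-level images of the lip region
#   P_train : (2 * n_points, n_frames) stacked (x, y) marker coordinates per frame

def learn_estimator(G_train, P_train, rank):
    """Learning stage: derive a linear estimation equation from marked sequences."""
    g_mean = G_train.mean(axis=1, keepdims=True)
    p_mean = P_train.mean(axis=1, keepdims=True)

    # Stack gray levels and marker locations into one observation matrix
    # and factor it with the singular value decomposition.
    A = np.vstack([G_train - g_mean, P_train - p_mean])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k = U[:, :rank]                # orthonormal basis of the observations

    n_pixels = G_train.shape[0]
    U_g = U_k[:n_pixels, :]          # image part of the basis
    U_p = U_k[n_pixels:, :]          # marker-location part of the basis

    # Estimation equation: gray levels -> basis coefficients -> marker locations.
    M = U_p @ np.linalg.pinv(U_g)
    return M, g_mean, p_mean

def estimate_points(M, g_mean, p_mean, g):
    """Measurement stage: estimate virtual feature points from an unmarked image g."""
    return p_mean + M @ (g.reshape(-1, 1) - g_mean)

In this sketch the markerless measurement stage reduces to a single matrix-vector product, consistent with the direct estimation described above; the choice of rank controls how many motion modes are retained from the training sequences.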
In Section 2, we outline the principle of the method. Experimental results of the method are shown in Section 3, together with a simple experiment on mastication analysis.
2. DIRECT ESTIMATION OF CHARACTERISTIC POINTS FROM IMAGES
Problem Definition: Estimate the locations of the virtual feature points of a subject from an image sequence around the lips, without markers.

The term "virtual feature point" denotes the place where a marker would be expected. In [2], the measurement is divided into two stages. The first stage calculates the estimation equation using the SVD. In this stage, a set of control points is marked where the geometrical correspondence between images is explicitly defined by
   
	        
Thank you.