
  
[Two preceding tables of distances between input action patterns and templates are too garbled in the source to reconstruct; only fragments of their captions and scattered distance values survive.]
Table 4 Distances between input action patterns of unspecified person B and templates.

input \ template | keep still | nod  | shake | tilt | bend backward | bend forward
keep still       |    0.09    | 0.27 | 0.44  | 0.22 |     0.39      |     0.25
nod              |    0.43    | 0.33 | 0.58  | 0.61 |     0.48      |     0.55
shake            |    0.35    | 0.67 | 0.26  | 0.49 |     0.66      |     0.70
tilt             |    0.41    | 0.72 | 0.66  | 0.30 |     0.48      |     0.46
bend backward    |    0.43    | 0.53 | 0.44  | 0.41 |     0.27      |     0.56
bend forward     |    0.40    | 0.47 | 0.55  | 0.36 |     0.48      |     0.22

Table 5 Distances between input action patterns of unspecified person C and templates.

input \ template | keep still | nod  | shake | tilt | bend backward | bend forward
keep still       |    0.07    | 0.21 | 0.39  | 0.31 |     0.36      |     0.24
nod              |    0.44    | 0.24 | 0.56  | 0.53 |     0.52      |     0.49
shake            |    0.38    | 0.54 | 0.28  | 0.52 |     0.57      |     0.59
tilt             |    0.37    | 0.52 | 0.51  | 0.20 |     0.44      |     0.35
bend backward    |    0.33    | 0.60 | 0.57  | 0.44 |     0.22      |     0.50
bend forward     |    0.50    | 0.42 | 0.62  | 0.48 |     0.46      |     0.19

These results show that each head motion can be recognized by applying DP matching to the displacement velocity vectors of both pupils. The proposed method works not only for the specified person but also for unspecified persons. The average time required for the total processing, from capturing the image to indicating the matching result, is about 0.11 s per frame. This processing speed makes it possible to recognize the head motions in real time.
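As a rough illustration of this step, the following Python sketch applies a standard DP (dynamic time warping) matching to sequences of pupil displacement velocity vectors and selects the nearest template, in the spirit of Tables 4 and 5. The local distance, path constraints, normalization, and all data in the example are assumptions; the paper does not specify these details.

```python
# Minimal sketch of template matching by DP on pupil displacement velocity
# vectors. A standard symmetric DP recurrence with Euclidean local distance
# and path-length normalization is assumed.
import numpy as np

def dp_distance(input_seq, template_seq):
    """Normalized DP-matching distance between two sequences of displacement
    velocity vectors (shape: frames x 4, i.e. (vx, vy) for each pupil)."""
    n, m = len(input_seq), len(template_seq)
    g = np.full((n + 1, m + 1), np.inf)
    g[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(input_seq[i - 1] - template_seq[j - 1])
            g[i, j] = d + min(g[i - 1, j], g[i - 1, j - 1], g[i, j - 1])
    return g[n, m] / (n + m)   # normalize by path length

def recognize(input_seq, templates):
    """Return the action name of the nearest template (cf. Tables 4 and 5)."""
    dists = {name: dp_distance(input_seq, t) for name, t in templates.items()}
    return min(dists, key=dists.get), dists

# Example with hypothetical data: six action templates and one input pattern.
rng = np.random.default_rng(0)
templates = {name: rng.normal(size=(20, 4)) for name in
             ["keep still", "nod", "shake", "tilt", "bend backward", "bend forward"]}
action, dists = recognize(rng.normal(size=(18, 4)), templates)
print(action, {k: round(v, 2) for k, v in dists.items()})
```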
4. CONCLUSION 
We have proposed a method for recognizing perplexed situations in word processor work. Observation of subjects during word processor work made it clear that perplexed behaviors appear mainly as head motions. Tracking both pupils and setting a region of interest around them makes it possible to capture the head motion in real time. The distances between an unknown input motion pattern and the template patterns are calculated by DP matching, and the matching results show that the basic actions of the perplexed behaviors can be recognized. The proposed method works not only for the specified person but also for unspecified persons. To recognize perplexed situations more reliably, it is desirable to combine this head-motion recognition program with a program that measures the time intervals between keystrokes. The method can be applied to the development of software that responds automatically when the operator falls into a perplexed situation.
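The suggested combination with keystroke timing could look like the following hypothetical sketch: a monitor records the time of the last keystroke and flags a perplexed situation only when a long typing pause coincides with a recognized head action. The threshold value, the set of actions, and the class name are illustrative assumptions, not part of the proposed method.

```python
# Hypothetical sketch of combining keystroke-interval measurement with the
# head-motion recognizer. Threshold and combination rule are assumptions.
import time

PERPLEXED_ACTIONS = {"tilt", "bend backward", "bend forward"}  # assumed subset
PAUSE_THRESHOLD = 5.0   # seconds without a keystroke (hypothetical value)

class PerplexityMonitor:
    def __init__(self):
        self.last_keystroke = time.monotonic()

    def on_keystroke(self):
        # Called from the word processor's key handler.
        self.last_keystroke = time.monotonic()

    def on_head_action(self, action):
        """Called once per recognized head action; returns True when the
        operator is judged to be in a perplexed situation."""
        pause = time.monotonic() - self.last_keystroke
        return pause > PAUSE_THRESHOLD and action in PERPLEXED_ACTIONS

monitor = PerplexityMonitor()
monitor.on_keystroke()
print(monitor.on_head_action("tilt"))   # False right after a keystroke
```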
When DP matching is executed every frame, the matching results are often incorrect at the beginning of an action. It is therefore desirable to recognize head motions from the matching results of several frames, for example based on the frequency or the number of times the same action is selected.
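One simple way to realize this idea, sketched below, is a sliding-window vote over the per-frame matching results: an action is accepted only when it has been selected often enough within the last few frames. The window length and the required count are hypothetical parameters.

```python
# Sketch of smoothing per-frame matching results with a sliding-window vote.
from collections import Counter, deque

class FrameVoter:
    def __init__(self, window=10, min_count=6):
        self.window = deque(maxlen=window)   # most recent per-frame labels
        self.min_count = min_count           # votes needed to accept an action

    def update(self, frame_label):
        """Add the matching result of the current frame and return the
        accepted action, or None while no action is dominant yet."""
        self.window.append(frame_label)
        label, count = Counter(self.window).most_common(1)[0]
        return label if count >= self.min_count else None

voter = FrameVoter()
for label in ["nod"] * 7:
    result = voter.update(label)
print(result)   # "nod" once it has been selected at least 6 times in the window
```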
Most people keep still after they “tilt”, “bend forward”, or “bend backward”. In contrast, “keeping still”, “nodding”, and “shaking” continue for a while. For these reasons, the transition to “keeping still” and the continuity of the same action are considered significant. Perplexed situations could therefore be recognized more reliably by building sequences of action names and performing pattern matching on those sequences.
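A minimal sketch of such sequence-level matching is given below. The listed action patterns (an action followed by “keeping still”) are hypothetical illustrations of the idea described above, not patterns taken from the paper.

```python
# Sketch of sequence-level matching: recognized action names are collapsed
# into a sequence, which is compared with short patterns typical of
# perplexed behavior (hypothetical examples).
from itertools import groupby

PERPLEXED_PATTERNS = [
    ("tilt", "keep still"),
    ("bend forward", "keep still"),
    ("bend backward", "keep still"),
]

def collapse(labels):
    """Collapse consecutive repeats: N x 'nod' -> one 'nod'."""
    return [label for label, _ in groupby(labels)]

def contains_pattern(sequence, pattern):
    """True if `pattern` occurs as a contiguous subsequence of `sequence`."""
    n = len(pattern)
    return any(tuple(sequence[i:i + n]) == pattern
               for i in range(len(sequence) - n + 1))

def looks_perplexed(frame_labels):
    seq = collapse(frame_labels)
    return any(contains_pattern(seq, p) for p in PERPLEXED_PATTERNS)

# Example: tilting and then keeping still is flagged as a perplexed behavior.
print(looks_perplexed(["keep still", "tilt", "tilt", "keep still", "keep still"]))
```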
REFERENCES 
Choi, C., Harashima, H. and Takebe, T., 1991. Analysis of 
facial expressions using three-dimensional facial model. 
IEICE Transactions on Information and Systems (D-II), 
J74-D-II(6), pp.766-777.
Kamitani, T. and Marutani, Y., 1994. Detection of emotive 
change by capturing the blinking intervals. Record of 
the '94 Kansai-Section Joint Convention of IEE Japan, 
p.G283. 
Kamitani, T. and Marutani, Y., 1995a. Discrimination of 
human intention using facial images. Proc. of the '95 
IEICE General Conf., p.A-257. 
Kamitani, T. and Marutani, Y., 1995b. Detectability of the 
annoyed state by video images. Proc. of the 39th Annual 
Conf. of ISCIE, pp.545-546. 
Kamitani, T. and Marutani, Y., 1995c. Analysis of 
perplexed behavior by DP matching. Record of the '95 
Kansai-Section Joint Convention of IEE Japan, p.G345. 
Kamitani, T. and Marutani, Y., 1996. Analysis of perplex 
situations in word processor work using facial images. 
Image Labo, Vol.7, No.4, pp.324-334. 
Kamitani, T. and Marutani, Y., 1997a. Recognition of 
basic actions for the detection of the personal difficulty 
using the position of pupils. Technical Report of IEICE, 
HCS96-41, pp.19-26. 
Kamitani, T. and Marutani, Y., 1997b. Recognition of 
basic actions for the detection of the personal difficulty 
using the position of pupils. Proc. of the '97 IEICE General 
Conf., p.A-14-6. 
Kamitani, T. and Marutani, Y., 1997c. Analysis of perplex 
situations in word processor work using facial image 
sequence. Proc. of SPIE: Human Vision and Electronic 
Imaging II (EI'97), Vol.3016, pp.324-334. 
Kamitani, T. and Marutani, Y., 1997d. Recognition of 
perplexed behaviors using DP matching. Proc. of the 36th 
SICE Annual Conf., Domestic Session Papers Vol.1, 
pp.377-378. 
Kamitani, T. and Marutani, Y., 1997e. Recognition of 
perplexed behaviors by DP matching. Human Interface
News and Report, Vol.12, No.4, pp.475-482. 
 
	        