XIXth Congress (Part B5, 1)

  
Gonzalo-Tasis, Margarita 
  
Symbolic models are not accurate enough at first sight, but they provide low-level support that allows integrating different kinds of information.
The most popular models are variants of the well-known Kohonen Self-Organizing Map (SOM) algorithm [Ko97]. Parametrised SOMs provide differentiable manifolds as models, whereas Local Linear Maps (LLMs) associate with each neuron a locally valid linear mapping (see [Ri97] and references therein).
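To make the SOM family concrete, a minimal Kohonen update rule can be sketched in Python. This is an illustrative sketch only (rectangular grid, Gaussian neighbourhood, unit-square data), not the parametrised SOM or LLM variants the paper builds on:

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen SOM: a grid of prototype vectors is pulled
    towards the input samples, with a shrinking neighbourhood."""
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates of every neuron, used for neighbourhood distances.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1.0 - t)              # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 0.5  # decaying neighbourhood radius
            # Best-matching unit: neuron whose weight is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2.0 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

data = np.random.default_rng(1).random((200, 2))
w = train_som(data)
print(w.shape)  # (8, 8, 2)
```

An LLM would additionally store a local linear map (a matrix and offset) per neuron; the SOM above only keeps the prototype vectors.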
The superimposed structure given by the tangent bundle (with its dual, the cotangent bundle or phase space) allows us to connect both variants vertically in terms of transition functions. These functions express changes of reference that connect the same visual or mechanical events across different aspect graphs, or, alternatively, construct paths between adjacent vertices in symbolic representations.
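In chart language, this can be expressed by the standard transition functions between overlapping local parametrisations; a minimal formulation (with generic chart names, not the paper's notation) is:

```latex
% Transition function between two overlapping charts (aspect views)
% (U_i, \varphi_i) and (U_j, \varphi_j) of a manifold M:
g_{ij} \;=\; \varphi_i \circ \varphi_j^{-1} \,:\;
  \varphi_j(U_i \cap U_j) \;\longrightarrow\; \varphi_i(U_i \cap U_j)
```

On triple overlaps these maps satisfy the cocycle condition $g_{ij} \circ g_{jk} = g_{ik}$, which is what allows locally valid (e.g. LLM-style linear) descriptions to be glued into a global one.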
2 IMAGE INTERPRETATION 
2.1 Hand Image Generation
Our goal is to identify postures of a simulated Stanford/JPL three-fingered artificial hand generated with OpenGL. OpenGL is versatile, adaptable and portable.
We were inspired by a well-known hand model with a complex architecture and functionality (3 DOF for each finger, with cylindrical and piecewise linear components) similar to an anthropomorphic hand.
  
[Figure: FRONT and PERSPECTIVE views of the simulated hand]

We developed a 3D model and, with the OpenGL virtual camera, obtained successive monocular views of the hand.

Each view, which is a two-dimensional projection of the scene, is preprocessed by converting it to 256 grey levels and subtracting the background image from the hand image.
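This preprocessing step can be sketched as follows; the NumPy version below is an illustrative assumption (8-bit images, a static background, Rec. 601 luminance weights), not the paper's actual pipeline:

```python
import numpy as np

def preprocess_view(rgb_view, background_gray):
    """Convert an RGB rendering to 256 grey levels and subtract the
    static background image, keeping only the hand silhouette."""
    # Luminance-weighted grayscale conversion (Rec. 601 weights).
    gray = (0.299 * rgb_view[..., 0]
            + 0.587 * rgb_view[..., 1]
            + 0.114 * rgb_view[..., 2]).astype(np.uint8)
    # Background subtraction: pixels that differ from the background
    # belong to the hand; identical pixels go to zero.
    diff = np.abs(gray.astype(np.int16) - background_gray.astype(np.int16))
    return diff.astype(np.uint8)

# Toy example: a flat background and a view with a brighter "hand" patch.
bg = np.full((64, 64), 40, dtype=np.uint8)
view = np.stack([np.full((64, 64), 40, dtype=np.uint8)] * 3, axis=-1)
view[20:40, 20:40] = 200
out = preprocess_view(view, bg)
print(out[30, 30] > 0, out[0, 0] == 0)  # True True
```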
  
2.2 Low-Level Processing 
From this solid modelling, SUSAN (Smallest Univalue Segment Assimilating Nucleus) allows us to perform low-level processing of each image, extracting geometric characteristics such as points and segments that verify incidence conditions. These characteristics correspond to visible parts of the shape of the 3D articulated hand. SUSAN generates a coarse estimate of visible boundaries and meaningful points, which are modelled and grouped; additionally, we can extract geometric information from a graphical evaluation of the depth of meaningful points in terms of ray tracing.
Thus, we obtain a set of segments bounding the shape of the simulated artificial hand and a set of corners (see junction detection in [GF98b]) identified in terms of fast variations of curvature along the boundary shape.
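SUSAN itself compares each pixel's neighbourhood against its nucleus; as a simplified stand-in for the curvature-based corner criterion just described, one can flag sharp turns along a boundary polyline. This is a sketch under that simplification, not the SUSAN implementation:

```python
import numpy as np

def corner_candidates(boundary, angle_thresh_deg=40.0):
    """Flag boundary points where the direction of travel turns sharply,
    a crude stand-in for curvature-based corner detection."""
    pts = np.asarray(boundary, dtype=float)
    # Direction vectors between consecutive points (closed contour).
    v = np.roll(pts, -1, axis=0) - pts
    v_prev = np.roll(v, 1, axis=0)
    # Turning angle at each point = angle between the incoming and
    # outgoing direction vectors.
    dot = np.sum(v * v_prev, axis=1)
    norm = np.linalg.norm(v, axis=1) * np.linalg.norm(v_prev, axis=1)
    ang = np.degrees(np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1, 1)))
    return np.nonzero(ang > angle_thresh_deg)[0]

# Toy example: a square contour sampled along its edges; only the four
# geometric corners produce a large turning angle.
square = ([(x, 0) for x in range(5)] + [(4, y) for y in range(1, 5)]
          + [(x, 4) for x in range(3, -1, -1)] + [(0, y) for y in range(3, 0, -1)])
print(list(corner_candidates(square)))  # [0, 4, 8, 12]
```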
3 POSTURE RECOGNITION
Multiple junctions appearing at the knuckles are not stable; depending on the posture, they can evolve from a T (extension) to an arrow (flexion) or vice versa. Hence, identification of junctions would require performing tracking and grouping simultaneously, while taking care of partial or self-occlusions (see [FG98]), but this approach is too expensive in time.
Therefore, instead of tracking multiple junctions, we extract regions: regardless of the multiple character of some junctions, each region contributes only a double ordinary corner. In fact, this procedure is more stable against small perturbations and corrupting noise. In this way, each T-junction corresponding to the extension of a knuckle is captured as two L-junctions, one for each region. The same operation is carried out with the other triple junctions.
We find the set of pairs of parallel segments (modulo some threshold depending on whether the hand is robotic or human), taking those with minimal separation distance. This simplifies the construction of a virtual skeleton of the hand. The virtual skeleton is the key to posture identification in terms of incidence conditions.
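The pairing of parallel segments can be sketched as below; the angle tolerance and the midpoint-distance criterion are illustrative assumptions, not the paper's exact thresholds:

```python
import numpy as np

def parallel_pairs(segments, angle_tol_deg=10.0):
    """Return index pairs of nearly parallel segments, each segment
    matched with the parallel partner at minimal midpoint distance."""
    segs = [np.asarray(s, dtype=float) for s in segments]
    dirs = [s[1] - s[0] for s in segs]
    dirs = [d / np.linalg.norm(d) for d in dirs]
    mids = [s.mean(axis=0) for s in segs]
    pairs = []
    for i in range(len(segs)):
        best, best_d = None, np.inf
        for j in range(len(segs)):
            if i == j:
                continue
            # Parallelism test: |cos angle| close to 1 (orientation ignored).
            cosang = abs(np.dot(dirs[i], dirs[j]))
            if np.degrees(np.arccos(np.clip(cosang, -1, 1))) > angle_tol_deg:
                continue
            d = np.linalg.norm(mids[i] - mids[j])
            if d < best_d:
                best, best_d = j, d
        if best is not None and (best, i) not in pairs:
            pairs.append((i, best))
    return pairs

# Toy example: two parallel "finger edges" and one unrelated segment.
segs = [((0, 0), (10, 0)), ((0, 1), (10, 1)), ((0, 0), (0, 10))]
print(parallel_pairs(segs))  # [(0, 1)]
```

Each matched pair would then contribute its medial line as one edge of the virtual skeleton.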
  
300 International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B5. Amsterdam 2000.
	        