Detecting Grasping Opportunities in Range Data 
Martin Rutishauser and Frank Ade 
Image Science Laboratory 
Swiss Federal Institute of Technology 
CH-8092 Zürich 
phone: +41 1 632 {5054,5280} 
fax: +41 1 632 11 99 
email: {rutis,ade}@vision.ee.ethz.ch 
KEYWORDS: Range Data, Segmentation, Evidence Accumulation, Grasping 
ABSTRACT 
We have investigated the problem of removing objects from a heap without having recourse to object models. 
As we are relying on geometric information alone, the use of range data is a natural choice. The objects are to 
be grasped by a two-fingered gripper and therefore the system has to see opposite patches of the object surface. 
To ensure this, we use two range views from opposite sides. Each of the two acquired data sets is tessellated into 
triangles. A merge of them is performed in such a way that a “mutual approximation” is achieved in regions 
with overlap. A single triangular tessellation of the whole data set serves as a primary world representation. 
This representation is then segmented, i.e., partitioned into assemblies of contiguous triangles which correspond 
to objects or object parts in the scene. This is done by deleting all jump discontinuity points and all points 
with a concave curvature above a certain threshold. A connected component labeling completes the final world 
representation. Two heuristics help the system select a “focus of action” which consists of a suitable component. 
Good grasping point pairs on it are then identified which fulfill certain quality criteria. 
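The segmentation step described above — deleting jump-discontinuity points and strongly concave points, then labeling connected components of the surviving triangulation — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, data layout, and the assumption that a signed per-vertex concavity measure has been precomputed (positive meaning concave) are all ours.

```python
from collections import defaultdict

def segment_mesh(triangles, jump_points, concavity, concave_thresh):
    """Partition a triangular tessellation into connected components.

    triangles: list of 3-tuples of vertex ids
    jump_points: set of vertex ids lying on jump (depth) discontinuities
    concavity: dict vertex id -> signed curvature (positive = concave);
               assumed precomputed in an earlier stage
    """
    # Delete jump-discontinuity points and points whose concave
    # curvature exceeds the threshold.
    deleted = set(jump_points) | {
        p for p, c in concavity.items() if c > concave_thresh
    }
    # Keep only triangles whose vertices all survive.
    kept = [t for t in triangles if not deleted.intersection(t)]

    # Two surviving triangles are neighbours if they share an edge.
    edge_owner = defaultdict(list)
    for i, (a, b, c) in enumerate(kept):
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((a, c))):
            edge_owner[e].append(i)

    # Connected-component labeling by iterative flood fill.
    labels = [-1] * len(kept)
    n_comps = 0
    for seed in range(len(kept)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = n_comps
        while stack:
            i = stack.pop()
            a, b, c = kept[i]
            for e in (frozenset((a, b)), frozenset((b, c)), frozenset((a, c))):
                for j in edge_owner[e]:
                    if labels[j] == -1:
                        labels[j] = n_comps
                        stack.append(j)
        n_comps += 1
    return kept, labels, n_comps
```

Each resulting component is a candidate object or object part; the heuristics mentioned above would then operate on these components to choose a focus of action.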
1 INTRODUCTION 
Perceptual and reasoning capabilities are crucial if robots are to work in unstructured or weakly structured 
environments. A large class of tasks in such a context is the manipulation of objects. Up to now the dominating 
paradigm in this field has been the object model based approach for object recognition and pose determination. 
This approach has been thoroughly explored through many years and successfully adapted to many different 
tasks. Practically all vision based robot systems which are used in industry today rely on the model based 
object recognition paradigm. However, there are many material-handling tasks where it is desirable that the 
robot vision system can also handle objects for which it has no stored models.
Examples include the clearing of objects (which should not be there) from floors, working surfaces, conveyor 
belts, cafeteria trays, and the like. Singulation of unknown objects from a heap or from a dense object stream
(as can be found in a part-feeding system in manufacturing) is another application. Sometimes it is not
necessary to recognize the objects to be fed to a machine at all. Such cases should be exploited whenever possible, because
teaching a robot vision system an object model is a very time-consuming and therefore costly process. Even 
if object models are available, it could be too difficult or too time-consuming to actually use this knowledge 
in order to identify an object in a heap. Instead, the use of this knowledge could be deferred. The robot could
function as an intelligent singulator, expose the individual object to a camera or other sensors and only then 
invoke a recognition procedure to determine the identity and pose of the object. Because recognition of an 
object presented in isolation is much easier and faster than recognition in a cluttered context, the gain in
speed often outweighs the cost of the extra manipulation.
IAPRS, Vol. 30, Part 5W1, ISPRS Intercommission Workshop "From Pixels to Sequences", Zurich, March 22-24 1995