XVIIIth Congress (Part B5)

Figure 3: Object recognition approach using sensor fusion and active exploration. (Block diagram; recovered block labels: measurement requests (geometry, surface characteristics); sensor data acquisition; sensor data; registration, data fusion; common 3D ('multi-layer') surface representation; segmentation, feature extraction; matching; object models; sensor models; STEP description; symbolic scene description; inference.)
shown in Fig. 3. The main idea behind this concept is to keep the complexity within the recognition process small. Recognition starts with only a small amount of captured sensor data. The search space is then small, but matching cannot, of course, be expected to identify the object immediately. The hypothesis generation and verification scheme is therefore used to call for new sensor data. Goal-driven new measurements are carried out which, over several refinement steps, may lead to recognition in stages: at the beginning the object class is identified, and at the end of the analysis the unknown object itself is recognized.
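The refinement loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the matching, measurement-planning and acquisition modules are passed in as stand-in functions, and the confidence threshold and pruning factor are our own assumptions.

```python
def recognize(scene_data, object_models, match_fn, plan_fn, acquire_fn,
              max_steps=5, threshold=0.9):
    """Iteratively refine object hypotheses with goal-driven measurements.

    scene_data    -- list of sensor datasets captured so far
    object_models -- candidate object models to match against
    match_fn      -- scores one model against the current data (0..1)
    plan_fn       -- derives the next measurement request from hypotheses
    acquire_fn    -- carries out a measurement request, returns new data
    """
    hypotheses = [(m, 0.0) for m in object_models]  # (model, confidence)
    for _ in range(max_steps):
        # Match the current sensor data against all surviving hypotheses.
        hypotheses = [(m, match_fn(m, scene_data)) for m, _ in hypotheses]
        hypotheses.sort(key=lambda h: h[1], reverse=True)
        best_model, best_score = hypotheses[0]
        if best_score >= threshold:
            return best_model                       # object recognized
        # Otherwise request new, goal-driven sensor data and fuse it in.
        request = plan_fn(hypotheses)
        scene_data = scene_data + [acquire_fn(request)]
        # Prune hypotheses that the data so far clearly contradicts.
        hypotheses = [h for h in hypotheses if h[1] >= 0.1 * best_score]
    return None  # at best the object class was narrowed down
```

Each pass through the loop keeps the search space small: matching only ever runs against the hypotheses that survived the previous, cheaper measurements.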
The other important characteristic of the circular process (Fig. 3) is the fusion of multi-sensor data. Information from different sensors can clearly improve the quality of the segmentation result. Range images, for example, represent the 3D shape of the imaged object more explicitly than intensity images; segmenting range images into physically meaningful parts is therefore often much easier than segmenting intensity images. Moreover, given the spectrum of available sensors and the variable lighting, non-geometric properties can be captured as well. Surface roughness can be obtained either from high-resolution distance imagery or from high-resolution intensity images (in connection with dedicated lighting). Surface color is captured by the color CCD camera. Using different light incidence angles, a general surface classification can be obtained from image sequences.
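One way to picture the resulting 'multi-layer' surface representation is a set of reconstructed surface points, each carrying one entry per sensor-derived layer. The sketch below is purely illustrative; the layer names and the point-wise layout are our assumptions, not the paper's data structures.

```python
from dataclasses import dataclass, field

@dataclass
class SurfacePoint:
    """One point of the reconstructed object surface with sensor layers."""
    x: float
    y: float
    z: float
    # e.g. 'intensity', 'color', 'roughness' -- names are illustrative
    layers: dict = field(default_factory=dict)

def add_layer(points, name, values):
    """Project one sensor-derived layer onto the common surface
    representation, one value per surface point."""
    for p, v in zip(points, values):
        p.layers[name] = v
    return points
```

Because every layer is anchored to the same geometry, the segmentation can consult geometric and non-geometric evidence at each surface point simultaneously.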
This data must then serve two tasks: it must be incorporated into the segmentation and into the modelling of the object. For the modelling, we chose the ISO 10303 standard (STEP, (ISO 1994)), and in particular the application protocol "Core Data for Automotive Mechanical Design Processes" (10303-214), as the basis for deriving object models. This protocol allows surface properties such as surface coating or surface roughness to be specified.
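As a rough illustration of the kind of information AP 214 lets an object model carry, consider the following sketch. The class and attribute names here are ours, not the AP 214 entity names, and the model is deliberately minimal.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SurfacePatch:
    """Surface properties attached to one face of the object model."""
    geometry_id: str            # link to the corresponding geometric face
    roughness_ra_um: float      # arithmetic mean roughness Ra, micrometres
    coating: Optional[str] = None  # e.g. 'zinc', or None for bare metal

@dataclass
class ObjectModel:
    name: str
    patches: List[SurfacePatch]

    def roughest_patch(self) -> SurfacePatch:
        """Patch with the highest specified roughness, e.g. as a cue for
        where roughness measurements discriminate best between models."""
        return max(self.patches, key=lambda p: p.roughness_ra_um)
```

A model of this shape makes the non-geometric sensor layers (roughness, color) directly comparable against the stored object description during matching.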
A prerequisite for the segmentation is that all sensor data be transformed into a common representation. In our case, all data is projected onto the reconstructed object surface, forming several layers of information. This requires the registration of the data. Often this step relies on the sensor orientation reported by the measuring system, which makes very precise, and thus expensive, positioning necessary. Another possibility is to treat the given sensor orientations only as an approximation and to fit the data using points that can be identified automatically in both datasets. We demonstrate this approach in the next paragraph.
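Once corresponding points have been identified in both datasets, the refinement amounts to a rigid least-squares fit. The sketch below uses the standard SVD-based (Kabsch) solution; this is a generic formulation under our assumptions, not necessarily the fitting method used in the system described here.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform: returns (R, t) with dst ~ R @ src + t.

    src, dst -- (N, 3) arrays of corresponding 3D points, e.g. points
    identified automatically in both sensor datasets.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With such a fit, the measuring system's reported orientation only needs to be good enough to establish correspondences; the final alignment comes from the data itself.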
4 FIRST RESULTS 
To investigate our object recognition and location concept, we have carried out some experiments. Fig. 4(a) shows an industrial object as seen from one camera of the stereo system. The object is made of free-form shaped sheet metal. In the dark areas in Fig. 4(a), the metal has been cut out by a laser cutter. The images have a resolution of 512x512 pixels.
By image matching, the relatively coarse height model 
shown in Fig. 4(b) is obtained. As expected, this coarse 
model cannot deal properly with the breaklines of the cut- 
out regions of the object. Nevertheless, since the cut-out 
regions show up very well in the intensity imagery, we can 
extract them using standard image processing. As shown 
in Fig. 4(c), however, this usually yields some spurious data 
as well. To improve the results, we therefore use the larger detected features to define areas of interest, which are then captured with the range sensor.
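The extraction of the cut-out regions sketched above amounts to thresholding the dark areas, labelling connected components, and keeping only components large enough to warrant a range measurement. The pure-Python flood fill below is only a stand-in for whatever "standard image processing" was actually applied; the threshold and minimum area are illustrative parameters.

```python
from collections import deque

def regions_of_interest(image, dark_thresh, min_area):
    """Return the dark connected components of a grey-value image that are
    large enough to define an area of interest for the range sensor.

    image -- 2D list of grey values; components use 4-connectivity.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    rois = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or image[y][x] >= dark_thresh:
                continue
            # Flood-fill one dark component starting at (y, x).
            comp, queue = set(), deque([(y, x)])
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                comp.add((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and image[ny][nx] < dark_thresh):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            # Size filter suppresses the spurious small detections.
            if len(comp) >= min_area:
                rois.append(comp)
    return rois
```

The size filter is what removes the spurious detections of Fig. 4(c): isolated dark pixels never reach the minimum area and so never trigger a range measurement.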
Fig. 4(d) shows a range image of the lower left part of the 
object. The image consists of 256x256 3D data points. 
Clearly, besides capturing the breaklines very well, the data 
International Archives of Photogrammetry and Remote Sensing. Vol. XXXI, Part B5. Vienna 1996 
Figure 4: (a) intensity image; (b) height model [remainder of caption and adjoining text illegible in source].