Full text: Proceedings; XXI International Congress for Photogrammetry and Remote Sensing (Part B5-2)

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B5. Beijing 2008 
The approach we take for the integration assumes that quality
segments exist in each channel; extracting the highest-quality
segments from the individual channels therefore has the potential
of providing a segmentation that features the dominant
phenomena in the scene, and thereby a meaningful partitioning.
Generally, our objective is to obtain segments that are uniform
in their measured property, where, optimally, all data units
belonging to a segment have similar attributes.
Additionally, we aim for segments that are spatially significant
and meaningful. As such, we wish to assemble large groups of
data units, preferably of significant size in object space. These
segments should not, however, lead to under-segmentation.
In order to meet the need for significant grouping in object
space, we set the score of a segment with respect to its 3D
coverage. Due to the varying scale within the scan, segment size
in image space cannot be represented by the number of pixels.
The 3D coverage, R, is therefore calculated via
R = \int_{s \in S} \rho(s)\, ds \approx \sum_{s \in S} \rho(s)  (6)
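As a concrete illustration, the discrete approximation of Eq. (6) can be sketched in a few lines of Python; the function name and the NaN encoding of "no-return" pixels are our assumptions, not from the paper:

```python
import numpy as np

def coverage_3d(range_image, segment_mask):
    # Approximate R (Eq. 6): sum the range values rho(s) over the
    # segment's pixels s in S; "no-return" pixels (NaN here) are skipped.
    return float(np.nansum(range_image[segment_mask]))
```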
The 3D coverage of a segment does not guarantee its
correctness. As an example, meaningless strips may be
extracted in the range channel (see Figure 3a). In
order to reduce the appearance and the influence of wrong
segments, we enforce uniformity standards that relate to the
measured property. In the present case this variability is
modeled using preset threshold values on the within-segment
dispersion.
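A minimal sketch of such a uniformity test, assuming the dispersion is measured as the standard deviation of the segment's attribute values (the paper does not name the exact statistic):

```python
import numpy as np

def is_uniform(segment_values, max_dispersion):
    # Accept the segment only if the within-segment dispersion of its
    # measured property stays below the preset threshold.
    return float(np.std(segment_values)) < max_dispersion
```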
The proposed model is applied as follows. First, the largest
segment is selected from all channels; if the segment quality is
satisfactory, it is inserted into the integrated segmentation. All
pixels relating to this segment are then subtracted from all
channels, the isolated regions in the other channels are
regrouped, and their attribute values are computed. Next, the
largest remaining segment is extracted, and the process is
repeated until reaching a segment whose size is smaller than
a prescribed value and/or a preset number of iterations. We note
that due to the non-parametric nature of the mean-shift
segmentation, re-segmenting the data between iterations has
little effect.
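The iterative procedure above can be sketched as follows. The data layout (boolean pixel masks per channel) and all names are assumptions for illustration, and the re-grouping of isolated regions between iterations is omitted:

```python
import numpy as np

def integrate_segmentations(channels, data, range_image,
                            min_coverage, max_dispersion, max_iter=50):
    """Greedy integration sketch. `channels` is a list (one entry per
    channel) of segments, each a boolean pixel mask; `data[i]` is the
    measured-property image of channel i; `range_image` supplies the
    rho(s) values for the 3D-coverage score of Eq. (6)."""
    integrated = []
    for _ in range(max_iter):
        # 1. Pick the largest remaining segment over all channels,
        #    scored by 3D coverage rather than pixel count.
        best = None
        for ci, segments in enumerate(channels):
            for mask in segments:
                coverage = float(np.nansum(range_image[mask]))
                if best is None or coverage > best[0]:
                    best = (coverage, ci, mask)
        # 2. Stop once the next segment is too small.
        if best is None or best[0] < min_coverage:
            break
        coverage, ci, mask = best
        # 3. Keep the segment only if it passes the uniformity check.
        if float(np.std(data[ci][mask])) < max_dispersion:
            integrated.append((ci, mask.copy()))
        # 4. Subtract the segment's pixels from every channel.
        for segments in channels:
            for m in segments:
                m &= ~mask
    return integrated
```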
Figure 2. Polar representation of the individual cues used for the segmentation. The horizontal and vertical axes of the images
represent the values of φ and θ, respectively. (top) Intensity values as distances ρ (bright = far); "no-return" and "no-reflectance" pixels are
marked in blue. (middle) Surface normals, colored by their value. (bottom) Color content as projected to the
scanner system (see text).