ISPRS Commission III, Vol. 34, Part 3A, "Photogrammetric Computer Vision", Graz, 2002
and C, and we determine the first canonical feature, which maximizes the first canonical correlation (and thus represents the baseline of consensus), and the redundancy Red. We keep the subset of features which maximizes Red among all the possible combinations (the process takes less than 15 minutes on a 333 MHz processor). We show for each group in Table 1
Group   r   Features                                           Red
G1      5   Ia, C, C_a, m_s1, d_r2                             0.59
G2      7   Ia, D, R, d_u1, C_a, m_s1, d_r2                    0.37
G3     10   Ia, D, R, d_u1, d_y1, C_a, d_e, m_s1, d_y2, d_r2   0.68
G4      8   Ia, C, D, d_u1, d_e, d_s, d_y2, d_r2               0.64

Table 1: Feature subsets which maximize redundancy.
the feature subset that maximizes the redundancy and the value of the redundancy criterion. Groups 3 and 4 have the highest scores, so one can claim that the subjective and objective features are substantially related along the first canonical dimension. This degree of agreement between numerical features and evaluators is not reached for group 2, which shows quite a low redundancy score.
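The redundancy criterion used above can be made concrete. The following is a minimal sketch of a Stewart-Love-style redundancy index computed from a canonical correlation analysis; the function name `first_redundancy` and the exact normalization are our assumptions, not the authors' implementation.

```python
import numpy as np

def first_redundancy(X, Y):
    """Stewart-Love redundancy of Y explained by the first canonical
    dimension of X (a sketch of the Red criterion, not the authors'
    exact formulation). X: objective features, Y: evaluator marks."""
    # standardize both blocks
    Xs = (X - X.mean(0)) / X.std(0)
    Ys = (Y - Y.mean(0)) / Y.std(0)
    n = X.shape[0]
    Sxx = Xs.T @ Xs / n
    Syy = Ys.T @ Ys / n
    Sxy = Xs.T @ Ys / n
    # whiten each block and take the SVD: singular values are the
    # canonical correlations
    Kx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Kx @ Sxy @ Ky.T)
    rho1 = s[0]                 # first canonical correlation
    b1 = Ky.T @ Vt[0]           # canonical weights for Y (unit-variance variate)
    loadings = Syy @ b1         # correlations of the Y variables with the variate
    var_extracted = np.mean(loadings ** 2)
    return rho1 ** 2 * var_extracted
```

The index lies in [0, 1]: it is the share of the Y block's variance reproduced by the first canonical variate of X, so a strongly related feature subset scores higher than an unrelated one.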
When we compare the behavior of the mean of the marks (recall that the PCA revealed a one-dimensional evaluator space) with that of the canonical features, we observe that some segmentations cause large discrepancies. In other words, for these images the evaluators' marks cannot be predicted from the chosen feature set. We therefore developed a procedure that eliminates segmentations causing conflicting votes among the evaluators and keeps only the consensus images, that is, those images that received the same relative ranking from all the evaluators. CA results on the pruned consensus set are shown in Table 2.
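The pruning step can be sketched as follows; the name `consensus_images` and the iterative rank-agreement rule are our assumptions, since the text does not give the exact procedure.

```python
import numpy as np

def consensus_images(marks):
    """Iteratively keep only images that every evaluator ranks
    identically (a sketch of the consensus pruning described in the
    text; the authors' exact rule may differ).

    marks: array of shape (n_images, n_evaluators)."""
    keep = np.arange(marks.shape[0])
    while True:
        sub = marks[keep]
        # per-evaluator ranks of the surviving images
        ranks = sub.argsort(axis=0).argsort(axis=0)
        # an image is consensual if all evaluators give it the same rank
        agree = (ranks == ranks[:, [0]]).all(axis=1)
        if agree.all():
            return keep
        keep = keep[agree]
```

For example, with four images marked by two evaluators who swap the middle two images, only the first and last images survive the pruning.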
Group   r   Features                                           Red
G1      7   Ia, C, D, R, d_u2, C_a, d_e                        0.73
G2      6   Ia, d_u1, C_a, m_s1, d_y2, d_r2                    0.40
G3     10   Ia, C, D, R, d_u1, d_u2, C_a, m_s1, d_y2, d_r2     0.79
G4      7   Ia, d_u1, d_u2, d_e, C_a, d_y2, d_r2               0.80

Table 2: Feature subsets which maximize redundancy (learning on consensus images).

This second table reveals a clear improvement both in the consistency of the selected features across groups and in the redundancy scores. While groups 1, 3 and 4 show this improvement, group 2 remains a poor predictor of the evaluator marks from the features. Hence we removed this group from the rest of the experiment.
Thus we obtain three psychovisual feature subsets which predict the vote of the evaluators reasonably well. One method to collapse these three sets into a single "best" set would be cross-validation across groups. For example, we take the feature set of group 1 and use it to predict the data of the other two groups, i.e., groups 3 and 4, calculating the redundancies Red(Ia, C, D, R, d_u2, C_a, d_e) on these groups. We repeat this calculation for the other two feature sets. Then we take the average of the redundancies of each feature set over the three segmentation groups and choose the largest one.
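The cross-validation just described can be sketched as follows; the helper names (`pick_best_subset`, the `redundancy` callable, the group data layout) are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def pick_best_subset(subsets, groups, redundancy):
    """Score each candidate feature subset by its average redundancy
    over all segmentation groups and keep the best one.

    subsets: {name: list of feature column indices}
    groups:  list of (X, Y) pairs, one per segmentation group
    redundancy: callable (X_subset, Y) -> float"""
    scores = {
        name: float(np.mean([redundancy(X[:, cols], Y) for X, Y in groups]))
        for name, cols in subsets.items()
    }
    best = max(scores, key=scores.get)
    return best, scores
```

Averaging over all groups, rather than scoring each subset only on the group it was learned from, penalizes subsets that do not generalize beyond their own jury.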
6 CONCLUSION 
We have presented the framework for a new feature extraction method for task-oriented segmentation, combining the statistical properties of image features with segmentation quality assessments by a jury. The methodology has been applied to the task of segmenting buildings in medium-resolution images. This study was the first step toward the extraction of psychovisual image features. The work will continue with the construction of membership functions of features in a Tverskian context.
ACKNOWLEDGMENTS 
We would like to thank Prof. B. Burtschy (ENST) for his valuable advice during discussions on canonical analysis.
REFERENCES 
Chassery, J. M. and Montanvert, A. (1991). Géométrie discrète en analyse d'images. Hermès, Paris.
Colliot, O., Bloch, I., and Tuzikov, A. V. (2002). Characterization of approximate plane symmetries for 3D fuzzy objects. In IPMU, Annecy, France.
Coster, M. and Chermant, J. (1985). Précis d'analyse d'images. CNRS.
Erdem, C., Tekalp, A., and Sankur, B. (2001). Metrics for performance evaluation of video object segmentation and tracking without ground-truth. In International Conference on Image Processing (ICIP'01), volume 2, pages 69-72. IEEE.
Gagalowicz, A. and Monga, O. (1985). Un algorithme de segmentation hiérarchique. In Reconnaissance des Formes et Intelligence Artificielle (Grenoble), volume 1, pages 163-178. INRIA.
Huang, Q. and Dom, B. (1995). Quantitative methods of evaluating image segmentation. In International Conference on Image Processing (ICIP'95), Washington DC, USA. IEEE.
Ji, Q. and Haralick, R. M. (1999). Quantitative evaluation of edge detectors using the minimum kernel variance criterion. In International Conference on Image Processing (ICIP'99), Kobe, Japan. IEEE.
Kam, L. (2000). Approximation multifractale guidée par la reconnaissance. PhD thesis, Orsay University.
Kanungo, T. and Haralick, R. M. (1995). Receiver operating curves and optimal Bayesian operating points. In International Conference on Image Processing (ICIP'95), Washington DC, USA, volume 3, pages 256-259. IEEE.
Letournel, V. (2000). Avancement de thèse. Technical Report CTA 2000 R 086, Centre Technique d'Arcueil.
Santini, S. and Jain, R. (1999). Similarity measures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(9):871-883.
Saporta, G. (1990). Probabilités, analyse des données et statistique. Technip.
Suk, M. and Chung, S. M. (1983). A new image segmentation technique based on partition mode test. Pattern Recognition, 16(5):469-480.
Tinsley, H. E. A. and Brown, S. D. (2000). Handbook of Applied Multivariate Statistics and Mathematical Modeling. Academic Press.
Zhang, Y. J. (1996). A survey on evaluation methods for image segmentation. Pattern Recognition, 29(8):1335-1346.