Figure 3: Reconstruction of the x coordinate of Figure 2 above using the first six of 100 Fourier descriptors. The first descriptor is the average and is left out. (a) Second descriptor only. (b) Sum of the second and third. (c) Sum of the second, third and fourth. (d) Sum of the second, third, fourth and fifth. (e) Sum of the second, third, fourth, fifth and sixth.
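As a rough illustration of the reconstruction shown in Figure 3, the following sketch (names are hypothetical, not taken from the paper) keeps only the first few Fourier coefficients of the x coordinate, drops the first (average) term, and transforms back:

```python
# Illustrative sketch: rebuild the x coordinate of a 100-point contour from
# its first few Fourier coefficients, dropping the first (average) term.
import numpy as np

def reconstruct_x(x, n_coeffs):
    X = np.fft.fft(x)                          # Fourier descriptors of x
    X_trunc = np.zeros_like(X)
    X_trunc[1:n_coeffs] = X[1:n_coeffs]        # keep descriptors 2..n, skip the average X[0]
    # mirror the kept coefficients so the inverse transform is real-valued
    X_trunc[-(n_coeffs - 1):] = np.conj(X[1:n_coeffs][::-1])
    return np.fft.ifft(X_trunc).real

x = 50 + 40 * np.cos(np.linspace(0, 2 * np.pi, 100, endpoint=False))  # toy x coordinate
panels = [reconstruct_x(x, k) for k in range(2, 7)]   # panels (a) through (e)
```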
For least squares matching, the number of terms has to be the same on both sides of the equation. Since the number of points on each feature may not be the same, we use the lower number of coefficients between the compared shapes for each matching. Using fewer coefficients lets us compare the same leading coefficients with each other without changing the shape itself; it is also more efficient, since the coefficients need to be calculated only once. The alternative would be to delete some of the points in the shape with the higher number of points, but this would change the contour itself. The results are comparable to those of Belongie et al. (2002), with the added benefit of a simpler algorithm than shape contexts, although our method only considers the outline contours, not contours within contours.
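A minimal sketch of this truncation step is given below; the function name and the complex x + iy descriptor representation are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch: truncate both descriptor sets to the smaller of the
# two lengths before setting up the least squares matching.
import numpy as np

def common_descriptors(contour_a, contour_b):
    fd_a = np.fft.fft(contour_a[:, 0] + 1j * contour_a[:, 1])
    fd_b = np.fft.fft(contour_b[:, 0] + 1j * contour_b[:, 1])
    n = min(len(fd_a), len(fd_b))   # lower coefficient count of the two shapes
    return fd_a[:n], fd_b[:n]       # the contours themselves are left unchanged
```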
4 RESULTS 
We tested the method with both geographic and non-geographic 
features. The geographic features are manually extracted from 
maps and images. The non-geographic images, on the other hand, are obtained from the silhouette database of Brown University (Database, 2007). The contours used in this study are represented by a set of sampled points. Some contours were digitized with intentional errors; others were sampled at regular intervals from the output of an edge detector (a small resampling sketch is given below). There is nothing unique about the detected edge points: they are not intersection points or break points. The number of sampled points was also varied. The results are generated by comparing each feature against all the features used in the experiments.
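The following is a minimal sketch of such regular resampling by arc length; the helper name and details are illustrative assumptions, not code from the paper.

```python
# Hypothetical helper: resample a closed contour (an N x 2 point array) at
# regular arc-length intervals, treating no point as special.
import numpy as np

def resample_contour(points, n_samples):
    closed = np.vstack([points, points[:1]])          # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    t = np.linspace(0.0, s[-1], n_samples, endpoint=False)
    x = np.interp(t, s, closed[:, 0])
    y = np.interp(t, s, closed[:, 1])
    return np.column_stack([x, y])
```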
The similarity between the features is computed using equation (16), which exploits the empirical estimate of variance and the condition number of the equation system generated from the homography transform between the Fourier descriptors of the contours (a small illustrative sketch of this computation is given at the end of this section). We tabulated the matching recognition performance of the proposed method in the form of a confusion matrix, which is shown in Figure 5. Ideally, the regions marked by red outlines, which correspond to the clusters of objects, should have the highest similarity, and the other regions in the matrix should have no similarity. Representing a high match by white and no match by black, the method produces shades of gray that show robust matching performance. An affine projection of F15 gives an error close to zero when compared to the original, as expected, due to round-off. The method even provides robust matching across different projections, as with the two instances of Mexico. Similar performance results are
observed when the features are occluded as shown for two 
instances of Lake Superior and the occluded hand silhouettes. We have observed similar performance for three instances of the Mexico map, where the occluded version matches the two other instances very well. An interesting observation is that the match score indicates some similarity between F15 and F16, which can be considered true since both are silhouettes of planes. We should note that, for a human observer, all the geographic features have some similarity, since most maps used in the experiments have small peninsulas visible at one end of the feature. This observation, however, is not valid for Staten Island, which has a more elliptical shape and a smoother outline than the rest.
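As referenced above, a loose sketch of the kind of similarity score used in this section follows. Equation (16) itself is defined earlier in the paper and is not reproduced here; treating each complex descriptor as a 2-D point and the particular combination of residual variance and condition number below are illustrative assumptions, not the paper's exact formula.

```python
# Loose stand-in for equation (16): fit a plane homography mapping one set of
# Fourier descriptors onto the other and score the fit using the empirical
# residual variance and the condition number of the equation system.
import numpy as np

def similarity(fd_a, fd_b):
    pa = np.column_stack([fd_a.real, fd_a.imag])
    pb = np.column_stack([fd_b.real, fd_b.imag])
    rows = []
    for (x, y), (u, v) in zip(pa, pb):                # direct linear transform rows
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows)
    _, svals, vt = np.linalg.svd(A)
    h = vt[-1]                                        # homography parameters
    sigma2 = np.mean((A @ h) ** 2)                    # empirical variance of the residuals
    kappa = svals[0] / max(svals[-1], 1e-12)          # condition number of the system
    return 1.0 / (1.0 + sigma2 * kappa)               # higher score = more similar (illustrative)
```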
5 CONCLUSION 
This paper provides a novel approach to matching objects represented in the form of a silhouette. Compared to many other recognition methods in the literature, our method allows extracted silhouettes and their outlines to contain noise and occlusions. Additionally, the method resolves projective deformations of the objects that occur due to perspective viewing effects. The proposed approach exploits projective geometry, which results in a robust and computationally simple procedure. An important contribution in this regard is the elimination of the need for point correspondences, for identical starting points on the silhouette outline, and for a consistent direction of digitization of the outline. Experimental results show the robustness of the proposed method.
REFERENCES 
Belongie, S., Malik, J. and Puzicha, J., 2002. Shape matching and object recognition using shape contexts. IEEE Trans. on Pattern Analysis and Machine Intelligence 24(4), pp. 509-522.
Database, 2007. http://www.lems.brown.edu/vision/researchareas/siid/index.html. Brown University.
DeValois, R. L. and DeValois, K., 1980. Spatial vision. Ann. 
Rev. Psychol. 
Hartley, R. and Zisserman, A., 2000. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge.
Loncaric, S., 1998. A survey of shape analysis techniques. 
Pattern Recognition 31(8), pp. 983-1001. 
Meyer, Y., 1993. Wavelets, algorithms and applications. 
Society for Industrial and Applied Mathematics. 
Noor, H., Mirza, S., Sheikh, Y., Jain, A. and Shah, M., 2006. 
Model generation for video based object recognition. ACM 
Multimedia 2006.