Proceedings International Workshop on Mobile Mapping Technology

7A-2-6 
6.2 Aboveground Object Extraction 
After shadow extraction, adjacent areas whose intensity means differ strongly and whose variances are small are labeled aboveground object areas. We define a larger window that contains such a pair of areas, and segmentation is performed once again to achieve better boundaries. The method described above is then used to select and link segments to form the boundaries. 
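The pairing test described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the threshold values and the helper names (`is_object_shadow_pair`, `pair_window`) are assumptions introduced here, and the regions are simplified to flat arrays of pixel intensities.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): two adjacent
# segments form a candidate aboveground-object/shadow pair when their
# intensity means differ strongly and each region is internally
# homogeneous (small variance). Threshold values are assumptions.

MEAN_DIFF_MIN = 60.0   # assumed threshold on |mean_a - mean_b|
VAR_MAX = 100.0        # assumed upper bound on per-region variance

def is_object_shadow_pair(region_a, region_b):
    """region_a, region_b: 1-D sequences of pixel intensities of two
    adjacent segments. Returns True for a candidate pair."""
    a = np.asarray(region_a, dtype=float)
    b = np.asarray(region_b, dtype=float)
    mean_diff = abs(a.mean() - b.mean())
    return bool(mean_diff > MEAN_DIFF_MIN and
                a.var() < VAR_MAX and b.var() < VAR_MAX)

def pair_window(bbox_a, bbox_b, margin=5):
    """Larger window enclosing both regions' bounding boxes
    (x0, y0, x1, y1), expanded by a margin, for re-segmentation."""
    x0 = min(bbox_a[0], bbox_b[0]) - margin
    y0 = min(bbox_a[1], bbox_b[1]) - margin
    x1 = max(bbox_a[2], bbox_b[2]) + margin
    y1 = max(bbox_a[3], bbox_b[3]) + margin
    return (x0, y0, x1, y1)
```

A bright region next to a dark shadow region passes the test, while two regions of similar brightness do not; the enclosing window is then handed back to the segmentation step.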
6.3 Distinguishing Natural Objects and Man-Made Objects by 
Shape Classification 
Aboveground objects and their shadows have characteristic shapes; natural objects such as trees differ from man-made objects such as buildings and trucks. Natural objects often occupy irregular and complex areas. In contrast, man-made objects often have a regular contour shape, such as a box-like outline or parallel edges. There are numerous methods to distinguish between different shapes (Ballard and Brown 1982). In our study, a contour shape is measured by the complexity of the direction change from pixel to pixel in a 3 x 3 image window. The direction change can be represented through Freeman chain coding (Freeman 1974). Within a 3 x 3 image window, the direction is coded as: 
4 3 2 
5 0 1 
6 7 8 
Suppose a contour is represented by a sequence of chain codes c_0, c_1, ..., c_n. For any i, 0 < i < n, if c_{i-1} = c_i = c_{i+1}, then the pixel direction at c_i has no change; otherwise there is a direction change at c_i. Moreover, if 

    max_{k = i-1, i} |c_{k+1} - c_k| < m, 

then the direction at c_i has no change within a step size of m; otherwise there is a direction change at c_i within a step size of m. As in the calculation of the fractal dimension of a shape, the greater the direction change, the more complex the curve. Shapes with high complexity are classified as natural objects, while shapes with low complexity are classified as man-made objects. 
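The chain-coding and complexity measure can be sketched as follows. This is an illustrative reading of the method, not the authors' code: the code-to-offset table follows the 3 x 3 layout shown above, the cyclic (modulo-8) difference used to compare adjacent codes is an assumption added here so that codes 8 and 1 count as neighboring directions, and the `complexity` ratio is one plausible way to turn direction-change counts into a classification score.

```python
# Illustrative sketch of Freeman chain coding and a direction-change
# complexity measure, assuming the 8-neighbour code layout shown above
# (4 3 2 / 5 0 1 / 6 7 8), with x increasing rightward and y upward.

# Offsets (dx, dy) for each nonzero Freeman code in this layout.
CODE_TO_OFFSET = {
    1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (-1, 1),
    5: (-1, 0), 6: (-1, -1), 7: (0, -1), 8: (1, -1),
}
OFFSET_TO_CODE = {v: k for k, v in CODE_TO_OFFSET.items()}

def chain_code(points):
    """Encode an 8-connected pixel contour, given as (x, y) tuples,
    as a list of Freeman chain codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(OFFSET_TO_CODE[(x1 - x0, y1 - y0)])
    return codes

def _cyclic_diff(a, b):
    """Smallest step between two direction codes on the 8-code cycle
    (assumption: wraparound so that codes 8 and 1 are neighbours)."""
    d = abs(a - b) % 8
    return min(d, 8 - d)

def direction_changes(codes, m=1):
    """Count interior positions i whose direction changes by at least
    m with respect to either neighbouring code."""
    changes = 0
    for i in range(1, len(codes) - 1):
        if max(_cyclic_diff(codes[i - 1], codes[i]),
               _cyclic_diff(codes[i], codes[i + 1])) >= m:
            changes += 1
    return changes

def complexity(codes, m=1):
    """Fraction of interior code positions showing a direction change:
    higher values suggest natural objects, lower values man-made ones."""
    interior = max(len(codes) - 2, 1)
    return direction_changes(codes, m) / interior
```

A straight contour yields no direction changes and complexity 0, while a zigzag contour (alternating codes 2 and 8) yields a change at every interior position and complexity 1; thresholding this score separates regular man-made outlines from irregular natural ones.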
7. CONCLUSIONS AND ACKNOWLEDGEMENTS 
This paper presents research results on feature extraction from mobile mapping image sequences using geometric constraints derived from GPS/INS and stereo models. The extracted features are fed to object recognition models, for example, neural networks. 
The research was supported by the National Science Foundation (NSF project # CMS-9812783) and the OSU Center for Mapping. The mobile mapping data used in this paper were provided by Transmap Inc. of Columbus, OH. 
REFERENCES 

Ballard, D.H. and C.M. Brown 1982, Computer Vision, Prentice-Hall, Inc., New Jersey. 

Beucher, S. and F. Meyer 1993, The morphological approach to segmentation: the watershed transformation. In E. Dougherty, ed., Mathematical Morphology in Image Processing, Marcel Dekker Inc., pp. 433-481. 

Canny, J. 1986, A Computational Approach to Edge Detection, IEEE Trans. Pattern Anal. Machine Intell. (PAMI), Vol. 8, No. 6. 

Freeman, H. 1974, Computer Processing of Line-Drawing Images, Computing Surveys, Vol. 6, No. 1, pp. 57-98. 

Gruen, A. and H. Li 1997, Semi-Automatic Linear Feature Extraction by Dynamic Programming and LSB-Snakes, Photogrammetric Engineering & Remote Sensing, Vol. 63, No. 8, pp. 985-995. 

Kass, M., A. Witkin and D. Terzopoulos 1987, Snakes: Active Contour Models, International Journal of Computer Vision, Vol. 1, No. 4, pp. 321-331. 

Li, R., F. Ma, and Z. Tu 1998, Object Recognition from AIMS Data Using Geometric Constraints, Project Report, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 62 p. 

Pratt, W.K. 1991, Digital Image Processing, Second Edition, Wiley-Interscience. 

Serra, J. 1982 & 1988, Image Analysis and Mathematical Morphology, Academic Press, London, Vol. 1 and Vol. 2. 

Tao, C., R. Li, and M.A. Chapman 1998, Automatic Reconstruction of Road Centerlines from Mobile Mapping Image Sequences, Photogrammetric Engineering & Remote Sensing, Vol. 64, No. 7, pp. 709-716.