objects, such as buildings, do not coincide on the different 
images (e.g., parallel green, yellow and red edges in the upper 
row of buildings). This suggests that the Instantaneous Field 
of View of the sensor had a significant spectral dependence. 
Unsupervised classification. In solving remote-sensing 
problems, classification - sometimes combined with 
contextual information - is usually expected to provide the 
final answer. To increase the reliability and robustness of 
classification, many researchers favor supervised techniques. 
In our object recognition scheme, classification is just one of 
the early vision processes, providing only partial, incomplete 
information. Since the 
number of classes is usually not known a priori and no 
training data is available, we employ unsupervised 
classification methods. 
Unsupervised classification explores the inherent cluster 
structure of the feature vectors in the multidimensional feature 
space. Clustering usually results in a grouping in which the 
variance within each cluster is minimized while the variance 
between clusters is maximized. Clusters are not intrinsic 
properties of the set of features under consideration. There is 
a risk that, instead of finding a natural data structure, we 
would be imposing an arbitrary or artificial one, for example 
by selecting an unreasonable number of clusters. Therefore, it 
is essential to analyze the distribution of the classes and their 
separability in feature space. 
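For reference, this grouping criterion can be written as the minimization of the total within-cluster scatter; the following is a standard k-means/ISODATA-style formulation, not a formula given in the paper:

    \min_{C_1,\dots,C_k} \sum_{j=1}^{k} \sum_{\mathbf{x}_i \in C_j} \lVert \mathbf{x}_i - \boldsymbol{\mu}_j \rVert^2,
    \qquad \boldsymbol{\mu}_j = \frac{1}{|C_j|} \sum_{\mathbf{x}_i \in C_j} \mathbf{x}_i

where the x_i are the feature vectors assigned to cluster C_j and mu_j is the corresponding cluster center.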
In this test, we clustered the visible-NIR bands (3-10) of the 
multispectral scanner data using the well-known ISODATA 
method. At the heart of the ISODATA scheme is an updating 
loop that, using a distance measure, reassigns points to the 
nearest cluster center each time a center is moved (Nadler and 
Smith, 1993). Since the number of different cover types is 
scene dependent and usually not known a priori, the dataset 
was classified several times, increasing the number of classes 
each time. Because some of the spectral bands are highly 
correlated, different band combinations were also tested. Each 
classification was compared with the ground truth, and the 
separability of the classes was analyzed. Several separability 
measures are described in the literature, and finding the best 
definition is not a trivial task (Schowengerdt, 1997). For our 
clusterings, the different separability measures (Mahalanobis, 
divergence, Jeffries-Matusita, etc.) provided very similar 
results. 
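To illustrate the scheme just described, the following sketch implements the reassignment loop at the core of ISODATA in Python with NumPy. It covers only the assign-and-update step mentioned above; the split and merge heuristics of full ISODATA are omitted, and all parameter names and values are illustrative assumptions rather than the implementation used in this work.

    import numpy as np

    def isodata_core(X, k, n_iter=50, tol=1e-4, seed=0):
        # Minimal ISODATA-style core: repeatedly assign each feature vector to
        # the nearest cluster center and recompute the centers until they
        # stabilise. Split/merge heuristics of full ISODATA are omitted.
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]  # initial centers
        for _ in range(n_iter):
            # Euclidean distance of every feature vector to every center
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)                # nearest-center assignment
            new_centers = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
            if np.linalg.norm(new_centers - centers) < tol:      # converged
                return labels, new_centers
            centers = new_centers
        return labels, centers

    # Example: an 8-band image reshaped to (n_pixels, 8) feature vectors
    # labels, centers = isodata_core(bands.reshape(-1, 8), k=6)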
We obtained the best clustering results when using the 
complete 8-band dataset (see Fig. 3b, 3c). However, acceptable 
results were still obtained when using only four bands selected 
from different spectral positions. Six major cover types 
were distinguished in the scene (Figure 3b), namely water and 
roof (black, 1), roof (dark green, 2), vegetation (red, 3 and 4), 
and roof and bare soil (light gray and white, 5 and 6). When 
more classes were used, for example ten (Figure 3c), some of 
the classes were split, giving rise to new classes with relatively 
low separability. Comparing the cluster maps with the aerial 
photographs reveals that, despite the confusion between water 
and roof pixels and between bare soil and roof pixels, the 
boundary between man-made surfaces (buildings, walkways, 
driveways, roads) and vegetated natural surfaces is always 
recognizable. Note that other boundaries, such as those 
between bare soil and grass and between vigorous and sparse 
vegetation, are also present, even though they are not related 
to any objects of interest. It is important to emphasize that, as 
is well known from previous studies, there is no single 
characteristic building or roof spectrum. For example, the 
6-class clustering assigned roof pixels to four different classes 
with distinctly different spectra throughout the entire spectral 
range. 
To include information about the quality of the clustering in 
the visual representation, we introduce the concept of weak 
and strong boundaries. Weak boundaries are located between 
pixels belonging to classes with low separability; they are of 
secondary importance. In the 6-class clustering, all 
boundaries are strong, whereas the 10-class clustering 
yielded 3 weak boundaries out of a total of 45 class-pair 
boundaries. The use of weak and strong boundaries helps 
considerably in organizing and simplifying edges. 
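One possible way to derive such weak/strong labels is sketched below. It assumes Gaussian class statistics estimated from the cluster map and an illustrative Jeffries-Matusita threshold; the paper does not specify how separability is thresholded into weak and strong, so both choices are assumptions.

    import numpy as np
    from itertools import combinations

    def jeffries_matusita(mu1, cov1, mu2, cov2):
        # Jeffries-Matusita distance (range 0..2) between two Gaussian class
        # models, computed from the Bhattacharyya distance.
        cov = 0.5 * (cov1 + cov2)
        diff = mu1 - mu2
        b = (0.125 * diff @ np.linalg.solve(cov, diff)
             + 0.5 * np.log(np.linalg.det(cov)
                            / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
        return 2.0 * (1.0 - np.exp(-b))

    def boundary_strength(X, labels, jm_threshold=1.9):
        # Label every pair of classes as a 'strong' or 'weak' boundary
        # depending on whether its Jeffries-Matusita separability exceeds the
        # threshold. The threshold value is illustrative, not from the paper.
        stats = {c: (X[labels == c].mean(axis=0), np.cov(X[labels == c].T))
                 for c in np.unique(labels)}
        return {(a, b): 'strong'
                if jeffries_matusita(*stats[a], *stats[b]) >= jm_threshold
                else 'weak'
                for a, b in combinations(sorted(stats), 2)}

    # With a 10-class result this yields 45 class pairs, a few of which may be weak:
    # strength = boundary_strength(bands.reshape(-1, 8), labels)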
Fig. 2 a. Visible image and detected edges; b. NIR image and detected edges; c. Thermal image and detected edges.