ArcMap for interpretation and class labeling. Objects corresponding to each signature were identified in the second unsupervised classified image, and the classes were labeled using various image interpretation techniques. The final signatures were then used as the input signature file in a supervised classification procedure to extract a semi-supervised classified image (Fig. 3). The ERDAS IMAGINE Accuracy Assessment function was used to analyze the results of the supervised and unsupervised methods. For this purpose, control points were selected with a fair distribution over the image, and their identities were established using all available image data and vector maps. At least 20 control points were used for each clip of the image, taking care to choose at least one control point for each object class. These control points were then used in the accuracy assessment process. The accuracy assessment procedure was repeated for all classification results of the MS and pan-sharpened images. In aggregate, the accuracies of the unsupervised and supervised methods, expressed as the Kappa index, were about 0.8 and 0.9, respectively. Although the supervised method is more accurate than the unsupervised one, larger areas in the supervised results were left unclassified because of the lack of comprehensive training signatures.
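As a rough illustration of this accuracy assessment step, the sketch below cross-tabulates control-point reference labels against classified labels and computes the overall accuracy and the Kappa index. The class codes and label vectors are invented for the example and do not come from the study's data; the actual assessment was carried out with the ERDAS IMAGINE Accuracy Assessment function rather than with this code.

```python
# Illustrative sketch: overall accuracy and Kappa index from control points.
import numpy as np

def confusion_matrix(reference, classified, n_classes):
    """Cross-tabulate reference vs. classified labels of the control points."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for r, c in zip(reference, classified):
        cm[r, c] += 1
    return cm

def kappa(cm):
    """Cohen's Kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical control-point labels for one image clip (>= 20 points per clip).
reference  = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 0, 1, 2, 3, 3, 0, 1, 2, 3, 0, 1])
classified = np.array([0, 0, 1, 2, 2, 2, 2, 3, 3, 0, 1, 2, 3, 1, 0, 1, 2, 3, 0, 1])

cm = confusion_matrix(reference, classified, n_classes=4)
print("overall accuracy:", np.trace(cm) / cm.sum())
print("kappa:", kappa(cm))
```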
Figure 1. Supervised classified image 
Figure 2. Signature plot of unsupervised classified image 
Figure 3. Unsupervised (semi-supervised) classified image 
It is evident that any classifier requiring high training accuracy may not achieve good generalization capability (Ng et al., 2007), which explains the larger unclassified areas in the supervised method's results.
5.4 Fuzzy Object Extraction 
For the fuzzy object extraction method, we produced training-site files for all tracks of the image. To minimize redundant differences within homogeneous areas of the image, a segmentation function of the eCognition software was applied to the image tracks. After segmentation, the training sites were used to define object classes: the parent classes were labeled first, and then the different classes within each parent class were defined. Finally, all image tracks were classified and converted to polygon vector layers (Fig. 4). In some cases the extracted vector layers contained many pixel-sized polygons; to reduce the number of small polygons, the classification process was repeated from the segmentation step down to the vectorization phase.
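The object-based workflow above was carried out in eCognition; as a stand-in sketch, the code below segments a synthetic multiband image, averages the bands within each segment, and assigns each segment the class of the nearest training-site mean through a simple fuzzy membership. The band values, class means, and SLIC segmentation parameters are assumptions made for the example, not settings from the study.

```python
# Minimal object-based (segment-then-classify) sketch, not the eCognition workflow itself.
import numpy as np
from skimage.segmentation import slic

def classify_objects(image, class_means, n_segments=500):
    """image: (rows, cols, bands); class_means: (n_classes, bands)."""
    segments = slic(image, n_segments=n_segments, compactness=10, channel_axis=-1)
    labels = np.zeros_like(segments)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        mean_spectrum = image[mask].mean(axis=0)          # per-object spectral feature
        dist = np.linalg.norm(class_means - mean_spectrum, axis=1)
        membership = 1.0 / (dist + 1e-6)                  # simple fuzzy memberships
        membership /= membership.sum()
        labels[mask] = membership.argmax()                # defuzzify: take max membership
    return segments, labels

# Hypothetical 4-band image patch and training-site class means.
rng = np.random.default_rng(0)
image = rng.random((120, 120, 4)).astype(np.float32)
class_means = np.array([[0.2, 0.3, 0.2, 0.6],   # e.g. vegetation
                        [0.5, 0.5, 0.5, 0.4],   # e.g. built-up
                        [0.1, 0.1, 0.2, 0.1]])  # e.g. water
segments, labels = classify_objects(image, class_means)
```

In this sketch the per-segment label map could subsequently be vectorized to polygons, which is the step where the many pixel-sized polygons mentioned above would appear.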
As the last method of image information extraction, a neural network classification was implemented with the help of IDRISI. For this method, training sites were defined in IDRISI and the tags of the related object classes were assigned. The classification process was run with 1,000 iterations to achieve the lowest RMS error.
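The neural network step was performed in IDRISI; the sketch below approximates it with scikit-learn's MLPClassifier trained for up to 1,000 iterations on hypothetical per-pixel training samples. scikit-learn minimizes cross-entropy internally, so the RMS error mentioned above is computed afterwards from the predicted class probabilities purely for illustration.

```python
# Stand-in sketch for the IDRISI multi-layer perceptron classification step.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Hypothetical training pixels: 4 bands, 4 object classes, 50 samples per class.
X_train = rng.random((200, 4))
y_train = np.repeat(np.arange(4), 50)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1)
mlp.fit(X_train, y_train)

# RMS error between one-hot targets and predicted class probabilities.
proba = mlp.predict_proba(X_train)
one_hot = np.eye(4)[y_train]
print("training RMS error:", np.sqrt(np.mean((proba - one_hot) ** 2)))

# Classify a whole (synthetic) image by reshaping (rows, cols, bands) -> (pixels, bands).
image = rng.random((100, 100, 4))
labels = mlp.predict(image.reshape(-1, 4)).reshape(100, 100)
```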
Figure 4. Fuzzy classified image using eCognition