Proceedings: XXI International Congress for Photogrammetry and Remote Sensing (Part B4-3)

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B4. Beijing 2008 
Yi-jin and Xiao-wen (2004) presented satellite image analysis for map revision. They underlined the use of knowledge derived from an existing GIS database as a priori knowledge in the image classification process, and used cartographic semantics to extract objects from images based on geometry and topology rules. By finding the differences between the image and the vector map, new and changed geographic features are detected, and the map is updated accordingly.
Sahin et al. (2004) analyzed automatic and manual feature extraction from KVR-1000 aerial images for the revision of 1:5000 to 1:25000 scale maps, and Zhou et al. (2005) analyzed semi-automatic map revision through human-computer interaction as a faster and more reliable method. Sahin et al. stated that buildings, roads, water structures and forest classes can be extracted by automatic methods, while the remaining object classes for the mentioned maps must be extracted manually. Using Ground Control Points (GCPs) collected for the study area, they analyzed the geometric accuracy of the KVR-1000 ortho-images. Afterwards, they used manual and object-oriented information extraction methods to extract the needed object classes from the images.
3. TEST DATA AND STUDY AREA 
The input data comprised a recently acquired QuickBird scene 
and an Ikonos image frame together with aerial photographs and 
digital maps, all covering the same area. The scales of digital 
maps were 1:2000 and 1:5000 over the whole area. The Ikonos 
image was the primary source of information for the analyses 
and the QB scene, which only partially covered the study area, 
was used for error checking and the interpretation of features 
which were not sufficiently discernible in the Ikonos image. 
Aerial photographs were also used as supplementary information to improve the accuracy of decisions based on the image analysis processes.
4. GEOMETRIC CORRECTION AND IMAGE FUSION 
Map revision involves data sources from various entities 
including vector layers, scanned data, aerial photographs and 
satellite imagery. In order to establish maximum geometric 
compatibility, all of the mentioned data sources should be 
geocoded and projected in a common coordinate system. The 
old maps to be updated had good geometric accuracy and maximum compatibility between different layers in common areas and between adjacent sheets at the edges. Accordingly, we used these maps as reference vectors to rectify the other data sources mentioned above. With the help of different geometric correction models, specifically polynomial and rational functions, the aerial photographs and satellite imagery were geocoded and projected in the Universal Transverse Mercator (UTM) coordinate system. The corrected images were then ready for further processing and analysis.
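The polynomial correction model mentioned above can be sketched as a least-squares fit of image coordinates to map coordinates over the GCPs. The GCP values below are illustrative stand-ins, not the project's actual data, and the residual check is only a minimal accuracy indicator:

```python
import numpy as np

# Hypothetical GCPs: (column, row) in the raw image -> (E, N) in UTM metres.
# A real project would use many well-distributed points from the reference maps.
img = np.array([[120.0,  80.0], [950.0, 110.0], [900.0, 870.0], [150.0, 830.0]])
utm = np.array([[500120.0, 3950900.0], [500950.0, 3950880.0],
                [500910.0, 3950120.0], [500160.0, 3950140.0]])

# First-order polynomial (affine) model: E = a0 + a1*x + a2*y, N = b0 + b1*x + b2*y.
A = np.column_stack([np.ones(len(img)), img[:, 0], img[:, 1]])
coeff_E, *_ = np.linalg.lstsq(A, utm[:, 0], rcond=None)
coeff_N, *_ = np.linalg.lstsq(A, utm[:, 1], rcond=None)

def to_utm(x, y):
    """Map raw image coordinates to UTM with the fitted polynomial."""
    return (coeff_E[0] + coeff_E[1] * x + coeff_E[2] * y,
            coeff_N[0] + coeff_N[1] * x + coeff_N[2] * y)

# Residuals at the GCPs indicate the geometric accuracy of the fit.
pred = A @ np.column_stack([coeff_E, coeff_N])
rmse = float(np.sqrt(np.mean(np.sum((pred - utm) ** 2, axis=1))))
```

Higher-order polynomials and rational functions follow the same pattern with more terms in the design matrix; they need correspondingly more GCPs to stay well-conditioned.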
In order to exploit the maximum capabilities of the images, we 
needed to extract pan-sharpened products. Results of 
multiplicative, IHS, PCA and wavelet image fusion methods 
were analyzed to obtain suitable pan-sharpened product 
according to specific needs of map revision. Quality of the 
results of different image fusion methods were assessed through 
visual interpretation. The multiplicative and wavelet methods kept the spectral richness of the original multispectral images better than the IHS and PCA methods; the latter two, in turn, better preserved the spatial precision of the panchromatic band. Smaller objects were then extracted from the pan-sharpened images. From the visual point of view, the pan-sharpened images produced by the PCA and IHS methods proved more useful for detecting the boundaries of fine objects, while those produced by the multiplicative and wavelet fusion methods were exploited in the automatic classification procedures.
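As an illustration of the fusion step, a minimal multiplicative pan-sharpening sketch follows. The rasters are random stand-ins (the actual work used Ikonos/QuickBird bands in dedicated software), and the nearest-neighbour upsampling and per-band rescaling are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: a 4-band multispectral patch (4 m pixels) and the
# corresponding panchromatic patch (1 m pixels), values in [0, 1].
ms  = rng.random((4, 64, 64))
pan = rng.random((256, 256))

# Upsample MS to the pan grid by simple pixel replication (nearest neighbour).
ms_up = ms.repeat(4, axis=1).repeat(4, axis=2)

# Multiplicative fusion: each sharpened band is the product of the upsampled
# MS band and the pan band, rescaled back to the original band's maximum.
prod = ms_up * pan[None, :, :]
sharp = prod / prod.max(axis=(1, 2), keepdims=True) * ms.max(axis=(1, 2), keepdims=True)
```

The rescaling keeps each fused band within its original radiometric range, which is one reason the multiplicative method preserves spectral character better than component-substitution methods such as IHS.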
5. OBJECT EXTRACTION 
5.1 Preprocessing 
There were diverse groups of surface cover types in the area, so we needed to analyze the image in smaller parts for easier processing and interpretation. For this reason, prior to the analysis of the images, a grid with cells of 1000x1000 meters was constructed. Given the one-meter cell size of the Ikonos panchromatic band, each grid cell covered a tile of 1000x1000 pixels of the image. These image tiles were clipped with the overlying grid cells; in some cases, four or nine neighboring cells were merged to clip the image into bigger parts. We used Erdas Imagine, IDRISI, eCognition and ArcGIS in this project.
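The gridding and clipping step can be sketched as below. The `clip_to_tiles` helper is hypothetical (the actual clipping was done in the GIS tools named above), and the 3000x2000 band is a stand-in:

```python
import numpy as np

def clip_to_tiles(image, tile=1000, block=1):
    """Split a raster into tile x tile pixel pieces; `block` merges
    block x block neighbouring grid cells into one larger piece
    (block=2 merges four cells, block=3 merges nine)."""
    step = tile * block
    h, w = image.shape[:2]
    tiles = {}
    for r0 in range(0, h, step):
        for c0 in range(0, w, step):
            # Key the piece by its top-left grid-cell index.
            tiles[(r0 // tile, c0 // tile)] = image[r0:r0 + step, c0:c0 + step]
    return tiles

# Example: a 3000 x 2000 pixel band yields six 1000 x 1000 tiles,
# or two larger pieces when four neighbouring cells are merged.
band = np.zeros((3000, 2000), dtype=np.uint16)
single = clip_to_tiles(band)            # six tiles
merged = clip_to_tiles(band, block=2)   # two pieces (edge pieces are smaller)
```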
5.2 Visual Feature Extraction 
Different image analysis methods, such as Principal Components Analysis (PCA), image ratios and different band combinations, were employed in the visual interpretation. Color composites of principal component images were of great help in the extraction of some object classes, such as buildings: these composites showed better contrast in areas where differences were difficult to detect and distinguish in the original image. Ratio images produced by the NDVI and WI indices were useful for extracting groups of classes such as vegetation and wet (and shadow) areas; each of these groups was extracted separately. Color combinations of the Ikonos image were also a great help in the visual interpretation process, and the QuickBird color composite with 2.5x2.5 meter pixels was a supplementary aid in judging some indistinct features.
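The NDVI ratio image mentioned above is a simple per-pixel band computation, (NIR - Red) / (NIR + Red). The reflectance values and the 0.4 vegetation threshold below are illustrative only:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    `eps` guards against division by zero over dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Stand-in reflectance values: vegetation reflects strongly in NIR,
# so vegetated pixels push NDVI toward +1.
nir = np.array([[0.50, 0.08], [0.45, 0.30]])
red = np.array([[0.08, 0.06], [0.10, 0.28]])
v = ndvi(nir, red)
veg_mask = v > 0.4   # threshold chosen for illustration only
```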
5.3 Automatic Classification 
Supervised and unsupervised image classification methods were applied to the MS and pan-sharpened images. For the supervised method, training samples were selected through interactive on-screen visual inspection, using the Erdas Imagine Classifier and Viewer solutions. Training sites had to be selected for each tile of the image and for each of the MS and pan-sharpened images. Digital maps and aerial photographs were also used for better recognition of the training sites. Afterward, we produced the final classified image using the training sites (Fig. 1). After running the Imagine Classifier module with 10 output classes, the signature files were edited using the Signature Editor (Leica Geosystems, 2003). A distinct color was assigned to the signature of each output class for better contrast, and their plots were compared (Fig. 2). Signatures with high correlation were merged, treating them as different color hues of the same major class (such as vegetation).
final signatures were decided and a unique color specified for 
each signature, setting were saved in the signature files. The 
edited signature files then used in the second phase of 
unsupervised classification. The resulting classified images then 
overlaid on the original MS and pan-sharpened images in