Geometric Correction Technique                        RMS at Ground Control (pixels)
for Landsat Images                                    Control Points    Check Points
-------------------------------------------------------------------------------------
Quarter Scene (South-East)
    1st Order Polynomial                              0.4704            0.3466
    2nd Order Polynomial                              0.3265            0.3410
    neglecting DEM                                    0.4016            0.3952
    neglecting DEM and earth curvature                0.4016            0.3952
    with DEM and earth curvature                      0.4016            0.3952
    constant height: 40 m                             0.4016            0.3952
Full Scene
    with DEM                                          0.5820            0.4889
    without DEM                                       0.5723            0.4837

Table 2. Geocoding RMS errors for one Landsat image.
4. GEOMETRIC PREPROCESSING
To use remotely sensed images and their classification results in a GIS, the images have to be geometrically transformed to a reference coordinate system. Using polynomial correction techniques, an image can be registered to a map coordinate system, allowing its pixels to be addressed in terms of map coordinates rather than pixel and line numbers (Richards, 1994). Many applications of remote sensing image data require more than one scene of the same geographic area, acquired at different dates, to be processed together. Such a situation arises when, as in most monitoring projects, changes are of interest, in which case registered images allow a pixel-by-pixel comparison to be made. There are two ways to register two images to each other: both images can be registered separately to a common map coordinate system, or one image can be chosen as a master image to which the other, known as the slave, is registered. We chose the first method for our study. In our case, the reference coordinate system was that of ATKIS (the Gauss-Krüger coordinate system), covering the complete area of interest. The images were geocoded using a 2nd-order polynomial procedure with nearest-neighbour resampling.
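To sketch how such a geocoding step can be implemented, the following Python fragment fits a 2nd-order polynomial transformation to ground control point pairs and resamples the image by nearest neighbour. It is a minimal illustration under our own assumptions (NumPy least squares, a regular output grid, hypothetical function names), not the software actually used in the study:

    import numpy as np

    def fit_poly2(gcp_map, gcp_img):
        """Least-squares fit of a 2nd-order polynomial that maps map
        coordinates (X, Y) to image coordinates (pixel, line)."""
        X, Y = gcp_map[:, 0], gcp_map[:, 1]
        A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
        coeffs, _, _, _ = np.linalg.lstsq(A, gcp_img, rcond=None)
        return coeffs  # shape (6, 2): one coefficient set per image axis

    def geocode_nearest(image, coeffs, out_shape, origin, pixel_size):
        """Resample 'image' onto a regular map grid with nearest-neighbour
        resampling, which keeps the original digital numbers intact."""
        rows, cols = np.indices(out_shape)
        X = (origin[0] + cols * pixel_size).ravel()  # easting of each cell
        Y = (origin[1] - rows * pixel_size).ravel()  # northing of each cell
        A = np.column_stack([np.ones_like(X), X, Y, X * Y, X**2, Y**2])
        src = A @ coeffs                             # predicted pixel/line
        px = np.rint(src[:, 0]).astype(int)
        ln = np.rint(src[:, 1]).astype(int)
        ok = (px >= 0) & (px < image.shape[1]) & (ln >= 0) & (ln < image.shape[0])
        out = np.zeros(out_shape, dtype=image.dtype)
        out.ravel()[ok] = image[ln[ok], px[ok]]
        return out

Nearest-neighbour resampling is the usual choice before classification because it does not interpolate new grey values.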
The maximum residual error was about 0.5 pixel (15 m) for the Landsat images. Table 2 compares several geocoding approaches. It shows that in our test area, where the height differences are very small (150 m at most; 5-40 m on average), a 2nd-order polynomial approach provides the best results for geocoding a Landsat TM scene. In the best case, we used 30 to 40 control points for the computation of the transformation matrix. It was found that a digital elevation model did not lead to better results. This coincides with the findings of Bähr and Vögtle (1991), who pointed out that the theoretical height error h is given by the following equation:

h = (hg * P) / (s/2)

where:
hg  = flying altitude
P   = pixel size
s/2 = half the swath width

In our case, for Landsat TM: h = 705 km * 30 m / 90.25 km = 234 m, i.e. if the terrain height differences exceed 234 m, we introduce an error of approximately one pixel if we do not use an elevation model.
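The worked example can be restated in a few lines of Python; the function name is ours and the numbers are those quoted above:

    def theoretical_height_error(flying_altitude_m, pixel_size_m, half_swath_m):
        """Terrain height difference h that causes roughly one pixel of
        relief displacement without an elevation model: h = hg * P / (s/2)."""
        return flying_altitude_m * pixel_size_m / half_swath_m

    # Landsat TM: 705 km altitude, 30 m pixels, 90.25 km half swath width
    print(theoretical_height_error(705000.0, 30.0, 90250.0))  # ~234 m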
Residual errors were assessed by overlaying the geometrically correct ATKIS datasets and by measuring check points in both data layers. If the registration error is greater than 1 pixel, spurious areas of change may be identified between the multitemporal datasets (Jensen, 1986).
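The check-point assessment can be scripted in the same spirit. The sketch below, with hypothetical names and an assumed 30 m pixel size, computes the planimetric RMS between points measured in the geocoded image and in the ATKIS layer and expresses it in pixels, as reported in Table 2:

    import numpy as np

    def rms_pixels(measured_xy, reference_xy, pixel_size_m=30.0):
        """Planimetric RMS of the residuals at check points, in pixels."""
        d = np.asarray(measured_xy, float) - np.asarray(reference_xy, float)
        rms_m = np.sqrt(np.mean(np.sum(d**2, axis=1)))
        return rms_m / pixel_size_m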
5. THE AUTOMATED APPROACH: DESIGNING A
LANDUSE CLASSIFICATION SYSTEM
5.1. Methodology
Anderson et al. (1973) stated that there is no ideal classification of landuse, and that it is unlikely that one will ever be developed. Each landuse classification is made to suit the needs of a specific user, and few users will be satisfied with an inventory that does not meet most of these needs. In attempting to develop an operational automated classification system for use with remote sensing techniques, certain guidelines or criteria must first be addressed. A monitoring approach involves many repeated tasks. Most of them are very time-consuming and therefore call for automated processing. Figure 2 gives an overview of a common classification scheme.
The steps of radiometric preprocessing, such as atmospheric correction, are beyond the scope of this paper; more information about image processing can be found in Richards (1994), Lillesand and Kiefer (1987) and Campbell (1996). The next task, the selection of training data for a classification, is the most time-consuming step and demands a lot of user expertise. For a single application, this workflow is a robust procedure. Disadvantages arise, however, if the user wants to perform change detection analysis over a time frame of interest: in this case, normally all processing steps have to be repeated. Figure 3 gives an overview of a possible modern workflow for training data collection.