The landscape contains a wealth of topographic features that
reflect active faulting and can provide constraints on crustal
deformation on time scales longer than those of the
instrumental record or earthquake cycle. It can provide a
warning of earthquake activity due to undetected faults that
are buried or whose scarps are obscured by erosion. It is
possible to obtain more detail than can be seen in the 100 m
DEMs that are, for the most part, the best resolution currently
available in earthquake-prone areas outside the USA. For
instance, dry valleys (wind gaps) related to abandoned
drainage channels, and smaller-scale stream systems
sensitive to change in base level, provide much information
about how a fault evolves with time. High-resolution digital
topographic data with improved vertical accuracy will
enable quantitative analysis of drainage patterns and
geomorphic measures related to uplift and tilting, allowing
interpretations at present only possible through field
studies, relative rates of deformation due to faulting to be
determined, and landscape evolution models to be tested.
3.0 ACCURACY OF A DEM
3.1 Physical accuracy
The main factors that affect the accuracy of a DEM that has
been processed, edited and archived for use are:
• Accuracy of the source data and/or derived elevation;
• Terrain characteristics;
• Sampling method (grid [grid spacing], TIN);
• Interpolation method;
• Representation (raster, tessellation, contours ...).
The relationship between accuracy and spacing is highly
dependent on the nature of the terrain. The formula of
Ackermann (1980) has widespread use:
σ_z² = (α·d)² + σ₀²

where
  σ_z² = variance of interpolated arbitrary points in the DEM
  d    = mean (representative) point interval between measurements (grid spacing)
  α    = proportionality factor depending on the type of terrain
  σ₀   = measurement error
Ackermann (1980) shows that for aerial photography α
varies between 0.023 for 'difficult' terrain and 0.004 for
'simple' terrain. Other useful work has been done by
Torlegard et al. (1984), and more recently a thorough
investigation has been carried out by Li (1992, 1993a, b),
who summarises and extends previous work. From this
work it is concluded that:
• there is a strong dependency on terrain type;
• the vertical accuracy of a DEM for a given terrain type
seems, to a good approximation, to be linearly
dependent on grid spacing;
• higher accuracy will be obtained if breaklines are
included in the data set;
• accuracy results will vary between two data sets of the
same area.
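To make the formula concrete, here is a minimal numeric sketch in Python; the grid spacing and measurement error below are illustrative assumptions, not values from the studies cited:

    import math

    def ackermann_sigma_z(alpha, d, sigma_0):
        # Ackermann (1980): sigma_z^2 = (alpha * d)^2 + sigma_0^2
        return math.sqrt((alpha * d) ** 2 + sigma_0 ** 2)

    # Assumed values: 30 m grid spacing, 0.5 m measurement error.
    for terrain, alpha in [("difficult", 0.023), ("simple", 0.004)]:
        print(terrain, round(ackermann_sigma_z(alpha, 30.0, 0.5), 2), "m")
    # -> difficult 0.85 m, simple 0.51 m

The dominance of the terrain term in the 'difficult' case illustrates the strong terrain dependency noted above.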
We can also take from Ackermann's formula that the
accuracy is dependent on measurement accuracy, which,
for automatically generated DEMs, depends on the quality of
the stereomatching. It is well known that automatic
stereomatching will produce blunders for a number of reasons,
such as occlusions, data acquired at different times and poor
texture in the images. The removal of these depends on editing
techniques, which are generally manual, but may be aided by using
other data sets if available: this leads to the use of data fusion
techniques.
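As a hedged illustration of one simple automated editing aid (the window size and threshold are assumptions, and this is not a method prescribed by the works cited), isolated blunders can be flagged by comparison against a local median:

    import numpy as np
    from scipy.ndimage import median_filter

    def flag_blunders(dem, window=5, threshold=10.0):
        # Cells deviating from the local median by more than `threshold`
        # metres are treated as candidate blunders for (manual) editing.
        local_median = median_filter(dem, size=window)
        return np.abs(dem - local_median) > threshold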
Most DEMs will have been derived by resampling the original
measurements, and the reconstruction of the surface will depend on
interpolation. The method of interpolation is generally less
important than the quality of the original data; nevertheless, in
assessing the quality of a DEM, the processes that have been applied
will affect the accuracy, and it is important to know both the source
of the data and the method of interpolation.
3.2 Methods of interpolation
Interpolating the sample points in order to generate a gridded
surface is a non-trivial process that affects the accuracy of the
resultant DEM significantly. Since the input sample data and the
terrain being modelled may have different characteristics, no
single interpolation method is best suited to all situations.
There are two classes of interpolation: deterministic and
probabilistic. Deterministic methods are based directly on the
surrounding measured values and/or mathematical formulae
applied to those values. Probabilistic methods are based on
statistical properties and include autocorrelation, which is the
strength of similarity between measured samples as a function of
distance and direction. Probabilistic methods are also called
geostatistical interpolation methods; they aim to reduce
the error between predicted values and the statistical model of the
surface. The most commonly used interpolation methods are
inverse distance interpolation, nearest neighbour interpolation,
splines and Kriging, of which Kriging is a geostatistical method.
The inverse distance method generates surfaces with a dimpled
effect, with valleys between sample data point locations (Maune et
al., 2001). The accuracy of the results of the nearest neighbour
method depends on the success of finding the right neighbour. The
nearest neighbour method is good when the data points are
unequally sampled and/or unequally distributed.
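A minimal sketch of inverse distance weighting (the function name and the power parameter are illustrative assumptions):

    import numpy as np

    def idw(points, values, query, power=2.0):
        # Estimate the elevation at `query` from samples `points` (N x 2)
        # with elevations `values` (N,); nearer samples receive more weight.
        dist = np.linalg.norm(points - query, axis=1)
        if np.any(dist == 0.0):
            return values[np.argmin(dist)]   # query coincides with a sample
        weights = 1.0 / dist ** power
        return np.sum(weights * values) / np.sum(weights)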
Splines are a general class of interpolation techniques that use a
mathematical formula to create a surface that minimises overall
surface curvature, resulting in a smooth surface that passes
through the input points. Splines are very suitable for gently
varying terrain with smooth slope transitions, but not suitable for
sharp changes in slope such as cliffs. Splines are very helpful in
regenerating unsampled valleys and summits from the available data
points.
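As a sketch of spline interpolation, SciPy's thin-plate-spline radial basis functions can grid scattered samples; the library choice and the synthetic data are assumptions, not from the paper:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 1000.0, size=(50, 2))             # sample locations (m)
    z = 100.0 + 0.01 * xy[:, 0] + rng.normal(0.0, 0.2, 50)  # gently sloping terrain

    spline = RBFInterpolator(xy, z, kernel="thin_plate_spline")
    grid = np.mgrid[0:1000:50j, 0:1000:50j].reshape(2, -1).T
    dem = spline(grid).reshape(50, 50)                      # smooth gridded surface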
Kriging uses the autocorrelation of the sample points and their distance
to the prediction location to derive weights for
interpolation. In cases where the accuracy of the sample points is
not known, Kriging can use the local trend of the sampled
data to derive the weights for interpolation. Hence erroneous data
points can be ignored, and the error accumulation caused by
using them can be avoided. But, as the model is fitted
to the predicted surface by considering the overall trend of
the area for which the DEM is generated, Kriging works better if
the area is homogeneous.
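A sketch of ordinary Kriging using the third-party PyKrige package (a library chosen here for illustration; the variogram model and data are assumptions):

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1000, 40), rng.uniform(0, 1000, 40)
    z = 100.0 + 0.01 * x + rng.normal(0.0, 0.2, 40)      # synthetic elevations

    ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
    gridx = np.linspace(0.0, 1000.0, 50)
    gridy = np.linspace(0.0, 1000.0, 50)
    z_pred, sigma_sq = ok.execute("grid", gridx, gridy)  # estimate + Kriging variance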