The Douglas and Peucker (1973) algorithm draws on Attneave's (1954; cited by Visvalingham, 1999) theory that
curvature conveys informative points on lines. Many other pieces of
research have subsequently enhanced Douglas and Peucker's
algorithm (e.g. Wang and Muller, 1993 and 1998; Visvalingham and
Whyatt, 1993; Ruas and Plazanet, 1996) in the area of curvature
approximation applying various thresholds. Oosterom (1995)
criticized these types of algorithms as time-consuming, so he
introduced the reactive-tree data structure for line simplification that
is applicable to seamless and scale-less geographic databases. There
is still, however, a need for the cartographer's interaction in
generalizing lines/curves to make them “fit-for-use”.
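Because the Douglas and Peucker algorithm underpins so much of the work reviewed here, a minimal sketch is given below. It assumes plain 2-D vertex lists and a perpendicular-distance tolerance; production implementations add topology safeguards that this illustration omits.

    import math

    def _offset(pt, start, end):
        # Perpendicular distance from pt to the line through start and end.
        (x, y), (x1, y1), (x2, y2) = pt, start, end
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            return math.hypot(x - x1, y - y1)
        return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

    def douglas_peucker(points, tolerance):
        # Keep the vertex farthest from the anchor-floater baseline and recurse.
        if len(points) < 3:
            return list(points)
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = _offset(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= tolerance:
            return [points[0], points[-1]]
        left = douglas_peucker(points[:index + 1], tolerance)
        return left[:-1] + douglas_peucker(points[index:], tolerance)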
A majority of map features are represented as lines or polygons that
are bounded by lines. Skopeliti and Tsoulos (2001) developed a
methodology for the parametric description of line shapes and the
segmentation of lines into homogeneous parts, along with measures
for the quantification of shape change due to generalization. They
stated that measures describing positional accuracy are
computed for manually generalized data or cartographically
acceptable generalization results. Muller et al. (1995) imply that
ongoing research into line generalization is not being managed
properly. Most of the research in generalization has focused on single
cartographic line generalization instead of working on data modelling
in an object-oriented environment to satisfy database generalization
requirements. In contrast, other researchers (e.g. Visvalingham and
Whyatt, 1993) have highlighted a need to evaluate and validate
existing generalization tools rather than developing new
generalization algorithms and systems. So far, standard GIS software
applications do not fully support automatic generalization of line
features. This research focuses on integration and utilization of
generalization operators using the ArcGIS 8.2 Generalize tool in
order to generalize a road network database from GEODATA TOPO-
250K Series 2 to produce smaller scale maps at 1:500,000 and
1:1,000,000.
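The experiments reported here were run interactively in ArcGIS 8.2. For orientation only, a scripted sketch of the same idea in the present-day arcpy API is shown below; the geodatabase path, layer names and tolerances are hypothetical, and the modern Generalize (Edit) tool is assumed as the closest equivalent of the 8.2 tool.

    import arcpy

    arcpy.env.workspace = r"C:\data\geodata_topo250k.gdb"  # hypothetical path
    # Generalize edits geometry in place, so derive a copy per target scale.
    for out_name, tolerance in [("roads_500k", "50 Meters"),
                                ("roads_1000k", "100 Meters")]:
        arcpy.management.CopyFeatures("roads_250k", out_name)
        arcpy.edit.Generalize(out_name, tolerance)  # Douglas-Peucker-based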
GENERALIZATION FRAMEWORKS
An excellent classification of generalization assessment tools based
on measures, conditions and the interpretation of generalization results
is provided by Skopeliti and Tsoulos (2001). Peter and Weibel (1999)
presented a general framework for generalization of vector and raster
data to achieve a more effective translation of generalization constraints
into assessment tools to carry out the necessary generalization
transformation. Peter (2001) developed a comprehensive set of
measures that describe geometric and semantic properties of map
objects. These are the core parts of a generalization workflow from
initial assessment of the data and basic structural analysis, to
identification of conflicts and guiding the transformation process via
the generalization operators, and then qualitative and quantitative
evaluation of the results. The following discussion provides a critical
review of the relevant generalization research based on measures,
constraints or limitations, and integration of measures into the
generalization process.
In connection with generalization constraints, Peter (2001)
categorized constraints based on their function (graphical,
topological, structural and Gestalt) and spatial application scope
(micro at the object level, macro at the class level, and meso at the
level of object groups/regions/partitions of the database). The constraints
relevant to the micro level (object) include minimum distance and
size (graphical), self-coalescence (graphical), separability
(graphical), separation (topological), islands (topological), self-
intersection (topological), amalgamation (structural), collapsibility
(structural), and shape (structural). To assess generalization quality
for linear features, constraints have been employed (Peter and Weibel,
1999; Yaolin et al., 2001). Constraints for the macro level (object
classes) include size ratio (structural), shape (structural), size
distribution (structural) and alignment/pattern (Gestalt). Finally, Peter
(2001) divided meso level (object groups) constraints into
neighbourhood relationships (topological), spatial context (structural),
aggregability (structural), auxiliary data (structural),
alignment/pattern (Gestalt), and equal treatment (Gestalt). For a
detailed description of the above constraints readers are referred to
Peter and Weibel (1999); Skopeliti and Tsoulos (2001); Peter (2001);
and Jiang and Claramunt (2002).
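To make the taxonomy above easier to reuse, it can be written down as a simple lookup table; the sketch below merely restates Peter's (2001) classification in Python, with the level labels taken from the text.

    # Peter's (2001) constraints, keyed by spatial level:
    # constraint name -> function type (graphical/topological/structural/Gestalt).
    CONSTRAINTS = {
        "micro (object)": {
            "minimum distance and size": "graphical",
            "self-coalescence": "graphical",
            "separability": "graphical",
            "separation": "topological",
            "islands": "topological",
            "self-intersection": "topological",
            "amalgamation": "structural",
            "collapsibility": "structural",
            "shape": "structural",
        },
        "macro (object classes)": {
            "size ratio": "structural",
            "shape": "structural",
            "size distribution": "structural",
            "alignment/pattern": "Gestalt",
        },
        "meso (object groups)": {
            "neighbourhood relationships": "topological",
            "spatial context": "structural",
            "aggregability": "structural",
            "auxiliary data": "structural",
            "alignment/pattern": "Gestalt",
            "equal treatment": "Gestalt",
        },
    }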
In relation to the application of measures for evaluating
generalization results, there are several measures to assess
performance. These can be classified as either qualitative or
quantitative methods. To date, most of the generalization
transformation results have been evaluated qualitatively based on
aesthetic measures. Recently Skopeliti and Tsoulos (2001) developed
a methodology to assess linear feature integrity by employing
quantitative measures that determine if specific constraints are
satisfied. Researchers began to develop formal approaches that
integrated generalization constraints and measures for development
of coherent frameworks and workflows (e.g. Peter and Weibel, 1999;
Yaolin et al. 2001). In this regard, Skopeliti and Tsoulos (2001)
incorporated positional accuracy measures to quantitatively describe
horizontal position and shape, then to assess the positional deviation
between the original and the generalized line, and to relate this to line
length before and after generalization. Cluster analysis was used to
assess line shape change qualitatively, and the averaged Euclidean
distance to assess it quantitatively.
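As an illustration of how such quantitative measures can be computed, the sketch below implements one plausible reading of them: the mean Euclidean displacement of generalized vertices from the original line, and the ratio of line lengths before and after generalization. The exact formulations used by Skopeliti and Tsoulos (2001) may differ.

    import math

    def line_length(pts):
        return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

    def _seg_dist(p, a, b):
        # Euclidean distance from point p to segment ab.
        dx, dy = b[0] - a[0], b[1] - a[1]
        seg2 = dx * dx + dy * dy
        t = 0.0 if seg2 == 0 else max(0.0, min(
            1.0, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / seg2))
        return math.dist(p, (a[0] + t * dx, a[1] + t * dy))

    def mean_displacement(original, generalized):
        # Average offset of each generalized vertex from the original line.
        return sum(min(_seg_dist(p, a, b)
                       for a, b in zip(original, original[1:]))
                   for p in generalized) / len(generalized)

    def length_ratio(original, generalized):
        # Relates shape change to line length before and after generalization.
        return line_length(generalized) / line_length(original)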
Also, McMaster (2001) discussed two basic measures for
generalization that include procedural measures and quality
assessment measures. These measures involve the selection of a
simplification algorithm, selection of an optimal tolerance value for a
feature as complexity changes, density of features when performing
aggregation and typification operations, determining transformation
of a feature from one scale to another such as polygon to line, and
computation of the curvature of a line segment to invoke a smoothing
operation.
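For instance, the curvature computation used to trigger smoothing can be approximated by the turning angle at each vertex, as in the sketch below; the 45-degree threshold is an arbitrary illustration, not a value from McMaster (2001).

    import math

    def turning_angles(pts):
        # Absolute turning angle at each interior vertex (a curvature proxy).
        angles = []
        for (x0, y0), (x1, y1), (x2, y2) in zip(pts, pts[1:], pts[2:]):
            a1 = math.atan2(y1 - y0, x1 - x0)
            a2 = math.atan2(y2 - y1, x2 - x1)
            angles.append(abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi))
        return angles

    def needs_smoothing(pts, threshold=math.radians(45)):
        # Invoke a smoothing operator only where the line turns sharply.
        return any(a > threshold for a in turning_angles(pts))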
It should be noted that quality assessment measures evaluate both
individual operations, e.g. the impact of simplification, and the
overall quality of generalization (i.e. poor, average, excellent).
Despite all these efforts, there is no comprehensive, universal and
concrete process for generalization measurement techniques.
However, Ibid (2003) provided a review of existing measurement
methods for automatic generalization in order to design a new
conceptual framework that manages the measures of intrinsic
capability, in order to design and implement a generalization
measurement library. To apply quantitative measures, Kazemi (2003)
used two methods, the Radical Law (Töpfer and Pillewizer, 1966;
Muller, 1995) and an interactive accuracy evaluation method, to
assess map derivation. The Radical Law determines the number of
objects retained after a given scale change from the number of objects
on the source map (Nakos, 1999).
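In its basic form (ignoring the optional symbol-size constants) the Radical Law is n_f = n_a * sqrt(M_a / M_f), where n_a is the number of objects on the source map, M_a and M_f are the source and target scale denominators, and n_f is the number retained. A direct translation follows, with a hypothetical object count.

    import math

    def radical_law(n_source, source_denominator, target_denominator):
        # Topfer and Pillewizer (1966): n_f = n_a * sqrt(M_a / M_f).
        return n_source * math.sqrt(source_denominator / target_denominator)

    # Hypothetical example: 1,200 road objects at 1:250,000 generalized
    # to 1:500,000 -> about 849 objects should be retained.
    retained = radical_law(1200, 250_000, 500_000)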
While the majority of developed frameworks for the generalization of
cartographic data, such as those by Lee (1993), Brassel and Weibel
(1988) and Ruas and Plazanet (1996), deliver generic procedural
information (Peter and Weibel, 1999), the one briefly discussed in
this paper is designed more specifically for the derivation of multiple
scale maps from a master road network database (see Kazemi, 2003).
Large portions of Kazemi's proposed framework may be considered
generic (e.g. conditions/parameters/constraints definition). However,
most parts deal specifically with road generalization. Generalization
operators in the ArcGIS software are tested to generalize roads within
the conceptual generalization framework for derivative mapping. The
method is empirically tested with a reference dataset consisting of
several roads, which were generalized to produce outputs at
1:500,000 and 1:1,000,000 scales (Ibid. 2003). According to visual
interpretation, the results show that the derived maps have high
correlations with the existing small-scale road maps such as the
Global Map at 1:1,000,000 scale. As the methodology is only tested
on roads, it is worthwhile to extend it to various other complex
cartographic datasets such as drainage networks, power lines, and
sewerage networks, in order to determine the suitability of the
methodology proposed here. Additionally, various kinds of linear,
areal and point cartographic entities (e.g. coastlines, rivers,
vegetation boundaries, administration boundaries, land cover,
localities, towers, and so on) should also be studied.
There is no universal semi-automatic cartographic generalization
process (Costello et al., 2001; Lee, 2002), because off-the-shelf tools
do not provide an aesthetically robust and pleasing cartographic
solution. The current ArcGIS map production tools are significantly