Title: The 3rd ISPRS Workshop on Dynamic and Multi-Dimensional GIS & the 10th Annual Conference of CPGIS on Geoinformatics
Author: Chen, Jun

ISPRS, Vol. 34, Part 2W2, “Dynamic and Multi-Dimensional GIS”, Bangkok, May 23-25, 2001
2.1. Mechanism of Polygon Elision
Nearly every simplification technique in the literature uses
some variation or combination of four basic polygon elision
mechanisms: sampling, adaptive subdivision, decimation, and
vertex merging.
Sampling schemes begin by sampling the geometry of the
initial model. These samples can be points on the 2-D manifold
surfaces in the model or voxels in a 3-D grid superimposed
upon the model. The algorithm then tries to create a polygonal
simplification that closely matches the sampled data. Varying
the number of samples regulates the accuracy of the created
simplification.
Adaptive subdivision approaches create a very simple
polygonal approximation called the base model. The base
model consists of triangles or squares, shapes that lend
themselves to recursive subdivision. This process of
subdivision is applied until the resulting surface lies within
some user-specified threshold of the original surface.
Conceptually simple, adaptive subdivision methods suffer two
disadvantages. First, creating the base model involves the
very problem of polygonal simplification that the algorithm is
attempting to solve. For this reason adaptive subdivision
approaches have been more popular for the specialized case
of terrains, whose base model is typically just a rectangle.
Second, a recursive subdivision of the base model may not be
able to capture the exact geometry of the original model,
especially around sharp corners and creases in the mesh
[Hoppe 96].
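For the terrain case mentioned above, the recursive subdivision can be sketched concretely: split a square cell of the heightfield whenever a bilinear patch over its four corners misses the underlying samples by more than a user-specified threshold. The function names and the bilinear error measure below are illustrative assumptions, not taken from any particular published algorithm:

```python
def bilinear_error(h, x0, y0, s):
    """Max deviation of the sampled heights (h is a 2-D list, h[y][x])
    from the bilinear patch spanned by the cell's four corner samples."""
    worst = 0.0
    for i in range(s + 1):
        for j in range(s + 1):
            u, v = i / s, j / s
            patch = ((1 - u) * (1 - v) * h[y0][x0]
                     + (1 - u) * v * h[y0][x0 + s]
                     + u * (1 - v) * h[y0 + s][x0]
                     + u * v * h[y0 + s][x0 + s])
            worst = max(worst, abs(h[y0 + i][x0 + j] - patch))
    return worst

def subdivide(h, x0, y0, s, tol, cells):
    """Recursively split a square cell of the base model until the
    approximating surface lies within tol of the original samples."""
    if s == 1 or bilinear_error(h, x0, y0, s) <= tol:
        cells.append((x0, y0, s))   # leaf cell: two triangles in practice
        return
    half = s // 2
    for dy in (0, half):
        for dx in (0, half):
            subdivide(h, x0 + dx, y0 + dy, half, tol, cells)
```

A flat heightfield stays a single cell, while a bump forces recursion only where the bilinear fit fails, which is exactly the adaptive behavior described above.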
Decimation techniques iteratively remove vertices or faces
from the mesh, retriangulating the resulting hole after each
step. This process continues until it reaches a user-specified
degree of simplification. If decimation algorithms do not permit
a vertex or face removal that will change the local topology of
the mesh, the decimation process may be unable to effect high
degrees of simplification.
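The iterative structure of decimation can be sketched on a 2-D polyline, where "retriangulating the hole" reduces to joining the removed vertex's two neighbours. The area-based removal cost is an illustrative assumption; mesh decimation algorithms use a variety of error measures:

```python
def decimate(pts, target):
    """Iteratively remove the interior vertex whose deletion perturbs
    the curve least (smallest triangle formed with its two neighbours),
    halting at a user-specified vertex count."""
    def cost(a, b, c):   # doubled area of the triangle a removal flattens
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1]))
    pts = list(pts)
    while len(pts) > target:
        i = min(range(1, len(pts) - 1),
                key=lambda i: cost(pts[i - 1], pts[i], pts[i + 1]))
        del pts[i]   # in a mesh, the resulting hole would be retriangulated
    return pts
```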
Vertex merging schemes operate by merging two or more
vertices of a triangulated model together into a single vertex,
which can in turn be merged with other vertices. Merging two
corners of a triangle makes that triangle degenerate. Such
triangles can then be eliminated, decreasing the total polygon
count. Vertex merging approaches do not necessarily require
manifold topology, though some algorithms use a limited
vertex merge called an edge collapse, in which only the two
vertices sharing an edge are collapsed in each operation.
These algorithms generally assume manifold topology
implicitly.
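A minimal vertex-merge sketch (illustrative, not any particular published algorithm) is uniform-grid clustering: snap every vertex to the representative of its grid cell, then discard triangles that the merge made degenerate:

```python
def cluster_vertices(verts, tris, cell):
    """Uniform-grid vertex clustering: merge all vertices that fall in
    the same grid cell, then drop degenerate triangles. Works on an
    indexed triangle list; manifold topology is not required."""
    cells, rep = {}, []
    for x, y, z in verts:
        key = (round(x / cell), round(y / cell), round(z / cell))
        cells.setdefault(key, len(cells))
        rep.append(cells[key])          # representative index per vertex
    # one representative position per occupied cell (here: the cell centre)
    new_verts = [None] * len(cells)
    for key, i in cells.items():
        new_verts[i] = tuple(k * cell for k in key)
    # remap triangles; a triangle with two corners merged is degenerate
    new_tris = [(rep[a], rep[b], rep[c]) for a, b, c in tris
                if len({rep[a], rep[b], rep[c]}) == 3]
    return new_verts, new_tris
```

Coarsening the grid merges more vertices per cell and so removes more triangles, which is how such schemes trade fidelity for polygon count.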
2.2. Use of Error Metric
Simplification methods can be characterized by how they use
an error metric to regulate the quality of the simplification. A
surprising number of algorithms use no metric at all, but simply
require the user to run the algorithm with different settings and
explicitly select appropriate LOD switching distances. For large
databases, however, this degree of user intervention is simply
not practical. Those algorithms that utilize an error metric to
guide simplification fall into two categories:
Fidelity-based simplification techniques allow the user to
specify the desired fidelity of the simplification in some form,
then attempt to minimize the number of polygons, subject to
that fidelity constraint.
Polygon-budget simplification systems attempt to maximize the
fidelity of the simplified model without exceeding a specified
polygon budget. For example, adaptive subdivision algorithms
lend themselves nicely to fidelity-based simplification, simply
subdividing the base model until the fidelity requirement is met.
Polygon-budget simplification is a natural fit for decimation
techniques, which are designed to remove vertices or faces
one at a time and merely need to halt upon reaching the target
number of polygons. As mentioned above, however, topology
constraints often prevent decimation algorithms from reducing
the polygon count below a certain level. To be most useful, a
simplification algorithm should be capable of either
fidelity-based or polygon-budget operation. Fidelity-based
approaches are crucial for generating accurate images,
whereas polygon-budget approaches are important for
time-critical rendering. The user may well require both of these
possibilities in the same system.
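The two modes of operation differ only in the halting test, as the hypothetical driver below shows (again on a 2-D polyline with an illustrative area-based error): pass max_error for fidelity-based operation, or budget for polygon-budget operation:

```python
def simplify(pts, max_error=None, budget=None):
    """Greedy vertex removal supporting both modes: fidelity-based
    (stop before any removal whose error exceeds max_error) and
    polygon-budget (stop at a target vertex count)."""
    def err(a, b, c):   # area of the triangle a removal would flatten
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    pts = list(pts)
    while len(pts) > 3 and (budget is None or len(pts) > budget):
        i = min(range(1, len(pts) - 1),
                key=lambda i: err(pts[i - 1], pts[i], pts[i + 1]))
        if max_error is not None and err(pts[i - 1], pts[i], pts[i + 1]) > max_error:
            break       # the fidelity constraint would be violated
        del pts[i]
    return pts
```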
2.3. Preservation of Topology
In the context of polygonal simplification, topology refers to the
structure of the connected polygonal mesh. The local topology
of a face, edge, or vertex refers to the connectivity of that
feature’s immediate neighborhood. The mesh forms a 2-D
manifold if the local topology is everywhere homeomorphic to
a disc, that is, if the neighborhood of every feature consists of
a connected ring of triangles forming a single surface. Every
edge in a mesh displaying manifold topology is shared by
exactly two triangles, and every triangle has exactly three
neighboring triangles, all distinct (a 2-D manifold with boundary
allows the local neighborhoods to be homeomorphic to a
half-disc, which means some edges can belong to only one
triangle). A topology-preserving simplification algorithm
preserves manifold connectivity. Such algorithms do not close
holes in the mesh, and they therefore preserve the genus of
the simplified surface. Global topology refers to the
connectivity of the entire surface. A simplification algorithm
preserves global topology if it preserves local topology and
does not create self-intersections within the simplified object
[Erikson 96]. A self-intersection, as the name implies, occurs
when two non-adjacent faces intersect each other.
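The edge-sharing property above can be tested mechanically. The sketch below (a hypothetical helper assuming an indexed triangle list) counts how many triangles use each undirected edge; note it checks only the edge condition, while a full manifold test would also verify that each vertex's triangles form a single connected fan:

```python
from collections import Counter

def is_manifold(tris, allow_boundary=True):
    """Edge test for 2-D manifold meshes: every edge must be shared by
    exactly two triangles (or by one, on the boundary of a manifold
    with boundary)."""
    edges = Counter()
    for a, b, c in tris:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1   # undirected edge key
    ok = {1, 2} if allow_boundary else {2}
    return all(n in ok for n in edges.values())
```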
Topology-preserving algorithms preserve the genus of the
simplified object, so no holes will appear or disappear during
simplification. The opacity of the object seen from any distance
thus tends to remain roughly constant. This constraint limits
the simplification possible, however, since objects of high
genus cannot be simplified below a certain number of
polygons without closing holes in the model. Moreover, a
topology-preserving approach requires a mesh with valid
topology to begin with. Some algorithms, such as [Schroeder
92], are topology-tolerant: they ignore regions in the mesh with
invalid local topology, leaving those regions unsimplified. Other
algorithms faced with such regions may simply crash.
Topology-modifying algorithms do not necessarily preserve
local or global topology. The algorithms can therefore close up
holes in the model as simplification progresses, permitting
drastic simplification beyond the scope of topology-preserving
schemes. This drastic simplification often comes at the price of
poor visual fidelity, however, and distracting popping artifacts
as holes appear and disappear from one LOD to the next.
Some topology-modifying algorithms do not require valid
topology in the initial mesh, which greatly increases their utility
in real-world CAD applications. Some topology-modifying
algorithms attempt to regulate the change in topology, but
most are topology-insensitive, paying no heed to the initial
mesh connectivity at all.
2.4. Catalog of Important Papers
The intent of this section is not to provide an exhaustive list of
work in the field of polygonal simplification, nor to select the
“best” published papers, but rather to briefly describe a few
important algorithms that span the taxonomy presented above.
Most of the papers chosen represent influential advances in
the field; a few provide more careful treatment of existing
ideas.
Table 1 summarizes the catalog. Each algorithm is broken
down according to which mechanism or combination of
mechanisms it uses, whether it supports fidelity-based
simplification or polygon-budget simplification, and whether the
algorithm preserves or modifies topology. The final column