DEM, but for an irregular geometric body such as a building and its corresponding appearance attributes, for instance the texture image, this work is still not satisfactory and usually requires extensive human-computer interaction to obtain an acceptable result.
1.2.1 Simplification Considering Semantics
In a Web-based 3D GIS environment, this method embeds vd as a characteristic dominance value in the vertex attributes of the Quadric Error Metric (QEM), and translates semantic constraints into the simplification of 3D objects by means of feature extraction. Combined with the model's semantic characteristics, it preserves the model's spatial characteristics well during the process (Coors, 2001). The method introduces the concepts of focus structure and graphical abstraction to describe, respectively, a form of presentation and the semantic simplification: the former captures the parts of an object with the greatest visual importance, while the latter converts semantic constraints into operations on the 3D objects with consideration of the background environment.
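For reference, the standard QEM vertex cost and one way a dominance or semantic weight could enter it are sketched below; the weight symbol w and its multiplicative placement are illustrative assumptions, not a formula taken from Coors (2001).

\[
\Delta(v) = v^{\top} Q\, v, \qquad
Q = w \sum_{p \in \mathrm{planes}(v)} p\, p^{\top}, \qquad
p = (a, b, c, d)^{\top},\ a^{2}+b^{2}+c^{2}=1,
\]

where $v=(x, y, z, 1)^{\top}$ is the vertex in homogeneous coordinates and a larger $w$ makes collapses that affect semantically important vertices more expensive.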
However, this algorithm cannot directly create a focus structure for each building model, because the focus structure may differ from object to object. Furthermore, the graphical abstraction may be difficult to match with the description of the focus structure, which means that the question of which details should be preserved remains unsolved. Another drawback is that the simplification method and process neglect the impact of a building's complex inner structure.
1.2.2 Simplification Considering Features
Martin Kada proposed a simplification algorithm for 3D city building models in which three kinds of relations among surfaces, namely coplanarity, parallelism and perpendicularity, are detected and exploited by edge collapse to simplify the model. This method preserves the shape and the related surface attributes well even after some of the building's features, such as texture, are removed (Kada, 2002, 2007).
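A minimal sketch, assuming unit face normals and plane offsets are already available, of how such surface relations might be detected with angular and distance tolerances (the thresholds and helper names are illustrative, not from Kada's implementation):

import numpy as np

ANGLE_TOL = np.cos(np.radians(5.0))   # cosine threshold for "same direction"
DIST_TOL = 0.05                       # max plane-offset difference (model units)

def are_parallel(n1, n2):
    """Unit normals n1, n2 are parallel if |n1 . n2| is close to 1."""
    return abs(np.dot(n1, n2)) >= ANGLE_TOL

def are_perpendicular(n1, n2):
    """Unit normals are perpendicular if n1 . n2 is close to 0."""
    return abs(np.dot(n1, n2)) <= np.sin(np.radians(5.0))

def are_coplanar(n1, d1, n2, d2):
    """Coplanar: parallel normals and (nearly) the same plane offset."""
    if not are_parallel(n1, n2):
        return False
    # Align the sign of the second plane before comparing offsets.
    if np.dot(n1, n2) < 0:
        n2, d2 = -n2, -d2
    return abs(d1 - d2) <= DIST_TOL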
Its drawbacks are obvious. Firstly, the experimental models are so simple that they cannot fully represent the complexity of building models. Secondly, the problem of coplanar contraction remains unsolved, so the method is not suitable for simplifying complex building models in VGEs. Thirdly, the process is strongly restricted by the minimum dimensions of the original building model's components, so its reduction capability is limited. Furthermore, its simplification ability is quite limited for arbitrary meshes, whether manifold or non-manifold.
In addition, Thiemann divided the model according to the detected features and expressed them in a CSG representation, and then performed the simplification through the CSG tree (Thiemann, 2002; Thiemann et al., 2004). Its strength is that the simplification operates on a continuous scale and also offers the possibility of semantic extension. However, considered as a generalization method, it cannot aggregate neighbouring buildings (Sester, 2007).
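A minimal sketch, under the assumption that the building has already been decomposed into a main body plus detail parts; the node structure and the volume-based pruning criterion are illustrative stand-ins, not Thiemann's actual operators:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CSGNode:
    """A node of a CSG tree: either a primitive or a boolean combination."""
    op: str = "primitive"              # "primitive", "union" or "difference"
    volume: float = 0.0                # approximate volume of this subtree
    children: List["CSGNode"] = field(default_factory=list)

def simplify(node: CSGNode, min_volume: float) -> Optional[CSGNode]:
    """Drop detail subtrees whose volume falls below min_volume.

    Raising min_volume yields coarser LODs, so the simplification is
    controlled by a single continuous scale parameter."""
    if node.op == "primitive":
        return node if node.volume >= min_volume else None
    kept = [c for c in (simplify(ch, min_volume) for ch in node.children) if c]
    if not kept:
        return None
    node.children = kept
    return node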
1.2.3 Simplification Based on Scale Space
For building model data, Andrea Forberg proposed a 3D model simplification method based on scale space (Forberg, 2004, 2007; Forberg et al., 2002). Its main theoretical foundation is the mature scale-space theory, including mathematical morphology and curvature space theory. It uses morphological operations such as erosion, dilation, opening and closing to control the merging and separation of different parts. The method was developed and implemented on the ACIS 3D Geometric Modeller and VRML.
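A minimal sketch, assuming the building has been voxelized into a binary occupancy grid; the voxelization itself and the structuring-element size are assumptions, and Forberg's method actually works on the boundary representation rather than on voxels:

import numpy as np
from scipy import ndimage

def scale_space_step(voxels: np.ndarray, size: int) -> np.ndarray:
    """One scale-space step on a binary voxel grid.

    Opening removes protrusions smaller than the structuring element,
    closing fills notches of the same scale; larger `size` values give
    coarser representations."""
    structure = np.ones((size, size, size), dtype=bool)
    opened = ndimage.binary_opening(voxels, structure=structure)
    return ndimage.binary_closing(opened, structure=structure)

# Usage: coarser levels of detail from increasing structuring-element sizes.
# lods = [scale_space_step(voxels, s) for s in (1, 3, 5, 7)]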
The simplification method based on scale-space theory has considerable limits. For instance, it sometimes needs additional means, such as rotation and rectification, to deal with non-orthogonal features like roofs (Sester, 2007). Besides, the algorithm cannot guarantee that the building's characteristics are maintained, and it neglects attribute content such as material and texture.
In summary, when facing such models the traditional simplification methods have three drawbacks. Firstly, the methods above cannot accurately locate the portions that need to be simplified. Secondly, most traditional simplification methods in computer graphics are restricted to simplifying a single 3D object with a continuous surface mesh rather than a set of meshes, whereas a complex building model is exactly a set of meshes because each component is a mesh. Thirdly, because these methods do not use human perception information to drive the simplification operations, the simplified results do not accord with the rules of human perception, and the LOD models derived from them can hardly guarantee continuity of visual effect.
2. PERCEPTION-DRIVEN SIMPLIFICATION FRAMEWORK
2.1 Perceptual Details of 3D Building Models
Two choices are available for exploring the perceptual details of 3D building models: one is to carry out detail analysis based on the geometric definition of an object, such as analysing the geometric characteristics represented by the building model's geometric primitives (vertex, edge, triangle, body, etc.); the other is to start from the object's 2D rendered image. Both methods have their strengths and shortcomings.
Geometrical Characteristics: These 3D models are expressed in the form of geometric primitives, from which it is possible to compute various geometric details of the 3D model, such as curvature and edge length.
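A minimal sketch of one such geometric detail measure, the discrete Gaussian curvature estimated from the angle deficit at each vertex; the mesh layout, with `vertices` as an N x 3 array and `faces` as an M x 3 index array, is an assumption:

import numpy as np

def angle_deficit_curvature(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """Discrete Gaussian curvature per vertex via the angle deficit 2*pi - sum(angles).

    Note: valid for interior vertices; boundary vertices would use pi instead of 2*pi."""
    angle_sum = np.zeros(len(vertices))
    for tri in faces:
        pts = vertices[tri]
        for i in range(3):
            a = pts[(i + 1) % 3] - pts[i]
            b = pts[(i + 2) % 3] - pts[i]
            cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angle_sum[tri[i]] += np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return 2.0 * np.pi - angle_sum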
However, geometric details alone cannot avoid one problem: a 3D model is not only expressed by geometric primitives but also by the textures mapped onto the material attributes of the model's surface. For textured geometric models, the textures contain a great deal of visual information that strongly influences perception, whereas a purely geometric calculation of details entirely neglects this visual information.
Rendered Image: A model's rendered image can be obtained by setting a camera and its parameters and choosing a rendering mode in advance. Because human beings acquire an all-round perception directly by looking at rendered images with the naked eye, it is possible to extract the 3D model's perceptual details by analysing, through the rendered images, what the human eye actually sees of the model. The rendered images reflect the information on the model's surface, such as its shape, material, texture and illumination, more accurately and completely, all of which are factors with a strong impact on human perception, and this completeness of information lays the foundation for perceptual analysis of the model.
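A minimal sketch of how a perceptual detail measure could be derived from a rendered view, using the gradient magnitude of the greyscale image as a simple stand-in for visual saliency; the rendering step itself and the choice of measure are assumptions:

import numpy as np
from scipy import ndimage

def image_detail_map(rendered_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel detail measure from a rendered H x W x 3 image in [0, 1].

    High gradient magnitude marks edges, texture and illumination changes,
    i.e. regions that contribute most to the perceived detail of the model."""
    grey = rendered_rgb @ np.array([0.299, 0.587, 0.114])
    gx = ndimage.sobel(grey, axis=1)
    gy = ndimage.sobel(grey, axis=0)
    return np.hypot(gx, gy)

# Regions whose summed detail falls below a threshold are candidates
# for stronger simplification in the corresponding parts of the model.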