If the CSF is defined under all conditions, such as static viewing and motion, then the details the viewer perceives can be predicted.
3.2 Clustering Perception Information
The rendered image $I_{render}$ contains all the details that can be displayed on the screen, while the vision-simulated image $I_{filtered}$ contains only the perceptible details. To detect the unperceived details, the difference image $I_{difference}$ is defined as:
$$I_{difference} = I_{render} - I_{filtered}$$
$I_{difference}$ exploits the dissimilar pixels in $I_{render}$ and $I_{filtered}$ to indicate the unperceived details in the image. Because the gradation difference of a pixel is related to changes of spatial frequency and contrast sensitivity, the degree of gradation difference also corresponds to a certain range of spatial frequency and contrast sensitivity. In other words, the gradation difference reflects the viewer's capacity to perceive certain details.
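As a minimal illustration of this step, the difference image can be computed per pixel; the sketch below assumes both images are available as single-channel numpy arrays of equal size (the function name and the signed-difference handling are our own assumptions, not from the paper):

```python
import numpy as np

def difference_image(i_render: np.ndarray, i_filtered: np.ndarray) -> np.ndarray:
    """Per-pixel gradation difference between the rendered image and
    the HVS-filtered (vision-simulated) image.

    Pixels that differ mark details present on the screen but removed
    by the CSF-based filtering, i.e. details the viewer cannot perceive.
    """
    # Work in a signed type so the subtraction cannot wrap around,
    # then take the magnitude of the gradation change.
    diff = i_render.astype(np.int16) - i_filtered.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```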
The MLC (maximum likelihood classification) method is used to classify the pixels in $I_{difference}$ into four classes: no change, slight change, moderate change and dramatic change. The relation between visual perception capacity and these classes is shown in Table 1.
I_difference    | HVS filtering      | CSF feature        | Vision
----------------|--------------------|--------------------|--------------------
no change       | No filtering       | Lowest CS and SF   | Highest sensitivity
slight change   | Slight filtering   | Low CS and SF      | High sensitivity
moderate change | Moderate filtering | High CS and SF     | Low sensitivity
dramatic change | Heavy filtering    | Highest CS and SF  | Lowest sensitivity

Table 1. The relationship between perceptual degree and the difference image
According to Table 1, a dramatic change of gradation corresponds to the lowest-sensitivity details in the image, while no change corresponds to the highest-sensitivity details. This result is therefore the foundation of the subsequent grading of triangle perceptual importance.
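A minimal sketch of the classification step, assuming each class is modelled by a one-dimensional Gaussian estimated from labelled training pixels (the training interface and class order are our assumptions; the paper does not specify them):

```python
import numpy as np

CLASSES = ["no change", "slight change", "moderate change", "dramatic change"]

def mlc_classify(diff: np.ndarray, samples: list[np.ndarray]) -> np.ndarray:
    """Maximum likelihood classification of the difference image.

    `samples[k]` holds training gradation values for class k. Each
    class is modelled as a 1-D Gaussian; every pixel is assigned to
    the class with the highest likelihood.
    """
    means = np.array([s.mean() for s in samples])
    stds = np.array([s.std() + 1e-6 for s in samples])   # avoid division by 0
    x = diff.astype(np.float64)[..., None]               # shape (H, W, 1)
    # Gaussian log-likelihood per class, evaluated at each pixel.
    log_lik = -0.5 * ((x - means) / stds) ** 2 - np.log(stds)
    return np.argmax(log_lik, axis=-1)                   # class index map
```

The resulting index map can then drive the grading of triangle perceptual importance: class 0 marks the most sensitive regions and class 3 the least sensitive.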
4. PERCEPTION-DRIVEN SIMPLIFICATION
ALGORITHM
4.1 Transmission of Perception Information
After the classification, the imperceptible details must be transferred to the model's geometry, which requires establishing the mapping between 2D image space and 3D model space.
In the real world, vision is formed when light reflected from objects reaches the retina. Likewise, 3D models are displayed on the screen through the display hardware pipeline, which transforms the vertices through a series of coordinate systems; finally, the rendered images are generated and displayed.
Figure 3. The principle of ray casting
According to the principle of ray projection and the rendering pipeline, given the camera position and the projection plane, a ray through each pixel can be linked to a primitive of the geometry. This method is termed ray casting (Figure 3).

Through ray casting, the projection relationship between image space and model space is established. The perceptual information of each pixel is passed to the primitive it projects onto, so that the primitive carries the perceptual information of its corresponding image region.
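To make the mapping concrete, the sketch below casts a ray through a pixel and tests it against one triangle using the standard Möller–Trumbore intersection test (the camera model is reduced to a given ray origin and direction; all names are illustrative):

```python
import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore ray/triangle intersection.

    Returns the ray parameter t of the hit point, or None if the ray
    through the pixel misses this triangle. Casting one such ray per
    pixel links each pixel (and its perceptual class) to a primitive.
    """
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                    # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:                # outside barycentric range
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None
```

The nearest hit over all triangles (the smallest positive t) identifies the primitive that receives the pixel's perceptual class.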
4.2 Aggregation Operation
Based on extensive experiments, Kobbelt's group discovered that the quality of the simplification has little to do with how the new vertex is chosen, but rather with the order of the fold or contraction operations; this order is defined by the rule of error measurement (Kobbelt et al., 1998). Therefore, they suggest paying more attention to finding a proper measurement.
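This ordering is naturally realized with a priority queue keyed on the chosen error measurement; a minimal sketch follows, with the cost function left abstract to match Kobbelt's observation that the measurement, not the vertex placement, dominates quality (all names are illustrative):

```python
import heapq
from itertools import count

def simplify(edges, cost, collapse, target):
    """Greedy edge-collapse loop: always fold the cheapest edge first.

    `cost(edge)` is the error measurement and fully determines the
    collapse order; `collapse(edge)` performs the fold and returns the
    neighbouring edges whose cached cost it invalidated.
    """
    tie = count()                       # tiebreaker so edges never compare
    heap = [(cost(e), next(tie), e) for e in edges]
    heapq.heapify(heap)
    folds = 0
    while heap and folds < target:
        c, _, e = heapq.heappop(heap)
        if c != cost(e):                # stale entry: re-queue at fresh cost
            heapq.heappush(heap, (cost(e), next(tie), e))
            continue
        for n in collapse(e):           # re-queue edges the fold affected
            heapq.heappush(heap, (cost(n), next(tie), n))
        folds += 1
```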
Although the minimized geometric error guarantees the fidelity of the simplified model and makes the results predictable, the perceptual differences caused by the simplification operations are ignored, which means they cannot be predicted during simplification. Similar to the assumption behind geometric error measurement, we assume that if the perceptual error is minimized at every simplification step, then the total perceptual error is also minimized. As a result, we propose that the simplification operations should fully consider the perceptual change of the 3D model, integrating the geometric error with the perceptual error to measure the cost of the simplification.
In the QEM proposed by Garland, the geometric error is calculated using a Q-matrix. This paper adopts, as the geometric cost of the merging operation $(v_1, v_2) \rightarrow \bar{v}$, the ratio of the new vertex's geometric error to the diagonal length of the minimum bounding box (MBB) of the triangles:
$$Cost_G = \frac{\bar{v}^T Q \bar{v}}{L} = \frac{\bar{v}^T (Q_1 + Q_2) \bar{v}}{L} \qquad (3)$$
where $Cost_G$ = the simplification cost of the folding operation $(v_1, v_2) \rightarrow \bar{v}$;
$L$ = the length of the diagonal of the MBB.
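A minimal numeric sketch of equation (3) under Garland's QEM: each face contributes a plane quadric, a vertex accumulates the quadrics of its incident faces, and the cost of folding $(v_1, v_2) \rightarrow \bar{v}$ is the quadric error at $\bar{v}$ normalized by the MBB diagonal (the helper names are ours):

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental error quadric K = pp^T of a face's plane ax+by+cz+d=0."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    p = np.append(n, d)                  # plane coefficients (a, b, c, d)
    return np.outer(p, p)                # 4x4 quadric matrix

def geometric_cost(q1, q2, v_bar, mbb_diagonal):
    """Cost_G of the fold (v1, v2) -> v_bar, as in equation (3):
    the QEM error of v_bar under Q1 + Q2, divided by the MBB diagonal L."""
    q = q1 + q2                          # combined quadric of the merged vertex
    v = np.append(v_bar, 1.0)            # homogeneous coordinates
    return float(v @ q @ v) / mbb_diagonal
```

Normalizing by $L$ makes the geometric term dimensionless, so it can later be combined with the perceptual error on a comparable scale.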