4.1 The empty niche of the Great Buddha
Figure 13: The recovered positions of the cameras and the
measured points for the modeling of the empty niche of the
Great Buddha.
The surface generation was then performed with a 2.5D Delaunay method, dividing the measured point cloud into separate parts. A mesh was created for each part and then all the surfaces were merged together with Geomagic Studio
[Raindrop]. The final model of the empty niche is displayed in
Figure 14.
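The merging itself was done in Geomagic Studio; the following is only a minimal sketch of the 2.5D Delaunay idea, assuming each point-cloud part is roughly single-valued when projected onto the x-y plane. The helper names mesh_part and merge_meshes are hypothetical and not part of the described workflow.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_part(points_xyz):
    """2.5D Delaunay meshing of one point-cloud part: triangulate the
    planimetric (x, y) coordinates and keep the original 3D vertices."""
    faces = Delaunay(points_xyz[:, :2]).simplices
    return points_xyz, faces

def merge_meshes(parts):
    """Merge (vertices, faces) pairs of the separately meshed parts,
    offsetting the face indices of each successive part."""
    all_v, all_f, offset = [], [], 0
    for verts, faces in parts:
        all_v.append(verts)
        all_f.append(faces + offset)
        offset += len(verts)
    return np.vstack(all_v), np.vstack(all_f)

# e.g. vertices, faces = merge_meshes([mesh_part(p) for p in cloud_parts])
```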
Figure 14: 3D model of the empty niche of the Great Buddha,
visualized in shaded (left) and textured mode (right).
The recovered 3D model of the empty niche allows a comparison between the current situation and the one before the destruction (Figures 15 and 16).
Figure 15: The niche of the Great Buddha: a comparison
between an image from the 1970s and the current situation.
Figure 16: The reconstructed 3D models of the Great Buddha
and the actual empty niche. An image of the full 3D model of
the Great Buddha is shown in Figure 2.
4.2 The empty niche of the Small Buddha
The modeling of the empty niche of the Small Buddha was
performed using 9 images (Figure 17).
Figure 17: The empty niche of the Small Buddha.
The necessary tie points were measured semi-automatically with least squares matching (LSM) and then imported into a self-calibrating bundle adjustment. The final average standard deviations of the object coordinates are σ_X = 0.015 m, σ_Y = 0.019 m, σ_Z = 0.022 m.
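Least squares matching refines an approximate tie-point position by minimizing the grey-value differences between a fixed template patch and an iteratively resampled search patch. The sketch below is limited to a shift-only model (the full method usually also estimates affine shape and radiometric parameters); the lsm_shift helper is a hypothetical illustration, not code used in the project.

```python
import numpy as np
from scipy import ndimage

def lsm_shift(template, search, x0, y0, max_iter=20, tol=1e-3):
    """Refine an approximate tie-point position by least squares matching.

    Shift-only model: the search patch is iteratively resampled and the
    two shift corrections are estimated from the grey-value differences
    (Gauss-Newton on the linearized observation equations)."""
    h, w = template.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    px, py = float(x0), float(y0)            # current patch position in the search image
    for _ in range(max_iter):
        # bilinear resampling of the search window at the current position
        coords = np.vstack([(rows + py).ravel(), (cols + px).ravel()])
        patch = ndimage.map_coordinates(search, coords, order=1).reshape(h, w)
        # image gradients form the design matrix of the shift parameters
        gy, gx = np.gradient(patch)
        A = np.column_stack([gx.ravel(), gy.ravel()])
        l = (template - patch).ravel()       # grey-value observations
        dx, dy = np.linalg.lstsq(A, l, rcond=None)[0]
        px, py = px + dx, py + dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return px, py                            # sub-pixel position of the match
```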
Figure 18: The camera poses and the manually measured points
for the modeling of the empty niche of the Small Buddha.
Afterwards, manual measurements were performed in VirtuoZo on the distortion-free images, and a cloud of approximately 17,000 points was recovered (Figure 18).
For the mesh generation, we had to split the point cloud into separate parts in order to perform the 2.5D Delaunay triangulation.
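How the cloud was partitioned is not specified; as an illustration only, the sketch below cuts it into regular planimetric blocks, each of which can then be meshed as in the 2.5D Delaunay example above. The split_cloud helper and the 5 m block size are assumptions.

```python
import numpy as np

def split_cloud(points_xyz, block_size=5.0):
    """Cut a point cloud into regular planimetric blocks so that each
    block can be triangulated separately with a 2.5D Delaunay method."""
    keys = np.floor(points_xyz[:, :2] / block_size).astype(int)
    blocks = {}
    for key, point in zip(map(tuple, keys), points_xyz):
        blocks.setdefault(key, []).append(point)
    return [np.asarray(b) for b in blocks.values()]
```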