tion, which could serve as a basis for the physical reconstruction. Using our data, a first model (1:200) has already been created, while a statue at 1/10 of the original size is to be built and displayed in the Afghanistan Museum in Switzerland. This model will then be used to study the materials and construction techniques to be applied for the final reconstruction at full size.

Originally our interest in the computer reconstruction of the Great Buddha was a purely scientific one. We planned to investigate whether such an object could be reconstructed fully automatically with photogrammetric methods, using just amateur images taken from the Internet [Gruen et al., 2002]. In this case the main scientific challenge lies in the fact that no typical photogrammetric information (such as interior and exterior orientation parameters) is available for these images and that existing automated image analysis techniques will most probably fail under the given circumstances. After learning about the efforts to actually rebuild the Great Buddha, we decided to get involved in the project beyond a purely scientific approach and to contribute as much as we could with our technology to the success of the work. We generated different versions of the Buddha, depending on which algorithms and images were used: Internet, tourist and metric images [Gruen et al., 2003]. The results extracted from the Internet and tourist images served only for scientific purposes; the physical reconstruction is to be based on the 3D computer model derived from the three metric images. These photographs were acquired in Bamiyan in 1970 by Prof. Kostka, Technical University of Graz [Kostka, 1974]. They form the basis for a very precise, reliable and detailed reconstruction, with an accuracy of 1-2 cm in relative position and an object resolution of about 5 cm. In order to achieve these values we had to apply manual image measurements, as the automatic procedures could not extract all the fine details. In this paper we present only the results of the computer reconstruction obtained with the three metric images. For a more detailed technical description of the digital photogrammetric procedures on all the data sets, we refer to [Gruen et al., 2002, 2003].
2. THE METRIC IMAGES
The metric images were acquired with a TAF camera [Finsterwalder et al., 1968], a photo-theodolite camera that exposes 13 x 18 cm glass plates. The original photographs were scanned by Vexcel Imaging Inc. with the ULTRA SCAN 5000 at a resolution of 10 microns, resulting in digitized images of 16930 x 12700 pixels each (Figure 3). The acquisition procedure (Figure 4, left) is known, as are the interior orientation parameters of the camera [Kostka, 1974].

Figure 3: The three metric images acquired by Kostka in 1970.
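As a quick plausibility check (not part of the original processing), the reported pixel counts follow directly from the scan resolution: at 10 microns per pixel, 16930 x 12700 pixels correspond to a scanned area of about 16.9 x 12.7 cm, i.e. slightly less than the full 13 x 18 cm plate. The short Python snippet below merely restates this arithmetic.

# Back-of-the-envelope check (not from the paper): relating the 10 micron scan
# resolution to the reported pixel counts and the 13 x 18 cm plate format.
PIXEL_SIZE_MM = 0.010                       # 10 micron per pixel
PIXELS_LONG, PIXELS_SHORT = 16930, 12700    # reported image size
PLATE_LONG_MM, PLATE_SHORT_MM = 180, 130    # 13 x 18 cm glass plate

print(f"scanned area: {PIXELS_LONG * PIXEL_SIZE_MM:.1f} x "
      f"{PIXELS_SHORT * PIXEL_SIZE_MM:.1f} mm "
      f"of the {PLATE_LONG_MM} x {PLATE_SHORT_MM} mm plate")
# -> scanned area: 169.3 x 127.0 mm of the 180 x 130 mm plate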
3. PHOTOGRAMMETRIC PROCESSING
The photogrammetric reconstruction process consists of:
• phototriangulation (calibration, orientation and bundle adjustment),
• image coordinate measurement (automatic matching or manual procedure) and point cloud generation,
• modeling, i.e. surface generation and texture mapping for photo-realistic visualization.
2. THE METRIC IMAGES 
The metric images were acquired with a TAF camera 
[Finsterwalder et ah, 1968], a photo-theodolit camera that 
acquires photos on 13x18 cm glass plates. The original photos 
were scanned by Vexcel Imaging Inc with the ULTRA SCAN 
5000 at a resolution of 10 micron. The final digitized images 
resulted in 16930 x 12700 pixels each (Figure 3). Their 
acquisition procedure (Figure 4, left) is known as well as the 
interior parameters of the camera [Kostka, 1974]. * • 
: m t : 
- jTm> 
&r ; : 
I 
—’ -n—— — 
1 ^---7;-? 
-..L fip' 
A... 
Figure 3: The three metric images acquired by Kostka in 1970. 
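To illustrate what the bundle adjustment step computes, the following sketch sets up a minimal adjustment on synthetic data: given approximate orientations, fixed control points and noisy image measurements, it refines camera poses and tie points by minimizing the reprojection residuals of the collinearity equations. This is an illustrative toy example with an assumed distortion-free camera model and made-up numbers, not the software actually used for the Buddha project.

# Minimal bundle adjustment sketch with synthetic data (illustrative only;
# this is not the software used by the authors). Known interior orientation,
# no lens distortion; control points are held fixed to define the datum.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

FOCAL = 1000.0                  # assumed focal length [pixel]
PP = np.array([0.0, 0.0])       # assumed principal point

def project(pts, rvec, tvec):
    # collinearity equations: object space -> camera frame -> image plane
    cam = Rotation.from_rotvec(rvec).apply(pts) + tvec
    return PP + FOCAL * cam[:, :2] / cam[:, 2:3]

def residuals(params, n_cams, ctrl_pts, n_tie, cam_idx, pt_idx, obs):
    rvecs = params[:3 * n_cams].reshape(n_cams, 3)
    tvecs = params[3 * n_cams:6 * n_cams].reshape(n_cams, 3)
    tie = params[6 * n_cams:].reshape(n_tie, 3)
    pts = np.vstack([ctrl_pts, tie])          # control points stay fixed
    proj = np.empty_like(obs)
    for c in range(n_cams):
        sel = cam_idx == c
        proj[sel] = project(pts[pt_idx[sel]], rvecs[c], tvecs[c])
    return (proj - obs).ravel()

# synthetic configuration: 3 stations facing an object ~50 units away
rng = np.random.default_rng(0)
ctrl = rng.uniform(-10, 10, (6, 3)) + [0, 0, 50]        # "control points"
tie_true = rng.uniform(-10, 10, (40, 3)) + [0, 0, 50]   # tie points
pts_true = np.vstack([ctrl, tie_true])
rvecs_true = rng.normal(0, 0.05, (3, 3))
tvecs_true = np.array([[-5, 0, 0], [0, 0, 0], [5, 0, 0]], float)

cam_idx = np.repeat(np.arange(3), len(pts_true))
pt_idx = np.tile(np.arange(len(pts_true)), 3)
obs = np.empty((len(cam_idx), 2))
for c in range(3):
    sel = cam_idx == c
    obs[sel] = project(pts_true[pt_idx[sel]], rvecs_true[c], tvecs_true[c])
obs += rng.normal(0, 0.3, obs.shape)                    # measurement noise

# perturbed start values play the role of the approximations derived from
# the contour plot and the data in [Kostka, 1974]
x0 = np.concatenate([
    (rvecs_true + rng.normal(0, 0.01, (3, 3))).ravel(),
    (tvecs_true + rng.normal(0, 0.5, (3, 3))).ravel(),
    (tie_true + rng.normal(0, 0.5, tie_true.shape)).ravel(),
])
res = least_squares(residuals, x0,
                    args=(3, ctrl, len(tie_true), cam_idx, pt_idx, obs))
print(f"RMS reprojection error: {np.sqrt(np.mean(res.fun ** 2)):.2f} px")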
3.2 Image coordinate measurement and point cloud generation
Image measurements were performed with both automated and manual procedures. For the automated reconstruction of the statue we first applied a commercial package (VirtuoZo) and then our self-developed matching software. In the end, however, manual measurements were used to obtain a very precise, reliable and detailed 3D model of the Buddha.
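Once the orientations are known, every image point measured in two or more photographs can be converted into an object point by forward intersection, which is how the measured image coordinates become a point cloud. The sketch below shows a standard linear (DLT-style) intersection; the camera matrices and coordinates in the usage example are made up for illustration and do not correspond to the Bamiyan imagery.

# Sketch of multi-image forward intersection: measured image coordinates plus
# known orientations -> one object point. Illustrative values only.
import numpy as np

def triangulate(proj_mats, image_pts):
    # linear least-squares intersection of two or more rays
    rows = []
    for P, (x, y) in zip(proj_mats, image_pts):
        rows.append(x * P[2] - P[0])   # each measurement contributes two
        rows.append(y * P[2] - P[1])   # linear equations in the unknown X
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]                # de-homogenize

# tiny usage example with made-up orientations
K = np.array([[1000, 0, 0], [0, 1000, 0], [0, 0, 1]], float)
def P_from(R, t):
    return K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])

R0 = np.eye(3)
# second camera shifted 5 units along X (a rough stand-in for a second station)
P_list = [P_from(R0, [0, 0, 0]), P_from(R0, [-5, 0, 0])]
X_true = np.array([2.0, 1.0, 50.0])
measurements = []
for P in P_list:
    x = P @ np.append(X_true, 1.0)
    measurements.append(x[:2] / x[2])
print(triangulate(P_list, measurements))   # ~ [2, 1, 50]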
3.2.1 Automatic measurements with commercial software 
The 3D model of the Buddha statue was generated with the VirtuoZo digital photogrammetric system. The matching method used by VirtuoZo is a global image matching technique based on a relaxation algorithm [VirtuoZo NT, 1999]. It uses both grid point matching and feature point matching. The important aspect of this matching algorithm is its smoothness constraint satisfaction procedure: with the smoothness constraint, areas of poor texture can be bridged, assuming that the model surface varies smoothly over the image area. Through the VirtuoZo pre-processing module, the user can manually or semi-automatically measure features such as ridges, edges and regions in difficult or hidden areas. These features are used as breaklines, and planar surfaces can be interpolated, e.g. between two parallel edges. In VirtuoZo, a feature-point-based matching method is first used to compute a relative orientation between image pairs. The measured features are then used to weight the smoothness constraints, while the resulting approximations are used in the subsequent global matching [Zhang et al., 1992]. In our application, a regular image grid with 9 pixel spacing was matched using a patch size of 9 x 9 pixels and 4 pyramid levels. As a result, a point cloud of ca. 178,000 points was obtained (Figure 5). Due to the smoothness constraints and the grid-point based matching, very small features, such as the folds of the dress, were filtered out or skipped.
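For readers unfamiliar with grid-point matching, the following sketch mimics the basic idea at a much simpler level: a regular grid of points in one image is matched into the other by normalized cross-correlation of small patches, and a smoothing pass over the resulting grid stands in for the smoothness constraint. It is not the relaxation algorithm of VirtuoZo (no feature weighting, no image pyramid), and all parameter values are only loosely inspired by the settings quoted above.

# Very simplified stand-in for grid-point matching with a smoothness step.
# This is NOT VirtuoZo's relaxation algorithm; it only illustrates matching a
# regular grid of points with small correlation patches and then smoothing the
# result. Roughly rectified images are assumed (parallax only along x).
import numpy as np
from scipy.ndimage import median_filter

GRID, HALF, SEARCH = 9, 4, 30    # 9 px grid spacing, 9x9 patches, +/-30 px search

def ncc(a, b):
    # normalized cross-correlation of two equally sized patches
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else -1.0

def match_grid(left, right):
    h, w = left.shape
    ys = np.arange(HALF + SEARCH, h - HALF - SEARCH, GRID)
    xs = np.arange(HALF + SEARCH, w - HALF - SEARCH, GRID)
    disp = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            patch = left[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1]
            scores = [ncc(patch, right[y - HALF:y + HALF + 1,
                                       x + d - HALF:x + d + HALF + 1])
                      for d in range(-SEARCH, SEARCH + 1)]
            disp[i, j] = np.argmax(scores) - SEARCH
    # crude smoothness step: a median filter over the disparity grid stands in
    # for the relaxation-based smoothness constraint described in the text
    return median_filter(disp, size=3), ys, xs

# tiny synthetic check: the right image is the left image shifted by 6 pixels
rng = np.random.default_rng(1)
left = rng.random((200, 200))
right = np.roll(left, 6, axis=1)
disp, ys, xs = match_grid(left, right)
print(np.median(disp))           # ~6; each (xs[j], ys[i]) grid point plus its
                                 # disparity could then be intersected in 3D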
	        