the illumination or ambient light problems. Active sensors (e.g.
laser scanners) [Besl, 1988; Rioux et al., 1987] avoid these
limitations by creating features on the surface by controlled
projection of light. They have the advantage of acquiring dense
3D points automatically. Recent advances in laser, CCD, and
electronics technology have made possible detailed shape
measurements with an accuracy better than 1 part in 1,000 at
rates exceeding 10,000 points per second. The scanning and
imaging configuration determines the point density. Many sensors
also produce organized points, in the form of an array or range
image, suitable for automatic modeling. A single range image is
usually not sufficient to cover an object or a structure. The
number of images needed depends on the shape of the object, the
amount of self-occlusion and obstacles, and the size of the object relative to
the sensor range. The 3D data must be registered in a single
coordinate system. Several registration techniques are available;
most are based on the iterative closest point (ICP) approach.
For the approach to converge to the correct solution, it needs to
start with the images approximately registered. This requires
either knowledge of the sensor positions or manual registration
using features. Once the range images are registered in a single
coordinate system, they can be used for modeling. This step
reduces the large number of 3D points to a triangular mesh that
preserves the geometric details and is at the same time suitable
for fast rendering [Curless and Levoy, 1996; Soucy et al., 1995].
In this process, the areas where the images overlap must be
integrated to create a non-redundant mesh. Other requirements
include filling holes and removing outliers.
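The ICP registration mentioned above can be sketched in a few
lines. The following Python sketch shows a minimal point-to-point
variant using NumPy and SciPy; the function name, convergence
test, and parameter values are illustrative assumptions and are
not taken from any of the cited systems. It presumes the rough
initial alignment discussed above.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=50, tol=1e-6):
    """Minimal point-to-point ICP aligning 'source' (Nx3) to 'target' (Mx3).
    Assumes the two clouds are already approximately registered, e.g. from
    known sensor positions or a few manually matched features."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        # 1. Pair every source point with its closest target point.
        dist, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid transform (Kabsch/SVD) for this pairing.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # 4. Stop when the mean residual no longer improves.
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, src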
There are two main types of range sensors. The first is
triangulation-based: it projects light in a known direction from
a known position and measures the direction of the returning
light through its detected position. The accuracy of the measurements
will of course depend on the triangle base relative to its height.
Since, for practical reasons, the triangle base is rather short,
triangulation-based systems have a limited range of less than 10
meters (most are less than 3 meters). The second sensor type is
based on time-of-flight. These measure the delay between
emission and detection of the light reflected by the surface, and
thus the accuracy does not deteriorate rapidly as the range
increases. Time-of-flight sensors can provide measurements in
the kilometer range.
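The different ways accuracy scales with range for the two sensor
types can be shown with a back-of-the-envelope calculation. The
Python sketch below uses purely illustrative assumed values for
baseline, focal length, spot localization, and timing jitter; they
are not specifications of any particular scanner.

# Triangulation: depth z = f * b / d, so a disparity error sigma_d
# propagates as dz ~ (z**2 / (f * b)) * sigma_d, i.e. the depth
# uncertainty grows with the square of the range.
f = 2000.0       # focal length in pixels (assumed)
b = 0.25         # baseline in metres (assumed)
sigma_d = 0.1    # spot localization accuracy in pixels (assumed)

for z in (1.0, 3.0, 10.0):
    dz = (z ** 2 / (f * b)) * sigma_d
    print(f"triangulation at {z:5.1f} m -> ~{dz * 1000:6.2f} mm uncertainty")

# Time-of-flight: z = c * t / 2, so a timing jitter sigma_t maps to a
# roughly range-independent error of c * sigma_t / 2.
c = 3.0e8         # speed of light, m/s
sigma_t = 50e-12  # 50 ps timing jitter (assumed)
print(f"time-of-flight -> ~{c * sigma_t / 2 * 1000:.1f} mm at any range")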
Notwithstanding the advantages of range sensors, we should
mention some drawbacks. They can be costly, bulky, affected
by surface reflective properties, and may be complex to operate
and calibrate. Also, a range sensor is designed for a specific
range, so one intended for close range may not be suitable for
long range. A comparative evaluation of image-based and range-
based methods can be found elsewhere [El-Hakim et al., 1995].
2.3 Image-Based Rendering
In image-based rendering (IBR), images are used directly to
generate new views for rendering without a geometric model
[e.g. Kang, 1999]. This has the advantage of creating realistic-
looking virtual environments at speeds independent of scene
complexity. The technique relies on automatic stereo matching
that, in the absence of geometric data, requires a large number
of closely spaced images to succeed. The required computations
may need high processing power and large memory. Object
occlusions and discontinuities will also affect the output. The
ability to move freely within the scene and to view objects from
any position will be limited without a geometric model. It is
therefore unlikely that IBR will be the approach of choice for
purposes other than visualization. For tourists, for whom general
visualization is sufficient, this approach may be adequate, but for
historians and researchers, and of course for documentation,
geometric details are needed.
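As an illustration of the dense matching that IBR depends on, the
Python sketch below implements a brute-force sum-of-squared-
differences block matcher for a rectified grayscale pair. The
window size and disparity range are arbitrary assumptions;
practical systems add regularization, sub-pixel refinement, and
occlusion handling, which is why widely separated or weakly
textured views cause difficulties.

import numpy as np

def block_match_disparity(left, right, max_disp=32, win=5):
    """Dense disparity by brute-force SSD block matching on a rectified
    grayscale pair (2-D float arrays). Parameters are illustrative only."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            # Try every candidate disparity and keep the lowest SSD cost.
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                ssd = np.sum((patch - cand) ** 2)
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp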
3. COMBINING MULTIPLE TECHNIQUES
From the above summary of current techniques, it is obvious
that none by itself can satisfy all the requirements of cultural
heritage applications. Given that:
• Although laser scanning will provide all the details, it is
usually not practical to implement as the only technique
for every object and structure. Large buildings, for example,
require a large number of scans and produce a huge number of
points even on flat surfaces.
• Image-based modeling alone will have difficulty with
irregular and sculpted surfaces. Also, it is important to
develop an approach that requires only a small number of
widely separated views while offering a high level of
automation and the ability to deal with occluded and
unmarked surfaces.
Figure 1: Combined image-based and laser scanning methods.
(A) The Abbey of Pomposa. (B) Dazu, China
Therefore, combining techniques where the basic shapes are
determined by image-based methods and fine details by laser
scanning is the logical solution. This is best described by an
example. In figure 1, most of the structure is easy to model from
images taken with a digital camera. However, parts of the
surface contain fine geometric details that will be very difficult
or impractical to model from images, such as the enlarged
sections shown. Those parts are best acquired by a laser scanner
and added to the global model created from the images. This
involves matching and integrating local detailed points obtained
by the scanner into the global model. We measure several
features, usually six, in the images and then extract the 3D
coordinates of the same features from the scanned data. This is
done interactively using the intensity images generated by the
laser scanner. The transformation parameters computed from these
correspondences are then used to register the two data sets in a
single coordinate system. The details of
each approach and the combined approach will be described
next.
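As a concrete illustration of the feature-based registration step
described above, the Python sketch below estimates the
transformation between the scanner and image coordinate systems
in closed form from a handful of matched 3D features. It assumes
a similarity transform (scale, rotation, translation) estimated in
the Horn/Umeyama style; the function and variable names are
illustrative, and the formulation actually used may differ.

import numpy as np

def similarity_from_correspondences(scan_pts, model_pts, with_scale=True):
    """Closed-form (Umeyama-style) estimate of the scale, rotation and
    translation mapping scanner-frame points onto the image-based model,
    from corresponding 3-D features (Nx3 arrays, N >= 3, e.g. N = 6)."""
    mu_s, mu_m = scan_pts.mean(axis=0), model_pts.mean(axis=0)
    S, M = scan_pts - mu_s, model_pts - mu_m
    H = S.T @ M / len(scan_pts)          # cross-covariance of the two sets
    U, D, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    corr = np.diag([1.0, 1.0, sign])     # reflection correction
    R = Vt.T @ corr @ U.T
    if with_scale:
        var_s = (S ** 2).sum() / len(scan_pts)
        s = np.trace(np.diag(D) @ corr) / var_s
    else:
        s = 1.0
    t = mu_m - s * R @ mu_s
    return s, R, t

# Usage sketch (all names are hypothetical): scan_feat and model_feat are
# (6, 3) arrays of matched features, dense_scan is the full laser scan.
# s, R, t = similarity_from_correspondences(scan_feat, model_feat)
# dense_in_model = s * dense_scan @ R.T + t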