
interactive processing (e.g. when extracting measurements, 
intersections or orthophotos) as well as for using it for 
visualization purposes and animations. There are two different 
issues with meshing: triangulation, which creates the initial 
mesh from the point data, and mesh simplification, which
reduces the amount of data. 
4.4.1 Triangulation 
There are different approaches to triangulating scan data: 
• Per-scan triangulation based on the 2D grid provided by the scanner. In this case, triangulation is relatively easy, as the neighbourhood relations are given by the 2D grid; the only challenge for the algorithm is to disconnect the mesh along depth discontinuities (a minimal sketch of this approach follows the list). The main problem with per-scan triangulation is that the meshes from different scans need to be merged after triangulation, which can be a difficult task, in particular for a cluttered scene rather than a single object (Turk, 92).
• Triangulation based on the 3D point cloud. Several algorithms exist that directly triangulate the multi-scan point cloud (Bernardini, 99; Lorensen, 87). However, it is often challenging to achieve satisfying results, especially for a cluttered scene with high dynamic range. Vertices of the resulting meshes usually do not correspond directly to the measured points, and the resolution of the original scan might get lost during the triangulation.
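The following sketch (Python/NumPy) illustrates the first approach under simple assumptions: the scan is delivered as an H x W grid of 3D points with NaN marking missing returns, and a fixed edge-length threshold (the value used here is purely illustrative) serves to detect depth discontinuities. Each 2 x 2 grid cell is split into two triangles, and triangles whose edges span a discontinuity are dropped.

```python
import numpy as np

def triangulate_scan_grid(points, max_edge=0.05):
    """Triangulate a single scan given as an H x W x 3 grid of points.

    points   : array of shape (H, W, 3) following the scanner's 2D grid;
               missing returns are encoded as NaN.
    max_edge : depth-discontinuity threshold in scene units (illustrative).
    Returns (vertices, triangles); triangle indices refer to the
    flattened (H*W, 3) vertex array.
    """
    h, w, _ = points.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = points.reshape(-1, 3)
    triangles = []

    def edge_ok(a, b):
        pa, pb = flat[a], flat[b]
        if np.isnan(pa).any() or np.isnan(pb).any():
            return False
        return np.linalg.norm(pa - pb) <= max_edge

    for r in range(h - 1):
        for c in range(w - 1):
            a, b = idx[r, c], idx[r, c + 1]          # top-left, top-right
            d, e = idx[r + 1, c], idx[r + 1, c + 1]  # bottom-left, bottom-right
            # split the cell along the b-d diagonal; keep a triangle only
            # if none of its edges spans a depth discontinuity
            if edge_ok(a, b) and edge_ok(b, d) and edge_ok(d, a):
                triangles.append((a, b, d))
            if edge_ok(b, e) and edge_ok(e, d) and edge_ok(d, b):
                triangles.append((b, e, d))
    return flat, np.asarray(triangles, dtype=np.int64)
```

Meshes produced per scan in this way still have to be merged afterwards, which is exactly the difficulty noted above.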
As with registration, each algorithm has its advantages in certain situations, and the processing software should therefore offer both possibilities to the user. As the currently available automatic algorithms often produce erroneous results from the complex and noisy raw data, the system should also include some tools for mesh editing.
Figure 3: Snapshot of the 3DVeritas application showing a point 
cloud of the Nurgahe model and a zoom into the 
resulting multi-resolution mesh. 
4.4.2 Mesh simplification 
One reason for meshing is to reduce the amount of data. 
Therefore, the meshing algorithm should include simplification 
of the mesh. Several algorithms exist that compress the mesh 
data while minimising the loss of information (Garland, 97; 
Cignoni, 98). Generally, a trade-off has to be made between the quality of the resulting mesh (how close the mesh is to the original data for a given number of triangles) and the time needed to compute the simplified mesh. Figure 3 shows a point cloud of the Nurgahe data set and a detail of the resulting multi-resolution mesh. A loss of information is unavoidable beyond a certain degree of compression; the ideal algorithm therefore builds a continuous hierarchy of mesh details, which allows a low-count mesh to be used when possible while still providing access to the high-detail mesh when necessary (Hoppe, 96). This allows working with huge meshes interactively while still retaining access to the full resolution of the original scan.
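As a rough illustration, the following sketch builds a small level-of-detail pyramid by repeated quadric-error decimation (Garland, 97). It assumes the Open3D library as one possible tool choice; the file name and the target triangle counts are placeholders, and the discrete levels are only a stand-in for the continuous hierarchy described by Hoppe (96).

```python
import open3d as o3d

# Load the full-resolution mesh (placeholder file name).
mesh = o3d.io.read_triangle_mesh("scan_mesh.ply")
mesh.compute_vertex_normals()

# Build discrete levels of detail by quadric-error decimation;
# the target triangle counts are illustrative.
levels = {}
for target in (500_000, 100_000, 20_000):
    simplified = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target)
    levels[target] = simplified
    print(f"level with {len(simplified.triangles)} triangles built")

# During interaction a coarse level is displayed, and a finer level is
# swapped in when the user zooms into a region of interest.
```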
4.5 Texture processing 
The purpose of texture processing is to integrate the 3D 
measurements from the laser scanner with 2D information taken 
with an external or internal camera. The 2D images are usually 
taken within the visible spectrum, but can also be from the non- 
visible spectrum. The purpose of applying texture to the 3D data is manifold:
• Intuitive interaction with the acquired data.
• Visualisation and animation.
• Adding information which is not present in the 3D data (decay of material, frescos, etc.).
• Increasing the resolution of the data.
4.5.1 Camera calibration 
In order to project a 3D point into an image and thus assign a colour value, the software needs to know the external and internal camera parameters. The external parameters are the translation and rotation of the camera relative to the global reference frame; the internal parameters vary according to the mathematical model that is used to describe the camera. The simplest model (Tsai, 87) uses 5 parameters (focal length, centre of projection, pixel aspect ratio and first-degree radial distortion); a minimal sketch of the corresponding projection and calibration is given at the end of this subsection. There are different ways to obtain the required parameters:
• Data sheet. The manufacturer usually specifies the internal parameters on the data sheet. However, for a non-metric camera, these values are usually not precise enough for accurate texture mapping. For a scanner with an in-built camera, the relative position between camera and laser (i.e. the external camera parameters) is also known a priori. Again, the accuracy of these values is usually not sufficient for high-quality mapping; additionally, the quality of the images themselves is usually fairly low. However, it can be sufficient for simple visualisation purposes and to improve the interaction with the data.
• Pre-calibration. The internal parameters can be computed 
for a specific camera using standard computer vision 
algorithms, which process images taken of a calibration 
object. Once calibrated, the internal parameters can be 
used for the mapping of further images. Naturally, the 
external parameters cannot be pre-calibrated. 
• Calibration from the range data. The external (and, if necessary, internal) parameters for each image can be calibrated using the 3D point cloud or the corresponding mesh. The calibration algorithm needs a set of corresponding 2D coordinates from the image and 3D coordinates from the point cloud to compute the required parameters. A simple algorithm relies on user interaction to identify the point correspondences. More advanced algorithms use computer vision techniques to automatically identify matching features in the RGB and range images.
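As an illustration of the last option, the sketch below estimates the external parameters from a handful of 2D-3D correspondences and checks them by re-projection. It uses NumPy and OpenCV as one possible tool choice (not the software described here); the intrinsic values and the correspondences are placeholders, and the distortion vector carries only the first-degree radial term of the simple model mentioned above.

```python
import numpy as np
import cv2

# Internal parameters of a simple pin-hole model (placeholder values):
# focal length, principal point (centre of projection), pixel aspect
# ratio and first-degree radial distortion k1.
fx = 1200.0
aspect = 1.0
cx, cy = 640.0, 480.0
K = np.array([[fx, 0.0, cx],
              [0.0, fx * aspect, cy],
              [0.0, 0.0, 1.0]])
dist = np.array([0.1, 0.0, 0.0, 0.0])  # k1, k2, p1, p2 (only k1 used)

# 2D-3D correspondences, e.g. picked interactively in the image and in
# the registered point cloud (placeholder values; >= 6 pairs advisable).
points_3d = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [1.0, 1.0, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.5, 0.5, 0.3],
                      [0.2, 0.8, 0.1]])
points_2d = np.array([[320.0, 240.0],
                      [900.0, 250.0],
                      [880.0, 700.0],
                      [330.0, 690.0],
                      [610.0, 430.0],
                      [420.0, 600.0]])

# Estimate the external parameters (rotation and translation of the
# camera relative to the global reference frame).
ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, dist)

# Re-project the 3D points to judge the quality of the calibration.
proj, _ = cv2.projectPoints(points_3d, rvec, tvec, K, dist)
error = np.linalg.norm(proj.reshape(-1, 2) - points_2d, axis=1)
print("mean reprojection error [px]:", error.mean())
```

The same re-projection can then be used to assign a colour value from the image to every visible 3D point or mesh vertex.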