in laser scanning software. The colouring is performed using 
the EOPs of the image, the geometric model for the sensor 
and the collinearity condition to obtain 2D pixel coordinates 
for each 3D lidar point. The 2D position is used to assign 
RGB values to the 3D points (Fig. 4a). Advantages of the
method are that a 3D impression of the material distribution 
can be quickly created, and points not in the geologically 
relevant parts of the scene can be rapidly highlighted and 
removed. The point cloud can easily be classified and
segmented into material classes. However, disadvantages
are that point clouds are discontinuous (as described in
Section 2), and during colouring no account is taken of
geometric obstructions in the camera's field of view between
a given point and the image plane, so points may be coloured
incorrectly when images are not captured from a position
similar to that of the laser scanner. These disadvantages can
be alleviated by using triangle meshes rather than point
clouds: the surface model is then continuous, and the
occluding geometry can be used to test visibility when
colouring the mesh vertices (Fig. 4b).
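A minimal sketch of this projection step is given below, assuming a world-to-camera rotation matrix and image coordinates in pixels; type and variable names are illustrative, not the authors' implementation, and the mesh visibility test is omitted.

// Sketch: project a 3D lidar point into an image with the
// collinearity condition, using the image EOPs.
struct EOP {
    double X0, Y0, Z0;   // projection centre (exterior orientation)
    double R[3][3];      // world-to-camera rotation matrix
};
struct IOP {
    double f;            // principal distance in pixels
    double x0, y0;       // principal point in pixels
};

// Returns false when the point lies behind the camera. Sign and
// rotation conventions vary between photogrammetric formulations.
bool projectPoint(const EOP& e, const IOP& i,
                  double X, double Y, double Z,
                  double& px, double& py)
{
    const double dX = X - e.X0, dY = Y - e.Y0, dZ = Z - e.Z0;
    const double u = e.R[0][0]*dX + e.R[0][1]*dY + e.R[0][2]*dZ;
    const double v = e.R[1][0]*dX + e.R[1][1]*dY + e.R[1][2]*dZ;
    const double w = e.R[2][0]*dX + e.R[2][1]*dY + e.R[2][2]*dZ;
    if (w >= 0.0) return false;
    px = i.x0 - i.f * u / w;   // collinearity equations
    py = i.y0 - i.f * v / w;
    return true;
}

The resulting pixel position is used to look up the RGB (or class) value assigned to the lidar point.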
  
Figure 4. Lidar geometry coloured using classification image 
in Fig. 2. a) point cloud; b) mesh. Height of area 
c. 15 m.
4.2 Photorealistic modelling with multiple textures 
A mesh makes it possible to visualise the coloured lidar
geometry in a continuous form, aiding interpretation.
However, the quality of the result is a function of the number 
of vertices and triangles in the mesh, where large triangles 
will cause degradation in detail. Texture mapping relates 
image pixels to the mesh geometry, making use of the full 
image resolution independently of mesh detail. During 
texture mapping, the vertices of each mesh triangle are 
projected into an appropriate image, defining an area of 
pixels to be used as texture. All defined patches are saved to 
one or more texture images and the position of each patch is 
stored as a property of the vertices in the original mesh. In 
3D viewing software, the mesh and texture images are 
loaded, and the stored texture coordinates define the linkage 
between vertices and image data, allowing on-screen 
rendering of the photorealistic model. 
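As a sketch of the texture-coordinate step, reusing the projection function and structures from the earlier sketch; the normalisation and row-flip conventions shown are assumptions:

#include <vector>

struct Vec2 { float u, v; };
struct Vec3 { double x, y, z; };

// Project each mesh vertex into the chosen image and normalise the
// pixel position to a [0,1] texture coordinate stored with the mesh.
std::vector<Vec2> computeTexCoords(const std::vector<Vec3>& verts,
                                   const EOP& eop, const IOP& iop,
                                   int width, int height)
{
    std::vector<Vec2> tc(verts.size(), Vec2{0.0f, 0.0f});
    for (size_t k = 0; k < verts.size(); ++k) {
        double px, py;
        if (projectPoint(eop, iop,
                         verts[k].x, verts[k].y, verts[k].z, px, py)) {
            tc[k].u = float(px / width);
            tc[k].v = 1.0f - float(py / height);  // image rows run top-down,
                                                  // OpenGL t runs bottom-up
        }
        // In practice a vertex failing the test would be assigned to a
        // different image rather than left with a default coordinate.
    }
    return tc;
}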
Hyperspectral classifications can be textured on the lidar 
model; however, the value is limited by the coarse resolution 
of the HySpex sensor. Superimposing hyperspectral results 
on an existing photorealistic model (lidar geometry textured 
with conventional digital images) provides a far more useful 
application of texture mapping. This multi-layer approach 
gives greater context to the thematic maps and is valuable for 
interpretation and validation. 
Software for performing the multi-layer texture mapping was 
implemented, along with 3D viewing software for 
interactively manipulating the layer combinations. Input data 
are the mesh model, Nikon imagery, and multiple 
hyperspectral images and processing products. In addition, a 
project file stores EOPs of all images as well as a designated 
layer number that each image should be mapped to. A typical 
layer configuration is for conventional images to form the 
first layer, and then hyperspectral results, such as MNF 
images and MTMF classifications, to form subsequent layers.
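The project file format itself is not described here; as an illustration, each record might be represented in memory along the following lines, reusing the EOP structure from the earlier sketch (names are hypothetical):

#include <string>

// Hypothetical in-memory form of one project-file record: the image to
// map, its exterior orientation, and the destination texture layer.
struct ProjectEntry {
    std::string imagePath;   // Nikon image or hyperspectral product
    EOP         eop;         // exterior orientation parameters
    int         layer;       // texture layer (1 = conventional imagery)
};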
  
Figure 5. Photorealistic model combining conventional 
digital imagery and multiple hyperspectral 
processing layers. Layer 1 is the Nikon imagery 
(0% transparency). Layer 2 is an MNF image
(Fig. 2c), with 50% transparency, 50% horizontal
cutoff and a sharp edge transition. Layer 3 is a
classification (Fig. 2d), with 40% transparency, a
horizontal and vertical cutoff, and a soft edge 
transition. Height of model c. 20 m. 
The viewing software was written in C++ using the 
OpenSceneGraph library for 3D rendering, providing a high- 
level interface to OpenGL. OpenGL supports the use of
multiple texture units, termed multitexturing, which define
how image textures are stored and used in the graphics system.
The number of layers is restricted only by the number of 
texture units supported by the graphics card of the computer 
viewing the processed model. During rendering, the layer 
combination is controlled using OpenGL state properties, 
making it possible to set the contribution of each texture unit. 
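For illustration, binding one image per texture unit with standard OpenSceneGraph calls could look like the following sketch; the authors' actual code is not reproduced in the text:

#include <osg/Geometry>
#include <osg/Texture2D>
#include <osgDB/ReadFile>
#include <string>

// Bind one layer image to its own texture unit on the mesh geometry.
// The per-vertex texture coordinates for that layer are attached to
// the same unit index, so each layer is sampled independently.
void bindLayer(osg::Geometry* mesh, unsigned int unit,
               const std::string& file)
{
    osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
    tex->setImage(osgDB::readImageFile(file));
    mesh->getOrCreateStateSet()->setTextureAttributeAndModes(
        unit, tex.get(), osg::StateAttribute::ON);
}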
Most modern graphics cards allow the use of programmable 
parts of the graphics pipeline. The OpenGL Shading 
Language offers a high degree of flexibility for multiple
textures, as user-defined functions for texture combination 
can be written and sent to the graphics card as fragment 
shaders. To take advantage of this, each layer is additionally 
given a transparency factor (0 to 100%) specifying the overall 
weighting of the layer. Horizontal and vertical factors are 
also given, to allow side-by-side visualisation of multiple 
layers. Each parameter is linked to a slide control in the 
graphical user interface, allowing interactive adjustment of 
the layer weighting (Fig. 5). The horizontal and vertical edge 
transition can be feathered to give hard or soft edges.
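A sketch of such a fragment shader for two layers is given below, with only the horizontal cutoff shown; the uniform names and the feathering formula are assumptions, and the vertical cutoff and additional layers are analogous.

#include <osg/Program>
#include <osg/Shader>
#include <osg/Uniform>

// The layer-combination logic lives in a GLSL fragment shader, sent
// to the graphics card via an osg::Program on the mesh state set.
static const char* fragSrc = R"(
    uniform sampler2D layer1;      // conventional Nikon texture
    uniform sampler2D layer2;      // hyperspectral product texture
    uniform float transparency2;   // overall weight of layer 2, 0..1
    uniform float cutoffX2;        // horizontal cutoff position, 0..1
    uniform float featherX2;       // feather width; ~0 gives a hard edge
    void main() {
        vec4 base = texture2D(layer1, gl_TexCoord[0].st);
        vec4 over = texture2D(layer2, gl_TexCoord[1].st);
        // smoothstep feathers the transition around the cutoff; the
        // small constant keeps its edges ordered for a hard cutoff
        float edge = 1.0 - smoothstep(cutoffX2 - featherX2,
                                      cutoffX2 + featherX2 + 0.0001,
                                      gl_TexCoord[1].s);
        gl_FragColor = mix(base, over, transparency2 * edge);
    }
)";

void attachShader(osg::StateSet* ss)
{
    osg::ref_ptr<osg::Program> prog = new osg::Program;
    prog->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSrc));
    ss->setAttributeAndModes(prog.get(), osg::StateAttribute::ON);
    // Sampler uniforms must name the texture units explicitly.
    ss->addUniform(new osg::Uniform("layer1", 0));
    ss->addUniform(new osg::Uniform("layer2", 1));
}

Updating uniforms such as these from the slide controls each frame is what allows the layer weighting to be adjusted interactively.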
	        