2. METHODOLOGY
The processing flow of our method for providing location data
is shown in Figure 1 and described as follows. First, a template
image is generated from a calibrated camera image. Second,
photorealistic panoramic images from various viewpoints are
prepared as point-cloud images by rendering massive point-
cloud data. Third, the image-matching process uses the template
image against the panoramic images as base images.
Finally, the location of the camera capture is detected by
selecting the matched panoramic image from all the panoramic
images. In addition, the direction of the camera capture is
detected from the matched position on that panoramic image.
The spatial resolution of location matching depends mainly on
the spacing of the arbitrary viewpoints used in panoramic image
generation, and the spatial angle resolution of direction
matching depends mainly on the resolution of the generated
panoramic image.
Figure 1. Processing flow: from the input data (camera image, colored point cloud, and viewpoint parameters) through template generation, point-cloud rendering, and template matching to the output camera rotation and translation parameters
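To make this flow concrete, the following sketch (in Python with OpenCV) outlines the matching stage: the template is compared against each panoramic image, the best-matched panorama gives the translation reference, and the matched pixel is converted to viewing angles. The function name locate_camera, the normalized cross-correlation matcher, and the pixel-to-angle convention (row 0 at the zenith) are illustrative assumptions rather than the exact implementation used here.

import cv2

def locate_camera(template, panoramas, viewpoints, deg_per_px):
    # template   : undistorted camera image projected into panorama geometry (grayscale)
    # panoramas  : point-cloud panoramic images rendered at candidate viewpoints (grayscale)
    # viewpoints : (X, Y, Z) position used to render each panoramic image
    # deg_per_px : spatial angle resolution of the panoramas, in degrees per pixel
    best_score, best_index, best_pixel = -1.0, None, None
    for i, pano in enumerate(panoramas):
        result = cv2.matchTemplate(pano, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, location = cv2.minMaxLoc(result)   # location = (column, row)
        if score > best_score:
            best_score, best_index, best_pixel = score, i, location
    column, row = best_pixel
    translation = viewpoints[best_index]                # camera translation parameters
    azimuth = column * deg_per_px                       # camera rotation from the matched pixel
    elevation = 90.0 - row * deg_per_px                 # assuming row 0 corresponds to the zenith
    return translation, (azimuth, elevation), best_score

Under these assumptions, with a spatial angle resolution of 0.1 degrees per pixel, a one-pixel matching error corresponds to a 0.1-degree error in the recovered direction, consistent with the resolution dependence noted above.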
2.1 Point-cloud rendering
Massive point-cloud data can be represented well by existing
visualization techniques. However, viewpoint translation in
point-cloud rendering reduces the visualization quality because
occlusions become exposed and the point distribution becomes
noticeably uneven. Although the point cloud preserves accurate
3-D coordinate values, far points showing through the gaps
among near points reduce the visualization quality for users.
Splat-based ray tracing [4] is a methodology that improves the
visualization quality by generating a photorealistic curved
surface on a panoramic view using the normal vectors of the
point-cloud data. A problem is the substantial time required for
surface generation in the 3-D workspace. Furthermore, the
curved-surface description is inefficient for representing urban
and natural objects in GIS data.
An advantage of 3-D point-cloud data is that they allow accurate
display from an arbitrary viewpoint. By contrast, panoramic
imagery has the advantage of appearing more attractive while
using fewer data. In addition, panoramic-image georeferencing [5]
and distance-value-added panoramic image processing [6] show
that both advantages can be combined for 3-D GIS visualization.
We therefore focus on the possibility that these advantages can
be combined by projecting the point cloud into panorama space.
In particular, we consider that a simpler filtering algorithm will
be important for achieving high-volume point-cloud processing
at high speed. We have therefore developed a point-based
rendering application with a simpler filtering algorithm to
generate photorealistic panoramic images from arbitrary
viewpoints, which we call LiDAR VR data [7, 8].
The processing flow of our methodology in this research is
described below. First, sensors acquire a point cloud with
additional color data such as RGB data or intensity data. The
sensor position is defined as an origin point in a 3-D workspace.
If color data cannot be acquired, distance values are attached to
a color index. We can therefore use a laser scanner, a stereo
camera, or a time-of-flight camera. Second, a LiDAR VR image
from the simulated viewpoint is generated using the point cloud.
Finally, the generated LiDAR VR image is filtered to fill in
missing points in the rendered result, using the distance values
between the viewpoint and the objects.
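As one plausible reading of this distance-based filtering step, the sketch below flags pixels whose range value is much larger than the smallest range in a small neighborhood (empty pixels, or far points showing through gaps among near points) and fills them from that nearest neighbor. The 3x3 window, the 1.2 distance ratio, and the function name are illustrative assumptions, not the exact algorithm developed in this work.

import numpy as np

def fill_see_through_points(color, distance, ratio=1.2):
    # color    : H x W x 3 rendered panoramic image
    # distance : H x W range image; np.inf where no point was projected
    out = color.copy()
    h, w = distance.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = distance[r - 1:r + 2, c - 1:c + 2]
            nearest = window.min()
            # An empty pixel, or a far point visible through near points, is
            # replaced by the color of the nearest point in its 3x3 neighborhood.
            if distance[r, c] > ratio * nearest:
                k = np.unravel_index(window.argmin(), window.shape)
                out[r, c] = color[r - 1 + k[0], c - 1 + k[1]]
    return out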
An example of point-cloud rendering is shown in Figure 2.
Figure 2. Part of a panoramic image, in which the left image is
the result after a viewpoint translation of 6 m from the sensor
point and the right image is the result after filtering
Panoramic image generation using the point cloud
First, the colored point cloud is projected from 3-D space to
panorama space. This transformation simplifies viewpoint
translation, filtering, and point-cloud browsing. The LiDAR VR
data comprise a panorama model and range data. The panorama
space can be a cylindrical model, a hemispherical model, or a
cubic model. Here, a spherical model is described. The
measured point data are projected onto the spherical surface,
and can be represented as range data, as shown in Figure 3. The
range data can preserve measured point data such as X, Y, Z, R,
G, B, and intensity data in the panorama space in a multilayer
style. Azimuth and elevation angles from the viewpoint to the
measured points can be calculated using 3-D vectors generated
from the view position and the measured points. When azimuth
angles and elevation angles are converted to column counts and
row counts in the range data with adequate spatial angle
resolution, a spherical panoramic image can be generated from
the point cloud.
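The conversion just described can be sketched as follows, assuming an equirectangular spherical panorama in which the column index follows the azimuth angle and the row index follows the elevation angle; the function name, the nearest-point-wins depth test, and the treatment of color values are illustrative assumptions.

import numpy as np

def project_to_panorama(points, colors, viewpoint, resolution_deg):
    # points         : N x 3 measured XYZ coordinates
    # colors         : N x 3 color values (e.g. RGB, or a distance-based color index)
    # viewpoint      : (3,) position of the simulated viewpoint
    # resolution_deg : spatial angle resolution of the panorama, in degrees per pixel
    cols = int(round(360.0 / resolution_deg))
    rows = int(round(180.0 / resolution_deg))
    color_img = np.zeros((rows, cols, 3), dtype=colors.dtype)
    range_img = np.full((rows, cols), np.inf)

    v = points - viewpoint                               # 3-D vectors viewpoint -> points
    dist = np.maximum(np.linalg.norm(v, axis=1), 1e-9)
    azimuth = np.degrees(np.arctan2(v[:, 1], v[:, 0])) % 360.0
    elevation = np.degrees(np.arcsin(np.clip(v[:, 2] / dist, -1.0, 1.0)))

    c = np.minimum((azimuth / resolution_deg).astype(int), cols - 1)             # azimuth -> column
    r = np.minimum(((90.0 - elevation) / resolution_deg).astype(int), rows - 1)  # elevation -> row

    # Write points from farthest to nearest so that the nearest point per pixel wins.
    for i in np.argsort(dist)[::-1]:
        range_img[r[i], c[i]] = dist[i]
        color_img[r[i], c[i]] = colors[i]
    return color_img, range_img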
Figure 3. LiDAR VR data comprising a spherical panorama model (left side of the figure) and range data (right side of the figure)