distance in the z-buffer. If the calculated distance is smaller, the
previous ground point corresponding to that cell of the z-buffer is
labelled as invisible and the distance in the z-buffer is updated
with the smaller distance (Qin et al., 2003). In Figure 1, the
distances between the points on line AC and the projection centre O
are larger than those between the points on lines BD and DC and O, so
the points on line AC are invisible. A triangulated irregular network
(TIN) needs to be created first, since the ground is represented by
the discrete LiDAR points. Each triangle is then projected onto the
image plane. As shown in Figure 2, each triangle is rasterized, and
the z-values of its vertices A, B and C are interpolated to obtain the
z-value of each pixel inside the triangle. Finally, the z-buffer is
updated.
Figure 1. Diagram of Z-Buffer algorithm
Figure 2. Rasterization of triangle
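As an illustrative sketch of the comparison rule described above (the function names, the grid layout and the tolerance `eps` are our own, not from the paper), the per-cell update and the visibility labelling can be written as:

```python
import numpy as np

def update_zbuffer(points_uv_z, width, height):
    """points_uv_z: (N, 3) array of projected pixel coordinates (u, v)
    and distance z to the projection centre O. Each z-buffer cell keeps
    the smallest distance seen so far."""
    zbuf = np.full((height, width), np.inf)
    for u, v, z in points_uv_z:
        u, v = int(u), int(v)
        if 0 <= u < width and 0 <= v < height and z < zbuf[v, u]:
            zbuf[v, u] = z  # a closer point hides the previous one
    return zbuf

def visibility(points_uv_z, zbuf, eps=1e-6):
    """A point is visible only if its distance matches the cell minimum;
    otherwise a closer point occludes it."""
    return [bool(z <= zbuf[int(v), int(u)] + eps)
            for u, v, z in points_uv_z]
```

When two points fall into the same cell, the smaller distance wins, which is exactly the comparison the z-buffer performs.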
So the procedure of occlusion detection based on the LiDAR
points is as follows:
1) Prepare data.
2) Create TIN.
3) Project each LiDAR point onto the image plane and
calculate its z-value.
4) Rasterize each triangle in the image plane and update the
z-buffer.
5) Compare the z-value of each point with the corresponding
value in the z-buffer to obtain the index matrix.
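Steps 4) and 5) above can be sketched sequentially as follows. This is a minimal illustration, not the paper's implementation: the barycentric rasterization, the function name and the tolerance `eps` are our own choices, and the points are assumed to have already been projected in step 3).

```python
import numpy as np

def occlusion_index(points, triangles, width, height, eps=1e-6):
    """points: (N, 3) array of projected (u, v, z); triangles: list of
    vertex-index triples (the TIN). Returns a boolean visibility flag
    per point (steps 4 and 5 of the procedure)."""
    zbuf = np.full((height, width), np.inf)
    for ia, ib, ic in triangles:                      # step 4: rasterize
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = points[[ia, ib, ic]]
        denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        if denom == 0:                                # degenerate triangle
            continue
        for v in range(int(min(ay, by, cy)), int(max(ay, by, cy)) + 1):
            for u in range(int(min(ax, bx, cx)), int(max(ax, bx, cx)) + 1):
                # barycentric weights of pixel (u, v)
                wa = ((by - cy) * (u - cx) + (cx - bx) * (v - cy)) / denom
                wb = ((cy - ay) * (u - cx) + (ax - cx) * (v - cy)) / denom
                wc = 1.0 - wa - wb
                if min(wa, wb, wc) < 0:               # pixel outside
                    continue
                z = wa * az + wb * bz + wc * cz       # interpolated depth
                if 0 <= u < width and 0 <= v < height:
                    zbuf[v, u] = min(zbuf[v, u], z)
    # step 5: a point is visible iff it matches the cell minimum
    return np.array([z <= zbuf[int(v), int(u)] + eps for u, v, z in points])
```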
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B7, 2012
XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia

Shadow is caused by ground objects blocking the light. From this
perspective it is the same problem as occlusion, so the same method
can be used to detect shadows. Now the position of the projection
centre depends on the zenith angle and azimuth angle of the sun,
which can be calculated from the imaging time, or the direction of
the shadows can simply be measured in the image manually. The
projection mode becomes a parallel projection (Zhou et al., 2005;
Rau et al., 2002).
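A minimal sketch of the parallel projection along the sun direction follows; the angle conventions (azimuth measured clockwise from north, zenith from the vertical) and the function names are assumptions for illustration, not taken from the cited works.

```python
import math

def sun_direction(zenith_deg, azimuth_deg):
    """Unit vector pointing from the ground toward the sun (azimuth
    clockwise from north, zenith from the vertical; assumed convention)."""
    z = math.radians(zenith_deg)
    a = math.radians(azimuth_deg)
    return (math.sin(z) * math.sin(a), math.sin(z) * math.cos(a), math.cos(z))

def parallel_project(point, d):
    """Project a point along the sun direction d onto the z = 0 plane;
    the travelled distance t plays the role of the z-value in the
    z-buffer (smaller t means closer to the light)."""
    x, y, z = point
    dx, dy, dz = d
    t = z / dz                          # distance to the ground plane
    return (x - t * dx, y - t * dy, t)  # (u, v, depth)
```

With the sun at the zenith the projection degenerates to a vertical drop, and no shadow is cast.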
3. GPU ACCELERATION
The detection of occlusion and shadow is time-consuming because of
the expensive computation of the visibility analysis and the large
volume of data, and in the Z-Buffer algorithm, building and updating
the z-buffer is the slowest step. The GPU is much more powerful than
the CPU in parallel computing. Two ways of accelerating with the GPU
are introduced in this part: one uses OpenGL, the other CUDA.
OpenGL is the most widely used, best supported and best documented
2D/3D graphics API in industry. OpenGL provides a buffer called the
depth buffer that is used for hidden-surface removal. This means that
step 4) of the detection procedure above can be implemented as
follows: enable the depth test in OpenGL, render the TIN in GL_FILL
mode to an off-screen buffer, and then read the data in the depth
buffer, which is exactly the z-values we need. The rendering is
optimized in GPU hardware, so it is very fast (Segal et al., 2012).
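The per-fragment test that the depth buffer performs (with the default `GL_LESS` comparison) can be mimicked outside OpenGL; the following NumPy sketch is illustrative only and is not actual OpenGL code.

```python
import numpy as np

def depth_test_less(depth_buffer, fragment_uv, fragment_z):
    """Mimics glDepthFunc(GL_LESS): a fragment passes only if its depth
    is strictly smaller than the stored value, which is then replaced."""
    u, v = fragment_uv
    if fragment_z < depth_buffer[v, u]:
        depth_buffer[v, u] = fragment_z
        return True   # fragment kept; the colour buffer would be written
    return False      # fragment discarded as hidden
```

After rendering the whole TIN this way, the buffer holds the smallest depth per pixel, which is what reading the depth buffer returns.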
Since the release of CUDA, it has become increasingly
convenient and efficient to use GPUs to speed up applications.
In step 4), every triangle is rasterized independently, which makes
the step highly suitable for parallel computing: every triangle is
processed by an independent thread. As shown in Figure 3, every
triangle is divided into two parts by line l, which passes through
vertex B. The start and end points (E and F) of each row, and the
pixels between them, are then determined. The coordinates and z-value
of point F are interpolated from points A and D, and those of point E
in the same way. The z-values of the pixels between E and F are then
calculated by interpolation from E and F, after which the next row is
processed in the same manner. Because points in different triangles
may be projected onto the same pixel, an atomic operation is needed
when updating the z-buffer.
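The per-thread scanline rasterization described above can be sketched sequentially in Python (a real CUDA kernel would instead write each fragment into the global z-buffer through an atomic compare-and-swap loop); the helper names and vertex ordering are our own.

```python
def lerp(p, q, t):
    """Linear interpolation between points p and q, each (u, v, z)."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def scanline_rasterize(tri):
    """Scanline rasterization of one projected triangle, as one thread
    would do it: sort the vertices by row, walk row by row, interpolate
    the start (E) and end (F) point of each row, then every pixel
    between them. Returns {(u, v): z}."""
    a, b, c = sorted(tri, key=lambda p: p[1])       # by image row v
    out = {}
    for v in range(int(a[1]), int(c[1]) + 1):
        # F lies on the long edge AC; E on AB (upper half) or BC (lower)
        tf = (v - a[1]) / (c[1] - a[1]) if c[1] != a[1] else 0.0
        f = lerp(a, c, tf)
        if v < b[1]:
            te = (v - a[1]) / (b[1] - a[1]) if b[1] != a[1] else 0.0
            e = lerp(a, b, te)
        else:
            te = (v - b[1]) / (c[1] - b[1]) if c[1] != b[1] else 0.0
            e = lerp(b, c, te)
        u0, u1 = sorted((e[0], f[0]))
        for u in range(int(round(u0)), int(round(u1)) + 1):
            t = (u - e[0]) / (f[0] - e[0]) if f[0] != e[0] else 0.0
            out[(u, v)] = e[2] + t * (f[2] - e[2])  # z by interpolation
    return out
```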
Figure 3. Rasterization of triangle on GPU
4. EXPERIMENTS
The data in the experiments comes from the city of Huiyan in
Guangdong Province, China, and contains approximately 1.2 million
points with a point density of 1.4 points/m², together with a
7228 × 5228 aerial image with a resolution of 0.13 m/pixel.
As shown in Figure 4, the first image is the original LiDAR point
cloud and the second is the original image. The third image
shows the hidden area in black. Figure 5 shows the original