Full text: Papers accepted on the basis of peer-reviewed full manuscripts (Part A)

In: Paparoditis N., Pierrot-Deseilligny M., Mallet C., Tournaire O. (Eds), IAPRS, Vol. XXXVIII, Part 3A, Saint-Mandé, France, September 1-3, 2010
3.2.1 Laser-based occlusion detection 
The 3D model has previously been registered to the laser point
cloud (see Section 3.1). Each façade can therefore be associated
with a laser acquisition time interval. All the laser points acquired
during this interval are extracted from the original cloud. In
addition, all the points belonging to the façade itself or to its
background are removed.
The points belonging to the ground are also extracted. A first
set of ground points is detected within each vertical scan line as
the lowest significant peak in the elevation histogram. These
points are then used as seeds for a local surface-growing
algorithm applied to the whole cloud. The ground points are
iteratively and chronologically stored in a small fixed-size queue,
the acquisition order corresponding to the progression along the
street. At each iteration, a least-squares plane is computed over
the stored points, and all the points of the cloud lying on this
plane are marked as ground points. The most recently acquired
ground points are then used to update the queue. The seed
location thus moves along the street at each iteration and follows
the ground curvature.
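The queue-based ground growing described above can be sketched as follows; the queue size, the plane tolerance, and the least-squares fitting details are illustrative assumptions, as the paper does not specify them:

```python
from collections import deque

import numpy as np

def grow_ground(points, seeds, queue_size=200, plane_tol=0.05):
    """Iteratively grow ground points along the street (illustrative sketch).

    points: (N, 3) array, assumed sorted by acquisition time.
    seeds:  indices of the initial ground points (lowest elevation peaks).
    queue_size and plane_tol are assumed parameters, not from the paper.
    """
    queue = deque(seeds[:queue_size], maxlen=queue_size)
    ground = set(queue)
    changed = True
    while changed:
        # Least-squares plane z = a*x + b*y + c over the queued seed points.
        P = points[list(queue)]
        A = np.c_[P[:, 0], P[:, 1], np.ones(len(P))]
        coeff, *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
        # Mark every cloud point close to this plane as ground.
        resid = np.abs(points @ np.array([coeff[0], coeff[1], -1.0]) + coeff[2])
        near = np.nonzero(resid < plane_tol)[0]
        new = [i for i in near if i not in ground]
        changed = bool(new)
        ground.update(new)
        # The queue keeps only the last-acquired ground points, so the
        # seed location moves along the street and follows the curvature.
        queue.extend(sorted(new))
    return sorted(ground)
```

A usage example: with a flat synthetic street and one elevated point, the elevated point is the only one not classified as ground.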
The remaining points describe the occluding objects related to the
current façade. These points are projected onto the occlusion
layers associated with the rectified images of the façade. As laser
points only provide a sparse spatial sampling of the objects, each
point is replaced by a square corresponding to the footprint of the
laser beam. The square height is derived from the laser vertical
resolution and the distance to the camera. The square width is
derived from the vehicle displacement between two consecutive
laser scan lines. As in (Frueh et al., 2005), it would be interesting
to take the acquisition angle into account to refine the width.
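The sizing of these mask squares can be sketched as below; the parameter names (angular resolution, vehicle speed, scan rate) are assumptions introduced for illustration, not values given in the paper:

```python
import math

def point_footprint(distance_m, vertical_res_rad, vehicle_speed_mps, scan_rate_hz):
    """Approximate the mask square drawn for one laser point (illustrative).

    Height: the laser vertical angular resolution projected at the
            object's distance.
    Width:  the vehicle displacement between two consecutive scan lines.
    """
    height = 2.0 * distance_m * math.tan(vertical_res_rad / 2.0)
    width = vehicle_speed_mps / scan_rate_hz
    return width, height
```

For example, at 10 m with a 0.5-degree vertical resolution, a 5 m/s vehicle speed and a 100 Hz scan rate, the square is about 5 cm wide and 9 cm high.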
Figure 3 shows three laser-based occlusion layers superimposed
on the corresponding rectified images. Figures 4a and 4b show a
fourth rectified image and its associated laser-based occlusion
layer. The car has been correctly detected, but not the pedestrian.
A false detection can also be observed just above the car, caused
by another pedestrian who is not visible in the image.
Moving objects are particularly difficult to handle because the
laser data and the images are not acquired at exactly the same
time: the cameras are triggered every n meters, whereas the laser
data are collected continuously along scan lines. Two failure
cases are distinguished:
• False-positive: an occluding object was detected in 
the laser cloud but it is not visible in the image. 
• False-negative: no occlusion was detected in the laser 
data although a mobile object is visible in the image. 
These two cases are handled using image information, as 
explained below. 
Figure 3. Example of laser-based occlusion layers superimposed
on the corresponding rectified images.
3.2.2 Image-based occlusion refinement 
In order to solve the false-negative cases, the laser-based
occlusion detection is complemented with an image-based
technique inspired by (Bohm, 2004). Occlusions are detected with
a background estimation technique: each façade point is
projected onto the various images, and the corresponding pixels
are clustered in RGB space. The cluster containing the most
pixels is assumed to describe the background, and the remaining
pixels are marked as image-based occlusions in the
corresponding occlusion layer. A morphological dilation and
erosion are subsequently applied to remove small regions.
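For a single façade point, the background estimation can be sketched as a two-class clustering of its pixel samples across the images; the 2-means procedure, its deterministic initialization, and the iteration count are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def mark_occlusions(pixel_samples, iters=10):
    """Background estimation for one façade point (illustrative sketch).

    pixel_samples: (n_images, 3) RGB values of the same façade point
    observed in n_images rectified images. The largest 2-means cluster
    is taken as the background; samples in the other cluster are
    flagged as occlusions.
    """
    X = np.asarray(pixel_samples, dtype=float)
    # Deterministic 2-means init: first sample and the sample farthest from it.
    far = np.linalg.norm(X - X[0], axis=1).argmax()
    centers = np.stack([X[0], X[far]])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    background = np.bincount(labels, minlength=2).argmax()
    return labels != background  # True where the pixel is an occlusion
```

With five samples near a light façade colour and two dark samples from a passing occluder, the two dark samples end up flagged.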
Figure 4c shows a result of the image-based occlusion
detection. Figure 4d shows the occlusion layer obtained by
combining the laser-based and image-based detections. The
mobile pedestrian has been almost entirely detected. The
residual false detections have no effect on the final texture, as
the radiometry can be taken from another image.
Figure 4. (a) Rectified image; (b) Laser-based occlusion layer;
(c) Image-based occlusion layer; (d) Combination of laser-based
and image-based occlusion detection.
The false-positive laser-based detections are handled by a similar
technique. First, the occluding laser points are grouped into
connected components describing potential occluding objects.
The laser points associated with each occluding object are then
projected onto the various images, and the corresponding pixels
are clustered in RGB space. A small cluster dispersion
reinforces the presence of a static object at the location
indicated by the laser points, whereas a high cluster dispersion
indicates that the detected occluding object might actually be
mobile and should not be taken into account at this particular
location in the occlusion layers. Hence, if a majority of points
are associated with a high dispersion, the corresponding
occluding object is discarded. It is a case of mobile occlusion
that should be handled using the image-based occlusion detection
described previously.
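This majority vote over per-point dispersions can be sketched as follows; the dispersion measure (mean per-channel standard deviation), the threshold, and the majority ratio are illustrative assumptions, not values from the paper:

```python
import numpy as np

def is_static_object(per_point_samples, disp_thresh=20.0, majority=0.5):
    """Decide whether an occluding object found in the laser data is static.

    per_point_samples: one (n_images, 3) array per laser point of the
    object, holding the RGB values of that point's projection in each
    image. A static object shows a consistent colour across the images
    (low dispersion); a mobile one reveals the changing background
    behind it (high dispersion).
    """
    high = 0
    for samples in per_point_samples:
        # Dispersion score: mean per-channel standard deviation.
        disp = np.asarray(samples, dtype=float).std(axis=0).mean()
        if disp > disp_thresh:
            high += 1
    # Keep the object only if no majority of its points disperse.
    return high <= majority * len(per_point_samples)
</tool>```

An object whose points keep a near-constant colour across images is kept; one whose projections vary strongly is discarded as mobile.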
The method is illustrated in Figure 5. Figure 5a shows the
dispersion scores computed for each laser point. Figure 5b
shows the thresholded scores, and Figure 5c shows the
classification of the occluding objects as valid (black, low
dispersion) or discarded (white, high dispersion). Figure 6
shows an example of valid occluding laser points, colored using
the image RGB information. Figure 7 shows an example of a final
texture computed with and without occlusion detection. Most
occlusions have been detected and replaced. Two errors can still
be observed. The car windscreen has not been removed because
it was neither scanned by the laser nor detected by the image-based
clustering. One of the pedestrians standing in front of the
window has not completely disappeared either, because he
stands very close to the wall.