4. Finally, a further merging step takes into account separately all the pixels that are left. In other words, we now neglect the segment analysis of the first step for all those pixels that have not been aggregated into a plane, and we try to merge them one at a time with the nearest plane. The largest distance that a point is allowed to have from a plane and still be aggregated to it corresponds to the fourth parameter in Table 1, o4. In this processing step, too, the computation of the plane parameters is repeated after each aggregation; in this way, the newly added points are taken into account in the successive iterations. However, since at this point the planar patches are usually sufficiently large, the adjustments are very limited (a sketch of this step is given after the list).
5. The last step involves all the planar surfaces already recognized, which may be further aggregated into larger planes to improve the process output and obtain a better recognition of the objects in the scene. The means for realizing this operation is again a similarity measure (see the second sketch after the list):

   s = \frac{1}{3} \left[ \frac{a a' + 1}{\sqrt{a^2+1}\,\sqrt{a'^2+1}} + \frac{b b' + 1}{\sqrt{b^2+1}\,\sqrt{b'^2+1}} + \frac{c c' + 1}{\sqrt{c^2+1}\,\sqrt{c'^2+1}} \right]   (2)

where a, b, c and a', b', c' are the three parameters that identify each of the two planar patches.
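The point aggregation of step 4 can be illustrated with a minimal sketch, assuming that each planar patch is stored as z = a x + b y + c and that delta4 plays the role of the threshold o4 in Table 1; the names (point_plane_distance, refit_plane, merge_leftover_points) are illustrative and not taken from the paper.

```python
import numpy as np

def point_plane_distance(a, b, c, x, y, z):
    """Orthogonal distance of the point (x, y, z) from the plane z = a*x + b*y + c."""
    return abs(a * x + b * y + c - z) / np.sqrt(a**2 + b**2 + 1.0)

def refit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) array of points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return a, b, c

def merge_leftover_points(planes, members, leftovers, delta4):
    """Try to attach each leftover point to its nearest plane.

    planes   : list of (a, b, c) tuples
    members  : list of (N_i, 3) arrays of points already assigned to each plane
    leftovers: (M, 3) array of points not yet aggregated
    delta4   : maximum allowed point-to-plane distance (hypothetical name for o4)
    """
    unassigned = []
    for p in leftovers:
        d = [point_plane_distance(a, b, c, *p) for (a, b, c) in planes]
        k = int(np.argmin(d))
        if d[k] <= delta4:
            members[k] = np.vstack([members[k], p])
            # Recompute the plane parameters after each aggregation, so that the
            # newly added point influences the next distance evaluations.
            planes[k] = refit_plane(members[k])
        else:
            unassigned.append(p)  # remains in the "noise floor"
    return planes, members, np.array(unassigned)
```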
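Step 5 can likewise be sketched using the similarity measure (2) as reconstructed above; the merging threshold s_min below is a hypothetical parameter, not one of those listed in Table 1.

```python
import numpy as np

def plane_similarity(p, q):
    """Similarity (2) between two planes p = (a, b, c) and q = (a', b', c')."""
    terms = [(u * v + 1.0) / (np.sqrt(u**2 + 1.0) * np.sqrt(v**2 + 1.0))
             for u, v in zip(p, q)]
    return sum(terms) / 3.0

# Example: two nearly parallel, nearly coincident patches are candidates for
# merging, while a strongly tilted one is kept separate.
p1, p2, p3 = (0.10, 0.02, 5.0), (0.11, 0.02, 5.1), (1.50, -0.80, 2.0)
s_min = 0.99
print(plane_similarity(p1, p2) > s_min)   # True: merge
print(plane_similarity(p1, p3) > s_min)   # False: keep separate
```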
The approach does not guarantee that the whole original range image is subdivided into planar regions, and isolated points or even small regions may not be considered in the final output. This could be considered a drawback in some cases, since these areas represent the "noise floor" of the algorithm, i.e., positions where the procedure is not able to recover the true surface from the distorted measurements. However, it can also be an advantage for some kinds of data. For instance, we should keep in mind that LIDAR data are affected by random noise at the edges of buildings, because of false responses. So, it can be useful to discard these points, both for a successive edge analysis and to prevent them from causing large errors in the reconstructed object shape.
Figure 1 provides a way to understand how the plane growing procedure acts through steps 1, 3, 4, and 5 of the algorithm. Indeed, we may observe the original data (in this case, a very simple test image), the boundaries of the planar regions initially detected, the regions obtained after segment, point and plane aggregation, and the final result. Black pixels in fig. 1(c) represent areas that could not be characterized at the corresponding processing stage.
2.1 Some considerations on similarity evaluation
Similarity is the key to the preceding algorithm, both for the initial data mining and for the successive object reconstruction process. In this subsection we shall discuss the formulas used for this evaluation and propose a better formulation. Moreover, we shall show with an example why this similarity evaluation could be modulated to refine the data analysis and to discriminate as much as possible between natural and artificial objects.
The similarity index used for both the segment and the plane aggregations is based on the work in Jiang and Bunke, 1994, but reflects a similarity evaluation with some drawbacks, which can be appreciated by looking at the following figure, where we plot s_ij with respect to m_j, taking m_i = 10, n_i = n_j = 0, and following either (1) or a different formulation (to be discussed next and presented in (3)).
[Figure 2: two panels, (a) and (b), plotting the similarity index (vertical axis, from 0 to 1) as a function of the second segment length (horizontal axis, from 0 to 20).]
Figure 2. Similarity index for a monodimensional segment with length equal to 10 with respect to a
second segment: in (a) the index is computed according to (1), while in (b) according to (3).
It is clear that fig. 2 (b), where similarity is computed according to