2.2 Data Driven Reconstruction 
Frequently, the representation of buildings is based on 
constructive solid geometry (CSG) or boundary representation 
(B-Rep). In contrast, we represent the buildings by cell decomposition (Haala et al., 2006). By these means,
problems which can occur during the generation of 
topologically correct boundary representations can be avoided. 
Additionally, the implementation of geometric constraints such 
as meeting surfaces, parallelism and rectangularity is simplified. 
Due to the applied representation scheme, the idea of our 
reconstruction algorithm is to segment an existing coarse 3D 
building object with a flat front face into 3D cells. Each 3D cell 
represents either a homogeneous part of the facade or a window 
area. Therefore, the cells have to be differentiated based on the availability of measured LiDAR points. After this classification step, window cells are eliminated, while the remaining facade cells are glued together to generate the refined 3D building model. These steps are illustrated in Figure 4 and Figure 5 and are explained in the following sections.
The processing is based on the facade and point cloud marked 
by the white polygon in Figure 3. 
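For illustration, the following Python sketch outlines this workflow in a strongly simplified form of our own devising; it is not the authors' implementation. The front face is reduced to a rectangle, the partition planes to vertical and horizontal cut positions, and the Cell class and all function names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cell:
    # Rectangular footprint of a 3D cell on the flat front face (metres).
    x0: float
    x1: float
    y0: float
    y1: float
    label: str = "unknown"   # later set to "facade" or "window"

def decompose(facade_w: float, facade_h: float,
              x_cuts: List[float], y_cuts: List[float]) -> List[Cell]:
    """Split the front face into small, non-overlapping cells along the
    partition planes (reduced here to vertical/horizontal cut positions
    derived from the detected window lines)."""
    xs = [0.0] + sorted(x_cuts) + [facade_w]
    ys = [0.0] + sorted(y_cuts) + [facade_h]
    return [Cell(xs[i], xs[i + 1], ys[j], ys[j + 1])
            for i in range(len(xs) - 1) for j in range(len(ys) - 1)]

def refine(cells: List[Cell]) -> List[Cell]:
    """Eliminate window cells; the remaining facade cells are 'glued'
    together (here simply collected; a real implementation would merge
    the solids into one refined building model)."""
    return [c for c in cells if c.label == "facade"]
```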
Figure 4. Lindenmuseum, Stuttgart: LiDAR point cloud (left), 
and detected edge points and window lines (right) 
Figure 5. Lindenmuseum, Stuttgart: classified 3D cells (left), 
3D facade model (middle), and refined 3D facade 
model (right) 
2.2.1 Point Cloud Segmentation
At glass, LiDAR pulses are either reflected or the glass is penetrated. Thus, as can be seen in Figure 4(left), laser scanning usually yields no points in the facade plane at window areas. If only the points that lie on or in front of the facade are considered, the windows show up as areas without point measurements. These no-data areas can be used for the point cloud segmentation, which aims at the detection of window edges. For example, the edge points of a left window border are detected if no neighbouring measurements can be found to their right within a predefined search radius. In a next step, horizontal and vertical lines are estimated from non-isolated edge points. Figure 4(right) shows the extracted edge points at the window borders as well as the derived horizontal and vertical lines. Based on these window lines, planar delimiters can be generated for a subsequent spatial partitioning. Each boundary line defines a partition plane which is perpendicular to the facade. For the determination of the window depth, an additional partition plane can be estimated from the LiDAR points measured at the window crossbars. These points are detected by searching for a plane parallel to the facade, shifted in its normal direction. The set of partition planes provides the structural information for the cell decomposition process; it is used to intersect the existing building model, producing a set of small, non-overlapping 3D cells.
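As an illustration of this segmentation step, the sketch below implements a brute-force variant of the left-border test and of the window-depth estimate; the function names, the least-squares line fit, and the histogram-based depth estimate are our own simplifications under stated assumptions, not the authors' implementation.

```python
import numpy as np

def detect_left_edge_points(points_2d: np.ndarray, search_radius: float) -> np.ndarray:
    """Flag points without any neighbouring measurement to their right within
    the search radius; such points lie on the left border of a no-data
    (window) area.  points_2d holds facade-plane coordinates, shape (N, 2).
    Note that points at the right facade boundary are flagged as well; in
    practice only edge points adjacent to interior no-data regions are kept."""
    flags = np.zeros(len(points_2d), dtype=bool)
    for i, (x, y) in enumerate(points_2d):
        dx = points_2d[:, 0] - x
        dy = points_2d[:, 1] - y
        right = (dx > 0) & (dx * dx + dy * dy <= search_radius ** 2)
        flags[i] = not right.any()       # no neighbour found to the right
    return flags

def fit_vertical_window_line(edge_points: np.ndarray) -> float:
    """Estimate a vertical window border x = const from non-isolated edge
    points (the least-squares solution is simply the mean x coordinate)."""
    return float(np.mean(edge_points[:, 0]))

def estimate_window_depth(dist_behind_facade: np.ndarray, bin_width: float = 0.02) -> float:
    """Rough stand-in for the search of a plane parallel to the facade:
    take the most frequent distance behind the facade plane, i.e. the depth
    at which points measured on the window crossbars cluster."""
    d = dist_behind_facade[dist_behind_facade > 0]
    if d.size == 0:
        return 0.0
    counts, edges = np.histogram(d, bins=np.arange(0.0, d.max() + bin_width, bin_width))
    k = int(np.argmax(counts))
    return 0.5 * (edges[k] + edges[k + 1])
```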
2.2.2 Classification and Reconstruction 
In order to classify the 3D cells into facade and window cells, a 
point-availability-map is generated. It is a binary image with 
low resolution where each pixel defines a grid element on the 
facade. The optimal grid size is slightly larger than the point sampling distance on the facade. Grid elements on the
facade where LiDAR points are available produce black pixels 
(facade pixels), while white pixels (non-facade pixels) refer to 
no-data areas. The classification is implemented by computing 
the ratio of facade to non-facade pixels for each 3D cell. Cells 
including more than 70% facade pixels are defined as facade 
solids, whereas 3D cells with less than 10% facade pixels are 
assumed to be window solids. While most of the 3D cells can be classified reliably, the result is uncertain, especially at window borders and in areas with sparse point coverage. However,
the integration of neighbourhood relationships and constraints 
concerning the simplicity of the resulting window objects 
allows for a final classification of such uncertain cells. Figure 
5(left) shows the classified 3D cells: facade cells (grey) and 
window cells (white). 
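A minimal sketch of the point-availability map and of the per-cell classification rule with the thresholds quoted above follows; the rasterisation details and the function signatures are assumptions for illustration, not the original implementation.

```python
import numpy as np

def point_availability_map(points_2d: np.ndarray, width: float, height: float,
                           grid_size: float) -> np.ndarray:
    """Low-resolution binary image of the facade: True where at least one
    LiDAR point falls into the grid element (a 'facade pixel'), False for
    no-data areas.  grid_size should be slightly larger than the point
    sampling distance on the facade."""
    nx = max(1, int(np.ceil(width / grid_size)))
    ny = max(1, int(np.ceil(height / grid_size)))
    grid = np.zeros((ny, nx), dtype=bool)
    ix = np.clip((points_2d[:, 0] // grid_size).astype(int), 0, nx - 1)
    iy = np.clip((points_2d[:, 1] // grid_size).astype(int), 0, ny - 1)
    grid[iy, ix] = True
    return grid

def classify_cell(avail_map: np.ndarray, rows: slice, cols: slice,
                  facade_thresh: float = 0.7, window_thresh: float = 0.1) -> str:
    """Classify one 3D cell by the ratio of facade pixels inside its footprint
    on the availability map; cells between the two thresholds stay 'uncertain'
    and are resolved later via neighbourhood relations and simplicity
    constraints on the resulting window objects."""
    patch = avail_map[rows, cols]
    ratio = float(patch.mean()) if patch.size else 0.0
    if ratio > facade_thresh:
        return "facade"
    if ratio < window_thresh:
        return "window"
    return "uncertain"
```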
Within a subsequent modelling process, the window cells are 
cut out from the existing coarse building model. Thus, windows and doors appear as indentations in the building facade, as depicted in Figure 5(middle). Moreover, the reconstruction
approach is not limited to indentations. Details can also be 
added as protrusions to the facade (Becker and Haala, 2007). 
However, the achievable level of detail for 3D objects that are 
derived from terrestrial laser scanning depends on the point 
sampling distance. Small structures are either difficult to detect 
or not represented in the data at all. Nevertheless, by integrating high-resolution image data into the reconstruction process, the amount of detail can be increased (Becker and Haala, 2007).
This is shown exemplarily for the reconstruction of window crossbars in Figure 5(right).
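The cutting step itself amounts to a boolean subtraction of the window solids from the coarse model. As a much-simplified 2.5D stand-in (our assumption, not the authors' cell-based implementation), its effect can be pictured on a depth map of the facade:

```python
import numpy as np

def carve_windows(depth_map: np.ndarray, window_mask: np.ndarray,
                  window_depth: float) -> np.ndarray:
    """Push the window regions back behind the facade plane by the estimated
    window depth, so that windows and doors appear as indentations; a
    negative offset would model protrusions instead."""
    carved = depth_map.copy()
    carved[window_mask] += window_depth
    return carved

# Tiny usage example: a 4 x 6 facade patch with one 2 x 2 window region.
facade = np.zeros((4, 6))                  # 0 = on the facade plane
mask = np.zeros_like(facade, dtype=bool)
mask[1:3, 2:4] = True
print(carve_windows(facade, mask, 0.15))   # window pixels now lie 0.15 m behind
```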
2.3 Automatic Inference of Facade Grammar 
As is already visible in Figure 3, the given scan configuration resulted in considerable variations in the available point coverage for the respective building. Thus, the bottom-up facade reconstruction presented in the previous section was realized for a facade that is relatively well observed. This overall result is now used to infer the facade grammar. Frequently, such formal grammars are applied during object reconstruction to ensure the plausibility and the topological correctness of the reconstructed elements (Muller et al., 2006). In our application, a formal grammar will be used to generate facade structure where sensor data is only partially available or missing entirely.