ADAPTIVE BUILDING EDGE DETECTION BY COMBINING LIDAR DATA AND
AERIAL IMAGES
LI Yong, WU Huayi
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University,
Wuhan 430079, China - liyongwhu@gmail.com, wuhuayi@lmars.whu.edu.cn
KEY WORDS: LIDAR, Aerial image, Fusion, Edge detection, Building extraction, DEM
ABSTRACT:
Building edge detection plays a key role in building extraction and is an important and necessary step for building description. Edges detected from aerial images have high horizontal accuracy and represent various edge shapes well, but edge detection in images is often affected by contrast, illumination and occlusion. LIDAR data are well suited for identifying building regions, but miss some edge points because the laser pulses sample the surface discretely. To make full use of the complementary advantages of the two data sources, a new adaptive building edge detection method combining LIDAR data and aerial images is proposed in this paper. First, objects and ground are separated by a filter based on the morphological gradient, and non-building objects are removed by mathematical morphology and region growing. Second, the aerial image is smoothed by Gaussian convolution and its gradients are calculated. Finally, edge buffer areas are created in image space from the edge points of each individual roof patch, and pixels with locally maximal gradient within a buffer area are taken as candidate edge pixels. The ultimate edges are determined by fusing the image edges with the roof patch through morphological operations. Experimental results show that the method adapts to various building shapes, and that the resulting edges are closed and one pixel wide, which makes them well suited for subsequent building modelling.
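A minimal sketch of the image-side steps summarized above (Gaussian smoothing, gradient computation, and selection of locally maximal gradient pixels inside a buffer around the LIDAR-derived roof edge) is given below. Function and parameter names such as sigma and buffer_radius are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy import ndimage

def candidate_edges(image, roof_edge_mask, sigma=1.5, buffer_radius=5):
    """image: 2-D grey-level aerial image;
    roof_edge_mask: boolean raster of edge points of one roof patch."""
    # 1. Smooth the aerial image with a Gaussian convolution.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)

    # 2. Compute the gradient magnitude of the smoothed image.
    gy, gx = np.gradient(smoothed)
    grad = np.hypot(gx, gy)

    # 3. Create a buffer area around the roof-patch edge points (dilation).
    buffer_area = ndimage.binary_dilation(roof_edge_mask,
                                          iterations=buffer_radius)

    # 4. Keep pixels whose gradient is a local maximum inside the buffer.
    local_max = ndimage.maximum_filter(grad, size=3)
    return buffer_area & (grad == local_max) & (grad > 0)
```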
1. INTRODUCTION
1.1 Background
Building models are essential components of three-dimensional GIS, and building edges are among the most significant features of buildings. Building edge detection plays a key role in building extraction and is an important and necessary step for building description. Both aerial images and LIDAR data provide powerful support for this task. Edges detected from aerial images have high horizontal accuracy and represent various edge shapes well. LIDAR point clouds are well suited for determining which points belong to each building surface, which helps to locate the approximate position of each building edge. However, each data source has its own weaknesses as well as strengths for edge detection.
In aerial images, the contrast between buildings and the background is often low, and most scenes contain complex spectral and textural information, including occlusion and shadows. These factors make edge detection difficult: some detected edges are not the building edges we need, while some building edges are missed or broken. Furthermore, edges of different objects, or edges of different layers of one building, tend to stick to each other.
In LIDAR point clouds, building edges can be located by analyzing the height changes between laser footprints. However, because the laser pulses sample the surface discretely, some edge points are not captured, so the horizontal accuracy of edges detected from LIDAR data alone is poor.
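The sketch below illustrates the idea of locating edges through height changes, assuming the point cloud has been rasterized into a DSM grid; the 3x3 neighbourhood and the 2 m jump threshold are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def lidar_edge_cells(dsm, jump_threshold=2.0):
    """dsm: 2-D array of interpolated LIDAR heights in metres."""
    # Morphological gradient: difference between the local maximum and
    # minimum height in a 3x3 neighbourhood; large values mark height jumps.
    grad = (ndimage.grey_dilation(dsm, size=(3, 3)) -
            ndimage.grey_erosion(dsm, size=(3, 3)))
    # Cells whose height jump exceeds the threshold are potential edge cells.
    return grad > jump_threshold
```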
1.2 Related work
As discussed above, both data sources have their own pros and cons. To overcome the drawbacks of using a single data source, automatically detecting building edges by fusing LIDAR data and aerial images is considered a promising strategy. Correct edge detection remains a challenging task because of scene complexity, so how to combine the two data sources in an optimal way, such that their weaknesses compensate for each other effectively, is an active research topic.
(Rottensteiner and Jansa, 2002) first generate initial 3D planar segments of buildings from LIDAR point clouds and create polyhedral models and wire frames of buildings by analyzing the relations of neighbouring planar segments and regularizing the building shape. The initial polyhedral building models are then verified in the images to improve the accuracy of their geometric parameters. The wire frames of the buildings are back-projected into the images and matched with image edges to improve the accuracy of the building outlines. Only straight line segments are matched with the object edges.
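The back-projection step mentioned above can be sketched as follows: a wire-frame vertex from the LIDAR-derived model is projected into the aerial image so that it can be matched against nearby image edges. The 3x4 projection matrix P is assumed to be known from the image orientation; this formulation is an illustration, not the exact procedure of the cited paper.

```python
import numpy as np

def project_vertex(P, X):
    """P: 3x4 camera projection matrix; X: 3-D object point (X, Y, Z)."""
    x = P @ np.append(X, 1.0)   # homogeneous image coordinates
    return x[:2] / x[2]         # pixel coordinates (column, row)
```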
(Sohn and Dowman, 2007) collect rectilinear lines around building outlines in both data-driven and model-driven ways for building modelling. In the data-driven approach, significantly long straight lines around building boundaries are extracted from optical imagery and then geometrically regularized by analyzing the dominant line angles. The lines produced in this way do not always cover all parts of the building edges, because significant boundary lines may be missed owing to low contrast, shadow and occlusion. In the model-driven approach, new lines are extracted from the point clouds to compensate for the lack of data-driven line density, employing specific building models based on the assumption that building outlines are composed of parallel lines. Consequently, only building edges with a parallel and orthogonal structure can be handled.
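A short illustrative sketch of the regularization idea summarized above: each extracted line is snapped to the dominant building direction or its perpendicular, enforcing the parallel/orthogonal structure assumed by the model-driven step. The angle representation (degrees modulo 180) is an assumption for illustration.

```python
import numpy as np

def regularize_angles(angles_deg, dominant_deg):
    """Snap each line angle to the dominant direction or its normal."""
    snapped = []
    for a in angles_deg:
        # Lines are undirected, so compare angles modulo 180 degrees against
        # the dominant direction and the direction orthogonal to it.
        choices = [dominant_deg % 180, (dominant_deg + 90) % 180]
        diffs = [min(abs(a - c) % 180, 180 - abs(a - c) % 180)
                 for c in choices]
        snapped.append(choices[int(np.argmin(diffs))])
    return snapped
```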