A NEW APPROACH OF DIGITAL BRIDGE SURFACE MODEL GENERATION
Hui Ju
Department of Civil, Environmental and Geodetic Engineering, The Ohio State University,
470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210, USA — ju.32@osu.edu
Commission III, WG III/1
KEY WORDS: LiDAR Intensity Image, Aerial Image, Co-registration, DTM/DSM, Digital Bridge Surface Model
ABSTRACT:
Bridge areas present difficulties for orthophoto generation, and to avoid "collapsed" bridges in the orthoimage, operator assistance is required to create a precise DBM (Digital Bridge Model), which is subsequently used for the orthoimage generation. In this paper,
a new approach of DBM generation, based on fusing LiDAR (Light Detection And Ranging) data and aerial imagery, is proposed.
No precise exterior orientation of the aerial image is required for the DBM generation. First, a coarse DBM is produced from
LiDAR data. Then, a robust co-registration between LiDAR intensity and aerial image using the orientation constraint is performed.
The from-coarse-to-fine hybrid co-registration approach includes LPFFT (Log-Polar Fast Fourier Transform), Harris Corners, PDF
(Probability Density Function) feature descriptor mean-shift matching, and RANSAC (RANdom Sample Consensus) as main
components. After that, bridge ROI (Region Of Interest) from LiDAR data domain is projected to the aerial image domain as the
ROI in the aerial image. Hough transform linear features are extracted in the aerial image ROI. For a straight bridge, a 1st order polynomial function is used, whereas for a curved bridge, a 2nd order polynomial function is used to fit the endpoints of the Hough linear features. The last step is to transform the smooth bridge boundaries from the aerial image back to the LiDAR data domain and merge them with the coarse DBM. Based on our experiments, this new approach is capable of providing a precise DBM, which can
be further merged with DTM (Digital Terrain Model) derived from LiDAR data to obtain the precise DSM (Digital Surface Model).
Such a precise DSM can be used to improve the orthophoto product quality.
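To make the boundary extraction and smoothing step concrete, the following is a minimal sketch, not the paper's implementation: OpenCV's probabilistic Hough transform stands in for the Hough linear feature extraction, a NumPy least-squares fit stands in for the 1st/2nd order polynomial fitting, and the function name and all thresholds are illustrative assumptions.

import cv2
import numpy as np

# Illustrative sketch (function name and thresholds are assumptions):
# extract Hough line segments inside the bridge ROI of the aerial image,
# then fit a polynomial through the segment endpoints (1st order for a
# straight bridge, 2nd order for a curved one). Assumes the bridge runs
# roughly along the image x axis so y can be modelled as a function of x.
def fit_bridge_boundary(roi_gray, curved=False):
    edges = cv2.Canny(roi_gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=5)
    if segments is None:
        return None
    pts = segments.reshape(-1, 4).astype(float)
    # Both endpoints of every detected segment contribute to the fit.
    xs = np.concatenate([pts[:, 0], pts[:, 2]])
    ys = np.concatenate([pts[:, 1], pts[:, 3]])
    degree = 2 if curved else 1
    return np.poly1d(np.polyfit(xs, ys, degree))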
1. INTRODUCTION

Nowadays, more and more orthophotos of highway corridor areas are demanded for the purpose of maintaining and advancing the public transportation system. Nevertheless, bridge areas present challenges, as without operator assistance, distortion is usually introduced. In order to avoid "collapsed" bridges in the orthoimage, a precise DBM (Digital Bridge Model) is needed. The work presented in this paper is focused on a new method to generate the precise DBM based on fusing LiDAR data and aerial imagery. Fusing LiDAR data and aerial imagery to create precise digital man-made object models has been widely investigated in the photogrammetric community. However, most methodologies utilize the LiDAR elevation information only; the LiDAR intensity information is mostly ignored. The main difference of our method from others is that it is based on the co-registration between the LiDAR intensity image and the aerial image.

1.1 Literature Review

Although LiDAR data can directly provide accurate and dense surface measurements, it cannot well determine man-made object boundaries due to the irregular and sparse nature of LiDAR points at break lines. On the other hand, man-made object boundaries can be well extracted from aerial imagery. Fusing clean and smooth boundaries from aerial imagery with LiDAR elevation data has become an efficient way to create digital man-made object models (Kim et al., 2008; Rottensteiner and Briese, 2002; Sampath and Shan, 2007; Vosselman, 1999). Reviewing related publications, most of the research is focused on precise digital building modelling. The main idea is to extract 2D outlines of buildings from aerial images, project them to the 3D LiDAR data space via the collinearity equations, and subsequently compare them with 3D linear features extracted from LiDAR data to obtain smooth and precise digital building models. Those methods show good results for the automated generation of polyhedral building models for complex structures (Kim and Habib, 2009; Wu et al., 2011); however, their implementation can be complex and the computational load heavy. Most earlier research has focused on either generating a DBM (Digital Bridge Model) based on analysis of LiDAR point cloud profiles (Sithole and Vosselman, 2006) or extracting bridge boundaries from the DTM (Goepfert and Rottensteiner, 2010). Nevertheless, the determination of man-made object boundaries in LiDAR data is rather complex. If the co-registration between the LiDAR intensity image and other high resolution imagery can be established, it is not necessary to generate a perfect DBM in the LiDAR data domain, as the LiDAR-derived coarse DBM can be refined by introducing smooth bridge boundaries extracted from the high resolution image. This is the concept of the proposed approach.

1.2 Proposed Method

In this paper, the man-made objects to be modelled are bridges whose shape is either straight or curved. The new idea is to transfer smooth bridge boundaries extracted from the aerial image to the LiDAR intensity image via the co-registration between them. Ultimately, the accurate DBM is finalized in the LiDAR data domain by fitting smooth boundaries to the coarse boundaries derived from LiDAR data. The co-registration method was first proposed in our earlier research work on co-registration between satellite and LiDAR intensity images (Toth et al., 2011).
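As a minimal sketch of this boundary transfer, one can assume the established co-registration is expressed as a planar homography; the paper's actual coarse-to-fine pipeline combines LPFFT, Harris corners, PDF mean-shift matching, and RANSAC, and the helper names below are hypothetical.

import cv2
import numpy as np

# Hypothetical sketch: estimate a 2D homography H (aerial image ->
# LiDAR intensity image) from matched corner pairs with RANSAC, then
# map the smooth aerial-image boundary points into the LiDAR data
# domain, where they refine the coarse DBM outline.
def estimate_registration(src_pts, dst_pts):
    H, inliers = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts),
                                    method=cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
    return H, inliers

def transfer_boundary(boundary_xy, H):
    pts = np.float32(boundary_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)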