…detected in the LiDAR point cloud. To generate the LiDAR intensity image, the intensity values of the points falling in a pixel are averaged, with the image coordinates of each point computed by the following 2D transformation:

$$x = \frac{E - E_o}{GSD}, \qquad y = \frac{N_o - N}{GSD} \tag{1}$$

where (E, N) is the point Easting and Northing, the image coordinate system origin is at the upper left corner, $(E_o, N_o)$ is the clipping Easting and Northing, and GSD is the pixel size of the intensity image.
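For illustration, a minimal Python sketch of this rasterization under the above definitions (the function name, the per-pixel averaging of intensities, and the gsd variable are our assumptions, not specifics from the paper):

```python
import numpy as np

def rasterize_intensity(E, N, intensity, E0, N0, gsd, width, height):
    """Map LiDAR points (Easting, Northing) to pixels via Eq. (1) and
    average the intensity values of the points falling in each pixel."""
    # Eq. (1): ground coordinates -> image coordinates (origin upper left)
    col = np.floor((E - E0) / gsd).astype(int)
    row = np.floor((N0 - N) / gsd).astype(int)

    # discard points outside the clipped image extent
    ok = (col >= 0) & (col < width) & (row >= 0) & (row < height)
    col, row, intensity = col[ok], row[ok], intensity[ok]

    # accumulate per-pixel sums and counts, then average
    img_sum = np.zeros((height, width))
    img_cnt = np.zeros((height, width))
    np.add.at(img_sum, (row, col), intensity)
    np.add.at(img_cnt, (row, col), 1)
    return np.divide(img_sum, img_cnt,
                     out=np.zeros_like(img_sum), where=img_cnt > 0)
```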
The LiDAR point cloud used to generate the intensity image should be filtered based on elevation, so that only points at a similar height as the bridge surface are used for filtering. Vehicles on the bridge block the laser pulses, and thus, those vehicles create void areas in the point cloud, and outliers also exist; see Figure 3 (a). In order to remove them, a statistical outlier removal based on each point's mean distance to its neighbours is applied to the bridge points. For each point, its closest k points are found, and the mean distances obey a Gaussian distribution; points with mean distances outside an interval defined by the global mean and standard deviation are removed from the point cloud; see Figure 3 (a) and (b).
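A compact sketch of this statistical outlier removal (the neighbourhood size k and the 2-sigma rejection threshold are illustrative assumptions; the paper does not give specific values):

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Reject points whose mean distance to their k nearest neighbours
    falls outside mean +/- std_ratio * sigma of all such distances."""
    tree = cKDTree(points)
    # query k+1 neighbours: the nearest hit is the point itself
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    mu, sigma = mean_d.mean(), mean_d.std()
    keep = np.abs(mean_d - mu) <= std_ratio * sigma
    return points[keep]
```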
Figure 3. Bridge surface before (a) and after (b) outlier removal
Next, a robust plane estimation is performed to find the robust 3D plane of the bridge surface, and the elevation of the bridge points is recomputed based on the estimated 3D plane, since the bridge surface should be perfectly on the plane. Subsequently, the concave hull boundary points are determined on the refined bridge points, as the bridge outline is a concave shape. The bridge boundary points are also determined based on checking the elevation value with respect to their neighbourhood in a circular searching area, since the elevation difference should be large for boundary points and small for non-boundary points. The concave hull polygon (white) connecting those concave hull boundary points (yellow) and the bridge boundary points (blue) are illustrated in Figure 4.
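The paper does not name the robust estimator; a common choice for such a robust 3D plane fit is RANSAC, sketched below under that assumption (the iteration count and inlier threshold are illustrative):

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.05,
                 rng=np.random.default_rng(0)):
    """Fit a 3D plane n.x + d = 0 to points, robust to outliers."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # plane hypothesis from 3 randomly sampled points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # a least-squares refit on the inliers would refine the plane
    return best_inliers
```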
3.2 Co-registration
In our earlier research on multiple-domain imagery co-registration, a new approach based on LPFFT (Reddy and Chatterji, 1996; Wolberg and Zokai, 2000), Harris Corners, PDF mean-shift matching (Comaniciu et al., 2003) and RANSAC (Fischler and Bolles, 1981) affine transformation estimation was proposed (Toth et al., 2011). With a limited dataset, the proposed method achieved promising results, and thus, it is applied to estimate the geometric transformation, assumed to be an affine transformation, between the LiDAR intensity and aerial images. Figure 5 shows the workflow of the proposed co-registration approach.
Figure 4. Concave hull boundary points (yellow) and bridge
boundary points (blue)
[Figure 5 flowchart: input image pair → LPFFT similarity estimation → similarity validation based on Monte Carlo → regional feature generation → regional feature matching → affine transformation estimation (RANSAC)]
Figure 5. Workflow of the affine transformation estimation
First, the similarity transformation, regarded as the coarse geometric transformation, is estimated via a standard LPFFT registration method. Next is the similarity validation step, where the scale and rotation parameters are validated based on a Monte Carlo process; more precisely, a Monte Carlo test is performed for a set of scale and rotation values computed from the originally estimated parameters $(s_0, \varphi_0)$ via the following equations:
$$s = \{\, s_i \mid s_i = s_0 + i \cdot \delta s \,\}, \qquad \varphi = \{\, \varphi_i \mid \varphi_i = \varphi_0 + i \cdot \delta\varphi \,\} \tag{2}$$
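A sketch of the Monte Carlo test over the candidate set of Eq. (2) (the step sizes, candidate range, and the simplified global NCC score are our assumptions; the paper validates by checking that the maximum NCC stays high for all candidates around the correct parameters):

```python
import numpy as np
from scipy import ndimage

def candidate_set(x0, delta, n=3):
    """Values x0 + i*delta for i = -n..n, as in Eq. (2)."""
    return [x0 + i * delta for i in range(-n, n + 1)]

def ncc_score(a, b):
    """NCC of the common central crop of two images."""
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    a, b = a[:h, :w] - a[:h, :w].mean(), b[:h, :w] - b[:h, :w].mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def monte_carlo_validate(img_a, img_b, s0, phi0, ds=0.05, dphi=1.0):
    """Warp img_b with every (scale, rotation) candidate of Eq. (2)
    and keep the combination with the highest NCC against img_a."""
    best, best_score = (s0, phi0), -np.inf
    for s in candidate_set(s0, ds):
        for phi in candidate_set(phi0, dphi):
            warped = ndimage.rotate(ndimage.zoom(img_b, s), phi,
                                    reshape=False)
            score = ncc_score(img_a, warped)
            if score > best_score:
                best, best_score = (s, phi), score
    return best, best_score
```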
The second image is transformed using each scale and rotation combination in the set. If the estimated scale and rotation parameters are correct, the images should have comparable orientation and scale. Then, FFT-accelerated NCC (Normalized Cross Correlation), an efficient NCC computation method, can be used to estimate the translation parameters for each image pair by searching for the maximum NCC value. The maximum NCC values of all image pairs should remain at a significantly high level, which means small scale and rotation changes around the correct scale and rotation still lead to a high NCC value. If the estimated scale and rotation are wrong, the maximum NCC values of all image pairs should be small.
Figure 6 (a) shows the typical NCC surface based on the wrong $(s_0, \varphi_0)$ and (b) based on the correct $(s_0, \varphi_0)$ parameters. If the EOPs (Exterior Orientation Parameters) of the aerial image are available, it is also possible to introduce a rotation angle constraint to improve the performance of the similarity transformation estimation. In our experience, the scale and rotation parameters can be reliably estimated based on the orientation angle constraint and the Monte Carlo validation. The translation parameters, however, may not be easily estimated by LPFFT. Therefore, the translation parameters are estimated through the edge NCC matching method.
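For reference, a sketch of FFT-accelerated NCC for recovering the translation between an image and a smaller reference patch (this normalizes only by the template energy; full NCC additionally normalizes by the local image energy via running sums):

```python
import numpy as np
from scipy.signal import fftconvolve

def translation_by_fft_ncc(img, tmpl):
    """Locate the (row, col) offset of tmpl inside img at the peak of an
    FFT-accelerated zero-mean cross-correlation surface."""
    img0 = img - img.mean()
    tmpl0 = tmpl - tmpl.mean()
    # correlation = convolution with the flipped template
    cc = fftconvolve(img0, tmpl0[::-1, ::-1], mode='valid')
    # normalize by the template energy (a simplification of full NCC)
    cc /= np.linalg.norm(tmpl0) + 1e-12
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    return peak, cc[peak]  # top-left offset of the best match, NCC value
```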
Figure 6. NCC value surface; wrong scale and rotation
parameter (a) and correct scale and rotation parameter (b)
The second image is transformed using the estimated scale and
rotation angle, so the image pair should have similar orientation
and scale. A number of rectangular reference patches are
generated in the first image, and subsequently, those reference
patches are matched in the second image. Thus, the translation
parameters can be computed as the center point image
coordinate differences between the reference patch and matched
patch. The correct translation is determined by a statistical analysis of all computed translations: the translations with the highest frequency are accepted as the correct translation; see Figure 7.
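A minimal sketch of this frequency-based selection (the 2-pixel bin size used to group similar translations is our assumption):

```python
import numpy as np
from collections import Counter

def select_translation(translations, bin_size=2.0):
    """Pick the most frequent translation among per-patch estimates.

    translations : (n, 2) array of (dx, dy) offsets, one per patch.
    bin_size     : quantization step in pixels grouping similar offsets.
    """
    offsets = np.asarray(translations)
    bins = np.round(offsets / bin_size).astype(int)
    mode_bin, _ = Counter(map(tuple, bins)).most_common(1)[0]
    # average the raw translations that fall into the winning bin
    members = np.all(bins == mode_bin, axis=1)
    return offsets[members].mean(axis=0)
```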
Figure 7. Statistical analysis of the translations computed from the reference patches (a) and (b)
The Harris Corners detector is used to extract feature points, and circular regions centred on strong HC features are created in both images, including the imported locations from the other image. Next, the scale- and rotation-invariant PDF descriptor is used to describe each circular feature region. The PDF is represented as a 256-dimensional feature descriptor. The similarity between two feature descriptors is computed via the Bhattacharyya Coefficient, which is the cosine of the angle between the two feature descriptors, defined as:
$$\rho(p, q) = \sum_{u=1}^{m} \sqrt{p_u \, q_u} = \cos\theta \geq 0 \tag{3}$$
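A minimal sketch of the descriptor comparison via Eq. (3) (the histogram-based construction of the 256-bin PDF over a circular region and the 8-bit intensity range are our simplifications; the paper's PDF descriptor is built to be scale- and rotation-invariant):

```python
import numpy as np

def circular_pdf_descriptor(img, cx, cy, radius, bins=256):
    """256-bin normalized intensity histogram over a circular region,
    assuming 8-bit image intensities."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    hist, _ = np.histogram(img[mask], bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of Eq. (3): the cosine of the angle
    between sqrt(p) and sqrt(q); 1 for identical PDFs, 0 for disjoint."""
    return float(np.sum(np.sqrt(p * q)))
```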