Figure 5. Extract from RCD30 RGB imagery showing the same location as Figure 6 (the mapping vehicle was driving in the middle lane, with manholes visible on either side)
Figure 6. Mono client with ground-based imagery and overlaid 3d vector data (left), map interface (bottom right) and feature editor (top right)
6.3 3d point measurement accuracy comparison
In order to get a first assessment of the interactive 3d point
measurement accuracies within the ground-based and the
airborne stereo imagery, 40 well-defined road markings were
interactively measured in both data sets. Based on earlier
investigations and experience, the a priori point measurement
accuracy should be in the range of:
- approx. 3-4 cm in X and Y and 2-3 cm in Z for the ground-based stereovision data (Burkhard et al., 2012), and
- 0.5-1.0 pixels, i.e. 3-5 cm, in X and Y and 0.1-0.2 ‰ of the flying height, or 1-2 pixels, i.e. 4-10 cm, in Z for the airborne imagery.
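The pixel-level figures for the airborne imagery translate into ground units via the ground sampling distance (GSD); the GSD is not stated in this excerpt, so a value of roughly 5 cm per pixel is assumed below purely to make the conversion explicit:

\[
\sigma_{XY} \approx s_{px} \cdot \mathrm{GSD} \approx (0.5\text{-}1.0)\,\mathrm{px} \cdot 5\,\mathrm{cm/px} \approx 3\text{-}5\,\mathrm{cm},
\qquad
\sigma_{Z} \approx s_{px,Z} \cdot \mathrm{GSD} \approx (1\text{-}2)\,\mathrm{px} \cdot 5\,\mathrm{cm/px} \approx 5\text{-}10\,\mathrm{cm}
\]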
The analysis of the coordinate differences for the 40 points yielded standard deviations of the differences of approx. 5 cm in X and Y and better than 10 cm in the vertical direction. Assuming similar planimetric accuracies for both systems, the difference standard deviation splits evenly between them, which leads to a point coordinate accuracy of approx. 3.5 cm in X and Y for each system. The standard deviation of the vertical differences is also consistent with the a priori values for the Z component.
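The 3.5 cm figure follows from standard error propagation for the difference of two independent measurements of (assumed) equal accuracy:

\[
\sigma_{\Delta XY}^2 = \sigma_{ground}^2 + \sigma_{air}^2
\quad\Rightarrow\quad
\sigma_{ground} = \sigma_{air} \approx \frac{\sigma_{\Delta XY}}{\sqrt{2}} = \frac{5\,\mathrm{cm}}{\sqrt{2}} \approx 3.5\,\mathrm{cm}
\]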
These first investigations also revealed some systematic differences between the ground-based and airborne coordinate determination in the order of 10 cm in planimetry and 10 cm in height for each driving direction, i.e. for each ground-based trajectory. This is consistent with the expected direct georeferencing accuracy of the ground-based system in the challenging urban environment of the tests. In subsequent experiments, the georeferencing approach described in section 5.1 will be modified, and very likely improved, by co-registering the ground-based imagery to the airborne imagery using the integrated georeferencing approach described in Eugster et al. (2012).
6.4 Accuracy of extracted 3d point clouds
Dense 3d point clouds were extracted for both the ground-based
and airborne imagery using the dense matching algorithms and
tools discussed in sections 3.2 and 4.3.2. Figure 7 shows the left
part of a raw depth map extracted from the corresponding stereo
pair and overlaid with the left stereo partner. A postprocessed
version of this depth map is also used for 3d monoplotting (see
Figure 6). Textured 3d point clouds can easily be derived from
these depth maps by projecting the image and depth information
into object space.
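For illustration, this projection from a depth map into a textured 3d point cloud can be sketched as follows (a minimal example, not the project's actual implementation; the pinhole intrinsics fx, fy, cx, cy and the camera-to-object pose R, t are assumed inputs):

    import numpy as np

    def depth_map_to_point_cloud(depth, image, fx, fy, cx, cy, R, t):
        # depth: (H, W) depths in metres, 0 where no match was found
        # image: (H, W, 3) RGB image registered to the depth map
        # fx, fy, cx, cy: pinhole intrinsics of the stereo-normal camera [px]
        # R (3x3), t (3,): rigid transform from camera to object space
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0

        z = depth[valid]
        x = (u[valid] - cx) / fx * z          # back-projection into camera frame
        y = (v[valid] - cy) / fy * z
        pts_cam = np.stack([x, y, z], axis=1)

        pts_obj = pts_cam @ R.T + t           # transform into object space
        colors = image[valid]                 # per-point texture from the image
        return pts_obj, colors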
Figure 7. Left stereo normal image with overlaid dense depth map (shown in the left half of the image)
An initial accuracy evaluation of the extracted 3d point clouds was performed by using four planar patches on the road surface as reference surfaces. These test areas were extracted from the following 3d point clouds:
- 3d point cloud derived by projecting the depth map of a single stereo frame into object space (ground-based raw)
- 3d point cloud derived by fusing the depth maps of multiple stereo frames and by projecting the interpolated depth map into object space (ground-based interpolated)
- 3d point cloud derived from an airborne stereo image pair (airborne)
Table 1 shows the typical point densities of 3d point clouds extracted from the ground-based and the airborne imagery. The table also shows the respective standard deviations and maximum differences from a plane fitted through the point clouds covering the four test patches, each with an area of approx. 22 m². The preliminary results of the ground-based and airborne 3d point cloud extractions yield good standard deviations in the order of 1 pixel or less, i.e. < 1 cm in the ground-based and < 5 cm in the airborne case.
                                  ground-based   ground-based   airborne
                                  raw            interpolated
avg. point density [pts/m²]       1297           3326           109
avg. standard deviation [m]       0.009          0.008          0.045
max. difference [m]               0.052          0.045          0.167
Table 1. Typical point densities of the different point cloud data sets together with their accuracies (standard deviations and maximum differences from a plane fitted to the respective point cloud)
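The plane-based check reported in Table 1 can be reproduced along these lines (a minimal sketch assuming a least-squares plane fit with orthogonal residuals; patch extraction and data handling are omitted):

    import numpy as np

    def evaluate_patch(points, patch_area_m2=22.0):
        # points: (N, 3) X, Y, Z coordinates of one roughly planar test patch
        centroid = points.mean(axis=0)
        centered = points - centroid

        # plane normal = right singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]

        # signed orthogonal distances of the points from the fitted plane
        residuals = centered @ normal

        density = len(points) / patch_area_m2   # avg. point density [pts/m^2]
        return density, residuals.std(ddof=1), np.abs(residuals).max()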
7. CONCLUSIONS AND OUTLOOK
The combination of high-resolution airborne and ground-based
imagery and their integration into predominantly image-based
3d modelling and 3d geoinformation services provides a
powerful solution for future road infrastructure management.