3 EXPERIMENTS
The proposed workflow is demonstrated using two example video datasets. The first dataset (UAV) was obtained from a Microdrone (Microdrones, 2008) platform and captured near the "Drachenfels" close to the city of Bonn (Förstner and Steffen, 2007). The second dataset (FLI-MAP video) was captured from a helicopter during a LIDAR flight over Enschede (Fugro, 2008); see Table 1 for some parameters.
Parameter                    UAV          FLI-MAP
Flying height H (m)          30           275
Image scale 1 : m_b          1:1,500      1:50,000
Frame size (pix)             848 x 480    752 x 582
Pixel size (µm)              12           8.6
Frame rate (Hz)              30           25
Approx. base length b (m)    0.1          1
Length of sequence (img)     280          150

Table 1: Some parameters from the datasets.
In that table, the base length refers to the distance between two consecutive frames, and the length of the sequence refers to the number of images used for the examples. Noteworthy is the small image scale of the FLI-MAP video; the calibrated focal length of this video device is only 6 mm. From this geometric set-up no highly accurate forward intersection can be expected; see the section on the resulting point cloud below. Some undistorted images from both sequences are shown in Figure 4.
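For orientation, the ground sampling distance implied by these parameters can be approximated as the pixel size multiplied by the image scale number. The short Python sketch below evaluates this with the values from Table 1; it is a rough consistency check added here for illustration and is not part of the original processing chain.

def ground_sampling_distance(pixel_size_um, scale_number):
    # Approximate ground sampling distance in metres:
    # pixel size (converted to metres) times the image scale number.
    return pixel_size_um * 1e-6 * scale_number

print(ground_sampling_distance(12.0, 1500))    # UAV:     approx. 0.018 m
print(ground_sampling_distance(8.6, 50000))    # FLI-MAP: approx. 0.43 m

The large ground pixel size of the FLI-MAP video is a direct consequence of the small image scale noted above.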
3.1 Results
3.1.1 Super resolution images  An example of a super resolution image is taken from the UAV dataset. The chosen target scale factor is 1.5. In Figure 5 the grey value profiles (red channel) across the building's roof edge are shown. The left image shows in its upper area a part of the original image, scaled by a factor of 1.5 (linear interpolation applied). The line across the edges indicates the location of the grey value profile shown below the image. The SRI, computed from the mean value of corresponding points, is shown in the right part of Figure 5, including the grey value profile captured at the same image position as in the original image.
In general one can see that the SRI appears somewhat sharper than the original image: the tiles on the roof are less smeared than in the original. The profile supports this visual impression; especially in the edge region more detail is visible. As an example, two points in the profile graph are marked by black arrows. Arrow no. 1 points to the quite salient point in the profile indicating the position of the steep edge where the light grey turns to dark grey. The corresponding area in the profile of the original image is smoother. Arrow no. 2 points to the edge at the eave of the roof, where the tiles show a lighter colour compared to the red colour of the overall roof area. In the computed SRI this edge is clearly present, whereas it is not visible in the original image.
The SRI as computed from the respective median value of corresponding pixels is not shown here, because no significant difference can be observed compared to the SRI computed from the mean value. This can be explained by the use of solely redundant matches: no gross errors are expected which might influence the mean-value SRI, and thus the robust values from the median computation are close to the mean.
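As an illustration of this aggregation step, a minimal sketch is given below: grey values of corresponding points are accumulated in a grid enlarged by the target scale factor and then reduced per cell by mean and median. The input format (sub-pixel positions in the reference image plus one observed grey value per match) is an assumption made here for illustration, not the actual data structure of the implementation.

import numpy as np

def build_sri(ref_shape, correspondences, scale=1.5):
    # Accumulate observed grey values of corresponding points into a grid
    # enlarged by the target scale factor, then reduce each cell by mean
    # and median.
    # correspondences: iterable of (row, col, grey), with (row, col) being
    # sub-pixel positions in the reference image (hypothetical input format).
    h, w = int(ref_shape[0] * scale), int(ref_shape[1] * scale)
    samples = [[[] for _ in range(w)] for _ in range(h)]
    for row, col, grey in correspondences:
        r, c = int(round(row * scale)), int(round(col * scale))
        if 0 <= r < h and 0 <= c < w:
            samples[r][c].append(grey)
    sri_mean = np.zeros((h, w))
    sri_median = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            if samples[r][c]:
                sri_mean[r, c] = np.mean(samples[r][c])
                sri_median[r, c] = np.median(samples[r][c])
    return sri_mean, sri_median

With only redundant matches as input, no gross errors enter the per-cell samples, and the mean and median reductions give nearly identical results, which is consistent with the observation above.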
3.2 3D point cloud
In order to evaluate the expected accuracy of the forward intersection, a theoretical approximation for the height accuracy is made first. In the given examples, especially in the FLI-MAP video, the height is the critical component.
Generally, given an approximated stereo normal case, the height difference between two points is estimated from their x-parallaxes $p_{x,1}$ and $p_{x,2}$ as

$$\Delta h = c \cdot b \left( \frac{1}{p_{x,1}} - \frac{1}{p_{x,2}} \right),$$

with calibrated focal length $c$ and base length $b$. If only the uncertainty in the parallax measurement $s_{p_x}$ is considered, the accuracy of a height measurement is derived from the partial derivative with respect to $p_x$:

$$s_h = \left| \frac{\partial h}{\partial p_x} \right| s_{p_x} = \frac{c \cdot b}{p_x^{2}}\, s_{p_x} \approx \frac{H^{2}}{c \cdot b}\, s_{p_x}.$$
In the case at hand more than two rays are intersected, and thus the expected accuracy and reliability are higher; nevertheless, the approximation reasonably reflects the quality of the forward intersection.
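To give a feeling for the magnitudes involved, the approximation above can be evaluated with the values from Table 1. In the sketch below, the assumed parallax measurement accuracy of one pixel and the UAV focal length of roughly 20 mm (inferred here from flying height and image scale) are illustrative assumptions, not values taken from the calibration.

def height_accuracy(H, c, b, s_px):
    # Approximate height accuracy s_h = H^2 / (c * b) * s_px, all values in metres.
    return H**2 / (c * b) * s_px

print(height_accuracy(H=30.0,  c=0.020, b=0.1, s_px=12.0e-6))   # UAV:     approx. 5 m
print(height_accuracy(H=275.0, c=0.006, b=1.0, s_px=8.6e-6))    # FLI-MAP: approx. 110 m

These figures refer to a single image pair at the base length of consecutive frames; as noted above, intersecting more than two rays improves both accuracy and reliability.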