lem, the surface can be considered to be known. Hence, the problem of geometric distortions of features can be well controlled and matching becomes feasible, assuming that reasonable approximations of the exterior orientation parameters are available. In fact, matching in object space, using iteratively warped images, becomes the method of choice. This matching procedure also offers the opportunity to match multiple images. This opens the way to a detailed reconstruction of complex surfaces from multiple images.
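To make the idea of matching in object space concrete, the following Python sketch warps image patches onto an object-space grid via a simplified collinearity projection and compares them with a normalized cross-correlation score. It is illustrative only; the function names, the pinhole camera model, and the sampling scheme are our assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def project(points_xyz, R, C, focal_px, pp):
        # Simplified collinearity equations (pinhole model; sign conventions vary):
        # rotate object points into the camera frame, then divide by depth.
        d = (points_xyz - C) @ R.T
        col = focal_px * d[:, 0] / d[:, 2] + pp[0]
        row = focal_px * d[:, 1] / d[:, 2] + pp[1]
        return np.stack([row, col])          # (row, col) order for image sampling

    def warp_patch(image, grid_xyz, R, C, focal_px, pp):
        # Resample the aerial image onto an object-space grid (the "warped image").
        rc = project(grid_xyz.reshape(-1, 3), R, C, focal_px, pp)
        return map_coordinates(image, rc, order=1).reshape(grid_xyz.shape[:2])

    def ncc(a, b):
        # Normalized cross-correlation between two warped patches.
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

Because every image is resampled onto the same object-space grid, any number of images can be compared against one another, which is what makes the multi-image case straightforward; iterating such warps while refining the surface heights is the essence of object-space matching.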
Orientation with surface patches  Originally, the idea of using surfaces in the form of DEMs for orienting models was suggested by Ebner and Strunz (1988). The approach is based on minimizing the z differences between the model points and points in object space found by interpolating the DEM. The differences are minimized by determining the absolute orientation parameters of the model. Schenk (1999a) modified the approach by minimizing the distances between corresponding surface elements. We propose the latter method for fusing aerial images with LIDAR.
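A minimal sketch of the second idea, assuming planar control patches and small rotation angles (a linearized point-to-plane adjustment; this is our simplification, not Schenk's exact formulation):

    import numpy as np

    def orient_to_patches(points, patch_normals, patch_points):
        # One linearized least-squares step: estimate a small rotation vector w and
        # a translation t that minimize the point-to-plane distances between model
        # points and their corresponding control patches.
        A, b = [], []
        for p, n, q in zip(points, patch_normals, patch_points):
            # residual n . (p + w x p + t - q)  ->  (p x n) . w + n . t = n . (q - p)
            A.append(np.concatenate([np.cross(p, n), n]))
            b.append(n @ (q - p))
        x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        return x[:3], x[3:]          # small-angle rotation vector, translation

In practice this step would be iterated; the z-difference variant of Ebner and Strunz roughly corresponds to constraining only the vertical component of the distances.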
The advantage of using patches as sensor-invariant features is the relatively simple process to extract them from laser points. The fitting error of the laser points to a mathematical surface serves as a quality-control measure. In contrast to the previous methods, no planimetric features need to be extracted. Patches are also more robust than features derived from them, for example 3D edges.
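As an illustration of how simple the patch extraction and its quality measure are, a least-squares plane fit to a cluster of laser points might look as follows (a sketch under the assumption of planar patches; the names are ours):

    import numpy as np

    def fit_patch(laser_points):
        # Fit a plane through a cluster of laser points (N x 3 array); the RMS of
        # the fitting residuals serves as the quality-control measure.
        centroid = laser_points.mean(axis=0)
        _, _, vt = np.linalg.svd(laser_points - centroid)
        normal = vt[-1]                               # direction of smallest spread
        residuals = (laser_points - centroid) @ normal
        rms = np.sqrt(np.mean(residuals ** 2))
        return centroid, normal, rms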
Unfortunately, the situation is quite different for determining surface patches in aerial images. Although theoretically possible by texture segmentation and gradient analysis, it is very unlikely that surface information can be extracted from single images. Hence, image matching (fusion) is required. Quite often, surface patches have uniform reflectance properties. Thus, the grey level distribution of conjugate image patches is likely to be uniform too, precluding both area-based and feature-based matching methods. The most promising approach is to infer surface patches from surface boundaries (matched edges).
As shown by Jaw (1999), the concept of using control surfaces for orienting stereo models can be extended to block adjustment. In analogy to tie points, the author introduces tie surfaces. To connect adjacent models, the only condition is to measure points on the same surface. However, the points do not need to be identical; this is clearly a major advantage for automatic aerial triangulation.
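The tie-surface condition can be written as one observation per measured point: every point, from whichever model, must lie on the common (unknown) surface. A schematic residual function, assuming planar tie surfaces in n . X = d form (our simplification of Jaw's formulation):

    import numpy as np

    def tie_surface_residuals(points_model_a, points_model_b, normal, d):
        # Points measured on the same tie surface in adjacent models need not be
        # identical; each one only has to satisfy the plane equation n . X = d.
        pts = np.vstack([points_model_a, points_model_b])
        return pts @ normal - d                       # one residual per point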
Alternative solution with range images  A popular way to deal with laser points is to convert them to range images. This is not only advantageous for visualizing 3D laser point clouds, but it also allows a plethora of image processing algorithms to operate on range images. For example, an edge operator will find edges in a range image, suggesting that the 3D edges used as sensor-invariant features be determined from range images. At first sight, this is very appealing since it appears much simpler than the method described in Section 2. Let us take a closer look before making a final judgment, however.
Generating range images entails the interpolation of the irregularly spaced laser points to a grid and the conversion of elevations to grey values. While the conversion is straightforward, the interpolation deserves closer attention. Our goal is to detect edges. Edges in range images correspond to rapid changes of elevation in the direction across the edge. This is precisely where we must expect large interpolation errors. It follows that the localization of edges in range images may not be accurate enough for precise fusion.
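The following sketch makes the two steps and the problem explicit: scattered laser points are interpolated to a grid, and a gradient operator is applied to the resulting range image. Near breaklines the interpolated elevations are least reliable, so the detected edge positions inherit the interpolation error. The function name, the 1.3 m default cell size, and the slope threshold are illustrative assumptions, not part of the paper.

    import numpy as np
    from scipy.interpolate import griddata
    from scipy import ndimage

    def range_image_edges(xyz, cell=1.3, slope_threshold=1.0):
        # Interpolate irregularly spaced laser points (N x 3) to a regular grid ...
        x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        gx, gy = np.meshgrid(np.arange(x.min(), x.max(), cell),
                             np.arange(y.min(), y.max(), cell))
        dem = griddata((x, y), z, (gx, gy), method='linear')
        # ... and mark pixels with a steep elevation gradient as edge candidates.
        dz_dy = ndimage.sobel(dem, axis=0) / (8.0 * cell)   # approximate slope
        dz_dx = ndimage.sobel(dem, axis=1) / (8.0 * cell)
        slope = np.hypot(dz_dx, dz_dy)
        return dem, slope > slope_threshold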
  
Figure 2: Edge detection performed on a range image. The edges are affected by interpolation errors and usually not suitable for sharp boundary delineation.
Fig. 2 depicts a sub-image with a fairly large building (see also Fig. 3b). The DEM grid size of 1.3 meters (the average distance between the irregularly distributed laser points) leads to a relatively blocky appearance of the building and to jagged, fragmented edges. Moreover, the detected edges are predominantly horizontal; it is quite difficult to detect non-horizontal edges (boundaries) in object space from range images. Finally, comparing the edges obtained from the range image with those determined by intersecting adjacent planar surface patches makes it clear that their use for fusing aerial images with LIDAR is problematic.
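For comparison, 3D edges determined by intersecting adjacent planar surface patches can be computed directly from the fitted patches, without any resampling to a grid. A small sketch of that intersection (planes given in n . X = d form; the construction of the point on the line is one of several possible choices):

    import numpy as np

    def patch_intersection_edge(n1, d1, n2, d2):
        # Intersection line of two adjacent planar patches: its direction is the
        # cross product of the normals; a point on the line is obtained by adding
        # a third equation that fixes the free parameter along the line.
        direction = np.cross(n1, n2)
        A = np.vstack([n1, n2, direction])
        b = np.array([d1, d2, 0.0])
        point = np.linalg.solve(A, b)
        return point, direction / np.linalg.norm(direction)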
4. FUSION OF AERIAL IMAGERY WITH LIDAR DATA 
After having established a common reference frame for LIDAR and aerial imagery, we are now in a position to fuse features extracted from the two sensors into a surface description that is richer in information than would be possible with either sensor alone. We have strongly argued for an explicit description to aid subsequent processes such as object recognition, surface analysis, bare-earth computations, and even the generation of orthophotos. Since these applications may require different surface descriptions, varying in the surface properties (quality and quantity), an important question arises: is there a general description, suitable for applications that may not even be known at the time of surface reconstruction?
Surfaces, that is, their explicit descriptions, play an important role in spatial reasoning, a process that occurs to a varying degree in all applications. We consider the surface properties listed in Table 2 essential elements that are, by and large, application dependent. In a demand-driven implementation, additional properties or more detailed information can be obtained from the sensory input data upon
	        