can extract 3D surface patches from laser points. Such 
patches may be planar or higher order surfaces, depend- 
ing on the scene. In built-up areas, usually many planar 
surface patches exist, corresponding to man-made objects. 
After surface patches have been extracted, a grouping pro- 
cess establishes spatial relationships. This is followed by 
forming hypotheses as to which patches may belong to the 
same object. Adjacent patches are then intersected if their 
surface normals are different enough to guarantee a geometrically meaningful solution. In fact, if the adjacency hypothesis is correct, then the intersection is a 3D boundary of an object. Lee (2002a) treats the steps of extracting and grouping patches as a perceptual organization problem.
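To make the intersection step concrete, the following Python sketch intersects two adjacent planar patches given in Hessian normal form (n . x = d) and returns the resulting 3D edge. The function name, the angle threshold, and the plane parameterization are illustrative assumptions, not part of the method described above.

import numpy as np

def intersect_patches(n1, d1, n2, d2, min_angle_deg=10.0):
    # Intersect two planar patches n . x = d and return (point, direction)
    # of the 3D edge, or None if the normals are too similar for a stable,
    # geometrically meaningful solution (threshold is an assumed value).
    n1 = np.asarray(n1, float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, float) / np.linalg.norm(n2)
    angle = np.degrees(np.arccos(np.clip(abs(n1 @ n2), 0.0, 1.0)))
    if angle < min_angle_deg:
        return None
    direction = np.cross(n1, n2)
    direction /= np.linalg.norm(direction)
    # Point on the edge: satisfies both plane equations and has zero
    # component along the edge direction (minimum-norm solution).
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction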
Table 3: Multi-stage feature extraction from LIDAR and
aerial images.

              LIDAR             aerial imagery
  raw data    3D point cloud    pixels

Let us now examine the features that can be extracted from 
images. The first extraction level comprises edges. They 
correspond to rapid changes in grey levels in the direction across the edge. Most of the time, such changes result from sudden changes in the reflection properties of the surface; shadows and markings are examples. More importantly, object boundaries also cause edges in the images, because the two surfaces meeting at a boundary have different reflection properties. Hence we argue that some of the
2D edges obtained from aerial imagery correspond to 3D 
edges obtained from laser points. That is, edges are poten- 
tially sensor invariant features that are useful for solving the 
registration problem. Note that the 2D edges in one image 
can be matched with conjugate edges in other images. It 
is then possible to obtain 3D features in model space by 
performing a relative orientation with linear features. 
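As a minimal illustration of this first extraction level, the Python sketch below flags pixels with a rapid grey-level change by thresholding the gradient magnitude. A practical detector (e.g. Canny) would add smoothing and edge linking; the threshold value and function name are illustrative.

import numpy as np

def gradient_edges(image, threshold=30.0):
    # Flag pixels whose grey levels change rapidly in some direction.
    # Central-difference gradients only; no smoothing or edge linking.
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)          # change per pixel along rows/columns
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold       # boolean edge mask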
Are 3D surface patches also sensor invariant? They cer- 
tainly correspond to some physical entities in object space,
for example a roof plane, face of a building, or a parking lot. 
Surface patches are first order features that can be extracted 
from laser point clouds relatively easily. However, it is much 
more difficult to determine them from images. One way to 
determine planar surfaces from images is to test if spatially 
related 3D edges are lying in one plane. Surface patches 
can then also be considered sensor invariant features.
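One way to carry out such a test is sketched below in Python, assuming the spatially related 3D edges are available as sampled points: a least-squares plane is fitted by SVD and the point-to-plane residuals are checked against a tolerance. The tolerance and function name are illustrative.

import numpy as np

def edges_lie_in_one_plane(points, tol=0.10):
    # points: (N, 3) array of samples taken from spatially related 3D edges.
    # Returns (is_planar, normal, centroid); tol is an assumed distance
    # threshold in object units.
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                                  # least-variance direction
    residuals = np.abs((pts - centroid) @ normal)    # point-to-plane distances
    return bool(np.all(residuals < tol)), normal, centroid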
3.3 Referencing aerial images to LIDAR data 
From the discussion in the previous section we conclude 
that 2D edges in images, 3D edges in models, and 3D sur- 
face patches are desirable features for referencing aerial 
images with LIDAR. Table 4 lists three combinations of sen- 
sor invariant features that can be used to solve the fusion 
problem. As pointed out earlier, we consider this first step as 
the problem of determining the exterior orientation of aerial 
imagery. Extracted features from LIDAR data serve as con- 
trol information. 
Table 4: Sensor invariant features for fusing aerial imagery
with LIDAR.

  LIDAR         aerial imagery    Method
  3D edges      2D edges          SPR, AT
  3D edges      3D edges          ABSOR, AT
  3D patches    3D patches        ABSOR, AT

Orientation based on 2D image edges and 3D LIDAR 
edges The first entry in Table 4 pairs 2D edges, extracted 
in individual images, with 3D edges established from LIDAR 
points. This is the classical problem of block adjustment, 
except that our fusion problem deals with linear features 
rather than points. Another distinct difference is the number 
of control features. In urban areas we can expect many 
control lines that have been determined from LIDAR data. It 
is quite conceivable to orient every image individually by 
the process of single photo resection (SPR); that is, the
problem can be solved without tie features. This offers the 
advantage that no image matching is necessary—a most 
desirable situation in view of automating the fusion process. 
Several researchers in photogrammetry and computer vi- 
sion have proposed the use of linear features in the form of
straight lines for pose estimation. Most solutions are based 
on the coplanarity model. Here, every point measured on a 
straight line in image space gives rise to a condition equation that forces the point to lie on the plane defined by the perspective center and the control line in object space; see, e.g., Habib et al. (2000). The solutions mainly differ in
how 3D straight lines are represented. 
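The Python sketch below states this condition for a single measured image point, assuming a frame camera with principal distance f, rotation matrix R, and perspective center C, and a control line given by an object-space point P and direction d. The parameterization and names are illustrative and do not follow any particular paper's notation; each measured point contributes one such equation to the resection.

import numpy as np

def coplanarity_residual(x_img, y_img, f, R, C, P, d):
    # Image ray through the measured point, rotated into object space.
    ray = R @ np.array([x_img, y_img, -f], dtype=float)
    # Normal of the plane spanned by the perspective center C and the
    # object-space control line (point P, direction d).
    normal = np.cross(np.asarray(d, float),
                      np.asarray(P, float) - np.asarray(C, float))
    # Residual is zero when the ray lies in that plane (coplanarity).
    return float(normal @ ray)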
Although straight lines are likely to be the dominant linear 
features in our fusion problem, it is desirable to general- 
ize the approach and include free-form curves. Zalmanson 
(2000) presents a solution to this problem for frame cam- 
eras. In contrast to the coplanarity model, the author em- 
ploys a modified collinearity model that is based on a para- 
metric representation of analytical curves. Thus, straight 
lines and higher-order curves are treated within the same 
representational framework. 
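A minimal sketch of the idea in Python, assuming the object-space curve is available as a callable X(t): the collinearity residuals are written for a point constrained to the curve, so the curve parameter t enters as an additional unknown per observation, while straight lines and higher-order curves differ only in the callable. This illustrates the parametric approach in general and is not Zalmanson's exact formulation.

import numpy as np

def curve_collinearity_residuals(x_img, y_img, f, R, C, curve, t):
    # Object point constrained to the parametric curve X(t).
    X = np.asarray(curve(t), dtype=float)
    # Transform into the camera frame (R rotates camera axes into object space).
    u = R.T @ (X - np.asarray(C, dtype=float))
    # Collinearity equations with principal distance f.
    x_proj = -f * u[0] / u[2]
    y_proj = -f * u[1] / u[2]
    return np.array([x_img - x_proj, y_img - y_proj])

For a straight control line, for example, the callable could simply be curve = lambda t: P0 + t * d0, with P0 and d0 the line's point and direction.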
With the recent emergence of digital line cameras it is nec- 
essary to solve the pose estimation problem for dynamic 
sensors. The traditional approach is a combination of di- 
rect orientation and interpolation of orientation parameters 
for every line. This does not solve our fusion problem be- 
cause no correspondence between extracted features from 
LIDAR and imagery is used—hence no explicit quality con- 
trol of the sensor alignment is possible. In Lee, Y. (2002b) 
the author presents a solution for estimating the pose of line
cameras using linear features. In this unique approach,
every sensor line is oriented individually, without the need 
for navigation data (GPS/INS). 
Orientation based on 3D model edges and 3D LIDAR 
edges We include fusion with 3D model edges more for completeness than for practical significance. In con-
trast to the previous method, edges must be matched be- 
tween images to obtain 3D model edges. In general, image 
matching, especially in urban areas, is considered difficult. 
We should bear in mind, however, that in our fusion prob- 