International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, Part B1, Istanbul 2004
surfaces since they correspond to laser footprints instead of 
distinct points that could be identified in imagery (Baltsavias, 
1999). Conventionally, surface-to-surface registration and 
comparison have been achieved by interpolating both datasets 
into a uniform grid. The comparison is then reduced to 
estimating the necessary shifts by analyzing the elevations at 
corresponding grid posts (Ebner and Ohlhof, 1994; Kilian et al., 
1996). Several issues can arise with this approach. First, the
interpolation to a grid introduces errors, especially for
surfaces captured over urban areas. Moreover, minimizing the
differences between surfaces along the z-direction is only valid
when dealing with horizontal planar surfaces (Habib and
Schenk, 1999). Postolov et al. (1999)
presented another approach, which works on the original 
scattered data without prior interpolation. However, the 
implementation procedure involves an interpolation of one 
surface at the location of conjugate points on the other surface. 
Additionally, the registration is based on minimizing the 
differences between the two surfaces along the z-direction. 
Schenk (1999) introduced an alternative approach, where 
distances between points of one surface along surface normals 
to locally interpolated patches of the other surface are 
minimized. Habib et al. (2001) implemented this methodology 
within a comprehensive automatic registration procedure. Such 
an approach is based on processing the photogrammetric data to 
produce object space planar patches. This might not always be
possible, since photogrammetric surfaces provide accurate
information along object space discontinuities while supplying 
almost no information along homogeneous surfaces with 
uniform texture. 
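The distinction drawn above between minimizing along the z-direction and minimizing along surface normals can be illustrated numerically: for a sloped planar patch, the vertical difference overstates the true (normal) separation. The helper function and values below are illustrative, not from the cited works.

```python
import math

def vertical_and_normal_distance(point, plane):
    """Distance from a point to the plane z = a*x + b*y + c,
    measured vertically and along the plane normal."""
    a, b, c = plane
    x, y, z = point
    dz = z - (a * x + b * y + c)             # vertical (z-direction) difference
    dn = dz / math.sqrt(a * a + b * b + 1)   # true distance along the plane normal
    return dz, dn

# A 45-degree sloped patch: z = x.  A point 1 m above the plane
# vertically is only ~0.707 m away along the normal.
dz, dn = vertical_and_normal_distance((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
print(round(dz, 3), round(dn, 3))  # 1.0 0.707
```

For horizontal patches (a = b = 0) the two measures coincide, which is exactly why z-only minimization is valid only there.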
This paper deals with alternative approaches for utilizing linear 
features derived from LIDAR data as control information for 
aligning the photogrammetric model relative to the LIDAR 
reference frame. The following section addresses the general 
methodology and mathematical models of the suggested 
approaches including the techniques adopted for extracting the 
registration primitives from photogrammetric and LIDAR data. 
The last two sections cover experimental results (using aerial 
datasets) as well as conclusions and recommendations for future 
work. 
2. METHODOLOGY 
In this paper, two approaches will be applied to incorporate 
LIDAR lines in aligning the photogrammetric model to the 
LIDAR reference frame. The first approach incorporates 
LIDAR lines as control information directly in a 
photogrammetric triangulation. The second approach starts by 
generating a photogrammetric model through a 
photogrammetric triangulation using an arbitrary datum (no 
control information). LIDAR features are then used as control 
for the absolute orientation of the photogrammetric model. 
2.1 Approach 1: Direct involvement of LIDAR lines in 
photogrammetric triangulation 
Conjugate linear features in the photogrammetric and LIDAR 
datasets should first be extracted and then incorporated in a 
photogrammetric triangulation in which LIDAR lines will act as 
the source of control to align the photogrammetric model. The 
following subsections describe the procedures adopted to 
extract straight line features in both datasets and how they are 
included in the overall alignment procedure. 
Photogrammetric straight-line features 
The methodology for producing 3-D straight line features from 
photogrammetric datasets depends on the representation scheme 
of such features in the object and image space. Prior research in 
this area concluded that representing object space straight lines 
using two points along the line is the most convenient 
representation from a photogrammetric point of view since it 
yields well-defined line segments (Habib et al., 2002). On the 
other hand, image space lines will be represented by a sequence 
of 2-D coordinates of intermediate points along the feature. 
This representation is attractive since it can handle image space 
linear features in the presence of distortions as they will cause 
deviations from straightness. 
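These two representation schemes can be sketched as simple containers: two object-space points per line, and a per-image sequence of 2-D intermediate points. The class and field names below are hypothetical, introduced only for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectSpaceLine:
    """Object space straight line: two points defining the segment."""
    end1: Tuple[float, float, float]  # (X1, Y1, Z1)
    end2: Tuple[float, float, float]  # (X2, Y2, Z2)

@dataclass
class ImageSpaceLine:
    """Image space line: a sequence of 2-D intermediate points, which
    may deviate from straightness in the presence of distortions."""
    image_id: int
    points: List[Tuple[float, float]] = field(default_factory=list)

line = ObjectSpaceLine((10.0, 0.0, 0.0), (10.0, 20.0, 0.0))
obs = ImageSpaceLine(image_id=1, points=[(10.2, 3.1), (10.1, 7.4), (9.9, 12.0)])
print(len(obs.points))  # 3
```

Note that the intermediate points in each image are stored independently; nothing requires them to be conjugate across the overlapping images.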
In general, the manipulation of tie straight lines appearing in a
group of overlapping images starts by identifying two points in
one (Figure 1a) or two images (Figure 1b) along the line under
consideration. These points will be used to define the
corresponding object space line segment. One should note that
these points need not be identifiable or even visible in other
images. Intermediate points along the line are measured in all
the overlapping images. Similar to the end points, the
intermediate points need not be conjugate (Figure 1).
[Figure 1: two panels, (a) and (b), each showing overlapping images (Image 1 … Image 4) with the points measured along a straight line]
e End points defining the line in object space 
x Intermediate points 
Figure 1: End points defining the object line are either 
measured in one image (a) or two images (b). 
For the end points, the relationship between the measured
image coordinates {(x1, y1), (x2, y2)} and the corresponding
ground coordinates {(X1, Y1, Z1), (X2, Y2, Z2)} is established
through the collinearity equations. Only four equations will be
written for each line. The incorporation of intermediate points
into the adjustment procedure is achieved through a
mathematical constraint. The underlying principle in this
constraint is that the vector from the perspective centre to any
intermediate image point along the line is contained within the
plane defined by the perspective centre of that image and the
two points defining the straight line in the object space (Figure
2). This can be mathematically described through Equation 1.
(V1 × V2) · v = 0        (1)
In the above equation, V1 is the vector connecting the
perspective centre to the first end point along the object space
line, V2 is the vector connecting the perspective centre to the
second end point along the object space line, and v is the
vector connecting the perspective centre to an intermediate
point along the corresponding image line. It should be noted
that the three vectors should be represented relative to a
common coordinate system (e.g., the ground coordinate
system). The constraint in Equation 1 incorporates the image
coordinates of the intermediate point, the Exterior Orientation
Parameters (EOP), the Interior Orientation Parameters (IOP),
and the ground coordinates of the points defining the object
space line.
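The constraint of Equation 1 can be sketched numerically. For simplicity, the sketch below assumes an identity rotation matrix (so the image vector is already in the ground system) and a principal point at the origin; the function name and synthetic values are illustrative only.

```python
def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def line_constraint(pc, p1, p2, img_xy, c, pp=(0.0, 0.0)):
    """Evaluate (V1 x V2) . v for one intermediate image point.

    pc      -- perspective centre in ground coordinates
    p1, p2  -- the two end points defining the object space line
    img_xy  -- measured image coordinates of the intermediate point
    c, pp   -- principal distance and principal point (IOP)
    """
    V1 = tuple(e - o for e, o in zip(p1, pc))  # centre -> first end point
    V2 = tuple(e - o for e, o in zip(p2, pc))  # centre -> second end point
    v = (img_xy[0] - pp[0], img_xy[1] - pp[1], -c)  # centre -> image point
    return dot(cross(V1, V2), v)

# Synthetic check: a vertical photo at (0, 0, 100) over the object line
# from (10, 0, 0) to (10, 20, 0).  The object point (10, 10, 0) images
# at (10, 10) for c = 100, so the constraint evaluates to zero.
residual = line_constraint((0.0, 0.0, 100.0), (10.0, 0.0, 0.0),
                           (10.0, 20.0, 0.0), (10.0, 10.0), 100.0)
print(abs(residual) < 1e-9)  # True
```

An image point that does not lie in the plane (e.g. one displaced across the line) yields a non-zero residual, which is what the adjustment minimizes.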
Thank you.