extraction of registration primitives in addition to the 
registration steps required to reveal any calibration 
discrepancies in the systems. 
Most registration methodologies use discrete points as the sole 
primitive for solving the registration problem between two 
datasets. Such methodologies are not applicable to laser 
scanned surfaces since they correspond to laser footprints 
instead of distinct points that could be identified in imagery 
(Baltsavias, 1999). Conventionally, surface-to-surface 
registration and comparison have been achieved by 
interpolating both datasets into a uniform grid. The comparison 
is then reduced to estimating the necessary shifts by analyzing 
the elevations at corresponding grid posts (Ebner and Ohlhof, 
1994; Kilian et al., 1996). Several issues can arise with this 
approach. First, the interpolation to a grid will introduce errors, 
especially when dealing with captured surfaces over urban 
areas. Moreover, minimizing the differences between the 
surfaces along the z-direction is only valid when dealing with 
horizontal planar surfaces (Habib and Schenk, 1999). Postolov 
et al. (1999) presented another approach, which works on the 
original scattered data without prior interpolation. However, the 
implementation procedure involves an interpolation of one 
surface at the location of conjugate points on the other surface. 
Additionally, the registration is based on minimizing the 
differences between the two surfaces along the z-direction. 
Schenk (1999) introduced an alternative approach, where 
distances between points of one surface along surface normals 
to locally interpolated patches of the other surface are 
minimized. Habib and Schenk (1999) and Habib et al. (2001) 
implemented this methodology within a comprehensive 
automatic registration procedure. Such an approach is based on 
processing the photogrammetric data to produce object space 
planar patches. This might not be always possible since 
photogrammetric surfaces provide accurate information along 
object space discontinuities while supplying almost no 
information along homogeneous surfaces with uniform texture. 
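As a point of contrast with the line-based procedure proposed here, the conventional grid-based comparison discussed at the beginning of this section can be sketched in a few lines. The sketch below is only an illustration of that generic approach (the function and variable names are ours, not taken from any of the cited works): both scattered surfaces are interpolated onto a common grid and the offset is estimated from the elevation differences at the grid posts, which is exactly where the interpolation errors and the z-only limitation noted above enter.

import numpy as np
from scipy.interpolate import griddata

def estimate_vertical_shift(surface_a, surface_b, cell_size=1.0):
    # surface_a, surface_b: (N, 3) arrays of scattered (X, Y, Z) points
    # Common grid covering the overlap of the two datasets
    x_min = max(surface_a[:, 0].min(), surface_b[:, 0].min())
    x_max = min(surface_a[:, 0].max(), surface_b[:, 0].max())
    y_min = max(surface_a[:, 1].min(), surface_b[:, 1].min())
    y_max = min(surface_a[:, 1].max(), surface_b[:, 1].max())
    gx, gy = np.meshgrid(np.arange(x_min, x_max, cell_size),
                         np.arange(y_min, y_max, cell_size))

    # Interpolation to the grid posts: this is the step that introduces
    # errors over urban areas (e.g. across building edges)
    za = griddata(surface_a[:, :2], surface_a[:, 2], (gx, gy), method='linear')
    zb = griddata(surface_b[:, :2], surface_b[:, 2], (gx, gy), method='linear')

    # The comparison is reduced to a shift along the z-direction only
    return np.nanmean(za - zb)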
In this paper, the registration procedure will utilize straight line 
primitives and 3D similarity transformation for aligning the 
photogrammetric model relative to the laser data reference 
frame. The following section previews the components of the 
general registration paradigm and the particulars of applying 
each component to the photogrammetric and laser datasets 
under consideration. The last two sections cover the 
experimental results as well as the conclusions and 
recommendations for future work. 
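Before moving on, note that the 3D similarity transformation referred to above has the familiar seven-parameter form X_laser = s R X_photo + T (scale, rotation, translation). The sketch below is only for concreteness and is not the paper's implementation; one common omega-phi-kappa parameterisation of the rotation matrix is assumed, and the estimation of the parameters themselves is not shown.

import numpy as np

def similarity_transform(points, scale, omega, phi, kappa, translation):
    # Apply X_laser = scale * R * X_photo + T to an (N, 3) array of points.
    # One common omega-phi-kappa convention is assumed for R.
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    R = R_omega @ R_phi @ R_kappa        # rotation from photo to laser frame
    return scale * (points @ R.T) + np.asarray(translation)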
2. METHODOLOGY 
The registration process aims at combining multiple datasets 
acquired by different sensors in order to reach better accuracy 
and enhanced inference about the environment than could be 
attained through using only one sensor. The following 
subsections address the components and issues necessary for an 
effective registration paradigm (Brown, 1992). 
2.1 Registration primitives 
To register any two datasets, certain common features have to 
be identified and extracted from both sets. Such features will 
subsequently be used as the registration primitives relating the 
datasets together. The type of chosen primitives greatly 
influences subsequent registration steps. Hence, it is crucial to 
first decide upon the primitives to be used for establishing the 
transformation between the datasets in question. In this paper, 
straight line features are selected for this purpose. This choice is 
motivated by the fact that such primitives can be reliably, 
accurately, and automatically extracted from photogrammetric 
and laser datasets. The procedure adopted to extract straight 
lines from the photogrammetric and laser datasets and how they 
are included in the overall alignment procedure is described 
below. 
Photogrammetric straight line features: The representation 
scheme of 3D straight lines in the object and image space is 
central to the methodology for producing such features from 
photogrammetric datasets. Representing object space straight 
lines using two points along the line is the most convenient 
representation from a photogrammetric point of view since it 
yields well-defined line segments (Habib et al., 2002). On the 
other hand, image space lines will be represented by a sequence 
of 2-D coordinates of intermediate points along the feature. 
This appealing representation can handle image space linear 
features in the presence of distortions as they will cause 
deviations from straightness. Furthermore, it will allow for the 
inclusion of linear features in scenes captured by line cameras, 
since perturbations in the flight trajectory would lead to 
deviations from straightness in image space linear features 
corresponding to object space straight lines (Habib et al., 2002). 

Manipulating tie straight lines appearing in a group of 
overlapping images begins by identifying two points in one 
(Figure la) or two images (Figure 1b) along the line under 
consideration. These points are then used to define the 
corresponding object space line segment. It is worth mentioning 
that these points need not be identifiable or even visible in other 
images. Intermediate points along the line are measured in all 
overlapping images. Similar to the end points, the intermediate 
points need not be conjugate (Figure 1). 

[Figure 1: panels (a) and (b), each showing the line across Images 1-4, with the end points defining the line in object space and the intermediate points marked along the feature.]

Figure 1. End points defining the object line are either 
measured in one image (a) or two images (b). 
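As an illustration of this representation (the names below are ours and purely illustrative, not taken from the paper), an object-space line can be stored as its two defining end points, while its appearance in each overlapping image is stored as a sequence of 2-D intermediate points that need not be conjugate across images.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectLine:
    # Two points defining the line segment in object space
    point_1: Tuple[float, float, float]     # (X1, Y1, Z1)
    point_2: Tuple[float, float, float]     # (X2, Y2, Z2)

@dataclass
class ImageLineObservation:
    image_id: str
    # 2-D intermediate points measured along the feature in this image;
    # distortions or line-camera trajectory perturbations may make them
    # deviate from a straight line
    intermediate_points: List[Tuple[float, float]] = field(default_factory=list)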
The relationship between the image coordinates of the line end 
points, (x1, y1) and (x2, y2), and the corresponding ground 
coordinates, (X1, Y1, Z1) and (X2, Y2, Z2), is established through 
the collinearity equations. Hence, four equations are written for 
each line. The intermediate points are included in the 
adjustment procedure through a mathematical constraint, which 
states that the vector from the perspective centre to any 
intermediate image point along the line is contained within the 
plane defined by the perspective centre of that image and the 
two points defining the straight line in the object space 
(Figure 2). That is to say, for a given intermediate point, the 
vector from that image's perspective centre to the point must be 
coplanar with the vectors from the same perspective centre to the 
two object points defining the line. 
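A minimal numerical sketch of this constraint is given below; it is an illustration under our own notation rather than the paper's implementation. Here R is the image rotation matrix, c the principal distance and (xp, yp) the principal point; the scalar triple product vanishes when the image vector lies in the plane spanned by the two object-point vectors. In the adjustment, one such equation would be added for every intermediate point measured along the line.

import numpy as np

def coplanarity_constraint(pc, p1, p2, xy, R, c, xp=0.0, yp=0.0):
    # pc: perspective centre, p1/p2: object points defining the line,
    # xy: image coordinates of the intermediate point
    v1 = np.asarray(p1, float) - np.asarray(pc, float)
    v2 = np.asarray(p2, float) - np.asarray(pc, float)
    # Image vector to the intermediate point, rotated into the object frame
    v3 = np.asarray(R, float) @ np.array([xy[0] - xp, xy[1] - yp, -c])
    # Zero when v3 is contained in the plane of v1 and v2
    return float(np.dot(np.cross(v1, v2), v3))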
  
  
  
 
	        