similarity measure. Each image is divided into subareas to ensure an even distribution of the tie points over the whole area (see the sketch below).
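As an illustration of this subdivision (a minimal Python sketch; the grid size, the per-cell limit and the (x, y, strength) feature representation are assumptions made for the example, not values from the HRSC processing):

```python
import numpy as np

def select_features_per_subarea(features, img_w, img_h, grid=(8, 8), per_cell=5):
    """Keep at most `per_cell` strongest features in each grid cell.

    features: ndarray of shape (N, 3) with columns (x, y, strength).
    Returns a subset with an even spatial distribution over the image.
    """
    cell_w, cell_h = img_w / grid[0], img_h / grid[1]
    selected = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            in_cell = features[
                (features[:, 0] >= i * cell_w) & (features[:, 0] < (i + 1) * cell_w) &
                (features[:, 1] >= j * cell_h) & (features[:, 1] < (j + 1) * cell_h)
            ]
            # sort by interest-operator strength, strongest first
            in_cell = in_cell[np.argsort(-in_cell[:, 2])]
            selected.append(in_cell[:per_cell])
    return np.vstack(selected)
```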
To reduce ambiguities and computing time, the matching location and a search space for the corresponding feature have to be determined. The principle of the transformation from object to linescanner coordinates is described in Dörstel, Ohlhof (1996). Since no epipolar geometry exists for linescanner imagery, a feature in one image is transferred to the next image via the collinearity equations for 3-line imagery (1) according to the extended functional model of Ebner et al. (1994).
\begin{pmatrix} x - x_0 \\ y - y_0 \\ -c \end{pmatrix}
= \lambda \, M^T(\Delta\varphi, \Delta\omega, \Delta\kappa) \, D^T(\varphi, \omega, \kappa)
\begin{pmatrix} X - X_0 + \Delta X_0 \\ Y - Y_0 + \Delta Y_0 \\ Z - Z_0 + \Delta Z_0 \end{pmatrix}
\qquad (1)

with

\begin{pmatrix} \Delta X_0 \\ \Delta Y_0 \\ \Delta Z_0 \end{pmatrix}
= D(\varphi, \omega, \kappa)
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix}
\qquad (2)
The exterior orientation refers to a camera coordinate system common to all CCD lines and is expressed for a given readout cycle n as X_0, Y_0, Z_0, φ, ω, κ (Figure 1). The interior orientation parameters x_0, y_0, c are defined in the image coordinate system; three separate values exist for each line. The transformation between the image coordinate system and the camera coordinate system is given by Δx, Δy, Δz, Δφ, Δω, Δκ, which have been determined in the geometric calibration for each line separately. M as well as D are rotation matrices, λ is a scale factor. The image coordinates are given by x and y, which are derived automatically in this case via DIM.
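To make the use of equations (1) and (2) concrete, the following Python sketch projects an object point into the image coordinates of one CCD line. The rotation sequence inside rot() and the packaging of the parameters are assumptions for the example; the actual angle conventions follow the calibration and are not spelled out here.

```python
import numpy as np

def rot(phi, omega, kappa):
    """Rotation matrix from the angles phi, omega, kappa (radians).
    The sequence R_phi * R_omega * R_kappa is an assumption of this sketch."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return R_phi @ R_omega @ R_kappa

def object_to_image(P, eo, io, calib):
    """Project object point P into image coordinates via eqs. (1) and (2).

    eo    = (X0, Y0, Z0, phi, omega, kappa)    exterior orientation at cycle n
    io    = (x0, y0, c)                        interior orientation of the line
    calib = (dx, dy, dz, dphi, domega, dkappa) image-to-camera offsets per line
    """
    X0 = np.array(eo[:3])
    D = rot(*eo[3:])                # camera -> object rotation D(phi, omega, kappa)
    M = rot(*calib[3:])             # image -> camera rotation M(dphi, domega, dkappa)
    dX0 = D @ np.array(calib[:3])   # eq. (2): offsets rotated into object space
    v = M.T @ D.T @ (np.asarray(P, float) - X0 + dX0)   # right-hand side of eq. (1)
    x0, y0, c = io
    lam = -c / v[2]                 # scale factor so the third component equals -c
    return x0 + lam * v[0], y0 + lam * v[1]
```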
Figure 1: Coordinate systems (according to Kornus, 1999). [Figure: image, camera and object coordinate systems and the rotations M^T and D^T relating them.]
For the transformation from object space to image space as a function of the image line (readout cycle) n, the additional condition (3) has to be applied.
x(n) = x\bigl(n, X_0(n), Y_0(n), Z_0(n), \varphi(n), \omega(n), \kappa(n)\bigr) = 0 \qquad (3)
This problem can be solved using the well-known Newton method for the above zero-crossing detection, where the derivative x'(n_i) is replaced by the pixel size of the image.
n_{i+1} = n_i - \frac{x(n_i)}{\text{pixelsize}}, \qquad i = 0, 1, \ldots \qquad (4)

where n_0 is the initial value for the image line.
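A minimal Python sketch of this iteration (function and parameter names, tolerance and iteration limit are illustrative assumptions):

```python
def find_readout_cycle(x_of_n, n0, pixelsize, max_iter=20, tol=0.1):
    """Newton iteration (4) with the derivative x'(n) replaced by the pixel size.

    x_of_n: returns the projected x image coordinate for readout cycle n,
            evaluated via eq. (1) with the exterior orientation interpolated at n.
    n0:     initial value for the image line.
    """
    n = n0
    for _ in range(max_iter):
        x = x_of_n(n)
        if abs(x) < tol:          # zero-crossing found: the point is imaged at cycle n
            break
        n = n - x / pixelsize     # eq. (4)
    return n
```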
The principle of transferring a feature from one image to the next is shown in Figure 2. The point P (extracted feature P′) has an estimated elevation Z_P taken from the MOLA DTM, where Δz denotes the uncertainty of this value. This defines a range U, L which is projected into the right image, where it defines the search space s.
Figure 2: Principle of estimating matching location and search space (according to Schenk, 1999).
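The search space estimation of Figure 2 can be sketched as follows; ray_point and project_to_image2 are hypothetical helpers standing in for the intersection of the image ray with a given elevation and for the object-to-image transformation of equations (1), (3) and (4).

```python
def estimate_search_space(ray_point, Zp, dZ, project_to_image2):
    """Estimate the search space in the second image from the MOLA elevation.

    ray_point:         returns the object point on the ray from the first image
                       for a given elevation Z.
    Zp, dZ:            MOLA elevation of P and its assumed uncertainty.
    project_to_image2: projects an object point into the second image.
    Returns the two image positions bounding the search space s.
    """
    P_upper = ray_point(Zp + dZ)   # upper bound U of the elevation range
    P_lower = ray_point(Zp - dZ)   # lower bound L of the elevation range
    return project_to_image2(P_upper), project_to_image2(P_lower)
```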
After matching all overlapping images pairwise in all combinations, an undirected graph is generated. The nodes of the graph are the point features, the edges are the matches between them. This graph is divided into connected components. The next step is the generation of point tuples, where a point tuple is characterised by the property that no more than one feature per image is admissible. The complexity of this problem can grow exponentially. Instead of using tree search or binary programming techniques, a RANSAC (Random Sample Consensus) procedure (Fischler, Bolles, 1981) is applied. The method relies on the fact that the likelihood of hitting a good configuration (a correct tuple) by randomly choosing a set of observations (features of the subgraph) is large after a certain number of trials. The advantage of this method is the high probability of obtaining a good point. Including a geometric consistency check, the method also eliminates blunders (Brand, Heipke, 1998).
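The decomposition into connected components and the RANSAC-style tuple generation can be sketched as below; the sampling strategy and the is_consistent check are simplified stand-ins, not the implementation used for the HRSC processing.

```python
import random
from collections import defaultdict

def connected_components(nodes, edges):
    """Split the undirected match graph into connected components."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        components.append(comp)
    return components

def ransac_tuple(component, image_of, is_consistent, trials=100):
    """Randomly draw point tuples with at most one feature per image and
    return the first one passing the geometric consistency check."""
    features = list(component)
    for _ in range(trials):
        random.shuffle(features)
        tup, used_images = [], set()
        for f in features:
            if image_of(f) not in used_images:
                tup.append(f)
                used_images.add(image_of(f))
        if is_consistent(tup):
            return tup
    return None
```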
From the start pyramid level (lowest resolution) to the so-called intermediate level (medium resolution), feature-based matching (FBM) is carried out using the whole images. For the processing of the HRSC imagery, level 3 has been chosen as the starting level. Going down the image pyramid, the image size increases, as does the number of extracted features. Besides the sharply increasing computation time, matching the complete images would result in too many tie points for the camera orientation. Therefore, the matching procedure is carried out only for selected "image chips", starting at pyramid level 2 (see the sketch below).
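Schematically, and under the assumption that the intermediate level lies directly above pyramid level 2, the coarse-to-fine strategy reads as the following Python sketch; match_full and match_chips are placeholders for the FBM on complete images and on the selected image chips.

```python
def coarse_to_fine_matching(pyramids, match_full, match_chips,
                            start_level=3, chip_level=2):
    """Coarse-to-fine matching over the image pyramids (schematic sketch).

    pyramids:    per-image lists of pyramid levels, index 0 = full resolution.
    match_full:  callable performing FBM on the complete images of one level.
    match_chips: callable performing FBM only inside selected image chips.
    """
    tie_points = []
    # FBM on the whole images from the start level down to the intermediate level
    for level in range(start_level, chip_level, -1):
        tie_points = match_full([p[level] for p in pyramids], tie_points)
    # from pyramid level 2 downwards, match only selected image chips to limit
    # computing time and the number of tie points
    for level in range(chip_level, -1, -1):
        tie_points = match_chips([p[level] for p in pyramids], tie_points)
    return tie_points
```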
 
	        