By tracking the paired image and object points that have 
contributed to the peak in the accumulator array for the final 
iteration, the correspondence problem is solved. The resulting 
matches between the object and image space features are used 
in a simultaneous least-squares adjustment to solve for the EOP. 
It has to be mentioned that the obtained correspondences are low-level (i.e. between points). The following section explains the consistency check we implemented to identify the high-level correspondences (i.e. between linear features) and to highlight discrepancies (changes) between the object and image space features.
3.2 Feature to Feature Correspondence and Change Detection
So far, we have established the following:
• the EOP of the image under consideration, and
• the point-to-point correspondences between object and image space linear features.
Now, we will proceed by performing a consistency check 
between these features using the feature labels. The consistency 
check has four steps: 
Step 1: Feature to feature correspondence
We check the labels of the features containing the matched object and image space points. From the frequency of the matched labels, one can establish the correspondences between the image and object space features.
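The paper provides no code; as an illustration only, the following Python sketch (function names and data layout are our assumptions) assigns to each image space feature the object space label that occurs most frequently among its matched points:

```python
from collections import Counter

def feature_correspondence(matched_pairs, image_label, object_label):
    """Feature-to-feature correspondence by label voting (illustrative).

    matched_pairs : iterable of (image_point_id, object_point_id) tuples
                    produced by the point-to-point matching.
    image_label   : dict mapping image_point_id -> image feature label.
    object_label  : dict mapping object_point_id -> object feature label.
    Returns a dict mapping each image feature label to the object feature
    label that occurs most frequently among its matched points.
    """
    votes = {}
    for img_pt, obj_pt in matched_pairs:
        feat = image_label[img_pt]
        votes.setdefault(feat, Counter())[object_label[obj_pt]] += 1
    # The most frequent object label wins the vote for each image feature.
    return {feat: c.most_common(1)[0][0] for feat, c in votes.items()}
```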
Step 2: Object to image space projection of non-matched 
object points 
Using the estimated EOP and the ground coordinates of non- 
matched object points, one can compute the corresponding 
image coordinates. The standard deviation of the computed 
image coordinates can be estimated using error propagation. 
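This projection is the standard collinearity mapping evaluated at the estimated EOP. A minimal Python sketch is given below, assuming an omega-phi-kappa rotation, a principal point (x0, y0) and principal distance c as the IOP, and no lens distortion; the error propagation itself is only indicated in a comment:

```python
import numpy as np

def rot_opk(omega, phi, kappa):
    """Rotation matrix M (object -> image) for omega-phi-kappa in radians."""
    Ro = np.array([[1, 0, 0],
                   [0, np.cos(omega), np.sin(omega)],
                   [0, -np.sin(omega), np.cos(omega)]])
    Rp = np.array([[np.cos(phi), 0, -np.sin(phi)],
                   [0, 1, 0],
                   [np.sin(phi), 0, np.cos(phi)]])
    Rk = np.array([[np.cos(kappa), np.sin(kappa), 0],
                   [-np.sin(kappa), np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rk @ Rp @ Ro

def project(point, eop, iop):
    """Collinearity projection of one ground point into the image.

    point : (X, Y, Z) ground coordinates of a non-matched object point.
    eop   : (Xc, Yc, Zc, omega, phi, kappa) exterior orientation.
    iop   : (x0, y0, c) principal point and principal distance.
    """
    Xc, Yc, Zc, omega, phi, kappa = eop
    x0, y0, c = iop
    u = rot_opk(omega, phi, kappa) @ (np.asarray(point, float) - [Xc, Yc, Zc])
    # The standard deviations of (x, y) follow by propagating the EOP
    # covariance through the Jacobian of this mapping (not shown here).
    return x0 - c * u[0] / u[2], y0 - c * u[1] / u[2]
```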
Step 3: Distance computation 
The closest distance between each image point projected in Step 2 and the corresponding image space feature, together with the associated standard deviation, is computed. One should note that the image to object feature correspondence has already been established in Step 1.
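A minimal sketch of this computation, assuming each image space feature is stored as an ordered polyline of 2-D points (the standard deviation of the distance, obtained by propagating the covariances from Step 2, is omitted):

```python
import numpy as np

def closest_distance(p, polyline):
    """Smallest Euclidean distance from image point p to a 2-D polyline.

    p        : (x, y) image coordinates projected in Step 2.
    polyline : sequence of (x, y) points along the corresponding image
               space feature (the correspondence is known from Step 1).
    Returns the distance and the closest point on the polyline.
    """
    p = np.asarray(p, dtype=float)
    poly = np.asarray(polyline, dtype=float)
    best_d, best_q = np.inf, None
    for a, b in zip(poly[:-1], poly[1:]):
        ab = b - a
        denom = float(ab @ ab)
        if denom == 0.0:
            continue  # skip degenerate (duplicated) vertices
        # Clamp the projection parameter so the foot stays on the segment.
        t = np.clip((p - a) @ ab / denom, 0.0, 1.0)
        q = a + t * ab
        d = float(np.linalg.norm(p - q))
        if d < best_d:
            best_d, best_q = d, q
    return best_d, best_q
```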
Step 4: Blunder and change detection 
If the distance is greater than a predefined threshold (e.g. three times the associated standard deviation), we label these points as either blunders or changes between the object and image space features. Single occurrences of non-matching points are identified as blunders, while successive occurrences of non-matching points are labelled as changes (discrepancies).
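The following sketch illustrates this rule, assuming the non-matching points are processed in their order along the linear feature; the three-sigma test and the single-versus-successive distinction follow the text above:

```python
def classify_non_matching(point_ids, distances, sigmas, k=3.0):
    """Split non-matching points into blunders and consistent changes.

    point_ids : point indices, ordered along the linear feature.
    distances : closest distances from Step 3 (same order).
    sigmas    : standard deviations associated with those distances.
    k         : threshold factor (three-sigma by default, as in the text).
    Returns a list of (run_of_point_ids, label) pairs.
    """
    flagged = [d > k * s for d, s in zip(distances, sigmas)]
    runs, current = [], []
    for pid, is_off in zip(point_ids, flagged):
        if is_off:
            current.append(pid)      # extend the current run of outliers
        elif current:
            runs.append(current)     # a run ends at the first inlier
            current = []
    if current:
        runs.append(current)
    # A single occurrence is a blunder; successive occurrences are a change.
    return [(run, "blunder" if len(run) == 1 else "change") for run in runs]
```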
Figure 2 is a schematic drawing illustrating the concept of the consistency check. In this figure, points i1 to i12 are the data points along a linear feature projected from the object space into the image space, while points j1 to j17 are image data points along the corresponding linear feature in the image space. Consider points i1, i6, i8, i11 and i12 to be correctly matched with points j1, j11, j13, j15 and j17, respectively, while points i2, i3, i4, i5 and i10 do not have matching entities in the image space. Instead, their closest points in the second data set along the corresponding linear feature are points j3, j5, j7, j9 and j16, respectively. In order to distinguish between consistent changes and blunders, the non-matching points along the linear feature are segmented and labelled. From this analysis, the pair (i10, j16) will be considered as one label and the pairs (i2, j3) to (i5, j9) will be considered as another label. The former label will be considered a blunder because it has only one change pair, while the latter will be highlighted as a consistent change. For consistent changes, the longitudinal distance along the linear feature as well as the average lateral distance will be computed as the change attributes (Figure 2).
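For a run of points flagged as a consistent change, the two attributes can be computed as sketched below, again assuming the projected points are ordered along the feature:

```python
import numpy as np

def change_attributes(projected_pts, lateral_dists):
    """Attributes of one consistent change (a run of non-matching points).

    projected_pts : (n, 2) projected image points of the run, in order
                    along the linear feature.
    lateral_dists : the n closest distances from Step 3 for these points.
    Returns the longitudinal extent of the change along the feature and
    the average lateral offset, as illustrated in Figure 2.
    """
    pts = np.asarray(projected_pts, dtype=float)
    # Longitudinal distance: length of the run measured along the feature.
    longitudinal = float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    # Average lateral distance: mean offset from the image space feature.
    lateral = float(np.mean(lateral_dists))
    return longitudinal, lateral
```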
[Figure 2 schematic: the blunder and the consistent change are annotated, together with the average lateral distance and the longitudinal distance along the feature.]
Figure 2: Consistency check between the object and image data points. The rectangular points with labels i1 to i12 are the object space points along a linear feature projected into the image space, while the crosses with labels j1 to j17 are the points along the corresponding linear feature in the image space.
4. EXPERIMENTS/RESULTS 
Experiments have been conducted using real data. To carry out the methodology outlined in the previous section, one should have the following (a possible container layout for these inputs is sketched after this list):
• A sequence of 3-D points along the ground control features.
• A sequence of 2-D points along the image features.
• The interior orientation parameters (IOP) of the camera.
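Purely for illustration (the paper prescribes no particular data layout), these inputs could be held as follows:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MatchingInput:
    """Inputs for the matching procedure; names here are illustrative."""
    # Sequences of 3-D points along the ground control (object space)
    # features: one list of (X, Y, Z) tuples per linear feature.
    object_features: List[List[Tuple[float, float, float]]]
    # Sequences of 2-D points along the image space features:
    # one list of (x, y) tuples per linear feature.
    image_features: List[List[Tuple[float, float]]]
    # Interior orientation parameters: principal point (x0, y0) and
    # principal distance c.
    iop: Tuple[float, float, float]
```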
Once again, the suggested algorithm does not require full 
correspondence between the object and image space features. 
The main requirement is having enough common features 
between the two data sets. The input data and the results are 
presented in the following paragraphs. 
In the area covered by the aerial image, there exist a number of major and secondary roads. The object space roads, represented as sequences of 3-D points, were extracted from a photogrammetric stereo model containing the image under consideration. Two data sets with different numbers of roads were digitised in the object space. A 2-D view of the 3-D road network can be seen in Figures 4-a and 4-b. To complete the data set, a 2-D point sequence along the image road network must be extracted. In a digital environment, the extraction process can be carried out by applying a dedicated operator (e.g. Canny or any other operator for road network extraction). In this work, however, the 2-D image features were manually digitised (Figure 3-c). Another data set in the image space was obtained by introducing digitisation errors (Figure 3-d) to check the ability of the suggested system to detect those changes. By combining the different data sets from the object and image space, we conducted four experiments (Table 2).
From Table 2, one can see that the image space has much more data available than the object space. After carrying out the experiments, the matched points were used to estimate the EOP in a simultaneous least-squares adjustment. The estimated EOP are listed in Table 3, together with their initial (approximate) values. These values can be obtained from navigation data. However, very rough knowledge about the initial values of the EOP