International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B3. Istanbul 2004 
  
Table 2: Quality analysis 1: estimation of the virtual cameras

  observing area: left lower corner       (3, 3, -17) [cm]
  observing area: right upper corner      (21, 23, -11) [cm]
  number of points used for estimation    845
  distance between points                 2 [cm]
  sigma_approx,VC (camera C1)             0.04 [pel]

Table 3: Coordinates of the projection centre of camera C1 for
the strict model and the approximation VC

  Projection centre    X1 [cm]   X2 [cm]   X3 [cm]
  Strict model         3.08      4.53      63.46
  Approximation VC     3.07      4.51      85.98

Quality analysis:
1. A priori quality DLT:
   residuals as backprojection errors in image space
2. Quality DLT:
   residuals in object space for new points
3. Quality point matching algorithm:
   comparison of the reconstructed points (before final estimation) using the strict and the approximated model
4. Quality point matching algorithm:
   comparison of the reconstructed points (after final estimation) using the strict and the approximated model
  
  
  
5.4 Prediction of 3D points using the virtual camera 
5.4.1 Estimation of the virtual cameras  To define the segmentation of the object space, a priori quality tests have to be calculated (see (Wolff and Förstner, 2001)). These a priori tests show that determining only one virtual camera (VC) for the whole object space is sufficient. For the position of the four cameras see Fig. 4.
To investigate the quality of the determined virtual cameras (Quality analysis 1), we project the object points which were used for the estimation of P into image space and get the image points x'. The estimated DLT (11 independent parameters) yields residuals x - x' being systematic errors. To get an a priori quality of the projective model we give the r. m. s. error

$$\sigma_{\text{approx}} = \sqrt{\frac{\sum_i (x_i - x'_i)^\top (x_i - x'_i)}{2n - 11}},$$

where n is the number of points used.
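The r. m. s. error above can be computed directly from the estimated projection matrix and the measured image points. The following sketch assumes a (3, 4) DLT matrix `P` and point arrays in the shapes noted in the comments; the function name is illustrative, not from the paper.

```python
import numpy as np

def sigma_approx(P, X, x):
    """R.m.s. backprojection error of an estimated DLT.

    P : (3, 4) projection matrix (11 independent parameters).
    X : (n, 3) object points used for the estimation.
    x : (n, 2) measured image points.
    """
    n = X.shape[0]
    Xh = np.hstack([X, np.ones((n, 1))])   # homogeneous object points
    xp = (P @ Xh.T).T                      # project: x' ~ P X
    xp = xp[:, :2] / xp[:, 2:3]            # normalise homogeneous coordinates
    r = x - xp                             # residuals x - x'
    # redundancy: 2n observations minus 11 DLT parameters
    return np.sqrt(np.sum(r**2) / (2 * n - 11))
```

With exact correspondences the residuals vanish and the estimate is zero; with real measurements it gives the a priori quality measure of Tab. 2.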
Tab. 2 gives the entities and results of estimating the virtual camera of camera C1. The number of points used for the estimation need not be as high as in this case. Tab. 3 gives the coordinates of the camera projection centre for the two different orientations. The multimedia geometry mostly influences the height of an object point, which here is the X3 coordinate of the projection centre. Therefore the projection centres of the two orientations differ mostly in height.
5.4.2 Results of the point matching using the approximation
As mentioned above, the algorithm should be run for different starting images, to guarantee that points which are not extracted in the starting image can also be found. Here we use four cameras, each of which can see the whole object scene. Together with the constraint that at least three corresponding image points of an object point are needed, it is sufficient to have two different starting images. Therefore, and because of the constraint that an object point should be seen in at least three images, we get the constraint for our clustering algorithm: a group of at least three points defines an object point.
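The clustering constraint can be sketched as follows. This is a minimal greedy grouping for illustration, not the paper's exact algorithm; the function name, the `radius` parameter, and the centroid output are assumptions.

```python
import numpy as np

def cluster_hypotheses(points, radius, min_group=3):
    """Group 3D point hypotheses that lie within `radius` of a seed point;
    only groups with at least `min_group` members are accepted as object
    points (their centroid is returned). Illustrative sketch only."""
    points = np.asarray(points, dtype=float)
    unused = set(range(len(points)))
    objects = []
    while unused:
        seed = unused.pop()
        group = [seed]
        # collect all remaining hypotheses close to the seed
        for j in list(unused):
            if np.linalg.norm(points[j] - points[seed]) < radius:
                unused.discard(j)
                group.append(j)
        # "a group of at least three points defines an object point"
        if len(group) >= min_group:
            objects.append(points[group].mean(axis=0))
    return objects
```

Isolated hypotheses, such as wrong matches far from the sediment surface, never reach the group size of three and are discarded.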
First, we want to examine whether the constraint for an object point, that at least three close points in a group define an object point, is sufficient. Fig. 3 shows the hypotheses of two matched image points by their corresponding object points (seen from the side). The distribution of the 3D points shows a very dense part, where the sediment surface is supposed to be. All other points might be wrong hypotheses and should be deleted by the clustering algorithm. Fig. 4 shows the results after the clustering. All points which differ significantly from the surface are eliminated (Fig. 4 a). Fig. 4 b) shows the distribution of the object points on the surface, which are evenly distributed.
  
Figure 3: Hypotheses of 3D point matchings before clustering. A group of at least three points defines an object point.
  
  
  
  
  
  
  
Figure 4: Results after clustering the point hypotheses. The right figure shows the point cloud from the side, the left figure shows it from above together with the positions of the cameras.
Using the strict model gave 156 reconstructed 3D points; the use of the virtual camera VC found 161 points. For quality analysis 3 we have to compare the two sets of points. Therefore a threshold ε is defined, so that a point X_a is defined as equal to a reference point X_r if |X_a - X_r| < ε. The number of points found as equal in dependency of the threshold is shown in Fig. 5.
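The threshold test can be sketched as a nearest-neighbour comparison between the two point sets; the function and parameter names below are illustrative.

```python
import numpy as np

def count_equal_points(candidates, reference, eps):
    """Count candidate points X_a for which some reference point X_r
    satisfies |X_a - X_r| < eps (sketch of the comparison used in
    quality analysis 3)."""
    candidates = np.asarray(candidates, dtype=float)
    reference = np.asarray(reference, dtype=float)
    n_equal = 0
    for Xa in candidates:
        # Euclidean distances from X_a to every reference point
        d = np.linalg.norm(reference - Xa, axis=1)
        if d.min() < eps:
            n_equal += 1
    return n_equal
```

Evaluating this for a range of ε values yields a curve like the one shown in Fig. 5.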
The main influence of the approximation refers to the height of the object points. The r. m. s. error of the X3 coordinate of the reconstructed object points X = (X1, X2, X3) is

$$\sigma_{X_3} = \sqrt{\frac{\sum_i \left(X_{3,i} - \tilde{X}_{3,i}\right)^2}{n}},$$

where n is the number of points used and X̃3 denotes the reference value from the strict model. The error of the approximation is given in Tab. 5 in comparison to the reference data before calculating the final bundle adjustment.
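This height error can be computed directly once both reconstructions are available. The sketch below assumes the two arrays hold the same points in the same order; the function name is an assumption.

```python
import numpy as np

def sigma_x3(X_vc, X_strict):
    """R.m.s. error of the X3 (height) coordinate between points
    reconstructed with the virtual camera VC and with the strict
    model (same points, same order, shape (n, 3))."""
    d3 = np.asarray(X_vc, float)[:, 2] - np.asarray(X_strict, float)[:, 2]
    n = d3.size
    return np.sqrt(np.sum(d3**2) / n)
```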
5.5 Final 3D determination of the predicted points using the strict model

After the matching process, including an approximated determination of the object points, we calculate a final bundle adjustment for the strict model and for the approximation VC. The clusters resulting from the clustering algorithm contain those points which were found as corresponding points.
	        