$$
\begin{pmatrix} x \\ y \end{pmatrix}
= m \cdot
\begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \end{pmatrix}
\begin{pmatrix} X - X_0 \\ Y - Y_0 \\ Z \end{pmatrix}
\qquad (3)
$$
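As an illustration of (3), the following sketch evaluates the parallel projection for a single object point. The angle parameterisation of R (elementary rotations about ω, φ, κ applied in that order) and all function names are assumptions made for this example, not taken from the text.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R = R_kappa @ R_phi @ R_omega (angles in radians).
    The order of the elementary rotations is an assumption for this sketch."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi),   np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return R_kappa @ R_phi @ R_omega

def parallel_project(P, X0, Y0, m, omega, phi, kappa):
    """Image coordinates (x, y) of object point P = (X, Y, Z) under the
    parallel projection of eq. (3)."""
    R = rotation_matrix(omega, phi, kappa)
    d = np.array([P[0] - X0, P[1] - Y0, P[2]])  # (X - X0, Y - Y0, Z)
    return m * (R[:2] @ d)                      # only the first two rows of R enter
```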
The differentials for the error equation coefficients are given as
follows for images, sensors and new points (the differentials of y follow accordingly):
For the image parameters, the exterior orientation $X_0, Y_0, \omega, \varphi, \kappa$:

$$
\frac{\partial x}{\partial X_0} = -m\,r_{11}, \qquad
\frac{\partial x}{\partial Y_0} = -m\,r_{12},
$$
$$
\frac{\partial x}{\partial \omega}
= m \left( \frac{\partial r_{11}}{\partial \omega}(X - X_0)
         + \frac{\partial r_{12}}{\partial \omega}(Y - Y_0)
         + \frac{\partial r_{13}}{\partial \omega}\,Z \right),
\qquad (4a)
$$

and analogously for $\varphi$ and $\kappa$. For the sensor parameter, the magnification $m$:

$$
\frac{\partial x}{\partial m}
= r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}\,Z .
\qquad (4b)
$$

For the new point coordinates:

$$
\frac{\partial x}{\partial X} = m\,r_{11}, \qquad
\frac{\partial x}{\partial Y} = m\,r_{12}, \qquad
\frac{\partial x}{\partial Z} = m\,r_{13}.
\qquad (4c)
$$
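The coefficients in (4a)-(4c) follow from differentiating (3); as a rough cross-check, they can be compared against numerical differentiation of the sketch given after (3). The parameter ordering below is arbitrary and purely illustrative.

```python
import numpy as np
# reuses parallel_project() from the sketch given after eq. (3)

def numeric_jacobian(P, X0, Y0, m, omega, phi, kappa, eps=1e-7):
    """Central-difference derivatives of (x, y) with respect to the unknowns
    (X0, Y0, omega, phi, kappa | m | X, Y, Z): a brute-force check of (4a)-(4c)."""
    params = np.array([X0, Y0, omega, phi, kappa, m, P[0], P[1], P[2]], dtype=float)

    def f(p):
        return parallel_project(p[6:9], p[0], p[1], p[5], p[2], p[3], p[4])

    J = np.zeros((2, params.size))
    for j in range(params.size):
        dp = np.zeros_like(params)
        dp[j] = eps
        J[:, j] = (f(params + dp) - f(params - dp)) / (2.0 * eps)
    return J  # columns 0-4: image (4a), column 5: sensor (4b), columns 6-8: new point (4c)
```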
3.3 Parallel Section in Space
With the previously derived orientation parameters, we are now
able to do object reconstruction with large numbers of points
which were not included in the parallel block adjustment. A
linear forward section for central projection is given in (Kraus,
1997; Moré, 2000). We have transferred it to the parallel case,
using the following equations:
$$
\hat{x} = (A^{T} A)^{-1} (A^{T} l),
\qquad (5)
$$

with $A$ containing, for every image in which the point was measured, the coefficients of the new point coordinates from (4c), and $l$ containing the corresponding image coordinates reduced by the known terms in $X_0$ and $Y_0$.
The above is a very fast way of computing three-dimensional
coordinates from an arbitrary number of images, since the
adjustment is linear: neither approximate values nor iterations
are required.
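A possible sketch of the linear intersection (5) is given below. It builds A from the coefficients in (4c) and reduces the observations by the known offset terms; the variable names, and the reuse of the rotation_matrix helper from the sketch after (3), are assumptions for the example.

```python
import numpy as np
# reuses rotation_matrix() from the sketch given after eq. (3)

def parallel_intersection(observations):
    """Linear forward section for one new point, eq. (5).

    observations: iterable of tuples (x, y, X0, Y0, m, omega, phi, kappa),
    one per image in which the point was measured."""
    A_rows, l_rows = [], []
    for x, y, X0, Y0, m, omega, phi, kappa in observations:
        R = rotation_matrix(omega, phi, kappa)
        A_rows.append(m * R[0, :])   # coefficients of (X, Y, Z) from (4c)
        A_rows.append(m * R[1, :])
        # reduced observations: known terms in X0, Y0 moved to the right-hand side
        l_rows.append(x + m * (R[0, 0] * X0 + R[0, 1] * Y0))
        l_rows.append(y + m * (R[1, 0] * X0 + R[1, 1] * Y0))
    A, l = np.array(A_rows), np.array(l_rows)
    X_hat = np.linalg.solve(A.T @ A, A.T @ l)   # normal-equation form of (5)
    v = A @ X_hat - l                           # residuals, usable for simple blunder checks
    return X_hat, v
```

With two equations per image, two images already determine the three coordinates; additional images simply add rows to A and l.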
The computation of errors, however, is not reliable with this
method. A closer look at the error matrix $Q_{xx} = (A^{T}A)^{-1}$ reveals
that the observations do not enter this matrix, hence all
image points appear to have the same error. Other ways have
been and are being explored to reveal blunders in the
observations. A rather simple approach is to use the residual
vector $v = A\hat{x} - l$ for the detection of errors. A more
sophisticated, yet time-consuming approach is to compute
the shortest distance of the new point to the line of sight of
each image. Since all lines of sight should intersect in one point,
these distances ought to be rather small and equal to each other.
Analysing these distances can help to detect blunders and to
identify the originating image.
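The distance check could, for instance, be realised as follows. Under the parallel model (3), each measurement defines a line of sight parallel to the third row of R, and one point on that line is obtained by back-projecting (x/m, y/m, 0); this construction and all names are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np
# reuses rotation_matrix() from the sketch given after eq. (3)

def distance_to_line_of_sight(P, x, y, X0, Y0, m, omega, phi, kappa):
    """Shortest distance between an adjusted new point P = (X, Y, Z) and the
    parallel line of sight defined by one image measurement (x, y)."""
    R = rotation_matrix(omega, phi, kappa)
    direction = R[2, :]                           # viewing direction (unit vector)
    origin = np.array([X0, Y0, 0.0]) + R.T @ np.array([x / m, y / m, 0.0])
    diff = np.asarray(P, dtype=float) - origin
    return np.linalg.norm(np.cross(diff, direction))
```

If the distances belonging to one image are systematically larger than those of the remaining images, its measurement is a blunder candidate.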
As mentioned above, this is currently a topic of research with a
growing variety of setup situations and point measurements.
4. RESULTS
The suggested methods have been applied to several projects
involving the tilting sample stage and some calibration objects.
As expected, the best results were achieved with the latest
cascade pyramid. As an example, a typical tilting series is
presented, using 13 images with up to 38 measured control points
each. All images shared one sensor, for which the radial lens
distortion and an affine transformation were carried as unknown
additional parameters. Thus, the whole adjustment system had
70 unknowns and 966 observations. Some results are shown in
Table 1.
sensor      magnification [pixel/nm]   magnification error [pixel/nm]
XL30 FEG    94.559                     0.116

image    X0 [nm]    m_X0 [nm]    ω [°]      m_ω [°]
number   Y0 [nm]    m_Y0 [nm]    φ [°]      m_φ [°]
                                 κ [°]      m_κ [°]

1        2.621      0.004        0.261      0.22
         1.941      0.004        351.038    0.21
                                 88.550     0.06
2        2.640      0.004        0.171      0.21
         1.882      0.004        345.935    0.19
                                 88.332     0.06
3        2.460      0.004        359.838    0.22
         2.017      0.004        356.701    0.22
                                 88.628     0.06
4        2.675      0.004        359.993    0.21
         1.794      0.004        341.131    0.18
                                 88.279     0.07
5        2.452      0.004        359.042    0.22
         2.033      0.004        1.728      0.22
                                 88.694     0.06

Table 1. The first 5 images of the tilting series.
The above table shows, as an example, the first five images of the
tilting series mentioned before. The tilting sample stage was set
to 0°, 355°, 5°, 350°, 10°, etc. Comparison with the adjusted
results confirms the high accuracy of the positioning table,
which also provides high stability and convenient handling
in use. The pyramid allows precise measurement, thereby
supporting accurate results. Nevertheless, deviations may occur