2. GEOMETRIC CALIBRATION OF DESKTOP SCANNERS
2.1 Sources of geometric errors in desktop scanners
In order to objectively evaluate geometric accuracy, it is necessary to know the most important error sources and their nature. In general, these errors can be divided into slowly and frequently varying errors.
Slowly varying errors are:
• lens distortion
• misalignments of the CCD sensors
• imperfections of the transport mechanism
Frequently varying errors are:
• vibration
• electronic noise
• mechanical positioning
It is important to note that only the effects caused by slowly varying errors can be removed. The stability of the error sources over a longer period of scanning is therefore very important; it is a prerequisite for removing these errors efficiently.
2.2 Mathematical model
Geometric calibration of desktop scanners can be seen as an interpolation problem. In most cases this problem can be solved by some method of approximation, in which the law of error propagation is assumed to follow some mathematical function. Because several error sources with unknown error propagation are involved, the most suitable interpolation approach for geometric calibration is one of the prediction methods. Linear prediction by the least squares method was chosen as the most appropriate method in this research.
2.2.1 Linear prediction by least squares method
This method is also known as least-squares collocation, with the remark that linear prediction represents a special case of collocation. Moritz first introduced the method in geodesy, for the determination of gravity anomalies and vertical deflections. The method starts with the assumption that the interpolating function can be considered a stationary random function of two variables. For this assumption to hold, the trend must first be removed from the values at the reference points, so that they have small absolute values and an arithmetic mean close to zero. The remaining values are then assumed to be composed of a correlated part (systematic, also known as the signal) and an uncorrelated part (random, known as the noise). The task of the prediction is to determine the correlated component at an interpolated point, based on the known reference values. The interpolated value is the sum of the previously removed trend value and this systematic part. The random part is treated as the measurement error at the reference point, i.e. as noise, which should be filtered out (removed).
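As an illustration only (not part of the original paper), the following Python/NumPy sketch shows how the correlated component at new points could be predicted from the detrended residuals at the reference points; the function name, the covariance callable `cov` and the noise variance `noise_var` are assumptions introduced for this example.

```python
import numpy as np

def predict_signal(ref_xy, residuals, query_xy, cov, noise_var):
    """Linear prediction (least-squares collocation) of the correlated
    (signal) component at the query points.

    ref_xy    : (n, 2) coordinates of the reference points
    residuals : (n,)   detrended values at the reference points
    query_xy  : (m, 2) coordinates of the points to be interpolated
    cov       : callable, covariance as a function of point distance
    noise_var : variance of the uncorrelated (noise) component
    """
    # Covariance matrix between reference points; the noise variance
    # is added on the diagonal
    d_ref = np.linalg.norm(ref_xy[:, None, :] - ref_xy[None, :, :], axis=-1)
    C = cov(d_ref) + noise_var * np.eye(len(ref_xy))

    # Cross-covariances between query points and reference points
    d_query = np.linalg.norm(query_xy[:, None, :] - ref_xy[None, :, :], axis=-1)
    c_query = cov(d_query)

    # Predicted signal: c_query * C^-1 * residuals
    return c_query @ np.linalg.solve(C, residuals)
```

The interpolated value at each point is then the sum of this predicted signal and the trend value that was removed beforehand.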
Trend removal can be done by calculating a trend surface and subtracting it from the measured surface. The trend surface is usually represented by low-order polynomials. Alternatively, a preliminary interpolation with a relatively high level of smoothing can be performed.
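A minimal sketch of such a trend removal, assuming a first-order (planar) polynomial surface fitted by least squares, could look as follows; the function names are hypothetical.

```python
import numpy as np

def remove_trend(ref_xy, values):
    """Fit a first-order (planar) trend surface z = a + b*x + c*y by least
    squares and return the detrended residuals together with the trend
    coefficients, so the trend can be restored after the prediction."""
    x, y = ref_xy[:, 0], ref_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])        # design matrix
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)   # a, b, c
    return values - A @ coef, coef

def trend_at(coef, xy):
    """Evaluate the fitted trend surface at arbitrary points."""
    return coef[0] + coef[1] * xy[:, 0] + coef[2] * xy[:, 1]
```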
For the final interpolation of unknown values at arbitrary points it is necessary to know the stochastic characteristics of both components (correlated and uncorrelated) that contribute to the known values. These characteristics can be determined empirically from the known values at the reference points, or on the basis of several previously accepted assumptions.
These characteristics are represented by a covariance function, under the prior assumption that this function depends only on the mutual distance between the observed points and not on their position. The Gaussian bell function is often used as the covariance function (eq. 1):
C(d) = C \cdot e^{-K^{2} d^{2}} \qquad (1)
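A direct implementation of this covariance model might look like the following sketch; the exact form of the exponent is an assumption, since the text only names C and K as the unknown parameters.

```python
import numpy as np

def gaussian_cov(d, C, K):
    """Gaussian bell covariance function C(d) = C * exp(-K**2 * d**2);
    it depends only on the distance d between points."""
    return C * np.exp(-(K ** 2) * d ** 2)
```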
The unknown parameters of the covariance function (C and K for the Gaussian function; d is the distance between samples) can be determined empirically if enough reference points are available (more than 30). From the known values, the variance is calculated and considered the same for all reference points, as are the covariances for different distance intervals between reference points. The unknown parameters of the covariance function can then be determined from these empirically calculated covariances. The case of "pure" prediction means that the value C of the Gaussian function is equal to the covariance of the correlated components. Decreasing this value increases the amount of filtering, i.e. the interpolated values will differ from the known values at the reference points.
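The empirical determination described above could be sketched as follows, assuming the residual products are grouped into distance intervals and the model parameters are fitted with SciPy's curve_fit; the bin width and starting values are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_covariance(ref_xy, residuals, bin_width=5.0):
    """Estimate the empirical covariance of the detrended residuals for
    different distance intervals and fit the parameters C and K of the
    Gaussian covariance model C * exp(-K**2 * d**2)."""
    n = len(residuals)
    d = np.linalg.norm(ref_xy[:, None, :] - ref_xy[None, :, :], axis=-1)
    prod = np.outer(residuals, residuals)

    iu = np.triu_indices(n, k=1)            # all point pairs (i < j)
    dist, cov_prod = d[iu], prod[iu]

    # Average the residual products within each distance interval
    bins = np.arange(0.0, dist.max() + bin_width, bin_width)
    idx = np.digitize(dist, bins)
    centers, emp_cov = [], []
    for b in range(1, len(bins) + 1):
        mask = idx == b
        if mask.any():
            centers.append(dist[mask].mean())
            emp_cov.append(cov_prod[mask].mean())

    # Fit the unknown parameters of the covariance function
    (C, K), _ = curve_fit(lambda dd, C, K: C * np.exp(-(K ** 2) * dd ** 2),
                          centers, emp_cov, p0=[np.var(residuals), 0.1])
    return C, K
```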
During scanner geometric calibration, the measurements refer to a two-dimensional coordinate system. Since scanning errors are usually considered in the directions of the two coordinate axes, the prediction is carried out independently for each of them, and a covariance function is determined for each coordinate direction.
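Tying the sketches above together, a hypothetical per-axis application could look like this; `ref_xy`, `dx`, `dy` and `query_xy` are assumed arrays holding the grid coordinates, the measured error components at the reference points and the points to be corrected, and setting the noise variance to zero corresponds to pure prediction without filtering.

```python
# Hypothetical usage of the sketches above (remove_trend, trend_at,
# fit_covariance, gaussian_cov, predict_signal); dx and dy are the
# measured scanning errors at the reference (grid) points.
res_x, trend_x = remove_trend(ref_xy, dx)
res_y, trend_y = remove_trend(ref_xy, dy)

Cx, Kx = fit_covariance(ref_xy, res_x)
Cy, Ky = fit_covariance(ref_xy, res_y)

# Independent prediction for each coordinate direction
err_x = trend_at(trend_x, query_xy) + predict_signal(
    ref_xy, res_x, query_xy, lambda d: gaussian_cov(d, Cx, Kx), noise_var=0.0)
err_y = trend_at(trend_y, query_xy) + predict_signal(
    ref_xy, res_y, query_xy, lambda d: gaussian_cov(d, Cy, Ky), noise_var=0.0)
```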
2.3 Software for the geometric calibration
The software DigiScan 2000 was adapted for the needs of desktop scanner calibration. The software is primarily intended for the calibration and georeferencing of scanned maps, but some additional functions for image analysis and processing are also provided. Batch resampling is supported as well, which is very important for photogrammetric scanning. DigiScan 2000 has built-in mathematical models for the Helmert, affine and second-order polynomial transformations, as well as a mathematical model for linear prediction by least squares, with or without filtering. All the processing and analysis of the calibration results, as well as the resampling of the scanned images, was done with this software.
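As a generic illustration of one of the simpler models mentioned here (and explicitly not DigiScan 2000 code), an affine transformation could be fitted to control points by least squares as follows.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2-D affine transformation
    x' = a*x + b*y + c,  y' = d*x + e*y + f, from control points."""
    A = np.column_stack([src[:, 0], src[:, 1], np.ones(len(src))])
    px, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)   # a, b, c
    py, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)   # d, e, f
    return px, py

def apply_affine(px, py, pts):
    """Apply the fitted affine transformation to a set of points."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    return np.column_stack([A @ px, A @ py])
```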
2.4 Process of calibration and image scanning
The nature of the scanning errors, as well as their stability and repeatability during scanning, has to be determined through the calibration process. Several objectives were set for the geometric scanner calibration:
• determination of the overall scanner error and of its systematic part,
• determination of the global stability of the systematic errors,
• determination of the local stability of the systematic errors.
The influence of changing the spatial and radiometric resolution of scanning, as well as the influence of scanner warm-up, on the scanning errors also has to be considered.
[Flowchart of the calibration procedure: scanning and measurement of the grid, transformation, and testing of the results.] The results are tested against the required standards before the calibration parameters are adopted.
3. CALIBRATION OF THE SCANNERS

3.1 Epson scanner

This scanner belongs to the higher class of desktop scanners from a well-known manufacturer.
The basic technical characteristics of the scanner are given in Table 2.

Table 2: Basic technical characteristics of the scanner (scan area, photoelement, light source, optical resolution, colour and monochrome modes)
3.2 Grid

A glass plate grid is used for the calibration.