oints. In this
ised for all
fsets in the
m the robot
g tolerances.
rd transform
o control the
:h consist of
coders and a
ops do not
e not able to
d and actual
ler to obtain
an external
> differences
estimate the
the inverse
| number of
d in order to
mation. This
of industrial
ers are norm
ration will be
5/2 industrial
kg and has a
r specifies a
obot is used
asks.
ositions in his
in the inverse
position and
. The second
termination of
The deviations in x, y and z direction are given in mm; the deviations of the orientations are given in degrees.
  X [mm]  |  Y [mm]  |  Z [mm]  |  ω [deg]  |  φ [deg]  |  κ [deg]
   1.174  |  -0.284  |  -1.642  |   -0.008  |   -0.006  |    0.003
  -0.243  |   1.989  |   0.471  |    0.126  |   -0.052  |   -0.165
  -0.035  |  -0.189  |  -0.330  |    0.002  |    0.000  |    0.000
   0.172  |  -0.774  |   0.152  |    0.010  |    0.004  |    0.0019
  -0.096  |   0.618  |   0.313  |    0.008  |   -0.003  |   -0.011
   0.386  |   0.854  |   1.459  |   -0.011  |   -0.028  |    0.000
Table 1: Accuracy of an industrial robot
The experiment does not show the absolute accuracy of the robot; it only shows the repeatability error when different paths are used. The results show that the errors of this robot are larger than 1 mm. Changing further parameters such as temperature, payload and acceleration will increase the error. In other words, the absolute accuracy of this robot will be even lower than the relative accuracy presented in Table 1.
4. PHOTOGRAMMETRIC SYSTEM
4.1 Camera Model
While the basic camera model in photogrammetry is the pinhole camera, additional parameters are used for a more complete description of the imaging device. The following parameters are based on the physical model of D. C. Brown (Brown 1971). The parameters follow the notation for digital cameras presented by C. S. Fraser (Fraser 1997). Three parameters K1, K2 and K3 describe the radial distortion, two parameters P1 and P2 describe the decentring distortion, and two parameters B1 and B2 describe the difference in scale between the x- and y-axis of the sensor and the shearing. To obtain the corrected image coordinates (x, y), the parameters are applied to the distorted image coordinates (x', y') as follows:
\bar{x} = x' - x_0
\bar{y} = y' - y_0
\Delta x = \bar{x} r^2 K_1 + \bar{x} r^4 K_2 + \bar{x} r^6 K_3 + (2\bar{x}^2 + r^2) P_1 + 2 P_2 \bar{x} \bar{y} + B_1 \bar{x} + B_2 \bar{y}
\Delta y = \bar{y} r^2 K_1 + \bar{y} r^4 K_2 + \bar{y} r^6 K_3 + 2 P_1 \bar{x} \bar{y} + (2\bar{y}^2 + r^2) P_2
x = \bar{x} + \Delta x
y = \bar{y} + \Delta y
where (x_0, y_0) is the principal point and r = \sqrt{\bar{x}^2 + \bar{y}^2} is the radial distance from the principal point. The camera parameters are determined in a bundle adjustment using a planar test field; this bundle adjustment is carried out beforehand.
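As an illustration, a minimal sketch of how these corrections could be applied in code is given below. Python is used here only for illustration; the function name and the parameter-dictionary layout are assumptions, and the parameter values would come from the bundle adjustment mentioned above.

```python
def correct_image_coordinates(x_prime, y_prime, p):
    """Apply the Brown/Fraser parameters to distorted image coordinates
    (x', y') and return corrected coordinates, reduced to the principal
    point.  The dictionary keys (x0, y0, K1..K3, P1, P2, B1, B2) are
    illustrative only."""
    # reduce to the principal point
    xb = x_prime - p["x0"]
    yb = y_prime - p["y0"]
    r2 = xb**2 + yb**2                      # squared radial distance r^2

    # radial distortion
    radial = p["K1"] * r2 + p["K2"] * r2**2 + p["K3"] * r2**3
    dx = xb * radial
    dy = yb * radial

    # decentring distortion
    dx += (2 * xb**2 + r2) * p["P1"] + 2 * p["P2"] * xb * yb
    dy += 2 * p["P1"] * xb * yb + (2 * yb**2 + r2) * p["P2"]

    # affinity (scale difference) and shear
    dx += p["B1"] * xb + p["B2"] * yb

    return xb + dx, yb + dy
```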
4.2 Target recognition
For the target array, we used a combination of coded and
non-coded retro-reflective targets. In this case, the targets
were fixed on a portable plate. They were arranged in such a
way that for all intended robot positions at least four coded
targets were visible in the camera image. During the
measurement, coded targets are identified and measured first
and an initial approximation for the camera pose is computed.
Then, in a second step, all remaining (non-coded) targets are
identified and measured based on this initial approximation.
Regarding the design of coded targets, several possibilities exist. We used coded targets consisting of a central disk (used for measurement) and a concentric ring, which contains the code (used for identification); such a design has been suggested, for example, by Van den Heuvel and Kroon (1992) and by Schneider and Sinnreich (1992). The design is invariant with respect to rotation, scale change and perspective distortion.
Figure 3: Targets
In order to achieve a robust target identification and precise
image coordinate measurement, a very high contrast between
targets and the background is desirable. To achieve this, we use retro-reflective targets in combination with illumination in the near-infrared (IR) spectrum. IR light-emitting diodes are placed in a concentric ring closely around the camera's lens.
Additionally, the lens is covered with a daylight filter. This
way, practically no objects are visible in the images except
for the targets.
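Because the retro-reflective targets are essentially the only bright objects in these IR images, a simple threshold and blob-centroid step already yields candidate image coordinates. The sketch below illustrates this idea; it is not the measurement algorithm of the system itself, and the function name, threshold, minimum area and use of scipy are assumptions. Precise sub-pixel measurement would require a dedicated centroid or ellipse operator.

```python
import numpy as np
from scipy import ndimage

def find_target_candidates(image, threshold=128, min_area=10):
    """Find bright target blobs in an IR image of retro-reflective targets.

    Returns approximate (x, y) image coordinates as intensity-weighted
    centroids of the bright connected regions."""
    mask = image > threshold
    labels, n = ndimage.label(mask)            # connected bright regions

    centres = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() < min_area:              # reject small noise specks
            continue
        # intensity-weighted centroid (row, column) of this blob
        cy, cx = ndimage.center_of_mass(image * blob)
        centres.append((cx, cy))
    return centres
```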
4.3 Resection
The problem of spatial resection involves the determination
of the six parameters of the camera station's exterior
orientation. To solve the resection problem a two-stage
process is used. A closed-form solution using 4 points gives
the initial values of for an iterative refinement using all
control points.
Several alternatives for a closed-form solution to the resection problem have been given in the literature. In this approach the algorithm suggested by Fischler et al. (1981) is used. Named the "Perspective 4 Point Problem", their algorithm solves for the three unknown coordinates of the projection centre when the coordinates of four points lying on a common plane are given. Since in our case all targets are coplanar, the mapping between image and object points is a simple plane-to-plane transformation. The location of the projection centre can be extracted from this transformation T when the principal distance of the camera is known. The solution of this algorithm is not unique: there are two possible solutions, one in front of the plane and one behind it. In this case the solution in front of the plane is used.
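For illustration, the sketch below shows one generic way to obtain such an initial pose from coplanar control points. It is not the Fischler et al. algorithm itself: the plane-to-plane transformation (homography) is estimated by a direct linear transformation and then decomposed into a rotation and the projection centre, assuming a known principal distance c and image coordinates already corrected for distortion and reduced to the principal point. All names and the simple camera convention (viewing direction along +Z) are assumptions.

```python
import numpy as np

def initial_pose_from_plane(obj_xy, img_xy, c):
    """Closed-form initial pose from >= 4 coplanar control points (Z = 0).

    obj_xy : (N, 2) object coordinates on the target plane
    img_xy : (N, 2) corrected image coordinates (principal point removed)
    c      : principal distance
    Returns the rotation R (object -> camera) and the projection centre X0."""
    # plane-to-plane transformation (homography) via a direct linear transform
    A = []
    for (X, Y), (x, y) in zip(obj_xy, img_xy):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)

    # decompose H ~ K [r1 r2 t] with K = diag(c, c, 1)
    M = np.linalg.inv(np.diag([c, c, 1.0])) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    if lam * M[2, 2] < 0:            # keep the solution with the plane in front
        lam = -lam
    r1, r2 = lam * M[:, 0], lam * M[:, 1]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])        # only approximately orthogonal
    t = lam * M[:, 2]
    X0 = -R.T @ t                            # projection centre in object space
    return R, X0
```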
For the complete solution of the spatial resection problem the orientation of the camera must also be computed. This is based on the algorithm of Kraus (1996), which gives a solution for determining the rotation angles when the coordinates of the projection centre are already known.
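The initial values obtained in this way can then be refined iteratively over all control points, as described above. A minimal least-squares sketch of this second stage follows; it keeps the same simple camera convention as the previous sketch, uses scipy's generic least_squares solver rather than the classical photogrammetric adjustment, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(obj_pts, img_pts, c, R0, X0_init):
    """Iterative refinement of the exterior orientation (second stage).

    obj_pts : (N, 3) control point coordinates
    img_pts : (N, 2) corrected image coordinates
    c       : principal distance
    R0, X0_init : initial rotation and projection centre, e.g. from the
                  closed-form solution sketched above."""
    def residuals(p):
        R = Rotation.from_rotvec(p[:3]).as_matrix()   # object -> camera
        X0 = p[3:]
        cam = (obj_pts - X0) @ R.T                    # points in camera frame
        # collinearity equations; simple +Z viewing convention is assumed,
        # the classical photogrammetric formulation uses -c instead
        x = c * cam[:, 0] / cam[:, 2]
        y = c * cam[:, 1] / cam[:, 2]
        return np.concatenate([x - img_pts[:, 0], y - img_pts[:, 1]])

    p0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), X0_init])
    sol = least_squares(residuals, p0)
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R, sol.x[3:]
```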