[Figure: flowchart — image → binary image → connected component labeling → image coordinates of coded and un-coded targets; together with the world coordinates of the coded and un-coded targets, an approximation of the camera orientation and position is obtained, the un-coded targets are labeled, a least squares 3D spatial resection is computed, and the result is transformed to the robot base system]
Figure 4 Resection process
Figure 4 shows the complete operating sequence for the resection process. In principle the approximation of the camera orientation and position is not necessary, because the robot position and orientation could serve as the approximation. We cannot use this approximation, however, because no information is available to obtain the robot pose in real time from the robot control.
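As an illustration of the least squares spatial resection step, the following minimal sketch uses OpenCV's solvePnP as a stand-in for the resection described above; the target coordinates, camera matrix, and simulated pose are hypothetical placeholders rather than values from the system described here.

import numpy as np
import cv2

# Hypothetical world coordinates of coplanar coded targets (metres).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.5, 0.5, 0.0],
                          [0.0, 0.5, 0.0],
                          [0.25, 0.25, 0.0]])

# Assumed interior orientation (camera matrix); distortion neglected.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Simulate image measurements by projecting with a known pose so the
# example is self-consistent; in the real pipeline these come from the
# target detection and labeling steps.
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.1], [-0.05], [1.5]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

# Iterative least squares resection (internal approximation refined by
# Levenberg-Marquardt).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)          # rotation matrix of the camera pose
print(ok, rvec.ravel(), tvec.ravel())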
4.4 Forward Intersection
Forward intersection is another possibility for an accurate robot pose measurement. Two or more cameras observe the coded and un-coded targets fixed in the object space. In a beforehand calibration the relative position and orientation with respect to a "master camera" is calculated. For identifying the un-coded targets in one image the same algorithm as described in section 4.3 is used. With the technique of epipolar geometry the homologous targets in all other images can be found, as sketched below.
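A minimal sketch of this epipolar matching step follows; the function names and the pixel threshold are hypothetical, and the fundamental matrix F between two cameras is assumed to be available from the beforehand calibration.

import numpy as np

def epipolar_distance(x1, x2, F):
    # Distance of point x2 (second image) to the epipolar line F @ x1
    # induced by point x1 (first image); x1, x2 are homogeneous 3-vectors.
    line = F @ x1
    return abs(x2 @ line) / np.hypot(line[0], line[1])

def match_targets(points1, points2, F, max_dist=1.5):
    # Greedy matching: for each un-coded target in image 1, accept the
    # target in image 2 closest to its epipolar line (threshold in pixels).
    matches = []
    for i, p1 in enumerate(points1):
        x1 = np.append(p1, 1.0)
        d = [epipolar_distance(x1, np.append(p2, 1.0), F) for p2 in points2]
        j = int(np.argmin(d))
        if d[j] < max_dist:
            matches.append((i, j))
    return matches

Ambiguities between several candidates near one epipolar line can be resolved when more than two cameras observe the scene, since every additional image contributes a further epipolar constraint.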
After this, the 3D model coordinates of the observed coded and un-coded targets are computed by forward intersection. The camera pose in relation to the test field coordinate system is equivalent to the relationship between the test field system and the model system. For this, a closed-form solution for computing the transformation is carried out.
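The following sketch illustrates both steps under stated assumptions: cv2.triangulatePoints stands in for the forward intersection, and the transformation is computed with the SVD-based closed-form rigid-body fit (Arun/Horn), which is an assumption, as the text does not name the particular closed-form method; all numeric values are hypothetical.

import numpy as np
import cv2

def rigid_transform(A, B):
    # Closed-form least squares fit of R, t with B ≈ R @ A + t via SVD
    # (Arun/Horn); A and B are 3xN sets of corresponding points.
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

# Hypothetical 3x4 projection matrices of the master camera and a second
# camera (from the beforehand relative orientation), and target points.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [2.0, 2.1, 2.2, 2.0]])
Xh = np.vstack([X, np.ones(4)])
u1, u2 = P1 @ Xh, P2 @ Xh
x1 = u1[:2] / u1[2]                 # homologous image coordinates
x2 = u2[:2] / u2[2]

Xm_h = cv2.triangulatePoints(P1, P2, x1, x2)   # forward intersection
Xm = Xm_h[:3] / Xm_h[3]                        # 3D model coordinates

# Test field coordinates of the targets, here simulated by a known
# rotation and translation; the closed-form fit recovers the
# model-to-test-field transformation.
R_true = cv2.Rodrigues(np.array([[0.0], [0.0], [np.pi / 2]]))[0]
t_true = np.array([[1.0], [2.0], [0.5]])
X_test = R_true @ Xm + t_true
R_fit, t_fit = rigid_transform(Xm, X_test)
print(np.round(R_fit, 3), t_fit.ravel())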
5. HAND-EYE-CALIBRATION
While the determined camera pose refers to an arbitrary coordinate system defined by the test field, for the correction of the robot pose the data must be available in the robot coordinate system. To transform the camera pose into the robot system a hand-eye calibration is necessary. It computes the offset of the fixed yet unknown position and
orientation of the camera coordinate system with respect to the robot hand coordinate system. The robot hand coordinate system, also known as the end-effector frame or in some cases the tool centre point (TCP), is the coordinate system that is often used within the robot control software. In this section only a short review of the implemented algorithm is given; a full derivation is given in Tsai, Lenz (1989).
The next table gives a short description of the coordinate systems involved in the computation of the hand-eye calibration.

R   Robot coordinate system. It is fixed in the robot station. The robot arm moves around it, and the encoder output of all joints enables the system to tell where the TCP is relative to R.
H   Hand coordinate system. Gives the position and orientation of the robot hand relative to R. The z-axis coincides with the last link of the robot; the x,y axes are parallel to the robot flange.
C   Camera coordinate system. Moves with the robot. The offset to the hand coordinate system is constant. The z-axis coincides with the optical axis, and the x,y axes are parallel to the image plane. The principal point is the origin.
T   Test field coordinate system. It is arbitrary with respect to the robot system and does not change its position and orientation during the hand-eye calibration.
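To make the interplay of these frames concrete, the following minimal sketch chains a constant camera-to-hand offset with a hand pose to express the camera pose in the robot system; all numeric values are hypothetical placeholders.

import numpy as np

def homogeneous(R, t):
    # Build a 4x4 homogeneous transform from rotation R and translation t.
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Hand pose in the robot system R (from the robot control) and the
# constant camera-to-hand offset (the result of the hand-eye calibration).
H_rh = homogeneous(np.eye(3), np.array([0.8, 0.1, 1.2]))   # R <- H
H_hc = homogeneous(np.eye(3), np.array([0.0, 0.05, 0.1]))  # H <- C

# Camera pose expressed in the robot system: chain R <- H <- C.
H_rc = H_rh @ H_hc
print(H_rc)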
The next figure shows the relationships between the hand coordinate system and the camera coordinate system at two robot positions.
[Figure: hand and camera coordinate frames at robot positions i and j]
Figure 5 Relationships between the different coordinate frames
From the figure the following equation for the homogeneous transformation matrices can be extracted:

$H_{h_{ij}} H_{ch} = H_{ch} H_{c_{ij}}$
The homogeneous matrices $H_{h_{ij}}$ and $H_{c_{ij}}$ are well known from the robot control and from the direct measurement of the camera pose. One characteristic of the equation helps to solve it: if the rotation matrices of $H_{h_{ij}}$ and $H_{c_{ij}}$ are parameterised as a rotation about an axis, the rotation angle is equal. Because of the six independent unknowns, two robot movements with non-parallel rotation axes are required.
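The equal-angle property can be verified numerically: since $H_{c_{ij}} = H_{ch}^{-1} H_{h_{ij}} H_{ch}$, the rotation parts are similar matrices and therefore have the same trace and hence the same rotation angle. A small sketch with synthetic, hypothetical transforms:

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def homogeneous(R, t):
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def rotation_angle(R):
    # Angle of the axis-angle parameterisation of a rotation matrix.
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

# Hypothetical hand motion between robot positions i and j, and an
# assumed (normally unknown) hand-eye offset H_ch.
H_hij = homogeneous(rot_x(0.7), np.array([0.1, 0.0, 0.2]))
H_ch = homogeneous(rot_z(0.4), np.array([0.0, 0.05, 0.1]))

# The hand-eye equation H_hij @ H_ch = H_ch @ H_cij yields the camera motion:
H_cij = np.linalg.inv(H_ch) @ H_hij @ H_ch

# Both rotation angles are identical (up to numerical noise).
print(rotation_angle(H_hij[:3, :3]), rotation_angle(H_cij[:3, :3]))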