1. VaMoRs, the experimental vehicle of UniBwM for autonomous mobility and machine vision, is a 5-ton van.
Inexpensive PC-type computers have always been used for the higher levels: initially, a single PC based on the Intel 80286 microprocessor, in addition to the BVV 2 with its 8086 single-board computers, sufficed for guiding VaMoRs at its maximum speed of 96 km/h on an empty Autobahn in 1987, exploiting the 4D approach. Only through the powerful and intelligent interpretation constraints introduced by the integrated spatio-temporal models has it been possible to achieve these results with such low computing power on board. Since 1987, Intel 80386 single-board computers have been installed on an intermediate hierarchical level in the BVV 2 [Mysliwetz, Dickmanns 87], resulting in much more robust road recognition under strongly perturbed environmental conditions such as shadows cast by trees.
In 1991, all application software developed up to that point in different computer languages was translated into C and ported onto transputers. In a transition phase, the BVV 3 and the transputers are used jointly; with the next generation of transputer processors, the BVV will disappear.
Since 1984, active viewing direction control has been applied in the framework of our vision systems [Mysliwetz 84]. In 1986 it was implemented for better recognition of curved roads [Mysliwetz, Dickmanns 86]. The microprocessor for viewing direction control has since been integrated into the BVV 2. Especially with the introduction of a bifocal camera pair for better resolution at larger distances, this automatic viewing direction control became essential.
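As a rough, purely illustrative sketch of why active gaze control matters with a telephoto camera, the following C fragment (our own; the curvature value, the look-ahead distances and the small-angle road model are assumptions, not the BVV implementation) computes the pan angle needed to keep a look-ahead point on a curved road near the image centre:

```c
/* Illustrative sketch (not the BVV implementation): the pan angle needed
 * to fixate a look-ahead point at arc length L on a road of constant
 * curvature c0, using a small-angle road model where the centreline at
 * distance L is displaced laterally by about 0.5 * c0 * L * L. */
#include <math.h>
#include <stdio.h>

#define RAD2DEG (180.0 / 3.14159265358979)

static double pan_angle_deg(double c0, double L)
{
    return atan(0.5 * c0 * L) * RAD2DEG;
}

int main(void)
{
    const double c0 = 1.0 / 500.0;   /* assumed: 500 m radius curve */

    printf("pan angle at L = 20 m: %.1f deg\n", pan_angle_deg(c0, 20.0));
    printf("pan angle at L = 60 m: %.1f deg\n", pan_angle_deg(c0, 60.0));
    return 0;
}
```

Even at such modest curvatures, the required pan angle becomes a significant fraction of the narrow field of view of a telephoto lens, which is why mechanical viewing direction control is needed to keep the far look-ahead region in view.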
2. The vision-guided testbed ATHENE was built in 1990 and is equipped with a sensor system almost identical to that of VaMoRs, except for the second TV camera; here, however, the emphasis has been put on autonomous landmark navigation. The operational environment has been provided with landmarks in the form of well-discernible, static objects. Either the global position or the location of each target relative to the prescribed local trajectory has been known. The task of the real-time image processing system was to recognize the object and to deliver the corresponding measurement data to the navigation software. The event-driven data fusion filter and a Kalman filter are used to combine sensor data of different quality and to obtain the best estimate of the robot's state.
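A minimal sketch of such a fusion, assuming a scalar state and hand-picked noise values (none of them taken from the ATHENE system), is given below: dead reckoning drives the prediction step and lets the uncertainty grow, while an occasional landmark sighting drives the update step and shrinks it again.

```c
/* Minimal sketch (our illustration, not the ATHENE software): a scalar
 * Kalman filter fusing dead-reckoning predictions of one position
 * coordinate with occasional landmark measurements. All noise values
 * are assumed. */
#include <stdio.h>

typedef struct {
    double x;   /* estimated position [m]          */
    double p;   /* estimation error variance [m^2] */
} KalmanState;

/* Prediction step: dead reckoning shifts the estimate by dx and lets the
 * uncertainty grow by the process noise q. */
static void predict(KalmanState *s, double dx, double q)
{
    s->x += dx;
    s->p += q;
}

/* Update step: a landmark measurement z with variance r pulls the estimate
 * towards the measurement and shrinks the uncertainty. */
static void update(KalmanState *s, double z, double r)
{
    double k = s->p / (s->p + r);      /* Kalman gain */
    s->x += k * (z - s->x);
    s->p *= (1.0 - k);
}

int main(void)
{
    KalmanState s = { 0.0, 0.01 };
    int i;

    /* Twenty metres of dead reckoning in 1 m steps ... */
    for (i = 0; i < 20; ++i)
        predict(&s, 1.0, 1e-4);

    /* ... then a single landmark fix. */
    update(&s, 19.95, 1e-3);

    printf("x = %.3f m, variance = %.5f m^2\n", s.x, s.p);
    return 0;
}
```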
In the case of ill-conditioned optical information, the vehicle guidance system is able to travel a reasonable distance between target sightings. This is a kind of 'instrument flight', realized with memorized knowledge about the environment and the egomotion of the vehicle.
The allowable distance travelled between optical updates is a function of how much drift from the nominal path can be tolerated without colliding with an obstacle and without missing the next known landmark.
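A back-of-the-envelope sketch of such a budget, using as assumed inputs the roughly 1 cm of lateral drift per 20 m reported below and the 4 cm doorway clearance of the demonstration experiment:

```c
/* Back-of-the-envelope sketch with assumed inputs: how far the vehicle may
 * travel "blind" if dead reckoning drifts laterally by about 1 cm per 20 m
 * and the tightest passage leaves 4 cm of free space per side. */
#include <stdio.h>

int main(void)
{
    const double drift_per_metre = 0.01 / 20.0;  /* [m of error per m]  */
    const double clearance       = 0.04;         /* [m] free space/side */

    printf("allowable distance between optical updates: about %.0f m\n",
           clearance / drift_per_metre);
    return 0;
}
```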
Fig.13: Demonstration experiment for landmark navigation

Implementation for the AGV ATHENE started with the dead reckoning navigation approach. Reproducible experiments showed that it is possible on a smooth surface
to travel over a distance of 20 meters with a lateral error
of less than 1 cm. Another test course, shaped like an oval,
showed that after a 16 meter ride and a 360 degree turn the
heading error was less than 1 degree.
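The dead reckoning itself can be pictured as a simple planar pose integration from the wheel encoder readings; the sketch below (a generic formulation, not the ATHENE code) accumulates travelled distance and heading change and replays a 16 m ride with a full turn:

```c
/* Generic dead-reckoning sketch (not the ATHENE code): incremental distance
 * and heading-change readings are accumulated into a planar pose, using the
 * midpoint heading for the translation step. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979

typedef struct {
    double x, y;    /* position [m]  */
    double theta;   /* heading [rad] */
} Pose;

static void dead_reckon_step(Pose *p, double ds, double dtheta)
{
    double mid = p->theta + 0.5 * dtheta;   /* midpoint heading */
    p->x     += ds * cos(mid);
    p->y     += ds * sin(mid);
    p->theta += dtheta;
}

int main(void)
{
    Pose p = { 0.0, 0.0, 0.0 };
    int i;

    /* Replay a 16 m ride with a full 360 degree turn in 0.1 m steps. */
    for (i = 0; i < 160; ++i)
        dead_reckon_step(&p, 0.1, 2.0 * PI / 160.0);

    printf("x = %.2f m, y = %.2f m, heading = %.1f deg\n",
           p.x, p.y, p.theta * 180.0 / PI);
    return 0;
}
```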
These results have been obtained after putting some
effort into the servo control mechanism. Stable and effec-
tive control laws have been derived to allow accurate and
safe operation of the vehicle. The control system is split
up into different levels in order to have short reaction times
for the vehicle to follow the commanded trajectory. On rough ground, however, dead reckoning by itself does not yield acceptable performance.
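One plausible reading of such a split, sketched below with assumed gains and a strongly simplified model (not the control laws actually derived for ATHENE), is a cascade: a slower outer loop converts the lateral path error into a desired heading correction, and a faster inner loop converts the heading error into a steering command.

```c
/* Illustrative two-level split (assumed gains and model, not the control
 * laws derived for ATHENE): a slow outer loop maps the lateral path error
 * to a desired heading correction, a fast inner loop maps the resulting
 * heading error to a steering-rate command. */
#include <stdio.h>

static double outer_loop(double lateral_error, double k_y)
{
    double psi_des = -k_y * lateral_error;      /* desired heading change */
    if (psi_des >  0.3) psi_des =  0.3;         /* limit to small angles  */
    if (psi_des < -0.3) psi_des = -0.3;
    return psi_des;
}

static double inner_loop(double heading_error, double k_psi)
{
    return -k_psi * heading_error;              /* steering-rate command  */
}

int main(void)
{
    const double y_err   = 0.05;   /* 5 cm off the commanded trajectory */
    const double heading = 0.02;   /* heading w.r.t. the trajectory     */

    double psi_des = outer_loop(y_err, 1.0);
    double steer   = inner_loop(heading - psi_des, 2.0);

    printf("steering command: %.3f rad/s\n", steer);
    return 0;
}
```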
After implementation of the landmark navigation mode,
ATHENE moved autonomously around the laboratory
area. The course consisted of four hallways with a total
length of about 100 meters and a width of 1.80 meters. Four 90-degree turns connect the hallways. The speed during an autonomous drive was between 0.2 m/sec in narrow corners and 0.5 m/sec in straight hallways. Final
experiments in late 1991 in a factory environment demon-
strated the high-precision navigation capability with visual feedback from landmarks. The task to be performed by the robot was the following (see fig.13): the starting position was at a roughly known location. The diameters of the error ellipses were between 10 and 25 cm. After initialization
with an artificial landmark (1), a straight line of workbenches on the right-hand side had to be followed until reaching landmark (2), which consisted of a left-turn corner. The next landmark (3) was an extremely narrow doorway (4 cm of free space on each side of the vehicle). A predefined path, travelled in a dead reckoning manner, led to the fourth landmark (4), which consisted of a closed door. A left turn
brought the vehicle back to landmark (1), where it stopped.
Then, a backward docking maneuver to the starting position was performed. The error ellipse was now less than 5 cm in diameter. The same course was performed a second time after simulating a loading procedure.