are at the level of 2-3 cm for position coordinates, and ~10
arcsec and 10-20 arcsec for attitude and heading components,
respectively. These naturally do not represent the final
mapping accuracy, which can only be confirmed by an
independent comparison with the ground control, as
presented in Section 6. Figure 1 illustrates the system
architecture (Grejner-Brzezinska and Toth, 2002), and
Figure 2 presents the prototype hardware
configuration.
[Figure 1 block diagram: ODOT Centerline Surveying System hardware configuration - GPS antenna, Trimble 4000SSI receiver, Pulnix TMC-6700 camera, VGA/mouse interface]
Figure 1. Design architecture and data processing flow.
Camera CCD pixel size                     9 micron
Camera focal length                       6.5 mm
Camera height above road surface          3 m
Image scale                               1:461 (3 m / 0.0065 m)
Ground pixel size at nadir (no tilt)      4.1 mm
Ground coverage along vehicle             2.68 m
Ground coverage across vehicle            2 m
Max speed, no overlap at 10 FPS           26.8 m/s (96 km/h)
Max speed at 50% overlap                  13.4 m/s (48 km/h)

Table 1. Sensor characteristics and the image acquisition
parameters.
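The quantities in Table 1 follow directly from the basic camera parameters. As an illustration only, the short Python sketch below reproduces them from the pixel size, focal length, camera height, frame rate, and sensor format; the variable names are ours, and the 644 by 482 pixel format is taken from the camera description that follows.

```python
# Sketch: derive the Table 1 acquisition-geometry values from the
# basic camera parameters quoted in the paper.
pixel_size = 9e-6        # CCD pixel size [m]
focal_length = 6.5e-3    # camera focal length [m]
height = 3.0             # camera height above road surface [m]
frame_rate = 10.0        # target image acquisition rate [frames/s]
cols, rows = 644, 482    # CCD format [pixels]

scale = height / focal_length            # ~461, i.e. image scale 1:461
ground_pixel = pixel_size * scale        # ~4.1 mm ground pixel at nadir
along = cols * ground_pixel              # ~2.68 m coverage along the vehicle
across = rows * ground_pixel             # ~2.0 m coverage across the vehicle

v_no_overlap = along * frame_rate        # ~26.8 m/s (96 km/h), no overlap
v_50_overlap = 0.5 * along * frame_rate  # ~13.4 m/s (48 km/h), 50% overlap

print(f"scale 1:{scale:.0f}, ground pixel {ground_pixel*1000:.1f} mm")
print(f"footprint {along:.2f} m x {across:.2f} m")
print(f"max speed: {v_no_overlap:.1f} m/s (no overlap), "
      f"{v_50_overlap:.1f} m/s (50% overlap)")
```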
The imaging module consists of a single, down-looking,
color digital camera, Pulnix TMC-6700, based on a 644 by 482
CCD, with an image acquisition rate of up to 30 Hz (10 Hz is
the target for our application), which allows for full image
coverage at normal highway speed or 50% image overlap at
reduced speed (footprint size is about 2.68 by 2 m; see Table
1). More details are provided in (Grejner-Brzezinska and
Toth, 2000; Toth and Grejner-Brzezinska, 2001a and b). The
imaging system provides a direct connection between the
vehicle georeferencing (positioning) module and the road
marks visible in the imagery, allowing for the transfer of the
coordinates from the reference point of the positioning
system (center of the INS body frame) to the ground features.
Naturally, calibration components, including the camera
interior orientation (IO) and the INS/camera boresight, are
needed (for algorithmic details see, for example,
Grejner-Brzezinska, 2001). For 3D image
processing, a 50-60% overlap is needed along the vehicle
motion, which can be achieved with the hardware
implemented in our system. Stereovision is realized by the
platform motion, which, in turn, emphasizes the need for
high-precision sensor orientation provided by direct
georeferencing. Table 1 summarizes the camera
characteristics and the image acquisition conditions.
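The coordinate transfer from the INS reference point to the ground features is an instance of the standard direct georeferencing model. The sketch below is a minimal, hypothetical illustration; the function name, frame conventions, and toy numbers are our assumptions, the boresight rotation and lever arm would come from the INS/camera calibration mentioned above, and the scale factor from stereo intersection or a flat-road assumption.

```python
import numpy as np

def ground_point(r_ins_m, R_ins_m, r_img_cam, scale, R_bore, lever_arm):
    """Hypothetical direct-georeferencing sketch (symbols are ours):
    r_M = r_INS^M + R_INS^M * (s * R_cam^INS * x_cam + a^INS)

    r_ins_m   : INS body-frame origin in the mapping frame [m]
    R_ins_m   : 3x3 rotation, INS body frame -> mapping frame (GPS/INS)
    r_img_cam : image-point vector (x, y, -f) in the camera frame [m]
    scale     : point-dependent scale factor (e.g. stereo intersection)
    R_bore    : 3x3 boresight rotation, camera frame -> INS body frame
    lever_arm : camera perspective centre in the INS body frame [m]
    """
    return r_ins_m + R_ins_m @ (scale * (R_bore @ r_img_cam) + lever_arm)

# Toy numbers only: level, nadir-looking camera 3 m above the road,
# ideal boresight alignment and zero lever arm.
f = 6.5e-3
x_cam = np.array([0.002, -0.001, -f])           # measured image point [m]
p = ground_point(np.array([0.0, 0.0, 3.0]),     # INS origin in mapping frame
                 np.eye(3), x_cam,
                 scale=3.0 / f,                 # flat-road scale, ~461
                 R_bore=np.eye(3),
                 lever_arm=np.zeros(3))
print(p)  # ground coordinates of the road mark (Z = 0, on the road surface)
```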
The ODOT District 1 office built the complete system, with all
the sensors and supporting hardware installed, in early 2002;
Figure 2 shows the surveying vehicle.
Figure 2. Mapping vehicle.
3. IMAGE SEQUENCE PROCESSING CONCEPT
There are two key questions regarding the development of
the image-sequence processing concept. The first is whether
a more complex stereo model-based technique or a simple
monoscopic method should be used for the centerline
position extraction process. The second is whether a
completely real-time (or near real-time) implementation
should be considered or whether post-processing should
remain the only option. The main goal of
on-the-fly image processing is to determine the centerline
image coordinates in real time, so that only the extracted
polyline representing the center/edge lines needs to be
stored, rather than the entire image sequence. Clearly,
there is a strong dependency among these options, and the
decision was made at the beginning to develop the system
with full 3D capabilities and, if feasible, a real-time
implementation. Later, based on the initial performance, the
design may be modified. In simple terms, the stereo
processing can provide excellent accuracy but it imposes
more restrictions on the data acquisition process, such as the
need for continuous image coverage with sufficient overlap,
and it definitely requires more resources. The single-image
solution, however, is a compromise in terms of accuracy, but it
is very tolerant of the image acquisition process; e.g.,
gaps will not cause any problems and the processing
requirements are rather moderate.
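To illustrate why the single-image solution is a compromise: a centerline point can be mapped from one image only if its depth is fixed by an external assumption, typically a locally flat road at the known camera height. The sketch below is our illustration of that idea, not necessarily the implemented algorithm; it intersects the directly georeferenced image ray with the assumed road plane.

```python
import numpy as np

def monoplot_flat_road(r_cam_m, R_cam_m, xy_img, f, road_z=0.0):
    """Single-image ('monoscopic') sketch under a flat-road assumption:
    intersect the image ray with the horizontal plane Z = road_z.

    r_cam_m : camera perspective centre in the mapping frame [m]
    R_cam_m : 3x3 rotation, camera frame -> mapping frame (direct georef.)
    xy_img  : measured image coordinates (x, y) [m]
    f       : focal length [m]
    """
    ray = R_cam_m @ np.array([xy_img[0], xy_img[1], -f])  # ray direction
    s = (road_z - r_cam_m[2]) / ray[2]                     # plane intersection
    return r_cam_m + s * ray

# Toy example: level, nadir-looking camera 3 m above the road.
p = monoplot_flat_road(np.array([0.0, 0.0, 3.0]), np.eye(3),
                       (0.002, -0.001), 6.5e-3)
print(p)  # centerline point on the assumed road plane (Z = 0)
```

Any departure of the actual road surface from the assumed plane maps directly into a horizontal positioning error, which is why stereo intersection from 50-60% overlapping frames is preferred where accuracy matters, at the cost of stricter acquisition requirements.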
The real-time image processing is technically feasible due to
the simple sensor geometry and the limited complexity of the
imagery collected (single down-looking camera acquiring
consecutive images with about 50% overlap; only linear
features are of interest). The most challenging task is the
extraction of some feature points around the centerline area,
which can then be used for image matching.
Note that the availability of the relative orientation between
the two consecutive images considerably decreases the search
time for conjugate entities in the image pair, since the
usual 2D search space can, in theory, be reduced to one
dimension, along the epipolar lines. However, errors in