3.3 Image acquisition.
The image acquisition process is the sequence from the imaging
of the object on the sensor to the stored digital image. This can
be described as in figure 6.
[Figure 6: block diagram of the stages Imaging -> Analog processing -> A/D conversion (with synchronization) -> Digital processing and storage, connected by sync. and video-signals and a digital signal, with grabbing parameters and image parameters as inputs.]
Figure 6. Decomposition of the image acquisition process.
The ordinary frame rate of CCD-cameras (CCIR, European
standard) is 25 Hz, i.e., 25 images are captured per second.
By forgoing interlacing and using only one of the two
video fields, a doubled rate can be obtained. In measurements of
fast-moving objects this is a quite common method (e.g., Baltsavias
and Stallmann, 1990).
Different methods are used to interpolate the 'missing' field in
order to maintain complete frames for the later analysis.
The aim of this process is to maintain the spatial resolution in
the y-direction without significant loss of precision.
The method applied in our system is based on convolution,
filtering the field with a 3×3 kernel. This process
also eliminates discontinuities in the line direction (x-direction).
Another closely related approach is linear interpolation in the
y-direction. In a future version we will keep the field intact
(without interpolation) throughout the measurement process in
order to control and increase the precision and accuracy.
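The linear interpolation in the y-direction mentioned above can be sketched as follows (a minimal NumPy sketch; it assumes the field holds the even lines of the full frame, and the function and variable names are illustrative, not from the original system):

```python
import numpy as np

def interpolate_missing_field(field):
    """Reconstruct a full frame from a single video field by linear
    interpolation in the y-direction. `field` is assumed to hold the
    even lines of the frame."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=float)
    frame[0::2] = field                             # copy the recorded lines
    # each missing line is the mean of its two recorded neighbours
    frame[1:-1:2] = 0.5 * (field[:-1] + field[1:])
    frame[-1] = field[-1]                           # last line: replicate
    return frame
```

The 3×3 convolution variant used in the system would additionally smooth along the x-direction; this sketch only restores the y-resolution.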
3.4 Reduced radiometric resolution of the image.
The analog video-signal from the CCD-sensor is usually
converted into an 8-bit number ranging from 0 (black) to 255
(white) by an analog-to-digital (A/D) converter. The output
is stored in a frame buffer on the frame grabber board and
afterwards read out to the host computer. Using a reduced
radiometric resolution, i.e., fewer than 8 bits per pixel, the number
of frames that can be stored in the buffer is increased. When
the read-out from the frame buffer to the computer is slower
than the video rate, it is advantageous to have the required storage
capacity available in the frame buffer, especially when dealing with
fast-moving dynamic scenes.
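The trade-off can be illustrated with a small calculation (the buffer and image sizes below are illustrative assumptions, not figures from the text):

```python
def frames_in_buffer(buffer_bytes, width, height, bits_per_pixel):
    """Number of complete frames that fit in a frame buffer at a
    given radiometric resolution."""
    bits_per_frame = width * height * bits_per_pixel
    return (buffer_bytes * 8) // bits_per_frame

# with a hypothetical 4 MB buffer and 512x512 images:
full = frames_in_buffer(4 * 1024 * 1024, 512, 512, 8)    # 8 bits/pixel
binary = frames_in_buffer(4 * 1024 * 1024, 512, 512, 1)  # 1 bit/pixel
```

Dropping from 8 bits/pixel to a binary image multiplies the number of storable frames by eight, which is what makes continuous sequences at video rate feasible.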
The influence of the number of quantization levels on the
precision and accuracy of pointing is investigated by Trinder
(1989) showing no significant deterioration for quantization
above 4 bits/pixel. Below 5 bits/pixel the pointing precision to
circular targets decreases considerably. In a binary image (1
bit/pixel) the pointing precision is estimated to be
approximately 10-15 times larger than with 5 bits/pixel, or 15-
20 times larger than with full quantization (8 bits/pixel).
Similar results are obtained by applying the 'locale' concept
(Havelock, 1989 and 1991) where the sizes of regions of
indistinguishable object position are the basis for the estimation
of precision. As an example it is stated that an estimate of the
position of a small circular target in a binary image has a
precision of 0.3 pixel in the worst case. With an image
scale of 1:30 this indicates a precision better than 0.1 mm on
the object, which is well within our requirement. This allows
storage of a continuous sequence of 32 binary images in the
frame buffer at the video rate.
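Reducing the radiometric resolution of an already digitized 8-bit image can be sketched as dropping the least significant bits (a uniform-quantization sketch; the function name is illustrative):

```python
import numpy as np

def requantize(image8, bits):
    """Reduce an 8-bit image to `bits` bits/pixel by dropping the
    least significant bits (uniform quantization)."""
    assert 1 <= bits <= 8
    return image8 >> (8 - bits)

img = np.array([[0, 100, 255]], dtype=np.uint8)
binary = requantize(img, 1)    # 1 bit/pixel: 0 below 128, 1 otherwise
five_bit = requantize(img, 5)  # 32 grey levels instead of 256
```

In a real system the quantization would normally happen in the A/D converter or frame grabber rather than in software, but the effect on the stored values is the same.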
3.5 Target location.
The process of detection and measurement of the position of
the targets in digital images requires subpixel resolution
because the data is a digital representation of an analog signal
sampled onto a discrete array and simultaneously quantised to
a finite number of levels. To obtain a satisfactory measurement
accuracy it is necessary to measure in between the sample
positions.
The precision of the determination depends upon the method,
image quality, quantization levels, pixel size and noise.
Since circular-shaped targets were chosen in our system,
methods suitable for location of such targets are of main
interest here.
Different techniques for subpixel location fall into the
categories of interpolation, correlation, centroiding, edge
analysis or shape-based methods. The performance of different
methods is investigated with respect to spatial and radiometric
resolution and accuracy in West and Clarke (1990). Use was
made of both simulated data and real data from optical
triangulation with 1D sensors and laser light sources. Three of
the categories were found to be the most applicable to this type
of task: interpolation, correlation and centroiding. The results
show that most techniques can achieve an accuracy better than 0.1
pixel. A weighted centroid method obtained the best results in
simulation, while a Vernier method (Tian and Huhns, 1986) was
better on real data.
For use with circular targets, or symmetric targets in general,
variants of the centroid method are the most common. The
techniques differ in the way the centroid is computed and the
pixel values used. In the simplest approach the standard
first order moment is computed using the grey values of the
target in the image. Thresholding is used to reduce the number
of pixels in the computation. With a symmetric object the
centroid will give a perfect result. Asymmetry together with
noise and quantisation are the main contributors to the loss of
accuracy (West and Clarke, 1990).
The centroid method is based on formula (1) and (2):
x_s = ( Σ_i Σ_j i · p_ij ) / ( Σ_i Σ_j p_ij )        (1)
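The grey-value weighted centroid with thresholding described above can be sketched as follows (an illustrative implementation under the assumption that p_ij is the grey value at column i, row j; not the authors' code):

```python
import numpy as np

def centroid(window, threshold=0):
    """Grey-value weighted centroid of a target window.
    Pixels at or below `threshold` are set to zero, reducing the
    number of pixels entering the computation."""
    p = np.where(window > threshold, window.astype(float), 0.0)
    i = np.arange(p.shape[1])             # column index (x-direction)
    j = np.arange(p.shape[0])             # row index (y-direction)
    total = p.sum()
    x_s = (p * i).sum() / total           # first-order moment in x
    y_s = (p * j[:, None]).sum() / total  # first-order moment in y
    return x_s, y_s
```

For a symmetric target the result is exact; asymmetry, noise and quantization, as noted above, are the main contributors to the loss of accuracy.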