Seeding points are identified as local maxima
in the intensity, as the location of streaks is well approxi-
mated by a local maximum gmax(x,y) (Fig. 4). A minimum
search horizontally and vertically from gmax(x,y) enables the
calculation of the peak height:
Δg = min_i (g_max − g_min,i) , (1)

with g_min,i being the minima revealed by the minimum search.
In addition, the half width is measured. Both peak height and
half width are required to lie above a threshold to prevent
random noise being a seeding point for the region growing.
After these seed points are identified, the growing algorithm
segments the object following two rules. Firstly, a pixel is
accepted as an object point only when its gray value is
higher than an adaptive threshold, which is calculated from
g_min,i by interpolation; for details regarding the computation of
the threshold see [Hering et al., 95b]. Secondly, only those
pixels forming a connected object are considered. A result
of the described segmentation algorithm is shown in Fig. 5.
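The seed-point test of Eq. (1) and the subsequent region growing can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the adaptive threshold interpolated from g_min,i in [Hering et al., 95b] is simplified here to a constant per-seed threshold, and the half-width test is omitted.

```python
import numpy as np

def peak_height(img, y, x):
    """Minimum search horizontally and vertically from a local
    maximum g_max(x, y); returns dg = min_i(g_max - g_min,i)
    (Eq. 1) together with the four minima found."""
    rows, cols = img.shape
    minima = []
    for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
        yy, xx = y, x
        # walk while the gray value keeps decreasing (or stays flat)
        while (0 <= yy + dy < rows and 0 <= xx + dx < cols
               and img[yy + dy, xx + dx] <= img[yy, xx]):
            yy, xx = yy + dy, xx + dx
        minima.append(img[yy, xx])
    return img[y, x] - max(minima), minima

def region_grow(img, seed, threshold):
    """4-connected region growing: a pixel joins the object only if
    its gray value exceeds the threshold (constant here, for
    simplicity) and it is connected to the seed."""
    rows, cols = img.shape
    stack, region = [seed], set()
    while stack:
        y, x = stack.pop()
        if (y, x) in region or not (0 <= y < rows and 0 <= x < cols):
            continue
        if img[y, x] > threshold:
            region.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region
```

For a synthetic 3×3 bright blob with a central peak, `peak_height` returns the full peak height and `region_grow` recovers exactly the blob pixels.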
Each object identified by the segmentation is then labeled
with a flood fill algorithm borrowed from computer graphics.
The size of each object can then be determined, and thereby
large objects (reflections at the water surface) removed.
Figure 5: Segmented image of the original gray value picture (Fig. 4
top). 501 objects were found. The reflections at the water surface
were eliminated by the labeling algorithm.
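The labeling step can be sketched with a stack-based flood fill that assigns one label per connected object; the resulting object sizes then allow large objects (surface reflections) to be discarded. Function names are illustrative only.

```python
def label_objects(mask):
    """Flood-fill labeling of a binary mask (list of lists of 0/1).
    Returns a label image and a dict mapping label -> object size."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes, next_label = {}, 1
    for y in range(rows):
        for x in range(cols):
            if mask[y][x] and not labels[y][x]:
                stack = [(y, x)]
                labels[y][x] = next_label
                sizes[next_label] = 0
                while stack:
                    cy, cx = stack.pop()
                    sizes[next_label] += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                next_label += 1
    return labels, sizes

def remove_large(labels, sizes, max_size):
    """Remove objects above max_size, e.g. water-surface reflections."""
    big = {l for l, s in sizes.items() if s > max_size}
    return [[0 if l in big else l for l in row] for row in labels]
```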
Figure 6: The temporal overlap θ of the exposure time in two
consecutive fields of the same frame yields a spatial overlap of
corresponding streaks.
Image Sequence Analysis After segmentation, the corre-
spondence problem of identifying the same particle in the next
image frame is solved by calculating its image field streak
overlap: some cameras (e.g. the Pulnix TM640) show a sig-
nificant overlap θ of the exposure in two consecutive fields
of the same frame. The overlap of the exposure time yields
a spatial overlap of the two corresponding streaks from one
image to the next (Fig. 6). An AND operation between two
consecutive segmented fields calculates the overlap fast and
efficiently [Hering et al., 95a]. In addition, as the temporal
order of the image fields is known, the sign of the vector is
also known and no directional ambiguity has to be resolved.
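The AND-based correspondence search can be sketched with two label images, one per field: the pixel-wise AND of the segmented fields marks the overlap, and the label pair at each overlap pixel is a correspondence candidate. A minimal sketch assuming numpy label arrays:

```python
import numpy as np

def field_correspondences(lab1, lab2):
    """AND of two consecutive segmented fields: pixels that are
    nonzero in both fields mark spatially overlapping streaks; the
    label pair at each such pixel is a correspondence candidate."""
    overlap = (lab1 > 0) & (lab2 > 0)   # the fast AND operation
    ys, xs = np.nonzero(overlap)
    return set(zip(lab1[ys, xs].tolist(), lab2[ys, xs].tolist()))
```

Because the fields are already binary/labeled, the whole correspondence search reduces to one vectorized boolean operation per field pair.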
However, most cameras do not show such a temporal overlap
in the exposure time. In these cases corresponding particles
will only overlap due to their expansion in space. This
expansion can be increased artificially by the use of a
morphological dilation operator. The binary dilation of the
set of object points O by a mask M is defined by:
O ⊕ M = {p : M_p ∩ O ≠ ∅} , (2)
where M_p denotes the shift of the mask to the point p, such
that p is located at the reference point of the
mask. The dilation of O by the mask M is therefore the
set of all points, where the intersecting set of O and Mp
is not empty. This operation will enlarge objects and typi-
cally smooth their border. For more details see [Jähne, 95].
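Equation (2) can be illustrated directly on point sets. For a mask that is symmetric about its reference point, as is typical here, the definition reduces to shifting every object point by every mask offset:

```python
def dilate(obj, mask):
    """Binary dilation O (+) M = {p : M_p n O != empty} (Eq. 2),
    for a mask symmetric about its reference point: the set of all
    object points shifted by every mask offset."""
    return {(y + dy, x + dx) for (y, x) in obj for (dy, dx) in mask}
```

Dilating each object individually with, say, a small cross-shaped mask enlarges it without merging it with its neighbors, as required to avoid unnecessary clustering.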
To avoid unnecessary clustering of the objects the dilation is
not calculated simultaneously for all objects in an image but
for each object individually. In most cases, in particular for
low particle concentrations (< 300 particles/image), each par-
ticle overlaps only with the corresponding particle in the
next frame. At higher particle concentrations, however, a
particle typically overlaps with up to four particles in the
next frame. Therefore additional features are required
to minimize false correspondences. Ideally, the sum of gray
values for each streak in the image series should be constant,
due to the equation of continuity for gray values [Hering, 96]:
Σ_{x,y ∈ O} g(x,y) = const. (3)
This implies that a particle at low speed is visualized as a small
bright spot. The same particle at higher speed is imaged as a
fainter object extending over a larger area. The sum of gray
values in both cases should be identical. Deviations from
this ideal situation are caused by segmentation errors. Better
results are therefore gained by normalizing the sum of gray
values with the segmented area. The normalized sum of gray
values, G1 for the first frame and G2 for the second, are
required to yield a confidence measure C above a threshold:
C = 1 − |G1 − G2| / |G1 + G2| ∈ [0, 1]. (4)
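Equations (3) and (4) translate directly into code; `region` below stands for the set of segmented pixels of one streak (names illustrative):

```python
def normalized_gray_sum(img, region):
    """Sum of gray values over a segmented streak (Eq. 3),
    normalized by the segmented area to reduce the effect of
    segmentation errors."""
    return sum(img[y][x] for (y, x) in region) / len(region)

def confidence(g1, g2):
    """Confidence measure of Eq. (4): C = 1 - |G1 - G2| / |G1 + G2|,
    which lies in [0, 1] for non-negative gray value sums."""
    return 1.0 - abs(g1 - g2) / abs(g1 + g2)
```

Identical normalized gray sums give C = 1; the more the two streaks differ, the closer C falls toward 0.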
A similar expression can be derived for the area of the
objects. Finally the expected position of a particle is
predicted by interpolation, from the vector field of previous
time steps [Hering et al., 95b]. A χ²-test evaluates the
probability that a pair of particles match; minimizing χ²
maximizes the likelihood function.
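The matching step can be sketched as follows; the exact form of the χ² statistic is not spelled out in this section, so the variance-weighted sum of squared deviations below is an assumption, with all names illustrative:

```python
def chi_square(predicted, observed, sigma):
    """Assumed chi-square form: squared deviations between predicted
    and observed features (e.g. position components), each weighted
    by its standard deviation."""
    return sum((p - o) ** 2 / s ** 2
               for p, o, s in zip(predicted, observed, sigma))

def best_match(predicted, candidates, sigma):
    """Minimizing chi-square maximizes the match likelihood."""
    return min(candidates, key=lambda c: chi_square(predicted, c, sigma))
```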
Calculation Of The Displacement Vector Field
[Hering, 96] showed that the center of gray value Z.
International Archives of Photogrammetry and Remote Sensing. Vol. XXXI, Part B5. Vienna 1996