211
In: Paparoditis N., Pierrot-Deseilligny M., Mallet C., Tournaire O. (Eds), IAPRS, Vol. XXXVIII, Part 3A - Saint-Mandé, France, September 1-3, 2010
opposite direction, and only those links that appear in both
directions are kept.
The global smoothness constraint of optical flow makes it
possible to link object regions without explicitly matching their
unstable appearance. However, the drawback of the proposed
method is its dependency on a good and complete object
detection result in each image. To handle situations in which a
single person could not be detected in one image of the
sequence, or in which a link could not be established, image
pairs that are two frames apart are additionally processed.
These links are used to establish missing connections between
three consecutive images, while the person's location in the
bridged image is interpolated.
The introduced procedure is applied to the entire sequence. The
output of the tracking algorithm consists of trajectories which
reflect the motion of individuals through the image sequence.
They are used for further processing in the second module of
the proposed system.
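The bidirectional link check and the one-frame bridging described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary-based link representation, the function names, and the linear interpolation of the bridged position are all assumptions.

```python
# Sketch of the bidirectional link verification and gap bridging.
# links_fwd maps a detection id in frame t to its flow-predicted match
# in frame t+1; links_bwd maps a detection id in frame t+1 back to t.

def mutual_links(links_fwd, links_bwd):
    """Keep only those links that are confirmed in both directions."""
    return {a: b for a, b in links_fwd.items() if links_bwd.get(b) == a}

def bridge_gap(p_prev, p_next):
    """Interpolate the person's location in the bridged middle frame
    from its positions two frames apart (linear interpolation assumed)."""
    return ((p_prev[0] + p_next[0]) / 2.0, (p_prev[1] + p_next[1]) / 2.0)
```

A link that exists only in the forward direction (e.g. because a detection was missed) is discarded, and the two-frames-apart links fill the resulting gaps.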
2.3 Interpretation of trajectories of people
The derived trajectories of moving people within the observed
scene are used to initialize the second module analyzing the
trajectories with regard to motion patterns. The trajectory
interpretation system aims at bridging the gap between low
level representation of single trajectories and high level people
behavior analysis from image sequences. To achieve this goal,
microscopic motion parameters of single trajectories as well as
mesoscopic motion parameters of several trajectories have to be
extracted. A graph is constructed containing microscopic and
mesoscopic motion parameters to represent a neighborhood of
trajectories. Additionally, GIS data and macroscopic parameters
are utilized to recognize predefined scenarios. A hierarchical
modeling of scenarios is feasible, as the interpretation of
trajectories is based on the analysis of simple motion
parameters of one or more trajectories. In the following, the
module for trajectory analysis is presented in more detail.
Microscopic and mesoscopic parameters: Microscopic
motion parameters concern the motion characteristics of a
single moving person. Hence, the most important microscopic
motion parameters to exploit are speed and motion direction. In
addition, further parameters can be calculated from these two
basic microscopic motion parameters. Figure 2 shows a single
trajectory depicting some features which are used to calculate
the following parameters.
The average speed v of a moving object is calculated using the
relative distance d_rel of a trajectory, which is given as the
Euclidean distance between the points x_1 and x_n. Using this
approach, v is the speed over the distance effectively covered
by this object within the observed time frame, disregarding any
multi-directional movements. In contrast, the absolute distance
d_abs is derived by adding the segments d_i of one trajectory
over all time steps i. The acceleration a of a moving object is
computed by differencing the speeds of two consecutive line
segments. A further microscopic parameter is straightness,
calculated from the two distances mentioned before as
s = d_rel / d_abs. As d_abs is always at least as large as d_rel, s
takes a value near 1 when the trajectory is very straight and a
much smaller value towards 0 when the trajectory is very
twisting or even self-overlapping.
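The microscopic parameters defined above (average speed v, relative distance d_rel, absolute distance d_abs, straightness s, and per-segment acceleration) can be computed directly from the trajectory points. The following sketch assumes points sampled at a fixed time interval dt; the function name and interface are illustrative, not the authors' API.

```python
import math

def microscopic_params(points, dt=1.0):
    """Microscopic motion parameters of one trajectory given as a list
    of (x, y) points sampled at a fixed interval dt (an assumption)."""
    # absolute distance d_abs: sum of all segment lengths d_i
    seg = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    d_abs = sum(seg)
    # relative distance d_rel: Euclidean distance between x_1 and x_n
    d_rel = math.dist(points[0], points[-1])
    # average speed v over the effectively covered distance
    v = d_rel / (dt * (len(points) - 1))
    # straightness s = d_rel / d_abs, near 1 for a straight trajectory
    s = d_rel / d_abs if d_abs > 0 else 1.0
    # acceleration: difference of speeds of two consecutive segments
    accel = [(seg[i + 1] - seg[i]) / dt**2 for i in range(len(seg) - 1)]
    return v, d_rel, d_abs, s, accel
```

For a perfectly straight, evenly sampled trajectory this yields s = 1 and zero acceleration, matching the interpretation of straightness given above.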
Motion direction is the second basic microscopic motion
parameter: the direction z(x_i) at a point x_i is the direction of
the tangent at this point, defined by the points x_(i-1) and
x_(i+1). The motion direction is specified counterclockwise
with reference to a horizontal line. Similar to straightness, the
standard deviation σ_z of the motion directions indicates the
degree of twists and turnarounds within one trajectory.
Figure 2. Features of a trajectory to calculate microscopic
motion parameters: points x_i and line segments d_i (black),
direction z(x_i) at a point with reference to the horizontal line
(blue).
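The tangent direction z(x_i) and its standard deviation σ_z can be sketched as below. The chord from x_(i-1) to x_(i+1) approximates the tangent, and the angle is measured counterclockwise from the horizontal axis; note that this simple standard deviation ignores angle wraparound, which is an assumption of the sketch, not of the paper.

```python
import math

def motion_directions(points):
    """Direction z(x_i) at each interior point x_i, taken as the angle
    (radians, counterclockwise from horizontal) of the chord from
    x_(i-1) to x_(i+1), approximating the tangent at x_i."""
    return [math.atan2(points[i + 1][1] - points[i - 1][1],
                       points[i + 1][0] - points[i - 1][0])
            for i in range(1, len(points) - 1)]

def direction_std(dirs):
    """Standard deviation sigma_z of the motion directions; large
    values indicate twists and turnarounds within one trajectory."""
    mean = sum(dirs) / len(dirs)
    return math.sqrt(sum((d - mean) ** 2 for d in dirs) / len(dirs))
```

A straight horizontal trajectory gives identical directions and σ_z = 0.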
Mesoscopic motion parameters represent the interaction
between several individuals. Therefore, it is necessary to
evaluate the proximity of a trajectory with respect to the
number of neighboring trajectories, their motion directions and
potential interferences. Figure 3 shows an example of two
neighboring trajectories. The detection of neighbors is
accomplished by scanning the surrounding area of existing
trajectory points at every time step i. For each detected
neighbor, the offset o_i of each pair of points x i und y_i is
stored. Comparing length and direction of these offsets during
the entire image sequence, robust information can be derived if
neighbors come closer or even touch each other. In addition, the
motion direction at each point is inspected to detect
intersections of trajectories.
Figure 3. Two neighboring trajectories with offsets o_i (green)
between pairs of points x_i and y_i (black).
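The neighbor detection and offset comparison described above can be illustrated as follows, assuming both trajectories are sampled at the same time steps. The fixed scan radius, the function names, and the monotone-shrinkage test for "coming closer" are simplifying assumptions of this sketch.

```python
import math

def neighbor_offsets(traj_a, traj_b, radius):
    """Offsets o_i between pairs of points x_i and y_i of two
    trajectories, kept only where y_i lies within the scanned radius
    around x_i at time step i (radius is an assumed parameter)."""
    offsets = {}
    for i, (x, y) in enumerate(zip(traj_a, traj_b)):
        if math.dist(x, y) <= radius:
            offsets[i] = (y[0] - x[0], y[1] - x[1])  # offset o_i
    return offsets

def approaching(offsets):
    """True if the offset lengths shrink over the sequence, i.e. the
    two neighbors come closer to each other."""
    lengths = [math.hypot(dx, dy) for dx, dy in offsets.values()]
    return all(b <= a for a, b in zip(lengths, lengths[1:]))
```

Comparing the offset directions over time (not shown) would additionally reveal intersecting trajectories.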
Scenario modeling and scenario recognition: Scenarios are
modeled hierarchically to recognize complex motion patterns
based on the extraction of simple microscopic and mesoscopic
motion parameters, similar to the event detection systems
mentioned in Section 1. Hence, predefined scenarios consist of
trajectories and local GIS information at the lowest level, which
represent simple image features by coordinates (Figure 4).
Microscopic motion parameters follow in the next level of
motion parameters which give a more abstract representation of
the trajectories. Additionally, mesoscopic motion parameters
are embedded in this level because they are closely linked to
microscopic motion parameters and directly derived from the
trajectories. In the subsequent level, simple events are modeled
resulting from the previously defined parameters. These events
concern single trajectories or model information derived from
mesoscopic motion parameters. At the highest level of the
hierarchical scenario modeling, simple events are combined
with GIS data to complex scenarios representing complex
motion patterns within the observed scene.
The goal of the proposed system is to recognize scenarios
which are predefined as described before. Based on the tracking
in the first module of the system, motion parameters are
extracted. These parameters are evaluated to compute
probabilities of simple occurring events. The combination of
several simple events leads to the recognition of a predefined
scenario.
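The last step, combining the probabilities of simple events into the recognition of a predefined scenario, can be sketched as below. The paper does not specify how event probabilities are combined; the product rule (i.e. an independence assumption) and the event names here are purely illustrative.

```python
def scenario_probability(event_probs, required_events):
    """Toy combination of simple-event probabilities into a scenario
    probability. Assumes the scenario is a conjunction of independent
    simple events; both assumptions are ours, not the authors'."""
    p = 1.0
    for name in required_events:
        p *= event_probs.get(name, 0.0)  # missing event: probability 0
    return p

# Hypothetical example: a scenario built from two simple events.
events = {"low_speed": 0.9, "neighbors_close": 0.8}
p = scenario_probability(events, ["low_speed", "neighbors_close"])
```

A threshold on p (or a more principled probabilistic model) would then decide whether the scenario is reported as recognized.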