1. INTRODUCTION
Motion analysis remains one of the fundamental prob-
lems in image sequence processing. The only acces-
sible motion parameter from image sequences is the
optical flow, an approximation of the two-dimensional
motion field on the image sensor. The optical flow
field can be used as input for a variety of subsequent
processing steps including motion detection, motion
compensation, three-dimensional surface reconstruc-
tion, autonomous navigation and the analysis of dy-
namical processes in scientific applications. As only
the apparent motion in the sequence can be extracted,
additional a priori assumptions on the constancy of
image brightness and on the relation between relative
three-dimensional scene motion and its projection onto
the two-dimensional image sensor are necessary for
quantitative scene analysis.
In contrast to the more qualitative requirements of
standard computer vision applications, such as motion
detection or collision avoidance, quantitative measure-
ment tasks require precise and dense optical flow fields
in order to reduce the propagation of errors in subse-
quent processing steps. In addition to the optical flow
field, measures of confidence have to be provided to
discard erroneous data points and quantify measure-
ment precision.
Quantitative image sequence analysis requires the com-
bination of quantitative visualization, geometric and
radiometric calibration, and a quantitative error analy-
sis of the entire chain of image processing algorithms.
The final results are only as precise as the least precise
part of the system. Quantitative visualization of ob-
ject properties is up to the special requirements of ap-
plications and cannot be discussed in general. With-
out doubt, camera calibration is an important step
towards quantitative image analysis and has been ex-
tensively investigated by the photogrammetric society.
This article will focus on the algorithmic aspects of
low-level motion estimation in terms of performance
and error sensitivity of individual parts, given a cali-
brated image, possibly corrupted by sensor noise.
It will be shown how a combination of radiometric
uniformity correction, filter optimization and careful
choice of numerical estimation techniques can signif-
icantly improve the overall precision of low-level mo-
tion estimation. Starting with the brightness change
constraint equation (Section 2), we will show how a lo-
cal estimate of the optical flow can be obtained using a
weighted standard least squares estimation proposed
by Lucas and Kanade (1981) (Section 3). This tech-
nique can be improved by using total least squares
estimation instead of standard least squares. This
directly leads to a tensor representation of the spa-
tiotemporal brightness distribution, such as the struc-
ture tensor technique (Haussecker and Jähne, 1997;
Haussecker, 1998) (Section 4). In this section we will
further detail how a fast and efficient implementation
can be achieved by using standard image processing
operators, which is an important requirement for dy-
namic analysis.

Figure 1: Illustration of the constraint line defined by (1).
The normal optical flow vector, f⊥, points perpendicular
to the constraint line and parallel to the local gradient
∇g(x, t).

Coherency and type measures are ob-
tained from the solution of the structure tensor tech-
nique in a straightforward way. They allow us to quan-
tify the confidence of the optical flow estimation as
well as the presence of an aperture problem. It will
be shown how they compare to other measures, pre-
viously proposed by Barron et al. (1994) and Simon-
celli (1993). In Sections 5 and 6 we will show how
optimization of derivative filters and uniformity cor-
rection significantly improve the performance of any
differential technique. We will conclude with results
from both test patterns and application examples in
Section 7 and a final discussion in Section 8.
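As a preview of the techniques outlined above, the following sketch illustrates the structure tensor approach in the spirit of Section 4. It is an illustration under our own assumptions, not the implementation used in the paper: spatiotemporal derivatives are taken with plain central differences (rather than the optimized filters of Section 5), the aggregation window is Gaussian, and NumPy/SciPy operations stand in for the standard image processing operators mentioned above.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def structure_tensor_flow(seq, sigma=2.0, eps=1e-12):
        # seq: image sequence of shape (t, y, x).
        # Spatiotemporal gradients via central differences
        # (the paper advocates optimized derivative kernels instead).
        gt, gy, gx = np.gradient(seq.astype(float))
        # Tensor components J_ij = <g_i * g_j>, aggregated over a local
        # spatiotemporal window w (here: a Gaussian of width sigma).
        Jxx = gaussian_filter(gx * gx, sigma)
        Jxy = gaussian_filter(gx * gy, sigma)
        Jxt = gaussian_filter(gx * gt, sigma)
        Jyy = gaussian_filter(gy * gy, sigma)
        Jyt = gaussian_filter(gy * gt, sigma)
        Jtt = gaussian_filter(gt * gt, sigma)
        J = np.stack([np.stack([Jxx, Jxy, Jxt], -1),
                      np.stack([Jxy, Jyy, Jyt], -1),
                      np.stack([Jxt, Jyt, Jtt], -1)], -2)
        # Total least squares solution: the eigenvector of J belonging to
        # the smallest eigenvalue is parallel to (f1, f2, 1).
        w, v = np.linalg.eigh(J)       # eigenvalues in ascending order
        e = v[..., 0]                  # eigenvector of smallest eigenvalue
        et = np.where(np.abs(e[..., 2]) > eps, e[..., 2], np.nan)
        f1, f2 = e[..., 0] / et, e[..., 1] / et
        # One possible coherency measure in [0, 1]: the contrast between
        # the largest and smallest eigenvalue.
        coh = ((w[..., 2] - w[..., 0]) / (w[..., 2] + w[..., 0] + eps))**2
        return f1, f2, coh

Rejecting flow estimates where the coherency measure is small implements the kind of confidence-based masking of erroneous data points discussed above.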
2. OPTICAL FLOW CONSTRAINT
A common assumption on optical flow is that the im-
age brightness g(x, t) at a point x = [x, y]^T at time t
should be conserved. Thus, the total temporal deriva-
tive, dg/dt, needs to equal zero, which directly yields
the well known brightness change constraint equation,
BCCE (Horn and Schunck, 1981):

    dg/dt = (∇xg)^T f + g_t = 0,    (1)
where f = [f1, f2]^T is the optical flow, ∇xg denotes
the spatial gradient, and g_t denotes the partial time
derivative ∂g/∂t.
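As a plausibility check of (1), a pattern translating with a known constant flow must yield spatiotemporal gradients that cancel in the constraint. The following minimal sketch (our own, with an arbitrarily chosen test pattern and flow) verifies this numerically:

    import numpy as np

    f = np.array([0.5, -0.25])          # assumed true flow [f1, f2]
    y, x = np.mgrid[0:64, 0:64].astype(float)

    def g(t):
        # Any smooth pattern moving with flow f: g(x, t) = g0(x - f*t).
        return np.sin(0.2 * (x - f[0] * t)) * np.cos(0.3 * (y - f[1] * t))

    gy, gx = np.gradient(g(0.0))        # spatial central differences
    gt = (g(1.0) - g(-1.0)) / 2.0       # temporal central difference

    residual = gx * f[0] + gy * f[1] + gt   # left-hand side of (1)
    # Small residual: zero up to the discretization error of the
    # central-difference derivatives.
    print(np.abs(residual).max())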
This relation poses a single local constraint on the
optical flow at a given point in the image. It is,
however, ill-posed, as (1) constitutes only one equation
in two unknowns. This problem is commonly referred
to as the aperture problem of motion estimation, illus-
trated in Figure 1. All vectors along the constraint line
defined by (1) are equally valid candidates for the real
optical flow f. Without further assumptions, only the
normal flow f⊥, perpendicular to the constraint line,
can be estimated, as the sketch below illustrates.
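Concretely, projecting the constraint onto the gradient direction gives f⊥ = -g_t ∇g / |∇g|², which the following minimal sketch (ours; gx, gy, gt are derivative images such as those computed in the previous example) evaluates pointwise:

    import numpy as np

    def normal_flow(gx, gy, gt, eps=1e-12):
        # f_perp = -g_t * grad(g) / |grad(g)|^2: the projection of f onto
        # the local gradient direction, the only component recoverable
        # from (1) alone.
        grad_sq = gx * gx + gy * gy
        # Flat regions carry no constraint; set the flow to zero there.
        scale = np.where(grad_sq > eps, -gt / np.maximum(grad_sq, eps), 0.0)
        return scale * gx, scale * gy

The component tangential to the constraint line remains unobservable from (1) alone.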
To overcome the aperture problem, a variety of ap-
proaches have been proposed that minimize an objec-
tive function pooling constraints over a small finite
area. An excellent overview of optical flow techniques
is given by Barron et al. (1994). They