THREE-DIMENSIONAL MOTION ESTIMATION USING RANGE AND INTENSITY FLOW
Pierre Boulanger and J.-Angelo Beraldin
Autonomous Systems Laboratory
Institute for Information Technology
National Research Council of Canada
Ottawa, Canada
(613) 993-1426
e-mail: Boulanger@iit.nrc.ca
KEYWORDS: 3-D motion, range flow, image flow, sensor fusion.
ABSTRACT: This paper describes a method for estimating rigid motion parameters from range flow (a 3-D displacement vector field) and optical flow (the corresponding intensity displacement field) computed from a sequence of video-rate range and intensity images. The method estimates the motion parameters directly by solving a system of linear equations, obtained by substituting a linear transformation, expressed by a Jacobian matrix, into the motion constraints. It extends the conventional scheme used in intensity image sequence analysis: the range flow is computed from the linear transformation, and the motion parameters are then estimated from it. The algorithm is supported by experimental results on real range and intensity image sequences acquired by a video-rate range camera developed at the National Research Council of Canada.
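The core idea of the abstract, estimating the six rigid motion parameters by solving a linear system built from per-point motion constraints, can be illustrated with a minimal sketch. This is not the paper's exact formulation (which couples range and intensity flow through a Jacobian); it assumes the standard instantaneous rigid-motion model V = ω × P + T, where each 3-D flow vector V at point P contributes three equations linear in the angular velocity ω and translation T:

```python
import numpy as np

def estimate_motion(points, flow):
    """Least-squares rigid motion (omega, t) from a 3-D flow field.

    Solves V_i = omega x P_i + t for the 6 motion parameters,
    assuming the instantaneous (small-rotation) rigid-motion model.
    points, flow: (N, 3) arrays of 3-D positions and displacements.
    """
    n = points.shape[0]
    A = np.zeros((3 * n, 6))
    b = flow.reshape(-1)
    for i, (x, y, z) in enumerate(points):
        # omega x P rewritten as a matrix acting on omega = (wx, wy, wz)
        A[3 * i:3 * i + 3, :3] = [[0.0, z, -y],
                                  [-z, 0.0, x],
                                  [y, -x, 0.0]]
        # translation enters each constraint directly
        A[3 * i:3 * i + 3, 3:] = np.eye(3)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:3], params[3:]  # angular velocity, translation
```

With noise-free flow the least-squares solution recovers the generating motion exactly; with real range flow the overdetermined system averages out measurement noise across all points.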
1. INTRODUCTION
The problem of computing the motion parameters of moving objects, such as rotational and translational velocity, from a sequence of images is fundamental in computer vision. It is especially important when one wants to track the position of objects in real time, for applications such as satellite docking and real-time tracking [13]. The importance of such analysis has grown with the recent introduction of high-speed range sensors capable of producing registered intensity and range information at video rate.
Research in motion analysis has dealt mainly with the study of rigid objects in motion, based on the analysis of intensity image sequences. However, few authors have considered using range information to help solve the problems introduced by relying on intensity information alone. Ballard and Kimball [2] estimated rigid motion parameters from optical flow and depth information using a generalized Hough transformation. Asada and Tsuji [1] tracked rigid objects by matching 3-D shapes obtained from structured light images. Chen and Penna [8] determined the motion parameters of an object, with a restriction on the type of deformation, under the assumption of corresponding image sequences and relative depth given by photometric stereo. Kehtarnavaz and Mohan [12] determined 3-D rigid motion parameters by analyzing the correspondence between range images with a graph matching technique. More recently, Godin et al. [9] determined 3-D rigid transformations between a pair of registered range and intensity images. Their algorithm is based on a robust version of the iterative closest point (ICP) algorithm first introduced by Besl and McKay [5]. Real-time tracking using such an algorithm has also been demonstrated recently by Simon et al. [16].
Methods for analyzing image sequences fall into two main categories. The first assumes that correspondence between images in the sequence is established using features such as edges; from these corresponding features, the motion parameters between frames can be estimated. However, the correspondence problem is very difficult to solve in general. Furthermore, most methods require a dense correspondence map between images to estimate complicated motions such as those produced by nonrigid objects. In these cases, the object surfaces must be covered with a dense discriminating pattern for the correspondence problem to admit a simple solution.
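Once correspondences between 3-D features are available, a standard way to recover the rigid motion between frames is the SVD-based least-squares (Kabsch/Procrustes) solution. This is a generic illustration of the correspondence-based category described above, not the method this paper proposes:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    """Least-squares rigid transform with dst ~= R @ src + t,
    from corresponding 3-D points, via the SVD (Kabsch) solution.
    src, dst: (N, 3) arrays of matched feature positions."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With exact correspondences the recovered (R, t) reproduces the inter-frame motion; in practice the difficulty lies in obtaining the correspondences themselves, as the paragraph above notes.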
IAPRS, Vol. 30, Part 5W1, ISPRS Intercommission Workshop "From Pixels to Sequences", Zurich, March 22-24, 1995