Figure 3. Lines found after RHT.
4. FEATURE MATCHING
In this chapter, the term feature means a 2D object in image space which has characteristic properties belonging to the object; the notation derives from the pattern recognition field. It should not be confused with the 3D linear features mentioned earlier. Feature matching then simply means solving the correspondence problem between features from different frames.
When measuring images of a video sequence, the displacement of the current feature between consecutive frames cannot be too big. For this reason, Hough parameters are good descriptors of a 2D image feature; both the length of the arc and the average strength of the edges are also suitable for the task. In the case of a line, the spatial coordinates of its starting and ending points are distinctive descriptors.
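A minimal sketch of such a descriptor for a detected line segment, combining the quantities mentioned above (the class and field names are illustrative assumptions, not from the paper):

from dataclasses import dataclass
import math

@dataclass
class LineFeature:
    rho: float            # Hough distance parameter of the line
    theta: float          # Hough angle parameter (radians)
    x1: float             # starting point x
    y1: float             # starting point y
    x2: float             # ending point x
    y2: float             # ending point y
    edge_strength: float  # average edge (gradient) strength along the segment

    def descriptor(self):
        # Pack the quantities named in the text into one feature vector.
        length = math.hypot(self.x2 - self.x1, self.y2 - self.y1)
        return [self.rho, self.theta, self.x1, self.y1,
                self.x2, self.y2, length, self.edge_strength]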
Matching of features between consecutive frames can be an ambiguous process. The first stage is to construct all combinations of feature pairs and to calculate similarity measures between them. The correlation between feature vectors is one good measure. Often some kind of normalization is needed for the correlation coefficients. Based on these similarity measures, weights for each feature pair are determined.
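As a minimal sketch of this stage (not the paper's implementation), the pair weights can be taken from a normalized correlation of the descriptor vectors:

import numpy as np

def similarity_matrix(desc_a, desc_b):
    """desc_a: (m, k) descriptors from frame 1; desc_b: (n, k) descriptors from frame 2."""
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    a = a - a.mean(axis=1, keepdims=True)                 # remove the mean of each descriptor
    b = b - b.mean(axis=1, keepdims=True)
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-12)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-12)
    corr = a @ b.T                                        # correlation coefficients in [-1, 1]
    return 0.5 * (corr + 1.0)                             # normalized pair weights in [0, 1]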
Finding the correct feature pairs in the second frame for the features of the first frame, i.e. feature matching, can be done in many different ways. One method widely used in all sorts of situations is probabilistic relaxation. The idea of this method is that nodes in near proximity affect the weights of a node. Relaxation is then an iterative process. The result can, however, depend on the order in which the nodes are updated.
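A toy sketch of such an iteration, assuming each node keeps a weight vector over its candidate matches and that a compatibility function between neighbouring assignments is available (both are assumptions, not the paper's formulation):

import numpy as np

def relax(weights, neighbours, compatibility, n_iter=10):
    """weights: (m, n) initial match weights; neighbours: list of neighbour index lists;
    compatibility: (m, n, m, n) support coefficients r(i, j; h, k)."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        for i in range(w.shape[0]):                       # note: the update order matters
            support = np.zeros(w.shape[1])
            for h in neighbours[i]:                       # only nearby nodes contribute
                support += (compatibility[i, :, h, :] * w[h]).sum(axis=1)
            w[i] = w[i] * (1.0 + support)                 # reinforce compatible candidates
            w[i] = w[i] / (w[i].sum() + 1e-12)            # renormalize to a weight distribution
    return w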
A problem occurs when occluding particles appear. In such a case, some heuristic threshold value has to be set for the similarity measure to eliminate the effect. Geometrical constraints like the epipolar constraint can also stabilize the matching. The assumption is that the camera movement is smooth between the frames. This
may restrict the search space and make the matching
more robust.
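A hedged sketch of both pruning ideas, assuming a similarity matrix, the image point sets and a fundamental matrix F are given (these names and thresholds are assumptions, not the paper's procedure):

import numpy as np

def prune_pairs(sim, pts1, pts2, F, sim_thresh=0.6, epi_thresh=2.0):
    """sim: (m, n) similarity; pts1: (m, 2), pts2: (n, 2) image points; F: (3, 3) fundamental matrix."""
    keep = sim >= sim_thresh                              # heuristic similarity threshold
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])       # homogeneous points, frame 1
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])       # homogeneous points, frame 2
    lines = h1 @ F.T                                      # epipolar lines l_i = F x_i in frame 2
    for i, l in enumerate(lines):
        d = np.abs(h2 @ l) / (np.hypot(l[0], l[1]) + 1e-12)   # point-to-line distances
        keep[i] &= d <= epi_thresh                        # epipolar constraint
    return keep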
5. FEATURE MODELING
Three dimensional form fitting can be done by using two dimensional image observations from two or more images, whose poses differ from each other, to solve the three dimensional parameters of the features. This means that linear 3D features like lines, circles, ellipses, parabolas, hyperbolas, and B-splines are used instead of points to reconstruct the object. In photogrammetry, D. Mulawa presented the idea in his dissertation thesis. There he used this kind of three dimensional parametric form of the features to describe the shape and size of the objects. Doing the form fitting in three dimensional space means that no subpixel line detection is needed in image space. The whole estimation can be done in three dimensional space using the original pixel observations.
The parametric presentation is very compact. The general form of a curve can be presented as a set of points. In the case of a three dimensional curve, its trace consists of a certain set of points $P_i$,
$$\{ P_i \}, \quad i = 1, \ldots, n. \qquad (1)$$
In the parametric formulation we can find a common set of parameters $u_i$ on which all points of the curve are dependent. The general formulation of the parametric presentation can be given as
$$P_i = P(u_i) = \begin{bmatrix} x(u_i) \\ y(u_i) \\ z(u_i) \end{bmatrix}, \qquad u_i = \text{set of parameters of feature } i. \qquad (2)$$
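As a concrete example of such a presentation (a standard parametric form, not taken from the paper), a three dimensional line can be written as

$$P(u_i) = \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} + u_i \begin{bmatrix} a \\ b \\ c \end{bmatrix},$$

where $(x_0, y_0, z_0)$ is a point on the line, $(a, b, c)$ its direction, and $u_i$ the running parameter of point $P_i$.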
Not all parametric presentations are unique without involving some constraints. For modeling purposes, constraints between features are also possible. Constraints set by the operator, e.g. the intersection of lines in three dimensional space, parallelism of lines, etc., can simplify and stabilize the estimation in the object reconstruction part.
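As an illustration (a standard formulation, not quoted from the paper), for two lines with points $P_1$, $P_2$ and direction vectors $d_1$, $d_2$, parallelism and intersection can be expressed as

$$d_1 \times d_2 = 0 \quad \text{(parallelism)}, \qquad (P_2 - P_1) \cdot (d_1 \times d_2) = 0 \quad \text{(coplanarity, required for intersection)}.$$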
Having a direct relation between the image observations and the three dimensional feature parameters gives us a lot of redundancy in the estimation. We can have as many observations as there are detected edge points to determine the parameter values. The number of parameters is always small compared to the number of observations we can have. Especially in our case with multiple video frames, we can have a massive number of observations connected to a single feature.
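As a simple illustration of this redundancy (not the adjustment model of the paper), a 3D line described by only a few parameters can be estimated from an arbitrarily large set of points attributed to it, here by a total least squares fit via SVD:

import numpy as np

def fit_line_3d(points):
    """points: (n, 3) array of 3D points attributed to one line feature."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)                 # a point on the estimated line
    _, _, vt = np.linalg.svd(pts - centroid)    # principal direction of the point cloud
    direction = vt[0]
    return centroid, direction                  # line: P(u) = centroid + u * direction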