If the estimated variance factor σ̂² is larger than a suitable
threshold σ̂²_max, or if the solution does not converge, the
estimated 3D edge is rejected.
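The rejection rule above can be sketched as a simple acceptance test. This is a minimal illustration; the function name and argument names are ours, not the paper's:

```python
def accept_edge(sigma0_sq, sigma0_sq_max, converged):
    """Accept a reconstructed 3D edge only if the adjustment converged
    and the estimated variance factor stays below the chosen threshold."""
    return converged and sigma0_sq <= sigma0_sq_max
```

With the threshold used in Section 4.2 (σ̂²_max = 0.1), an edge with σ̂² = 0.05 would be kept, while one with σ̂² = 0.2, or one whose estimation failed to converge, would be discarded.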
3.4 End Points Decision
The last part of the algorithm is the computation of the end
points of the 3D edges. By backward projecting rays from the
end points of each corresponding 2D edge and intersecting them
with the estimated 3D edge, we obtain two intersection points
per 2D edge. Using the direction vector of the 3D edge, the
intersection points can be separated into two groups, as shown
in Figure 1; the red circled areas show where the intersection
points lie. This yields a set of end point candidates for each
end point of the 3D edge.
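The intersection of a viewing ray with the estimated 3D edge can be computed as the closest point on the 3D line to the ray. The sketch below, with assumed variable names, solves the 2x2 normal equations of the distance minimization; the line parameter t can then be used to sort candidates into the two end-point groups along the direction vector:

```python
import numpy as np

def intersect_ray_with_line(p, d, c, r):
    """Closest point on the 3D line x = p + t*d to the viewing ray x = c + s*r.

    p, d : point on the 3D edge and its direction vector
    c, r : optical center and backward-projected ray direction
    Returns the intersection point on the line and its parameter t
    (t orders candidates along the edge direction, splitting the
    two end-point groups).
    """
    d = d / np.linalg.norm(d)
    r = r / np.linalg.norm(r)
    # Normal equations of min_{t,s} ||(p + t*d) - (c + s*r)||^2
    A = np.array([[d @ d, -(d @ r)],
                  [d @ r, -(r @ r)]])
    b = np.array([(c - p) @ d, (c - p) @ r])
    t, s = np.linalg.solve(A, b)
    return p + t * d, t
```

When the ray and the line actually intersect (noise-free case), the returned point is the exact intersection; otherwise it is the point on the edge nearest to the ray.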
Figure 1. End point decision.
Optical centers (black points), 2D edges (green), 3D edges
(black solid lines), viewing rays (black dashed lines),
direction vector (blue)
The uncertainty of the corrections for each 2D edge is used to
weight its influence on the end points of the 3D edge. The
weights are obtained from the covariance matrix of the
estimated residuals.
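The weighted fusion of the candidates into a single end point can be sketched as an inverse-variance weighted mean. The representation (one scalar variance per 2D edge, taken from the diagonal of the residual covariance matrix) is our simplifying assumption:

```python
import numpy as np

def weighted_end_point(candidates, variances):
    """Fuse end-point candidates for one 3D edge end.

    candidates : (n, 3) array of intersection points, one per 2D edge
    variances  : (n,) uncertainty of each 2D edge's corrections
                 (assumed scalar, e.g. from the residual covariance diagonal)
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                      # normalize weights to sum to 1
    return w @ np.asarray(candidates, dtype=float)
```

Candidates from 2D edges with small correction uncertainty thus dominate the fused end point, while noisy edges contribute little.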
4. EXPERIMENT
4.1 Data Description
The video data was captured by a hand-held Canon IXUS
camera moving along a street. The images are 640x480 pixels,
at 15 frames per second, with 134 frames in total. Figure 2
shows the first and last frames of the input image sequence
with reliable points and edge extraction results. Tracked
points from Boujou with σ_3D less than 8 mm are considered
reliable points. The number of edges extracted per frame
varies from 72 to 96, about 87 on average.
Figure 2. Input video image sequence with reliable points
and extracted edges.
Reliable points (green), edges with end points (yellow);
frame 0 (upper), frame 133 (lower)
4.2 Results
A common way of capturing video is to keep the camera at a
constant height. As the camera moves horizontally, horizontal
edges lie almost in the epipolar plane between different
viewpoints. Because of this poor geometry, they are difficult
to estimate correctly. By setting a suitable threshold on the
estimated variance factor, these incorrect 3D edges can be
eliminated.
We chose 200 as the maximum number of iterations during 3D
edge estimation and σ̂²_max = 0.1 as the threshold for the
estimated variance factor. Figure 3 shows the first and last
frames of the image sequence with the edges that were
successfully reconstructed as the 3D edges shown in Figure 4.
As there are many cars in front of the building, edges on the
ground and on the cars are often connected and easily confused,
which leads to two incorrect 3D edges extracted in front of the
building. All the other edges fit the wall plane very well, and
the main building plane can be seen from the extracted 3D
edges. Comparing Figures 2, 3 and 4 shows that our method
correctly matches edges in a short-range video image sequence.