The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B3b. Beijing 2008 
3. Use the same method described in step 2 to analyze edges
in all the images in which the same 3D point is visible. This
measurement yields edge candidates in those images. The
method is much faster than applying the epipolar beam from the
end points to find candidates.
4. In order to enlarge the baseline and obtain a more accurate
result, a 3D edge hypothesis is formed between candidate edges
from the first and the last images. Because the corresponding
edge usually cannot be extracted in every image, and also
considering computation time, we choose the candidate edges
from the first and last ten percent of the images. A 3D infinite
edge hypothesis is the intersection of two planes, each defined
by one optical center and the corresponding 2D edge.
The resulting 3D edge is represented as a Plücker 6-vector L,
where the homogeneous part L_h = (L_1, L_2, L_3)^T is
constrained to be orthogonal to the Euclidean part
L_0 = (L_4, L_5, L_6)^T, i.e. L_h^T L_0 = 0. The homogeneous
part represents the edge direction and the Euclidean part
determines the distance from the origin to the edge. Thus the
6-vector L has 4 degrees of freedom, considering both the
orthogonality and the homogeneity constraint.
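As an illustration of this representation (our own sketch, not code from the paper), the Plücker 6-vector of the line through two 3D points can be built directly, and the orthogonality constraint and origin distance checked:

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker 6-vector of the 3D line through points p and q:
    homogeneous part L_h = direction, Euclidean part L_0 = moment."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    l_h = q - p               # direction (L1, L2, L3)
    l_0 = np.cross(p, q)      # moment (L4, L5, L6)
    return np.concatenate([l_h, l_0])

L = plucker_from_points([0.0, 0.0, 1.0], [1.0, 0.0, 1.0])
assert abs(L[:3] @ L[3:]) < 1e-12                     # L_h^T L_0 = 0 holds
dist = np.linalg.norm(L[3:]) / np.linalg.norm(L[:3])  # distance origin-to-edge
```

Here the line y = 0, z = 1 lies at distance 1 from the origin, which the moment-to-direction ratio recovers.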
2.2 Point and Camera Parameters Extraction 
Usually, when dealing with video image sequences, this step
(point detection and matching) is also called feature tracking.
The most widely used tracker is the KLT tracker [Lucas and
Kanade, 1981]. By determining 2D-2D point correspondences 
in consecutive video frames, the relative camera geometry is 
established. We use the commercial software Boujou [2d3, 
2008] to get camera projection information and corresponding 
2D and 3D points. 
2.3 Edge Extraction 
Edges are first detected in each frame separately. First, an 8-bit
binary edge map is generated for each image by running the
EDISON edge detector. As an improvement over the Canny
detector, the EDISON detector introduces a confidence measure,
which results in more connected and smoother edges [Canny,
1986; Meer and Georgescu, 2001]. The second step is to use the
Hough transformation to extract straight edges from the edge
map.
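The Hough step can be sketched in a few lines of NumPy (our own minimal illustration; the EDISON detector itself and the paper's parameter choices are not reproduced): each edge pixel votes for all (theta, rho) lines passing through it, and accumulator peaks give straight edges.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180, peak_frac=0.5):
    """Vote each edge pixel into a (theta, rho) accumulator and return
    the cells whose votes reach peak_frac of the maximum."""
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge_map.shape)))   # rho range is [-diag, diag]
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    for t, th in enumerate(thetas):
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (t, rho), 1)                  # unbuffered accumulation
    peaks = np.argwhere(acc >= peak_frac * acc.max())
    return [(thetas[t], r - diag) for t, r in peaks]

img = np.zeros((20, 20), dtype=bool)
img[5, :] = True          # a horizontal straight edge along row y = 5
lines = hough_lines(img)  # expect a peak near (theta = pi/2, rho = 5)
```

The normal form x cos(theta) + y sin(theta) = rho keeps the accumulator bounded, unlike slope-intercept parameterisation.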
3. APPROACH 
3.1 Method Overview 
The most common model for cameras is the pinhole camera 
model: a point in 3D space is projected into an image by 
computing a viewing ray from the unique projection center to 
the point and intersecting this viewing ray with a unique image 
plane. During the preprocessing steps, camera projection
matrices for each frame are obtained from a set of corresponding
2D and 3D points. Using reliably matched points as guidance for
edge matching is the key idea of this method: only edges near
these good-quality points are considered, which reduces the
search space for corresponding 2D edges in the frames. The
workflow is described below:
1. Compute the covariance matrix of each tracked 3D feature
point and choose reliable points based on it. This part will be
explained in section 3.2.
2. Project a reliable 3D point into an image in which it is visible
(or use the corresponding image point), and calculate the
distance between the 2D point and all edges detected in that
image. The distance here is the distance between a point and a
finite edge. If the distance is less than one pixel, the edge is
considered an edge candidate in that image.
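The point-to-finite-edge test of step 2 can be sketched as follows (our own illustration; the names and example coordinates are hypothetical):

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from a 2D point p to a finite edge with end points a and b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    # Clamp the foot of the perpendicular so it stays on the segment.
    t = np.clip((p - a) @ ab / (ab @ ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

# Hypothetical projected point near a detected edge along the x-axis.
d = point_to_segment_distance([5.0, 0.6], [0.0, 0.0], [10.0, 0.0])
is_candidate = d < 1.0    # the one-pixel test from step 2
```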
A = P_a^T l_a                                   (1)
B = P_b^T l_b                                   (2)
L = A ∩ B                                       (3)
where l_a, l_b are 2D edges in image a and image b; P_a, P_b are
the projection matrices of image a and image b; and L is the
intersection of plane A and plane B.
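Equations (1)-(3) can be sketched numerically (a minimal illustration with hypothetical cameras and image lines, not the paper's data): each 2D edge back-projects to the plane P^T l, and the 3D edge is the intersection of the two planes.

```python
import numpy as np

# Hypothetical cameras: K = I; centre a at the origin, centre b at (0, 1, 0).
P_a = np.hstack([np.eye(3), np.zeros((3, 1))])
P_b = np.hstack([np.eye(3), np.array([[0.0], [-1.0], [0.0]])])

l_a = np.array([0.0, 1.0, 0.0])   # image edge y = 0 in image a
l_b = np.array([0.0, 1.0, 1.0])   # image edge y = -1 in image b

A = P_a.T @ l_a                   # Eq. (1): back-projected plane of l_a
B = P_b.T @ l_b                   # Eq. (2): back-projected plane of l_b
# Eq. (3): homogeneous points X of L satisfy A.X = 0 and B.X = 0,
# so the nullspace of the stacked plane coefficients spans the line.
_, _, vt = np.linalg.svd(np.vstack([A, B]))
X1, X2 = vt[-1], vt[-2]                  # two points spanning the 3D edge
direction = np.cross(A[:3], B[:3])       # homogeneous (direction) part of L
```

Both cameras here observe the 3D line {(x, 0, 1)}, and the recovered direction is along the x-axis, matching the cross product of the two plane normals.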
5. Project the 3D infinite edge into each image. As the projection
matrix P for points is known, x = P X, it is possible to construct
a projection matrix Q that can be applied to 3D edges, l = Q L,
where Q is a 3×6 matrix. More details are given in [Hartley
and Zisserman, 2000; Heuel, 2004].
Calculate the distance and angle between the projection results
and the edge candidates. If both the distance and the angle are
less than predefined thresholds, the edge candidate is considered
a corresponding edge for the 3D edge hypothesis.
Compare the number of corresponding edges with the number
of images considered. If the ratio is higher than fifty percent, the
hypothesis is confirmed. Otherwise, it is rejected, a new
hypothesis is made from the edge candidates, and the procedure
returns to step 4.
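The confirmation test of step 5 can be sketched as follows (the threshold values are illustrative placeholders, not values from the paper):

```python
import math

def confirm_hypothesis(residuals, dist_thresh=2.0,
                       ang_thresh=math.radians(5.0), min_rate=0.5):
    """residuals: one (distance, angle) pair per image considered, measured
    between the projected 3D edge hypothesis and that image's edge
    candidate. Confirm when more than min_rate of the images support it."""
    support = sum(1 for dist, ang in residuals
                  if dist < dist_thresh and ang < ang_thresh)
    return support / len(residuals) > min_rate

# Two of three images agree with the hypothesis: 2/3 > 0.5, so it is confirmed.
confirmed = confirm_hypothesis([(0.5, 0.01), (1.0, 0.02), (5.0, 0.30)])
```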
6. When the hypothesis is confirmed, the corresponding edge in
each image can be retrieved. From these 2D edges, the 3D edge
estimation is performed (see section 3.3 below). The 3D edge
can still be rejected if the estimated variance factor is larger than
a suitable threshold or if the solution does not converge.
7. Compute end points for the estimated 3D edge. By back-
projecting rays from the end points of the corresponding 2D
edges and intersecting them with the estimated 3D edge, we get
two sets of end point candidates for the 3D edge. The method
described in section 3.4 is used to fix the end points.
8. Take next reliable 3D point, until all points are processed. 
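Step 7's ray/edge intersection can be sketched as a closest-point computation (our own illustration), since a back-projected viewing ray rarely meets the estimated 3D edge exactly:

```python
import numpy as np

def endpoint_on_edge(p_edge, d_edge, center, ray_dir):
    """Point on the estimated 3D edge (p_edge + s * d_edge) closest to the
    viewing ray (center + t * ray_dir) back-projected from a 2D end point."""
    d1 = np.asarray(d_edge, float)
    d2 = np.asarray(ray_dir, float)
    r = np.asarray(p_edge, float) - np.asarray(center, float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    # Closed-form minimiser of the line-to-line distance;
    # assumes the edge and the ray are not parallel.
    s = (b * e - c * d) / (a * c - b * b)
    return np.asarray(p_edge, float) + s * d1

# Edge y = 0, z = 1; a ray from the origin through (2, 0, 1) meets it there.
X = endpoint_on_edge([0.0, 0.0, 1.0], [1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0], [2.0, 0.0, 1.0])
```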
3.2 Point Quality Analysis 
Assume a 3D point (X, Y, Z) is visible in n+1 images, and that a
set of corresponding image points (x_i, y_i), i = 0, ..., n, and
camera projection matrices P_i, i = 0, ..., n, for each frame in
which the 3D point is visible are known. It is usually not the
case that the rays of the corresponding points in the images
intersect precisely in a common point, which means the points'
quality should be analyzed first. As the relation between 3D
point and