View-Centered Mesh in 3D   Now, 3D triangles are generated by back-projection of the triangles of the image mesh, using the polygon planes and the edge booleans. A triangle t inside a polygon p with a plane in 3D is reconstructed as follows. Let C_v be the circularly-linked list of polygons which have vertex v of t. We obtain sub-list(s) of C_v by removing the C_v-links between consecutive polygons which share edges e such that b_e = 0. A C_v-link is also removed if one of its two polygons has no plane in 3D. Let S_v be the sub-list which contains p. The 3D triangle of t is defined by its 3 vertices: the 3D vertex reconstructed for v is the mean of the 3D vertices of the polygons in S_v which correspond to v.
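As an illustration of this back-projection of one vertex, here is a minimal Python sketch; the representation of C_v as an index list with per-link booleans and the helper vertex3d_of are our own assumptions, not the paper's implementation:

    import numpy as np

    def reconstruct_vertex(C_v, link_kept, p, vertex3d_of):
        """Reconstruct the 3D position of a mesh vertex v for a triangle inside polygon p.

        C_v        : circular list of the polygons sharing v.
        link_kept  : link_kept[k] is False when the link between C_v[k] and
                     C_v[(k+1) % n] must be cut (shared edge with b_e = 0, or a
                     polygon without a plane in 3D).
        p          : the polygon containing the 2D triangle t.
        vertex3d_of: maps a polygon to the 3D point obtained for v on its plane.
        """
        n = len(C_v)
        cuts = [k for k in range(n) if not link_kept[k]]
        if not cuts:
            # No cut: a single sub-list, the whole circular list.
            sublists = [list(C_v)]
        else:
            # Open the circular list at each cut link.
            sublists = []
            for a, b in zip(cuts, cuts[1:] + [cuts[0] + n]):
                sublists.append([C_v[(a + 1 + j) % n] for j in range(b - a)])
        # S_v is the sub-list containing p; the 3D vertex is the mean of the
        # 3D points reconstructed on the planes of its polygons.
        S_v = next(s for s in sublists if p in s)
        return np.mean([vertex3d_of(q) for q in S_v], axis=0)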
Refinement   Here we provide a brief overview of the refinement, which is detailed in (Lhuillier, 2008b). The view-centered mesh (3D triangles with edge connections) is parametrized by the depths of its vertices and is optimized by minimizing a weighted sum of discrepancy and smoothing terms. The discrepancy term is the sum, for all pixels in a triangle with a plane in 3D, of the squared distance between the plane and the 3D point defined by the pixel depth (Appendix A). The smoothing term is the sum, for all edges which are not at an image contour, of the squared difference between the normals of the 3D triangles on both sides of the edge. This minimization is applied several times, alternating with the mesh operations "Triangle Connection" and "Hole Filling" (Lhuillier, 2008b).
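The cost being minimized can be sketched as follows, assuming the per-pixel 3D points, triangle planes, triangle normals and inner-edge list are available as precomputed arrays (all names below are hypothetical, and lam stands for the smoothing weight):

    import numpy as np

    def mesh_cost(planes, pixel_points, pixel_to_tri, normals, inner_edges, lam):
        """Weighted sum of a discrepancy and a smoothing term (sketch).

        planes       : (T, 4) plane coefficients (a, b, c, d) per triangle, |(a,b,c)| = 1.
        pixel_points : (P, 3) 3D points defined by the pixel depths.
        pixel_to_tri : (P,) triangle index of each pixel (triangles with a plane in 3D).
        normals      : (T, 3) unit normals of the 3D triangles.
        inner_edges  : (E, 2) pairs of triangle indices for edges not at an image contour.
        lam          : smoothing weight.
        """
        # Discrepancy: squared point-to-plane distance, summed over the pixels.
        n, d = planes[pixel_to_tri, :3], planes[pixel_to_tri, 3]
        discrepancy = np.sum((np.einsum('ij,ij->i', n, pixel_points) + d) ** 2)
        # Smoothing: squared difference of the normals on both sides of each inner edge.
        smoothing = np.sum((normals[inner_edges[:, 0]] - normals[inner_edges[:, 1]]) ** 2)
        return discrepancy + lam * smoothing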
2.6 Triangle Filtering 
For all i, the method in Section 2.5 provides a 3D model centered at image i using the images 𝒩(i). Now, several filters are applied to the resulting list of triangles to remove the most inaccurate and unexpected triangles.
Notations   We need additional notations in this Section. Here t is a 3D (not 2D) triangle of the i-th view-centered model. The angle between two vectors u and v is angle(u, v) ∈ [0, π]. Let d_i, c_i be the camera symmetry direction and center at the i-th pose in world coordinates (d_i points toward the sky). Let U_j(v) be the length of the major axis of the covariance matrix C_v of v ∈ R^3, as if v is reconstructed by ray intersection from the projections of v in the images 𝒩(j) using Levenberg-Marquardt.
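The major-axis length of such an uncertainty ellipsoid follows from the largest eigenvalue of C_v; the sketch below and its 90% scaling factor are our own illustration, not necessarily the exact computation used in the paper:

    import numpy as np

    def major_axis_length(C_v, scale=1.0):
        """Length of the major axis of the uncertainty ellipsoid of a 3D point,
        defined by its 3x3 covariance matrix C_v (sketch). With scale = 1 the
        semi-axis is one standard deviation; scale = sqrt(chi2_0.9(3)) ~ 2.5
        would give the 90% ellipsoid mentioned in the Redundancy filter."""
        lam_max = np.max(np.linalg.eigvalsh(C_v))   # largest eigenvalue of C_v
        return 2.0 * scale * np.sqrt(lam_max)       # full axis = 2 semi-axes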
Uncertainty   Parts of the scene are reconstructed in several view-centered models with different accuracies. This is especially true in our wide view field context, where a large part of the scene is visible in a single image. Thus, the final 3D model cannot be defined by a simple union of the triangle lists of all view-centered models; a selection of the triangles should be done.
We reject t if the i-th model does not provide one of the best available uncertainties among all models: if all vertices v of t have ratio U_i(v)/min_j U_j(v) greater than a threshold u_0.
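A direct transcription of this rejection test (with a hypothetical representation of the per-model uncertainty functions U_j) could look like:

    def reject_by_uncertainty(tri_vertices, U, i, u_0, candidate_models=None):
        """Uncertainty filter (sketch): reject a triangle of the i-th model if all
        of its vertices have U_i(v) / min_j U_j(v) > u_0.

        tri_vertices     : the 3 vertices v of the triangle.
        U                : U[j](v) returns the uncertainty of v in the j-th model.
        candidate_models : models j to take the minimum over (all models by default,
                           or the overlap list N(i) of the Complexity Handling paragraph).
        """
        models = candidate_models if candidate_models is not None else list(U.keys())
        ratios = [U[i](v) / min(U[j](v) for j in models) for v in tri_vertices]
        return all(r > u_0 for r in ratios)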
Prior Knowledge   Here we assume that the catadioptric camera is hand-held by a pedestrian walking on the ground, such that (1) the camera symmetry axis is (roughly) vertical, (2) the ground slope is moderate, and (3) the step length between consecutive images and the height between the ground and the camera center do not change too much. This knowledge is used to reject unexpected triangles which are not in a "neighborhood of the ground".
A step length estimate is s = median_i ||c_i − c_{i+1}||. We choose angles α_t, α_b between d_i and the observation rays such that 0 < α_t < π/2 < α_b < π. Triangle t is rejected if it is below the ground: if it has a vertex v such that angle(d_i, v − c_i) > α_b and the scaled height (1/s) d_i^T(v − c_i) is less than a threshold. The sky rejection does not depend on the scale s. We robustly estimate the mean m and standard deviation σ of the heights d_i^T(v − c_i) for all vertices v of the i-th model such that angle(d_i, v − c_i) < α_b. Triangle t is rejected if it has a vertex v such that angle(d_i, v − c_i) < α_t and (d_i^T(v − c_i) − m)/σ is greater than a threshold.
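The two rejection tests can be sketched as follows; the thresholds h_min and k_max and the MAD-based robust statistics are our assumptions for illustration:

    import numpy as np

    def ground_sky_rejection(verts, c_i, d_i, s, alpha_t, alpha_b, h_min, k_max):
        """Prior-knowledge filter (sketch): flag vertices below the ground or in the sky.

        verts   : (V, 3) vertices of the i-th view-centered model.
        c_i, d_i: camera center and unit symmetry direction, d_i pointing to the sky.
        s       : step length estimate, s = median_i ||c_i - c_{i+1}||.
        """
        rays = verts - c_i
        ang = np.arccos(np.clip(rays @ d_i / np.linalg.norm(rays, axis=1), -1.0, 1.0))
        h = rays @ d_i                                   # heights d_i^T (v - c_i)

        # Below-ground test: large angle to d_i and scaled height below a threshold.
        below_ground = (ang > alpha_b) & (h / s < h_min)

        # Sky test: robust statistics of the heights of the non-ground vertices ...
        keep = ang < alpha_b
        m = np.median(h[keep])
        sigma = 1.4826 * np.median(np.abs(h[keep] - m))  # MAD-based robust std
        # ... then flag vertices close to the sky direction with abnormal height.
        sky = (ang < alpha_t) & ((h - m) / sigma > k_max)

        # A triangle with a flagged vertex is rejected.
        return below_ground | sky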
Reliability   The 3D modeling application requires additional filtering to reject "unreliable" triangles that the filters above miss. These triangles include those which are in the neighborhood of the line supporting the c_j, j ∈ 𝒩(i) (if any). Inspired by a two-view reliability method (Doubek and Svoboda, 2002), we reject t if it has a vertex v such that max_{j,k ∈ 𝒩(i)} angle(v − c_j, v − c_k) is less than a threshold α_0. This method is intuitive: t is rejected if the ray directions v − c_j, j ∈ 𝒩(i), are (nearly) parallel.
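A possible transcription of this test, assuming the camera centers c_j, j ∈ 𝒩(i), are given as numpy arrays:

    import numpy as np
    from itertools import combinations

    def max_apical_angle(v, centers):
        """Reliability measure (sketch): largest angle between the rays v - c_j and
        v - c_k over all pairs of camera centers used for the i-th model."""
        rays = [(v - c) / np.linalg.norm(v - c) for c in centers]
        return max(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
                   for a, b in combinations(rays, 2))

    def reject_unreliable(tri_vertices, centers, alpha_0):
        """Reject the triangle if one of its vertices has a maximal apical angle
        smaller than alpha_0 (ray directions nearly parallel)."""
        return any(max_apical_angle(v, centers) < alpha_0 for v in tri_vertices)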
Redundancy   The previous filters provide a redundant 3D model, insofar as scene parts may be reconstructed by several mesh parts selected in several view-centered models. Redundancy increases with the threshold u_0 of the uncertainty-based filter and with the inverse of the threshold α_0 of the reliability-based filter. Our final filter decreases redundancy as follows: 3D triangles at mesh borders are progressively rejected, in decreasing uncertainty order, if they are redundant with other mesh parts. Triangle t is redundant if its neighborhood intersects a triangle of the j-th view-centered model (j ≠ i). The neighborhood of t is the truncated pyramid with base t and three edges. These edges are the main axes of the 90% uncertainty ellipsoids of the vertices v of t, defined by C_v.
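The outer loop of this filter can be sketched as follows; the helpers neighborhood_pyramid and intersects_other_model, and the choice of a per-triangle uncertainty, are hypothetical placeholders for the geometric tests described above:

    def decrease_redundancy(border_triangles, uncertainty, neighborhood_pyramid,
                            intersects_other_model):
        """Redundancy filter (sketch): reject border triangles in decreasing
        uncertainty order when their neighborhood (a truncated pyramid built from
        the 90% uncertainty ellipsoids of their vertices) intersects a triangle
        of another view-centered model.

        border_triangles       : triangles at the mesh borders of the i-th model.
        uncertainty            : maps a triangle to its uncertainty (e.g. max U_i(v)).
        neighborhood_pyramid   : builds the truncated pyramid of a triangle.
        intersects_other_model : tests the pyramid against the triangles of the
                                 models j != i (hypothetical helper).
        """
        kept = []
        for t in sorted(border_triangles, key=uncertainty, reverse=True):
            if not intersects_other_model(neighborhood_pyramid(t)):
                kept.append(t)
        return kept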
Complexity Handling   We apply the filters above in increasing complexity order to deal with a large number of triangles (tens of millions in our case). The filters based on prior knowledge and reliability are applied first. Thanks to the c_i and to the reliability angle α_0, we estimate the radius r_i and center b_i of a ball which encloses the selected part of the i-th view-centered model: b_i = (c_{i-1} + c_{i+1})/2 and tan(α_0/2) = ||c_{i+1} − c_{i-1}||/(2 r_i) if 𝒩(i) = {i−1, i, i+1}. Let N(i) = {j, ||b_i − b_j|| < r_i + r_j} be the list of view-centered models j which may have an intersection with the i-th view-centered model. Then the uncertainty-based filter is accelerated thanks to N(i): triangle t is rejected if U_i(v)/min_{j ∈ N(i)} U_j(v) > u_0 for all vertices v of t. Last, the redundancy-based filter is applied. Its complexity due to the uncertainty sort is O(p log(p)), where p is the number of triangles. Its complexity due to the redundancy triangle tests is O(p^2), but this is accelerated using test eliminations and hierarchical bounding boxes.
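The enclosing balls and the overlap lists N(i) follow directly from the formulas above; a minimal sketch, assuming the neighborhood {i-1, i, i+1} for every pose:

    import numpy as np

    def enclosing_balls(centers, alpha_0):
        """For each pose i, center b_i and radius r_i of a ball enclosing the
        selected part of the i-th model: b_i = (c_{i-1} + c_{i+1}) / 2 and
        tan(alpha_0 / 2) = ||c_{i+1} - c_{i-1}|| / (2 r_i)."""
        b, r = {}, {}
        for i in range(1, len(centers) - 1):
            b[i] = 0.5 * (centers[i - 1] + centers[i + 1])
            r[i] = np.linalg.norm(centers[i + 1] - centers[i - 1]) / (2.0 * np.tan(alpha_0 / 2.0))
        return b, r

    def overlap_list(b, r, i):
        """N(i): the models j whose enclosing ball may intersect that of model i."""
        return [j for j in b if np.linalg.norm(b[i] - b[j]) < r[i] + r[j]]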
3 EXPERIMENTS 
The image sequence is taken on the university campus on August 15-16th afternoons, without people. There are several trajectory loops, a variable ground surface (road, footpath, unmown grass), buildings, a corridor and vegetation (bushes, trees). This scene accumulates several difficulties: a scene that is not 100% rigid (due to breaths of wind), illumination changes between the day 1 and day 2 sub-sequences (Fig. 2), low-textured areas, camera gain changes (uncorrected), the aperture problem and a non-uniform sky at building-sky edges. The sequence has 2260 JPEG images of 3264 x 2448 pixels, which are reduced by 2 in both dimensions to accelerate all calculations. The perspective camera points toward the sky; it is hand-held and mounted on a monopod. The mirror (www.0-360.com, 2010) provides a large view field: 360 degrees in the horizontal plane, about 52 degrees above and 62 degrees below. The view field is projected between concentric circles of radii 572 and 103 pixels. We use a Core 2 Duo 2.5 GHz laptop with 4 GB of 667 MHz DDR2 memory.
First, the geometry is estimated thanks to the methods in Sections 2.1, 2.2 and 2.3. The user provides the list of image pairs {i, j} such that the drift between i and j should be removed (a drift detection method is not integrated in the current version of the system). Once the geometries of the day 1 and day 2 sub-sequences are estimated using the initial calibration, points are matched between images i and j using correlation (Fig. 2) and CBA is applied to