We now have all camera coordinates and orientations and the
3D coordinates of the set of initial points, all registered in the
same global coordinate system. Unless a known distance is
used, the coordinates are determined only up to a scale factor. The next interactive
operation is to divide the scene into connected segments to
define the surface topology. This is followed by an automatic
corner extractor, again the Harris operator, and matching
procedure across the images to add more points into each of the
segmented regions. The matching is constrained, within a
segment, by the epipolar condition and disparity range setup
from the 3D coordinates of the initial points. The bundle
adjustment is repeated with the newly added points to improve
on previous results and re-compute 3D coordinate of all points.
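The constrained matching described above can be sketched as a simple acceptance test: a candidate corner correspondence is kept only if it lies close to the epipolar line and within the disparity range derived from the segment's seed points. The fundamental matrix `F`, the distance threshold, and the disparity bounds are hypothetical inputs here, not values from the paper.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Perpendicular distance of point x2 (image 2) from the epipolar
    line F @ x1 of point x1 (image 1); points are homogeneous 3-vectors."""
    l = F @ x1                       # epipolar line a*x + b*y + c = 0
    return abs(l @ x2) / np.hypot(l[0], l[1])

def is_valid_match(F, x1, x2, disp_range, max_epi_dist=1.5):
    """Accept a candidate correspondence only if it satisfies both the
    epipolar condition and the disparity range set from the seed points.
    Horizontal disparity is an assumption (e.g. near-rectified images)."""
    d_min, d_max = disp_range
    disparity = abs(x1[0] - x2[0])
    return (epipolar_distance(F, x1, x2) <= max_epi_dist
            and d_min <= disparity <= d_max)
```

For rectified image pairs the epipolar test reduces to comparing row coordinates, but the same function applies to general camera configurations through `F`.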
An approach to obtain 3D coordinates from a single image is
essential to cope with occlusions and lack of features. Several
approaches are available [e.g. van den Heuvel, 1998, Liebowitz
et al, 1999]. Our approach uses several types of constraints for
surface shapes such as planes and quadrics, and surface
relationships such as perpendicularity and symmetry. The
equations of some of the planes can be determined from seed
points previously measured. The equations of the remaining
planes are determined using the knowledge that they are either
perpendicular or parallel to the planes already determined. With
little effort, the equations of the main planes on the structure,
particularly those to which structural elements are attached, can
be computed. From these equations and the known camera
parameters for each image, we can determine 3D coordinates of
any point or pixel from a single image even if there is no
marking on the surface. When some plane boundaries are not
visible, they can be computed by plane intersections. This can
also be applied to surfaces like quadrics or cylinders whose
equations can be computed from existing points. Other
constraints, such as symmetry and points with the same depth or
the same height, are also used. The general rule for adding points on
structural elements and for generating points in occluded or
symmetrical parts is to do the work in 3D space to find the
new points and then project them onto the images using the known
camera parameters. The texture images are edited afterwards to
remove the occluding objects and replace them with texture
from current or other images. The main steps are shown in
figure 4.
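The "work in 3D, then project" rule can be illustrated with the symmetry constraint: a point in an occluded part is generated by reflecting a measured point across a known symmetry plane, and the result is projected into each image with its camera matrix. This is a minimal sketch; the plane parameters and the 3x4 projection matrix `P` are assumed inputs, not data from the paper.

```python
import numpy as np

def reflect_across_plane(X, n, d):
    """Mirror 3D point X across the plane n.X + d = 0 (n need not be unit)."""
    n = n / np.linalg.norm(n)
    return X - 2.0 * (n @ X + d) * n

def project(P, X):
    """Project 3D point X with a 3x4 camera matrix P; return pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

The projection step is the same one used for all generated points, whether they come from symmetry, plane intersections, or surface equations.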
1. Extract, match, and compute 3D coordinates of seed points
2. In 3D space, reconstruct the object from the seed points
3. Project new points into the images
4. Model and texture map the object
Figure 4. Main steps of constructing architectural elements
semi-automatically (column and window examples).
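The single-image measurement described above amounts to a ray-plane intersection: the viewing ray of a pixel is cast from the camera centre and intersected with the known surface plane. A minimal sketch, assuming calibrated intrinsics `K`, world-to-camera rotation `R`, and camera centre `C` (all hypothetical inputs):

```python
import numpy as np

def pixel_to_plane_point(K, R, C, uv, n, d):
    """Intersect the viewing ray of pixel uv with the plane n.X + d = 0.
    K: 3x3 intrinsics, R: world-to-camera rotation, C: camera centre.
    Returns the 3D point on the plane (assumes the ray is not parallel)."""
    # Ray direction in world coordinates through the pixel
    ray = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    # Solve n.(C + t*ray) + d = 0 for the ray parameter t
    t = -(n @ C + d) / (n @ ray)
    return C + t * ray
```

This is why any pixel on a determined plane can be measured from a single image, with no marking on the surface required.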
We will now give more details on the use of seed points. A
cylinder is constructed after its direction and approximate
radius and position have been automatically determined from
four seed points (figure 5-a) using quadric formulation
[Zwillinger, 1996]. The ratio between the upper and the lower
circle can be set in advance. It is set to less than 1.0 (about 0.85)
to create a tapered column. From this information, points on the
top and bottom circle of the column (figure 5-b) can be
automatically generated in 3D resulting in a complete model.
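Generating the circle points in 3D can be sketched as follows: given the fitted axis, base radius, and height, points are sampled on the bottom circle and on a top circle scaled by the taper ratio (about 0.85, as in the text). The function and its parameter names are illustrative, not the paper's implementation.

```python
import numpy as np

def column_circle_points(base_c, axis, radius, height, taper=0.85, n=16):
    """Generate 3D points on the bottom and top circles of a tapered
    column: top radius = taper * bottom radius. base_c is the centre of
    the bottom circle; axis points from base to crown."""
    axis = axis / np.linalg.norm(axis)
    # Two vectors spanning the plane perpendicular to the axis
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:          # axis parallel to x: use y instead
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ring = np.cos(ang)[:, None] * u + np.sin(ang)[:, None] * v
    bottom = base_c + radius * ring
    top = base_c + height * axis + taper * radius * ring
    return bottom, top
```

The generated points are then projected into the images with the known camera parameters, as in step 3 of figure 4.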
Figure 5. (a) Four seed points are extracted on the base
and crown, (b) column points are added automatically.
Reconstructing arches is similar to the approach used in Facade
except that our approach uses seed points instead of blocks and
the arch points are extracted automatically. First a plane is fitted
to seed points on the wall (figure 6-a). An edge detector (a
morphological operator, a revision of [Lee et al, 1987]) is applied
to the region (figure 6-b) and points at constant interval along
the arch are automatically sampled. Using image coordinates of
these points (in one image), the known image parameters, and
the equation of the plane, the 3D coordinates are computed and
projected on the images (figure 7). A procedure for constructing
blocks, even when they are partially invisible, has been developed. For example,
in figure 8 the part of the middle block where it meets the base
is not visible and needs to be measured in order to reconstruct
the whole block. To solve this problem, we first extract the
visible corners on all blocks from several images and compute
their 3D coordinates. We then fit a plane to the top of the base
block, using the gray points in figure 8, and project a normal to
this plane from each of the corners of the block attached to it
(the white points). The intersection of each normal with the plane
automatically produces a new point (a black point in figure 8).
Using symmetry, we can fully construct the block.
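The two geometric operations used here, fitting a plane to the measured corner points and dropping a normal from each visible corner onto it, can be sketched as below. This is an illustrative least-squares formulation, assuming only the 3D points as input.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (m, 3) array of 3D points: returns
    unit normal n and offset d with n.X + d = 0 (SVD of centred points)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                     # direction of least variance
    return n, -n @ c

def foot_of_normal(X, n, d):
    """Drop a normal from point X onto the plane n.X + d = 0, giving the
    hidden point where the block meets the base."""
    return X - (n @ X + d) * n
```

Each foot point recovers one invisible corner; symmetry then completes the remaining ones.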
Figure 6. (a) Seed points (b) detected edge.
Figure 7. Automatic point
extraction on arches.
Figure 8. Constructing blocks.
For windows and doors we need three (preferably four) corner
points and one point on the main surface (figure 4 above). By