Stephan Heuel
Thus we obtain the following generic situation shown in fig. 3. Let the image features ^iF and their neighborhood relations ^iN ⊆ N(^iF, ^iF) of image i be collected in the feature adjacency graph ^iFAG = G(^iF, ^iN), being a compact representation of the situation in fig. 1b. Our goal is to reconstruct parts of the 3D FAG and to derive 3D aggregates together with their neighborhood relations, contained in the aggregate adjacency graph AAG. We want to discuss corners in detail, as they form the basis for our 3D reconstruction procedure.
The 3D corners that are estimated in our approach are a special class of aggregates A. They are composed of 3D features
F together with the used neighborhood relations. Fig. 3 represents the relation between the FAGs in object and image
space and the AAG.
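As a minimal illustration, a graph G(F, N) of this kind can be held as a set of feature ids and a set of unordered neighbor pairs. The following C++ sketch is purely illustrative and not the data structure of the actual implementation:

```cpp
#include <set>
#include <utility>

// Minimal sketch of a feature adjacency graph G(F, N): nodes are
// feature ids (points, edges, regions), arcs are neighborhood
// relations such as incidence. Types are illustrative only.
struct FAG {
    std::set<int> features;                  // F: feature ids
    std::set<std::pair<int, int>> relations; // N: unordered neighbor pairs

    void addFeature(int f) { features.insert(f); }
    void addRelation(int f, int g) {
        // store each pair in canonical order so the relation is symmetric
        relations.insert(f < g ? std::make_pair(f, g)
                               : std::make_pair(g, f));
    }
    bool neighbors(int f, int g) const {
        return relations.count(f < g ? std::make_pair(f, g)
                                     : std::make_pair(g, f)) > 0;
    }
};
```

Storing pairs in canonical order makes the neighborhood relation symmetric without duplicating arcs.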
The reconstruction starts at 3D corners. Corners with n > 1 neighboring edges may be represented by C^n = (P, E_1, ..., E_n, R_1, ..., R_m). The geometry of these 3D features is given by the three coordinates of the corner point and the n directions of the edges.
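Such a corner can be sketched as a small structure holding the corner point and the n edge directions. The layout below is an assumption for illustration only (the regions R_j carry no geometry here, and each unit direction contributes two degrees of freedom):

```cpp
#include <array>
#include <vector>

// Sketch of an n-corner C^n = (P, E_1,...,E_n, R_1,...,R_m):
// the geometry is the 3D corner point plus one unit direction
// per incident edge; the regions R_j add no geometric parameters.
struct Corner {
    std::array<double, 3> point;                   // P: 3 coordinates
    std::vector<std::array<double, 3>> directions; // E_i: n unit directions

    int n() const { return static_cast<int>(directions.size()); }

    // free geometric parameters: 3 for P, 2 per unit direction
    int dof() const { return 3 + 2 * n(); }
};
```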
[Figure 3: diagram relating the ^iFAG (2D), the FAG (3D) and the AAG (3D)]
Due to the multiple image redundancy the 3D edges E_1, ..., E_n can be taken to be quite reliable. The generation of the 3D corners also establishes reliable neighborhood relations (incidences) between the 3D point and the 3D edges, but also between the recovered 3D features F and the corresponding 2D features ^iF.
Figure 3: Relationship of the graphs ^iFAG of the images i, the 3D FAG and the graphs of the 3D aggregates, AAG. Some of these 2D features induce a set of 3D features and their neighborhood relationship, contained in the 3D FAG. Aggregating these features leads to entities like n-corners C^n, which in turn are part of the 3D AAG.

We also obtain neighboring planar regions R_1, ..., R_m with m > 1; but, without exploiting the 3D geometry in detail, e.g. by an occlusion analysis, they cannot be inferred reliably.
The reconstruction process, described below (cf. 4.1), actually uses a set of image points and edges which, in a many-to-one relation, are linked to the 3D point and edges.
3 UNCERTAIN GEOMETRIC ENTITIES
Grouping 3D entities involves tests of various relationships between these entities. Geometric relationships play a central
role, especially if they involve identities, incidences or other crisp conditions, as they can be used to either exclude merg-
ing processes with high reliability or to evaluate grouping results. Checking for the existence of these crisp relationships
requires thresholds which depend on the uncertainty of the geometric parameters and are therefore best formulated as hypothesis tests. Thus we derived the uncertainty of the initially reconstructed 3D aggregates, especially corners, representing it in a covariance matrix, and built a library for "statistical uncertain geometric reasoning", containing routines for generating 3D geometric entities and for checking geometric relations between them¹. The concepts are similar to those
of (Kanatani, 1995). Here we describe the process for generating uncertain 3D entities from image features and the basic
elements of the geometric reasoning modules.
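As an example of such a hypothesis test, the incidence of an uncertain homogeneous point X and an uncertain plane A can be tested via the algebraic distance d = AᵀX: first-order error propagation gives var(d) = Xᵀ Σ_AA X + Aᵀ Σ_XX A, and T = d²/var(d) is compared against a chi-square threshold. The sketch below illustrates this generic testing idea only and is not the interface of the library itself; the default threshold 3.84 corresponds to χ² with 1 degree of freedom at 95%.

```cpp
// Sketch of a statistical incidence test between an uncertain
// homogeneous point X (4-vector, covariance Sxx) and an uncertain
// plane A (4-vector, covariance Saa). Under H0 "X lies on A" the
// distance d = A^T X is zero; first-order error propagation gives
// var(d) = X^T Saa X + A^T Sxx A, and T = d^2 / var(d) is
// approximately chi-square distributed with 1 degree of freedom.
double dot4(const double a[4], const double b[4]) {
    double s = 0.0;
    for (int i = 0; i < 4; ++i) s += a[i] * b[i];
    return s;
}

double quadForm(const double v[4], const double S[4][4]) {
    double s = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            s += v[i] * S[i][j] * v[j];
    return s;
}

// returns true if the incidence hypothesis is NOT rejected
bool incident(const double X[4], const double Sxx[4][4],
              const double A[4], const double Saa[4][4],
              double chi2Threshold = 3.84 /* chi^2_{1; 0.95} */) {
    double d = dot4(A, X);
    double var = quadForm(X, Saa) + quadForm(A, Sxx);
    return d * d <= chi2Threshold * var;
}
```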
3D features are represented in two ways, a Euclidean one resulting from the reconstruction process and a homogeneous one for the spatial reasoning, which is used internally. The homogeneous representation, in contrast to using different maps, is continuous and also allows representing entities at infinity. In all cases the uncertainty is represented by a covariance matrix of adequate rank, cf. (Förstner, 2000b).

We have to solve three tasks:

1. Transferring given entities into the internal representation.
2. Generating new entities from given ones.
3. Testing pairs of entities for specific geometric relations.

Table 1 summarizes the used representations.

  entity        normal      homogeneous
  point P(X)    X           X = (X_0, X) = (X, 1)
  line  L(L)    (X, M)      L = (L, L_0) = (M, X x M)
  plane ε(A)    (N, D)      A = (A, A_0) = (N, D)

Table 1: The basic spatial entities with their different representations. The line parameters fulfill the condition L · L_0 = 0. The vectors M and N represent normalized directions and normals, respectively.
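The Plücker condition L · L_0 = 0 from Table 1 holds by construction when a line is built from a point X and a direction M, since L_0 = X × M is orthogonal to M. A minimal sketch of this construction (illustrative types, not the library interface):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Homogeneous (Pluecker) line L = (Lbar, L_0) = (M, X x M) from a
// point X on the line and its direction M, as in Table 1.
struct Line { Vec3 Lbar, L0; };

Line lineFromPointDir(const Vec3& X, const Vec3& M) {
    return { M, cross(X, M) };
}

// Pluecker condition Lbar . L0 = 0 holds by construction.
bool pluckerConditionHolds(const Line& L, double eps = 1e-12) {
    return std::fabs(dot(L.Lbar, L.L0)) < eps;
}
```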
We realized modules for generating 3D entities from other given ones, namely L = P_1 ∧ P_2, L = ε_1 ∩ ε_2, ε = P ∧ L and P = L ∩ ε, where the operator ∧ joins two entities and the operator ∩ intersects two entities. A plane ε from three points P_i, i = 1, ..., 3, is determined via ε = (P_1 ∧ P_2) ∧ P_3.
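In Euclidean terms, the construction ε = (P_1 ∧ P_2) ∧ P_3 amounts to: join P_1, P_2 to a line with direction M = P_2 − P_1, then join that line with P_3 to obtain the plane with normal N = M × (P_3 − P_1) and distance D = N · P_1. The following sketch illustrates this plain Euclidean computation, without the uncertainty propagation of the actual modules:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a[0]-b[0], a[1]-b[1], a[2]-b[2] };
}
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}
double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Plane (N, D) with N . x = D for every point x on the plane.
struct Plane { Vec3 N; double D; };

// eps = (P1 ^ P2) ^ P3: join P1, P2 to a line (direction M = P2 - P1),
// then join that line with P3 to get the plane.
Plane planeFromPoints(const Vec3& P1, const Vec3& P2, const Vec3& P3) {
    Vec3 M = sub(P2, P1);            // line direction of P1 ^ P2
    Vec3 N = cross(M, sub(P3, P1));  // plane normal
    return { N, dot(N, P1) };
}

bool onPlane(const Plane& e, const Vec3& P, double tol = 1e-9) {
    return std::fabs(dot(e.N, P) - e.D) < tol;
}
```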
¹ This library SUGR, written in C++ and in MAPLE, is available upon request.
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 399