ISPRS Commission III, Vol.34, Part 3A ,,Photogrammetric Computer Vision“, Graz, 2002
Figure 3: Examples of intermediate steps during road extraction: (b) markings outside of shadow area, (c) markings inside of shadow area, (d) verified lanes (top), detected car (bottom).
the knowledge that markings are very bright and have symmetric contrast on both sides because of the unicolored pavement (see Fig. 4). However, in the case of shadow regions, as detected during context-based data analysis, the system automatically retrieves a different parameter set for internal evaluation and thus accommodates the different situation.
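Such a context switch can be pictured as a simple lookup of an alternative parameter set; the following sketch is purely illustrative (the parameter names and thresholds are assumptions, not values from the paper):

```python
# Hypothetical context-dependent parameter retrieval: shadow regions get a
# lower intensity threshold and a more tolerant asymmetry bound.
MARKING_PARAMS = {
    "default": {"min_intensity": 180, "max_asymmetry": 0.15},
    "shadow":  {"min_intensity": 90,  "max_asymmetry": 0.30},
}

def params_for_region(in_shadow: bool) -> dict:
    """Return the evaluation parameter set matching the region context."""
    return MARKING_PARAMS["shadow" if in_shadow else "default"]
```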
In order to attain an unbiased evaluation, model components belonging to different types should be independent from each other. This is, of course, not always the case in practice. On the one hand, we use the orientation difference between two markings as a criterion for the extraction of groups of markings. On the other hand, the curvature of a group is part of the evaluation, which is doubtless correlated with the orientation difference (see also Fig. 5). However, what makes a difference is the point of view on the object: In the first case only pairs of markings are considered. Therefore, the group may "wiggle" although yielding pairwise small orientation differences. In the second case the group is considered as a whole. Hence, notable wiggling would lead to a bad rating.

Figure 4: Model for markings: (b) line model, (c) asymmetry ΔI of line points, (d) typical intensities along marking.

Components used for extraction | Components used for evaluation
> Intensity: curvature maximum along s⊥ | > Asymmetry ΔI of parabolic profile along s⊥: small
> Length of s: lower bound | > Intensity along s: high

At each step of processing, internal evaluation is performed by not only aggregating previously derived values but also exploiting knowledge not used in prior steps. This point has especially high relevance for bottom-up driven image understanding systems such as ours, since essential global object properties making different objects distinctive can be exploited only at later stages of processing. Lane segments, for instance, are constructed from grouped markings and optional road sides (Figs. 5, 7, 8), but they still have high similarity to, e.g., illuminated parts of gable roofs. Only their collinear and parallel concatenation resulting in lanes, road segments, and roads makes them distinctive and gives in turn new hints for missing lane segments (cf. Figs. 9, 10). Consider the two-lane road segment in Fig. 10a. The continuity of the upper lane provides a strong hint for bridging the gaps of the lower lane in spite of the high intensity variation therein. Hence, at this stage, the system can base its decision on more knowledge than purely the homogeneity within the gaps.
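The distinction between the pairwise extraction criterion and the global evaluation criterion can be illustrated with a small sketch (the thresholds and helper names are hypothetical, not taken from the paper): a zig-zag group satisfies every pairwise orientation check yet accumulates considerable total turning.

```python
import math

def orientations(points):
    """Orientation (radians) of each consecutive segment of a polyline."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def pairwise_ok(points, max_diff=0.25):
    """Extraction criterion: every pair of adjacent segments agrees in orientation."""
    o = orientations(points)
    return all(abs(b - a) <= max_diff for a, b in zip(o, o[1:]))

def group_wiggles(points, max_total_turn=0.5):
    """Evaluation criterion: total absolute turning of the group as a whole."""
    o = orientations(points)
    return sum(abs(b - a) for a, b in zip(o, o[1:])) > max_total_turn

# A zig-zag passes the pairwise test but is flagged by the global one.
zigzag = [(i, 0.1 * (i % 2)) for i in range(8)]
```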
Figures 4–10 summarize the employed extraction and evaluation models. Tables below the figures give detailed information about the respective model components and the expected values to measure (qualitatively). Linear features are denoted as a smooth, unit-speed curve s = s(l), neglecting the parameter l. Ribbons s(w) = s(l, w) have an additional variable w parameterizing the ribbon profiles in the direction of the unit normal vector s⊥ (bold letters for vectors). I stands for grayvalue intensities and H are heights given by a Raster-DSM.

Figure 5: Model for grouped markings: (c) histogram of lengths, (d) histogram of curvatures.

Components used for extraction | Components used for evaluation
> Orientation difference of pairs of markings and gaps: limited | > Lengths of markings and gaps: constant within group
> Gap length: limited | > Overall curvature of s: low
> Length of group: lower bound | > Height variation of s: low

In the implementation, fuzzy-set theory is used for transforming the knowledge represented by the model into a mathematical framework. The internal evaluation of each object is based on fuzzy-functions which approximate the values to be extracted as they are expected by the model (illustrated by graphs in Figs. 4–10). The resulting confidence values are then combined by fuzzy-aggregation operations.
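As a minimal sketch of this scheme, one can picture a piecewise-linear membership function for a "low" value and a fuzzy conjunction over the component confidences. The paper does not specify its exact fuzzy-functions or aggregation operators; the linear ramp and the minimum used below are standard choices and serve only as assumptions for illustration.

```python
def fuzzy_low(x, full=0.0, zero=1.0):
    """Membership of 'x is low': 1 up to `full`, falling linearly to 0 at `zero`."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def aggregate(confidences):
    """Combine component confidences; the minimum is a common fuzzy 'and'."""
    return min(confidences)

# e.g. a group with low overall curvature (0.1) and low height variation (0.4)
conf = aggregate([fuzzy_low(0.1), fuzzy_low(0.4)])
```

Other t-norms (e.g. the product) could replace the minimum; the choice determines how strongly a single weak component penalizes the overall rating.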