Figure 2: (a): Rectified facade image with two super classes, windows and balconies, and four different subclasses of the class window. (b): Found image patches marked with their labels. (c): Mean images of the found clusters and their labels, given manually by the user. The first number codes the class, here 1 for window and 2 for balcony, and the second number codes the type and subclass, respectively, here three different window types. (d): Object hierarchy, with classes represented by their prototypes.
The sample y is matched to the class c^{*} for which the a posteriori probability according to a Bayes classifier is maximised:

c^{*} = \arg\max_{c} \, p(y \mid \mu_c, \Sigma_c) \, p(c) \quad (2)
We assume a uni-modal Gaussian p(y \mid \mu_c, \Sigma_c) with mean \mu_c and covariance \Sigma_c for every class c. The a priori probability p(c) of every class is simply the fraction of the number N_c of samples of class c over the total number N of samples, p(c) = N_c / N.
This way we can introduce a reject option for samples for which the classification is uncertain:

p_a(y) = 1 - \frac{p(y \mid c_{\max})}{\sum_{c=1}^{C} p(y \mid c)} \quad (3)
The reject option holds if p_a > \varepsilon, for which we choose a significance threshold of \varepsilon = 0.01. In that case the corresponding image patch is presented to the user, who has to decide whether to accept or to reject the sample.
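The classification with reject option can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the function names are our own:

```python
import numpy as np

def gaussian_pdf(y, mu, cov):
    """Uni-modal Gaussian density p(y | mu_c, Sigma_c)."""
    d = len(mu)
    diff = y - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def classify_with_reject(y, means, covs, priors, eps=0.01):
    """Assign y to the class maximising the posterior (Eq. 2),
    or reject when the ambiguity measure p_a (Eq. 3) exceeds eps."""
    likelihoods = np.array([
        gaussian_pdf(y, mu, cov) for mu, cov in zip(means, covs)
    ])
    posteriors = likelihoods * np.array(priors)
    c_max = int(np.argmax(posteriors))
    # Eq. (3): how much of the total likelihood mass the best class misses
    p_a = 1.0 - likelihoods[c_max] / likelihoods.sum()
    if p_a > eps:
        return None  # reject: present the patch to the user
    return c_max
```

A sample close to one class mean is accepted; a sample halfway between two class means yields p_a near 0.5 and is rejected.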
4.3 Detecting candidates for new instances in new images
To detect new instances in new images we use almost the same
procedure as described in Sec. 4.1. As we work on metric images
we assume the image scale given. Thus, we rescale the image
according to the given prototype scale. We then use the prototypes to detect at least one new instance in the new image and start the recursive search procedure described in Sec. 4.1 to detect probable new instances. In contrast to Sec. 4.1, we now skip the clustering and classify all found instances according to the learned classification models.
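The candidate detection step can be sketched as a scan of the rescaled image with a class prototype; the normalised cross-correlation scoring below is an assumption for illustration, not necessarily the matching criterion of the original procedure:

```python
import numpy as np

def find_candidates(image, prototype, threshold=0.8):
    """Return (row, col, score) of windows whose normalised
    cross-correlation with the prototype exceeds the threshold."""
    ph, pw = prototype.shape
    p = prototype - prototype.mean()
    p_norm = np.linalg.norm(p)
    candidates = []
    for r in range(image.shape[0] - ph + 1):
        for c in range(image.shape[1] - pw + 1):
            patch = image[r:r + ph, c:c + pw]
            q = patch - patch.mean()
            q_norm = np.linalg.norm(q)
            if q_norm == 0 or p_norm == 0:
                continue  # flat patch, no correlation defined
            score = float((p * q).sum() / (p_norm * q_norm))
            if score >= threshold:
                candidates.append((r, c, score))
    return candidates
```

The returned positions would then seed the recursive search of Sec. 4.1, and each accepted patch would be passed to the classifier above.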
4.4 Incremental update of prototypes and classifiers
First we need to update our prototypes \mu_c for detecting new instances in new images. This is done by simply updating the already known mean image \mu_c^{\nu} of class c from the last step \nu by adding the mean \mu_c of the new images:

\mu_c^{\nu+1} = \frac{N_c^{\nu} \mu_c^{\nu} + N_c \mu_c}{N_c^{\nu} + N_c} \quad (4)

where N_c^{\nu} and N_c are the associated numbers of samples.
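The prototype update of Eq. (4) is a count-weighted running mean and can be sketched in a few lines (function name ours):

```python
import numpy as np

def update_prototype(mu_old, n_old, new_samples):
    """Eq. (4): merge the stored class mean (over n_old samples)
    with the mean of the newly detected instances."""
    new_samples = np.asarray(new_samples, dtype=float)
    n_new = len(new_samples)
    mu_new = new_samples.mean(axis=0)
    # weight each mean by its sample count, then renormalise
    mu_updated = (n_old * mu_old + n_new * mu_new) / (n_old + n_new)
    return mu_updated, n_old + n_new
```

Only the mean and the count need to be stored per class, so the update is independent of how many images have been processed before.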
As we have only few examples per class, in our first implementation we evaluate LDA on the augmented PCA (aPCA) subspace using all data gathered during the recognition phase. Thus, after receiving a new example the subspace is updated, as well as the coefficients of all images seen before. To increase the performance we will adapt this to an incremental LDA, cf. (Uray et al., 2007), which is a combination of an incremental PCA on an augmented PCA subspace and the LDA on this updated aPCA space. This way we will be able to handle a long sequence of images and to continuously update our class models.
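The batch re-evaluation described above can be sketched as follows; this is a simplified numpy illustration (class name ours) that recomputes the PCA subspace from scratch after every new example, with the discriminant step on the coefficients omitted for brevity:

```python
import numpy as np

class BatchSubspaceModel:
    """Naive batch variant: rebuild the subspace after each example."""

    def __init__(self, n_components=2):
        self.n_components = n_components
        self.samples, self.labels = [], []

    def add_example(self, x, label):
        """Store the new example and recompute PCA on all data so far."""
        self.samples.append(np.asarray(x, dtype=float))
        self.labels.append(label)
        X = np.stack(self.samples)
        self.mean = X.mean(axis=0)
        # PCA via SVD of the centred data matrix
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[: self.n_components]
        # the coefficients of all images seen before are updated as well
        self.coeffs = (X - self.mean) @ self.basis.T

    def project(self, x):
        """Project a sample into the current subspace."""
        return (np.asarray(x, dtype=float) - self.mean) @ self.basis.T
```

The cost of `add_example` grows with the number of stored images, which is exactly what the incremental LDA of (Uray et al., 2007) avoids.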
5 EXPERIMENTS
We now describe an experiment in which we detected and classified windows over a sequence of images, given one example, and built up the class hierarchy.
Fig. 3(a) shows a rectified facade image in which we gave the system one example of a window, marked red. Given this example we started the recursive search and clustering procedure and found new window instances, shown in Fig. 3(b). The second class, marked green, was established after asking the user for the meaning of this cluster. Thus the initial class hierarchy consists of one root node, class 1 (window), with two leaves, 1-1 and 1-2.
Fig. 3(c) shows the associated prototypes of these two classes. Fig. 3(d) shows the samples projected onto the subspace, together with the class decision boundaries, which are offset from the class boundaries by the rejection area. A sample projected onto the region between the decision boundaries cannot be reliably matched to one class; hence the user is asked for the meaning of this sample.
The next images, Figs. 4 to 6, show the process of detecting new instances and updating the learned models. First we took the learned