gradient-descent and a genetic algorithm (GA). He shows that the GA is the most efficient method; however, the algorithm requires 20 to 120 iterations for a population size of about 100. With the same idea, (De La Escalera et al., 2004) compare GA and SA to detect road signs, and use a simple normalized correlation score to recognize road signs from an image database. Roughly 50 iterations are required to find the translation, rotation, scale and stretch parameters with 101 individuals. Dealing with biological metaheuristics, (Siarry, 2007) and (Deneche et al., 2005) propose different methods for shape recognition using an Evolutionary Algorithm (EA), clonal selection (CS) and Particle Swarm Optimization (PSO). It can be observed that these evolutionary algorithms and metaheuristics are very efficient in shape recognition, but that they suffer from other drawbacks: stochastic operators (such as mutation, crossover, etc.) can cause erratic behaviour during population evolution (i.e. several runs on the same image can lead to different results), which requires a large number of iterations to be avoided. In what follows we show that, by hybridizing the evolutionary algorithm to carefully combine colour information (lower noise sensitivity) and edge information (greater precision), we can constrain convergence, improve process repeatability and reduce the number of required iterations.
3 BACKGROUND
To understand our approach, we need to introduce some notation. Our algorithm uses deformable templates and primitive extraction based on edge and colour detection. Each template is associated with a spatial configuration that defines an affine mapping. The primitives are used by the algorithm to find the best {template, configuration} couple based on a score that we propose below.
Figure 1: Templates used to detect (a) interdiction road signs and (b) danger road signs.
3.1 Deformable template 
We use the term deformable template to describe a mathematical shape P defined by edges and regions. This template is a point cloud which can be transformed with respect to a configuration.
Concerning the conditions in which our template will be used, it is justified to use affine transformations based on six parameters: scale s, translations t_u and t_v, rotation \theta, stretch e and skew d. This affine transformation is associated with a transformation matrix with respect to a configuration vector a = (\theta, s, t_u, t_v, e, d)^T:
T_a(x) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} 1 & d \\ 0 & 1 \end{pmatrix} \begin{pmatrix} s & 0 \\ 0 & s\,e \end{pmatrix} x + \begin{pmatrix} t_u \\ t_v \end{pmatrix}    (1)
We choose to show our results for a red road sign. Figure 1 shows the two templates we use to fine-detect these red road signs. A template is described by two different classes: black points belong to edges, red points belong to the colour region (i.e. red in this case). We denote these subsets of points P_e and P_r respectively. A template transformed with respect to a configuration vector a is written as:

P_a = \{\, p(a) = T_a(p) \mid p \in P \,\}    (2)
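To make the notation concrete, the following sketch (a minimal NumPy illustration, not the authors' code; the function name and the toy template are ours) composes the rotation, skew and scale matrices of Eq. (1) and applies the resulting transformation to a template point cloud, as in Eq. (2):

```python
import numpy as np

def affine_transform(a, points):
    """Apply the affine transformation T_a of Eq. (1) to a template point cloud.

    a      : configuration vector (theta, s, t_u, t_v, e, d)
    points : (N, 2) array of template points p
    """
    theta, s, t_u, t_v, e, d = a
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    skew     = np.array([[1.0, d],
                         [0.0, 1.0]])
    scale    = np.array([[s,   0.0],
                         [0.0, s * e]])
    matrix = rotation @ skew @ scale           # composed 2x2 linear part of Eq. (1)
    translation = np.array([t_u, t_v])
    return points @ matrix.T + translation     # P_a = {T_a(p) | p in P}, Eq. (2)

# Example: scale, rotate and shift a (hypothetical) three-point template
template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
a = (np.pi / 6, 0.5, 10.0, 20.0, 1.0, 0.0)
print(affine_transform(a, template))
```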
3.2 Primitives extraction

In order to compare our results to those presented in (Siarry, 2007), we use the same colour extraction algorithm. Dutilleux & Charbonnier present, in their book (chapter ten), three kinds of biological metaheuristic applied to road sign detection. We retain the same extraction in order to be able to compare our detection algorithm to theirs. This extractor performs a multi-criterion extraction. The method uses non-normalized colour components and requires no intermediate calculation, because the first criterion imposes a condition on r = R + G + B. We then obtain an image C_r(I) in which every white pixel corresponds to a non-red pixel and every black pixel corresponds to a red pixel of the original image, as shown in figure 2. More details can be found in (Siarry, 2007).

The second primitive concerns edge extraction. This algorithm is simply based on a gradient calculation in every direction of the original image. We then threshold the gradient magnitude and remove every local minimum to deduce the edges. In a similar way to the colour extraction, black pixels represent edges, as shown in figure 2. If I is an image, \nabla I denotes the gradient image and \|\nabla I\|_T the edge image obtained using a threshold T.
3.3 Distance transform

Felzenszwalb et al. describe an algorithm which computes what we call the Distance Transform (DT) of an image (Felzenszwalb and Huttenlocher, 2004). The DT of a binary image (i.e. an image whose pixels belong to only two possible classes) gives an image D_I in which the value of a pixel is the distance between that pixel and the closest pixel of I belonging to the class of interest. The algorithm proposed in (Felzenszwalb and Huttenlocher, 2004) was chosen because, unlike the one proposed in (Borgefors, 1986), the computed distance is not an approximation but the exact Euclidean distance. For our application, we chose to work with exact distances even though the approximation method is three times faster.
Let I_0 denote the set of pixels belonging to the class of interest:

I_0 = \{\, q = (u, v)^T \mid I(q) = 0 \,\}    (3)
The expression of the DT of an image I with respect to the class I_0 then becomes:

D_I(p) = \min_{q \in I_0} \| p - q \|    (4)
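Equation (4) is exactly an Euclidean distance transform of the binary primitive images. The snippet below uses SciPy's exact Euclidean distance transform as a stand-in for the Felzenszwalb-Huttenlocher algorithm cited above; the resulting distance map is the same, although the internal algorithm differs.

```python
import numpy as np
from scipy import ndimage

def distance_transform(binary_image):
    """Exact Euclidean DT of Eq. (4).

    binary_image : array where 0 marks pixels of the class of interest
                   (red pixels or edge pixels in the primitives above).
    Returns D_I, where D_I[p] = min over q in I_0 of ||p - q||.
    """
    return ndimage.distance_transform_edt(binary_image)

# Toy example: a single feature pixel at (1, 1)
img = np.ones((3, 3), dtype=np.uint8)
img[1, 1] = 0
print(distance_transform(img))   # distances from every pixel to the centre pixel
```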
3.4 ICP Algorithm 
The ICP (Iterative Closest Point) algorithm is an iterative algorithm designed to determine the linear transformation between two similar point clouds. The number of points in each cloud is not necessarily the same. An excellent description of this algorithm is given in (Druon et al., 2006): for two different sets of points {S} and {D}, we define an error \varepsilon between the sets. This error is to be minimized with respect to the transformation T and is computed as the sum of the distances between matched points from the sets {S} and {D} (cf. Eq. 5).
\varepsilon = \frac{1}{N} \sum_{i=1}^{N} \| d_i - T(s_i) \|^2    (5)
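For reference, a brute-force version of the closest-point matching and of the error of Eq. (5) can be written in a few lines of NumPy. This is an illustrative sketch, not the implementation of (Druon et al., 2006): the function names are ours and the normalization follows Eq. (5) as reconstructed above. In practice, the distance transform of Section 3.3 can be used to avoid the explicit nearest-neighbour search.

```python
import numpy as np

def icp_error(source, target, transform):
    """Matching error of Eq. (5) for a candidate transformation T.

    source    : (N, 2) array, point set {S}
    target    : (M, 2) array, point set {D}; M need not equal N
    transform : callable mapping an (N, 2) array to its transformed positions
                (for instance the affine_transform sketched in Section 3.1)
    """
    transformed = transform(source)
    # Brute-force matching: squared distance from every transformed source
    # point to every target point, keeping the closest one
    diffs = transformed[:, None, :] - target[None, :, :]   # (N, M, 2)
    sq_dist = np.sum(diffs ** 2, axis=2)                    # (N, M)
    closest = sq_dist.min(axis=1)                           # matched distances
    return closest.mean()                                   # (1/N) sum ||d_i - T(s_i)||^2

# Example with a hypothetical translation as the candidate transformation
target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source = target[:3] + 0.1
print(icp_error(source, target, lambda p: p - 0.1))   # close to zero
```

A full ICP iteration would then re-estimate the transformation from the matched pairs and repeat until the error stops decreasing.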