2.4.1 Homomorphic Filtering
In order to improve the reliability of point detection in the
matching process, we perform a modified homomorphic filter
to reduce the effects of non-homogenous illumination as well as
specular reflectance caused by moisture on the wound surface.
The image function I(x,y) can be written as the product
E_0 E(x,y) R(x,y), where E_0 is the desired constant illumination,
E is the lighting function and R is the reflectance. The idea is
to compress the brightness due to the lighting conditions, which
generates low frequencies, while enhancing the contrast due to
the reflectance properties of the object, which generates high
frequencies. Variations in the lighting are thus reduced while
details are reinforced, permitting a better observation of the
dark zones of the image. The ad-hoc transfer function with
which the image is filtered is (Kasser and Egels, 2002):
H(\omega_x, \omega_y) = \frac{1}{1 + e^{-s(\sqrt{\omega_x^2 + \omega_y^2} - \omega_0)}} + \lambda    (2)
Parameters s, \omega_0 and \lambda govern the shape of the filter. This filter
shows very good improvement in the final results (see
section 4).
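As a concrete illustration, the homomorphic filtering step can be sketched in Python. The log-domain transform and the sigmoid transfer function follow Eq. (2); the parameter values and the radial-frequency formulation are a plausible reading, not the authors' exact implementation:

```python
import numpy as np

def homomorphic_filter(image, s=30.0, omega0=0.1, lam=0.5):
    """Homomorphic filter with a sigmoid-shaped transfer function.

    s, omega0 and lam mirror the roles of s, omega_0 and lambda in
    Eq. (2); their default values here are illustrative assumptions.
    """
    # Work in the log domain so illumination and reflectance separate
    # additively: log I = log E + log R
    log_img = np.log1p(image.astype(np.float64))

    # Normalised radial frequency grid
    rows, cols = log_img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)

    # Sigmoid transfer function: suppresses low frequencies
    # (illumination) and emphasises high frequencies (reflectance detail)
    H = 1.0 / (1.0 + np.exp(-s * (radius - omega0))) + lam

    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * H))
    return np.expm1(filtered)  # back to the intensity domain
```

The log transform is what makes the multiplicative illumination/reflectance model separable, so that a single frequency-domain filter can attenuate one component while boosting the other.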
2.4.2 Image Segmentation
As the binarization of the images accelerates the matching
process and is also required for connected component labelling
in the next stage, we aim to find an optimal value for
thresholding the image. For this purpose a locally adaptive
top-hat filter is applied as I − B, in which I is the image
function and B is the image background estimated with the
morphological opening operator O. In cases in which the difference between
the pixel value and its corresponding background value is
small, the pixel is set to the same value as the previous pixel
result. The region size in which the local average is to be
calculated is dependent on the size of expected foreground
features. This region size should be large enough to enclose a
feature completely, but not so large as to average across
background nonuniformity. It is also critical to determine how
much of a deviation from the average to tolerate before a
different threshold is selected. A low level for this deviation
results in many erroneous detections, while a high level leaves
some true features undetected. The threshold is therefore selected
with the aid of a noise estimate.
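One plausible realisation of this locally adaptive top-hat thresholding is sketched below; the window size, the MAD-based noise estimate and the deviation factor k are assumptions, not the paper's exact choices:

```python
import numpy as np
from scipy import ndimage

def adaptive_tophat_threshold(image, region=15, k=3.0):
    """Binarise via a locally adaptive top-hat: subtract the background
    estimate B (a morphological opening) and threshold the residual.

    region : structuring-element size; should be large enough to enclose
             a foreground feature completely, but not so large as to
             average across background nonuniformity.
    k      : tolerated deviation in units of the estimated noise sigma
             (an assumed parameterisation).
    """
    img = image.astype(np.float64)
    # Background B = morphological opening of I; top-hat residual = I - B
    background = ndimage.grey_opening(img, size=(region, region))
    residual = img - background

    # Noise estimate via the median absolute deviation of the residual
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))

    # Pixels deviating from the local background by more than k*sigma are ON
    return residual > k * sigma
```

Raising k trades erroneous detections for missed features, exactly the balance the text describes.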
2.4.3 Point Detection
For the extraction of the pattern features, we implement a
connected component labelling procedure. This method is
region-based and exploits pixel connectivities. Considering the
binary image containing a set of objects corresponding to ON
regions on an OFF background, when a pixel is found to be
ON, its neighbouring pixels are tested. Four situations can arise:
none of the neighbours is ON, and the current pixel receives a
new label; one neighbour is ON, and the current pixel is
given the same label; more than one neighbour is ON and
labelled equivalently, so the current pixel is given that
label; finally, two or more neighbours are ON but labelled
differently, so the current pixel is set to one of the labels and
the equivalence is recorded. When
this pass is done, centroids of the regions are computed. When
a contour point is encountered, the scan is interrupted and a
filling routine is initiated. This procedure has the advantage of
being unaffected by noise, leading to a robust estimate of the
centroids.
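The labelling-and-centroid pass can be reproduced with standard tools; the sketch below uses scipy's labelling routine, which carries out the same neighbour testing and equivalence resolution described above (the contour-filling routine is omitted here):

```python
import numpy as np
from scipy import ndimage

def label_centroids(binary):
    """Label connected ON regions in a binary image and return one
    centroid per region.

    ndimage.label performs the single-pass neighbour test with
    equivalence resolution; centroids are then the mean pixel
    position of each labelled region.
    """
    labels, n = ndimage.label(binary)  # 4-connectivity by default
    centroids = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    return labels, centroids
```

Because each centroid averages over all pixels of its region, isolated noisy pixels on the region boundary perturb the estimate very little, which is the robustness property the text appeals to.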
3. GEOMETRIC MATCHING
The Geometric Matching procedure in MEDPHOS exploits the
trifocal constraint to establish robust correspondences between
three or more perspective images of a scene. The epipolar line
constraint is fortunately independent of the shape of the object
(Faugeras, 2001). Our method requires neither image pyramids
nor interactive seed points for generating approximate values,
but provides a solution which is independent of the availability
of initial values or knowledge on the object shape; only a very
rough apriori estimation on the minimum and maximum depth
is required. Moreover, the length of the epipolar lines is
basically unrestricted, and ambiguities due to multiple
candidates are solved by means of the concepts of trifocal
geometry, thus avoiding any smoothing effects introduced by
surface fitting or patch size in area-based techniques. We
assume only a set of three or more discrete, disparate, and
monocular views. The method is robust against missing points.
It is also evident from (1) that, unlike in a two-camera
model, the number of ambiguities no longer depends on the
length of the epipolar lines (Maas, 1997). In general,
our matching process consists of two stages which form a
useful approach for making the final decision, i.e. local
matching, and global matching. In the local matching stage, for
every feature in the Source Image, an attempt is made to find a
set of candidate match features in the Target Image that satisfy
certain constraints and have similar local attribute properties. In
this step correspondence hypotheses are being generated, i.e.
finding initial matches between features from different images
based on geometry, and radiometric and topologic similarities.
The compatibility between the extracted features leads to a
preliminary list of correspondences, including their weights. In
the global matching stage, a scheme for imposing overall
consistency among the local matches is exploited to
disambiguate multiple local match feature candidates and to
pick out the correct corresponding triplets. In other words, at
this stage we are looking for the evaluation of the hypotheses.
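A minimal sketch of this two-stage scheme follows, with `compatible` and `weight` standing in for the geometric/radiometric tests and similarity scores (both assumed interfaces). Note that the real global stage enforces trifocal consistency across three or more views; the best-weight rule below is a deliberate simplification:

```python
def match_features(src, tgt, compatible, weight):
    """Two-stage matching sketch: local candidate generation followed
    by a global pass that resolves ambiguities.

    compatible(s, t) -> bool : assumed constraint test (e.g. epipolar)
    weight(s, t) -> float    : assumed similarity score (higher = better)
    """
    # Local stage: generate correspondence hypotheses with weights
    hypotheses = {}
    for s in src:
        cands = [(t, weight(s, t)) for t in tgt if compatible(s, t)]
        if cands:
            hypotheses[s] = cands
    # Global stage: evaluate hypotheses -- keep the strongest candidate
    return {s: max(cands, key=lambda c: c[1])[0]
            for s, cands in hypotheses.items()}
```

Separating hypothesis generation from evaluation keeps the expensive consistency checks confined to the (usually short) candidate lists produced by the local stage.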
3.1 Image Correspondence
A series of attributes P_i in a series of images are said to be
corresponding, or homologous, if all P_i are projections of the
same physical entity.
3.1.1 The Trifocal Geometry
The correspondence between each image pair (i, j) can be
described by the fundamental matrix F_{ij}. Given two
corresponding points m_i and m_j in the two images, both in a
homogeneous coordinate system, the following relationship
exists (Faugeras, 2001):
m_j^T F_{ij} m_i = 0    (3)
If the position x_i of the camera nodal point of the first image turns to
the position x_j of the camera nodal point of the second image through a
3D rotation R and translation T, then the relationship between
x_i and x_j is expressed by:
x_j^T E x_i = 0    (4)
in which E is called the Essential matrix, being a function of R and
T (Mori et al., 2001). The two equations above are called the
epipolar equations. Considering the accuracy of the
measurements and the imperfections of the lenses, this formulation
should be modified as follows to compensate for the
uncertainties (Seitz and Kim, 2002):
m_j^T F_{ij} m_i \le \varepsilon    (5)
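A direct check of the relaxed epipolar constraint (5) might look as follows; the tolerance value eps is an illustrative assumption:

```python
import numpy as np

def satisfies_epipolar(m_i, m_j, F, eps=1e-2):
    """Check the relaxed epipolar constraint |m_j^T F m_i| <= eps (Eq. 5).

    m_i, m_j : homogeneous 2D points (3-vectors) in images i and j.
    F        : 3x3 fundamental matrix F_ij.
    eps      : tolerance absorbing measurement and lens uncertainties
               (assumed value).
    """
    m_i = np.asarray(m_i, dtype=np.float64)
    m_j = np.asarray(m_j, dtype=np.float64)
    return abs(m_j @ F @ m_i) <= eps
```

With eps = 0 this reduces to the exact constraint (3); a positive eps widens the epipolar line into a band, admitting noisy detections at the cost of more match candidates.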