$\gamma(h) = \frac{1}{2}E\left[\left(Z(x) - Z(x+h)\right)^{2}\right]$    (7)

The discrete format of (7), which is the experimental semivariogram, is

$\hat{\gamma}(h) = \frac{1}{2N(h)}\sum_{i=1}^{N(h)}\left[Z(x_{i}) - Z(x_{i}+h)\right]^{2}$    (8)
where h is the interval of sampling (the lag) and N(h) is the number of sample pairs separated by h. In geostatistics a theoretical model is usually fitted to the experimental semivariogram in order to carry out interpolation. In our case no model fitting is needed, because our new method works on the experimental semivariogram itself rather than on a fitted model curve, with the lags h taken as multiples of the sample spacing. Key parameters of a semivariogram are the sill, the range and the nugget: the sill is the level at which the curve flattens out, the range is the lag at which the sill is reached (the range of the spatial pattern), and the nugget corresponds to noise in the data and to variability at distances smaller than the sample spacing. Computed from equation (8), the experimental semivariogram is actually a vector, with one element corresponding to each lag h. Semivariograms calculated along different directions are not necessarily the same; when they are, the data are isotropic, otherwise anisotropic. The semivariograms of two images computed in the same direction are shown in Fig. 2; such two images are compared as described below.
Fig. 2 semivariograms of a standard image (solid lines) and a candidate image in the same direction
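As a concrete illustration of equation (8), the following minimal sketch computes the experimental semivariogram of a grey-level image along one direction. It assumes NumPy and grey-level input; the function name experimental_semivariogram and the representation of a direction as a unit pixel step are illustrative choices, not part of the original formulation.

```python
import numpy as np

def experimental_semivariogram(img, max_lag, direction=(0, 1)):
    """Experimental semivariogram of a grey-level image, equation (8):
    gamma_hat(h) = 1/(2 N(h)) * sum_i [Z(x_i) - Z(x_i + h)]^2
    for lags h = 1 .. max_lag along one direction (unit pixel step)."""
    img = np.asarray(img, dtype=float)
    dy, dx = direction                      # e.g. (0, 1) horizontal, (1, 0) vertical
    gammas = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        sy, sx = dy * h, dx * h             # pixel offset for this lag
        # pair every pixel Z(x_i) with its neighbour Z(x_i + h)
        a = img[: img.shape[0] - sy, : img.shape[1] - sx]
        b = img[sy:, sx:]
        diff = a - b
        gammas[h - 1] = 0.5 * np.mean(diff ** 2)   # 1/(2 N(h)) * sum of squares
    return gammas
```

The returned vector (gammas[h-1] approximating the value of (8) at lag h) is the directional experimental semivariogram of the kind compared in Fig. 2; max_lag must stay smaller than the image size along the chosen direction.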
Given a standard image, we call all the other images to be compared with it candidate images. We denote by OB the range of the standard image (its semivariogram is drawn with solid lines in Fig. 2) and by BA its sill, and let ∠AOB = θ. Elongating BA until it intersects the semivariogram curve of another image (a candidate image, drawn with dashed lines in Fig. 2), we denote the intersection point by C and let ∠COB = β (not shown in the figure). Then the bigger the difference between the two images (the standard image and the candidate), the bigger the difference between the angles θ and β. Let:

$p = \frac{\tan\theta}{\tan\beta} = \frac{BA}{BC}$    (9)
Then p is a dimensionless parameter. The more p deviates from 1, the bigger the difference between the standard image and the candidate; conversely, the closer p is to 1, the more similar the two images are. If p = 1, the standard image and the candidate are essentially the same. In this way the parameter p is a good candidate for measuring the similarity between two images. Compared with image similarity measures such as those derived from histogram intersection, this parameter has a strong ability to describe image structure differences, which is closely related to the properties of the semivariogram. If semivariograms are calculated in several orientations for both the standard image and the candidate, and a value of p is computed for each orientation, then the degree of similarity between the standard image and the candidate can be determined by a threshold range, for instance 0.7 < p < 1.3. Besides, this parameter has at least three merits:
(1) sensitivity to structure differences of a data set. In the spatial statistics literature there are many detailed discussions of how the semivariogram describes the structure of a data set. Generally speaking, the semivariogram indicates structure differences accurately and effectively, and the parameter p inherits this property;
(2) the worst-case complexity of calculating the semivariogram is O(N²), which is lower than that of any of the four similarity distances proposed by Di Gesu;
(3) even if the standard image and the candidate differ greatly in illumination conditions, differences still show up in the parameter p as long as there are structure differences; that is, the parameter is robust to illumination. This is because the calculation of the semivariogram is a procedure similar to a sliding average in one-dimensional space with a variable step h. Besides, the parameter p does not require the standard image and the candidate to have the same size, while all four similarity distances defined by Di Gesu (1999) require the standard and candidate images to have the same size.
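Returning to equation (9), the following sketch shows one way p could be computed from two directional experimental semivariograms. The range OB and sill BA of the standard image are estimated crudely as the first lag at which the curve levels off; the helper names estimate_range_and_sill, similarity_p and is_similar, as well as the tolerance used, are our own illustrative assumptions.

```python
import numpy as np

def estimate_range_and_sill(gamma, tol=0.05):
    """Crude estimate of the range OB (as a lag index) and the sill BA:
    the first lag where the curve stays within tol of its final level."""
    sill = gamma[-1]
    for lag, g in enumerate(gamma, start=1):
        if abs(g - sill) <= tol * abs(sill):
            return lag, sill
    return len(gamma), sill

def similarity_p(gamma_std, gamma_cand):
    """Equation (9): p = tan(theta)/tan(beta) = BA/BC, where BA is the sill of
    the standard image at its range OB and BC is the candidate semivariogram
    evaluated at the same lag OB (both curves computed over the same lags)."""
    ob, ba = estimate_range_and_sill(gamma_std)
    bc = gamma_cand[ob - 1]                 # candidate value at lag OB
    return ba / bc

def is_similar(gamma_std, gamma_cand, low=0.7, high=1.3):
    """Accept the candidate if p lies inside the threshold band, e.g. 0.7 < p < 1.3."""
    p = similarity_p(gamma_std, gamma_cand)
    return low < p < high
```

Since tan θ = BA/OB and tan β = BC/OB share the denominator OB, p reduces to the ratio BA/BC read off the two curves at the lag of the standard image's range.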
4. APPLICATION
This application comes from the railway department of China. Almost all large railway stations in China have dedicated workers who check the wheels of a train while it stops at the station, in order to see whether the brake system of the train works well. If there is any problem, they must report it to the related division of the station at once so that it is solved before the train leaves. The whole procedure is carried out manually, and a potential risk exists if a worker happens to miss one or two wheels whose brake systems have problems. Automatically checking the wheels is therefore a task of great value.
If a CCD camera is used to capture the train wheels as the train arrives and digital image processing techniques are adopted to recognize the brakes (shown in Fig. 3), then the whole procedure of brake system checking becomes automatic. If the CCD camera captures train wheels at 6 frames per second, more than 1,000 digital images will be captured. However, most of them are useless, since they are not pictures of train wheels. So the first step of automation is to pick out the useful images (the images that show train wheels) from all the images captured by the CCD camera.
Fig. 3 typical useful image of a train wheel (the brake of the wheel is indicated)
This is actually a problem of image retrieval. Because a train wheel has typical geometric features, recognizing these features with a shape-based image retrieval technique is the first choice. However, a train moves with non-uniform motion as it comes into the station, which leads to geometric distortions of these features. This makes curve extraction algorithms such as the Hough Transform far more computationally expensive than usual, to the point of being impractical. If area-based image matching between a standard image and a candidate is used instead, it is operational as far as computational complexity is concerned, but it is too sensitive to illumination conditions. Another situation that must be considered is motion blurring, as shown in Fig. 4. Existing approaches to motion image analysis lose their effectiveness because of the non-uniform motion of the train as it comes into the station. A new method must therefore be used to solve all these problems.
Fig. 4 typical image with motion blurring
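As a hedged illustration only, the sketch below shows how the parameter p of Section 3 could drive the frame-selection step described above: each captured frame is compared against a standard wheel image in a few directions and kept only if every directional p stays inside the 0.7 < p < 1.3 band. The directions, maximum lag and function names reuse the illustrative helpers sketched earlier and are not taken from the paper.

```python
# Builds on the illustrative helpers sketched earlier:
# experimental_semivariogram(img, max_lag, direction) and similarity_p(g_std, g_cand).
DIRECTIONS = [(0, 1), (1, 0)]   # assumed: horizontal and vertical lags
MAX_LAG = 50                    # assumed maximum lag in pixels

def select_wheel_frames(standard_img, frames, low=0.7, high=1.3):
    """Keep only the frames whose directional p values all fall inside (low, high)."""
    gammas_std = [experimental_semivariogram(standard_img, MAX_LAG, d)
                  for d in DIRECTIONS]
    useful = []
    for frame in frames:
        ps = [similarity_p(g_std, experimental_semivariogram(frame, MAX_LAG, d))
              for g_std, d in zip(gammas_std, DIRECTIONS)]
        if all(low < p < high for p in ps):     # threshold band from the text
            useful.append(frame)
    return useful
```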