Close-range imaging, long-range vision

pointing out that comparing a standard image against candidate images is a commonly used approach in content-based image retrieval. The basic concept of the semivariogram is given in section 3, along with an analysis of the characteristics of this parameter and of the possibility of using it to describe the similarity between two images. A case study is presented in section 4, showing the effectiveness for image retrieval of the approach described in section 2. Conclusions and final remarks are given in section 5.
2. IMAGE RETRIEVAL: 
THE CURRENT STATE-OF-THE-ART 
Recent years have seen a rapid increase in the size of digital image collections. Every day, both military and civilian equipment generates gigabytes of images. A huge amount of information is out there (Rui, Y. et al, 1999; Lew, M.S. et al, 1998; Berman, P. A. et al, 1999; Cha, G.H. et al, 1999). However, we
cannot access or make use of the information unless it is 
organized so as to allow efficient browsing, searching, and 
retrieval. Image retrieval has been a very active research area 
since the 1970s, with the thrust from two major research 
communities, database management and computer vision. These 
two research communities study image retrieval from different 
perspectives, one being text-based and the other visual based. 
Text-based image retrieval can be dated back to the late 1970s (Rui, Y. et al, 1999). A very popular framework at the time was to first annotate the images with text and then use text-based database management systems (DBMS) to perform image retrieval. Many advances have been made along this
research direction. However, there exist two major difficulties, 
especially when the size of image collections is large. One is the 
vast amount of labor required in manual image annotation. The 
other difficulty, which is more essential, results from the rich 
content in the images and the subjectivity of human perception. 
That is, different people may perceive the same image content differently. The perception subjectivity and annotation impreciseness may cause unrecoverable mismatches in the later retrieval process.
In the early 1990s, because of the emergence of large-scale image 
collections, the two difficulties faced by the manual annotation
approach became more and more acute. To overcome these 
difficulties, content-based image retrieval was proposed. In this 
image retrieval mechanism, the images are retrieved by their own visual content, such as color layout and texture, rather than being indexed by text-based keywords. From then on, many papers in this research area were published in computer vision related journals. A recent review paper was given by Rui, Y. et al (1999). Some content-based image retrieval prototype systems have emerged, among which the first and most famous is the Query By Image Content (QBIC) system developed by IBM.
Most of these systems support one or more of the following 
options: (1) random browsing; (2) search by example; (3) search 
by sketch; (4) search by text (including key word or speech); (5) 
navigation with customized image categories. 
Searching images by example actually amounts to comparing a standard image with candidate images: that is, retrieving from an image database those images that are similar to the standard one. The key to this process is to define a suitable measurement of image similarity. Measurements of similarity are also used to evaluate compression algorithms (Wilson et al, 1997).
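Search by example therefore reduces to ranking the database by some distance to the query. A minimal sketch of that ranking loop (the function names and the toy mean-absolute-difference distance are illustrative, not taken from any of the systems cited above):

```python
import numpy as np

def rank_by_similarity(query, database, distance):
    """Return database indices sorted from most to least similar to the query."""
    scores = [distance(query, img) for img in database]
    return np.argsort(scores)  # smallest distance first

# Toy example: three noisy copies of the query at different noise levels.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (8, 8)).astype(float)
database = [query + rng.normal(0, s, (8, 8)) for s in (50.0, 5.0, 20.0)]
mad = lambda a, b: float(np.mean(np.abs(a - b)))
print(rank_by_similarity(query, database, mad))  # least noisy copy ranks first
```

Any of the similarity measurements discussed below can be plugged in as `distance` without changing the loop.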
Image comparison is often performed by computing a correlation function, the root mean square error, or measurements of the signal-to-noise ratio (Di Gesu et al, 1999). The last approach
is applicable only if there is enough knowledge of the image 
content. In the case of binary images, the comparison problem is 
much simpler. If we operate on gray scale or color images, there 
are two basic means of comparison: (1) to extract some objects of 
interest by thresholding, segmentation, edge and shape detection, 
and then compare the objects; (2) to compare images as whole 
entities. The first method leads to high level image recognition, 
while the second leads to low level image analysis. 
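The three whole-image comparisons mentioned above can be sketched as follows (a minimal, generic formulation; the 8-bit peak value in the signal-to-noise measure is an assumption):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images of equal shape."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def correlation(a, b):
    """Normalized correlation coefficient in [-1, 1]."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming an 8-bit intensity range."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

a = np.array([[10., 20.], [30., 40.]])
b = a + 5.0  # uniform brightness shift
print(rmse(a, b), correlation(a, b))  # RMSE = 5.0, correlation = 1.0
```

Note that a uniform brightness shift leaves the correlation at exactly 1.0 while the RMSE is nonzero, which is one reason no single measure suffices on its own.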
Global features directly derived from gray levels (e.g. first and 
second order statistics, color) can give a coarse indication of 
image similarity. However, they may produce unstable 
indications, because quite different images may have similar 
histograms. On the other hand, structural features (e.g. edges, skeleton, medial axis, convex hull, object symmetry) are very sensitive to noise in the image. Di Gesu et al (1999) analyzed the
Image Distance Functions (IDFs) proposed by Russ (1989) and 
pointed out that distance functions seem to be more adequate for characterizing low-level similarity of images. They also proposed four hybrid IDFs, namely the Hausdorff-Based Distance, the Global Feature Based Distance, the Symmetry Based Distance and the Local Distance Based Function, and gave their analytical equations. Though global and local structural features of
an image can be characterized by these functions in some sense, 
it is far from enough to model structural features of the image. 
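The analytical forms of those four hybrid IDFs are not reproduced here; as an illustration of the ingredient behind the first of them, below is a minimal Hausdorff distance between two point sets extracted from binary images (the classical textbook formulation, not necessarily Di Gesu et al's exact variant):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two point sets of shape (n, 2) and (m, 2)."""
    # Pairwise Euclidean distances between every point of A and every point of B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Max over each set of the distance to the nearest point of the other set.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

A = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
B = np.array([[0, 0], [3, 0]], dtype=float)
print(hausdorff(A, B))  # -> 2.0, driven by B's outlier point (3, 0)
```

Because the measure is a maximum over nearest-neighbour distances, a single outlying point dominates it, which is exactly the noise sensitivity of structural features noted above.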
3. SEMIVARIOGRAM-BASED IMAGE SIMILARITY 
We need a strict mathematical definition of the semivariogram in order to extend it to define image similarity; this requires us to start with the concept of stationarity of a random process.
Definition (Mortensen, R. E., 1987): suppose a random process $X_t = \{X(t), t \in T\}$ has a finite second-order moment $E\{X(t)^2\} < \infty$, where $T$ represents a real-valued set in the spatial/time domain. If $X$ satisfies the following two conditions:

$E\{X(t)\} = m$  (1)

$E\{X(t)X(s)\} = B(t - s)$  (2)

then it is called a stationary random process. From equations (1) and (2), it is obvious that:

$C(t - s) = E\{(X(t) - m)(X(s) - m)\} = B(t - s) - m^2$  (3)

We call $C(t - s)$ the covariance function of $X_t$.
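Equation (3) can be checked numerically on a synthetic stationary sequence; a small sketch, in which white noise stands in for the process and the sample mean replaces $m$ so that the identity holds exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
x = 2.0 + rng.normal(size=200_000)  # stationary white noise with mean 2

m = x.mean()                 # sample estimate of m in equation (1)
B0 = np.mean(x * x)          # B(0) = E{X(t)^2}, equation (2) at t = s
C0 = np.mean((x - m) ** 2)   # C(0) = E{(X(t) - m)^2}
print(abs((B0 - m**2) - C0) < 1e-8)  # equation (3) at lag zero -> True
```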
From equation (1), we know that the mean value of $X_t$ is a constant; that is, it does not change with time or position. Meanwhile, the second-order statistical characteristic (the covariance function) between $X(t)$ and $X(s)$ depends only on the interval $(t - s)$ in the spatial/time domain, rather than on the position of $t$ or $s$. We usually call such a stationary random process a second-order stationary random process. Common theories of random processes only discuss second-order stationary random processes in the time domain.
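For a second-order stationary process, the semivariogram $\gamma(h) = \tfrac{1}{2}E\{[X(t+h) - X(t)]^2\}$ carries the same second-order information as the covariance function (it equals the variance minus the covariance at lag $h$). A minimal sketch of its empirical estimate for an image, along the horizontal axis only (the function name and lag convention are illustrative assumptions):

```python
import numpy as np

def semivariogram(img, max_lag):
    """Empirical semivariogram along the horizontal axis:
    gamma(h) = 0.5 * mean of (Z(x+h, y) - Z(x, y))^2 over all pixel pairs."""
    img = np.asarray(img, dtype=float)
    return np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2)
                     for h in range(1, max_lag + 1)])

# A horizontal gradient image: gamma grows quadratically with the lag.
img = np.tile(np.arange(8.0), (8, 1))
print(semivariogram(img, 3))  # gamma(h) = h^2 / 2 -> [0.5, 2.0, 4.5]
```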
In the spatial domain, however, the conditions of a second-order stationary random process usually cannot be satisfied. Some phenomena in the spatial domain show that if we compute the sample mean and variance over increasingly large domains, the