
Mapping without the sun
Zhang, Jixian

points using the Förstner operator, we extracted some initial points using the Roberts operator [10][11].
2.1 Extraction algorithm
1) Initial points extraction

Computing the image difference in the 4-neighborhood using Roberts's operator is equal to calculating the four absolute gray-difference values $d_1, d_2, d_3, d_4$:

$$d_1 = |g_{c,r} - g_{c+1,r}|,\quad d_2 = |g_{c,r} - g_{c,r+1}|,\quad d_3 = |g_{c,r} - g_{c-1,r}|,\quad d_4 = |g_{c,r} - g_{c,r-1}| \qquad (1)$$
Given a threshold $T$, the point $(c, r)$ is regarded as an initial point if $M = \mathrm{mid}\{d_1, d_2, d_3, d_4\} > T$.
The efficiency of the extraction is influenced by $T$: a low threshold increases the computational load, while a high one often leads to omissions and false points. Generally the threshold is set to 60 percent of the mean of the difference image, but this value should be adjusted according to the gray-level and geometric features of the remote sensing images: if feature point extraction is too slow, the threshold should be increased; otherwise it should be decreased.
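The extraction step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names are mine, and the default threshold applies the 60-percent-of-mean rule of thumb to the stacked difference images.

```python
import numpy as np

def initial_points(img, threshold=None):
    """Extract initial feature points via the four Roberts-style
    4-neighborhood differences of eq. (1). Border pixels are skipped."""
    g = img.astype(np.float64)
    c = g[1:-1, 1:-1]
    # Absolute gray differences to the four 4-neighbors.
    d1 = np.abs(c - g[1:-1, 2:])   # right neighbor
    d2 = np.abs(c - g[2:, 1:-1])   # lower neighbor
    d3 = np.abs(c - g[1:-1, :-2])  # left neighbor
    d4 = np.abs(c - g[:-2, 1:-1])  # upper neighbor
    stacked = np.stack([d1, d2, d3, d4])
    # M = mid{d1..d4}: the median of the four differences.
    m = np.median(stacked, axis=0)
    if threshold is None:
        # Rule of thumb from the text: 60% of the mean of the difference image
        # (here interpreted as the mean over all four difference images).
        threshold = 0.6 * stacked.mean()
    rows, cols = np.nonzero(m > threshold)
    return list(zip(rows + 1, cols + 1)), threshold
```

Using the median of the four differences (rather than the maximum) suppresses responses along one-dimensional edges, where only one or two neighbors differ strongly.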
2) Accurate extraction using Forstner operator
The covariance matrix $N$ and the roundness $q_{c,r}$ of the error ellipse in the 3×3 window around an initial point $(c, r)$ can be computed according to equation (2).
$$N = \begin{pmatrix} \sum g_x^2 & \sum g_x g_y \\ \sum g_x g_y & \sum g_y^2 \end{pmatrix}, \qquad q_{c,r} = \frac{4\,\mathrm{Det}\,N}{(\mathrm{tr}\,N)^2} \qquad (2)$$
where $(c, r)$ is the center of the window, $g_x, g_y$ are the differences in the x and y directions respectively, $\mathrm{Det}\,N$ is the determinant of $N$ and $\mathrm{tr}\,N$ is the trace of $N$.
For a threshold $T_q$: if $q_{c,r} > T_q$, $(c, r)$ is a candidate point; we then compute the weight value $w_{c,r} = \mathrm{Det}\,N / \mathrm{tr}\,N$ and select the extreme points in each grid as the feature points according to $w_{c,r}$. $T_q$ is an empirical value, generally set in the range from 0.45 to 0.7 [8][14].
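A sketch of the roundness and weight computation of equation (2) follows; the function name, the forward-difference gradients, and the row/column indexing convention are my assumptions rather than the paper's.

```python
import numpy as np

def forstner_scores(img, r, c):
    """Roundness q and weight w of the Forstner-style operator in the
    3x3 window around (row r, col c), following eq. (2)."""
    g = img.astype(np.float64)
    gx = np.diff(g, axis=1)  # differences in the x direction
    gy = np.diff(g, axis=0)  # differences in the y direction
    # Sum the gradient products over the 3x3 window centered on (r, c).
    sx = gx[r - 1:r + 2, c - 1:c + 2]
    sy = gy[r - 1:r + 2, c - 1:c + 2]
    n = np.array([[np.sum(sx * sx), np.sum(sx * sy)],
                  [np.sum(sx * sy), np.sum(sy * sy)]])
    det, tr = np.linalg.det(n), np.trace(n)
    q = 4.0 * det / tr**2 if tr else 0.0  # roundness of the error ellipse
    w = det / tr if tr else 0.0           # weight value w_{c,r}
    return q, w
```

Since $N$ is a Gram matrix, $q = 4\lambda_1\lambda_2/(\lambda_1+\lambda_2)^2$ always lies in $[0, 1]$, reaching 1 only when both eigenvalues are equal, i.e. for a perfectly round error ellipse.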
2.2 Uniform Control
In the process of feature point extraction, the feature points must be distributed evenly over the image rather than clustered in some local regions. In this paper, we adopt a grid control technique based on entropy to ensure an even distribution of points.
1) Divide the image evenly into small grid blocks, then compute the entropy of each block. Entropy measures the information content of the image and the presence of features; the information content of an image with many features is greater than that of one with few features [5][7].
$$E = -\sum_{m=0}^{k} p_m \log(p_m) \qquad (3)$$

where $k = 255$ and $p_m$ is the probability of gray level $m$ in the grid block.
2) The entropy values $E_{ij}$ of the grid blocks are sorted from large to small, and all blocks are divided into 3 levels, from high entropy to low, with a block-quantity ratio of 2:1:1.
3) In the first-level grid blocks, the features are extracted directly, because of their large information content and abundant features. These feature points may be too concentrated, since the first-level blocks themselves are not uniformly distributed over the image; but for registration precision the feature points need to be distributed evenly over the whole image.
4) If the feature points of the first-level grid blocks are not evenly distributed, we extract feature points from the remaining blocks in order, starting with the second level. This ensures that there are enough feature points in information-rich regions, while a few feature points are still obtained in regions with less information.
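Steps 1) and 2) above can be sketched as follows. The function names, the 8-bit gray-level assumption, and the rounding used for the 2:1:1 split are mine, not the paper's.

```python
import numpy as np

def block_entropy(block):
    """Entropy E = -sum_m p_m log(p_m) of an 8-bit grid block, eq. (3)."""
    hist = np.bincount(block.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()   # p_m: probability of gray level m in the block
    p = p[p > 0]            # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log(p)))

def level_blocks(entropies):
    """Sort block entropies from large to small and split the block indices
    into three levels with the 2:1:1 quantity ratio of step 2)."""
    order = np.argsort(entropies)[::-1]  # block indices, high entropy first
    n = len(order)
    c1, c2 = n // 2, n // 2 + n // 4     # half / quarter split (my rounding)
    return order[:c1], order[c1:c2], order[c2:]
```

A constant block has entropy 0, while a block using all 256 gray levels equally reaches the maximum $\log 256$, so sorting by $E$ ranks the blocks by feature richness.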
Given an image $f(x, y)$, the $(p+q)$-order moment is $m_{pq} = \sum_x \sum_y x^p y^q f(x, y)$. The $(p+q)$-order central moment is $u_{pq} = \sum_x \sum_y (x - x_0)^p (y - y_0)^q f(x, y)$, where $x_0 = m_{10}/m_{00}$, $y_0 = m_{01}/m_{00}$ are the coordinates of the gravity center of the image.
Normalizing the central moments by the 0-order central moment, we get the normalized central moments:

$$\eta_{pq} = u_{pq} / u_{00}^{r} \qquad (4)$$

where $r = (p+q)/2 + 1$ and $p + q = 2, 3$.
Using the 2nd- and 3rd-order moments, we can construct 7 invariant moments [3][9], which are invariant to translation, rotation and scale.
We can construct a feature description vector for an image from these 7 invariant moments. The Euclidean distance between two 64×64 image blocks whose center points are A and B respectively can be computed as below.
The similarity measurement is $\rho(A, B) = \dots$, where $\rho \in [0, 1]$ and $R$ is a constant.
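The descriptor and distance can be sketched as follows, using the standard Hu invariant-moment formulas built from the normalized central moments of equation (4); the function names are mine, and the similarity mapping $\rho$ is omitted since its exact form is not recoverable from the source.

```python
import numpy as np

def hu_vector(block):
    """Feature vector of the 7 Hu invariant moments of a grayscale block."""
    f = block.astype(np.float64)
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]
    m00 = f.sum()
    # Gravity center (x0, y0) = (m10/m00, m01/m00).
    x0, y0 = (x * f).sum() / m00, (y * f).sum() / m00
    dx, dy = x - x0, y - y0

    def eta(p, q):
        # Normalized central moment eta_pq = u_pq / u_00^((p+q)/2 + 1).
        return (dx**p * dy**q * f).sum() / m00**((p + q) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    a, b = n30 + n12, n21 + n03
    return np.array([
        n20 + n02,
        (n20 - n02)**2 + 4 * n11**2,
        (n30 - 3*n12)**2 + (3*n21 - n03)**2,
        a**2 + b**2,
        (n30 - 3*n12)*a*(a**2 - 3*b**2) + (3*n21 - n03)*b*(3*a**2 - b**2),
        (n20 - n02)*(a**2 - b**2) + 4*n11*a*b,
        (3*n21 - n03)*a*(a**2 - 3*b**2) - (n30 - 3*n12)*b*(3*a**2 - b**2),
    ])

def block_distance(block_a, block_b):
    """Euclidean distance between the 7-moment descriptors of two blocks."""
    return float(np.linalg.norm(hu_vector(block_a) - hu_vector(block_b)))
```

Because the Hu moments are rotation invariant, a block and its 90-degree rotation yield (up to floating-point error) identical descriptors, which is what makes this distance usable for matching across rotated images.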