
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B7. Beijing 2008 
Figure 1. A classical sequence for registration algorithms 
Registration algorithms rely on corresponding “control elements” to compute the parameters of the geometrical transformation that sets the correspondence between all the points of both images. These “control elements” are either landmarks or locations characterized by their neighborhood. Landmarks can be points, contours, line intersections, or regions that have to be extracted from the images beforehand: approaches using such landmarks are called “feature-based”. When the pixel neighborhood is used instead to perform the matching process, we speak of “area-based” approaches.
“Feature-based” methods try to identify corresponding landmarks in both images. In fact, this identification is not performed directly on the landmarks but on shape descriptor values (descriptions) that represent them. We only consider shape descriptors that remain invariant through any possible transformation from one image to the other. They must also have two important properties, uniqueness and stability: two different landmarks must be represented by two different descriptions (uniqueness), while slight changes to a landmark (because of noise, for example) must not change its description (stability). Finally, the landmark descriptions of both images are compared through a similarity measure that helps in pairing the landmarks.
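This pairing step can be sketched as a nearest-neighbour search in descriptor space; the descriptor values, the Euclidean similarity measure, and the rejection threshold below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def match_landmarks(desc_a, desc_b, max_dist=0.2):
    """Pair landmarks of image A with landmarks of image B by
    nearest-neighbour search in descriptor space.

    desc_a, desc_b : (n, d) arrays of invariant shape descriptors.
    max_dist       : rejection threshold -- a pair is kept only if the
                     descriptor distance stays below it.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    pairs = []
    for i, da in enumerate(desc_a):
        # Euclidean distance plays the role of the similarity measure
        dists = np.linalg.norm(desc_b - da, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            pairs.append((i, j))
    return pairs

# Toy example: the second image's descriptors are slightly perturbed by
# noise, so the stability property is what makes the pairing succeed.
a = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
b = a + 0.01
print(match_landmarks(a, b))  # each landmark pairs with its counterpart
```

Uniqueness is what keeps the `argmin` unambiguous; with two near-identical descriptions in `desc_b`, the nearest-neighbour rule could pair the wrong landmarks.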
“Area-based” methods use a similarity measure to identify two areas that are considered as neighborhoods of corresponding pixels. A criterion depending on this similarity measure must then be defined, whose optimization provides the registration transformation.
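One minimal sketch of such a criterion is normalized cross-correlation maximized over an exhaustive search of translations; the images, window size, and search range below are hypothetical, and a real registration would optimize a richer transformation than a pure shift:

```python
import numpy as np

def ncc(window_a, window_b):
    """Normalized cross-correlation between two same-sized windows:
    1.0 for identical patterns, near 0 for unrelated ones."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_shift(image, template, search=5):
    """Slide `template` over `image` and keep the offset that maximizes
    the similarity criterion (here NCC): the optimization step."""
    h, w = template.shape
    best, best_off = -2.0, (0, 0)
    for dy in range(search):
        for dx in range(search):
            score = ncc(image[dy:dy + h, dx:dx + w], template)
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tmpl = img[3:13, 2:12]        # patch extracted at offset (3, 2)
print(best_shift(img, tmpl))  # the search recovers the offset (3, 2)
```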
Whatever approach we decide to use, we need to make choices (landmarks, similarity measure, etc.) that take into account the way SAR images are built, but also the noise that is a characteristic feature of these images. In any case, the transformation model has to be guided by three considerations: the geometrical deformation during the acquisition process, the required precision of the registration, and the intended use of the result. Such transformations can account for global deformations, local deformations, or both.
2.2 Specificity of Spaceborne Platform Radar Systems
2.2.1 Finding the position when acquisition is performed from a spatial platform: We need a rough approximation of the geographic parameters of the two images (i.e. location and orientation of the corresponding areas) in order to initialize the registration process efficiently. This information could be derived from the sensor position parameters, but the orbit and orientation of the platforms that carry these sensors can be modified by several external effects (Arbinger and D’Amico, 2004), for example:
- Earth gravity field irregularities and sun/moon interactions
- atmospheric friction, for satellites whose height is between 300 and 800 km
- photonic pressure
Orbit and orientation parameters are not captured continuously but estimated from key positions, and this estimation takes quite a long time: from a day to several weeks depending on the required precision (Wessel et al., 2007). In addition, even if we had all the metadata needed to compute such parameters, we would not have enough information on the sensor functioning to exploit these metadata efficiently (Eastman et al., 2007). We can therefore say that knowing these external parameters alone is not sufficient to solve the registration problem between two Radar images. From now on, we will assume that the two images globally represent the same area but are not precisely registered.
2.2.2 Geometrical deformations: A main drawback of SAR systems is that they introduce geometrical deformations in the image (Lillesand et al., 2004), which result from the way points are sorted by their distance to the antenna. The ground position of a point can be wrong, slightly or severely, because it has been “seen” as shifted toward the antenna. This kind of error is more significant when the ground height is not uniform: depending on the height variations, the target density (a point on the ground is a target for the Radar) can increase and generate artificially bright areas in the image (“highlighting” effect), or it can decrease and create abnormally dark areas (“lowlighting” effect). When the height variations are very large, even the target ordering may be wrong (“layover” effect) and some targets may disappear because they are masked (“shadowing” effect).
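The sorting-by-distance mechanism behind layover can be illustrated with a toy flat-Earth model; the altitude, ground ranges, and relief height below are hypothetical values chosen only to make the inversion visible:

```python
import math

def slant_range(sensor_height, ground_range, target_height):
    """Distance from the antenna to a target. The SAR orders targets by
    this value, not by their true ground position (toy flat-Earth model)."""
    return math.hypot(sensor_height - target_height, ground_range)

H = 500_000.0  # assumed sensor altitude in metres (hypothetical)

# A steep relief: the summit lies further away in ground range but is
# high enough that its slant range is SMALLER than that of the foot,
# so the summit is imaged first -- the "layover" inversion.
foot = slant_range(H, 20_000.0, 0.0)
summit = slant_range(H, 20_500.0, 2_000.0)
print(summit < foot)  # True: the ground order is inverted in the image
```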
Other deformations have to be taken into account (Richards, 2006): they are related to the Earth curvature and to the width of the viewing angle. In such cases, the areas scanned far from the nadir direction are strongly stretched: this results in a significant non-homogeneity of the pixel distribution, in particular along the image boundaries.
2.2.3 Radar image radiometry: Interpreting images from Radar systems is very difficult because of the complexity of the processes involved in their generation (Rees, 2001). The signal intensity for each point - or target - is encoded as a grey level in the resulting image and depends on the way the Radar wave interacts with the target. This interaction relies on both sensor features and target features. The sensor features are its wavelength, its polarization and its viewing angle. The target features are its roughness, its dielectric characteristic (which is high for metallic elements and correlated to the moisture content), and its shape and orientation.
All these parameters mutually interact, and thus it is very difficult to know exactly what their individual contributions to the returned signal are. For example, the target roughness parameter depends on the target itself, but also on the incidence angle and on the wavelength used to illuminate the target. Several mechanisms related to the reflection and backscattering processes are involved in the image generation: specular reflection, diffusion, corner reflection, and volume diffusion.
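How roughness depends on the wavelength and the incidence angle can be illustrated with the classical Rayleigh smoothness criterion; the band wavelengths and the 1 cm relief below are illustrative values, and the criterion itself is a standard approximation rather than a formula from this paper:

```python
import math

def appears_smooth(height_std, wavelength, incidence_deg):
    """Rayleigh criterion: a surface acts as a specular (smooth) reflector
    for the Radar when its height variation stays below
    wavelength / (8 * cos(incidence angle))."""
    theta = math.radians(incidence_deg)
    return height_std < wavelength / (8.0 * math.cos(theta))

# The same 1 cm surface relief is "rough" at a short wavelength but
# "smooth" at a long one, for the same 30-degree incidence angle.
print(appears_smooth(0.01, 0.031, 30.0))  # X-band (3.1 cm): False -> rough
print(appears_smooth(0.01, 0.235, 30.0))  # L-band (23.5 cm): True -> smooth
```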