The image representation adopted in this paper is based on a theory called the "Retinex Wavelet" theory (E. H. Land, 1971, 1977; D. J. Jobson, 1997; D. H. Brainard, 1986). The mathematical representation model is constructed in the Retinex Wavelet domain, and relevant fast algorithms are designed for image quality related processing, including image dodging and image restoration. These will be discussed in Sections 2.1 to 2.3.
2.1 Image Representation and Analysis Based on Retinex
Wavelet Theory
The basic assumption of the Retinex Wavelet image representation is that a quality-degraded image consists of two parts. One is the constant part, which is the true imaging of the object scene; the other is the variance part, which carries the noise and distortion. Image quality processing should therefore be applied only to the variance part. This assumption can be understood through different imaging models for different image quality related processing aims, such as image enhancement and image restoration.
For image restoration, the assumption comes from the imaging theorem of photography. The imaging physics of any optical device is the same and satisfies the imaging theorem shown in Figure 3 (S. X. Zhang, 1994), where a is the object (scene) distance, b is the image distance and f is the focal length; A'B' is the image of the scene object AB, and c is the diameter of the imaging blur circle. According to this imaging course and the imaging theorem, objects at different scene distances strictly require different focus settings to be imaged sharply, just as the automatic focusing function of HSV ensures that the object of interest is placed at the best focal distance. For an imaging system, however, the focal length of the optical sensor is fixed during the imaging course, so a suitable super focal (hyperfocal) setting is used to obtain a large depth of field with relatively clear imagery.
The super focal (hyperfocal) setting that yields a large depth of field is based on an estimate of the imaging blur circle: a circle whose diameter is below a certain small value can be taken as a point at the sensitive resolution of HSV, which means that blurred imagery inside such a blur circle can be treated as clear imagery. Objects within the resulting depth of field therefore obtain clear imagery under this blur-circle estimate. Even so, the final imagery of a large scene range, especially over mountainous terrain, contains only one stable part that is strictly in focus, while the remaining part would have to be refocused to obtain clear imagery. Thus, image restoration should be applied only to the imagery that lies off the focal distance, whereas existing image restoration algorithms operate on the whole image. In principle, restoration based on the whole image damages the object imagery at the focal distance, so such quality related processing should be carried out on a two-part imagery representation. Once the
imaging sensor and the imaging state are fixed, the clear imagery part is stable; we call it the constant imagery part.
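The blur-circle reasoning above can be made concrete with the standard thin-lens hyperfocal-distance and depth-of-field relations. The sketch below is only illustrative: the formulas are textbook optics rather than the paper's algorithm, and the focal length, f-number and blur-circle diameter in the example are hypothetical values.

def hyperfocal_distance(f, N, c):
    """Hyperfocal distance H for focal length f, f-number N and
    acceptable blur-circle diameter c (all in the same units, e.g. mm)."""
    return f * f / (N * c) + f

def depth_of_field(f, N, c, s):
    """Near and far limits of acceptable sharpness when focused at distance s;
    objects inside this range image within the assumed blur circle."""
    H = hyperfocal_distance(f, N, c)
    near = H * s / (H + (s - f))
    far = float("inf") if s >= H else H * s / (H - (s - f))
    return near, far

# Example: a 50 mm lens at f/8 with a 0.03 mm blur circle, focused at 10 m.
near, far = depth_of_field(f=50.0, N=8.0, c=0.03, s=10000.0)
print(near / 1000.0, far / 1000.0)   # depth-of-field limits in metres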
For image enhancement, the assumption comes from the illumination model shown in Figure 4 (B. K. P. Horn, 1974). In Figure 4, R is the light source and A, B are objects in the scene. The lightness of object A is composed of the illumination I_R from the light source and the reflectance I_B from the other object B, which can be written as equation (3):
E_A = I_R + I_B    (3)
Figure 4. Illumination Model (light source R, view point, objects A and B)
Thus the final imagery can be decomposed into two images: an illumination image part and a reflectance image part. Once the light source and its relationship to the objects are fixed, the illumination image part can also be taken as the constant imagery part. Image brightness and colour processing should therefore treat these two images with different operators, which is not considered by common image enhancement algorithms.
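As a rough illustration of treating the two parts with different operators, the sketch below estimates the illumination image with a Gaussian low-pass filter (a common retinex-style assumption, not necessarily the paper's estimator), applies a gamma adjustment to the illumination part only, and leaves the reflectance part untouched before recombining.

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_two_part(G, sigma=30.0, gamma=0.6, eps=1e-6):
    """Brightness enhancement of a greyscale image G that acts only on the
    illumination part; the Gaussian illumination estimate and the gamma
    value are illustrative assumptions, not the paper's operators."""
    G = G.astype(np.float64)
    L = gaussian_filter(G, sigma) + eps          # illumination image part
    R = G / L                                    # reflectance image part
    L_adj = L.max() * (L / L.max()) ** gamma     # operator applied to L only
    return np.clip(R * L_adj, 0.0, 255.0)        # recombine the two parts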
Whether through the imaging theorem or the illumination model, the image is constructed from a well-imaged part and an ill-imaged part. This can be expressed as the decomposition model of equation (4):

G(x, y) = R(x, y) · L(x, y)    (4)
where G(x, y) is the degraded image produced by the imaging model, L(x, y) is the constant imagery part and R(x, y) is the variance imagery part. Under this representation model, image processing algorithms should apply different operators to the constant image and to the variance (ill) image. The benefit of such a decomposition is that image quality can be improved in the ill-posed image part alone without damaging the constant image part. Converting the representation model to the logarithmic domain by g(x, y) = log G(x, y), l(x, y) = log L(x, y) and r(x, y) = log R(x, y), we obtain:
g(x, y) = l(x, y) + r(x, y)    (5)
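A minimal numerical check of equations (4) and (5) on synthetic data; the arrays L and R below are hypothetical illumination and reflectance images, not outputs of the paper's estimator.

import numpy as np

rng = np.random.default_rng(0)
L = 0.2 + 0.8 * np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # constant part
R = 0.5 + 0.5 * rng.random((64, 64))                          # variance part
G = R * L                                                     # equation (4)

g, l, r = np.log(G), np.log(L), np.log(R)
assert np.allclose(g, l + r)                                  # equation (5)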
This step is motivated both mathematically, since additions are preferable to multiplications, and physiologically, in view of the sensitivity of the human visual system. The multi-resolution analysis can then be applied to r(x, y) alone to obtain a higher-precision image representation, as shown in equation (6):

g(x, y) = l(x, y) + Σ_{j,k} d_{j,k} Ψ_{j,k}(x, y)    (6)
where Ψ_{j,k}(x, y) is the two-dimensional basis function of the wavelet transform and d_{j,k} are the corresponding wavelet coefficients of r(x, y).
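A minimal sketch of equation (6) in code, assuming the PyWavelets library; the wavelet family and decomposition depth are arbitrary illustrative choices. The analysis is applied to the variance part r(x, y) only, while the constant part l(x, y) is left untouched.

import numpy as np
import pywt

def retinex_wavelet_representation(r, wavelet="db2", level=3):
    """Multi-resolution analysis of the variance part r(x, y) only, in the
    spirit of equation (6); coeffs[0] is the coarse approximation and
    coeffs[1:] hold the detail coefficients d_{j,k} at each scale."""
    return pywt.wavedec2(np.asarray(r, dtype=np.float64), wavelet, level=level)

def reconstruct_variance_part(coeffs, wavelet="db2"):
    """Rebuild r(x, y) from its (possibly processed) wavelet coefficients."""
    return pywt.waverec2(coeffs, wavelet)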
This is the image representation model based on the Retinex Wavelet theory. Compared with the traditional imaging model (equation (1)), this model is closer to the imaging course and the focusing mechanism of HSV, and image processing algorithms based on it will be superior to those built on the common imaging model. Thus, the image