As introduced above, in order to fuse the point cloud with the images, the points must first be interpolated into raster format. Interpolation here can be regarded as resampling. Several approaches, such as Inverse Distance Weighted (IDW), Spline, and Kriging, can be used. In this paper, we use the Inverse Distance Weighted method to interpolate the vector points into a raster.
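For concreteness, the following is a minimal sketch of IDW gridding of scattered points onto a regular raster; the function name, the neighbourhood size and the distance exponent are illustrative assumptions, not parameters reported in this paper.

    import numpy as np

    def idw_rasterize(xs, ys, values, cell_size, power=2.0, k=8):
        # Interpolate scattered point attributes (e.g. Z or Intensity) onto a
        # grid whose extent is the bounding box of the points.
        x0, x1 = xs.min(), xs.max()
        y0, y1 = ys.min(), ys.max()
        cols = int(np.ceil((x1 - x0) / cell_size))
        rows = int(np.ceil((y1 - y0) / cell_size))
        raster = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                # cell centre in ground coordinates; row 0 is the northern edge
                cx = x0 + (c + 0.5) * cell_size
                cy = y1 - (r + 0.5) * cell_size
                d = np.hypot(xs - cx, ys - cy)
                nearest = np.argsort(d)[:k]               # k closest points
                w = 1.0 / np.maximum(d[nearest], 1e-6) ** power
                raster[r, c] = np.sum(w * values[nearest]) / np.sum(w)
        return raster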
As mentioned in Section 2.2, the point data contain both the Z coordinate and the intensity. Both values reflect information about the objects, so during interpolation the Z and Intensity values are each used as the key value. Fig.3 and Fig.4 show the results of interpolation by the Z and intensity values, respectively.
Fig.3 Raster Image by Interpolation of Z
Fig.4 Raster Image by Interpolation of Intensity
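A hypothetical call producing the two rasters of Fig.3 and Fig.4 would grid the same points once by Z and once by Intensity; the variable names and the 1 m cell size below are assumptions for illustration only.

    # xs, ys, z_values, intensity_values: point coordinates and attributes
    # from Section 2.2 (illustrative names).
    z_raster         = idw_rasterize(xs, ys, z_values, cell_size=1.0)
    intensity_raster = idw_rasterize(xs, ys, intensity_values, cell_size=1.0)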
Owing to uncertainty factors and differing spectral characteristics during data acquisition, the intensity values are not as regular as the Z values; even within a single building, the intensity varies sharply. Thus, Fig.3 is smoother and flatter than Fig.4. The buildings and the hills that rise above the terrain are quite obvious in Fig.3, whereas the buildings cannot be recognized in Fig.4. However, the roads and the vegetation are clearer and more vivid in Fig.4 than in Fig.3.
3. DATA FUSION
3.1 Raster Fusion between Image and Point Cloud
IHS fusion
The IHS colour space decomposes colour into Intensity, Hue and Saturation. To distinguish the intensity of the IHS colour space from the intensity value of the points, we use I to denote the intensity of the IHS colour space and Intensity to denote the intensity value of the point cloud. Hue refers to pure colour, saturation to the degree of colour contrast, and intensity to colour brightness. Modeled on how human beings perceive colour, this colour space is considered more intuitive than RGB; it can be compared to the dials on an old television set that help viewers adjust the set's colour.
To analyze and process images in colour, machine vision
systems typically use data from either the RGB or HSI colour
spaces, depending on a given task's complexity. For example,
in simple applications such as verifying that a red LED on a
mobile phone is indeed red and not green, a vision system can
use data from R, G and B signals to perform the operation.
With more complex applications, however, such as sorting
pharmaceutical tablets of subtly different colours, a vision
system may require hue, saturation and intensity information to
perform the operation.
IHS fusion is based on the conversion between the IHS and RGB colour spaces. It is useful for fusing multi-spectral images with panchromatic images. During the IHS fusion process, the RGB values of the multi-spectral image are converted to IHS values for each pixel. Since a panchromatic image has only a grey value, that grey value is taken as the R, G and B values, so the panchromatic image can also be converted to the IHS colour space. The I values of the multi-spectral image can then be replaced by those of another image, such as the panchromatic image. The fused image is obtained after the inverse transform from the IHS colour space back to RGB.
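The substitution step described above can be sketched as follows. We use one common linear IHS transform; the paper does not state which variant was applied, and the function and variable names are our own.

    import numpy as np

    # Forward linear IHS transform: RGB -> (I, v1, v2); hue and saturation
    # are carried implicitly by v1 and v2.
    M = np.array([[ 1.0 / 3.0,          1.0 / 3.0,          1.0 / 3.0       ],
                  [-np.sqrt(2) / 6.0,  -np.sqrt(2) / 6.0,   np.sqrt(2) / 3.0],
                  [ 1.0 / np.sqrt(2),  -1.0 / np.sqrt(2),   0.0             ]])
    M_inv = np.linalg.inv(M)   # inverse transform back to RGB

    def ihs_fusion(rgb, substitute):
        # rgb:        (rows, cols, 3) array scaled to [0, 1]
        # substitute: (rows, cols) array in [0, 1], resampled to the image grid
        pixels = rgb.reshape(-1, 3).T               # 3 x N column vectors
        ihs = M @ pixels                            # forward transform
        ihs[0, :] = substitute.reshape(-1)          # replace the I channel
        fused = (M_inv @ ihs).T.reshape(rgb.shape)  # inverse transform
        return np.clip(fused, 0.0, 1.0)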
The first fusion experiment combines the ortho-photomap with the raster interpolated from the Z values. Here, the I values of the ortho-photomap are replaced by the grey values of the interpolation result. Fig.5 shows the result of the first experiment.
The second experiment combines the ortho-photomap with the raster interpolated from the Intensity values. Here, the I values of the ortho-photomap are replaced by the grey values of the interpolation result. Fig.6 shows the result of the second experiment.
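In terms of the sketch above, the two experiments amount to calling the fusion routine once with the Z raster and once with the Intensity raster; the normalisation to [0, 1] and the variable names are assumptions for illustration.

    # Hypothetical usage mirroring the two experiments (Fig.5 and Fig.6);
    # both rasters are assumed to be resampled to the ortho-photomap grid.
    fused_z         = ihs_fusion(ortho_rgb, z_raster / z_raster.max())
    fused_intensity = ihs_fusion(ortho_rgb, intensity_raster / intensity_raster.max())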