down details such as edges, or to reduce or eliminate noise patterns.
Figure 7. Detail of the original images 
The algorithm we have developed, which we have named DETECAM, is based on filtering in the frequency domain by means of the Fast Fourier Transform (FFT). Thanks to the separability property of the transform, the computational cost for two-dimensional data (images) is reduced: two one-dimensional transforms are carried out for each image, one along the rows and another along the columns. When the transform spectrum is represented, the highest frequencies occupy the corners, coinciding with image zones of high contrast. If these zones of the new image's spectrum are substituted into the old image's spectrum and the inverse FFT is applied, an image is obtained on which the edges of the changes appear lightly marked on the old image, both images having been previously equalized.
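As an illustration, the following NumPy sketch reproduces this step under our own conventions: the inputs are assumed to be equalized greyscale arrays, and the function names and the frac parameter are illustrative rather than taken from DETECAM.

```python
import numpy as np

def separable_fft2(img):
    # 2-D FFT computed as two passes of 1-D FFTs (separability):
    # first along the rows, then along the columns.
    return np.fft.fft(np.fft.fft(img, axis=1), axis=0)

def mark_change_edges(old_eq, new_eq, frac=0.15):
    # old_eq, new_eq: previously equalized greyscale arrays of equal
    # shape. frac is a hypothetical parameter: the fraction of each
    # axis treated as a high-frequency "corner" of the shifted spectrum.
    f_old = np.fft.fftshift(separable_fft2(old_eq.astype(float)))
    f_new = np.fft.fftshift(separable_fft2(new_eq.astype(float)))
    rows, cols = old_eq.shape
    r, c = int(rows * frac), int(cols * frac)
    # Substitute the four high-frequency corners of the new image's
    # spectrum into the old image's spectrum.
    for rs in (slice(0, r), slice(rows - r, rows)):
        for cs in (slice(0, c), slice(cols - c, cols)):
            f_old[rs, cs] = f_new[rs, cs]
    # Inverse FFT: the edges of the changes appear lightly marked
    # on the old image.
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_old)))
```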
The next step is to correlate the old image with the new image, the latter having been modified in the previous step. To do this, we resort to a QuadTree algorithm: wide tiles are correlated first and are progressively subdivided wherever changes are found, until the pixel level is reached. The result of this correlation is a binary image showing the changes between the two images in black over a white background.
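A possible Python sketch of this stage is given below; the paper does not specify the correlation measure, so the Pearson coefficient is assumed, and threshold and min_size are hypothetical parameters.

```python
import numpy as np

def quadtree_changes(old, new, threshold=0.8, min_size=1):
    # Returns a boolean mask, True where a change is detected
    # (rendered black over a white background afterwards).
    mask = np.zeros(old.shape, dtype=bool)

    def correlate(a, b):
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        if a.std() == 0 or b.std() == 0:
            return 1.0 if np.allclose(a, b) else 0.0
        return np.corrcoef(a, b)[0, 1]

    def visit(r0, r1, c0, c1):
        if correlate(old[r0:r1, c0:c1], new[r0:r1, c0:c1]) >= threshold:
            return                        # tile unchanged: stop here
        if (r1 - r0) <= min_size and (c1 - c0) <= min_size:
            mask[r0:r1, c0:c1] = True     # pixel level: mark the change
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for rr in ((r0, rm), (rm, r1)):   # subdivide into four sub-tiles
            for cc in ((c0, cm), (cm, c1)):
                if rr[0] < rr[1] and cc[0] < cc[1]:
                    visit(rr[0], rr[1], cc[0], cc[1])

    visit(0, old.shape[0], 0, old.shape[1])
    return mask
```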
Inevitably, this resulting image contains salt-and-pepper noise. To remove it, we resort to a filter we have named the “direct occurrence filter”, for which a window size and a threshold are set. For each window position as the image is traversed, the number of white pixels is counted; if this number is greater than the defined threshold, the central pixel becomes white. In addition, a white pixel surrounded by black pixels within a one-pixel range becomes black. The image is then smoothed by applying a mode filter.
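The following sketch gives one possible reading of the direct occurrence filter and the mode filter, using scipy.ndimage; the window size and threshold defaults are ours, not the paper's. Here True stands for white (no change) and False for black (change).

```python
import numpy as np
from scipy.ndimage import convolve

def direct_occurrence_filter(binary, window=5, threshold=12):
    # binary: True = white (no change), False = black (change).
    # The window size and threshold defaults are hypothetical.
    kernel = np.ones((window, window), dtype=int)
    white_count = convolve(binary.astype(int), kernel, mode="constant")
    out = binary.copy()
    out[white_count > threshold] = True   # enough white pixels around
    # A white pixel surrounded by black within a one-pixel range
    # becomes black.
    ring = np.ones((3, 3), dtype=int)
    ring[1, 1] = 0
    neighbours = convolve(binary.astype(int), ring, mode="constant")
    out[binary & (neighbours == 0)] = False
    return out

def mode_filter(binary, window=5):
    # Mode (majority) filter for a binary image.
    counts = convolve(binary.astype(int),
                      np.ones((window, window), dtype=int),
                      mode="constant")
    return counts > (window * window) // 2
```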
The next step consists of combining the binary image obtained with the modern image according to the following principle: if a pixel is black in the binary image, it is replaced by the RGB value of the corresponding pixel of the modern image; if the pixel of the binary image is white, the resulting pixel remains white. Finally, an image is obtained that carries the RGB attributes of the modern image only at the points where changes were detected; roads, buildings, etc. appear.
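With the same binary convention (True = white), this combination reduces to a masked copy; a minimal sketch, with a function name of our choosing:

```python
import numpy as np

def combine_with_modern(binary, modern_rgb):
    # binary: True = white (no change); modern_rgb: H x W x 3 image.
    # Black mask pixels take the RGB value of the modern image;
    # white mask pixels stay white in the result.
    out = np.full(modern_rgb.shape, 255, dtype=modern_rgb.dtype)
    out[~binary] = modern_rgb[~binary]
    return out
```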
Next, a Mahalanobis classification is carried out. On the image obtained, a sample area is selected - e.g. a road - with ~10 points, and an image is produced containing only the changes that resemble the established sample area according to the Mahalanobis distance. As in previous steps, the result is binarized: pixels that statistically resemble those of the sample area are made black, and the remainder white.
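A sketch of this classification in NumPy follows; the max_dist threshold is a hypothetical parameter, and a pseudo-inverse is used because a covariance estimated from ~10 samples may be singular.

```python
import numpy as np

def mahalanobis_mask(rgb, samples, max_dist=3.0):
    # rgb: H x W x 3 change image; samples: (N, 3) RGB values taken
    # from the sample area (e.g. ~10 points on a road). max_dist is
    # a hypothetical threshold expressed in Mahalanobis units.
    mu = samples.mean(axis=0)
    # Pseudo-inverse guards against a singular covariance matrix.
    cov_inv = np.linalg.pinv(np.cov(samples, rowvar=False))
    diff = rgb.reshape(-1, 3).astype(float) - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distance
    black = (d2 <= max_dist ** 2).reshape(rgb.shape[:2])
    return ~black   # True = white (dissimilar), False = black (similar)
```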
In the end we obtain an image on which the road - or any other feature of interest - appears, together with some noise that is removed by means of an inverse occurrence filter. In this case a window size and a threshold are also set; however, the window is now superimposed on each pixel and, if the number of black pixels within it exceeds the threshold, the pixel is set to black. As with the direct occurrence filter, a 5x5 window mode filter is then applied. The result is a noiseless binary image on which the changes between the two source images appear. This is the image to be vectorized.
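Under the same conventions, the inverse occurrence filter is the dual of the direct one; the defaults below are again ours. The 5x5 mode filter applied afterwards can reuse the mode_filter sketched for the direct occurrence filter.

```python
import numpy as np
from scipy.ndimage import convolve

def inverse_occurrence_filter(binary, window=5, threshold=12):
    # binary: True = white, False = black (feature of interest).
    # If the number of black pixels under the window exceeds the
    # threshold, the central pixel is set to black.
    black_count = convolve((~binary).astype(int),
                           np.ones((window, window), dtype=int),
                           mode="constant")
    out = binary.copy()
    out[black_count > threshold] = False
    return out
```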
7. CONCLUSIONS 
• Photogrammetric production may be improved in terms of time and number of correlated points if LIDAR editing software is available (tests have been carried out with good results).
• The geometry of the DTM obtained from the LIDAR point cloud is improved if breaklines from photogrammetry are included.
• Images derived from the LIDAR intensity level do not have good definition (they come from the visualization of point clouds). For the production of orthoimages it is essential to start from digital images. We are studying the generation of multi-layer images that would incorporate the LIDAR intensity as an additional channel (an image resampled from the LIDAR intensity).
• The fusion of data coming from different sensors opens new possibilities for the filtering and classification of information, giving rise to possible new products.
• Photogrammetry appears to be a good tool for verifying the data provided by LIDAR.
ACKNOWLEDGEMENTS 
This is a publication of the LIDAR project (Integration and optimization of LIDAR and photogrammetric technologies and methodologies for cartographic production) and the DETECAM project (Change detection from high-resolution SPOT imagery: analysis of methodologies, development of models and algorithms). The projects are carried out by a large group of scientists and engineers from the Technical University of Madrid (UPM), Department of Topographic and Cartographic Engineering, and its partners from the National Geographic Institute (IGN). Funding and part of the technical staff are provided by the National Geographic Institute (IGN), while the rest of the staff is provided by the Technical University of Madrid (UPM). We are grateful to F. Papi and E. González for their work in