
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B1. Beijing 2008
Criterion | Photogrammetry | Punctual LIDAR
Data coverage | | Higher potential for longitudinal and transverse data coverage
Flight constraints | Daylight flying and a clean atmosphere necessary | Less impact of time of day (daylight/night), season and clouds
Production range | Higher need of editing control | May be automated, thus a greater production
Budget | Photogrammetric compilation: 25%-33% of budget |
Production software | Software for the end-user: slow process of identification and manual extraction; not reliable if automated, implies editing, especially at large scales | Software depends on qualified commercial and technical people
Data acquisition | Limited by a largely contrasted area; difficult and expensive | Data can be acquired; successfully used in coastal cartographic production
Processing | Groups, correlation, feature extraction | Definition of zones or areas, edge limits in 2-D
Results | Edges or limits in 3-D | Edges and zones in 3-D
Table 1. Comparison of photogrammetry vs punctual LIDAR 
From the tests and trials carried out in the Photogrammetry Laboratory of the Technical School of Surveying, Geodesy and Cartography of the Technical University of Madrid (Spain) with both techniques, using information provided by the National Geographic Institute (IGN) of Spain for the same zone (Segovia), flown over with a Vexcel UltraCam D digital camera and a LIDAR sensor and processed with the DIGI3D/MDTop photogrammetric software and with GTBiberica (Inpho DTMaster) for the LIDAR information, we have reached the following conclusions:
• The classical photogrammetric technique is very reliable and accurate, but the production process is time-consuming, highly specialized, and thus costly.
• If we consider semi-automated processes, i.e. automatic correlation with breaklines drawn by an operator, production times improve and even the staff may be less specialized. However, the software, and even the quality of the images, make this methodology slow, since it involves much editing of the correlated points. On the other hand, if we want to decrease the number of correlated points (a greater sampling interval), we must increase the number of breaklines, which brings us back to the production times mentioned above.
• Full automation implies a large number of correlated points. The automatic extraction of breaklines by current software still causes many inaccuracies, which force a revision and the addition of that information in the classical way, by restitution. The time saved is therefore lost in the stereoscopic revision.
• Perhaps the aspect that stands out most is the information source itself: the image, a metric document of high geometric resolution and ever-improving radiometric resolution. Moreover, sensors capture an ever-increasing amount of information across the spectral range.
• The LIDAR techniques appear satisfactory as regards the number of points (density per square metre) and precision (systematic errors must be eliminated through calibration, etc.). However, very specific software must be used, and at times the filtering and classification processes are long and intricate. In any case, a metric verification of the supplied information appears advisable.
The choice of one technique or the other will depend on parameters such as cost, time and quality, considered independently, two at a time, or all together, as we shall see below (decision triangle).
3. CARRYING OUT THE INTEGRATION OF INFORMATION
As the resolution of LIDAR sensors has progressively increased over time, and given the possibility that these sensors may collect spectral information in other bands beyond the LIDAR return itself, researchers have developed classification procedures and techniques based on fusing the information provided by the LIDAR with that provided by conventional photogrammetric cameras. Haala and Brenner (1997) reconstructed 3-D models of cities and vegetation by combining LIDAR data with another data source.
The ML (Maximum Likelihood) classifier is appropriate in our case to solve the problem, since it can use several bands of spectral information and other attributes simultaneously [Tso and Mather (2001)]. The initial hypothesis of this method is that the classes we want to obtain are equally likely in the image considered; although this is not always the case, the method is improved and extended in the well-known Bayesian decision method, which assigns a different likelihood of occurrence to each class [Swain and Davis (1978), Strahler (1980), Hutchinson (1982), Mather (1985), Maselli et al. (1995)].
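The ML classifier with class priors described above can be sketched as follows. This is a minimal illustration, not the software used in the study: all function names and data shapes are our own, each class is modelled as a multivariate Gaussian, and the prior weights the decision rule as in the Bayesian extension.

```python
import numpy as np

def train_gaussian_ml(X, y):
    """Estimate per-class mean, covariance and prior from labelled samples.

    X: (n_samples, n_features) attribute vectors; y: integer class labels.
    """
    classes = np.unique(y)
    params = {}
    for c in classes:
        Xc = X[y == c]
        params[c] = (
            Xc.mean(axis=0),            # class mean vector
            np.cov(Xc, rowvar=False),   # class covariance matrix
            len(Xc) / len(X),           # class prior P(c)
        )
    return params

def classify(X, params):
    """Assign each sample to the class maximising the Gaussian
    log-likelihood plus the log-prior (Bayesian decision rule)."""
    scores = []
    for c, (mu, cov, prior) in params.items():
        inv = np.linalg.inv(cov)
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, inv, d)  # squared Mahalanobis distance
        log_lik = -0.5 * (maha + np.log(np.linalg.det(cov)))
        scores.append(log_lik + np.log(prior))
    classes = list(params.keys())
    return np.array(classes)[np.argmax(scores, axis=0)]
```

Setting every prior equal recovers the plain ML hypothesis of equally likely classes; unequal priors give the Bayesian variant cited above.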
In our case, we classify the LIDAR point cloud by converting the RGB image to HSI, so as to obtain the colour information through the H and S attributes, encoded at 8 bits. In addition, we have the information of the panchromatic camera, which provides the I channel with 16-bit encoding.
To this spectral information we add the R component of the infrared camera, encoded at 8 bits and linearly independent of the previous attributes; the LIDAR intensity level, encoded at 8 bits; and finally the Z increment of the point, i.e. the difference in Z between the first and the last pulse. In all, we classify the LIDAR points with six independent attributes, in order to improve on the classification results that would be obtained using only a subset of the attributes.
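As an illustration of how such a six-attribute vector might be assembled per LIDAR point, the following sketch uses the standard HSI conversion formulae; all function and parameter names are hypothetical, and the encoding follows the description above only in outline.

```python
import numpy as np

def hsi_from_rgb(rgb):
    """Convert RGB values in [0, 1] (shape (n, 3)) to hue, saturation,
    intensity using the standard geometric HSI formulae."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=1) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
    return h, s, i

def build_attributes(rgb, pan_i, ir_r, lidar_intensity, z_first, z_last):
    """Stack the six attributes used to classify each LIDAR point:
    H and S (re-encoded to 8 bits), panchromatic I (16-bit), infrared R
    (8-bit), LIDAR intensity (8-bit), and dZ = Z(first) - Z(last pulse).
    Argument names are illustrative, not from the paper."""
    h, s, _ = hsi_from_rgb(rgb / 255.0)
    return np.column_stack([
        np.round(h * 255), np.round(s * 255),  # H, S encoded at 8 bits
        pan_i,                                 # 16-bit panchromatic I channel
        ir_r,                                  # 8-bit infrared R component
        lidar_intensity,                       # 8-bit LIDAR return intensity
        z_first - z_last,                      # Z increment between pulses
    ])
```

The resulting (n, 6) array is exactly the kind of attribute matrix a maximum-likelihood classifier would consume.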