After having oriented the aerial imagery to the LIDAR point 
cloud, we can fuse features extracted from the images with
the segmented surface. Figs. 3(e,f) depict the edges ob- 
tained with the Canny operator. We show them here to 
demonstrate the difficulty of matching edges to reconstruct 
the object space by stereopsis. With the segmented sur- 
face and the exterior orientation parameters available, it is
possible to constrain the edge detection process to special 
areas, such as the boundaries of segmented regions, to 
adapt the parameters of the edge operator, or even choose 
other operators that may be better suited in a particular 
case. Figs. 3(g,h) show the effect of using all the knowledge that has been gained about the scene before extracting
edges. The segmentation of the LIDAR points led to pla- 
nar surface patches and boundaries. These boundaries are 
projected back to the images and thus specify image re- 
gions where we look for edges. The edges obtained in both 
images are then projected into the segmented scene, for 
example by intersecting the planar surface patches with the 
plane defined by the projection center and the edge. With 
this procedure we now have boundaries in object space that
have been derived either from LIDAR points or from aerial 
images, or from a combination. Fig. 3(i) shows the final re- 
sult. The color coding of the boundaries reflects these combinations, which also serve as a useful measure of confidence and accuracy. For example, the red roof edge was determined from LIDAR and confirmed by edges from both aerial
images. 
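To make the last step concrete, the following sketch (in Python with NumPy; the function name, the photo coordinate convention, and all variable names are illustrative and not part of the paper) intersects a planar surface patch with the plane defined by the projection center and an image edge, yielding the corresponding boundary line in object space.

    import numpy as np

    def edge_to_object_space(p1, p2, R, C, f, n_patch, q_patch):
        # Directions of the image rays through the two edge endpoints,
        # rotated from the photo coordinate system into object space.
        r1 = R @ np.array([p1[0], p1[1], -f])
        r2 = R @ np.array([p2[0], p2[1], -f])
        # Normal of the plane spanned by the projection center C and the edge.
        n_edge = np.cross(r1, r2)
        n_edge = n_edge / np.linalg.norm(n_edge)
        # The object-space boundary is the intersection of this plane with
        # the planar LIDAR patch (normal n_patch through point q_patch).
        d = np.cross(n_edge, n_patch)
        d = d / np.linalg.norm(d)
        # One point satisfying both plane equations (minimum-norm solution).
        A = np.vstack([n_edge, n_patch])
        b = np.array([n_edge @ C, n_patch @ q_patch])
        p0 = np.linalg.lstsq(A, b, rcond=None)[0]
        return p0, d    # object-space line X(t) = p0 + t * d

The boundaries obtained from the two images can then be compared and merged with those derived from the LIDAR segmentation, as described above.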
6. CONCLUDING REMARKS 
We have shown in this paper that fusing aerial imagery with 
LIDAR data results in a more complete surface reconstruc- 
tion because the two sensors contribute complementary 
surface information. Moreover, disadvantages of one sen- 
sor are partially compensated by advantages of the other 
sensor. We have approached the solution of the fusion prob- 
lem in two steps, beginning with establishing a common ref- 
erence frame, followed by fusing geometric and semantic 
information for an explicit surface description. 
Many higher-order vision tasks require information about the surface. Surface information must be represented explicitly (symbolically) to be useful in spatial reasoning processes.
Useful surface information comprises surface patches, de- 
scribed by an analytical function, their boundaries, surface 
discontinuities, and surface roughness. Note that the explicit 
surface description is continuous, just like the real physical 
surface. This is in contrast to the better known discrete rep- 
resentations such as DEMs, DSMs, and DTMs, in which surface information is only implicitly available, with the notable exception of a DTM that contains breaklines. Unlike explicit
descriptions, grid and triangular representations (TIN) have 
no direct relationships with objects. 
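As an illustration of what such an explicit (symbolic) description could look like as a data structure, the following Python sketch collects the elements named above; the class and field names are purely illustrative and not taken from the paper.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Boundary:
        vertices: List[Tuple[float, float, float]]  # 3D polyline in object space
        source: str                                  # "lidar", "image", or "lidar+image"

    @dataclass
    class PlanarPatch:
        plane: Tuple[float, float, float, float]     # analytical form aX + bY + cZ + d = 0
        boundaries: List[Boundary] = field(default_factory=list)
        roughness: float = 0.0                       # e.g. RMS of LIDAR residuals to the plane

    @dataclass
    class ExplicitSurface:
        patches: List[PlanarPatch] = field(default_factory=list)
        discontinuities: List[Boundary] = field(default_factory=list)

In contrast to a grid or TIN, every element of such a structure refers directly to a surface entity and can carry provenance and accuracy information.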
The fusion of aerial imagery and LIDAR offers interesting 
applications. The first step, for example, establishes an ex-
cellent basis for performing a rigorous quality control of the 
LIDAR data. This is particularly true for estimating the hor- 
izontal accuracy of laser points and for discovering sys- 
tematic errors that may still remain undetected even after 
careful system calibration. Another interesting application 
is change detection. Imagine a situation where aerial im- 
agery and LIDAR data of the same site are available but with 
a time gap between the separate data collection missions. 
Differences between the two data sets that exceed random 
error expectations must have been caused by systematic
errors or by changes in the surface. 
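A minimal sketch of such a change detection test, assuming independent random errors in the two data sets and purely illustrative a-priori standard deviations, could look as follows in Python:

    import numpy as np

    def flag_differences(dz, sigma_lidar=0.15, sigma_image=0.20, k=3.0):
        # Combined a-priori standard deviation of a height difference.
        sigma_diff = np.hypot(sigma_lidar, sigma_image)
        # Differences beyond k combined sigmas cannot be explained by
        # random errors alone and point to systematic errors or changes.
        return np.abs(dz) > k * sigma_diff

Here dz would hold the height differences between the LIDAR surface and the image-derived surface at common locations.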
With the fusion approach described in this paper completed, future research will concentrate on applications
in order to test the suitability of the explicit surface descrip- 
tion in spatial reasoning processes as they pertain to object 
recognition and other image understanding tasks. 