5. Even when using different types of sensor, some parts of a complex environment will not be captured due to, for 
example, occlusions. Methods to interpolate or extrapolate to fill the gaps should be employed. 
6. The system should be able to integrate models constructed from independent sets of images (e.g. a set of images 
capturing a front of a building and another set capturing the back, the top, or the inside). 
7. For large complex environments, the system must be able to take input from positioning devices (e.g. GPS). 
This paper argues that implementing the above ideas will result in: 
• Flexibility and fewer restrictions on the type of object or environment in terms of complexity and size. 
• High accuracy and reliability. 
• Complete scene description. 
This approach is suited for a variety of applications including accurate documentation of complex sites. 
2 DETAILS OF THE APPROACH 
To completely describe a complex scene, we propose to combine several techniques in practice. The basic approach uses 
multiple overlapped images and a photogrammetric bundle adjustment to find the global shape of the environment or 
object. However, we also need 3D points from single images, because the following cases are prevalent (see the sketch 
after this list): 
• Most images do not have enough features to describe all surfaces. Without features, multi-image techniques cannot be used. 
• Parts of the surface appear in one image only, due to occlusions or incomplete coverage. 
• In complex scenes, finding features in multiple images can be time consuming and error prone. 
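The single-image case relies on a surface that has already been fitted (see section 2.1). The paper does not spell out the computation, but a natural reading is to intersect the viewing ray of the image point with the fitted surface. The sketch below does this for a plane; the camera parameters, plane, and function names are illustrative assumptions, not part of the described system.

```python
import numpy as np

def pixel_to_ray(K, R, t, u, v):
    """Back-project pixel (u, v) into a viewing ray in world coordinates.

    K is the 3x3 intrinsic matrix (from camera calibration); R, t are the
    camera rotation and translation (from bundle adjustment), with
    x_cam = R @ X_world + t.
    """
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction in camera frame
    origin = -R.T @ t                                   # camera centre in world frame
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)

def intersect_ray_plane(origin, direction, n, d):
    """Intersect the ray origin + s * direction with the plane n . X + d = 0."""
    denom = n @ direction
    if abs(denom) < 1e-12:
        return None                                     # ray parallel to the plane
    s = -(n @ origin + d) / denom
    return origin + s * direction if s > 0 else None

# Hypothetical example: a calibrated camera at the origin looking along world Z,
# and a fitted plane Z = 5.
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 480.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
n, d = np.array([0.0, 0.0, 1.0]), -5.0
o, r = pixel_to_ray(K, R, t, 700.0, 500.0)
print(intersect_ray_plane(o, r, n, d))                  # 3D point on the fitted plane
```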
We also need additional techniques to automatically generate sufficient 3D points on non-flat surfaces that have 
only a few features. If the surface is of a known shape, for example a sphere, cylinder, or quadric, we can determine the 
parameters of the surface from the available features. Once these are determined, additional points can be added automatically 
and projected into the images using the known image parameters. Another approach, which can be applied to smooth a 
triangulated part of the surface, is polygon subdivision (Zorin, 1997). The initial set of triangles is split into smaller 
triangles by adding, for example, a point in the middle of each side and fitting B-splines to determine the 3D coordinates of 
the new points (the splitting step is sketched below). The process is repeated as many times as needed. However, the existing 
features may not be sufficient to start this process. In this case an active range sensor such as a laser scanner is best suited 
for the local details. 
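A minimal sketch of the 1-to-4 triangle split described above, with the new points left at the edge midpoints; the subsequent B-spline (or subdivision-rule) smoothing of those points is omitted. The function name and mesh representation are illustrative.

```python
import numpy as np

def subdivide(vertices, triangles):
    """One subdivision step: split every triangle into four by inserting a
    point at the midpoint of each edge.

    vertices:  (N, 3) array of 3D points
    triangles: list of (i, j, k) vertex-index triples
    In the approach described above, the new points would then be moved onto
    a smooth (e.g. B-spline) surface; here they stay at the midpoints.
    """
    verts = list(map(tuple, vertices))
    midpoint_cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))        # share midpoints between neighbouring triangles
        if key not in midpoint_cache:
            verts.append(tuple((np.asarray(verts[i]) + np.asarray(verts[j])) / 2.0))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    new_triangles = []
    for i, j, k in triangles:
        a, b, c = midpoint(i, j), midpoint(j, k), midpoint(k, i)
        new_triangles += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
    return np.array(verts), new_triangles

# Repeating the step refines the mesh: two rounds turn one triangle into 16.
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
T = [(0, 1, 2)]
for _ in range(2):
    V, T = subdivide(V, T)
print(len(T))   # 16
```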
2.1 Main Features 
Based on the ideas presented in section 1.2 and the above discussion, we designed a system with the following features: 
• Complete camera calibration with all distortion parameters, to provide a solid basis for all subsequent computations 
• Sub-pixel target and corner extraction 
• Manual selection and labeling of points 
• Photogrammetric bundle adjustment with or without control points (free network) 
• Least-squares surface fitting (planes, cylinders, spheres, quadrics, circles); a plane-fitting sketch follows this list 
• Computation of 3D coordinates from a single image using a fitted surface model 
• Automatic addition of randomly distributed points on known surfaces 
• Automatic polygon subdivision to smooth the surface between existing triangles 
• Automatic extraction and matching of targets, if available, after registration 
• Interactive registration and integration of laser-scanner data with the main data sets from digital images 
• Interactive registration and integration of models created by independent sets of images 
• Accuracy estimates for calibration, 3D points, registration, and fitting 
• Input from positioning devices 
• Human-assisted determination of point connectivity, followed by automatic modeling 
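To make the least-squares surface fitting listed above concrete, the sketch below fits a plane to a set of 3D points by total least squares (SVD). The cylinder, sphere, and quadric cases require nonlinear least squares and are not shown; the data and function name are illustrative only.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of 3D points.

    Returns (n, d) so that the plane is n . X + d = 0, with n a unit normal.
    The normal is the right singular vector of the centred points with the
    smallest singular value (total least squares).
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                       # direction of smallest variance
    d = -n @ centroid
    return n, d

# Illustrative data: noisy samples of the plane z = 0.1x + 0.2y + 1
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1.0 + rng.normal(scale=0.01, size=200)
pts = np.column_stack([xy, z])
n, d = fit_plane(pts)
print(n / -n[2], d / -n[2])          # approx. (0.1, 0.2, -1.0) and 1.0
```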
2.2 Main Steps 
Figure 2 summarizes the main steps of the procedure. In the data collection step, overlapped images, usually from a digital still 
camera, are taken with a wide baseline, making sure to cover the intended object or site. If local details are needed on 
some parts, those can be acquired with a range sensor. The placement of the sensors depends entirely on the site and the 
application requirements and is beyond the scope of this paper. 
  