[Figure: input in object space — a table of object coordinates of the control points and a table of measured 3-D distances; input in image space — image coordinates of control and tie points of images #1 to #n. Results: 3-D coordinates of control and tie points, adjusted orientation elements, and the accuracy of the adjusted values.]
Figure 2: Overview of Bundle Adjustment 
3. CALIBRATION AND ORIENTATION OF THE CAMERAS 
In this development two or more cameras will be used simultaneously for 
image acquisition. For the calibration of the cameras and calculation of 
the camera arrangement, a calibration cube is used. It measures 50 x 50 cm and is built of black iron rods. Mounted on these rods are bright white balls with a diameter of 12 mm, which serve as control points. The
locations of these control points were measured by using three high 
precision theodolites. The network included 2 high precision scale bars. 
The accuracy after adjustment was 0.07 mm. 
If the cube fits exactly in one image then the pixel size is approximately 
1 mm in the object space (assuming a sensor matrix of about 512 x 512 
pixels). The adjusted accuracy of 0.07 mm in object space then corresponds to less than a tenth of a pixel in the image, which is consistent with the expected matching accuracy.
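This back-of-the-envelope relation is easy to verify; the numbers below (50 cm cube, a roughly 512 x 512 sensor matrix, 0.07 mm network accuracy) are those quoted above, and the script is only an illustrative sketch:

cube_size_mm = 500.0            # calibration cube edge length (50 cm)
sensor_pixels = 512             # approximate sensor matrix dimension
network_accuracy_mm = 0.07      # accuracy of the adjusted control network

# If the cube fills the image, one pixel covers about this much object space:
pixel_size_mm = cube_size_mm / sensor_pixels        # ~0.98 mm per pixel

# Control point accuracy expressed as a fraction of an image pixel:
accuracy_px = network_accuracy_mm / pixel_size_mm   # ~0.07 px, i.e. < 0.1 px

print(f"object-space pixel size: {pixel_size_mm:.2f} mm")
print(f"control point accuracy:  {accuracy_px:.3f} pixels")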
The cube can be relocated and the re-calibration done anywhere. For the 
calibration itself, it is necessary to find the locations of the control points 
in the images. A simple thresholding is sufficient if the background is 
very dark compared to the bright targets. The image points found after 
thresholding are checked for their circular shape; points that are not circular are rejected. Such deviations can be caused by partial occlusion of a target or by the detection of elements that are not actual targets. The
computation of the image coordinates of the targets is again done in the 
original greyscale image by calculating the centre of gravity (Trinder, 
1989). 
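As an illustration, this detection step could be sketched as follows in Python: thresholding, a roundness test based on the coordinate covariance of each blob, and a grey-value weighted centre of gravity in the original image. The threshold, the minimum blob size and the axis-ratio tolerance are illustrative assumptions, not values from the text:

import numpy as np
from scipy import ndimage

def measure_targets(grey, threshold=128, max_axis_ratio=1.3, min_pixels=5):
    binary = grey > threshold                  # bright targets on dark background
    labels, num = ndimage.label(binary)        # connected bright regions
    centres = []
    for i in range(1, num + 1):
        ys, xs = np.nonzero(labels == i)
        if len(xs) < min_pixels:               # too small to be a ball target
            continue
        # Roundness test: for a circular blob the two eigenvalues of the
        # coordinate covariance matrix are nearly equal; elongated blobs
        # (partially occluded targets, non-targets) are rejected.
        ev = np.linalg.eigvalsh(np.cov(np.vstack((xs, ys))))
        if ev[1] / max(ev[0], 1e-9) > max_axis_ratio:
            continue
        # Subpixel position: centre of gravity weighted by the original
        # grey values (cf. Trinder, 1989).
        w = grey[ys, xs].astype(float)
        centres.append((np.sum(w * xs) / w.sum(), np.sum(w * ys) / w.sum()))
    return centres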
The only remaining problem is an automatic number assignment for the control points. There are two possibilities for finding the right number for a measured point. Firstly, if the orientation elements of the cameras (especially the exterior orientation) are accurate enough, the imaged positions of the control points can be precalculated and compared with the actually found positions. The comparison of the point distribution is more important than the estimated position. Secondly, the previous method fails in cases where only rough approximations of the camera orientation are known. In this case, operator interaction becomes necessary: the operator must assign point numbers to
at least four well distributed control points. After this, a first bundle adjustment (or spatial resection) can be calculated and the automatic assignment described above can be started. Finally, a bundle adjustment calculates the interior and exterior orientation elements, which are used later for compiling the images.
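The automatic assignment could be sketched as follows: each control point is projected into the image with the approximate orientation elements, and the nearest detected image point within a tolerance inherits its number. The function names, the reduction of the interior orientation to a single principal distance c, and the 5 pixel tolerance are illustrative assumptions:

import numpy as np

def assign_numbers(control_xyz, R, X0, c, detected_xy, tol_px=5.0):
    """control_xyz: {point number: (X, Y, Z)}; R: 3x3 rotation matrix;
    X0: projection centre; c: principal distance; detected_xy: Nx2 array."""
    assignment = {}
    for number, X in control_xyz.items():
        # Collinearity equations: object point in the camera system ...
        k = R @ (np.asarray(X, float) - np.asarray(X0, float))
        # ... projected into the image plane.
        xy = -c * k[:2] / k[2]
        d = np.linalg.norm(detected_xy - xy, axis=1)
        if d.min() < tol_px:              # accept only close, unambiguous hits
            assignment[number] = int(d.argmin())
    return assignment                     # {point number: detected point index}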
4. DERIVATION OF THE OBJECT SHAPE 
One of the main problems in the derivation of object shapes in close range applications is the large parallaxes and occlusions, which make the search for corresponding points in the images very difficult. The least squares matching algorithm needs very good approximations, within four to five pixels. There are several possibilities to obtain these first approximations very quickly, without complicated and time consuming image interpretation tasks. In our project the following two approaches were selected:
- binary coded structured light 
- epipolar constraints of more than two images 
4.1 Binary Coded Structured Light 
This principle has been used successfully in several projects (Stahs and 
Wahl, 1990, Kim and Alexander, 1991). The minimum equipment is one 
calibrated slide projector and one camera. We are using a slide projector 
and two cameras. In this case the slide projector does not need to be very accurate. Distortions of the projector lens, for instance, or distortions of the slide image (provided these distortions are common to all slides) have no influence on the geometric quality of the result. A calibration of the projector is therefore not necessary. The pattern projected onto the object surface is used for image segmentation. For matching the correct geometric positions and for calculating the 3-D coordinates, only the images of the CCD cameras are involved (Figure 3).
Figure 3: Slide Projector and CCD Cameras 
A sequence of at most 8 black-and-white stripe patterns with frequencies increasing by powers of two, two pictures for normalization, and possibly one further picture with a random pattern are projected onto the object surface. The direction of the stripes must be approximately orthogonal to the photogrammetric base. The images taken by the cameras are normalized and thresholded, giving 8 binary images for each camera. These 8 images are combined into one greyvalue image by a simple addition, which resembles a greywedge projected onto the surface of the object. The same area of the object is covered by a stripe of the same greyvalue in the left and right images. Black areas indicate regions where matching is impossible, caused by shadows or very dark object regions. Corresponding positions within a stripe can be found by calculating the intersection of a stripe with an epipolar line.
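The decoding might be sketched as follows, under the assumption that the eight thresholded images are combined with power-of-two weights (coarsest pattern as the most significant bit), so that the result resembles the greywedge mentioned above; the contrast threshold for detecting shadowed regions is an illustrative assumption:

import numpy as np

def decode_stripes(binary_images, bright, dark, min_contrast=10):
    """binary_images: 8 thresholded stripe images (0/1), coarsest first;
    bright, dark: the two normalization images (projector fully on/off)."""
    code = np.zeros_like(binary_images[0], dtype=np.uint8)
    for i, b in enumerate(binary_images):
        code |= b.astype(np.uint8) << (7 - i)   # coarsest stripe = MSB
    # Shadows and very dark object regions carry no usable projector
    # signal; they are set to black, marking them as unmatchable.
    invalid = (bright.astype(int) - dark.astype(int)) < min_contrast
    code[invalid] = 0
    return code

Corresponding positions are then found, as described above, by intersecting a stripe of equal code value with the epipolar line in the other image.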
Although this type of segmentation can be done very quickly, there is one major drawback: since 8 consecutive images must be taken, the object must not move during data capture. The time necessary for capturing the image data depends mostly on how fast the slide projector can change the slides. The ideal slide is an LCD slide, where the pattern is generated and changed under computer control without any mechanical movement. It is hoped to incorporate such a system into the method in the future.
4.2 Epipolar Constraints of More Than Two Cameras 
If the projection of stripe patterns is not possible for some reason, a further possibility exists for finding corresponding points in the images. If more than two images are taken of the same object, the epipolar constraints of all images together can be used for eliminating ambiguities (Maas, 1991). The precondition for this method is a good random (dot) texture on the object surface, which can be projected artificially if necessary.
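A minimal sketch of such a consistency test for three images follows, assuming fundamental matrices F12, F13 and F23 derived from the adjusted orientations; the simple accept/reject logic below (a candidate in image 2 is kept only if a consistent point exists in image 3) is a simplified rendering of the multi-image constraint, not the full method of Maas (1991):

import numpy as np

def epi_dist(F, xa, xb):
    """Distance of point xb from the epipolar line of xa given by F."""
    l = F @ np.array([xa[0], xa[1], 1.0])
    return abs(l @ np.array([xb[0], xb[1], 1.0])) / np.hypot(l[0], l[1])

def resolve(p1, candidates2, points3, F12, F13, F23, tol=1.0):
    """Eliminate ambiguities for p1 (image 1) among candidates in image 2
    by demanding a consistent partner among the points of image 3."""
    for p2 in candidates2:
        if epi_dist(F12, p1, p2) > tol:
            continue                      # not on the epipolar line of p1
        for p3 in points3:
            # p3 must lie near the epipolar lines of both p1 and p2.
            if epi_dist(F13, p1, p3) < tol and epi_dist(F23, p2, p3) < tol:
                return p2, p3             # consistent in all three images
    return None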
5. FINAL FINE MATCHING 
After the intersection of the stripes in the images with the epipolar lines, the image coordinates of two corresponding image points are available. The accuracy of the object coordinates will depend on the frequency of the stripe pattern and on the accuracy of the projection centres. If this accuracy is not sufficient and the object surface shows a suitable texture, an additional fine matching can be performed. The already calculated image coordinates are good starting values for a least squares matching procedure.
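A much simplified sketch of such a refinement follows: starting from the approximate coordinates, the shift between a template patch and the search image is estimated iteratively from the linearized grey value differences. A full least squares matching additionally estimates affine and radiometric parameters; the window size, iteration count and convergence limit below are illustrative assumptions:

import numpy as np
from scipy import ndimage

def lsm_refine(template, search, x0, y0, half=8, iterations=10):
    """template: (2*half+1)^2 patch around the point in the first image;
    search: second image; (x0, y0): approximate position to be refined."""
    ty, tx = np.mgrid[-half:half + 1, -half:half + 1]
    x, y = float(x0), float(y0)
    for _ in range(iterations):
        # Resample the search image at the current (subpixel) position.
        coords = np.vstack(((y + ty).ravel(), (x + tx).ravel()))
        patch = ndimage.map_coordinates(search, coords, order=1).reshape(ty.shape)
        # Linearized observation equations: gx*dx + gy*dy = template - patch.
        gy, gx = np.gradient(patch)
        A = np.column_stack((gx.ravel(), gy.ravel()))
        dx, dy = np.linalg.lstsq(A, (template - patch).ravel(), rcond=None)[0]
        x, y = x + dx, y + dy
        if max(abs(dx), abs(dy)) < 0.01:  # converged to 1/100 pixel
            break
    return x, y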