Photogrammetric and remote sensing systems for data processing and analysis

white "blobs" on black background, or vice versa. The feature extraction 
process isolates each blob and gives it a unique label. This operation is 
carried out by library functions applying a connectivity-analysis pixel 
grouping algorithm. 
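A minimal sketch of this labelling step is given below. It uses scipy.ndimage as a modern stand-in for the paper's own library functions, which are not specified; the choice of 8-connectivity is an assumption.

import numpy as np
from scipy import ndimage

def label_blobs(binary_image):
    # 8-connectivity structuring element: diagonal neighbours also join a blob
    structure = np.ones((3, 3), dtype=int)
    labels, num_blobs = ndimage.label(binary_image, structure=structure)
    return labels, num_blobs

# Tiny example: two separate blobs receive the labels 1 and 2
img = np.array([[0, 1, 1, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 1, 1]])
labels, n = label_blobs(img)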
In order to distinguish the blobs representing targets from
other blobs, characteristic parameters are computed for each blob. The
differences between these computed values and a given set of parameters
for the ideal target are then evaluated, and a blob is recognized as a target
if the differences fall within a pre-set tolerance.
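The sketch below illustrates this test. The particular parameters (blob area and bounding-box aspect ratio) and the tolerance values are illustrative assumptions, since the paper does not list the parameters it actually uses.

import numpy as np

def blob_parameters(labels, label):
    # Simple characteristic parameters for one labelled blob
    ys, xs = np.nonzero(labels == label)
    area = float(ys.size)
    width = float(xs.max() - xs.min() + 1)    # bounding-box width
    height = float(ys.max() - ys.min() + 1)   # bounding-box height
    return {"area": area, "aspect": width / height}

def is_target(blob, ideal, tolerance):
    # A blob is accepted as a target only if every parameter difference
    # falls within its pre-set tolerance
    return all(abs(blob[k] - ideal[k]) <= tolerance[k] for k in ideal)

ideal = {"area": 40.0, "aspect": 1.0}      # assumed values for a round target
tolerance = {"area": 15.0, "aspect": 0.3}  # assumed pre-set tolerances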
Each recognized target covers an area of several pixels, so
the coordinates of the center of the target must be located with sub-pixel
accuracy. For solid-colored symmetric targets on a highly
contrasting background, the coordinates of the centroid are computed and
taken as the target coordinates. However, if the targets are
designed with their center points clearly marked, a second step is carried
out: the enhanced image is recalled, and the grey levels of
the center pixel and a matrix of pixels around it are used to interpolate
the coordinates of the center.
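The following sketch computes the blob centroid and, as one plausible form of the second step, refines the center with a grey-level weighted centroid over a small window of the enhanced image; the exact interpolation formula used at NRCC/PR is not given in the text.

import numpy as np

def blob_centroid(labels, label):
    # Centroid of a labelled blob: sub-pixel target coordinates for
    # solid, symmetric targets on a contrasting background
    ys, xs = np.nonzero(labels == label)
    return xs.mean(), ys.mean()

def grey_weighted_center(grey, x0, y0, half=2):
    # Refine the center from the grey levels of the pixel at (x0, y0) and a
    # small matrix of pixels around it (grey-level weighted centroid; the
    # interpolation scheme here is an assumption, not the paper's formula)
    window = grey[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
    ys, xs = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    w = window.sum()
    return (xs * window).sum() / w, (ys * window).sum() / w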
All the previous computations are applied to individual images,
one from each camera. Each point in one image must now be matched with its
corresponding point in the image taken by the other camera. For the
control points, which are needed to determine the camera orientation and
calibration parameters, a priori knowledge of the number of these
points and of the way they are arranged is required. For all other points,
the image coordinates of a point in the first image and the orientation
parameters of the two images are used to determine the relationship
between the x- and y-coordinates of its corresponding point in the second image (figure
VIII-A). This relationship is a straight line, the epipolar line, as shown
in figure VIII-B. The image coordinates of all the targets in the second
image are tested against the equation of the epipolar line, and the target
that satisfies the equation best is taken as the match. However, this best fit
must be within a preset tolerance; otherwise the point has no match and
is eliminated.
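A compact illustration of this matching test is sketched below. It expresses the straight-line relationship through a fundamental matrix F formed from the orientation parameters of the two images; this is a modern formulation standing in for the paper's own derivation of the epipolar line.

import numpy as np

def match_on_epipolar_line(x1, F, candidates, tol):
    # Epipolar line in the second image for the point x1 = (x, y) measured
    # in the first image; F is a 3x3 fundamental matrix built from the
    # orientation parameters of the two images (an assumed representation)
    a, b, c = F @ np.array([x1[0], x1[1], 1.0])
    norm = np.hypot(a, b)

    best, best_dist = None, np.inf
    for x2, y2 in candidates:
        dist = abs(a * x2 + b * y2 + c) / norm   # distance from the epipolar line
        if dist < best_dist:
            best, best_dist = (x2, y2), dist

    # The best fit must be within the preset tolerance, otherwise no match
    return best if best_dist <= tol else None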
Once all the targets are matched, their image coordinates and the 
orientation parameters are used to determine the object coordinates of 
these targets by photogrammetric intersection. 
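As an illustration of the intersection step, a linear (DLT-style) triangulation from two 3x4 projection matrices is sketched below; this is one common way to carry out the computation, and the matrix names and formulation are assumptions rather than the paper's actual algebra.

import numpy as np

def photogrammetric_intersection(P1, P2, x1, x2):
    # Object coordinates of a matched target from its image coordinates in
    # the two photographs; P1 and P2 are 3x4 projection matrices built from
    # the interior and exterior orientation parameters
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: the singular vector of A with the smallest singular value
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]               # homogeneous -> Cartesian (X, Y, Z)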
SUMMARY 
In conclusion, we have provided a brief description of the digital 
image processing facilities currently available at NRCC/PR and have 
discussed the application of one of the systems to a fully automated 
photogrammetric task. While none of the equipment is of exceptionally high 
performance calibre, we feel that modest equipment such as ours is capable 
of addressing a wide range of problems relating to photogrammetric digital 
image processing. It seems certain, on the other hand, that product-oriented
development of an all-digital photogrammetric system will
require equipment pushing state-of-the-art technology if implemented in a
straightforward manner. Future research may illuminate techniques for
lessening the technological requirements or, at least, provide effective
and efficient digital solutions to photogrammetric problems.