A weighting factor was introduced as suggested by
Trinder, 1989, changing Equation 6 into Equation 7.
\[
x = \frac{1}{M}\sum_{i=1}^{n}\sum_{j=1}^{m} x_i\, g_{ij}\, w_{ij},
\qquad
y = \frac{1}{M}\sum_{i=1}^{n}\sum_{j=1}^{m} y_j\, g_{ij}\, w_{ij}
\tag{7}
\]
where
\[
M = \sum_{i=1}^{n}\sum_{j=1}^{m} g_{ij}\, w_{ij},
\]
g_{ij} is the grey scale value of each pixel, n = m = 15,
and w_{ij} = g_{ij}.
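Equation 7 can be sketched in Python as follows; this is a minimal illustration, and the indexing convention (row index i mapped to y, column index j to x) is an assumption, not taken from the original software:

```python
import numpy as np

def weighted_centroid(window):
    """Weighted centroid of a grey-scale window (Equation 7).

    With w_ij = g_ij the weights reduce to squared intensities,
    so bright target pixels dominate the dimmer background.
    """
    g = np.asarray(window, dtype=float)
    w = g                      # w_ij = g_ij, as in the text
    gw = g * w
    M = gw.sum()               # normalising factor of Equation 7
    n, m = g.shape
    ys, xs = np.mgrid[0:n, 0:m]  # assumed convention: row i -> y, column j -> x
    x = (xs * gw).sum() / M
    y = (ys * gw).sum() / M
    return x, y
```

For a window containing a single bright pixel the function returns that pixel's coordinates, as expected of a centroid.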
In this case the higher intensity values of the target are
given a greater weight in the calculation so that the
influence of the background is decreased. Further tests
were performed using this equation instead of Equation
6, with unaltered data sets, network, camera
calibration, method of target location, labelling, and
bundle adjustment procedure. This resulted in an
improvement over the simple centroid method of 25%
giving an overall subpixel accuracy of 0.11 of a pixel.
The benefit of the approach adopted is that an
independent measure of accuracy achieved is used to
assess the two methods. Furthermore, this is possible in
a measurement situation with real problems of variable
illumination, target orientation, and target distance.
The results of these tests show that the location of the
targets using these methods produces reasonable
accuracy. A few targets exhibited residuals of up to one
pixel in the bundle adjustment. These larger residuals
occurred at the same image positions for each method
but were not examined further as the primary purpose
was to compare the two target location methods.
4. UNIQUE LABELLING OF TARGETS.
It has been shown that the coordinates of legitimate
targets can be extracted with high reliability and
accuracy. These coordinates provide the basic
information required for the bundle adjustment
program to calculate the 3-D coordinates of the targets.
However, although it is possible to identify and locate
the targets from each camera station, the differing
camera orientations mean that the subject may be
distorted, some of the targets possibly occluded, or
targets may be out of the field of view. Therefore, it is
necessary for the targets from each view to be uniquely
identified with respect to each other.
Ideally, the locations of some or all of the labels of the
targets could be mathematically modelled in 3-D space
and a transformation performed for each varying
camera location. However, this presupposes just the
information which is the end product of the whole
measurement process. Unfortunately many objects are
complex to model accurately and so an approximation
may be a better approach. Another method may be to
use uniquely shaped targets. However, this has serious
implications for the imaging of these targets because for
unique identification it is likely that the targets would
need to be larger than the small circular targets used
and would also be non-symmetric under translation.
Fortunately the wind vane under consideration
approximates to a flat surface and an affine
transformation can be performed. See Figure 7.
Figure 7. Image of the tip of the turbine blade.
4.1 Choice of control points.
The choice of parameters for the transformation of the
image data to the same orientation is performed by
choosing at least three, generally four, known control
points which can be uniquely identified in each image,
say, the corners. Then by performing the affine
transformation the image is warped. These control
points need to be positioned to minimise the distortion
caused by the transformation of the 2-D image of a real
3-D object. It was found in practice that the choice of targets
at or near to the corners gave the best results.
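The fitting of such a transformation from matched control points can be sketched as below; the least-squares formulation and the function names are illustrative assumptions, not those of the original software:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a 2-D affine transform mapping src -> dst control points.

    src, dst: (k, 2) arrays of matched control points, k >= 3
    (generally four, as in the text). Solves the six parameters of
        X = a*x + b*y + c,   Y = d*x + e*y + f
    by least squares, over-determined when k > 3.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # (k, 3) design matrix
    # Solve separately for the X and Y mapping functions
    px, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # (a, b, c)
    py, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # (d, e, f)
    return px, py

def apply_affine(px, py, pts):
    """Map points through the fitted affine transform."""
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return np.column_stack([A @ px, A @ py])
```

With the six parameters recovered from the corner targets, every remaining target in the image can be mapped into the reference orientation for matching.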
4.2. Principle of the transformation.
The basic principle of the transformation is that one of
the images is considered to be the reference image and
the other images are transformed to match its
orientation. If the number of control points identified
on the master image is m, then the equations of a
polynomial are:
\[
X = T_x(X_n, Y_n), \qquad Y = T_y(X_n, Y_n) \tag{8}
\]
Tx() and Ty() are single mapping functions. Because
the position and orientation of the cameras are
arbitrarily placed, the mapping relationship of Tx() and
Ty() has to be approximated. In this case, where a linear
affine transformation is sufficient, then: