The described algorithm works independently for each triangle; the result of the TtIA does not depend on the assignment of the adjacent triangles. Depending on the illumination conditions during acquisition, the brightness levels of the used images can be very inhomogeneous, so the result of texturing the model with the original images and the TtIA can be speckled. This problem is reduced or eliminated using the following global and local image enhancement steps (chapter 3.4).
3.4 Global Grey Value Adjustment 
3.4.1 Colour space
Since the images are acquired under controlled illumination conditions with one camera and constant settings, we assume that the colour values of the same object points in two adjacent images are approximately constant; however, because of the different acquisition positions and exposure times, we assume that the brightness levels of the images differ. Therefore, we apply the following vignetting correction as well as the global correction (chapters 3.4.2-3.4.4) only to the brightness of the images; the colour values are not modified.
To do this independently of the colour and in accordance with human colour and brightness perception, the CIE 1976 (L*, a*, b*) colour space (CIELAB) is used. This colour space was developed by the CIE (Commission Internationale de l'Éclairage) to describe all colours visible to the human eye and to approximate human vision. It aspires to perceptual uniformity, and its L component closely matches the human perception of lightness. It can be used for accurate colour balancing or to adjust the lightness contrast via the L component. These transformations are difficult or impossible to handle in RGB space, because RGB models the output of physical devices rather than human visual perception. Uniform changes of the components in the L*a*b* colour model are intended to correspond to uniform changes in perceived colour. Therefore, the images are converted from the RGB to the L*a*b* colour space, and only the L channel (lightness) is processed.
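As an illustration of this step (not part of the original paper), a minimal Python sketch, assuming the scikit-image library and RGB images with float values in [0, 1], could look as follows:

```python
# Illustrative sketch only: convert an RGB image to CIELAB, modify
# only the L (lightness) channel, and convert back for texturing.
import numpy as np
from skimage import color

def adjust_lightness(rgb, l_offset):
    """Shift the CIELAB L channel by l_offset, leaving a* and b* untouched."""
    lab = color.rgb2lab(rgb)                          # L in [0, 100]
    lab[..., 0] = np.clip(lab[..., 0] + l_offset, 0.0, 100.0)
    return color.lab2rgb(lab)                         # back to RGB
```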
3.4.2 Removal of vignetting 
The first step in enhancing the images is the reduction of the vignetting, also called the light fall-off effect. For this purpose the cos⁴ relation is used (Hasler, 2004), which reduces the effect significantly as long as the influence of the lens system can be described sufficiently accurately. For more complex lens systems this approach is no longer sufficient and more elaborate algorithms have to be used (d'Angelo, 2004; Goldman, 2005; Litvinov, 2005).
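The cos⁴ relation states that a pixel at radial distance r from the principal point, seen under the off-axis angle θ with tan θ = r/f, is attenuated by cos⁴ θ. The following sketch, an assumption-laden illustration rather than the authors' implementation, undoes this attenuation on a single L channel; the focal length in pixels is an assumed input:

```python
import numpy as np

def remove_cos4_vignetting(luminance, f_pixels, cx=None, cy=None):
    """Compensate cos^4 light fall-off on a single-channel (L) image.

    Minimal sketch assuming an ideal lens: brightness at radial
    distance r from the principal point is attenuated by
    cos^4(arctan(r / f)).  f_pixels is the focal length in pixels
    (an assumed, hypothetical input).
    """
    h, w = luminance.shape
    cx = (w - 1) / 2.0 if cx is None else cx   # principal point defaults
    cy = (h - 1) / 2.0 if cy is None else cy   # to the image centre
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - cx, y - cy)
    falloff = np.cos(np.arctan(r / f_pixels)) ** 4
    return luminance / falloff                  # undo the attenuation
```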
3.4.3 Global brightness correction 
The algorithm calculates one global brightness difference by averaging over the whole image; this difference value is then applied to all pixels of the image. The brightness differences are calculated as follows. The images are converted into the L*a*b* colour space, and the L (brightness) component of each common object point is determined in two images; the difference of these two values is calculated for all points. The global brightness difference is the average of all these per-point differences. The key factor for achieving acceptable results is an accurate orientation of the images; otherwise the grey value difference calculation is not meaningful, because the grey values are taken from image positions that do not correspond to each other. To reduce this influence as much as possible, two processing steps were implemented. First, the brightness values are determined as the average over an area rather than at a single pixel: the grey value for each point is computed from a square neighbourhood around the calculated image position. Empirical tests have shown that the optimal size of this neighbourhood depends on the dataset and on the quality of the image orientation; the best results were achieved with neighbourhoods of 3×3 up to 7×7 pixels around the calculated image coordinates.
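A possible realisation of this averaging step, again only a sketch under the assumption that matched (x, y) coordinates of the common object points are already available as arrays, is:

```python
import numpy as np

def mean_l_in_window(l_channel, x, y, half=2):
    """Average the L channel in a (2*half+1)^2 window around (x, y)."""
    h, w = l_channel.shape
    x0, x1 = max(0, int(x) - half), min(w, int(x) + half + 1)
    y0, y1 = max(0, int(y) - half), min(h, int(y) + half + 1)
    return l_channel[y0:y1, x0:x1].mean()

def global_brightness_difference(l_a, l_b, points_a, points_b, half=2):
    """Mean and spread of the L differences over all common points.

    points_a / points_b are hypothetical (N, 2) arrays of matched
    (x, y) image coordinates of the same object points.  half=2 gives
    a 5x5 window, inside the 3x3 to 7x7 range reported above.
    """
    diffs = np.array([mean_l_in_window(l_a, xa, ya, half) -
                      mean_l_in_window(l_b, xb, yb, half)
                      for (xa, ya), (xb, yb) in zip(points_a, points_b)])
    return diffs.mean(), diffs.std()
```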
The second step is the improvement of the image registration or orientation itself. Two potential methods exist. The first is an improvement of the whole image orientation, covering the exterior as well as the interior orientation including the distortion parameters. Since the presented algorithm should work with different camera models and with different definitions of the distortion parameters, implementing each camera model is not flexible enough. Therefore, a second and more flexible procedure was defined, which matches the single points in two images to each other. At the beginning, one image is set as the master image, and the image point coordinates are calculated for all images using the given orientation parameters and the camera model. To improve the position of each point in the second (slave) image, a cross-correlation matching is conducted. During this step, the position of the point in the slave image may move by up to four pixels; this limit is used to avoid large shifts in "uncooperative" regions, e.g. regions without sufficient texture and contrast. The algorithm was implemented to achieve sub-pixel accuracy and works in four steps. First, an 11-by-11 template around the master control point and a 21-by-21 region around the point in the slave image are extracted, followed by the calculation of the normalized cross-correlation values between the template and the selected search region. The detected absolute peak of the cross-correlation matrix is then used to adjust the coordinates of the point in the second image.
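The pixel-level core of this matching can be sketched as below, assuming scikit-image for the normalized cross-correlation; the sub-pixel refinement of the peak, which the paper mentions, is omitted here for brevity:

```python
import numpy as np
from skimage.feature import match_template

def refine_point(master_l, slave_l, pt_master, pt_slave, t=5, s=10):
    """Refine a back-projected point in the slave image via NCC matching.

    Sketch of the steps described above: extract an 11x11 template
    (t=5) around the master point and a 21x21 search region (s=10)
    around the predicted slave point, correlate, and shift the slave
    coordinates to the absolute correlation peak.  The region size
    implicitly limits the shift to s - t = 5 pixels per axis (the
    paper uses a 4-pixel limit).
    """
    xm, ym = (int(round(c)) for c in pt_master)
    xs, ys = (int(round(c)) for c in pt_slave)
    template = master_l[ym - t:ym + t + 1, xm - t:xm + t + 1]
    region = slave_l[ys - s:ys + s + 1, xs - s:xs + s + 1]
    ncc = match_template(region, template)   # normalized cross-correlation
    dy, dx = np.unravel_index(np.argmax(ncc), ncc.shape)
    # zero shift corresponds to index (s - t, s - t) in the valid grid
    return xs + dx - (s - t), ys + dy - (s - t)
```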
The global brightness correction improves the brightness level over the whole image. Local differences, for example caused by changing the positions of the illumination source and the camera, cannot be modelled in this way. To evaluate the meaningfulness and the usability of the global brightness correction, the standard deviation of the brightness differences is introduced. If the standard deviation exceeds a defined threshold, e.g. half of the average value of the brightness differences, the grey value difference between the images can no longer be approximated with a single value.
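Expressed as a small check on the statistics returned by the earlier sketch, with the threshold factor of one half used above:

```python
def global_correction_is_valid(mean_diff, std_diff, factor=0.5):
    """Accept the global correction only while the spread of the
    per-point brightness differences stays below a fraction of their
    mean, e.g. half of the average value as suggested above."""
    return std_diff <= factor * abs(mean_diff)
```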
3.4.4 Local brightness correction 
To handle non-uniform brightness differences between the images, a local brightness correction has to be applied. The grey value differences are calculated at each point of the images, as in the previous approach (chapter 3.4.3). In contrast to the global correction, these differences are not combined into one average value; instead, each difference is associated with the image coordinates of its point. This point cloud of brightness differences can be interpreted as a TIN. From these initial points, a brightness difference grid can be interpolated over the whole image (Figures 6 and 8). A first test of this procedure was carried out using a simple inverse distance weighting function based on the three closest points. The key factors for satisfactory results are the point density and the accuracy of the single brightness values. As in the global algorithm, the accuracy primarily depends on the correct orientation of the images and on the back-projection of the points; this problem was solved using the matching procedure mentioned before. To check the behaviour with sparse datasets, we used the Globe dataset with a point density of about one point per five square centimetres.
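A minimal sketch of such an inverse distance weighting interpolation, assuming SciPy is available and using the three closest points as in the test above, could be:

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_difference_grid(points, diffs, shape, k=3, eps=1e-9):
    """Interpolate a dense brightness-difference grid from sparse points.

    points is a hypothetical (N, 2) array of (x, y) image coordinates,
    diffs the per-point L differences; k=3 uses the three closest
    points, as in the first test described above.
    """
    tree = cKDTree(points)
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.column_stack([x.ravel(), y.ravel()])
    dist, idx = tree.query(grid, k=k)
    w = 1.0 / (dist + eps)                  # inverse-distance weights
    vals = (w * diffs[idx]).sum(axis=1) / w.sum(axis=1)
    return vals.reshape(shape)
```

The resulting grid can then be subtracted from the L channel of the slave image to equalize its brightness locally.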