
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Vol. XXXVII. Part B3b. Beijing 2008
3. use T_{1,i} · T_{i,i+1} as the initial value for computing T_{1,i+1}
In this solution the image sequence is processed frame by frame, starting from a reference frame, which (for simplicity) we will assume to be frame 1 in the sequence. Assume frames 2, ..., i are already registered to frame 1, which means that the transformation T_{1,i} between frame i and frame 1 is known. Within this framework, image i+1 is then registered to the first image.
This strategy also prevents registration errors from accumulating. Matching consecutive images (step 1) is easier (i.e. less error-prone) than matching arbitrary images, since the misalignment is limited. In step 3, this problem is avoided by providing an accurate approximate value to the matching process.
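The three steps can be sketched as a short loop. This is a minimal illustration, not the paper's implementation: transformations are assumed to be 3×3 homogeneous matrices, and `register(ref, img, T_init)` is a hypothetical matching routine that refines an initial transformation.

```python
import numpy as np

def stabilize_sequence(frames, register):
    """Register every frame to frame 1 (a sketch of the strategy above).

    `register(ref, img, T_init)` is a hypothetical matcher that refines
    an initial 3x3 transformation; the paper does not specify it here.
    Returns the list of transformations T_{1,i} mapping frame i to frame 1.
    """
    T = [np.eye(3)]                      # T_{1,1}: frame 1 maps to itself
    for i in range(1, len(frames)):
        # Step 1: match consecutive frames i and i+1 (small misalignment)
        T_step = register(frames[i - 1], frames[i], np.eye(3))
        # Step 2: compose with the known T_{1,i} to approximate T_{1,i+1}
        T_init = T[-1] @ T_step
        # Step 3: refine the registration of frame i+1 to frame 1,
        # starting from the composed initial value (limits error build-up)
        T.append(register(frames[0], frames[i], T_init))
    return T
```

With an accurate `T_init`, the final refinement against frame 1 only has to correct a small residual misalignment, which is what keeps errors from accumulating.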
3. SIMULATION OF THE DISTURBANCES 
In this section, the disturbances affecting our method, namely illumination variations and moving objects, are simulated. The amount of disturbance the method can tolerate gives a quantitative indication of its robustness.
3.1 Types of Disturbances
Within our stabilization framework as sketched in Section 2, any arbitrary image registration is treated as a consecutive image registration. In fact, however, the registration problem changes with the disturbances: moving objects and illumination variations. The disturbances grow with increasing temporal distance, from small and gradual to large and sudden. The number of pixels changing due to moving objects is in general lower than the total number of pixels that represent the moving objects, owing to the overlap of a moving object between different images. The illumination values in the overlapping area are almost the same, and the illumination variation between consecutive images is small. Therefore the effect of these disturbances on the process is very small, which results in a small MSE value. With increasing temporal distance, the amount of these disturbances increases.
A shrinking overlap area increases the number of pixels belonging to moving objects, although once there is no overlap at all, the number of moving pixels stabilizes. On the other hand, the number of moving objects may increase with a changing traffic situation, e.g. from free-flowing to congested. Objects entering the scene from outside also influence the number of moving pixels. The effect of local illumination variation increases, for example, with the appearance of clouds in one part of the image. Global illumination variations are not problematic, as they can be removed by using a normalized form, i.e. the difference of the image gray values from their mean.
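The normalized form mentioned above amounts to subtracting each image's mean gray value, which cancels any global additive brightness offset. A minimal sketch:

```python
import numpy as np

def normalize_global_illumination(img):
    """Remove a global (additive) illumination offset by subtracting the
    image's mean gray value, as described above."""
    img = img.astype(np.float64)
    return img - img.mean()
```

Two images that differ only by a constant brightness offset become identical after this normalization, so a global illumination change no longer contributes to the registration error.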
The change of illumination depends on the source of the light, 
object characteristics, viewing angle, and influence of other 
objects. Examples of such changes are shadows of fixed and moving objects, reflections of vehicle lights from the road surface, and illumination variations of road markings and vehicles caused by changes of the viewing angle due to the shaking of the helicopter, especially because of specular effects.
In fact, moving objects can be interpreted as local illumination variations that destroy the image structure of the occupied area. The energy function, which explicitly depends only on illumination values, cannot distinguish between these two types of disturbances. As a result, in our simulation, moving objects and small-region illumination variations are treated identically.
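Why an intensity-based energy cannot separate the two disturbance types can be seen with a plain mean-squared-error energy. The paper's exact energy function is not restated here; MSE is used as a representative intensity-only choice.

```python
import numpy as np

def mse_energy(ref, warped, mask=None):
    """Mean squared gray-value difference over the (optionally masked)
    overlap. Since it depends only on intensities, a moving object and a
    local illumination change of the same magnitude raise it equally."""
    diff = ref.astype(np.float64) - warped.astype(np.float64)
    if mask is not None:
        diff = diff[mask]
    return float(np.mean(diff ** 2))
```

A one-pixel "moving object" and a one-pixel illumination change of the same magnitude yield exactly the same energy, which is why the simulation may treat both disturbances alike.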
3.2 Simulation Setup
All simulated moving objects are rectangular, consisting of 100 × 22 pixels. The image size is 1392 × 1040 pixels. The positions of these objects are randomly distributed over the whole area of the reference image. To have maximum variation, the gray value is set to the maximum value of the intensity range, here 255, because our data sets have a mainly darker background. All these white simulated objects are moved by the object width, 100 pixels, in x-direction and by the object height, 22 pixels, in y-direction, to obtain a large amount of disturbance with very highly correlated object motion. The disturbances in this case destroy image content, as if a destructive structure such as a moving object or a specular reflection in water or windows had occurred. This is the worst case of moving-object simulation because of the highly correlated motion. If the objects move differently, or if the objects differ between the two images, this type of disturbance is less problematic than objects that move identically.
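The moving-object simulation above can be sketched as follows. The rectangle size (100 × 22), gray value (255), image size, and the shift by exactly one object width and height follow the text; the random placement with a seeded generator is an implementation choice of this sketch.

```python
import numpy as np

def simulate_moving_objects(ref, n_objects=10, h=22, w=100, gray=255, seed=0):
    """Paint white h x w rectangles at random positions into a copy of
    `ref`, and produce a second image in which every rectangle is shifted
    by exactly (w, h) pixels, i.e. by its own width and height, so the
    object motion is fully correlated (worst case described above)."""
    rng = np.random.default_rng(seed)
    img1, img2 = ref.copy(), ref.copy()
    H, W = ref.shape
    for _ in range(n_objects):
        y = rng.integers(0, H - 2 * h)   # leave room for the shift
        x = rng.integers(0, W - 2 * w)
        img1[y:y + h, x:x + w] = gray
        img2[y + h:y + 2 * h, x + w:x + 2 * w] = gray  # moved copy
    return img1, img2
```

Because every rectangle moves by the same vector, none of the white pixels of a rectangle overlap between the two images, maximizing the disturbance.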
To generate the illumination changes, the reference image is subdivided into four unequal regions. In each region, all gray values are disturbed by a fixed amount. The worst case of illumination variation is when the structure of the image is destroyed by the disturbances. For example, reducing the gray values in a dark image can cause more severe problems than increasing them, since in the latter case the image structure is essentially preserved, even though the amount of disturbance in the structure-preserving case is larger than in the destructive case.
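The regional illumination disturbance can be sketched as below. The paper only states "four unequal regions" disturbed by "a fixed amount" each; the split positions and the offset values here are illustrative assumptions.

```python
import numpy as np

def disturb_illumination(img, offsets=(-20, 10, -5, 15)):
    """Split the image into four unequal regions and add a fixed gray-value
    offset to each. Split positions and default offsets are assumptions;
    the text specifies only the general scheme."""
    H, W = img.shape
    out = img.astype(np.int32)
    ys, xs = H // 3, 2 * W // 3          # unequal quadrants (assumed split)
    regions = [(slice(0, ys), slice(0, xs)),
               (slice(0, ys), slice(xs, W)),
               (slice(ys, H), slice(0, xs)),
               (slice(ys, H), slice(xs, W))]
    for (ry, rx), off in zip(regions, offsets):
        out[ry, rx] += off
    return np.clip(out, 0, 255).astype(np.uint8)
```

Note the clipping to [0, 255]: a negative offset in an already dark region saturates at 0 and destroys structure, matching the "destructive" worst case described above.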
After the disturbances have been simulated, a camera motion is simulated: the reference image is transformed by applying the simulated camera motion parameters. Ideally, the estimated transformation parameter values should equal the parameter values applied to simulate the camera motion. The reason for simulating a transformation is to have true parameter values for validation. Transformation parameters could also be obtained by manually selecting corresponding points and then estimating the parameters, but exact manual positioning of correspondence points is error-prone due to the limited image resolution.
The total amount of disturbance should be calculated after removing the camera movement. Therefore the intentionally moved objects and the illumination variations are introduced before the motion is inserted. The advantage of this order is that additional radiometric errors are avoided. Consequently, the two images are identical until the disturbances are inserted in both of them and the reference image is transformed.
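The order of operations (disturbances first, motion last) can be sketched as below. The camera motion here is reduced to an integer translation so that no resampling is needed; this is an assumption for illustration, as the paper's motion model is more general.

```python
import numpy as np

def build_validation_pair(ref, make_disturbed_pair, shift=(5, -3)):
    """Build a validation pair in the order described above.

    `make_disturbed_pair(ref)` is a hypothetical routine returning two
    copies of the reference with moving-object and illumination
    disturbances already inserted. The known camera motion (here a simple
    integer translation (tx, ty) = `shift`, an assumption of this sketch)
    is applied only afterwards, and only to one copy, so no additional
    radiometric error is introduced. Returns (fixed, moving, ground_truth).
    """
    # Step 1: insert the disturbances into both copies of the reference
    img_fixed, img_moving = make_disturbed_pair(ref)
    # Step 2: apply the known camera motion to one copy only
    img_moving = np.roll(img_moving, shift=shift[::-1], axis=(0, 1))
    return img_fixed, img_moving, shift
```

The returned `shift` is the ground truth the registration should recover; the difference between estimated and true parameters then quantifies the accuracy under the inserted disturbance level.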
3.3 Boundary Calculation 
The percentage of disturbance is the total amount of absolute disturbance relative to the maximum possible total disturbance, i.e. the number of pixels multiplied by the maximum gray value of the pixel depth; for an 8-bit image, the pixel depth comprises 256 gray levels. The accuracy of the calculated parameters is quantified as a normalized parameter error and as a geometric error.
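The disturbance percentage defined above can be computed directly on two aligned images. The maximum gray value of 255 for 8-bit data follows the text; the function itself is a straightforward reading of the definition.

```python
import numpy as np

def disturbance_percentage(img_a, img_b, max_gray=255):
    """Total absolute gray-value disturbance between two aligned images,
    as a percentage of the maximum possible disturbance (number of pixels
    times the maximum gray value)."""
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return 100.0 * diff.sum() / (img_a.size * max_gray)
```

For example, fully saturating one row of a 10 × 10 zero image (10 of 100 pixels changed by the full 255) yields a disturbance of 10%.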
The parameters are normalized by dividing, for each parameter, its absolute error by its resolution. This value indicates how many times the error of each parameter exceeds its resolution. The resolution of each parameter is calculated by discarding the other parameters and obtaining the maximum one