ENERGY FUNCTION BEHAVIOR IN OPTIMIZATION BASED IMAGE 
SEQUENCE STABILIZATION IN PRESENCE OF MOVING OBJECTS 
F. Karimi Nejadasl a, *, B. G. H. Gorte a, M. Snellen a, S. P. Hoogendoorn b
a Delft Institute of Earth Observation and Space Systems, Delft University of Technology, Kluyverweg 1, 2629 HS,
Delft, The Netherlands - (F.KarimiNejadasl, B.G.H.Gorte, M.Snellen)@tudelft.nl
b Transport & Planning Department, Delft University of Technology, Stevinweg 1, 2628 CN, Delft, The Netherlands -
S.P.Hoogendoorn@tudelft.nl
Commission III, ICWG III/V
KEY WORDS: Registration, Transformation, Visualization, Orientation, Correlation, Image Sequences, Aerial 
ABSTRACT: 
In this paper, we address the registration of two images as an optimization problem within indicated bounds. Our contribution is to identify the situations in which the optimum value represents the real transformation parameters between the two images. Consider for example the Mean Square Error (MSE) as the energy function: ideally, a minimum in MSE corresponds to transformation parameters that represent the real transformation between the two images. To quantify the amount of disturbance that can be tolerated, disturbances are simulated for two separate cases: moving objects and illumination variation. The results of the simulation demonstrate the robustness of stabilizing image sequences by means of MSE optimization. Indeed, it is shown that even a large amount of disturbance does not cause the optimization method to fail to find the real solution. Fortunately, the maximal amount of disturbance allowed is larger than the amount of disturbance typically encountered in practice.
1. INTRODUCTION 
The collection of vehicle dynamics data from airborne image sequences is required for setting up and calibrating traffic flow models (Ossen and Hoogendoorn, 2005). The image sequence is collected by a camera mounted below a helicopter hovering over a highway. The images are not stable because of helicopter drift. Therefore, the camera motion should be separated from the vehicle motion. Toth (Toth and Grejner-Brzezinska, 2006) used GPS/INS for camera position estimation, but only for image sequences at a low frame rate. Feature-based solutions have to deal with a considerable amount of error caused by mismatching and moving objects. Kirchhof (Kirchhof and Stilla, 2006) and Medioni (Yuan et al., 2006) used RANSAC as a robust estimator to remove outliers. Although this method handles a considerable number of outliers robustly, it fails for images with low-frequency content because not enough matched points are available. This conflicts with the main requirements of our application, which are automation and robustness.
Consequently, we have proposed a method (Karimi Nejadasl et al., 2008) that uses explicit radiometric and implicit geometric information, even for pixels with a very low gray-value change with respect to their neighbors. The main idea is that there is one dominant motion between the two images, which can be formulated as a single transformation matrix that geometrically transforms the whole image to match the second image. As a result, the transformation parameters are the ones that provide the best match between the two images: the reference image and the candidate image that should be registered to it.
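With the MSE energy function mentioned in the abstract, this means the estimated parameters minimize the mean square difference between the reference image and the transformed candidate image. One way to write this compactly (the notation, with transformation T_theta and pixel set Omega, is a shorthand introduced here rather than taken from the paper) is

\hat{\theta} = \arg\min_{\theta}\; \frac{1}{|\Omega|} \sum_{\mathbf{x}\in\Omega} \left[ I_{\mathrm{cand}}\big(T_{\theta}(\mathbf{x})\big) - I_{\mathrm{ref}}(\mathbf{x}) \right]^{2},

where I_ref and I_cand denote the reference and candidate images, T_theta is the geometric transformation applied to the whole image, and the parameter vector theta is searched within the indicated bounds.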
Between consecutive images, moving objects and illumination variations cause only small differences. Between an arbitrary image and the reference image, however, these disturbances are larger than in the consecutive case. The amount of disturbance is influenced by ambient conditions, which can be subdivided into environmental, traffic and scene circumstances. A large amount of these disturbances could cause our optimization method to fail.
Before being able to apply our method to large data sets, it is necessary to determine how robust the method is, i.e. which disturbances are manageable.
We simulate two types of disturbances: illumination variations and moving objects. The transformation parameters are then estimated for each disturbed data set, and the errors in the estimated parameters and in the image coordinates are calculated. The amount of disturbance is increased until the energy value of the estimated parameters with a high geometric error is lower than the energy value of the real result. This situation corresponds to the actual failure of the method. The amount of disturbance is then lowered until a correct result is obtained. The amount of disturbance related to this result indicates the acceptance boundary.
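As an illustration only, the following Python sketch mimics this boundary search on synthetic data: the transformation is reduced to an integer image translation estimated by brute-force MSE minimization within fixed bounds, and the moving-object disturbance is simulated as a uniform bright patch of growing size. The variable names and the disturbance model are illustrative assumptions, not the implementation used in this paper.

import numpy as np

rng = np.random.default_rng(0)

def energy(ref, cand, shift, margin=10):
    # MSE energy of an integer translation (dy, dx) within +/- margin pixels.
    dy, dx = shift
    win_ref = ref[margin:-margin, margin:-margin]
    win_cand = cand[margin + dy:cand.shape[0] - margin + dy,
                    margin + dx:cand.shape[1] - margin + dx]
    return np.mean((win_ref - win_cand) ** 2)

def estimate_shift(ref, cand, margin=10):
    # Brute-force search of the translation minimizing the energy within the bounds.
    shifts = [(dy, dx) for dy in range(-margin, margin + 1)
                       for dx in range(-margin, margin + 1)]
    return min(shifts, key=lambda s: energy(ref, cand, s, margin))

# Synthetic reference image and a candidate shifted by the "real" camera motion.
ref = rng.random((120, 120))
true_shift = (3, -2)
cand = np.roll(ref, true_shift, axis=(0, 1))

# Increase the disturbance (a bright patch standing in for a moving object) until
# the energy of a wrong estimate drops below the energy of the real solution.
boundary = None
for size in range(0, 110, 10):                    # patch size = disturbance level
    disturbed = cand.copy()
    disturbed[10:10 + size, 10:10 + size] = 1.0   # simulated moving object
    est = estimate_shift(ref, disturbed)
    geom_error = np.hypot(est[0] - true_shift[0], est[1] - true_shift[1])
    failed = geom_error > 1 and (energy(ref, disturbed, est)
                                 < energy(ref, disturbed, true_shift))
    if failed:
        break                                     # actual failure of the method
    boundary = size                               # largest disturbance still handled
print("acceptance boundary (patch size in pixels):", boundary)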
In Section 2, the image-sequence-stabilization framework is introduced. The procedure for finding the boundary of our method is described in Section 3. Results and conclusions are presented in Sections 4 and 5, respectively.
* Corresponding author.