Full text: Papers accepted on the basis of peer-reviewed abstracts (Part B)

In: Wagner W., Székely, B. (eds.): ISPRS TC VII Symposium - 100 Years ISPRS, Vienna, Austria, July 5-7, 2010, IAPRS, Vol. XXXVIII, Part 7B 
INTRODUCTION 
A well-known phase retrieval algorithm commonly used for 
optical beam shaping is the Gerchberg and Saxton (Gerchberg 
and Saxton, 1972) approach in which knowing the magnitude 
distribution of an image in the spatial and the spectral domain 
enables the recovery of the phase distributions. Later work by 
Misell (Misell, 1973a; Misell, 1973b; Misell, 1973c) extended 
the algorithm for two arbitrary input and output planes along 
the optical path. These methods are proven to converge to a 
phase filter with a minimal mean square error (Fienup, 1978; 
Fienup, 1982). 
The concept presented in the Gerchberg-Saxton paper is simple. 
One starts with an arbitrary phase-only filter in the object 
domain that multiplies the input object (the original image). After a 
Fourier transform one obtains a Fourier-domain image and 
imposes the required Fourier magnitude while maintaining the 
Fourier phase. An inverse Fourier transform brings us back to 
the object domain. Since we demand a phase-only signal, we 
impose the intensity of the input object in this plane. Next, one 
again calculates the Fourier transform, returns to the Fourier 
domain, and iterates the process until an acceptable convergence is 
obtained. 
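The loop described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name, the random initial phase, and the fixed iteration count are assumptions made for the example.

```python
import numpy as np

def gerchberg_saxton(object_mag, fourier_mag, n_iter=200):
    """Sketch of the Gerchberg-Saxton phase-retrieval iteration.

    object_mag:  known magnitude distribution in the object domain
    fourier_mag: known magnitude distribution in the Fourier domain
    Returns the recovered object-domain phase.
    """
    # Start with an arbitrary (here: random) phase-only filter.
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, object_mag.shape)
    field = object_mag * np.exp(1j * phase)
    for _ in range(n_iter):
        # Fourier domain: impose the required magnitude, keep the phase.
        spectrum = np.fft.fft2(field)
        spectrum = fourier_mag * np.exp(1j * np.angle(spectrum))
        # Object domain: impose the known object magnitude, keep the phase.
        field = np.fft.ifft2(spectrum)
        field = object_mag * np.exp(1j * np.angle(field))
    return np.angle(field)
```

In practice the loop is terminated when the magnitude error between consecutive iterations falls below a threshold rather than after a fixed number of iterations.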
Gerchberg (Gerchberg, 1974) and Papoulis (Papoulis, 1975) 
suggested the use of this method for super resolution. However, 
both presented relatively simple test cases and assumed the 
properties of all iterations to be identical (except when noise 
reduction was addressed). An improved Gerchberg-Papoulis 
algorithm was recently suggested by Gur and Zalevsky (Gur 
and Zalevsky, 2007a; Gur and Zalevsky, 2007b); however, it 
supplies good results only if the blurred image is actually a 
lower-resolution version of the required image. Similar 
approaches providing image resolution enhancement by proper 
digital image processing interpolation and learning-based 
algorithms can, for instance, be seen in Refs. (Gevrekci and 
Gunturk, 2000; Nguyen and Milanfar, 2000; Joshi et al., 2005). 
In this paper, the authors propose a modification of the 
algorithm presented in Refs. (Gur and Zalevsky, 2007a; Gur and 
Zalevsky, 2007b). In the new algorithm, instead of multiplexing 
two images, one at high resolution and the other at low 
resolution, the authors propose a general approach capable of 
multiplexing a plurality of low and high resolution images. In 
the proposed approach, the multiplexed images do not have to 
relate to different regions of the field of view but may instead be 
images captured at different spectral wavelengths. In 
this paper, the authors validate the generalized approach by 
experiments including both images captured at different spatial 
resolutions from an airborne camera as well as images captured by 
a multispectral sensor. 
Section 2 presents the proposed approach. Experimental 
results are presented in Section 3. The paper is concluded in 
Section 4. 
THE PROPOSED ALGORITHM 
In this paper the authors address the following situation: We 
obtain a plurality of low resolution images which can be from 
different regions in the field of view with lower resolution, or a 
set of images captured at different wavelengths by a 
multispectral sensor. In addition to the low resolution input images, 
we obtain a plurality of high resolution images which can be 
from other regions of the field of view (that may be at different 
resolution levels) or they may be spectral images captured at 
shorter wavelengths and thus have higher resolution. Our aim is 
to reconstruct the higher spatial frequencies by a dynamic 
iterative procedure. 
The flow chart of the proposed algorithm is described in Figure 
1. The starting point of the basic algorithm assumes that we 
possess a plurality of High Resolution (HR) images and a 
plurality of Low Resolution (LR) images. Each image relates to 
a different region of the field of view, or we possess several 
images coming from sensors of different wavelengths while 
each image contains the full field of view. An overall image is 
generated from the plurality of given images. The generalized 
image is vertically divided into N regions. The reconstructed 
image of the full field of view is either an image combining all 
of the N regions of the field of view together, or a multiplexing 
of several spectral images captured at different wavelengths 
when a different region of the field of view is taken from 
different wavelengths. In both cases, the first iteration of the 
newly generated image combines some HR and some LR image 
regions. Then, a Fourier transform is performed. The Fourier 
image obtained contains data from all regions of the new image. 
Since the lower frequencies are present in the LR image, we 
impose the lower-frequency constraints from the Fourier 
transform of the original LR images. Next, an inverse Fourier 
transform is performed. At this stage we replace the various 
regions of the field of view that were related to HR images by 
the known a priori HR regions, and we keep the rest of the 
regions. We again perform a Fourier transform to impose the 
constraints on lower frequencies and so on. The basic algorithm 
converges when the difference between images obtained in 
consecutive iterations is below a certain predefined threshold. 
At the final stage, we take the HR reconstructions from regions 
that were originally imposed by the LR images. The strength of 
this algorithm with respect to algorithms based directly on 
Misell's work or on Gerchberg and Papoulis' work lies mainly in 
its dynamic properties. We do not impose all the a priori 
knowledge at the beginning, but rather start with some of the 
known constraints and increase the applied constraints 
according to the improvement in the mean-square error from 
one iteration to the next.
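The basic iteration described in this section can be sketched as below. This is a simplified, hedged reading of the flow chart, not the authors' implementation: the function name, the boolean-mask representation of the HR regions and of the reliable low frequencies, and the convergence test are all assumptions made for illustration, and the dynamic tightening of the constraints is omitted.

```python
import numpy as np

def iterative_sr(hr_data, lr_image, hr_mask, lowfreq_mask,
                 tol=1e-6, max_iter=500):
    """Sketch of the basic HR/LR constraint iteration.

    hr_data:      a priori known HR pixel values (same shape as lr_image;
                  only meaningful where hr_mask is True)
    lr_image:     the (upsampled) LR image of the full field of view
    hr_mask:      boolean mask marking the regions with known HR content
    lowfreq_mask: boolean mask marking the low spatial frequencies taken
                  as reliable from the LR input (unshifted FFT layout)
    """
    # The low-frequency constraint comes from the FT of the LR input.
    lr_spectrum = np.fft.fft2(lr_image)
    # First iteration: combine the HR regions with LR data elsewhere.
    img = np.where(hr_mask, hr_data, lr_image)
    for _ in range(max_iter):
        # Fourier domain: impose the low-frequency constraint.
        spectrum = np.fft.fft2(img)
        spectrum[lowfreq_mask] = lr_spectrum[lowfreq_mask]
        new = np.fft.ifft2(spectrum).real
        # Object domain: restore the known HR regions, keep the rest.
        new = np.where(hr_mask, hr_data, new)
        # Converge when consecutive iterates differ below a threshold.
        if np.max(np.abs(new - img)) < tol:
            img = new
            break
        img = new
    # The reconstruction of interest is taken from the former LR regions.
    return img
```

Each pass propagates high-frequency content from the known HR regions into the LR regions through the shared Fourier spectrum, while the low frequencies stay pinned to the measured LR data.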