Close-range imaging, long-range vision

Considerable work has addressed the creation of digital, realistic virtual city models from aerial or ground imagery, covering more and more extensive areas [Haala & Brenner]. The group of Gruen et al. has combined the generation of terrain and building models to develop virtual models covering several square kilometers at sub-meter accuracy. Mesh models of cities have also been generated [… 1997], for example of Adelaide. A notable effort is the one performed by the group of Teller, capturing imagery using a specially designed sensor [Antone & Teller] mounted on a platform roaming their campus. It is able to estimate the position and orientation of the captured imagery in a controlled manner; models of interiors have been generated in a similar way.
3. SCENE SIMILARITY ANALYSIS

Our approach requires an efficient description of image content, to allow its comparison to a synthetic view of the scene, and an efficient modeling of object relations and their specific properties to support the processing and the recovery of the position of the sensor in a scale-independent manner. Spatial relation concepts of this kind have also been studied in the spatial reasoning community (see e.g. […ofer, 2000]).
Bridging these two directions, moving from pixels to objects, in this paper we extend our recent efforts on metrics for scene similarity. Specifically, we employ measures of similarity for object properties like shape, and for relations between them like topology, orientation, and distance, together with a weight coefficient for each of these measures. More specifically, the overall similarity $S_{sim}$ is defined as:
$$S_{sim} = w_{shp} S_{shp} + w_{top} S_{top} + w_{or} S_{or} + w_{dist} S_{dist} \qquad (1)$$
where the various $S$ terms are similarity coefficients (e.g. $S_{top}$ for topology, $S_{shp}$ for shape), and the $w$ terms are the corresponding weight coefficients.
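To make the combination concrete, here is a minimal sketch (Python; function and variable names are ours, and the component coefficients are assumed to be precomputed and normalized to [0, 1]):

```python
# Sketch of Eq. (1): a weighted sum of similarity coefficients.
# The computation of the individual coefficients (shape, topology,
# orientation, distance) is not reproduced here.

def combined_similarity(similarities, weights):
    """Return S_sim = sum_k w_k * S_k over the property/relation types."""
    assert similarities.keys() == weights.keys()
    return sum(weights[k] * similarities[k] for k in similarities)

# Example: scoring one candidate configuration against the query scene.
s = {"shp": 0.80, "top": 0.90, "or": 0.75, "dist": 0.60}  # S coefficients
w = {"shp": 0.25, "top": 0.25, "or": 0.25, "dist": 0.25}  # weights
print(combined_similarity(s, w))  # 0.7625
```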
In previous work we have developed metrics for the evaluation 
of shape, orientation, and distance similarity that are 
independent of scale and orientation variations to support image 
queries, and have demonstrated their function in geospatial 
queries [Stefanidis et al., 2002].
Figure 2: Objects identified in an input ground-level image 
(bottom) and the viewing azimuth of this image as it 
is identified in the corresponding VR model (top). 
Our objective in this paper is to extend these models to support 
the analysis of the content of ground-level imagery, and its 
comparison to object relations as they are modeled in the 
corresponding virtual model database. This results in a novel 
matching technique that considers object relations to allow us to 
recover image orientation given an approximate sensor location. 
This entails comparing the relations of objects as they are 
depicted in a ground-level image (Fig. 2 bottom) to all potential 
object combinations as they can be formulated in the database, 
in order to identify the point of view as the line that maximizes the
similarity metric (long line pointing away from the small circle 
in Fig. 2 top). This is an essential capability for modern sensor 
deployment, where GPS information is easily available to 
provide sensor location information, while orientation information is
less readily available. Our matching technique will provide a
“heads-up” capability, comparing potential views from a 
specific location to the incoming imagery. 
3.1 Comparing ground-level imagery to the content of a 
VR database. 
The outline of our approach to compare ground-level imagery to 
the content of a VR database is shown in Fig. 3. This 
corresponds to the approximate orientation box of the flowchart 
of Fig. 1. As mentioned, input information includes the 
approximate coordinates of the sensor as acquired by the GPS 
system, the image (video frame from the sensor), and the 3D
model database of our area of interest. 
The first step is the creation of a synthetic panoramic image
using sensor coordinates and the 3D model. It is a 360° view of 
the VR model around the sensor location, similar to the view 
captured by an observer rotating around his/her location. 
Algorithmically the panorama image is created using a 
cylindrical projection of the objects, centered at the approximate
coordinates acquired with the GPS. This synthetic panoramic 
image can have any user-defined resolution. We commonly 
define it to have a width of 3600 pixels so that every pixel 
corresponds to a tenth of a degree. 
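As a minimal sketch of this step (Python/NumPy; names are ours, and the VR model is simplified to labeled 3D vertices, whereas a real renderer would rasterize complete faces with hidden-surface removal):

```python
import numpy as np

def cylindrical_panorama(vertices, labels, sensor_xyz,
                         width=3600, height=600, vfov_deg=60.0):
    """Project labeled model vertices onto a cylinder centered at the
    sensor position. width=3600 gives 0.1 degree per column."""
    pano = np.zeros((height, width), dtype=np.int32)  # 0 = background
    d = vertices - sensor_xyz                         # rays from the sensor
    az = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    rng = np.hypot(d[:, 0], d[:, 1])                  # horizontal range
    el = np.degrees(np.arctan2(d[:, 2], rng))         # elevation angle
    col = np.clip((az / 360.0 * width).astype(int), 0, width - 1)
    row = np.clip(((0.5 - el / vfov_deg) * height).astype(int),
                  0, height - 1)
    pano[row, col] = labels                           # mark object pixels
    return pano
```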
Once the panoramic view is produced we proceed with 
identifying approximate object outlines in both the incoming 
and panoramic images. For our subsequent analysis it is 
adequate to use object blobs, with the term blob used to indicate 
the non-precise delineation of an object in an image. They can 
be readily extracted from images through edge detection, 
without having to resort to computationally-expensive precise 
delineation algorithms, improving the overall efficiency of the process.
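One way to sketch this blob extraction (using OpenCV; the thresholds and the bounding-box representation are illustrative choices, not values from the paper):

```python
import cv2
import numpy as np

def extract_blobs(gray, min_area=500):
    """Return coarse object blobs as bounding boxes from a grayscale image."""
    edges = cv2.Canny(gray, 50, 150)                      # edge detection
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))  # close small gaps
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only sizable regions; precise delineation is deliberately avoided.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```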
  
  
Panorama creation using GPS coordinates and synthetic camera model → Object detection in the image → Creation of all possible object combinations → Detection of winning configuration using the two similarity metrics → Retrieval of approximate orientation

Figure 3: Flowchart of our approach to compare a ground-level
image to the content of a VR database
Following this identification of approximate object outlines we 
proceed with identifying in the panoramic image that 
configuration of objects that best resembles the configuration of 
objects in the input imagery. Assuming the input scene contains n objects and the panoramic view includes m objects, this amounts to comparing the n-object configuration of the input image against every combination of n objects drawn from the m panorama objects, in order to detect the winning configuration.
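A sketch of this combinatorial search (Python; names are hypothetical, relation_similarity() stands in for the evaluation of Eq. (1) on a candidate correspondence, and objects are assumed to be listed left to right so that combinations preserve their angular order):

```python
from itertools import combinations

def best_configuration(image_objects, panorama_objects, relation_similarity):
    """Score every n-out-of-m combination of panorama objects against the
    n-object configuration of the input image; return the best one."""
    n = len(image_objects)
    best_combo, best_score = None, -1.0
    for combo in combinations(panorama_objects, n):
        score = relation_similarity(image_objects, combo)
        if score > best_score:
            best_combo, best_score = combo, score
    return best_combo, best_score
```

With a 3600-pixel panorama the winning objects translate directly to a viewing direction, since each column corresponds to 0.1° of azimuth.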
 
	        