SUMMARIZING THE CONTENT OF MOTION IMAGERY DATASETS 
Anthony Stefanidis, Peggy Agouris, Panos Partsinevelos 
Dept. of Spatial Information Science and Engineering 
National Center for Geographic Information and Analysis 
University of Maine 
348 Boardman Hall 
Orono, ME 04469-5711, USA 
{tony, peggy, panos}@spatial.maine.edu
Commission V, WG: V/5 
KEY WORDS: Video analysis, neural networks, snakes, metadata. 
ABSTRACT: 
In this paper we present a framework and algorithms for the summarization of motion imagery content to model geospatial 
information depicted in it. More specifically, we proceed towards modeling this information by detecting breakpoints in the 
trajectories of objects captured in a video dataset, and changes in the outlines of these objects. Our approach to motion imagery 
indexing and queries is based on the concept of spatiotemporal lifelines, defined as the trajectories of objects in space and time 
during a motion imagery feed. Here we introduce the spatiotemporal helix as a model of spatiotemporal lifelines, providing explicit 
yet concise descriptions of object behavior. The helix model is highly suitable for spatiotemporal analysis and offers a powerful 
abstraction mechanism for creating brief summaries of motion
imagery datasets. These summaries can in turn be exploited to support
content-based motion imagery retrieval. 
1. INTRODUCTION 
The transition from static to spatiotemporal analysis is 
becoming increasingly evident in the geospatial community. 
Regarding image analysis, this signifies an evolution from single
images to collections of time-varying imagery. Time-varying 
imagery collections may range from continuous video segments 
to sequences of static images that differ by seconds, minutes, or
even days, depending on the temporal resolution of the event 
that they are used to describe. We use the term motion imagery 
(MI) to refer to these multitemporal image datasets. The 
processing and analysis of spatiotemporal datasets introduces
interesting data handling challenges, mostly associated with
the large volumes of these datasets, the corresponding
processing times, and the diverse nature of the information
contained in them.
The efficient modeling of spatiotemporal events is a major 
research challenge and an important step towards the analysis 
and management of large spatiotemporal datasets. Relevant 
research includes the work of [Smith & Kanade, 1995] on the 
analysis of visual and speech properties to construct “skim” 
video synopses by merging select segments of the original 
video. The extraction of select key frames for the generation of 
video summaries has also been addressed in [Yeung & Yeo, 
1997]. [Pfoser & Theodoridis, 2000] provide a spatiotemporal
synthetic dataset generator to simulate movement trajectories
and analyze novel indexing schemes for moving points using
tree structures. The indexing and querying of moving
points is also addressed in [Vazirgiannis & Wolfson, 2001], 
while [Sistla et al., 1997; Wolfson et al., 1999] discuss the use 
of future temporal logic for modeling and querying moving 
objects. Work on indexing animated objects is reported in 
[Kollios et al., 2001], while [Tao & Papadias, 2001] propose a 
framework for indexing and querying spatiotemporal data by 
constructing new tree structures. 
In [Stefanidis et al., 2001] we introduced a general framework 
for the summarization of spatiotemporal trajectories considering 
point datasets (a point changing its position over time). This 
was in accordance with the above-mentioned works, where
moving objects are reduced to a point representation, ignoring
the spatial extent of objects and the variations of their
outlines. In this paper we move beyond this simplification, 
extending the framework introduced in [Stefanidis et al., 2001] 
to accommodate the spatial extent of objects. This allows us to 
consider not only the movement but also the deformation of 
spatial objects, introducing a more comprehensive 
spatiotemporal model than the currently existing ones. At the 
core of our work is the concept of the spatiotemporal helix, a 
novel spatiotemporal object model. It allows us to model object 
movements and deformations, supporting complex 
spatiotemporal analysis. 
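As a preview of what such a model can look like in practice, the
minimal Python sketch below pairs a generalized trajectory with a
record of outline deformations. The class and field names are our
own illustrative assumptions, not the formal definition of the
helix, which follows in Section 2.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SpineNode:
        """Object center position at a trajectory breakpoint (time t)."""
        x: float
        y: float
        t: float

    @dataclass
    class OutlineChange:
        """A recorded deformation of the object outline at time t."""
        t: float
        magnitude: float  # e.g. relative change in outline area (assumption)

    @dataclass
    class SpatiotemporalHelix:
        """Concise summary of one object's movement and deformation
        during a motion imagery feed (structure is illustrative)."""
        object_id: str
        spine: List[SpineNode] = field(default_factory=list)
        outline_changes: List[OutlineChange] = field(default_factory=list)

Keeping movement and deformation as separate lists mirrors the
distinction drawn above between how an object moves and how its
outline changes.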
This is a key development to support the analysis of 
spatiotemporal phenomena that have certain spatial extent and 
change their position and/or extent over time. Such phenomena 
can be slowly moving (e.g. urbanization trends depicted in a 
series of monthly satellite images) or rapidly evolving (e.g. 
hurricanes depicted in hourly or daily datasets), and they may
take place over a fixed area (e.g. flooding) or constantly
change their location (e.g. a moving fire front).
The rest of the paper is organized as follows: In Section 2 we 
introduce the concept of the spatiotemporal helix as a concise 
representation of spatiotemporal events. The automated 
generation of spatiotemporal helixes makes use of two
techniques we have developed: a technique based on
self-organizing maps for the generalization of point trajectories
(Section 3), and differential snakes, an extension of
deformable contour models for outline comparison (Section 4).
Section 5 addresses spatiotemporal analysis issues using
helixes, with final comments following in Section 6.
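The generalization technique of Section 3 is based on self-organizing
maps and is not reproduced here; purely to make the notion of a
trajectory breakpoint concrete, the Python sketch below keeps only
those samples where the object's heading turns sharply. The
turning-angle criterion, the 30-degree threshold, and all names are
assumptions made for illustration.

    import math
    from typing import List, Tuple

    def trajectory_breakpoints(points: List[Tuple[float, float, float]],
                               angle_thresh_deg: float = 30.0) -> List[int]:
        """Indices of (x, y, t) samples kept as breakpoints: the endpoints
        plus every point where the heading turns by more than
        angle_thresh_deg degrees (an illustrative criterion only)."""
        if len(points) < 3:
            return list(range(len(points)))
        kept = [0]                                 # always keep the first sample
        for i in range(1, len(points) - 1):
            (x0, y0, _), (x1, y1, _), (x2, y2, _) = points[i - 1], points[i], points[i + 1]
            h_in = math.atan2(y1 - y0, x1 - x0)    # incoming heading
            h_out = math.atan2(y2 - y1, x2 - x1)   # outgoing heading
            turn = abs(math.degrees(h_out - h_in))
            turn = min(turn, 360.0 - turn)         # wrap to [0, 180] degrees
            if turn > angle_thresh_deg:
                kept.append(i)
        kept.append(len(points) - 1)               # always keep the last sample
        return kept

For example, trajectory_breakpoints([(0, 0, 0), (1, 0, 1), (2, 0, 2),
(2, 1, 3)]) returns [0, 2, 3], keeping the 90-degree corner along with
the two endpoints.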