hybridization: it proceeds by comparing nodes and prongs to
establish links between helix pairs (Fig. 5). This involves the
comparison of their corresponding records, as presented in
Section 2. Similarities are identified in instances where the
corresponding records (e.g. azimuth values or positions) differ
by less than an acceptable threshold (see e.g. the buffer zones
in Fig. 6).
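For illustration, the following is a minimal sketch (in Python; the record fields and the threshold values are assumptions made for this example, not definitions taken from the paper) of how two node records could be tested for a link using position and rotation buffers of the kind shown in Fig. 6:

from dataclasses import dataclass
import math

@dataclass
class NodeRecord:
    # Hypothetical record layout; the actual fields are defined in Section 2.
    t: float        # timestamp
    x: float        # position, x coordinate
    y: float        # position, y coordinate
    azimuth: float  # orientation in degrees, in [0, 360)

def records_link(a: NodeRecord, b: NodeRecord,
                 pos_buffer: float = 5.0,
                 rot_buffer: float = 15.0) -> bool:
    # Positions match when their Euclidean distance falls inside the
    # position buffer.
    dist = math.hypot(a.x - b.x, a.y - b.y)
    # Azimuths match when their angular difference, wrapped around
    # 360 degrees, falls inside the rotation buffer.
    diff = abs(a.azimuth - b.azimuth) % 360.0
    diff = min(diff, 360.0 - diff)
    return dist <= pos_buffer and diff <= rot_buffer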
Figure 6: The effect of buffering (a buffer in position and a
buffer in rotation) in spatiotemporal helix comparison when
comparing two spatiotemporal helixes (one represented by a
continuous line, the other by a dashed line).
A similarity metric S is then provided as a coincidence
percentage:

S = (duration of coincidence) / (duration of event),    (3)
where the duration of coincidence is the aggregate time during
which the two events displayed similar properties (e.g. both
were pointing North). To support this comparison, the range of
values of each property may be tessellated into a few subsets.
For example, azimuth information may be represented as 4 (N, W,
S, and E) or even 8 (adding NE, SE, SW, NW) discrete directions,
as opposed to 360 discrete degrees.
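As a concrete sketch of this metric for azimuth (an illustration only; the paper does not prescribe an algorithm, and the sketch assumes both events are sampled at a common, regular time step so that durations can be counted in samples):

def similarity(azimuths_a, azimuths_b, sectors: int = 8) -> float:
    # Coincidence percentage S = (duration of coincidence) / (duration
    # of event), computed over per-sample azimuth readings (degrees).
    def sector(azimuth: float) -> int:
        # Tessellate [0, 360) into `sectors` equal bins centred on the
        # main directions, so e.g. 355 and 5 degrees both map to North.
        width = 360.0 / sectors
        return int(((azimuth + width / 2.0) % 360.0) // width)

    duration_of_event = min(len(azimuths_a), len(azimuths_b))
    if duration_of_event == 0:
        return 0.0
    duration_of_coincidence = sum(
        sector(a) == sector(b) for a, b in zip(azimuths_a, azimuths_b)
    )
    return duration_of_coincidence / duration_of_event

# Example: with 4 sectors, 350 and 10 degrees both quantize to North,
# so the first two samples coincide and S = 2/3.
assert abs(similarity([350, 10, 180], [10, 355, 90], sectors=4) - 2/3) < 1e-9

Quantizing to a few discrete directions makes the coincidence test robust to small azimuth differences, in the same spirit as the buffer zones of Fig. 6.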
6. COMMENTS 
In this paper we introduced the concept of the spatiotemporal
helix as a model of spatiotemporal events. It allows us to
efficiently model changes in the location and extent of a
phenomenon, and it supports the comparison of events to identify
similarities and complex relationships among them. This
comparison of spatiotemporal helixes allows us to attach
meaningful qualitative metrics to what have, up to this point,
been considered quantitative queries. While our motivation is
the analysis of events as they are captured in motion imagery
datasets, the concept of the spatiotemporal helix can be applied
to any type of multitemporal dataset with spatially registered
information (e.g. land use patterns as depicted in a
multitemporal sequence of maps, disease spread as recorded in a
GIS, etc.).
ACKNOWLEDGEMENTS 
This work was supported by the National Science Foundation
under grant numbers DG-9983445 and IIS-0121269. We would
like to thank Mr. Sotiris Gyftakis for providing Figure 3.