• Calculate $D_{n,2}(k)$ pseudo-spectrums for various $n$, for example $n = 2, 4, 8, 16$, in each pixel of the image on frame number $k$.
• If a signal exists in some pixels, then $|D_{n,2}(k)|$ in those pixels will be greater than zero. It can be either a signal from an object or some noise in the image sequence. To make the algorithm more robust, we should filter the noise with a threshold. This threshold can be found adaptively on each frame using the methods described above.
• Divide the whole accumulator image into many square parts using a grid. Mark each small square as moving if its value is greater than the threshold, and as not moving (background) otherwise. Let us call these small image squares moving image elements $w_1 \ldots w_m$ (a minimal sketch of this per-frame detection step follows the list).
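As a concrete illustration, the following Python sketch puts these three steps together. It is a minimal sketch under stated assumptions: the hypothetical callable pseudo_spectrum(n, k) stands for the $D_{n,2}(k)$ computation defined earlier in the paper, and the mean-plus-three-sigma rule is only a placeholder for the adaptive thresholding methods described above.

import numpy as np

def detect_moving_elements(pseudo_spectrum, k, cell=8):
    # pseudo_spectrum(n, k) is assumed to return the |D_{n,2}(k)| magnitude
    # image for memory length n on frame k; its actual form is defined
    # earlier in the paper and is not reproduced here.
    moving = {}
    for n in (2, 4, 8, 16):
        acc = np.asarray(pseudo_spectrum(n, k), dtype=float)  # accumulator image
        # Placeholder adaptive threshold; the paper's own adaptive method
        # is described in an earlier section.
        thr = acc.mean() + 3.0 * acc.std()
        h, w = acc.shape
        cells = []
        for y in range(0, h - cell + 1, cell):
            for x in range(0, w - cell + 1, cell):
                if acc[y:y + cell, x:x + cell].mean() > thr:
                    cells.append((y // cell, x // cell))  # a moving image element
        moving[n] = cells
    return moving  # moving image elements per memory length n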
A moving object is created from moving image elements $w_1 \ldots w_m$. Moving elements exist for all values of $n$ (or do not exist if there are no moving objects in the video sequence on the current frame). Obviously, pseudospectrums with longer memory are more robust to noise, but they take longer to react when a signal starts being received in some pixels. Pseudospectrums with shorter memory react to a pixel signal much faster, but they react to noise as well as to a real signal. So if an element is a moving one, its signal should exist in most of the faster pseudospectrums; and if it belongs to a new or disappeared object, its signal should exist in most of the slower pseudospectrums. Suppose that on the previous frame we have a set of moving objects $A_1 \ldots A_{s1}$ and a set of new or disappeared objects $A_1 \ldots A_{s2}$, and on the current frame a set of moving image elements $w_1 \ldots w_{m1}$ and a set of elements belonging to new or disappeared objects $w_1 \ldots w_{m2}$. We must somehow associate all objects with their new regions.
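A minimal sketch of this fast/slow voting, assuming the per-$n$ dictionary returned by the detection sketch above; the majority rule used here is our reading of the text, not a formula stated in the paper.

def classify_elements(moving, fast=(2, 4), slow=(8, 16)):
    # Majority vote across memory lengths: elements seen in most of the
    # faster pseudospectrums are moving elements; elements seen only in
    # most of the slower ones are attributed to new or disappeared objects.
    candidates = set().union(*(set(moving[n]) for n in fast + slow))
    moving_elems, new_or_gone_elems = [], []
    for c in candidates:
        fast_votes = sum(c in moving[n] for n in fast)
        slow_votes = sum(c in moving[n] for n in slow)
        if fast_votes * 2 > len(fast):
            moving_elems.append(c)        # w_1 ... w_m1
        elif slow_votes * 2 > len(slow):
            new_or_gone_elems.append(c)   # w_1 ... w_m2
    return moving_elems, new_or_gone_elems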
Consider the hypotheses formed when associating moving objects with moving elements (a schematic implementation of the four cases follows the list):
• No object associates with the moving element. This moving element belongs to a new object.
• No moving element associates with the object. This object is treated as lost on this frame; it may be found again in the future.
• Several moving elements are associated with the object. This object is treated as found on this frame, and a new position is calculated for it.
• Several objects are associated with one moving element. This case is called a “collision”. It is the most difficult case and should be treated very carefully; additional algorithms are needed to resolve the conflict.
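A schematic Python implementation of these four cases, assuming a hypothetical overlaps table as the association measure (the paper does not specify the measure in this section):

def associate(objects, elements, overlaps):
    # overlaps[(obj, elem)] is assumed to be True when object obj overlaps
    # moving element elem, e.g. via spatial overlap with the object's
    # predicted position; the actual measure is an assumption here.
    new_object_elems, lost, found, collisions = [], [], {}, []
    for e in elements:
        owners = [o for o in objects if overlaps.get((o, e))]
        if not owners:
            new_object_elems.append(e)      # case 1: element of a new object
        elif len(owners) > 1:
            collisions.append((e, owners))  # case 4: "collision", resolved separately
    for o in objects:
        matched = [e for e in elements if overlaps.get((o, e))]
        if matched:
            found[o] = matched              # case 3: new position from these elements
        else:
            lost.append(o)                  # case 2: object lost on this frame
    return new_object_elems, lost, found, collisions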
As a result, on each frame we obtain a set of moving objects and a set of new or disappeared objects, each with its own unique ID.
5 EXPERIMENTAL RESULTS 
The described algorithms were tested on private video databases and on public domain video databases such as PETS (PETS video database, n.d.) and ETISEO (ETISEO video database, n.d.). A typical screenshot of the object tracking visualization is presented in Figure 4.
We created an algorithm analysis and testing block based on comparing automatic object detection and tracking results with the results of manual object marking. Performance is measured in FPS (frames processed per second). Detection quality is estimated in terms of “precision” and “recall”.
“Precision” is the percentage ratio of real (human-marked) objects traced by the algorithm to the total number of objects traced by the algorithm; simply put, 100% minus precision is the percentage of outliers produced by the algorithm. “Recall” is the percentage ratio of human-marked objects found by the algorithm to the total number of human-marked objects in the sequence, i.e. 100% minus recall is the percentage of real objects that the algorithm failed to find.
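Under these definitions, the two measures can be computed as follows (a minimal sketch; the variable names are ours):

def precision_recall(traced, matched_real, total_real):
    # traced       -- number of objects reported by the tracker
    # matched_real -- reported objects that match a human-marked object
    # total_real   -- human-marked objects in the sequence
    precision = 100.0 * matched_real / traced if traced else 0.0
    recall = 100.0 * matched_real / total_real if total_real else 0.0
    return precision, recall

# Example: 45 traced objects, of which 40 match human-marked objects,
# with 50 marked objects in total -> precision 88.9%, recall 80.0%.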
Table 1 lists several video sequences from the PETS and ETISEO databases and the corresponding processing results. FPS was estimated specifically for a budget PC configuration: an Intel Atom N270 1600 MHz processor and 1 GB of RAM.
6 CONCLUSION 
The problem of automatic video analysis for object detection and tracking is among the most significant algorithmic topics in digital video surveillance. A new motion analysis and object tracking technique has been presented. The motion analysis algorithms are based on forming and processing multiple-regression pseudospectrums. The object detection and tracking scheme contains: detection of moving pixel groups based on pseudospectrum analysis; forming of object hypotheses and interframe object tracking; spatiotemporal filtration of object motion parameters. Results of testing on the public domain PETS and ETISEO video test beds are outlined.
REFERENCES 
Anandan, P., 1989. A computational framework and an algorithm 
for the measurement of visual motion. Int. J. Comp. Vision 2, 
pp. 283-310. 
Barron, J., Fleet, D. and Beauchemin, S., 1994. Performance of 
optical flow techniques. Internat. Jour. of Computer Vision 12(1), 
pp. 43-77. 
Box, G., Jenkins, G. M. and Reinsel, G., 1994. Time series analysis: Forecasting and control (3rd edition).
ETISEO video database, n.d. http://www-sop.inria.fr/orion/ETISEO/.
Gabor, D., 1946. Theory of communication. Journal of the Institution of Electrical Engineers 93, pp. 429-457.
Heeger, D. J., 1988. Optical flow using spatiotemporal filters. Int. 
J. Comp. Vision 1, pp. 279-302. 
Horn, B. K. P. and Schunck, B. G., 1981. Determining optical 
flow. Artificial Intelligence 17, pp. 185-203. 
Nagel, H., 1983. Displacement vectors derived from second-order intensity variations in image sequences. CGIP 9, pp. 85-117.
  
PETS video database, n.d. http://www.cvg.rdg.ac.uk/slides/pets.html. 
Singh, A., 1992. Optic flow computation: a unified perspective. IEEE Computer Society Press, pp. 168-177.

[Figure 4: typical screenshot of the object tracking visualization]
	        