International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B4. Istanbul 2004
When we ask for a generalized picture of the spine with 4 nodes
we can see that the nodes for frames 1 and 2 from Figure 5 are
merged into a single intermediate node to save space (Figure 9).
When we ask for 3 nodes, only frame 3 retains its original node.
This decreases the number of nodes used to define the
polygon's location, and leads to a reduction in the amount of
space needed to store this data while maintaining the most
important characteristics of the object’s spatiotemporal
behavior. For more detailed information on our SOM work see
(Kohonen 1982; Kohonen 1997; Doucette, Agouris et al. 2001).
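The SOM-based generalization described above can be sketched with a minimal one-dimensional self-organizing map that summarizes a sequence of per-frame positions with a user-chosen number of nodes. This is an illustrative assumption, not the authors' implementation: the function name `som_1d`, the learning-rate and neighborhood schedules, and the sample trajectory are all hypothetical.

```python
import numpy as np

def som_1d(points, n_nodes, epochs=200, lr0=0.5, sigma0=None, seed=0):
    """Train a 1-D SOM that summarizes a sequence of 2-D positions
    (e.g. per-frame polygon centroids) with n_nodes nodes."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, float)
    # Initialize nodes evenly along the first-to-last segment.
    t = np.linspace(0.0, 1.0, n_nodes)[:, None]
    nodes = points[0] + t * (points[-1] - points[0])
    if sigma0 is None:
        sigma0 = n_nodes / 2.0
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)            # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
        for p in rng.permutation(points):
            # Best-matching unit: node closest to the presented sample.
            bmu = np.argmin(((nodes - p) ** 2).sum(axis=1))
            d = np.arange(n_nodes) - bmu
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))  # neighborhood kernel
            nodes += lr * h[:, None] * (p - nodes)
    return nodes

# Five hypothetical frame positions; asking for 4 nodes forces the two
# nearby positions (frames 1 and 2) to share an intermediate node.
frames = [(0, 0), (1, 0.2), (4, 1), (7, 2), (10, 3)]
nodes = som_1d(frames, n_nodes=4)
```

Because the map has fewer nodes than input frames, close-together frames are absorbed by a single node, which is exactly the merging behavior shown in Figure 9.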
Figure 9: SOMs constructed with 4 (L) and 3 (R) nodes
5.4 Node Placement and Prong Information
The final stage in helix construction is to move each extracted
node to the closest position recorded in the frames and to add
prong information. For example, when four nodes are extracted
in the SOM process, three of the nodes are located at the
object's position in frames 3, 4, and 5. The fourth node is
located between the object's positions in frames 1 and 2, but is
closer to that of frame 2 (Figure 9). Thus, when constructing
the helix, our algorithm places the final nodes in frames 2, 3, 4,
and 5 (Figure 10 left).
When examining the SOM of 3 nodes, we end up with final
node placement in frames 1, 3, and 5 (Figure 10 right). In our
example, this would select frames from the original dataset, and
use only them to define the placement of the polygon over time.
It is more accurate than using the node placements from the
third step, because it does not create interpolated positions, but
uses locations that were already part of the dataset.
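The snapping step can be sketched as a nearest-neighbor assignment from each SOM node to the recorded per-frame positions; the helper name `snap_nodes_to_frames` and the sample coordinates are hypothetical, chosen so the interpolated node between frames 1 and 2 snaps to frame 2 as in the text.

```python
import numpy as np

def snap_nodes_to_frames(nodes, frame_positions):
    """Move each SOM node to the closest recorded per-frame position,
    so the helix spine only uses locations actually observed."""
    frame_positions = np.asarray(frame_positions, float)
    chosen = []
    for node in np.asarray(nodes, float):
        d = ((frame_positions - node) ** 2).sum(axis=1)
        chosen.append(int(np.argmin(d)))
    # Deduplicate while preserving frame order; merged nodes may snap
    # to the same frame.
    frames = sorted(set(chosen))
    return frames, frame_positions[frames]

# Hypothetical positions for frames 1..5 (0-based indices 0..4).
positions = [(0, 0), (1, 0.2), (4, 1), (7, 2), (10, 3)]
# Four SOM nodes: one interpolated between frames 1 and 2, closer to 2.
som_nodes = [(0.8, 0.15), (4.1, 1.0), (6.9, 2.1), (10.0, 3.0)]
frames, snapped = snap_nodes_to_frames(som_nodes, positions)
# frames → [1, 2, 3, 4] (0-based), i.e. frames 2, 3, 4, 5 in the paper
```

Returning only observed positions, rather than interpolated ones, is what makes this placement more accurate than using the raw SOM output.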
Figure 10: Complete helixes for 4 (L) and 3 (R) nodes with
spines and prongs
In addition to selecting the most important object instances that
should be recorded in our database, the fourth stage in our helix
construction process also compares changes in object expansion
or contraction to a user-defined threshold. In this example, the
threshold has been set at 20%. The largest change that was
found in our example dataset occurs between frames 3 and 4,
where there is a large reduction of area in the west quadrant and
a smaller reduction in the east quadrant. This is represented in
our helixes by a long line emerging from the “west” side of the
node at frame 4 and a shorter line on the “east” side of the same
node. This indicates that the polygon has undergone the most
significant change in outline between these two frames.
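The prong-flagging step above can be sketched as a per-quadrant comparison of relative area change against the user threshold. This is a simplified sketch: the function name `prong_flags`, the quadrant bookkeeping, and the area values are hypothetical, and the actual prong encoding also carries angle information not modeled here.

```python
def prong_flags(areas_by_frame, threshold=0.20):
    """Flag frame transitions whose relative area change in a quadrant
    exceeds the user-defined prong threshold.

    areas_by_frame: dict mapping quadrant name -> per-frame areas.
    Returns (frame_index, quadrant, signed relative change) tuples,
    where the magnitude of the change gives the prong length.
    """
    prongs = []
    for quad, areas in areas_by_frame.items():
        for i in range(1, len(areas)):
            change = (areas[i] - areas[i - 1]) / areas[i - 1]
            if abs(change) > threshold:
                prongs.append((i, quad, change))
    return prongs

# Hypothetical quadrant areas for frames 1-5: the west quadrant loses
# half its area between frames 3 and 4, the east a smaller share.
areas = {
    "west": [10.0, 10.2, 10.0, 5.0, 5.1],
    "east": [8.0, 8.1, 8.0, 6.0, 6.1],
}
for frame, quad, change in prong_flags(areas, threshold=0.20):
    print(frame + 1, quad, round(change, 2))
# prints: 4 west -0.5
#         4 east -0.25
```

With a 20% threshold, only the frame 3-to-4 transition is flagged, and the west prong is drawn longer than the east one, matching the example.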
6. ADDITIONAL EXPERIMENTS
In addition to these basic experiments in extracting helix
information, we have tested the integrity of our calculations, as
well as their performance speed. For helix generation, we
constructed datasets of 700 frames and used differing user-
defined thresholds to determine the number of nodes and prongs
that define the helix. Figure 11 shows two helixes that were
constructed during this phase. Both have 17 nodes, but helix
“a” has more prongs than helix "b." Their respective prong
thresholds are 10% and 20%.
Figure 11: Helixes constructed from differing thresholds
In order to determine the usefulness of our prongs in
reconstructing an object at any given time instance, we used
only the image of the object at t = 0 sec, and modified the initial
object outline using only the expansions and contractions as
indicated by the prong magnitudes and angles. We then
compared these results to the actual object boundaries in frame
700. We found that with our dataset, we were able to
reconstruct the object with 83% accuracy when using a prong
threshold of 20% and with around 94% accuracy when using
any prong threshold below 15%. There seems to be a level of
prong definition beyond which no additional benefit is gained
in storing the extra information. See (Stefanidis, Eickhorst et
al. 2003) for a more detailed discussion of this topic.
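The reconstruction idea can be sketched as propagating the stored prong magnitudes (signed relative changes) forward from the initial object, then scoring the estimate against the observed final state. This is a strongly simplified, hypothetical sketch: it tracks only a single area value per quadrant rather than a full outline, and the names `reconstruct_area` and `accuracy` are illustrative.

```python
def reconstruct_area(initial_area, prong_changes):
    """Propagate an initial quadrant area through the stored prong
    magnitudes to estimate the area at the final frame. Changes below
    the prong threshold were never stored, so the estimate degrades as
    the threshold grows."""
    area = initial_area
    for change in prong_changes:
        area *= 1.0 + change
    return area

def accuracy(estimate, actual):
    """Score an estimate as 1 minus its relative error."""
    return 1.0 - abs(estimate - actual) / actual

# Hypothetical values: two stored prongs (-50%, then +10%) applied to
# an initial area of 10.0, compared against an observed area of 5.4.
est = reconstruct_area(10.0, [-0.5, 0.1])   # ≈ 5.5
score = accuracy(est, 5.4)                  # ≈ 0.98
```

Because sub-threshold changes are silently dropped, a coarser threshold discards more of the object's history, which is consistent with the drop from ~94% to 83% accuracy reported above.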
Another type of experiment that we conducted involves the
computation of similarity indices using the metrics discussed in
section 3. We created a dataset of 100 helixes, comprising an
average of 19 nodes and 7 prongs each, and used both the
abstract and quantitative metrics to compare each helix to the
larger pool of candidate helixes. We noted the time that it took
to run each of these queries, and found that the abstract query
averaged 2 seconds to run, while the more intensive quantitative
query took 4 seconds. These are very encouraging results, as
many applications in the geospatial realm are large-scale efforts
where computational times are of the utmost importance.
7. FUTURE DIRECTIONS
We are currently exploring various ways to visualize node and
prong values with colors, various levels of shading, fuzziness,
or other overlays. This information is intended to supplement
the quantitative values of the helix components, to support
quick decision-making through visual analysis. For instance, if
one wanted to be visually alerted to nodes where accelerations