paper which will address the user interface and
visualization issues.
Before continuing, a small theoretical digression is in
order. One of the most useful concepts in defining
quality in user interfaces is J.J. Gibson's theory of
affordances (Gibson, 1966). This is theory of
perception which states that we do not perceive
patterns of light and shade, or lines and edges, or
motion flow. Instead what we have evolved to
perceive are 'affordances' which may be described as
possibilities for action, or use. This we perceive
surfaces as having the potential for walking or sitting,
we perceive objects as potential tools or potential food
and certain complex environments as holding potential
danger. The theory embeds perception in action, and
as such it makes sense of a good user interface as one
in which the user perceives the right set of affordances.
Through it, with minimal instruction the user can
perceive the affordances of the computer system he or
she is expected to use. The system should also afford
the easy execution of the tasks for which it was
designed.
—@— 2D (tb)
—i— Stereo perspective (1b)
—&— Stereo head coupled perspective (1b)
—&— Head coupled perspective (1b) ed X
50 +
% Error
0 50 100 150 200 250 300
Number of Nodes
Figure 1. The results of a study of path tracing in an
information network. Using a stereo, head coupled
perspective view, as shown in figure 2. resulted in three times
as many nodes being understood at the same error rate.
We have recently obtained hard evidence that even for
visualizing abstract information networks, where
understanding the connectivity is important, visualizing
in 3D is important (see Figure 1). However, it is not
the fact that it is a perspective view that helps, but
rather the enhanced space perception that comes from
stereo viewing (which increases the size of the network
that can be understood by 60%) and even more from
motion parallax of the data (which increases the size by
120%). If motion parallax is combined with stereo
viewing we find that three times the network size can
be understood for a constant error rate.
488
2. METHODS AND METAPHORS FOR
VIEWPOINT NAVIGATION
Viewpoint placement, 3D scene exploration and virtual
camera control are all aspects of the same problem in
computer graphics, namely how to move the viewpoint
in a virtual 3D scene. The kinds of task where this is
important are molecular modeling (Surles, 1992),
walkthroughs of architectural simulations (Brooks,
1986), camera control in animation systems, and
flights over digital terrain maps representing subsea or
remote sensing data (Stewart, 1991) as well as
numerous CAD and advanced GIS applications.
For a number of years we have been studying a six
degree-of-freedom variant of the common mouse input
device. We call it a Bat because a bat is like a mouse
that flies (or fledermaus in German). The device
senses both position (x,y,z) and orientation (azimuth,
elevation and roll) information. In some studies we
showed how this device could be used for object
placement (Ware, 1990). However, more recently we
have concentrated on using the Bat in ways that allow
us to explore different methods and metaphors for
virtual camera control.
Often methods for viewpoint control are based on
metaphors which help the user to get a conceptual
grasp of the way the system will behave. Thus if the
user is told that he or she is flying through the data, it is
quite different than telling the user that the data is on a
turntable which can be rotated. Most of the remainder
of this paper is organized as a survey of different
virtual camera control methods, both as employed in
my research laboratory and by others.
1) Eyeball in Hand Metaphor and Camera controllers.
The phrase "Eyeball In hand" describes a Metaphor in
which the user directly manipulates the viewpoint as if
it were held in his or her hand. The metaphor requires
that the user imagine a model of the virtual
environment somewhere in the vicinity of the monitor.
The eyeball (a spatial positioning device) is placed at
the desired viewpoint and the scene from this
viewpoint is displayed on the monitor. Cognitive
affordance problems arise from the difficulty some
subjects have of imagining the model. Ware and
Osborne, (1990) found large individual differences in
this respect. Also, if the eyeball is pointed away from
the screen the correspondence between hand motion
and the image motion is confusing. Physical
affordances are restricted by the physical limitation of
the device space - it can be awkward or impossible to
place the "eyeball" in certain positions.
There is a non direct-manipulation variation on this
theme which allows for complex camera commands of
the kind a director might give to the cameraman.
Recent work by Gleicher and Witkin (1992) explores
the use of high level commands to give the user control
over the
directing
the direct
certain mi
or zoom |
approach
which pos
the user tc
al, 1992).
2) World i
In the "W
changed t
displayed
rotated ck
object on a
Polhemus'
rotations
simultanec
viewpoint
single, rea
the enviro
enclosed ii
does not
affordance:
center of rc
interior the
of being er
hand moti
(Ware and |
The World
the eyeball
her viewpo
Useful vari:
virtual turni
sphere (Ch
from a mou
direct man
devices ten
Often they
the range o
We are usi
DEM visual
3) Function
It is a comn
common gr
translate anc
directly by
include this |
it does not e
A much mc
functions is
MacKinlay,
which corre
surface, hah
also evaluat
movements,