2.1 Geographic information systems

Commercial GIS provide well-established support for the acquisition, storage, and analysis of geo-data. However, their user interfaces and standard workflows are not adapted to the needs of dynamic disaster situations, in which much of the relevant information is acquired during the response and must be accessible to decision makers who act on it under time pressure.

2.2 Multi-touch interaction techniques

Multi-touch user interfaces are well suited for collaborative situations because they give several users simultaneous access to a shared representation of the situation. While such interfaces have been studied extensively in human-computer interaction research, there is currently no established use of such techniques in disaster management applications.

2.3 The useTable

The useTable (Fig. 2) is a multi-touch interaction and visualization table developed at C-LAB that has been applied to disaster management. It provides a full HD image on the table surface. For finger-tracking a combined DI (diffused illumination) setup (Fig. 1) is applied: a camera below the table surface is connected to a PC that runs the tracking algorithms. The table surface carries a diffusor sheet
that enables pen-based interaction by using Anoto digital pens
(Anoto, 2012; Haller et al., 2007).
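As an illustration of this camera-based tracking pipeline, the sketch below shows how finger contacts could be extracted from the IR camera image of a DI setup. The function name, thresholds, and use of OpenCV are our own assumptions for illustration, not the actual useTable implementation.

```python
# Illustrative finger-blob detection for a DI (diffused illumination) setup:
# fingers touching the diffusor sheet appear as bright blobs to the camera
# below the table. Thresholds and blob sizes are hypothetical placeholders.
import cv2

def detect_touch_points(frame_gray, min_area=20, max_area=400):
    """Return (x, y) centroids of finger-sized bright blobs."""
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)      # suppress noise
    _, binary = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:                   # reject palms, glare
            m = cv2.moments(c)
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points
```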
The useTable software handles multi-touch interaction, tangible interaction with physical objects on the table surface, and pen-based interaction. In addition to these features, which are also available on other platforms, a new detection and tracking framework for advanced interaction using a depth-sensing camera (Kobayashi, 2008), called dSensingNI, was developed that extends the possibilities of tangible and gestural interaction beyond the table surface (Jung et al., 2011). The dSensingNI framework tracks users' fingers and palms, which enables precise and advanced multi-touch interactions as well as tangible interactions. Arbitrary physical objects can be used to control tangible interaction. Using the depth-sensing camera, physical objects can take part in common (2D) actions, such as placing and moving, and also in 3D actions, such as grouping or stacking. Depth sensing also allows extending multi-touch interaction to object surfaces without the need for integrated logic and sensors.
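A minimal sketch of the underlying idea, assuming a downward-looking depth camera and a pre-recorded depth image of the empty table; the actual dSensingNI algorithms are described in (Jung et al., 2011), and all names and thresholds below are illustrative:

```python
# Illustrative depth-based segmentation in the spirit of dSensingNI:
# anything that rises above the (pre-recorded) empty table surface is
# treated as a hand or tangible object. All values are hypothetical.
import numpy as np

def segment_above_surface(depth_mm, empty_table_mm, min_height_mm=10):
    """Boolean mask of pixels belonging to hands or objects on the table."""
    height_mm = empty_table_mm - depth_mm    # camera looks down at the table
    return height_mm > min_height_mm

def stack_count(object_height_mm, item_height_mm=40):
    """Rough number of stacked items, given a known per-item height."""
    # e.g. stack_count(85.0) == 2 for two objects of roughly 40 mm each
    return max(1, int(round(object_height_mm / item_height_mm)))
```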
By combining RFID chips and depth-sensing cameras, the platform can identify and track the persons interacting with the useTable. This allows different functionality to be assigned to different users based on their roles during an interaction, a central requirement that is not addressed by off-the-shelf multi-touch tables.
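To make the role-based idea concrete, here is a hypothetical sketch of how RFID-identified users could be mapped to interaction rights; the role names, tag IDs, and permission sets are invented for illustration:

```python
# Hypothetical role-based gating of touch actions: RFID tags identify users,
# depth tracking associates each touch with a tracked person, and the user's
# role decides which actions are allowed. All names are illustrative.
ROLE_PERMISSIONS = {
    "incident_commander": {"edit_plan", "approve_plan", "view_situation"},
    "section_leader":     {"edit_plan", "view_situation"},
    "observer":           {"view_situation"},
}

RFID_TO_ROLE = {"04:A3:1F:22": "incident_commander"}   # filled at check-in

def is_allowed(rfid_uid, action):
    """Gate a touch action by the role of the interacting user."""
    role = RFID_TO_ROLE.get(rfid_uid, "observer")      # unknown tags observe
    return action in ROLE_PERMISSIONS[role]

assert is_allowed("04:A3:1F:22", "approve_plan")
assert not is_allowed("unknown-tag", "edit_plan")
```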
Fig. 2: C-LAB useTable
Using the useTable and dSensingNI as base technologies, a number of different interaction and visualization techniques have been implemented. These techniques enable experiments with users, e.g., to study the usability differences between touch input, pen input, and the use of interaction objects. A key advantage of the interactive display in the disaster management application is the ability to rapidly switch between different maps and map representations. Using a layer concept, different maps and additional information (e.g., airborne imagery) can be mixed while maintaining the established workflow. Extending the visualization beyond map display allows experimenting with the integration of derived information (e.g., danger zones, uncertainty) as well as task-dependent map generalization and highlighting strategies.
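Such a layer concept could be represented by a simple ordered stack of blendable layers, as in the following sketch; the class and attribute names are our own assumptions, since the paper does not detail the implementation:

```python
# Minimal sketch of a map layer stack: base maps, airborne imagery and
# derived information (danger zones, uncertainty) are ordered layers that
# can be toggled and blended. Names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MapLayer:
    name: str                 # e.g. "topographic map", "airborne imagery"
    opacity: float = 1.0      # 0.0 (hidden) .. 1.0 (opaque)
    visible: bool = True

@dataclass
class LayerStack:
    layers: list = field(default_factory=list)

    def add(self, layer):
        self.layers.append(layer)          # later layers draw on top

    def render_order(self):
        return [l for l in self.layers if l.visible and l.opacity > 0.0]

stack = LayerStack()
stack.add(MapLayer("topographic map"))
stack.add(MapLayer("airborne imagery", opacity=0.5))
stack.add(MapLayer("danger zones", opacity=0.8))
```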
Insights from these studies are used to guide development at the base technology level. For example, experience showed that in some scenarios a strict separation between the visualization of the current situation and the planning of future actions is essential. Our design approach allows us to adapt to such requirements by modifying and extending the set of available base technologies. To provide an intuitive separation, we extended the useTable into an L-Shape display: the useTable itself is used for planning as described, while an additional wall display visualizes the current situation.
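One possible way to express this separation in software is to route views by their kind, as in this hypothetical sketch (the display and view names are invented for illustration):

```python
# Hypothetical routing for the L-Shape setup: "situation" views go to the
# wall display, everything else stays on the planning table.
DISPLAYS = {"table": [], "wall": []}

def route(view_name, kind):
    """Assign a view to a display based on whether it shows the situation."""
    target = "wall" if kind == "situation" else "table"
    DISPLAYS[target].append(view_name)

route("current incident overview", "situation")   # shown on the wall
route("resource deployment plan", "planning")     # shown on the table
```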
2.4 Alternative large-scale multi-touch displays
Large-scale multi-touch displays are becoming increasingly available as commercial products. The Samsung SUR40 (Fig. 3, foreground), for example, is a widely available 40” multi-touch table display (Surface2, 2012), and GestureTek offers the GestDisplay, a vertical 60” multi-touch display that also supports gesture-based interaction (Fig. 4) (GestDisplay, 2012). Another example is the PrimeTouch display (Fig. 3, background), which detects up to 32 points simultaneously on LCD and plasma screens up to 103”, on customized screens up to 200”, and on curved displays. It is based on an IR LED light plane projected slightly above the display surface and photo diodes for tracking.
Fig. 3: 65” PrimeTouch and Samsung SUR40
Fig. 4: GestureTek GestDisplay (Image: GestureTek)
3. REQUIREMENTS AND DESIGN
In our user-centred design process we collaborate directly with the German Federal Agency for Technical Relief (Technisches Hilfswerk/THW). Our iterative design approach is based on ISO standard 13407 (now replaced by ISO 9241-210:2010), from which we derived an operational process specifically adapted to the development of user interfaces that employ emerging technologies for which no proven off-the-shelf components are available. Within the design process the different activities correspond to common practice, covering context of use (in our case the study of user roles for stakeholders and real-world disaster situations); user requirements (definition of scenarios, identification of requirements, definition of appropriate measures); production of design solutions (from scenarios through prototypes to implementation); and evaluation (analysis of requirements, review of designs, tests of prototypes, and evaluation of the complete system). Our adaptation essentially