Figure 4. Thematic query based on USE from attribute table.
(© Stadtmessungsamt Stuttgart)
3. VIRTUAL REALITY 
The second part of the project is focused on Virtual Reality.
Virtual Reality is an immersive experience in which participants
wear tracked, head-mounted displays to visualize the scene, listen
to 3-D sounds, and are free to explore and interact within a 3-D
world. This part of the project aims to let the user interact
with the world and display the attributes as text while still
navigating around in an immersive environment such as a C.A.V.E.
or a Responsive Workbench.
VR is multi-modal: the hardware-oriented interface is defined as
the combination and cooperation of various input and output
channels, distinguishing the interface channels from the point of
view of media rather than of sensory perception. Dynamic behaviour
introduces the notion of time into virtual environments. A highly
interactive system has to deal with time in general, and
specifically with variable time frames and their synchronization.
Responsive virtual environments should operate in real time, that
is, the response time and update rate of the system are high
enough that the boundary between user and virtual environment, the
so-called interface, seems to vanish. This property is one of the
major differences between VR systems and other 3D systems.
3.1 Tools 
Whether they are images, 3-D sounds, or tactile vibrations, all
aspects of VR must be coordinated and delivered precisely;
otherwise confusion will result. The computer system and external
hardware that supply the effectors (HMD or C.A.V.E.) with the
necessary sensory information comprise the reality engine
(Pimentel and Teixeira, 1993).
This project has been tested in the C.A.V.E. at the Fraunhofer
Research Institute for Industrial Engineering IAO in Stuttgart,
using the application software "Lightning VR" developed there. Its
core component is a database called the object pool. Interactivity
and behaviour can be introduced with functional objects, so-called
scripts, and communication channels, so-called routes. These
communication channels define application-specific event
propagation and therefore the interactivity (Blach et al., 1998).
The Tcl/Tk programming language is used for interaction with
Lightning, as it is easy to integrate into other systems
(Ousterhout, 1993).
3.2 Input Data 
The input geometry data for this application is the same VRML
model converted from the ASCII file in the Web3D project described
above. The model consists of building and terrain information of
the city of Stuttgart. Each building has a unique name defined on
the node describing its geometry. Lightning VR uses LibVRML97, a
portable C++ class library, for reading and displaying VRML files
(Morley, 2002). This library can access properties such as
position, orientation, scale, and centre of bounding box of the
objects in the scene.
The attribute information concerning the objects was purposefully
created and saved as a text file called the "connection table" in
the first part of the project. This connection table holds fields
such as definition name, x, y, z, and house name. The definition
name acts as the primary key connecting the spatial and
non-spatial data.
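As a minimal illustration, such a record can be parsed with
standard Tcl as sketched below. The sample values and the
semicolon separator are assumptions; only the field order
(definition name, x, y, z, house name) follows the description
above.

    # Hypothetical connection-table record; values and separator are assumed.
    set record "Building_017;3512840.5;5403120.2;247.8;Koenigstrasse 12"

    # Split the record into its fields; the first field is the primary key
    # linking the VRML geometry node to its attribute data.
    set fields    [split $record ";"]
    set defName   [lindex $fields 0]
    set houseName [lindex $fields 4]

    puts "primary key: $defName, house: $houseName"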
3.3 Selection of building 
The input device for identifying objects in the virtual
environment can be a 3D mouse, a glove, etc. Properties of the
input device such as position, orientation, and scale are
calculated by a motion sensor in Lightning. In order to make the
input device a selecting tool, a ray has to be activated using
ltSelectRay, an invisible ray object provided by the Lightning
server. It can sense the objects hit and returns field values such
as the first object hit, its position and orientation, and the
count of hit objects to the application. It is quite similar to
the pick() method of Cortona in the Web3D project. The position
and orientation values of the ray are routed (borrowed) from the
motion sensor's properties, so that the input instrument acts as a
selecting tool. This ray (cursor) can now be used to sense any
object in the virtual world. When the input device is moved over a
building, the ray senses the object, recognizes it, and writes its
definition name to the NodeHitOut property of the SelectRay. This
definition name is used as the primary key to match the
corresponding attribute data stored in the text file.
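The routing idea can be sketched in Tcl as below. This is only an
illustration built on stub procedures: the creation and routing
commands stand in for Lightning's actual API, whose exact syntax
is not shown here, and only the names ltSelectRay and NodeHitOut
are taken from the text above.

    # Stubs standing in for Lightning's object creation and routing;
    # the real command names and signatures are assumptions.
    proc createObject {type} { return $type }
    proc route {srcObj srcField dstObj dstField} {
        puts "route $srcObj.$srcField -> $dstObj.$dstField"
    }

    set sensor [createObject motionSensor]   ;# tracks the 3D mouse / glove
    set ray    [createObject ltSelectRay]    ;# invisible selection ray

    # Route the tracked pose of the input device onto the ray, so that
    # the device acts as the selecting tool.
    route $sensor Position    $ray Position
    route $sensor Orientation $ray Orientation

    # After a hit, the application would read the definition name from
    # the ray's NodeHitOut field and use it as the primary key.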
3.4 Connection to attribute data 
The attribute data of the model is available as separate text file. 
The program written in Tcl will read the text file line by line 
and check if the definition name from the ray, matches with first 
field of the text file. Once matched, the whole line of the text 
file is converted to text element using ltText property of 
Lightning and displayed over the object, which you pointed in 
runtime scene situation, as shown in figure 5. 
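A minimal sketch of this lookup in standard Tcl is given below.
The file name, the field separator, and the sample definition name
are assumptions, and the final display step is only indicated as a
comment, since the exact ltText call is not shown here.

    proc lookupAttributes {fileName defName} {
        # Read the connection table line by line and return the record
        # whose first field matches the definition name from the ray.
        set fp [open $fileName r]
        while {[gets $fp line] >= 0} {
            set fields [split $line ";"]    ;# separator assumed
            if {[lindex $fields 0] eq $defName} {
                close $fp
                return $line
            }
        }
        close $fp
        return ""
    }

    # Hypothetical usage: defName would come from the ray's NodeHitOut
    # field; the matched line would then be converted to an ltText
    # element and displayed over the selected building.
    set defName "Building_017"
    set line [lookupAttributes "connection_table.txt" $defName]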
  
Figure 5. Selection and attribute display of the building
(© Stadtmessungsamt Stuttgart)
 
	        