lane line colour/line type attributes are recognized, and therefore the output is a functional description of the road geometry. A GIS database containing lane lines and their attributes can better support many applications; for instance, an intelligent driving assistant can tell the driver which lane to change to, not only which side to change to. Third, the output is an absolutely georeferenced model of the lane lines in mapping coordinates, which means that the output is directly compatible with GIS databases.
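As an illustration only, a record in such a lane line GIS database might be sketched as follows; the field names are hypothetical and do not reproduce the actual ARVEE output schema.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class LaneLineRecord:
        """Hypothetical lane line feature as it could be stored in a GIS database."""
        line_id: int
        colour: str      # e.g. "white" or "yellow"
        line_type: str   # e.g. "solid" or "dashed"
        # 3D polyline vertices in absolute mapping coordinates (easting, northing, height)
        vertices: List[Tuple[float, float, float]]

    # Example: a short piece of a dashed white lane line
    example = LaneLineRecord(
        line_id=1, colour="white", line_type="dashed",
        vertices=[(512345.1, 5412345.2, 1045.3), (512347.0, 5412355.1, 1045.4)],
    )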
The paper is organized in nine sections: Section 2 gives an overview of the system, Sections 3 to 7 describe the design details of ARVEE, and Sections 8 and 9 present the experimental results and conclusions.
2. VISAT™ MMS OVERVIEW 
VISAT™ was developed at the University of Calgary in the early 1990s and was among the first terrestrial MMS at that time. Recently, an improved version was developed by Absolute Mapping Solutions Inc., Calgary, Canada (www.amsvisat.com), see Figure 1. The system's hardware components include a strapdown Inertial Navigation System (INS), a dual-frequency GPS receiver, 6 to 12 digital colour cameras, an integrated Distance Measurement Instrument (DMI), and the VISAT™ system controller. The camera cluster provides a 330° panoramic field of view (see Figure 2). The images are captured in sets every 2-10 meters; each of these image sets is called a survey point. The DMI provides the van's longitudinal velocities and consequently the linear distances used to trigger the cameras at user-defined constant intervals. The data-logging program, VISAT™ Log, allows for different camera configurations and different image recording distances, or can trigger the cameras by time if necessary (both can be changed in real time). In terms of secondary functions, the camera cluster provides redundancy, i.e. more than two images of the same object. Using the VISAT™ georeferenced images, mapping accuracies of 0.1-0.3 m can be achieved for all objects within the field of view of the cameras, in urban or highway environments, while operating at road speeds of up to 100 km/h.
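A minimal sketch of this distance/time-based triggering logic is given below, assuming a hypothetical controller interface; the names (dmi_samples, trigger_interval_m, fire_cameras) are illustrative and are not part of VISAT™ Log.

    def camera_trigger_loop(dmi_samples, trigger_interval_m=5.0, max_time_s=None,
                            fire_cameras=print):
        """Fire the cameras every trigger_interval_m meters of travelled distance,
        or after max_time_s seconds if time-based triggering is enabled.
        dmi_samples is an iterable of (timestamp_s, velocity_m_per_s) pairs."""
        distance, last_time, last_trigger = 0.0, None, None
        for t, v in dmi_samples:
            if last_time is not None:
                distance += v * (t - last_time)   # integrate longitudinal velocity
            last_time = t
            last_trigger = t if last_trigger is None else last_trigger
            time_due = max_time_s is not None and (t - last_trigger) >= max_time_s
            if distance >= trigger_interval_m or time_due:
                fire_cameras(t)                   # capture one image set (a survey point)
                distance, last_trigger = 0.0, t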
The user can then interface with the georeferenced images through VISAT Station™, a softcopy photogrammetric workstation designed mainly for manual feature extraction from georeferenced images collected by the VISAT™ system or any other georeferenced media. The VISAT Station environment is fully integrated with ArcGIS and permits user-friendly viewing of the imagery. Moreover, VISAT Station™ is a client/server application, which enables many user terminals to access the same image database and perform parallel processing.
Figure 1: The VISAT™ MMS Van 
Figure 2: The VISAT™ Vision System 
3. GIS FEATURE EXTRACTION FRAMEWORK 
Figure 3 shows the GIS feature extraction framework for VISAT™. The input is the georeferenced images acquired by the VISAT™ van. The extraction of 3D information is based on the integration of image processing and photogrammetric analysis. The photogrammetric analysis uses the available system parameters and geometric constraints to provide a channel between the 3D and 2D spaces. The image analysis extracts GIS-feature-related information from the images. Both results are used in a pattern recognition procedure, which locates the GIS features in the images and classifies them into pre-specified categories. The GIS features are then modelled in 3D to meet the requirements of the GIS database.
Figure 3: GIS feature extraction framework 
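Read as pseudocode, the framework amounts to the pipeline sketched below; the four callables are hypothetical placeholders for the processing steps of Figure 3, not actual ARVEE interfaces.

    def extract_gis_features(images, system_parameters, extract_image_features,
                             back_project, classify_features, build_3d_models):
        """Sketch of the GIS feature extraction framework of Figure 3.
        The four callables stand in for the framework's processing steps."""
        located = []
        for image in images:
            # Image analysis: extract GIS-feature-related information from the image
            image_features = extract_image_features(image)
            # Photogrammetric analysis: channel between 2D image space and 3D object space
            candidates = back_project(image_features, system_parameters)
            # Pattern recognition: locate the GIS features and classify them into
            # pre-specified categories
            located.extend(classify_features(candidates))
        # 3D modelling to meet the requirements of the GIS database
        return build_3d_models(located)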
ARVEE follows the above framework. Generally, there are two stages of processing in ARVEE. The first operates at the image level, considering only the images from one survey point. At this stage, linear features are extracted from each image and projected onto a road ortho image, which is obtained by an improved inverse perspective mapping with vehicle fluctuation compensation (see Section 4). Then, the linear features are filtered and grouped into lane line segments (LLS). Geometric and radiometric characteristics are extracted for each LLS (see Section 5).
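As a simplified illustration of the projection step only (not ARVEE's improved method, and without the vehicle fluctuation compensation of Section 4), a basic inverse perspective mapping of one pixel onto a flat road plane can be sketched as follows; K, R, C and the road height are assumed to be known from the georeferencing solution.

    import numpy as np

    def pixel_to_road(u, v, K, R, C, road_height):
        """Map an image pixel (u, v) to 3D mapping coordinates on the plane Z = road_height.
        K: 3x3 camera calibration matrix, R: 3x3 camera-to-mapping rotation,
        C: camera position in mapping coordinates."""
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
        ray_map = R @ ray_cam                               # rotate the ray into the mapping frame
        t = (road_height - C[2]) / ray_map[2]               # intersect with the road plane
        return C + t * ray_map                              # 3D point on the road

    # Example with a hypothetical camera 2 m above a road plane at height 0
    K = np.array([[1000.0, 0.0, 512.0], [0.0, 1000.0, 384.0], [0.0, 0.0, 1.0]])
    R = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, -1.0, 0.0]])  # looking horizontally
    C = np.array([0.0, 0.0, 2.0])
    print(pixel_to_road(512.0, 600.0, K, R, C, road_height=0.0))  # about 9.3 m ahead on the road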
The second stage operates at a high level, processing the results from the whole set of MMS survey images. All LLSs from different survey points are integrated to generate continuous 3D lane line models and their attributes. A Multiple-