
In: Stilla U, Rottensteiner F, Paparoditis N (Eds) CMRT09. IAPRS, Vol. XXXVIII, Part 3/W4 — Paris, France, 3-4 September, 2009 
EFFICIENT ROAD MAPPING VIA INTERACTIVE IMAGE SEGMENTATION 
O. Barinova, R. Shapovalov, S. Sudakov, A. Velizhev, A. Konushin 
Moscow State University, Dept. of Computational Mathematics and Cybernetics 
{obarinova, shapovalov, ssudakov, avelizhev, ktosh}@graphics.cs.msu.ru 
Commission III, WG III/5 
KEY WORDS: Automation, Video, Processing, Incremental, Learning, Object, Detection 
ABSTRACT: 
Recent years have witnessed growing demand for road monitoring systems based on image or video analysis. These systems usually consist of a survey vehicle equipped with photo and video cameras, laser scanners and other instruments. Sensors mounted on the van collect different types of data while the vehicle drives along the road. Recorded video can be geographically referenced with the help of global positioning systems. Road monitoring systems require special software for data processing. This paper addresses the problem of video analysis automation, and particularly the pavement monitoring functionality of such mobile laboratories. We show that computer vision methods applied to this problem help to reduce the amount of manual labour during data analysis. Our method transforms video collected by the mobile laboratory into rectified geo-referenced images of the road pavement surface, and allows mapping of lane markings and road pavement defects with minimal user interaction. In our work the mapping workflow consists of two stages: an off-line stage and an online stage. In order to reduce user effort during error correction we take advantage of hierarchical image segmentation, which helps to delete false detections or mark missing objects with just a few clicks. Through continuous training of the detection algorithm on operator input, the error rate of automatic detection decreases; thus minimal input is required for accurate mapping. Experiments on real-world road data show the effectiveness of our approach.
1. INTRODUCTION 
Roadway monitoring systems are widely used for supervising road pavement surfaces and planning repairs. These systems usually include a complex of video cameras and other sensors mounted on a car, as shown in Figure 1. The sensors record the road pavement surface while the vehicle travels at traffic speed.
Most existing software for road monitoring involves manual processing of the video collected by these mobile laboratories. An operator manually marks objects such as lane markings and pavement surface defects (potholes, cracking and patches) on each video frame. This procedure is laborious and time-consuming; therefore the task of automating object detection comes into focus. In this paper we consider the problem of automating video analysis for pavement surface monitoring. We describe a tool which assists in utilising visual observation data of the pavement surface and in mapping lane markings and pavement surface defects.
Our main goal is to minimize operator effort when mapping lane markings and road defects while preserving the accuracy of the mapping result. The effectiveness of our method is achieved by intensive use of computer vision techniques together with a user-friendly interface that allows checking the results of automatic detection and correcting errors if needed. Since direct mapping of lane markings and road pavement defects in video sequences faces severe difficulties, we transform the video into rectified images of the road pavement surface, as sketched below. These images are then processed during interactive mapping.
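A standard way to obtain such a top-down view is inverse perspective mapping with a planar homography, under the assumption that the road is locally planar. The following Python/OpenCV sketch illustrates the idea only; the corner coordinates, metric extent of the calibrated rectangle and resolution are hypothetical placeholders, not values from this paper.

    import cv2
    import numpy as np

    def rectify_road_frame(frame, src_pts, metres_per_px=0.01):
        """Warp a perspective camera frame to a top-down view of the road plane.

        src_pts: four image points (pixels) forming a rectangle on the road
        surface, known from camera calibration. All sizes below are
        illustrative assumptions.
        """
        width_m, length_m = 4.0, 10.0          # assumed real-world rectangle size
        w = int(width_m / metres_per_px)
        h = int(length_m / metres_per_px)
        dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
        return cv2.warpPerspective(frame, H, (w, h))

    # Example call with made-up corner coordinates of the road rectangle:
    # top_down = rectify_road_frame(frame, [(420, 300), (860, 300),
    #                                       (1180, 700), (100, 700)])

Consecutive rectified strips can then be stitched and geo-referenced using the GPS track, yielding the still images that the rest of the pipeline operates on.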
While to our knowledge there has not been much research on road defect detection, lane detection is a well-researched area of computer vision with applications in autonomous vehicles and driver support systems. Despite the perceived simplicity of finding white markings on a dark road, it can be very difficult to determine lane markings on various types of road. These difficulties arise from shadows, changes in the road surface itself, and differing types of lane markings. A lane detection system must be able to pick out all manner of markings from cluttered roadways and filter them to produce a reliable estimate of the vehicle position and trajectory relative to the lane, as well as the parameters of the lane itself such as its curvature and width.
Existing methods for lane marking detection are usually based on edge detection (McDonald, 2001) and gradient analysis (Lu, 2007). The use of edges makes detection results sensitive to noise, changes in lighting conditions and shadows. Another approach uses steerable filters (McCall, 2004), which are convolved with the input image and provide features suitable for detecting both dots and solid lines while remaining robust to clutter and lighting changes.
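Steerable filters of this kind are commonly built from second derivatives of a Gaussian: the response at any orientation is an exact linear combination of three basis responses, so a filter bank over all angles comes at the cost of three convolutions. The Python sketch below illustrates this standard construction; the kernel size and scale are assumptions, and this is a generic illustration rather than the exact filters of (McCall, 2004).

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_second_derivative_bases(sigma=2.0, size=13):
        """Basis kernels Gxx, Gxy, Gyy: second derivatives of a 2-D Gaussian."""
        r = size // 2
        y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        gxx = (x**2 / sigma**4 - 1 / sigma**2) * g
        gyy = (y**2 / sigma**4 - 1 / sigma**2) * g
        gxy = (x * y / sigma**4) * g
        return gxx, gxy, gyy

    def steered_response(image, theta, sigma=2.0):
        """Second-derivative response steered to angle theta.

        The directional second derivative along (cos t, sin t) is
        cos^2 t * Rxx + 2 cos t sin t * Rxy + sin^2 t * Ryy,
        which is the steerability property used here.
        """
        gxx, gxy, gyy = gaussian_second_derivative_bases(sigma)
        rxx, rxy, ryy = (convolve(image, k) for k in (gxx, gxy, gyy))
        c, s = np.cos(theta), np.sin(theta)
        return c**2 * rxx + 2 * c * s * rxy + s**2 * ryy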
Since these methods were designed for autonomous vehicles, they aim at tracking lane markings in video. In our work the goal is to detect lane markings in still images of the road surface. Moreover, our task is to detect the precise contours of lane markings rather than merely determining their direction. This task is closely related to the field of semantic image segmentation; therefore the method we propose for detection is based on semantic segmentation of rectified road images.
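To make the semantic segmentation setting concrete, the sketch below labels every pixel of a rectified image as marking, defect or background using a generic classifier over simple local features. Both the feature set and the random-forest classifier are placeholder choices for illustration, not the specific model developed in this paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(gray):
        """Stack a few simple per-pixel features (illustrative choice)."""
        return np.dstack([
            gray,                                       # raw intensity
            gaussian_filter(gray, 3),                   # smoothed context
            np.hypot(sobel(gray, 0), sobel(gray, 1)),   # gradient magnitude
        ]).reshape(-1, 3)

    def segment(train_img, train_labels, test_img):
        """Predict a marking / defect / background label for every pixel."""
        clf = RandomForestClassifier(n_estimators=50)
        clf.fit(pixel_features(train_img), train_labels.ravel())
        return clf.predict(pixel_features(test_img)).reshape(test_img.shape)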
Rectified images can differ substantially depending on roadway material, time of survey and weather conditions. Therefore automatic detection tuned on one road image can perform poorly on other images. For this reason we have developed a detection algorithm which is automatically tuned with the aid of user interaction in order to perform best on each particular road. This allows accounting for the specific characteristics of every particular road, or even a road section.
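This interactive tuning can be viewed schematically as incremental retraining: each operator correction yields new labeled examples that are folded back into the training set, so later images on the same road need fewer fixes. The Python sketch below conveys the loop only; get_operator_corrections stands in for the GUI interaction and is hypothetical, and pixel_features is reused from the previous sketch.

    def interactive_mapping(images, classifier, initial_X, initial_y):
        """Schematic loop: detect, let the operator correct, retrain."""
        X, y = list(initial_X), list(initial_y)
        for img in images:
            prediction = classifier.predict(pixel_features(img))
            # Hypothetical GUI call returning features and labels of
            # the regions the operator corrected on this image.
            X_new, y_new = get_operator_corrections(img, prediction)
            X.extend(X_new)
            y.extend(y_new)
            classifier.fit(X, y)   # retrain on the accumulated corrections
        return classifier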