
ISPRS Commission III, Vol. 34, Part 3A "Photogrammetric Computer Vision", Graz, 2002
  
RECOVERING FACADE TEXTURE AND MICROSTRUCTURE FROM REAL-WORLD IMAGES 
Xiaoguang Wang", Stefano Totaro^, Franck Taillandier^, Allen R. Hanson?, and Seth Teller“ 
^ Cognex Corporation, Natick, MA 01760, USA - xwang@cs.umass.edu 
? Dipartimento di Elettronica ed Informatica, Univ. of Padua, Italy - tost@dei.unipd.it 
* [nstitut Géographique National, Saint-Mandé Cédex, France - franck.taillandier@ign.fr 
“ Dept. of Computer Science, Univ. of Massachusetts, Amherst, MA 01003, USA - hanson@cs.umass.edu 
* Laboratory for Computer Science, MIT, Cambridge, MA 02139, USA - seth@graphics.les.mit.edu 
Commission III, WG 7
KEY WORDS: Texture, Surface Geometry, Texture Fusion, Microstructure, Urban Site Reconstruction 
ABSTRACT: 
We present a set of algorithms that recovers detailed building surface structures from multiple images taken under normal urban conditions, where severe occlusions and lighting variations occur and are difficult to model effectively. An iterative weighted-average algorithm is designed to recover high-quality consensus texture of the wall facades. A generic algorithm is developed to extract the 2D microstructures. Depth is estimated and combined with the 2D structures to obtain 3D structures, facilitating urban site model refinement and visualization.
1 INTRODUCTION 
Extracting and rendering detailed 3D urban environments is an important problem in computer vision and computer graphics because of its numerous applications. The main bottleneck of this problem lies in the need for human intervention in current systems, preventing them from scaling to large datasets (Teller, 1998). A large body of research has been devoted to automating some of the processes, including reconstruction of coarse 3D geometric models, mainly at the level of buildings (Collins et al., 1998; Coorg and Teller, 1999; Firschein and Strat, 1996). Detailed analysis of facade texture and substructures has been very limited (Mayer, 1999).
Real-world texture and detailed structure are important because they provide high visual realism as well as cultural and functional information about the urban site. However, recovering them becomes increasingly difficult when the information of concern is detailed to the degree of microstructure (surface structures, such as windows, that possess few supporting pixels due to insufficient image resolution). Figure 1(a2, b2, c2, d) shows some sample (rectified) images of a real-world building facade captured from different viewpoints. Large portions of useful texture are either occluded by other objects or degraded by significant lighting variations. Some occlusions are caused by regular structures, such as other buildings, which could be modeled using existing techniques; others are caused by irregular objects, such as trees, which are very difficult to model effectively.
In summary, a detailed analysis of such images poses a difficult problem due to various factors that affect image quality, including (1) varying resolution due to perspective effects, (2) noise introduced during acquisition, (3) non-uniform illumination caused by lighting variations and complex environmental settings, (4) occlusions caused by modeled objects, such as other buildings, and (5) occlusions caused by unmodeled objects, such as trees, utility poles, and cars. A system must be capable of dealing with all of these coexisting factors in order to facilitate a detailed analysis. In addition, interactive methods are not desirable because of the large number of pixels and structures present in many situations (e.g. more than a thousand windows for four or five buildings).
We develop an automated method for effectively recovering high-quality facade texture and its microstructure pattern from multiple images. The input to our method is a set of images annotated with camera intrinsic parameters and reasonably accurate (but not exact) camera pose, as well as a coarse geometric model, mainly the facade planes of buildings in the urban site.
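To make this input concrete, the sketch below shows one possible representation of an annotated image and a coarse facade plane, assuming a standard pinhole camera model. The class and field names are ours, introduced purely for illustration; they do not come from the paper or from the City Scanning Project code.

from dataclasses import dataclass
import numpy as np

@dataclass
class AnnotatedImage:
    """One input image together with its (approximate) calibration."""
    pixels: np.ndarray   # H x W x 3 color image
    K: np.ndarray        # 3 x 3 intrinsic matrix
    R: np.ndarray        # 3 x 3 world-to-camera rotation (approximate pose)
    t: np.ndarray        # 3-vector translation (approximate pose)

@dataclass
class FacadePlane:
    """A coarse facade from the geometric model: an origin and two in-plane axes."""
    origin: np.ndarray   # 3D world point at one facade corner
    u_axis: np.ndarray   # unit vector along the facade width
    v_axis: np.ndarray   # unit vector along the facade height
    width: float         # extent along u_axis (meters)
    height: float        # extent along v_axis (meters)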
The information required as input to our method is available using existing techniques developed in the City Scanning Project (Teller, 1998). Image acquisition is performed by a semi-autonomous robot (Bosse et al., 2000), a movable platform equipped with a differential GPS sensor, a relative motion sensor, and a rotating CCD camera. The robot acquires thousands of images from different locations (called nodes) on the ground surface, annotating the images with metadata such as time and pose estimates. The spatial positions of the nodes are refined using feature correspondences across images (Antone and Teller, 2000). A set of facades, each corresponding to a wall surface of a building, is then extracted from the original images using a priori constraints (such as vertical surfaces and horizontal lines) to establish the geometric model of the urban site (Collins et al., 1998; Coorg and Teller, 1999).
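As a rough illustration of how such a coarse facade plane and an approximate camera pose allow each photograph to be resampled into a rectified facade view (the kind of rectified images referred to in Figure 1), the hypothetical sketch below projects facade-plane texels into one image through a pinhole model. It reuses the illustrative types defined above and is not the project's actual implementation.

def rectify_onto_facade(img: AnnotatedImage, facade: FacadePlane,
                        texels_per_meter: float = 20.0) -> np.ndarray:
    """Resample one image onto the facade plane (nearest neighbor, for brevity)."""
    W = int(facade.width * texels_per_meter)
    H = int(facade.height * texels_per_meter)
    texture = np.zeros((H, W, 3), dtype=img.pixels.dtype)
    for r in range(H):
        for c in range(W):
            # 3D world point on the facade corresponding to texel (r, c)
            X = (facade.origin
                 + (c / texels_per_meter) * facade.u_axis
                 + (r / texels_per_meter) * facade.v_axis)
            # Pinhole projection: x ~ K (R X + t)
            x_cam = img.R @ X + img.t
            if x_cam[2] <= 0:        # point lies behind the camera
                continue
            u, v, w = img.K @ x_cam
            ui, vi = int(round(u / w)), int(round(v / w))
            if 0 <= vi < img.pixels.shape[0] and 0 <= ui < img.pixels.shape[1]:
                texture[r, c] = img.pixels[vi, ui]
    return texture

In practice this mapping is a plane-to-image homography, so the per-texel loop would be replaced by a single warp with proper interpolation; the loop is kept here only to make the geometry explicit.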
Section 2 describes an iterative, weighted-average approach to high-quality facade texture recovery. Sections 3 and 4 introduce 2D and 3D methods for microstructure extraction, respectively. Section 5 concludes the paper with a discussion.
2 TEXTURE RECOVERY 
Facade texture recovery is itself an important task for computer graphics applications; it is even more important when microstructure is of concern, because a high-fidelity texture representation is key to the success of detailed image analysis. Multi-view methods have been proposed for texture fusion/recovery, such as interpolation methods (Debevec et al., 1996), reflectance models (Sato et al., 1997), and inpainting techniques (Bertalmio et al., 2000). The main drawback of these methods is that they do not handle occlusions automatically. A system introduced by Wang and Hanson (2001) is capable of determining occlusions caused by regular, modeled structures.
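Purely as a hedged sketch of the iterative weighted-average idea named at the start of this section (a simplified formulation of our own, not necessarily the weighting scheme used in the paper), a consensus texture can be estimated by repeatedly averaging the co-registered rectified views while down-weighting texels that disagree with the current consensus, so that occluded or badly lit observations contribute little.

import numpy as np

def consensus_texture(rectified_views, n_iters: int = 5,
                      sigma: float = 20.0) -> np.ndarray:
    """Iteratively re-weighted average of co-registered facade textures.

    rectified_views: list of H x W x 3 arrays resampled onto the same facade grid.
    """
    views = np.stack([v.astype(np.float64) for v in rectified_views])  # N x H x W x 3
    consensus = views.mean(axis=0)
    for _ in range(n_iters):
        # Color distance of every observation from the current consensus
        diff = np.linalg.norm(views - consensus[None], axis=-1)         # N x H x W
        # Observations that deviate strongly (occluders, shadows) get small weights
        weights = np.exp(-(diff / sigma) ** 2)[..., None]               # N x H x W x 1
        consensus = (weights * views).sum(axis=0) / (weights.sum(axis=0) + 1e-9)
    return consensus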