International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia
TOWARD AUTOMATED FACADE TEXTURE GENERATION FOR 3D PHOTO-REALISTIC CITY MODELLING WITH SMARTPHONES OR TABLET PCS
Sendo Wang
Department of Geography, National Taiwan Normal University, 162 HePing East Rd. Sec. 1, Taipei 10610, Taiwan - sendo@ntnu.edu.tw
Commission III, WG III/4
KEY WORDS: Three-dimensional Building Modelling, Photo-realistic City Model, Texture Rendering, Direct Georeferencing,
Automation, Digital Close-range Photogrammetry, Virtual Reality
ABSTRACT:
An automated model-image fitting algorithm is proposed in this paper for generating facade texture images from pictures taken by smartphones or tablet PCs. Facade texture generation requires tremendous manual labour and has therefore been the bottleneck of 3D photo-realistic city modelling. With the advances of micro-electro-mechanical systems (MEMS), a camera, a global positioning system (GPS) receiver, and gyroscopes (G-sensors) can all be integrated into a smartphone or a tablet PC. These sensors make direct georeferencing of the pictures taken by smartphones or tablet PCs possible. However, since the accuracy of these sensors cannot be compared with that of surveying instruments, the image position and orientation derived from them are not adequate for photogrammetric measurements. This paper adopts the least-squares model-image fitting (LSMIF) algorithm to iteratively improve the image's exterior orientation. The image position from the GPS receiver and the image orientation from the gyroscope are treated as initial values. By fitting the projection of the wireframe model to the edge pixels extracted from the image, the exterior orientation elements are solved when the optimal fitting is achieved. With the exact exterior orientation elements, the wireframe model of the building can be correctly projected onto the image and, therefore, the facade texture image can be extracted from the picture.
1. INTRODUCTION
A photo-realistic 3D building model not only describes the geometric information about the building but also represents its real appearance. There are a number of approaches for reconstructing the geometric model from photogrammetric images, from LiDAR point clouds, or from both (Braun et al., 1995; Chapman et al., 1992; Förstner, 1999; Grün, 2000; Lang and Förstner, 1996; Lowe, 1991; Tseng and Wang, 2003; Veldhuis, 1998; Wang and Tseng, 2009). However, facade mapping, which relies on manual operations to create texture images, is still the bottleneck of photo-realistic building modelling. Recent mobile computing devices, such as smartphones and tablet PCs, are usually equipped with not only a high-resolution camera but also a built-in GPS receiver and G-sensors. These sensors can be used for direct georeferencing while taking pictures of buildings. This paper proposes a concept toward automated facade texture generation for photo-realistic 3D building modelling using smartphones or tablet PCs. When a picture is taken, the device's 3D coordinates are recorded from the built-in GPS receiver and its three rotation angles are recorded from the G-sensors. However, these parameters are too rough to reconstruct the object-space stereo model for photogrammetric purposes. Therefore, a model-image fitting algorithm based on least-squares adjustment is proposed to determine the precise image orientation.
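For reference, the geometric relation that such a least-squares fitting linearizes is the standard collinearity condition between an object point (X, Y, Z) and its image point (x, y); the notation below follows common photogrammetric practice rather than this paper's own symbols:

x = x_0 - f \frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}, \qquad
y = y_0 - f \frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}

where (X_L, Y_L, Z_L) is the exposure station initialized from the GPS reading, the elements m_{ij} form the rotation matrix M(\omega, \varphi, \kappa) initialized from the G-sensor angles, f is the focal length, and (x_0, y_0) is the principal point. Linearizing these equations with respect to the six exterior orientation elements and iteratively solving the resulting normal equations yields the corrections that bring the projected wireframe onto the extracted edge pixels.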
The reconstruction of photo-realistic 3D building models consists of three major issues: (1) modelling the object; (2) determining the image orientation; and (3) creating the realistic texture image from photos. In this paper, aerial photographs are used to reconstruct the geometric models of the buildings, while the pictures taken by the personal computing device are used as the facade texture. By introducing the "Floating Model" concept, the object modelling and image orientation problems can be solved efficiently through semi-automated procedures based on Least-squares Model-image Fitting (LSMIF). A friendly human-machine interactive interface program is designed for an operator to choose a suitable model and to move, rotate, or resize it so that it approximately fits all of the images. An ad-hoc Least-squares Model-image Fitting algorithm is developed to solve the optimal fit between the projected model line segments and the extracted edge pixels. Once the object model is extracted and the photo orientation is determined, the creation of the realistic texture image, which is also called inverse mapping, can be automated by coordinate transformation and image resampling. Figure 1 shows the workflow of the proposed photo-realistic 3D building modelling procedure.
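The inverse-mapping step can be illustrated with a short sketch. The following Python fragment is an illustration only; OpenCV and the function and parameter names are assumptions for this sketch, not part of the paper. It rectifies the quadrilateral facade region delimited by the projected wireframe corners into a regular texture image through a projective transformation and bilinear resampling.

import numpy as np
import cv2

def extract_facade_texture(photo, facade_corners_px, tex_size=(1024, 768)):
    """Rectify the quadrilateral facade region of `photo` into a texture image.

    photo             : the smartphone picture as an H x W x 3 array
    facade_corners_px : 4 x 2 array of the projected facade corners in pixel
                        coordinates, ordered upper-left, upper-right,
                        lower-right, lower-left
    tex_size          : (width, height) of the output texture in pixels
    """
    w, h = tex_size
    src = np.asarray(facade_corners_px, dtype=np.float32)
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                   dtype=np.float32)
    # projective (homography) transform from photo coordinates to texture coordinates
    H = cv2.getPerspectiveTransform(src, dst)
    # resample the photo onto the regular texture grid (bilinear interpolation)
    return cv2.warpPerspective(photo, H, (w, h), flags=cv2.INTER_LINEAR)

The same transformation applies to every facade of the wireframe model once its corners have been projected with the refined exterior orientation.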
In the proposed workflow, the "model selection" step relies on the operator's knowledge and interaction, which makes the procedure more robust, while the fitting of the "model projection" to the image is carried out by automated procedures and image processing methods.

2. FLOATING MODELS

To deal with the variety of building shapes, the concept of floating models is adopted. The primitive models can be categorized as planar or volumetric primitives: the planar models include the rectangle and other simple shapes, while the volumetric models include the box, the cylinder, and other solids. Each primitive is defined by a set of parameters with respect to a datum point. The pose parameters, namely 3 translations along and 3 rotations around the X-, Y-, and Z-axes, locate and orient the primitive in object space; changing them makes the model "float" in object space. The volumetric primitives possess not only pose parameters but also shape parameters, such as the width, length, and height, which determine the shape and size of the primitive. Changing the shape parameters of a box primitive, for example, produces rectangular boxes of different sizes without changing the topological characteristics of the original model.
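As an illustration of this parameterization (the names below are assumptions made for the sketch, not the paper's implementation), a volumetric box primitive can be generated in object space from its datum point, its six pose parameters, and its three shape parameters:

import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotations about the X-, Y-, and Z-axes (angles in radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def box_vertices(datum, pose, shape):
    """Return the 8 wireframe vertices of a floating box primitive in object space.

    datum : (X, Y, Z) coordinates of the datum point
    pose  : (dX, dY, dZ, omega, phi, kappa) - 3 translations and 3 rotations
    shape : (width, depth, height) shape parameters of the box
    """
    w, d, h = shape
    # the 8 vertices in the local frame, with the datum point at one lower corner
    local = np.array([[x, y, z] for z in (0.0, h) for y in (0.0, d) for x in (0.0, w)])
    dX, dY, dZ, omega, phi, kappa = pose
    R = rotation_matrix(omega, phi, kappa)
    return (R @ local.T).T + np.asarray(datum, dtype=float) + np.array([dX, dY, dZ])

Fitting then amounts to adjusting the pose (and, where required, shape) parameters so that the projected edges of these vertices coincide with the edges extracted from the images.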