\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} =
\begin{bmatrix}
s R_{11} & s R_{12} & s R_{13} & t_X \\
s R_{21} & s R_{22} & s R_{23} & t_Y \\
s R_{31} & s R_{32} & s R_{33} & t_Z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where R is the rotation matrix, s the scale, and t the shift between the building geometric model and the image.
We can conveniently display the 3-D building object with texture using perspective projection, based on viewing angles and scale, once the relationship between the building geometric model and the ortho-rectified image has been established by the method described above. The rendering result is shown in Figure 5a, where the ortho-rectified roof image is used as the texture and overlaid on the building model in ArcScene.
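As a minimal sketch of this step (not the paper's implementation), the following Python code applies a homogeneous transform built from a rotation R, scale s, and shift t to building-model vertices and projects them onto a view plane with a simple pinhole model. The vertex coordinates, focal length f, and function names are illustrative assumptions.

import numpy as np

def similarity_transform(R, s, t):
    # Assemble a 4x4 homogeneous matrix from rotation R (3x3), scale s, and shift t (3,)
    M = np.eye(4)
    M[:3, :3] = s * np.asarray(R)
    M[:3, 3] = t
    return M

def project_perspective(points_xyz, M, f=1.0):
    # Transform model vertices into the viewing frame and project by dividing by depth
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (M @ homog.T).T[:, :3]
    return cam[:, :2] * (f / cam[:, 2:3])

# illustrative roof-corner vertices (X, Y, Z) of one building in model coordinates
corners = np.array([[0.0, 0.0, 15.0], [20.0, 0.0, 15.0],
                    [20.0, 12.0, 15.0], [0.0, 12.0, 15.0]])
M = similarity_transform(np.eye(3), s=1.0, t=[0.0, 0.0, 50.0])  # push the scene 50 units along Z
print(project_perspective(corners, M, f=2.0))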
However, this data structure is not adequate to represent the textures of both the roof and the walls at the same time for display in ArcScene. ArcScene does not support display of the PolygonZ format, which we chose for storing the geometric model of the buildings, when several patches belong to the same object. Therefore, we cannot use the ArcScene rendering class for wall texturing, even though the shapefile contains the geospatial information of the vertical walls; they need to be treated separately.
Figure 5. Textured Building Roof (a) and Walls (b)
To display wall textures, we add a graphic layer to represent the texture of the vertical walls. That layer is generated only for visualization and is not involved in the data structure or queries, because it carries no attribute or geospatial data. All information about the vertical walls is contained in the same shapefile that includes the polygon data for the roof. Using the spatial data about the building walls in the building layer, we create each wall polygon temporarily and associate the texture with that polygon. We then obtain the parameters that relate the texture image to the view plane in the same way as for roof texturing, convert the textured polygons into graphics, and save them. This process is repeated until the texture of every wall of the entire building has been generated. The next step is to link an image file to the appropriate wall. Taking pictures of every building wall in the study area is practically impossible and unnecessary. Therefore, we categorized the building walls according to their material, color, and
height-to-width ratio. Three types of walls are used in this study: dark red brick, light red brick, and gray concrete, in various sizes.
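A simplified sketch of how temporary wall polygons could be derived from the roof footprint and paired with one of the three texture groups is given below; the footprint coordinates, elevations, file names, and function names are assumptions for illustration, not the ArcObjects code used in the study.

def make_wall_polygons(footprint_xy, roof_z, ground_z):
    # Extrude each footprint edge into a vertical wall polygon (closed ring of four corners)
    walls = []
    n = len(footprint_xy)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint_xy[i], footprint_xy[(i + 1) % n]
        walls.append([(x1, y1, ground_z), (x2, y2, ground_z),   # bottom edge
                      (x2, y2, roof_z), (x1, y1, roof_z),       # top edge
                      (x1, y1, ground_z)])                      # close the ring
    return walls

# hypothetical texture groups corresponding to the three wall types used in this study
TEXTURE_GROUPS = {"dark_red_brick": "dark_red_brick.jpg",
                  "light_red_brick": "light_red_brick.jpg",
                  "gray_concrete": "gray_concrete.jpg"}

walls = make_wall_polygons([(0, 0), (20, 0), (20, 12), (0, 12)], roof_z=15.0, ground_z=0.0)
textured_walls = [(w, TEXTURE_GROUPS["light_red_brick"]) for w in walls]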
An ArcObjects program for 3-D photorealistic building modeling automatically selects the already grouped wall texture images and assigns them to the temporary wall objects. The program calculates the height-to-width ratio of each image and compares it with the ratio of each vertical facade. The image is then textured onto the facade with the most similar ratio. Figure 5b shows the result with both roof and wall textures rendered.
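The ratio-matching rule can be sketched as below, assuming the grouped texture images are described by their pixel dimensions; the library contents and function name are hypothetical, and the actual selection in the study is performed through ArcObjects.

def pick_texture(facade_width, facade_height, texture_library):
    # Return the texture image whose height/width ratio is closest to the facade's ratio
    facade_ratio = facade_height / facade_width
    best_name, best_diff = None, float("inf")
    for name, (img_w, img_h) in texture_library.items():
        diff = abs(img_h / img_w - facade_ratio)
        if diff < best_diff:
            best_name, best_diff = name, diff
    return best_name

# illustrative library of pre-grouped wall textures (pixel sizes are made up)
library = {"dark_red_brick_tall.jpg": (512, 1024),
           "light_red_brick_square.jpg": (512, 512),
           "gray_concrete_wide.jpg": (1024, 512)}
print(pick_texture(facade_width=18.0, facade_height=9.0, texture_library=library))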
Once the 3-D model is built, we can load it into ArcScene or any other commercial GIS software that supports the shapefile format. The advantage of this is that we can use all of the functions that already exist in the commercial GIS software, such as changing the viewpoint, shadowing, and adding and combining data. Using those functions, the building model can be added to a city model and made more realistic.
The entire procedure described above is applied to create a photorealistic virtual model of the Purdue campus in West Lafayette. During this process, the textures for the walls are selected automatically according to the conditions mentioned earlier. The terrain model is then generated from an aerial image and a DEM (Digital Elevation Model). Finally, the virtual model of the Purdue campus is produced by integrating the 3-D building model with the terrain model. Figure 6 shows the textured buildings in a closer view; the image textures fit the building models well, both in the corner area and in the central area of the entire campus model. Figure 7 shows the campus model from a ground-level view, while Figure 8 is a panorama of the campus model. The result would be more realistic if appropriate textures were selected through manual editing and more texture groups were used according to the color and material of the walls. Landmarks such as towers and statues, and environmental objects such as trees and rocks, are hard to model and texture with the current data model and texture-mapping methods; they need to be modeled separately and will be considered in a future study.
Figure 6. A Corner of the 3-D Photorealistic Model for Purdue Campus (a) and Close View of Textured Building (b)