human operator interprets the scene by modelling a 
geometric approximation of it in the CAAD program 
(figure 3). The measurement is then handled 
automatically by the DIPS, based on this 
approximation. This procedure is referred to as 3D 
feature extraction. 
1.3 CAD-based 3D feature extraction 
To establish feature extraction that provides both high precision and high reliability, a top-down strategy is chosen. The semantic object model 
is used to detect the features described by this 
model. Thus only relevant features are extracted 
and redundant information and data complexity are 
reduced to a minimum. 
The three-dimensional position of the object is 
derived by simultaneous multi-frame feature 
extraction, whereby the object model is 
reconstructed and used to triangulate the object 
points from corresponding image points. 
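To make the triangulation step concrete, the following sketch shows one common linear (DLT) formulation for recovering a single object point from its corresponding image points in several frames. It is our own minimal illustration, not the implementation of the system described here; the projection matrices and image measurements are assumed to be available as NumPy arrays from the orientation step.

```python
import numpy as np

def triangulate_point(projections, image_points):
    """Linear (DLT) triangulation of one object point.

    projections  -- list of 3x4 camera projection matrices, one per image
    image_points -- list of corresponding image points (x, y), one per image
    """
    rows = []
    for P, (x, y) in zip(projections, image_points):
        # Each observation contributes two linear equations in the
        # homogeneous object point X = (X, Y, Z, 1).
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # Solution: right singular vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize
```

With more than two frames the system is over-determined, which is what makes it possible to detect gross measurement errors in a single image.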
It is evident that in most cases linear boundaries 
(edges) of an architectural feature contain more 
information than the vertices (corners) of this 
feature. Although edges are only a small percentage 
of the whole image content, they have major 
importance for the description of object 
discontinuities. The CAD-based 3D feature 
extraction routine takes advantage of this 
knowledge. It first locates the edges of the features 
to be measured and then derives the vertices as 
intersections of appropriate lines. 
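The derivation of a vertex from two extracted edges can be sketched as follows. This is an illustrative fragment under our own assumptions: the edges are represented as homogeneous image lines defined by two points each, and the example coordinates are invented.

```python
import numpy as np

def line_through(p1, p2):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines; returns (x, y)."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]          # assumes the lines are not parallel

# Example: corner of a window opening derived from two of its edges
edge_a = line_through((10.0, 12.0), (110.0, 14.0))   # roughly horizontal edge
edge_b = line_through((100.0, 5.0), (102.0, 95.0))   # roughly vertical edge
corner = intersect(edge_a, edge_b)
```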
The position of the edge is determined with 
subpixel precision by fitting a second-order 
polynomial in the direction of the gradient. The 
maximum of the fitted curve corresponds to 
the subpixel position of the edge. The covariance 
matrix of the estimated polynomial parameters 
represents the accuracy of the edge point. 
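A minimal sketch of this subpixel localization is given below. It is our own illustration: the gradient magnitudes are assumed to be sampled at unit steps along the gradient direction, the number of samples and the noise standard deviation `sigma` are hypothetical, and the paper does not specify these details.

```python
import numpy as np

def subpixel_edge(samples, sigma=1.0):
    """Fit g(t) = a*t^2 + b*t + c to gradient magnitudes sampled at unit
    steps along the gradient direction; return the subpixel offset of the
    maximum and the covariance matrix of the estimated parameters."""
    t = np.arange(len(samples)) - len(samples) // 2
    A = np.column_stack([t**2, t, np.ones_like(t)])
    params, *_ = np.linalg.lstsq(A, np.asarray(samples, float), rcond=None)
    a, b, _ = params
    offset = -b / (2.0 * a)                  # vertex of the parabola (a < 0 for a maximum)
    cov = sigma**2 * np.linalg.inv(A.T @ A)  # covariance of the polynomial parameters
    return offset, cov

# e.g. gradient magnitudes sampled at t = -2 .. 2 across an edge
offset, cov = subpixel_edge([5.0, 9.0, 12.0, 10.0, 6.0])
```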
The 3D feature extraction is described in more detail in [4][5]. 
1.4 Automation 
We described the 3D feature extraction as a semi- 
automated top-down procedure. A CAD-generated 
feature is matched with corresponding images. In 
fact, any computer-vision strategy has ultimately a 
very strong top-down component. Theorists have 
pointed out that this is true for human perception 
as well [7][8]. Seeing is largely recognizing; it is, so to speak, much more a top-down than a bottom-up process. 
Human perception happens simultaneously at many 
different levels. If we want to, we can perceive the 
world around us as being composed of lines or of 
colours. But it is most natural for us to see it as being 
composed of objects. It's at the level of objects that 
we can understand the world. To raise the degree of automation, we argue that in architectural photogrammetry, too, the evaluation must be based on the notion of objects. 
This is true not only for providing qualitative guidance in the computer measurement process, which we will discuss later on. The notion of objects is also the prerequisite for a possible interpretation of the scene by a computer program. It should be mentioned here, however, that an automatic interpretation of an architectural object faces many difficulties, not the least of them being that there is simply no single correct way to model an architectural object. This is obvious not only from well-known texts on architectural theory [9][10][11]; it also follows quite simply from the fact that no two CAAD operators will model the same building in the same way. 
This doesn't mean that automation has to stop here. Increasing automation of the whole modelling and measuring process could instead be achieved by a computer-learning mechanism that allows the user to teach his modelling preferences to the system. An object-oriented data integration is, as we will show, the essential prerequisite for these functionalities. 
2. DATASTRUCTURES IN DIPS AND IN CAAD 
2.1 General Considerations 
Figure 1: Schematic comparison of the datastructures in DIPS and CAAD: uniqueness of points is not adhered to in CAAD 
Comparing the datastructures commonly used in digital photogrammetry with those found in CAAD systems, one essential difference can be stated as universal: the use of unique points in photogrammetry is not adhered to in CAAD. CAAD datastructures are geared towards modelling capabilities, for which discrete elements have proven to be useful (see figure 1). 
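The difference can be made concrete with a small sketch (our own illustration, not taken from either system): in a photogrammetric datastructure a 3D point exists once and is only referenced, whereas a typical CAAD structure stores an independent coordinate copy in every element that uses the point.

```python
# Photogrammetry-style: unique 3D points, referenced by id
points = {1: (0.0, 0.0, 0.0), 2: (4.0, 0.0, 0.0), 3: (4.0, 3.0, 0.0)}
edges = [(1, 2), (2, 3)]            # topology refers to shared point ids

# CAAD-style: each element carries its own copy of the coordinates
line_a = ((0.0, 0.0, 0.0), (4.0, 0.0, 0.0))
line_b = ((4.0, 0.0, 0.0), (4.0, 3.0, 0.0))
# The common corner exists twice; moving it means editing both lines,
# and nothing in the datastructure states that the two copies coincide.
```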
Furthermore it can be said that structuring means 
that go beyond points, lines and layers are to this 
day rather rare in photogrammetry systems. Means 
of a higher level do exist in CAAD, but not in photogrammetry. 
Another difference concerns the exchange of the data involved in the process: for photogrammetric data there is usually no standard exchange format, whereas for CAAD data one format is used throughout the world: DXF. 
While there are efforts to improve compatibility and the exchange of data, it should be pointed out that, in discussing a joint development of DIPS and CAAD, the decisive question is how a datastructure organizes and provides information. 
2.2 Data-Structure of the Photogrammetric System 
The datastructure of the photogrammetric part of the project is built around unique points: there is only one 3D representation of each point or object, and it is referenced wherever that point occurs. Every image holds the 2D positions measured in it, but they always refer to this unique 3D point or object; the 3D representation is the basis of the feature extraction. 
2.3 Data-Structure of the DXF Standard 
DXF is one of the oldest exchange formats in the CAAD world, dating back to 1982, when it was introduced with AutoCAD for the exchange of drawings between different computers and operating systems. The format changes with every release of AutoCAD. Since AutoCAD covers almost the entire CAAD market, DXF has become a de facto standard. 
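The flat, tagged structure of DXF can be illustrated with a short sketch. This is our own simplified example based on the publicly documented ASCII DXF format, not on the system discussed here; the helper names `read_dxf_pairs` and `extract_lines` are hypothetical. An ASCII DXF file is a sequence of group-code/value line pairs, from which entities such as LINE have to be reassembled.

```python
def read_dxf_pairs(path):
    """Yield (group code, value) pairs from an ASCII DXF file."""
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    for code, value in zip(lines[0::2], lines[1::2]):
        yield int(code), value.strip()

def extract_lines(path):
    """Collect the LINE entities of a DXF file as pairs of 3D points."""
    segments, current = [], None
    for code, value in read_dxf_pairs(path):
        if code == 0:                     # group code 0 starts a new entity
            if current is not None:
                start = tuple(current.get(c, 0.0) for c in (10, 20, 30))
                end = tuple(current.get(c, 0.0) for c in (11, 21, 31))
                segments.append((start, end))
            current = {} if value == "LINE" else None
        elif current is not None and code in (10, 20, 30, 11, 21, 31):
            current[code] = float(value)  # start and end point coordinates
    return segments
```

Every entity arrives as an isolated set of coordinate values, which underlines the point made above: nothing in the format enforces the uniqueness of points.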
	        