XVIIth ISPRS Congress (Part B3)

2. LINE DETECTION AND GROUPING 
Despite the large amount of research, effective
extraction of straight lines has remained a difficult
problem. Usually a local operator is used to detect
local discontinuities or rapid changes in some image
feature, followed by an aggregation procedure that links
the local edges into more global structures. These methods
include Hough transforms [9, 15], edge tracking and
contour following, curve fitting, etc. In our
research, the following methods have been implemented.
Edge detection and filtering 
The Sobel operator is employed to detect local edges. In
most practical situations the image data are noisy
and, since edges are high spatial-frequency events, edge
detection enhances the noise. In order to obtain reliable
global information for later processing, an optimization
method developed by Duncan has been implemented,
which is aimed at providing a bridge between local,
low-level edge and line detection and higher-level object
boundary delineation, using a new form of continuous labelling.
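For illustration, a minimal sketch of this first stage in Python (NumPy and SciPy assumed); the Duncan optimization itself is not reproduced here, only the Sobel gradients it starts from:

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image):
    """Local edge detection with the Sobel operator.

    Returns gradient magnitude and orientation. Because edges are
    high spatial-frequency events, the magnitude image also amplifies
    noise, which is why a subsequent filtering/optimization stage
    (such as Duncan's) is needed before global processing.
    """
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # in (-pi, pi]
    return magnitude, orientation
```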
Extracting straight lines from edge direction
Based on the observation by Burns, edge gradient
orientation can serve as a very good basis for extracting
line-support regions. In our research, we have used Duncan's
technique to bridge edge-orientation gaps caused by
noise and possible irregularity of the object boundary.
Following this, there are four steps in extracting straight
lines (steps 1, 2 and 4 are sketched in the code after this list):
1). Segment and label pixels into line-support
regions based on similarity of gradient orientation.
2). Use the least-squares method to accurately locate the
straight-line position within each line-support region.
3). Verify each straight line by comparing the located
line with the contour of average intensity grey value
passing through the line-support region.
4). Calculate attributes for each line,
e.g. contrast, length, orientation, etc.
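The following sketch illustrates steps 1, 2 and 4 under simplifying assumptions: gradient orientations are quantised into a fixed number of bins, and the parameters n_bins, mag_thresh and min_pixels are illustrative, not values from the paper. It is greatly simplified relative to the Burns method, and the verification of step 3 is omitted:

```python
import numpy as np
from scipy import ndimage

def extract_straight_lines(magnitude, orientation, n_bins=8,
                           mag_thresh=None, min_pixels=10):
    """Steps 1, 2 and 4: group pixels into line-support regions by
    gradient orientation, then fit a straight line to each region."""
    if mag_thresh is None:
        mag_thresh = magnitude.mean()
    # Step 1: quantise gradient orientation into coarse bins; connected
    # pixels falling into the same bin form a line-support region.
    bins = np.floor((orientation + np.pi) / (2 * np.pi) * n_bins)
    bins = bins.astype(int) % n_bins
    lines = []
    for b in range(n_bins):
        mask = (bins == b) & (magnitude > mag_thresh)
        labels, n = ndimage.label(mask)
        for lab in range(1, n + 1):
            ys, xs = np.nonzero(labels == lab)
            if xs.size < min_pixels:
                continue
            # Step 2: total least-squares line fit via the principal
            # axis of the region's coordinate covariance matrix.
            cx, cy = xs.mean(), ys.mean()
            cov = np.cov(np.vstack([xs - cx, ys - cy]))
            evals, evecs = np.linalg.eigh(cov)
            dx, dy = evecs[:, np.argmax(evals)]   # dominant direction
            t = (xs - cx) * dx + (ys - cy) * dy
            p0 = (cx + t.min() * dx, cy + t.min() * dy)
            p1 = (cx + t.max() * dx, cy + t.max() * dy)
            # Step 4: attributes for each line (contrast, length,
            # orientation).
            lines.append({
                "endpoints": (p0, p1),
                "length": float(t.max() - t.min()),
                "orientation": float(np.arctan2(dy, dx)),
                "contrast": float(magnitude[ys, xs].mean()),
            })
    return lines
```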
Perceptual grouping 
Linear structures are usually broken due to a variety of
factors, such as markings, noise in the image, and
inadequacies in the low-level processes. Additionally,
some of the breaks are due to real structures in the
image. Because the Duncan filter can only bridge gaps of
one or two pixels, additional perceptual grouping
in vector form is required. Grouping of straight lines
has been the subject of much investigation, and the reader
is referred to the literature. Because we only
use straight lines in our current implementation for
matching, only collinearity is considered in grouping
image line segments (a minimal collinearity test is
sketched below). A more precise way to implement
this decision would be to use 3D information to test
whether the two line segments lie on the same plane,
which in turn depends on the matching result.
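A minimal collinearity test might look as follows; the three thresholds (orientation difference, perpendicular offset, endpoint gap) are illustrative assumptions, not values from the paper:

```python
import math

def collinear(seg_a, seg_b, max_angle=math.radians(5.0),
              max_offset=2.0, max_gap=20.0):
    """Decide whether two line segments may be grouped as collinear.

    seg_a, seg_b: ((x0, y0), (x1, y1)) in pixel coordinates.
    """
    (ax0, ay0), (ax1, ay1) = seg_a
    (bx0, by0), (bx1, by1) = seg_b
    ang_a = math.atan2(ay1 - ay0, ax1 - ax0)
    ang_b = math.atan2(by1 - by0, bx1 - bx0)
    # Orientation difference, modulo pi (a line has no direction).
    d_ang = abs(ang_a - ang_b) % math.pi
    if min(d_ang, math.pi - d_ang) > max_angle:
        return False
    # Perpendicular offset of seg_b's midpoint from seg_a's carrier line.
    mx, my = (bx0 + bx1) / 2.0, (by0 + by1) / 2.0
    nx, ny = -math.sin(ang_a), math.cos(ang_a)   # unit normal of seg_a
    if abs((mx - ax0) * nx + (my - ay0) * ny) > max_offset:
        return False
    # Gap between the closest endpoints of the two segments.
    gap = min(math.hypot(pa[0] - pb[0], pa[1] - pb[1])
              for pa in seg_a for pb in seg_b)
    return gap <= max_gap
```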
3. CANDIDATE MATCHING IN IMAGE SPACE 
Feature-based matching is a very common technique in
image space. The most commonly used features have
been edges detected in the image. However, edges as
primitives may be too local. In our approach, we match
straight lines, which consist of connected edges, and
hence inter-scanline connectivity is implicit in the
matching process.
Constraints of matching 
Marr and Poggio have suggested the use of the following
two constraints:
1). Uniqueness. Each point in an image may be
assigned at most one disparity value.
2). Continuity. Matter is cohesive; therefore disparity
values change smoothly, except at a few depth
discontinuities.
In our image/object dual-space matching approach, we
enforce the uniqueness constraint in the relaxation
procedure, but replace the continuity constraint with the
general geometric constraints of the scene.
Baker also proposed an additional "ordering" constraint for
non-transparent objects, which is valid in most
cases. We keep this constraint in the image space
because it is easier to implement there than in the
object space (a minimal check is sketched below).
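A minimal sketch of the ordering check, assuming the x-coordinates of two candidate features are taken along corresponding epipolar lines (function and parameter names are our own):

```python
def ordering_consistent(xa_left, xb_left, xa_right, xb_right):
    """Baker's ordering constraint for non-transparent scenes: if
    feature a lies left of feature b along an epipolar line in one
    image, their candidate matches must keep the same left-to-right
    order in the other image."""
    return (xa_left - xb_left) * (xa_right - xb_right) >= 0
```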
Matching attributes 
After the low-level processing, the line segments are 
described by 
-- coordinates of the end points 
-- orientation 
-- strength (average contrast)
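For concreteness, these attributes could be carried in a small record type; the field names are our own, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class LineSegment:
    """A line segment as described after low-level processing."""
    x0: float           # first end point, x
    y0: float           # first end point, y
    x1: float           # second end point, x
    y1: float           # second end point, y
    orientation: float  # direction of the segment, radians
    strength: float     # average contrast along the segment
```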
Matching criteria: 
On this aspect, we use some of the criteria developed by
Medioni and Nevatia.
-- overlapping: for a detailed explanation the reader is
referred to Medioni. This is in effect another version of the
epipolar geometric constraint. We calculate the
corresponding threshold by the following formula:
t_y = αW + d_y                                  (1)
where
α   is the estimate of the error in the angular
    parameters of the camera geometry; note that
    there are in total three angular parameters
    describing the orientation of a camera, and α
    here is an overall estimate of these three
    angular errors;
W   is the width of the photograph;
d_y is the estimate of the error of the y-direction
    shift between the two images.
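A sketch of the threshold of Eq. (1) and of a simple y-overlap test built on it; the overlap test follows Medioni and Nevatia only in spirit, and all names and parameters are our own:

```python
def overlap_threshold(alpha, width, d_y):
    """Eq. (1): t_y = alpha * W + d_y, where alpha is an overall
    estimate of the three angular orientation errors of the camera
    (radians), width is the photograph width, and d_y is the estimated
    y-direction shift error between the two images (both in pixels)."""
    return alpha * width + d_y

def y_overlap(seg_a, seg_b, t_y):
    """Overlapping criterion: two segments ((x0, y0), (x1, y1)) are
    match candidates only if their y-extents overlap within t_y."""
    a_lo, a_hi = sorted((seg_a[0][1], seg_a[1][1]))
    b_lo, b_hi = sorted((seg_b[0][1], seg_b[1][1]))
    return min(a_hi, b_hi) - max(a_lo, b_lo) >= -t_y
```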
 
	        