
L_s = 0 + 2m_o + [m_o log(m_t/m_o) + m_c log(m_t/m_c)] + m_s log(D_x D_y)    (3)
Where
  L_s      : number of bits describing the shape of a region
  m_t      : total number of points on the boundary
  m_s      : number of straight line segments on the boundary
  m_c      : number of points fulfilling the chord property
  m_o      : number of outliers
  D_x, D_y : number of pixels along the x and y directions of the image
In accordance with (2), we also use four items to encode the shape of regions. For the points on the curve which meet the chord condition, no additional coding is needed once the nodes specifying the straight line segments are known, so the first item is zero. The second item in (3) is the number of bits describing the outliers. If the boundary is encoded in Freeman chain code, 3 bits are required to store each pixel (for 8 directions); but if we store the edges between the pixels instead of the pixels themselves, only 2 bits are necessary (for 4 directions). The third term has the same meaning as in equation (2). The final component is used for coding the nodes connecting the straight line segments.
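To make the bookkeeping in (3) concrete, the following sketch computes the shape description length from the counts defined above. It is only an illustration of the reconstructed formula: the base-2 logarithm and the function name shape_description_length are assumptions, not part of the original paper.

```python
import math

def shape_description_length(m_t, m_s, m_c, m_o, D_x, D_y):
    """Bits needed to describe the shape of a region, per the reconstructed (3)."""
    # First item: points satisfying the chord condition cost nothing extra
    # once the nodes of the straight line segments are known.
    chord_bits = 0.0
    # Second item: 2 bits per outlier when the edges between pixels are
    # stored (4 directions) instead of the 3-bit Freeman code (8 directions).
    outlier_bits = 2.0 * m_o
    # Third item: cost of indicating which of the m_t boundary points are
    # outliers and which fulfil the chord property (same role as in (2)).
    split_bits = 0.0
    if m_o > 0:
        split_bits += m_o * math.log2(m_t / m_o)
    if m_c > 0:
        split_bits += m_c * math.log2(m_t / m_c)
    # Final item: coordinates of the nodes connecting the straight line
    # segments, log2(D_x * D_y) bits per node.
    node_bits = m_s * math.log2(D_x * D_y)
    return chord_bits + outlier_bits + split_bits + node_bits

# Example: a boundary of 200 points, 6 segments, 180 chord points and
# 20 outliers in a 512 x 512 image.
print(shape_description_length(m_t=200, m_s=6, m_c=180, m_o=20, D_x=512, D_y=512))
```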
4. APPLICATIONS 
Segmentation 
The segmentation quality can be significantly improved by utilizing the image intensity as well as high-level knowledge about the objects contained in the image. We have successfully integrated the shape constraint into segmentation using the three layers shown in Fig. 1, i.e. the original image, the segmented image and the vector data, where the segmentation results from operations carried out on the segmented image and from interactions with the original image and the vector data. Split-and-merge is the main mechanism of the segmentation procedure: small regions are merged into more meaningful larger regions, or a large region is split into smaller regions when required. An initial segmentation is performed to obtain basic regions from the original image. After each level of split-and-merge, a vectorization procedure converts the region boundaries into a vector description, followed by a curve fitting algorithm which derives a compact vector description based on the generic model. Based on the result of the curve fitting, a measurement is calculated using the MDL principle to describe the
uniformity of a region under the shape constraints. This measurement is integrated with the information derived from the original image intensity to improve the split-and-merge decisions. For details of this part of the work, the reader is referred to [Zhang, 92b].
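The sketch below illustrates how such a combined measurement might drive a single merge decision. The region representation (sets of pixel coordinates), the squared-deviation intensity cost and the function names are illustrative assumptions; in the actual procedure the shape term would come from equation (3) via the fitted vector description.

```python
import numpy as np

def region_cost(image, pixels, shape_bits):
    """Combined cost of one region: intensity coding cost plus shape bits.

    Simplified stand-in: the intensity cost is the squared deviation from
    the region mean; shape_bits would come from equation (3).
    """
    values = np.array([image[p] for p in pixels], dtype=float)
    return float(((values - values.mean()) ** 2).sum()) + shape_bits

def merge_is_better(image, region_a, region_b, bits_a, bits_b, bits_merged):
    """Merge two neighbouring regions only if it lowers the combined cost."""
    cost_separate = (region_cost(image, region_a, bits_a) +
                     region_cost(image, region_b, bits_b))
    cost_merged = region_cost(image, region_a | region_b, bits_merged)
    return cost_merged < cost_separate

# Toy usage: two neighbouring regions of clearly different intensity
# in a 4 x 4 image; the merge should be rejected.
img = np.array([[10, 10, 50, 52],
                [11,  9, 51, 49],
                [10, 12, 50, 50],
                [ 9, 10, 48, 51]], dtype=float)
left  = {(r, c) for r in range(4) for c in range(2)}
right = {(r, c) for r in range(4) for c in range(2, 4)}
print(merge_is_better(img, left, right, bits_a=40.0, bits_b=40.0, bits_merged=60.0))
```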
Stereo matching 
Stereo matching (or correspondence) remains one of the persistent problems in Computer Vision. In [Zhang, 91a, 92a], the author presents a new approach to the problem which combines image-space matching techniques with high-level knowledge about the objects. The low-level processing (edge detection, feature extraction) and candidate matching are carried out in image space, while the final matching is determined in object space by solving a consistent labelling problem, which results from the integration of the candidate matching, high-level constraints on the objects and other constraints of image matching. One of the innovative features of our approach lies in back-projecting (back-tracing) the line pairs from candidate matching into the object (scene) space and combining all the constraints in a unified process. We substitute the concept of "figure continuity" usually used in image matching with the high-level knowledge from the object space.
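As an illustration of how candidate matches can be resolved under the uniqueness constraint, the sketch below performs a simple greedy labelling. It is not the consistent labelling formulation of [Zhang, 91a, 92a], which additionally brings in the object-space constraints; the data layout and the function name are assumptions made for the example.

```python
def label_matches(candidates):
    """Resolve scored candidate matches into a final one-to-one labelling.

    `candidates` maps a left-image feature id to a list of
    (right-image feature id, score) pairs from candidate matching.
    """
    # Flatten and sort all candidate pairs by score, best first.
    pairs = sorted(
        ((score, left, right)
         for left, options in candidates.items()
         for right, score in options),
        reverse=True)
    used_left, used_right, matching = set(), set(), {}
    for score, left, right in pairs:
        # Uniqueness constraint: each feature joins at most one match.
        if left not in used_left and right not in used_right:
            matching[left] = right
            used_left.add(left)
            used_right.add(right)
    return matching

# Toy usage: three left-image edges with scored candidates in the right image.
print(label_matches({
    "e1": [("f2", 0.9), ("f1", 0.4)],
    "e2": [("f2", 0.7), ("f3", 0.6)],
    "e3": [("f3", 0.5)],
}))
```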
Integration of segmentation and stereo matching 
Our experiments have shown that segmentation is, among other factors, one of the major difficulties in matching. One possible solution is to integrate the segmentation with the matching interactively, as proposed in the following: after an initial segmentation, which forms the lowest level of the "N-node tree", a candidate stereo matching is carried out which assigns corresponding regions from one image to the other using simple criteria such as shape, intensity difference, etc. During the next step of segmentation, the stereo information is included; that is, when considering the merging of one region with its neighbouring region, the corresponding regions in the candidate pools are extracted and a unified measurement is calculated which integrates the intensity and shape information from both images as well as some invariant properties constrained by the central projection geometry [Forsyth] [Boyer]. After each stage of segmentation, stereo matching is performed, which introduces the other matching constraints, such as uniqueness, together with the constraints described by the object models. This segmentation and matching procedure continues interactively until the result no longer changes.
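A minimal sketch of this alternation is given below. The callables segment and match stand in for the segmentation and stereo matching procedures described above and are purely hypothetical placeholders; the loop only shows the control flow, i.e. repeating both steps until the result no longer changes.

```python
def alternate_segmentation_and_matching(segment, match, images, max_rounds=10):
    """Alternate segmentation and stereo matching until the result is stable.

    segment(images, matches) refines the segmentation of both images using
    the current candidate matches; match(segmentations) performs stereo
    matching on the current regions. Both are supplied by the caller.
    """
    matches = {}                      # no stereo information on the first pass
    previous = None
    for _ in range(max_rounds):
        segmentations = segment(images, matches)
        matches = match(segmentations)
        # Stop when another round leaves segmentation and matching unchanged.
        if (segmentations, matches) == previous:
            break
        previous = (segmentations, matches)
    return segmentations, matches

# Toy usage: segmentation and matching that stabilise after one refinement.
segs, ms = alternate_segmentation_and_matching(
    segment=lambda imgs, m: "coarse" if not m else "refined",
    match=lambda s: {"A": "A'"} if s == "refined" else {"A": "A?"},
    images=None)
print(segs, ms)
```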
Integration of edge-based and region-based 
segmentation 
It has been observed by many researchers that no single method can provide a complete interpretation of