Figure 6: Scheme for generating the Laplacian box. Upper row: generated Gaussian pyramid; lower row: generated Laplacian pyramid.
The resampling factor $k_i$ for the Gaussian pyramid levels $i$ generated before is $k_i = 2^i$. Because we are interested in a uniform size of all generated levels of the Laplacian pyramid, our resampling factor differs from the factor used in (Burt and Adelson, 1983). Due to the uniform size of the generated images in the different levels, we call this stack the Laplacian box.
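To make the construction concrete, the following sketch (our illustration, not the authors' code) builds such a Laplacian box with numpy/scipy: each Gaussian level is smoothed with a separable binomial mask and subsampled by 2, and every band-pass level is resampled back to the full image size, so the resampling factor for level $i$ is $k_i = 2^i$. The specific kernel and the bilinear resampling are assumptions.

```python
import numpy as np
from scipy import ndimage

# 1-D binomial mask B^4 = [1, 4, 6, 4, 1] / 16 (assumed smoothing kernel)
B4 = np.array([1., 4., 6., 4., 1.]) / 16.

def smooth(img):
    """Separable binomial smoothing."""
    tmp = ndimage.convolve1d(img, B4, axis=0, mode='reflect')
    return ndimage.convolve1d(tmp, B4, axis=1, mode='reflect')

def laplacian_box(img, n_levels):
    """Laplacian pyramid whose levels are all resampled back to the
    size of the input image (the 'Laplacian box')."""
    rows, cols = img.shape
    box, g = [], img.astype(float)
    for i in range(n_levels):
        g_smooth = smooth(g)
        lap = g - g_smooth                       # band-pass level L_i
        # resample to the uniform (full) size; factor k_i = 2**i
        box.append(ndimage.zoom(lap, (rows / lap.shape[0],
                                      cols / lap.shape[1]), order=1))
        g = g_smooth[::2, ::2]                   # next Gaussian level
    return np.stack(box)                         # shape: (n_levels, rows, cols)
```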
4.4 Normalization for the generation of a multichannel image
In section 4.3 we discussed the use of the scale space in order to obtain a rich image description. Our next step is to specify this predicate. In the segmentation process we use all generated resolution levels in parallel. Combining all levels $L_i$ ($0 \le i \le N$) of the Laplacian box of all texture parameters, we get a multichannel image; thus we actually stack the three Laplacian boxes.
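A minimal sketch of this stacking step, assuming the three texture-parameter images (strength, anisotropy, direction) are already available and reusing the hypothetical `laplacian_box` from above:

```python
import numpy as np

def multichannel_image(strength, anisotropy, direction, n_levels):
    """Stack the three Laplacian boxes into one multichannel image
    with 3 * n_levels channels."""
    boxes = [laplacian_box(p, n_levels)
             for p in (strength, anisotropy, direction)]
    return np.concatenate(boxes, axis=0)   # shape: (3*n_levels, rows, cols)
```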
In order to be able to fuse the different channels of the 
Laplacian box, we need to normalize these channels. 
For the normalization, we use the expected noise behavior of the filter kernels of the Laplacian box, which we determine by analyzing the impulse response, based on the linearity of the generation process: if a filter $h(r,c)$ is applied to an image $g(r,c)$ with white noise $n(r,c) \sim N(0, \sigma_n^2)$, the noise variance $\sigma_{n'}^2$ of the resulting image $g'(r,c) = h(r,c) * g(r,c)$ is given by:

$$\sigma_{n'}^2 = \sigma_n^2 \sum_{r,c} h^2(r,c)$$
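This relation can be illustrated numerically (our check, not part of the paper) by filtering unit-variance white noise with a small kernel and comparing the output variance with the sum of the squared filter coefficients:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
h = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.   # separable binomial kernel
noise = rng.normal(0.0, 1.0, (512, 512))                 # white noise, sigma_n = 1
filtered = ndimage.convolve(noise, h, mode='wrap')
print(filtered.var(), (h ** 2).sum())   # both approximately 0.075
```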
Therefore the influence factor of the filter operation is the total of the squares of the filter coefficients. This corresponds to the proposal of (Ballard and Rao, 1994), who take the total energy of the filters. For our specific case, the analysis of the impulse response of the levels $L_i$ of the Laplacian box, generated using the binomial mask $B^4$ (Jähne, 1989), yields the normalization factor $f_i$:

$$f_i = \frac{\sigma_{n'}^2}{\sigma_n^2} = \sum_{r,c} h_i^2(r,c)$$
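A sketch of how such factors could be obtained in practice, again reusing the hypothetical `laplacian_box` from above: a unit impulse is propagated through the (linear) generation process, giving the impulse responses $h_i$, whose squared sums are the factors $f_i$. Dividing channel $i$ by $\sqrt{f_i}$ then equalizes the expected noise level across channels (our reading of how the factors are applied).

```python
import numpy as np

def normalization_factors(shape, n_levels):
    """Estimate f_i = sum_{r,c} h_i(r,c)^2 for each Laplacian-box level by
    propagating a unit impulse through the linear generation process."""
    impulse = np.zeros(shape)
    impulse[shape[0] // 2, shape[1] // 2] = 1.0   # delta image
    box = laplacian_box(impulse, n_levels)        # impulse responses h_i
    return np.array([(h ** 2).sum() for h in box])

# Usage sketch (assumed application of the factors):
# f = normalization_factors((129, 129), n_levels=4)
# normalized_box = box / np.sqrt(f)[:, None, None]
```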
The normalization factors depend only on the filter mask used. Fig. 7 shows all channels of the feature space for the texture edge extraction.
Figure 7: Feature space for texture edge extraction. From top to bottom: the aerial image with strength, anisotropy and direction of the texture, respectively; from left to right: the levels of the Laplacian box for each feature.
5 TEXTURE EDGE EXTRACTION 
The final task is the extraction of texture edges. 
5.1 Edge detection 
We use the feature extraction program FEX to extract the texture edges. This program analyzes the local autocovariance function of a multichannel image $g$ using the negative Hessian $\Gamma_g$, in our specific case $\Gamma(\mathrm{SCAF})$. Using FEX for edge detection results in texture edges. These edges separate neighboring textured areas depending on the user-selectable parameters of FEX (resolution scale, scale for lines and a significance level for internal statistical tests).
Altogether, we need to specify five parameters: 
1. The differentiation scale $s_1$, needed for determining the texture properties at the highest resolution.