Full text: Proceedings, XXth congress (Part 4)

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B4. Istanbul 2004 
values in an image and analyse the grey level pattern and
variations in a pixel's neighbourhood by determining the pixel
positions that have equal or nearly equal grey values. This grey
level variation can be directional or not. While the mean and
variance provide a simple description of the statistics of the grey
values, the entropy measure provides spatial information on the
grey values related to their directionality and frequency of
occurrence. Thus, it can be used to detect and extract image
regions based on the relative degree of randomness of their
structure patterns.
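As a rough illustration of the entropy texture measure described above, the following sketch computes the Shannon entropy of the grey values in a sliding window; the window size and the number of grey-level bins are arbitrary assumptions for the example, not values from the text.

```python
import numpy as np

def local_entropy(image, win=5, levels=8):
    """Shannon entropy of grey values in a sliding window (sketch).

    Assumed parameters: win (window size) and levels (number of
    grey-level bins) are illustrative choices.
    """
    # Quantize 8-bit grey values into a small number of bins.
    binned = np.floor(image.astype(float) / 256.0 * levels).astype(int)
    binned = np.clip(binned, 0, levels - 1)
    h, w = binned.shape
    r = win // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = binned[i - r:i + r + 1, j - r:j + r + 1]
            counts = np.bincount(window.ravel(), minlength=levels)
            p = counts / counts.sum()
            p = p[p > 0]
            # High entropy indicates a more random local structure.
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```

A uniform region yields zero entropy, while a noisy region yields a high value, which is what allows regions to be separated by their relative degree of randomness.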
When the feature is linear or an edge in an image, edge
detection methods can be used to determine sharp changes in
the pixel values. These changes in brightness in the two-
dimensional image function, I(x,y), are determined by various
edge-detection operators based on the two directional partial
derivatives, (dI/dx, dI/dy), which are approximated as image
pixel differences. Mapping feature operations require the
selection of specific edges rather than all edges in the image,
with as low an error as possible in the detected edge location
and type. The most common edge detection operators are the
Sobel, Prewitt and Laplacian operators. Their disadvantage is
that they are sensitive to noise and may produce more than one
response to a single edge. Therefore, one edge operator
recommended for mapping feature extraction is the Canny
operator (Canny, 1986). The Canny operator produces a low
error in the detection of an edge, keeps the distance between
the detected edge and the true edge to a minimum, and has
only one response to a single edge (El-Hakim, 1996).
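A minimal sketch of the pixel-difference approximation of the partial derivatives (dI/dx, dI/dy) using the Sobel operator mentioned above; a production pipeline would use a library implementation of Canny rather than this explicit loop.

```python
import numpy as np

def sobel_gradients(img):
    """Approximate (dI/dx, dI/dy) with 3x3 Sobel pixel differences.

    Illustrative sketch only; no noise suppression or non-maximum
    suppression is performed here.
    """
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # d/dx kernel
    ky = kx.T                                            # d/dy kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)  # edge strength per pixel
    return gx, gy, magnitude
```

On a vertical step edge, the gradient magnitude is large along the edge and zero in the flat regions, which is the sharp brightness change such operators respond to.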
Finally, the quality of feature extraction can be improved by
integrating existing knowledge into the process, either as part
of the process or as additional constraints. For the former,
better knowledge of the type of the training areas will result in
higher classification accuracies and therefore in higher
extraction accuracies. For the latter, the idea is based on the
principle of determining and establishing conditions that
uniquely characterize the features of interest, in order to
increase the success of recognizing and extracting these
particular features from the image. These conditions can be
applied as "pseudo" bands, such as a DEM layer, which can be
included in the classification process to improve the
classification results for extracting vegetation or buildings
(Eiumnoh and Shrestha, 1997; Hodgson et al., 2003).
Alternatively, they can be applied as spatial constraints, where
the extraction of a feature is based on the intersection of
condition-derived spatial layers using logical operators.
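The intersection of condition-derived layers with logical operators can be sketched as below; the class codes and the height threshold are hypothetical values chosen for the example, not from the text.

```python
import numpy as np

# Hypothetical sketch: a classified image combined with a DEM
# "pseudo" band acting as a spatial constraint.
classes = np.array([[1, 1, 2],
                    [2, 2, 2],
                    [1, 2, 1]])          # assumed codes: 1 = vegetation, 2 = built-up
dem_height = np.array([[2., 2., 9.],
                       [8., 9., 9.],
                       [1., 8., 2.]])    # heights above ground (m), illustrative

# Logical intersection of the two condition-derived layers:
# "classified as built-up" AND "raised above ground" isolates
# cells that are likely buildings.
buildings = (classes == 2) & (dem_height > 5.0)
```

The same pattern extends to any number of constraint layers, each condition producing a boolean layer that is combined with AND/OR operators.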
Currently, most of the efforts for automated, or rather semi-
automated, feature extraction concentrate on thematic types of
extraction, such as roads, water bodies, vegetation, and
buildings (Auclair et al., 2001; Jodouin et al., 2003; Baltsavias,
2004; Zhang, 2004).
2.3 Change detection 
Change detection requires the comparison of two temporal
datasets for the identification and location of differences in their
patterns. Although in many cases the comparison must be
conducted between heterogeneous datasets, for example a "new"
image and "old" vector database data, the actual comparison is
conducted with homogeneous types of data. That is, change
detection is reduced to a comparison between image data or
between vector data. The former is referred to as image-to-image
change detection, while the latter as feature-based change
detection.
2.3.1 Image-to-image. In the case of multi-temporal images
we can distinguish two basic approaches. The first is indirect
image change detection, where the change analysis follows an
image classification process. The comparison can be done either
by differencing the two classified raster thematic layers or by
extracting the boundaries of the thematic regions and conducting
a vector (i.e., feature-based) change analysis. With this approach
we overcome problems related to image acquisition conditions,
such as different sensors, atmospheric and illumination
conditions, and viewing geometries. The accuracy of the
detected changes is proportional to the accuracy of the image
orthorectification and of the classification results.
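The indirect, post-classification comparison can be sketched as a cell-by-cell difference of the two classified thematic layers; the class codes below are illustrative assumptions.

```python
import numpy as np

# Sketch of indirect (post-classification) change detection on two
# co-registered classified rasters; class codes are hypothetical.
classified_t1 = np.array([[1, 1, 2],
                          [3, 3, 2]])
classified_t2 = np.array([[1, 2, 2],
                          [3, 2, 2]])

# Cells whose thematic class changed between the two dates.
change_mask = classified_t1 != classified_t2

# A "from-to" code records the nature of each change
# (e.g. 12 = class 1 at t1 became class 2 at t2); 0 = no change.
from_to = np.where(change_mask, classified_t1 * 10 + classified_t2, 0)
```

Because the comparison is made between class labels rather than raw radiometry, differences in sensors, illumination and atmosphere between the two acquisitions do not enter the comparison itself.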
The second approach is the direct comparison of two temporal 
images (Singh, 1989). Various techniques supported by the 
functionality of IP and GIS systems are: 
* image differencing, where the two co-registered temporal
images are subtracted pixel-by-pixel. This approach is affected
by the various image acquisition conditions, and some form of
radiometric normalization is applied to both images to reduce
these effects. Still, the determination of the threshold between
change and no-change in the histogram of the difference image
is a critical issue for the resulting changes.
* image ratioing, where the ratio of the values of corresponding
pixels between the two temporal images is computed. If there is
no change, or only minimal change, the ratio is close to 1. Again,
some form of radiometric normalization between the two images
needs to be applied, while the selection of the threshold is
critical as well.
* image regression, where the pixel values of the second image
are assumed to be a linear function of the corresponding pixel
values of the first image. A least squares regression can be used
to determine the linear function, and from this function the
estimated pixel values of the second image can be computed.
The difference image is then determined between the estimated
second image and the actual second image, using either image
differencing or image ratioing. Where there is no change, the
pixel values will be close to the estimated pixel values;
elsewhere, changes are indicated.
* principal component analysis (PCA) for multispectral
multitemporal images, which can be applied either to each image
separately, so that the principal components of each dataset can
be compared with one of the above methods, or to a combined
image consisting of the combined bands of the images to be
compared.
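The four direct comparison techniques listed above can be sketched together for two co-registered single-band images; the thresholds and the use of a plain least squares fit are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def direct_change(img1, img2, diff_thresh=10.0):
    """Sketch of direct image-to-image change detection:
    differencing, ratioing, regression and combined-band PCA.
    diff_thresh is an arbitrary illustrative threshold."""
    a = img1.astype(float).ravel()
    b = img2.astype(float).ravel()

    # 1. Image differencing: pixel-by-pixel subtraction + threshold.
    difference = b - a
    change_diff = np.abs(difference) > diff_thresh

    # 2. Image ratioing: a ratio close to 1 means little or no change.
    ratio = b / np.maximum(a, 1e-6)

    # 3. Image regression: model img2 as a linear function of img1,
    #    then difference img2 against its regression estimate.
    slope, intercept = np.polyfit(a, b, 1)
    estimated = slope * a + intercept
    change_regr = np.abs(b - estimated) > diff_thresh

    # 4. PCA on the combined (stacked) bands: the minor component
    #    tends to concentrate the change information.
    stacked = np.vstack([a - a.mean(), b - b.mean()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(stacked))
    minor_component = eigvecs[:, 0] @ stacked  # smallest-eigenvalue axis

    return change_diff, ratio, change_regr, minor_component
```

Note how the threshold choice governs the change/no-change split in both the differencing and regression variants, which is exactly the critical issue raised above.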
2.3.2 Feature-based. For the feature-based approach various
functions of spatial analysis are used, such as layer union, layer
intersection, buffer generation, and topological overlay. The
spatial change ΔS₁,₂ is defined as the spatial union of the two
temporal homogeneous vector datasets S₁ and S₂ minus their
common spatial elements (Armenakis et al., 2003):

ΔS₁,₂ = (S₁ ∪ S₂) − (S₁ ∩ S₂)
      = (S₁ − (S₁ ∩ S₂)) ∪ (S₂ − (S₁ ∩ S₂))
      = ΔS₁ ∪ ΔS₂
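A minimal sketch of this spatial change definition, using Python sets as stand-ins for the homogeneous vector datasets S₁ and S₂; a real system would apply GIS union/intersection overlays on geometries, and the feature names below are purely hypothetical.

```python
# Illustrative "old" and "new" vector datasets (feature identifiers).
s1 = {"road_a", "road_b", "building_1"}       # S1: old dataset
s2 = {"road_a", "building_1", "building_2"}   # S2: new dataset

common = s1 & s2               # S1 ∩ S2: unchanged features
change = (s1 | s2) - common    # ΔS1,2 = (S1 ∪ S2) − (S1 ∩ S2)
deleted = s1 - common          # ΔS1: features only in the old data
added = s2 - common            # ΔS2: features only in the new data

# The decomposition above: ΔS1,2 = ΔS1 ∪ ΔS2
assert change == deleted | added
```

The decomposition is useful in practice because it separates the overall change into features that disappeared (ΔS₁) and features that were newly captured (ΔS₂).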