
vegetated areas and their "condition," and it remains the most widely known and used index for detecting live green plant canopies in multispectral remote sensing data. Once the feasibility of detecting vegetation had been demonstrated, users also began to use the NDVI to quantify the photosynthetic capacity of plant canopies. The NDVI is calculated from these individual measurements as follows (NASA):
\[
\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{VIS}}{\mathrm{NIR} + \mathrm{VIS}} \tag{1}
\]
These spectral reflectances are themselves ratios of the reflected 
over the incoming radiation in each spectral band individually; 
hence they take on values between 0.0 and 1.0. By design, the 
NDVI itself thus varies between -1.0 and +1.0. It should be 
noted that NDVI is functionally, but not linearly, equivalent to 
the simple infrared/red ratio (NIR/VIS). 
There are many methods that can be used for image 
segmentation. The NDVI is one of the most widely used indices 
for differentiating between vegetation and non-vegetation areas 
in remote sensing (ZHANG et al., 2006). For the NDVI, the 
threshold for vegetation extraction is usually positive and close to zero; it may vary from 0.05 to 0.15. Human supervision is helpful for selecting the best threshold for typical imagery.
For our test data, non-vegetation areas are not well removed 
with a threshold of 0.0, while many areas are falsely removed 
with a threshold of 0.2. The best result is obtained with a 
threshold of 0.1. 
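As a minimal sketch of this step (assuming the two bands are available as NumPy reflectance arrays; the function names are illustrative, not from the original implementation), the NDVI of Eq. (1) and the 0.1 threshold could be applied as follows:

```python
import numpy as np

def ndvi(nir, vis):
    """Normalized Difference Vegetation Index, Eq. (1)."""
    nir = np.asarray(nir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    denom = nir + vis
    out = np.zeros_like(denom)
    # Guard against division by zero where both reflectances are zero.
    np.divide(nir - vis, denom, out=out, where=denom > 0)
    return out

def vegetation_mask(nir, vis, threshold=0.1):
    """Label pixels whose NDVI exceeds the threshold as vegetation."""
    return ndvi(nir, vis) > threshold
```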
2.2 Vegetation segmentation and removal from RGB images
In traditional multispectral remote sensing, carried out from airborne and spaceborne platforms, vegetation detection relies on the Red and Near-infrared (NIR) bands. On ground-based platforms, however, the NIR band is rarely used: computer vision and digital photogrammetry usually take only the RGB bands into account. It is therefore important to recognize vegetation occlusion in ground close-range scenes of buildings based on visible-light RGB images.
CIE L*a*b is very popular for image segmentation in computer vision. It is based directly on CIE XYZ (1931) and is another attempt to linearise the perceptibility of unit-vector colour differences. Again, it is non-linear, but the conversions remain reversible. Colour information is referred to the colour of the white point of the system, denoted by the subscript n. The non-linear relationships for L*, a* and b* are the same as for CIELUV and are intended to mimic the logarithmic response of the eye (Ford and Roberts, 1998).
[Figure: axes of the CIE L*a*b colour space, with L* running from black (L* = 0) to white (L* = 100) and +b* towards yellow]
Figure 1. Definition of CIE L*a*b
The CIELAB color scale is an approximately uniform color 
scale. In a uniform color scale, the differences between points 
plotted in the color space correspond to visual differences 
between the colors plotted. The CIELAB color space is 
organized in a cube form. The L* axis runs from top to bottom. 
The maximum for L* is 100, which represents a perfect 
reflecting diffuser. The minimum for L* is zero, which 
represents black. The a* and b* axes have no specific numerical 
limits. Positive a* is red. Negative a* is green. Positive b* is 
yellow. Negative b* is blue. Fig. 1 is a diagram representing the CIELAB color space (HunterLab, 2008).
The CIE XYZ colour space was presented by CIE in 1931 (SEII 
2002). Three axes X, Y and Z are orthogonally defined by the 
basic colours R, G and B (red, green and blue). Generally, only 
points on the plane X+Y+Z=1 are considered. Each RGB point can be transformed into the CIE XYZ colour space as follows (SEII 2002):
\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix}
0.412291 & 0.357664 & 0.180209 \\
0.212588 & 0.715329 & 0.072084 \\
0.019326 & 0.119221 & 0.949102
\end{bmatrix}
\cdot
\begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{2}
\]
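A minimal sketch of this linear transform, assuming RGB values already scaled to [0, 1] and applying Eq. (2) directly (no gamma handling, which the text does not discuss; the names are illustrative):

```python
import numpy as np

# Matrix of Eq. (2), mapping RGB to CIE XYZ.
RGB_TO_XYZ = np.array([
    [0.412291, 0.357664, 0.180209],
    [0.212588, 0.715329, 0.072084],
    [0.019326, 0.119221, 0.949102],
])

def rgb_to_xyz(rgb):
    """Convert an (..., 3) array of RGB values in [0, 1] to CIE XYZ."""
    return np.asarray(rgb, dtype=np.float64) @ RGB_TO_XYZ.T
```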
The component L* represents lightness, with values from 0 to 100. The components a* and b* represent colour: a* varies from green (value −120) to red (value +120), and b* varies from blue (value −120) to yellow (value +120). These can be written as:
\[
\begin{aligned}
L^{*} &= 116 \cdot (Y/Y_n)^{1/3} - 16 && \text{if } 0.008856 < (Y/Y_n) \\
L^{*} &= 903.3 \cdot (Y/Y_n) && \text{otherwise} \\
a^{*} &= 500 \cdot \left( F(X/X_n) - F(Y/Y_n) \right) \\
b^{*} &= 200 \cdot \left( F(Y/Y_n) - F(Z/Z_n) \right)
\end{aligned} \tag{3}
\]
where (X, Y, Z) is the point to be converted, which can be obtained from Eq. (2). Let p denote X/X_n, Y/Y_n and Z/Z_n respectively; then:
\[
F(p) =
\begin{cases}
p^{1/3} & \text{if } p > 0.008856 \\
7.787 \cdot p + 16/116 & \text{otherwise}
\end{cases} \tag{4}
\]
(X_n, Y_n, Z_n) are the tristimulus values of the illuminant (HunterLab, 2008), also called the white point. Here the illuminant can be X_n = 0.312779, Y_n = 0.329184, Z_n = 0.358037 (ZHANG et al., 2006).
Other colour-difference parameters can be defined as follows (Ford and Roberts, 1998; HunterLab, 2008):
\[
\begin{aligned}
\Delta L^{*} &= L^{*}_{sample} - L^{*}_{standard} \\
\Delta a^{*} &= a^{*}_{sample} - a^{*}_{standard} \\
\Delta b^{*} &= b^{*}_{sample} - b^{*}_{standard} \\
\Delta E^{*} &= \sqrt{\Delta L^{*2} + \Delta a^{*2} + \Delta b^{*2}} \\
\Delta C^{*} &= C^{*}_{sample} - C^{*}_{standard} \\
h_{ab} &= \arctan\left( b^{*} / a^{*} \right)
\end{aligned} \tag{5}
\]
where \( C^{*} = \sqrt{a^{*2} + b^{*2}} \).
ΔL* indicates whether the sample is lighter or darker than the standard; Δa* indicates whether the sample is redder or greener than the standard; and Δb* indicates whether the sample is yellower or bluer than the standard.
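As an illustrative sketch of Eq. (5) for individual colour triples (the function names are placeholders, not from the original work):

```python
import numpy as np

def delta_e(lab_sample, lab_standard):
    """Colour difference Delta E* between two (L*, a*, b*) triples, Eq. (5)."""
    dL, da, db = np.asarray(lab_sample, dtype=float) - np.asarray(lab_standard, dtype=float)
    return np.sqrt(dL ** 2 + da ** 2 + db ** 2)

def chroma_and_hue(a, b):
    """Chroma C* = sqrt(a*^2 + b*^2) and hue angle h_ab.

    arctan2 resolves the quadrant of arctan(b*/a*) from Eq. (5).
    """
    return np.hypot(a, b), np.arctan2(b, a)
```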
Many experiments have been carried out to compare the performance of image segmentation between NDVI and CIE L*a*b (ZHANG et al., 2006), so the CIE L*a*b approach is used here for vegetation segmentation and removal. Another reason is that, with CIE L*a*b, vegetation can also be extracted from visible-light RGB images, because the component a* is negative for vegetation in standard visible-light RGB imagery and close to −120 for green vegetation. To segment RGB imagery with CIE L*a*b, a threshold on a* between −0.15 and −0.05 should be applied.
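A minimal end-to-end sketch of this segmentation step, reusing the illustrative rgb_to_xyz and xyz_to_lab helpers above; the scaling of the a* threshold is not fully specified in the text, so the default value here is only an assumption:

```python
def vegetation_mask_rgb(rgb, a_threshold=-0.05):
    """Label pixels whose a* component falls below the threshold as vegetation.

    rgb: (..., 3) array of RGB values in [0, 1].
    a_threshold: upper bound on a*; the text suggests -0.15 to -0.05
    (assumed here to be on the same scale as the a* of Eq. (3)).
    """
    xyz = rgb_to_xyz(rgb)                                        # Eq. (2)
    _, a, _ = xyz_to_lab(xyz[..., 0], xyz[..., 1], xyz[..., 2])  # Eqs. (3) and (4)
    return a < a_threshold
```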