The processed image of the study area was not contaminated by clouds. Figure 2 illustrates the processed image with the ST values.
Figure 2. NOAA image processed with ST information from 6/12/2003 over the Rio dos Sinos Hydrographic Basin (coordinate system: Universal Transverse Mercator; reference ellipsoid: Hayford; central meridian: 51° W Gr)
With the image processed and georeferenced, it was possible to overlay on it a digital elevation model obtained from isolines with a vertical equidistance of 20 m, georeferenced to the Torres vertical datum. Then, for each pixel centroid, the following information was obtained: east and north UTM coordinates, altitude and ST.
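As an illustration of this extraction step (a sketch, not the authors' code: the grid size, UTM origin, pixel size and stand-in data are all assumptions), the per-centroid table can be built from two aligned 1 km grids as follows:

import numpy as np

# Stand-in grids for the processed ST image and the DEM resampled to the
# same 1 km grid (random placeholder values).
st = np.random.rand(60, 60) * 15 + 10       # surface temperature, deg C
dem = np.random.rand(60, 60) * 300          # altitude, metres
e0, n0, px = 480000.0, 6740000.0, 1000.0    # assumed UTM origin and pixel size

rows, cols = np.indices(st.shape)
east = e0 + (cols + 0.5) * px               # easting of each pixel centroid
north = n0 - (rows + 0.5) * px              # northing (rows grow southwards)

# One row per centroid: east, north, altitude, ST.
table = np.column_stack([east.ravel(), north.ravel(), dem.ravel(), st.ravel()])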
2.3 Proposed neural network structure
The ANN was structured as a multilayer perceptron (MLP), whose learning algorithm is based on error correction. When a pattern is presented to the network for the first time, it produces a random output. The difference between this output and the intended one constitutes the error, which is calculated by the algorithm itself. The backpropagation algorithm adjusts the weights of the output layer first and then the weights of the remaining layers, correcting them from back to front with the objective of reducing the error. This process is repeated during learning until the error becomes acceptable (Silva et al., 2004).
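As a minimal sketch of this error-correction scheme (illustrative only, not the authors' implementation; the single hidden layer, sigmoid activations and learning rate are assumptions), one backpropagation step could look as follows:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def backprop_step(x, d, W1, b1, W2, b2, lr=0.1):
    """One error-correction step: forward pass, error, back-to-front update."""
    h = sigmoid(W1 @ x + b1)                   # hidden-layer output
    y = sigmoid(W2 @ h + b2)                   # network output
    e = d - y                                  # error vs. expected output
    delta2 = e * y * (1.0 - y)                 # output-layer local gradient
    delta1 = (W2.T @ delta2) * h * (1.0 - h)   # error propagated back to front
    W2 += lr * np.outer(delta2, h)             # output-layer weights first...
    b2 += lr * delta2
    W1 += lr * np.outer(delta1, x)             # ...then the remaining layer
    b1 += lr * delta1
    return float(e @ e)                        # squared error for this pattern

The weights are updated in place, so repeating backprop_step over the training patterns until the returned error becomes acceptable reproduces the learning loop described above.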
The neurons used in the ANN were set up according to the model proposed by Haykin (2001), as shown in Figure 3. In the synaptic weights (w_kj), the index k refers to the neuron in question, while the index j indicates the input synapse to which the weight relates. The function of the weight is to multiply the input signal of the synapse connected to the neuron. ANNs can also have additional weights, called "bias", whose role is to prevent error generation when all input data are null, since otherwise the weight matrix would not be modified during training. The activation function is an internal function: it is the neuron's own decision on what to do with the value resulting from the weighted sum of the inputs. The transfer function is an output or logic-threshold function; it controls the activation intensity in order to obtain the desired performance from the network.
Figure 3. Artificial neuron structure used in the ANN. Adapted from Haykin (2001)
Mathematically, Figure 3 can be expressed by equations 5, 6 and 7:

u_k = \sum_{j=1}^{m} w_{kj} x_j    (5)

v_k = u_k + b_k    (6)

y_k = \varphi(v_k)    (7)
where:
• u_k is the output of the linear combiner (additive junction);
• w_kj are the synaptic weights;
• x_j are the input variables;
• v_k is the activation potential;
• b_k is the bias;
• y_k is the output signal of neuron k;
• \varphi is the activation function.
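Equations 5 to 7 translate directly into code; a minimal sketch (the tanh activation is only an illustrative choice of \varphi):

import numpy as np

def neuron(x, w_k, b_k, phi=np.tanh):
    u_k = np.dot(w_k, x)   # eq. (5): output of the linear combiner
    v_k = u_k + b_k        # eq. (6): activation potential
    y_k = phi(v_k)         # eq. (7): output signal of neuron k
    return y_k

# Illustrative call: two inputs, arbitrary weights and bias.
print(neuron(np.array([0.2, 0.5]), np.array([0.7, -0.3]), 0.1))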
The network was trained in a supervised manner using the Levenberg-Marquardt algorithm, which is based on Newton's method and applies a local approximation to find the minimum of the error function (Haykin, 2001). In this case, the ANN was trained with pairs of input and output presentations; in other words, for each input provided to the network there is an expected output that is also provided during training. The network produces an output that is compared with the expected (provided) output. The difference between the network answer and the expected (known) answer generates a residual (error). This error is used to calculate the necessary adjustment of the network's synaptic weights, which are corrected until the network answer coincides with the expected output. This is the error minimization process (Haykin, 2001).
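A single Levenberg-Marquardt update can be sketched in its generic damped Gauss-Newton form (a sketch, not the authors' code; the damping factor mu is an assumed hyperparameter):

import numpy as np

def lm_step(w, e, J, mu=1e-3):
    """One Levenberg-Marquardt update of the weight vector w.
    e:  residual vector, expected minus obtained outputs (d - y)
    J:  Jacobian of the network outputs with respect to w
    mu: damping factor (small mu ~ Gauss-Newton step,
        large mu ~ short gradient-descent step)
    """
    H = J.T @ J + mu * np.eye(w.size)   # damped approximation of the Hessian
    dw = np.linalg.solve(H, J.T @ e)    # solve (J^T J + mu I) dw = J^T e
    return w + dw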
Continuing with this type of learning (Haykin, 2001), the calculations needed to minimize the error are important and depend on the algorithm used; in backpropagation, for example, parameters such as the number of iterations per input pattern are used to reach the minimum error value in training (the network's ability to escape from local minima).
Equation 8 shows the error function (MSE, Mean Squared Error) that is minimized in the training step:

MSE = \frac{1}{n} \sum_{j=1}^{n} (d_j - y_j)^2    (8)
where:
• d_j is the expected output value from the ANN;
• y_j is the obtained output value;
• n is the number of training samples.
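As a trivial sketch, equation 8 in code:

import numpy as np

def mse(d, y):
    # Eq. (8): mean squared error between expected (d) and obtained (y).
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return np.mean((d - y) ** 2)

print(mse([25.0, 27.5], [24.0, 28.0]))  # -> 0.625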
In order to select the ANN with the best performance, many tests were carried out, varying the number of intermediate layers, the number of neurons per layer and the activation function, which enabled the selection of the best ANN for ST estimation.
The ANN variables of the input and output layers were normalized to the interval [0, 1].
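This corresponds to a standard min-max rescaling of each variable; a sketch with illustrative altitude values:

import numpy as np

def minmax(v):
    # Rescale a variable to the interval [0, 1] (min-max normalization).
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

print(minmax([20.0, 40.0, 120.0, 300.0]))  # -> [0. 0.0714 0.3571 1.]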
2.4 Results analysis
The ANN was trained with information extracted from the processed NOAA thermal satellite image with surface coverage from 6/12/2003. Its pixel size of 1 × 1 km provided 3737 points for the training process. The meteorological stations existing in the BHRS made it possible to obtain the air temperature and average relative humidity for the period of the satellite image.
To test the proposed model, ST information was collected in the field with a laser thermometer on 3/18/2008. With the assistance of a Trimble Pocket GPS receiver, the UTM coordinates (SIRGAS) of the ST sample points were obtained. Temperature and average air relative humidity were taken