1) data management; 
2) simple least squares interpolation, to remove 
the non-stationary trend; 
3) search for the optimum spacing, for the 
computation of empirical values of the 
covariance functions, when the data are not 
regularly gridded; 
4) empirical estimation of the autocovariance and 
crosscovariance functions of stochastic 
processes with some average invariance property 
with respect to a suitable group of coordinate 
transformations; 
(steps 2, 3 and 4 are repeated as long as the 
empirical covariance functions still look like 
non-stationary covariances); 
5) interpolation of the empirical functions by means 
of suitable positive definite models, 
especially with finite covariance functions; 
6) finite element interpolation, by tricubic 
splines, to solve some computational problems, 
if any, and save computing time; 
(steps 3, 4, 5 and 6 are repeated as long as 
computational problems remain in the filtering); 
7) filtering of the noise from the signal and 
computation of the m.s.e. of the estimated 
signal; 
8) analysis of the noise by means of data 
snooping of Baarda type; 
(using the residual noise, steps 4 and 5 are 
executed again; if its empirical covariance 
functions look like those of a coloured noise, a 
new step of collocation is started); 
9) prediction of the signal on check points and/or 
on the points of a regular grid; 
10) plot of the results by suitable graphic 
representation. 
Figure 3.1 shows the flow chart of the system of 
programs. 
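As an illustration of step 4, the following Python sketch (function and variable names are our own, not part of the system described here) shows one simple way to estimate an empirical autocovariance function from detrended, irregularly distributed data, by grouping point pairs into distance classes whose width is the spacing searched for in step 3.

import numpy as np

def empirical_autocovariance(coords, values, spacing, n_bins):
    # coords: (n, 3) point coordinates; values: (n,) detrended observations
    # (zero mean after step 2); spacing: width of the distance classes.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)            # each pair counted once
    dist, prod = d[iu], np.outer(values, values)[iu]
    bins = np.floor(dist / spacing).astype(int)
    cov = np.array([prod[bins == k].mean() if np.any(bins == k) else np.nan
                    for k in range(n_bins)])
    lags = (np.arange(n_bins) + 0.5) * spacing        # class centres
    return lags, cov

This brute-force pairing is quadratic in the number of points; the sorting, merging and clustering algorithms mentioned below exist precisely to avoid such a cost on large data sets.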
When estimating the covariance function of a 
process in three dimensions on a large set of 
data, particular care must be taken over the 
numerical procedure used, to avoid wasting 
computing time. To this aim, special algorithms of 
sorting, merging and clustering have been 
implemented in order to obtain a quick 
identification of neighboring points. The same 
care is required for the data management. 
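A minimal sketch of the kind of cell-based sorting that allows a quick identification of neighboring points is given below; the actual algorithms of the system are not reproduced here, and the cell size is a free parameter that should not be smaller than the search radius.

import numpy as np
from collections import defaultdict

def build_cell_index(coords, cell_size):
    # Hash every point into a cubic cell; the neighbours of a point are then
    # found by scanning only the 27 surrounding cells instead of all points.
    index = defaultdict(list)
    keys = np.floor(coords / cell_size).astype(int)
    for i, key in enumerate(map(tuple, keys)):
        index[key].append(i)
    return index

def neighbours(i, coords, index, cell_size, radius):
    # Assumes cell_size >= radius, so the 27-cell scan is exhaustive.
    key = tuple(np.floor(coords[i] / cell_size).astype(int))
    cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            for j in index[(key[0] + dx, key[1] + dy, key[2] + dz)] if j != i]
    cand = np.array(cand, dtype=int)
    d = np.linalg.norm(coords[cand] - coords[i], axis=1)
    return cand[d <= radius]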
It is at this level that a first blunder rejection 
is done: this is achieved simply by comparing each 
point value with a moving average taken on the 
neighboring points only. This is considered a 
pure blunder elimination, while the more refined 
analysis described at step 8 is used to recognize 
particular features of the model. 
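The blunder check just described can be sketched as follows, reusing the hypothetical neighbours helper of the previous sketch; the rejection threshold k is an illustrative choice, not a value taken from the system.

import numpy as np

def flag_blunders(coords, values, index, cell_size, radius, k=3.0):
    # Flag a point when it deviates from the moving average of its
    # neighbours by more than k times the local standard deviation.
    flags = np.zeros(len(values), dtype=bool)
    for i in range(len(values)):
        nb = neighbours(i, coords, index, cell_size, radius)
        if len(nb) < 3:
            continue                      # too few neighbours to judge
        mean, std = values[nb].mean(), values[nb].std()
        if std > 0 and abs(values[i] - mean) > k * std:
            flags[i] = True
    return flags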
Indeed, if the data are regularly gridded, the 
analysis of the characteristics of the noise and 
its slope and bending allows for the 
discrimination between outliers and break lines. 
The same is true, with minor changes, when the 
data are not regularly gridded but their density 
is generally high. Finally, if the density is low, 
no information on the break lines is available as 
output data. 
When filtering the noise from the signal of a 
process in three dimensions on a large set of data, 
particular care should be taken over the numerical 
procedure to avoid wasting computing time: to 
this aim the conjugate gradient method (with 
preconditioning and reordering algorithms, if 
necessary) is used. 
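A minimal sketch of such a preconditioned conjugate gradient solution, using standard library routines and a simple Jacobi (diagonal) preconditioner as an example (the text does not state which preconditioner and reordering are actually employed):

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def solve_collocation_system(C_signal, C_noise, y):
    # Collocation requires the solution of (C_signal + C_noise) x = y;
    # for large systems an iterative solver avoids factoring the matrix.
    A = C_signal + C_noise
    d = np.diag(A)                                    # Jacobi preconditioner
    M = LinearOperator(A.shape, matvec=lambda v: v / d)
    x, info = cg(A, y, M=M)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return x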
As regards vectorial processes, all the 
components are filtered simultaneously when the 
crosscorrelations are not too high. Otherwise, 
because of the ill-conditioned system, the 
components must be filtered separately, to avoid 
numerical problems. 
After the filtering, the residual crosscorrelations 
should be considered in a second step, if 
necessary. 
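One possible way to implement this choice is sketched below for two components u and v, assuming dense auto- and cross-covariance matrices and a common noise covariance; the condition number threshold is purely illustrative.

import numpy as np

def filter_jointly_or_separately(C_uu, C_vv, C_uv, C_n, y_u, y_v, cond_max=1e8):
    # Joint filtering uses the full block covariance matrix of (u, v);
    # if the cross-correlations make it too ill-conditioned, the two
    # components are filtered separately instead.
    A = np.block([[C_uu + C_n, C_uv], [C_uv.T, C_vv + C_n]])
    if np.linalg.cond(A) < cond_max:
        x = np.linalg.solve(A, np.concatenate([y_u, y_v]))
        s_u = C_uu @ x[:len(y_u)] + C_uv @ x[len(y_u):]
        s_v = C_uv.T @ x[:len(y_u)] + C_vv @ x[len(y_u):]
    else:
        s_u = C_uu @ np.linalg.solve(C_uu + C_n, y_u)
        s_v = C_vv @ np.linalg.solve(C_vv + C_n, y_v)
    return s_u, s_v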
4. THE TEST EXAMPLES 
The system of programs runs on the SUN SPARC and 
DIGITAL VAX computers. 
Two real examples of turbulence flow fields are 
used to test the new system. 
The study of these examples has been completed for 
small sets of data and it will be repeated in the 
future considering all data together. 
The first example contains 811 observations, which 
are irregularly distributed but dense (average 
distance among neighboring points equal to 10 
μm); the second one contains 452 observations with 
the same kind of distribution (average distance 
among neighboring points equal to 5 μm). 
Their behaviour is very rough. Indeed the 
residuals, after a polynomial interpolation of the 
second order, have approximately the same size and 
shape as the original data. This means that the 
trend removal should not be very important in this case. 
However, when the correlation length is quite 
large, the filtering by least squares collocation 
will give serious computational problems when the 
set of data is large. For this reason a 
pre-filtering must be done. The easiest way to 
perform this seems to be the finite elements 
method. The same technique has been independently 
applied for a suboptimal filtering from a 
statistical point of view, but with reduced 
computing time and memory requirements. Besides, 
the solution is well-conditioned from a numerical 
point of view. 
Therefore the "old" residuals have been 
interpolated by bicubic spline functions (their 
lags are 50 and 25 um in the first example and 50, 
28 and 15 ym in the second one) and "new" 
residuals have been obtained. This operation 
will furnish a correlation length of reasonable 
size. 
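The pre-filtering by bicubic splines can be sketched with a least squares spline on a knot grid whose spacing plays the role of the lag; here scipy's LSQBivariateSpline is used as a stand-in, and the actual finite element implementation of the system may differ.

import numpy as np
from scipy.interpolate import LSQBivariateSpline

def spline_prefilter(x, y, r_old, lag):
    # Fit a least squares bicubic spline to the "old" residuals on a knot
    # grid of the given lag and return the "new" residuals.
    tx = np.arange(x.min() + lag, x.max(), lag)   # interior knots in x
    ty = np.arange(y.min() + lag, y.max(), lag)   # interior knots in y
    spl = LSQBivariateSpline(x, y, r_old, tx, ty, kx=3, ky=3)
    return r_old - spl.ev(x, y)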
At the moment, because the sets of data are small, 
the filtering by least squares collocation has 
been directly performed without computational 
problems. The residual noise of both examples 
is very flat, and their covariance functions look 
like those of white noise processes. Note that a 
filtering by the stochastic approach is preferable 
with respect to expanding the finite elements 
model by reducing the lag of the bicubic spline 
functions. Indeed the capability to follow the 
field behaviour is in the first case higher than 
in the second one. 
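For reference, the elementary collocation filter applied at this stage (signal estimate and its m.s.e. at the data points) can be written as the following sketch, where the signal covariance matrix is assumed to come from the model fitted in step 5 and the noise is assumed white.

import numpy as np

def collocation_filter(C_s, sigma_noise, y):
    # Least squares collocation at the data points:
    #   signal estimate   s_hat = C_s (C_s + C_n)^(-1) y
    #   error covariance  E     = C_s - C_s (C_s + C_n)^(-1) C_s
    C_n = sigma_noise**2 * np.eye(len(y))
    W = np.linalg.solve(C_s + C_n, np.column_stack([y, C_s]))
    s_hat = C_s @ W[:, 0]
    E = C_s - C_s @ W[:, 1:]
    mse = np.diag(E)                      # m.s.e. of the estimated signal
    return s_hat, mse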
Table 4.1 summarizes the processing of the two examples. 
The evaluation of the results has not yet been done 
by the expert of hydromechanics; nevertheless the 
values of the a posteriori variance of the noise 
and the estimation error confirm the values of the 
standard deviation of the observations for the 
results obtained by