Full text: Actes du onzième Congrès International de Photogrammétrie (fascicule 3)

high geometrical quality that they can be regarded as free 
from errors, at least in comparison with the device under 
calibration, the concepts “true” values and “true” errors 
can be applied, although always with care. Consequently, 
the concept and term accuracy can be used in this connection.
2.2. The Concepts and Terms Deviation and Error
At this moment there is reason to discuss briefly the 
terms deviation and error, which frequently are used 
in statistics and theory of errors, individually and in the 
combinations standard deviation and standard error (of 
unit weight), mean deviation and mean error, probable 
deviation and probable error, etc. 
There seem to be good reasons to combine the term 
deviation with the concept and term precision and the term 
error with the concept and term accuracy. 
In statistics deviation is usually applied to express the 
difference between individual repeated or replicated 
measurements and the average. It seems suitable to define

deviation = measured value − average = xᵢ − x̄
The variance is then defined as

s² = Σ(xᵢ − x̄)² / (n − 1)

where n is the number of measurements and n − 1 the degrees of freedom.¹

The standard deviation is then defined as the positive square root of the variance, or

s = √( Σ(xᵢ − x̄)² / (n − 1) )
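The definitions of deviation, variance and standard deviation can be illustrated with a short numerical sketch; the repeated measurements below are hypothetical:

```python
import math

# Hypothetical repeated measurements of the same quantity
x = [10.02, 9.98, 10.05, 10.01, 9.99]

n = len(x)
mean = sum(x) / n                       # the average x-bar

# Deviation of each measurement: measured value minus average
deviations = [xi - mean for xi in x]

# Sample variance: sum of squared deviations over n - 1 degrees of freedom
variance = sum(d * d for d in deviations) / (n - 1)

# Standard deviation: positive square root of the variance
s = math.sqrt(variance)
print(mean, variance, s)
```

Note the divisor n − 1 rather than n, matching the degrees of freedom of the sample.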
Ordinarily, s is an expression for the precision (or rather the imprecision) of each of the measurements x. The precision of the average x̄ is evidently higher than that of one single measurement, i.e. the standard deviation of the average is lower than that of a single measurement. This is also expressed by the well known formula for the standard deviation of the average

s_x̄ = s / √n

This formula can be derived by applying the special law of propagation of errors and deviations to the expression for the average

x̄ = (x₁ + x₂ + … + xₙ) / n
¹ It should be noted that the latin character s is used instead of the greek σ (sigma). This is in agreement with ordinary statistical practice, where σ represents the entire population and s a sample of deviations. In measurements the population of deviations and errors is infinitely large. Therefore, s is always used here instead of σ.
where each of the measured values x₁, x₂, …, xₙ is assumed to be affected with the standard deviation s. Each value is
then assumed to be independent and free from correlation 
with the others. If all measured values were affected with 
errors of the same magnitude and direction (a constant 
error) they would not appear in the standard deviation. 
The measured values would in such a case be physically 
correlated. 
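The propagation formula s_x̄ = s/√n can be checked empirically by simulating many independent, uncorrelated measurement series and comparing the scatter of their averages with the prediction; all numbers below are illustrative:

```python
import math
import random

random.seed(0)

n = 25           # measurements contributing to each average
trials = 20000   # number of independently repeated averages
sigma = 1.0      # standard deviation of one single measurement

# Each trial: the average of n independent measurements, free from correlation
averages = [
    sum(random.gauss(0.0, sigma) for _ in range(n)) / n
    for _ in range(trials)
]

# Empirical standard deviation of the averages
m = sum(averages) / trials
emp = math.sqrt(sum((a - m) ** 2 for a in averages) / (trials - 1))

# Prediction from the special law of propagation: sigma / sqrt(n)
predicted = sigma / math.sqrt(n)
print(emp, predicted)
```

If the individual measurements shared a constant (physically correlated) error, that error would shift every average identically and would not appear in this scatter, as noted above.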
If the result of measurements, for instance the average x̄, is compared with given (true) values, which is the case in all calibration procedures, the discrepancy or error is defined as

e = measured value − given value²
Provided that the given value can be regarded as free 
from errors, at least in comparison with the measured 
value, the quantity e can be regarded as a true error or 
discrepancy. 
If there are several such determinations of errors of 
similar character, for instance in photogrammetric model 
coordinates, each of the errors represents the concept of 
accuracy. Statistically, they can be represented by the 
root mean square error (discrepancy), defined as

√( Σ eᵢ² / n )
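With given (true) values available, as in a calibration, the errors and their root mean square can be computed directly; the coordinate values below are hypothetical:

```python
import math

# Hypothetical measured model coordinates and their given (true) values
measured = [100.04, 250.01, 399.97, 550.05]
given = [100.00, 250.00, 400.00, 550.00]

# Error (discrepancy) of each measurement: measured value minus given value
errors = [m - g for m, g in zip(measured, given)]

# Root mean square error over the n discrepancies
n = len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / n)
print(errors, rmse)
```

Note that, unlike the standard deviation, the root mean square error is taken about the given values rather than the average, so it reflects accuracy rather than precision.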
Frequently, and particularly in connection with calibrations, the discrepancies e are regarded as indirect measurements of gross errors, systematic (regular) errors and irregular errors of the measured values.
The discrepancies e are interpreted as a linear differential 
formula (mathematical model) where the parameters are 
possible systematic errors according to physical and other 
circumstances. In particular, the manufacturers of instruments should be the natural source of information as to possible systematic errors of the instruments. Because in principle there shall always be redundant discrepancies in the calibration procedure, the parameters shall ordinarily be determined under the condition that the sum of the squares of the residuals shall be a minimum. This leads to the system of normal equations, the solution of which gives the parameters, the minimized sum of squares, etc. If this minimized sum of squares is divided by the number of redundant discrepancies, the variance of the residuals or residual variance is obtained. The positive square root of this variance is denoted the standard error of unit weight (s₀) and is an expression for the accuracy of each of the measured values after removing the regular errors. Because each systematic error is a well defined linear function of the measured values, which are assumed to be independent,
² A correction v, to be added to the measured value in order to obtain the given value, is consequently defined as v = −e. If, for some reason, other definitions of the signs of e and v are used, as for instance in geodesy, this should always be clearly stated.
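The adjustment just described can be sketched for a minimal hypothetical case: a linear error model e ≈ a + b·x with more discrepancies than parameters, solved through the normal equations; the symbols a, b and all data are illustrative, not from the text:

```python
import math

# Hypothetical discrepancies e observed at positions x:
# 5 observations, 2 parameters -> 3 redundant discrepancies
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
es = [0.10, 0.32, 0.49, 0.71, 0.88]

n, u = len(xs), 2  # number of observations, number of parameters

# Normal equations for the model e = a + b*x, from minimizing the
# sum of squared residuals:
#   [ n       sum(x)   ] [a]   [ sum(e)   ]
#   [ sum(x)  sum(x^2) ] [b] = [ sum(x*e) ]
sx = sum(xs)
sxx = sum(x * x for x in xs)
se = sum(es)
sxe = sum(x * e for x, e in zip(xs, es))
det = n * sxx - sx * sx
a = (sxx * se - sx * sxe) / det
b = (n * sxe - sx * se) / det

# Residuals after removing the regular (systematic) part of the errors
residuals = [e - (a + b * x) for x, e in zip(xs, es)]

# Residual variance: minimized sum of squares over the n - u redundant
# discrepancies; its positive square root is the standard error of
# unit weight s0
s0 = math.sqrt(sum(v * v for v in residuals) / (n - u))
print(a, b, s0)
```

The divisor n − u is the redundancy of the adjustment, so s₀ expresses the accuracy of a single measured value after the systematic parameters have been absorbed.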