3. AN IDEAL FOCAL PLANE: THE SENSOR LEVEL PRODUCT
The complexity of the focal plane makes the raw imagery difficult to use. A new product level has therefore been defined (De Lussy 2006). It is a basic product specially designed for the photogrammetric community and delivered with a rational function model.
Figure 3: Focal plane layout and location of ideal array 
In order to greatly simplify the use of the sensor model, the Sensor Level product simulates the imaging geometry of a simple push-broom linear array located very close to the PA TDI arrays. Moreover, this ideal array is assumed to belong to a perfect instrument with no optical distortion, carried by a platform free of high-frequency attitude perturbations. The attitude jitter correction (performed with a polynomial fitting) allows both simple attitude modelling and a more accurate representation of the imaging geometry by the rational function sensor model (see below).
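As an illustration of this kind of jitter removal, the following sketch fits a low-order polynomial to attitude samples and keeps only the smooth component; the sample data, variable names and polynomial degree are assumptions made for the example, not the operational Pleiades-HR processing.

    # Illustrative sketch only: smoothing platform attitude with a low-order
    # polynomial fit, in the spirit of the jitter correction described above.
    # The data, degree and names below are assumptions for the example.
    import numpy as np

    def smooth_attitude(times, angles, degree=3):
        """Fit a low-order polynomial to attitude samples (e.g. roll in
        radians) and return the smoothed values, removing the high-frequency
        jitter that the ideal Sensor Level geometry ignores."""
        coeffs = np.polyfit(times, angles, degree)   # least-squares fit
        return np.polyval(coeffs, times)             # smooth attitude model

    # Toy usage: noisy roll samples over a 10 s acquisition (assumed values)
    t = np.linspace(0.0, 10.0, 500)
    roll = 1e-4 * np.sin(0.2 * t) + 5e-7 * np.random.randn(t.size)  # rad
    roll_smooth = smooth_attitude(t, roll)
    jitter = roll - roll_smooth     # residual high-frequency perturbation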
3.2 Processing and image quality 
The production of this ideal linear-array imagery is made from the raw image and its rigorous sensor model.
The raw image is resampled into the Sensor Level geometry, taking into account a DEM. The direct geolocation is made with an accurate Sensor Level geometric model; thus, the Sensor Level image and its geometric model are consistent. The impact of this processing on the geometric accuracy of the resulting products must remain very small (less than a few centimetres). These errors are due to:
The quality of the resampling process, 
The accuracy of the DEM used (generally SRTM 
DTED1). 
The Sensor Level product is delivered with two geometric 
models: 
a “rigorous sensor” model 
a rational function model 
Users can choose either the rigorous sensor model or the rational function sensor model: the results are very comparable.
On the one hand, the rigorous sensor model is defined from a complete set of parameters of the image acquisition:
alignment and focal plane characteristics (linear array)
image time stamps
time-tagged smoothed attitude and ephemeris data
Such rigorous models are conventionally applied in photogrammetric processing because of the clear separation between the various physical parameters, which makes them easier to use in block adjustments (refinement using GCPs).
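As a rough illustration of such a GCP refinement, the sketch below estimates a constant image-space offset of a sensor model from a set of ground control points; the project() function and the data layout are hypothetical placeholders, not part of the Pleiades-HR ground segment.

    # Minimal sketch, assuming a hypothetical `project` function that maps
    # ground coordinates to image coordinates with the sensor model.
    import numpy as np

    def estimate_bias(project, gcps_ground, gcps_image):
        """gcps_ground: (N, 3) lon/lat/height; gcps_image: (N, 2) measured
        line/sample. Returns the mean (line, sample) offset to subtract
        from the model predictions."""
        predicted = np.array([project(*g) for g in gcps_ground])   # (N, 2)
        residuals = np.asarray(gcps_image) - predicted
        return residuals.mean(axis=0)    # least-squares shift estimate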
On the other hand, the Rational Function Model (RFM) is an approximation of the rigorous sensor model. It provides full three-dimensional sensor geolocation through a ratio of polynomials (Tao 2001), i.e. a standardized and very simple relationship between raw pixels and geographic coordinates. The RFM achieves a very high accuracy with respect to the original rigorous sensor model: accuracy assessments show that the RFM yields a worst-case error below 0.02 pixel compared with the rigorous sensor model, under all possible acquisition conditions.
Therefore, when the RFM is used for imagery exploitation, the achievable accuracy is virtually equivalent to that of the original physical sensor model: the 0.02 pixel (1.4 cm) difference between the two models is an order of magnitude smaller than the planimetric accuracy and is therefore negligible. The RFM fully benefits from the pre-processing applied to generate the Sensor Level product (which removes high-frequency distortions), allowing rational functions to represent this smooth geometric model precisely. The RFM can thus be used as a replacement sensor model for photogrammetric processing.
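For illustration, a minimal sketch of how such a rational function model can be evaluated is given below: each image coordinate is obtained as a ratio of two third-order polynomials in normalized ground coordinates. The monomial ordering, dictionary keys and normalization constants are assumptions made for the example; the actual coefficients are those delivered in the product metadata.

    # Minimal RFM (RPC-style) ground-to-image sketch; term ordering and key
    # names are illustrative assumptions, not the delivered format.
    import numpy as np

    def poly3(c, x, y, z):
        """Third-order polynomial in (x, y, z); c holds one coefficient per
        monomial term up to degree 3 (20 terms, ordered here for the example)."""
        terms = [1, x, y, z, x*y, x*z, y*z, x*x, y*y, z*z,
                 x*y*z, x**3, x*y*y, x*z*z, x*x*y, y**3, y*z*z,
                 x*x*z, y*y*z, z**3]
        return float(np.dot(c, terms))

    def rfm_ground_to_image(lon, lat, h, rpc):
        """Map geographic coordinates to (line, sample) with a rational model.
        `rpc` is a dict of offsets/scales and four coefficient sets."""
        # Normalize ground coordinates to roughly [-1, 1]
        x = (lon - rpc["lon_off"]) / rpc["lon_scale"]
        y = (lat - rpc["lat_off"]) / rpc["lat_scale"]
        z = (h   - rpc["h_off"])   / rpc["h_scale"]
        line_n = poly3(rpc["line_num"], x, y, z) / poly3(rpc["line_den"], x, y, z)
        samp_n = poly3(rpc["samp_num"], x, y, z) / poly3(rpc["samp_den"], x, y, z)
        # De-normalize to image coordinates
        return (line_n * rpc["line_scale"] + rpc["line_off"],
                samp_n * rpc["samp_scale"] + rpc["samp_off"])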
To obtain the best results:
The resampling process uses a highly accurate method (spline interpolators (Unser 1999)); a minimal sketch of such spline-based resampling is given after this list.
The DEM is pre-processed in order to minimize the relief artefacts due to errors and/or blunders.
The geometric model differences between the raw image and the Sensor Level geometry (especially the attitude and detector models) are minimized, to decrease the parallax and altitude error effects.
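The sketch below illustrates the spline-based resampling step, assuming SciPy's map_coordinates as the interpolator; the toy grid and image are placeholders and do not reproduce the actual Sensor Level geometry.

    # Illustrative sketch of spline resampling, in the spirit of the spline
    # interpolators cited above (Unser 1999). Toy data, not Pleiades-HR geometry.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def resample(raw_image, target_rows, target_cols, spline_order=3):
        """Sample `raw_image` at the (row, col) position of each output pixel,
        using spline interpolation of the requested order."""
        coords = np.stack([target_rows, target_cols])       # shape (2, H, W)
        return map_coordinates(raw_image, coords, order=spline_order,
                               mode="nearest")

    # Toy usage: shift an image by half a pixel in both directions
    raw = np.random.rand(100, 100)
    rows, cols = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
    shifted = resample(raw, rows + 0.5, cols + 0.5)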
Hence, the quality of a Sensor Level image is mainly linked with the quality of the corresponding raw image (the geometric budgets are detailed in (De Lussy 2006)). The only remaining difference is due to the small parallax between the Sensor Level model and the real sensor (less than 80 µrad), combined with the uncertainty of the DEM. In terms of location accuracy, the difference between Sensor Level images and real sensor images is less than 3·10⁻³ pixel, given the SRTM 30 m accuracy at 99.7%.
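As a back-of-the-envelope check of this figure, the residual parallax can be converted into a planimetric error and then into pixels, assuming a 0.7 m ground sampling distance (consistent with the 0.02 pixel = 1.4 cm equivalence quoted above for the RFM fit):

    # Rough check of the order of magnitude quoted above; the 0.7 m GSD is
    # an assumption inferred from the 0.02 pixel = 1.4 cm statement.
    parallax = 80e-6      # residual parallax angle, rad (< 80 µrad)
    dem_error = 30.0      # SRTM height error bound, m (99.7 %)
    gsd = 0.7             # assumed ground sampling distance, m

    planimetric_error = parallax * dem_error      # ~2.4e-3 m = 2.4 mm
    error_in_pixels = planimetric_error / gsd     # ~3.4e-3 pixel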
3.3 Accuracy of Sensor Level geometric model 
The geometric modelling refers to the relationship between raw pixels in the image and geographic coordinates on the ground.
4. ORTHO-RECTIFIED PRODUCTS PERFORMANCE
The other set of products made available by the Pleiades-HR system is the ortho-images (and ortho-mosaics). These products are ortho-rectified thanks to an accurate DEM (Reference3D™ if available, or a DTED1 System DEM by default). They are then easily usable with GIS as map products.
The ortho-rectification processing takes advantage of the high location accuracy of Pleiades-HR: a circular error of 14 m probable (90% of the images) and up to 25 m maximum (99.7% of the images).
For multi-temporal registration, it will also be possible to register the ortho-image to a reference image (Reference3D™ database). Even though this processing will not increase the location accuracy, it guarantees a perfect multi-temporal registration between images.
The method is detailed in (Baillarin 2004). It is composed of 
three independent steps: 
1) Image and reference setup in the same geometry using a raw 
location model, 
2) Image mis-registration measurements, using an automatic 
and generic process, 