International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B4, 2012 
XXII ISPRS Congress, 25 August - 01 September 2012, Melbourne, Australia 
into the final output. Due to the intrinsic diversity of processing
algorithms, merge procedures vary. For some algorithms,
additional edge operations must be carried out before two
adjacent results are merged, while for others all the
intermediate results can be merged in batch mode without any
additional edge operations.
The complete execution graph can therefore be divided into two
categories according to the decomposition and merge paradigms:
the two-level n-ary tree and the n-level binary tree, illustrated
in Fig. 4. In the first type, all the intermediate results are merged
as a whole; in the second, two adjacent intermediate results
are merged hierarchically.
Figure 4. Two types of Split-and-Merge paradigms 
(left: two-level n-ary tree; right: n-level binary tree) 
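The two paradigms above can be contrasted in a short sketch. This is an illustrative assumption, not the paper's implementation: `merge_pair`, `merge_all`, and the integer "results" are placeholders (a real merge would stitch adjacent DEM tiles, handling their shared edges).

```python
def merge_nary(results, merge_all):
    """Two-level n-ary tree: all intermediate results merged in one batch."""
    return merge_all(results)

def merge_binary(results, merge_pair):
    """n-level binary tree: adjacent results merged pairwise, level by level."""
    while len(results) > 1:
        nxt = []
        for i in range(0, len(results), 2):
            if i + 1 < len(results):
                nxt.append(merge_pair(results[i], results[i + 1]))
            else:
                nxt.append(results[i])  # odd leftover rises to the next level
        results = nxt
    return results[0]
```

Both yield the same final result when the merge operation is associative; the binary tree trades extra merge levels for smaller, pairwise merge tasks that can run as soon as two neighbours finish.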
For a specific LiDAR algorithm, the execution pattern is defined
by its internal procedure. Users/programmers need only focus on
the actual implementation of two tasks: Split (program S) and
Merge (program M). Once these two programs are implemented,
the framework automatically generates a collection of scripts
that encapsulate the individual split and merge tasks. As an
illustration, for interpolating four point blocks into a DEM,
the generated task scripts are listed in Fig. 5 and Table 1.
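The script generation for the four-block IDW example could look roughly as follows. This is a hypothetical sketch: the command names `idw_interpolate` and `demmerge` and the file names come from Table 1, but the generation logic itself is an assumption.

```python
def generate_task_scripts(block_files, prefix="abc"):
    """Emit one interpolation task per block plus one final merge task."""
    split_scripts = []
    dem_files = []
    for k, las in enumerate(block_files):
        dem = f"{prefix}_{k + 4:02d}.dem"            # abc_04.dem, abc_05.dem, ...
        split_scripts.append(f"idw_interpolate -i {las} -o {dem}")
        dem_files.append(dem)
    # One merge task combines all intermediate DEMs into the final output.
    merge_script = f"demmerge -i {'/'.join(dem_files)} -o {prefix}_01.dem"
    return split_scripts, merge_script

splits, merge = generate_task_scripts(
    ["abc_0_0.las", "abc_1_0.las", "abc_0_1.las", "abc_0_2.las"])
```

The four split scripts are independent and can be dispatched to different nodes; only the merge task must wait for all of them.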
Figure 5. Four point blocks for interpolation

idw_interpolate -i abc_0_0.las -o abc_04.dem
idw_interpolate -i abc_1_0.las -o abc_05.dem
idw_interpolate -i abc_0_1.las -o abc_06.dem
idw_interpolate -i abc_0_2.las -o abc_07.dem
demmerge -i abc_04.dem/abc_05.dem/abc_06.dem/abc_07.dem -o abc_01.dem
Table 1. A list of automatically generated task scripts

4. THE PARALLEL FRAMEWORK

Our universal parallel framework is built on a standard SMP
(Symmetric Multiprocessor) cluster. Each node is connected by
the local area network. The open source TORQUE project
(Staples, 2006) was customized to provide a basis for our
parallel framework.

Fig. 6 illustrates the structure of our parallel framework. The
system consists of four components: pbs_server, pbs_sched,
database, and pbs_mom. These four components collaborate
closely to perform LiDAR point cloud processing.

Figure 6. The TORQUE-based parallel framework

The database is the module newly designed for TORQUE. It
stores current information for later scheduling, e.g. the
distribution of decomposed blocks, the status of job execution,
and the state of I/O between nodes. The database is hosted in a
MySQL instance. The detailed structure of these tables is
listed in Tables 2, 3 and 4.

Field Name        Type      Length
block_name        vchar     40
node_name         vchar     40
node_type         integer
Table 2. The structure of tbl_block_distribution

Field Name        Type      Length
task_id           vchar     40
node_name         vchar     40
start_time        datetime
completion_time   datetime
Table 3. The structure of tbl_task_execution

Field Name        Type      Length
block_name        vchar     40
node_source       vchar     40
node_dest         vchar     40
start_time        datetime
completion_time   datetime
Table 4. The structure of tbl_io_information
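The three scheduling tables (Tables 2-4) can be sketched as a minimal, self-contained schema. This is an assumption-laden illustration: SQLite stands in here for the MySQL instance the framework actually uses, the vchar/datetime types are mapped to VARCHAR/TIMESTAMP, and the inserted node name is hypothetical.

```python
import sqlite3

# Sketch of the scheduling tables (Tables 2-4); SQLite replaces the
# MySQL instance of the paper purely to keep the example runnable.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl_block_distribution (
    block_name VARCHAR(40),
    node_name  VARCHAR(40),
    node_type  INTEGER
);
CREATE TABLE tbl_task_execution (
    task_id         VARCHAR(40),
    node_name       VARCHAR(40),
    start_time      TIMESTAMP,
    completion_time TIMESTAMP
);
CREATE TABLE tbl_io_information (
    block_name      VARCHAR(40),
    node_source     VARCHAR(40),
    node_dest       VARCHAR(40),
    start_time      TIMESTAMP,
    completion_time TIMESTAMP
);
""")
# Record that block abc_0_0 was assigned to a (hypothetical) node.
conn.execute("INSERT INTO tbl_block_distribution VALUES (?, ?, ?)",
             ("abc_0_0", "node01", 0))
count = conn.execute("SELECT COUNT(*) FROM tbl_block_distribution").fetchone()[0]
```

In this arrangement, the scheduler can join `tbl_block_distribution` against `tbl_io_information` to prefer dispatching a task to the node that already holds its input block, avoiding an inter-node transfer.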