
The pbs_server is the central module in TORQUE. It communicates with the other modules and accepts the user's commands via a network protocol. Besides its main functions, such as receiving/creating a batch job, modifying the job, and dispatching the job, a specially designed function was added to extract the data dependencies of a batch job. The input data are an important criterion for later scheduling.
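As a concrete illustration (not taken from the paper), the added dependency-extraction step could amount to scanning the submitted job script for a declared list of input blocks. The directive name, block identifiers, and helper function below are hypothetical; this is only a minimal sketch of the idea.

// Minimal sketch (hypothetical names): how pbs_server might extract the
// input-block dependencies of a submitted batch job by scanning the job
// script for an "input_blocks=" directive.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Parse a comma-separated block list from a directive line such as
//   #PBS -W input_blocks=blk_012,blk_013
std::vector<std::string> extractInputBlocks(const std::string& jobScript) {
    std::vector<std::string> blocks;
    std::istringstream lines(jobScript);
    std::string line;
    const std::string key = "input_blocks=";
    while (std::getline(lines, line)) {
        std::size_t pos = line.find(key);
        if (line.rfind("#PBS", 0) == 0 && pos != std::string::npos) {
            std::istringstream ids(line.substr(pos + key.size()));
            std::string id;
            while (std::getline(ids, id, ','))
                blocks.push_back(id);
        }
    }
    return blocks;
}

int main() {
    std::string script =
        "#PBS -N dt_block_12\n"
        "#PBS -W input_blocks=blk_012,blk_013\n"
        "./split_dt blk_012 blk_013\n";
    for (const auto& b : extractInputBlocks(script))
        std::cout << "dependency: " << b << "\n";  // recorded for scheduling
}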
The pbs_mom is the daemon that places a job into execution on the node where it resides. One pbs_mom runs on each computing node. The pbs_mom receives a copy of the job from pbs_server, creates a new session, places the job into execution, monitors the status of the running job, and reports that status to pbs_server. The modification to pbs_mom enables it to report the data status, including the input blocks and output results, to the database after successfully executing a job.
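A minimal sketch of this reporting step is given below. The table layout, job identifier, and the idea of emitting plain SQL are assumptions for illustration only; the paper does not specify the database interface.

// Minimal sketch (assumed schema): what the modified pbs_mom might report
// once a job exits successfully. A real database call is replaced by
// printing SQL statements against a hypothetical block_status table.
#include <iostream>
#include <string>
#include <vector>

struct JobDataStatus {
    std::string jobId;
    std::vector<std::string> inputBlocks;   // blocks the job consumed
    std::vector<std::string> outputResults; // result files it produced
};

void reportDataStatus(const JobDataStatus& s) {
    for (const auto& b : s.inputBlocks)
        std::cout << "INSERT INTO block_status VALUES ('" << s.jobId
                  << "', '" << b << "', 'consumed');\n";
    for (const auto& r : s.outputResults)
        std::cout << "INSERT INTO block_status VALUES ('" << s.jobId
                  << "', '" << r << "', 'produced');\n";
}

int main() {
    reportDataStatus({"42.master", {"blk_012", "blk_013"}, {"tri_012_013.dat"}});
}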
The pbs_sched daemon implements the administrator's policy that controls which jobs become ready, when each job is run, and with which resources. The pbs_sched communicates with pbs_server to determine the availability of jobs in the job queue and the state of various system resources. It also queries the database for block information to make scheduling decisions.
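One plausible way such block information could enter the scheduling decision is a locality-aware node choice, preferring workers that already hold a job's input blocks. The data structures and the pickNode helper below are hypothetical and only illustrate this idea; they are not the scheduling policy from the paper.

// Minimal sketch (hypothetical structures): choose the worker node that
// already stores the most of a job's input blocks, according to the
// block information read from the database.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// node name -> set of block IDs stored on that node
using BlockMap = std::map<std::string, std::set<std::string>>;

std::string pickNode(const BlockMap& nodeBlocks,
                     const std::vector<std::string>& jobInputs) {
    std::string best;
    std::size_t bestHits = 0;
    for (const auto& entry : nodeBlocks) {
        std::size_t hits = 0;
        for (const auto& b : jobInputs)
            hits += entry.second.count(b);   // count locally available inputs
        if (hits >= bestHits) {              // ties resolved to the later node
            bestHits = hits;
            best = entry.first;
        }
    }
    return best;
}

int main() {
    BlockMap db = {{"node1", {"blk_012"}}, {"node2", {"blk_012", "blk_013"}}};
    std::cout << pickNode(db, {"blk_012", "blk_013"}) << "\n";  // prints node2
}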
5. EXPERIMENTS 
5.1 Experimental Environment and Datasets 
The experimental environment was a six-node Linux cluster running RedHat Enterprise Linux 5.5. Each node has two Quad-Core Intel Xeon CPUs, 8 GB DDR2-667 ECC SDRAM, and a 1 TB hard disk (7200 rpm, 32 MB cache). In this cluster, one node is configured as the master node, while the other five are the workers.
The LiDAR point cloud of Gilmer County, West Virginia was chosen for our experiments, as illustrated in Fig. 7. It contains 0.883 billion points and occupies 16.4 GB of external storage. The average point spacing is about 1.4 m.
  
Figure 7. The Gilmer County LiDAR dataset
5.2 Experimental Algorithms 
One common LiDAR processing algorithm, Delaunay triangulation (DT), was chosen to demonstrate the proposed Split-and-Merge paradigm. The algorithm was executed on the proposed parallel framework to examine its efficiency and suitability.
The Delaunay triangulation pipeline for our proposed 
framework is modified from a parallel approach, called 
ParaStream (Wu et al., 2011). ParaStream integrates traditional 
D&C methods with streaming computation, and can generate a 
Delaunay triangulation for billions of LiDAR points on 
multicore architectures within ten to twenty minutes. 
In the Split-and-Merge paradigm, the Split step carries out a Delaunay triangulation for each decomposed block, erases the finalized triangles from the current triangulation (InnerErase), and outputs the temporary results. The Merge step merges the triangulations of two adjacent blocks and likewise erases the finalized triangles (InterErase). None of these discrete tasks needs a neighbor definition. The entire Delaunay triangulation pipeline falls into the type of an n-level binary tree.
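The following sketch (with string labels standing in for real triangulations) illustrates how the discrete Split and Merge tasks form such an n-level binary tree: every block yields one Split task, and adjacent partial results are merged pairwise, level by level, until a single triangulation remains.

// Simplified sketch of the task tree: leaves are Split tasks (per-block DT
// plus InnerErase), internal nodes are Merge tasks (join two adjacent
// partial triangulations plus InterErase). Labels replace real geometry.
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Leaf level: one Split task per decomposed block.
    std::vector<std::string> level = {"B0", "B1", "B2", "B3", "B4", "B5", "B6", "B7"};
    for (const auto& b : level)
        std::cout << "Split  " << b << " -> DT(" << b << ") + InnerErase\n";

    // Internal levels: pairwise Merge tasks until one triangulation remains.
    while (level.size() > 1) {
        std::vector<std::string> next;
        for (std::size_t i = 0; i + 1 < level.size(); i += 2) {
            std::string merged = "(" + level[i] + "+" + level[i + 1] + ")";
            std::cout << "Merge  " << level[i] << " , " << level[i + 1]
                      << " -> " << merged << " + InterErase\n";
            next.push_back(merged);
        }
        if (level.size() % 2) next.push_back(level.back());  // odd block carried up
        level = next;
    }
}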
5.3 Results and Discussion 
All the Split and Merge tasks for the algorithm were written in C++ and compiled with GCC 4.3 on Linux. In the experiments, the execution time, speedup, and efficiency were used as the metrics for evaluating the performance of the parallel framework.
The first experiment evaluated the influence of different task granularities on parallel performance. A decomposition size of 1000 m was adopted. The detailed test results are listed in Table 5 and shown in Fig. 8.
  
  
  
  
Processors    DT
    1        10380
    3         3840
    5         3300

Table 5. Execution time (in seconds) with the DT algorithm
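For reference, speedup and efficiency follow from these timings by the usual definitions, taking the one-processor run as the serial baseline:

% Speedup and efficiency derived from the Table 5 timings (T_1 = 10380 s).
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p};
\quad S_3 = \tfrac{10380}{3840} \approx 2.70,\ E_3 \approx 0.90;
\quad S_5 = \tfrac{10380}{3300} \approx 3.15,\ E_5 \approx 0.63.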
Figure 8. Speedup of parallel DT in this framework (speedup plotted against the number of processors)
All these experimental results demonstrate that significant speedup and high data throughput are achieved with this framework. At the same time, with this parallel framework, our