
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B2, 2012 
XXII ISPRS Congress, 25 August — 01 September 2012, Melbourne, Australia 
A CACHE DESIGN METHOD FOR SPATIAL 
INFORMATION VISUALIZATION IN 3D REAL-TIME RENDERING ENGINE 
Xuefeng Dai a, Hanjiang Xiong a, Xianwei Zheng a
a State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 
129 Luoyu Road, Wuhan, 430079, China -daixuefeng203@126.com 
KEY WORDS: Memory cache, Disk cache, 3D Rendering Engine, Multi-thread, Replacement policy 
ABSTRACT: 
A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more obvious as the amount of visualization data grows. The cache is what allows the engine to browse smoothly through data that resides outside core memory or comes from the Internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache and the elimination cache. The rendering cache stores the data currently being rendered by the engine; the pre-rendering cache stores the data dispatched according to the horizontal and vertical position of the viewpoint; the elimination cache stores the data evicted from the other two caches, which is then written to the disk cache. The disk cache uses multiple large files. When a disk cache file reaches the size limit (128 MB in our experiment), no item is evicted from that file; instead, a new large cache file is created. If the number of large files exceeds a pre-set maximum, the earliest file is deleted from the disk. In this way only one file is open for both writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads are used to update the cache data: they load data into the rendering cache as soon as possible for rendering, into the pre-rendering cache for the next few frames, and into the elimination cache when it is not needed for the moment. In our experiment, two threads are used. The first thread organizes the memory cache according to the viewpoint and maintains two lists: the adding list indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list indexes the data that is no longer visible in the rendered scene and should be moved to the elimination cache. The second thread moves data between the memory and disk caches according to the adding and deleting lists, creates download requests when data indexed in the adding list can be found in neither the memory cache nor the disk cache, and moves elimination-cache data to the disk cache when both lists are empty. In our experiment, the cache designed as described above proved reliable and efficient, and the data loading time and file I/O time decreased sharply, especially as the rendering data grew larger.
1. INTRODUCTION 
A well-designed cache system has a positive impact on the 3D
real-time rendering engine: it can make scene rendering smooth,
especially when visualizing massive geographic data. Data
caching is an important technique for improving data
availability and access latency [1]. A cache system becomes
complicated when considered together with a real-time rendering
engine: both the memory cache and the disk cache must be taken
into account, and their replacement policies may differ. The main
purpose of the cache system is to prepare the data that the
rendering engine needs most.
The core of a cache system is its replacement policy. LRU [2]
is one of the best-known replacement policies: it replaces the
block that has not been visited for the longest time, so it
exploits temporal locality [3] rather than spatial locality.
O'Neil and others proposed LRU-k [4], which replaces the block
whose k-th most recent visit lies furthest in the past; LRU is
the special case of LRU-k with k equal to 1. LRU-k has to store
additional history information, and how long that history should
be kept is not well solved. Megiddo and others proposed ARC [5].
This strategy uses two LRU queues to manage the page cache: one
queue manages the pages that have been visited only once, the
other manages the pages visited more than once, and the strategy
adjusts the sizes of the two queues according to temporal or
spatial locality. Dynamic caching mechanisms are discussed in
[6] and [7], and cooperative proxy caching is examined in [8-11].
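As a point of reference for the policies surveyed above, the classic LRU scheme can be sketched with a recency-ordered list plus a hash index. This is a generic textbook sketch, not the implementation used by any of the cited works; the class and method names are illustrative.

```cpp
#include <cassert>
#include <list>
#include <unordered_map>
#include <utility>

// Minimal LRU cache sketch: a doubly linked list keeps keys in recency
// order (most recently used at the front); a hash map gives O(1) lookup
// of each key's position in that list.
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns true and writes the value if the key is cached;
    // a hit moves the key to the most-recently-used position.
    bool get(int key, int& value) {
        auto it = index_.find(key);
        if (it == index_.end()) return false;
        order_.splice(order_.begin(), order_, it->second);
        value = it->second->second;
        return true;
    }

    // Inserts or updates a key; evicts the least-recently-used
    // entry (the list tail) when the cache is full.
    void put(int key, int value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            it->second->second = value;
            order_.splice(order_.begin(), order_, it->second);
            return;
        }
        if (order_.size() == capacity_) {
            index_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(key, value);
        index_[key] = order_.begin();
    }

private:
    std::size_t capacity_;
    std::list<std::pair<int, int>> order_;  // MRU at front, LRU at back
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index_;
};
```

LRU-k generalizes this by ordering on the k-th most recent access rather than the most recent one, at the cost of keeping a per-block access history.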
In a real-time rendering engine, the cache replacement policy
should take the scene information into account. In our policy,
each data item in the cache has a weight value that is calculated
dynamically, and items are replaced according to this weight
value, as discussed in Sections 2.1.4 and 2.2.2. Our cache system
consists of three parts: the memory cache, the disk cache and the
multi-threading mechanism.
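The idea of weight-based replacement can be sketched as follows. The actual weight formula appears in Sections 2.1.4 and 2.2.2 and is not reproduced here; the function below, which favors items near the viewpoint and recently used, is a hypothetical stand-in, and all names are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Each cached data item carries the quantities the weight is computed from.
struct CacheItem {
    int id;
    double distance;  // distance from the current viewpoint
    double lastUsed;  // timestamp of the item's last access
};

// Hypothetical weight: nearer and more recently used items score higher.
// This is NOT the paper's formula, only an illustration of the mechanism.
double weight(const CacheItem& item, double now) {
    return 1.0 / ((1.0 + item.distance) * (1.0 + now - item.lastUsed));
}

// Replacement evicts the item with the smallest dynamically computed
// weight, i.e. the least valuable item for the current scene.
int evictLowestWeight(std::vector<CacheItem>& items, double now) {
    auto victim = std::min_element(
        items.begin(), items.end(),
        [now](const CacheItem& a, const CacheItem& b) {
            return weight(a, now) < weight(b, now);
        });
    int id = victim->id;
    items.erase(victim);
    return id;
}
```

Because the weight depends on the viewpoint, it must be recomputed as the camera moves, which is why the weights are described as dynamically calculated.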
2. CACHE SYSTEM 
2.1 Memory cache 
[Figure 1 depicts the three sections of the memory cache: the rendering cache, whose items are in the Loaded state; the pre-rendering cache, whose items are in the PreLoad state; and the elimination cache, whose items are in the PreUnLoad state. Arrows indicate the data transfer between the sections.]
Figure 1. Data transfer in memory cache
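The transfers shown in Figure 1 can be sketched as state moves between the three sections. The section names and item states (Loaded, PreLoad, PreUnLoad) follow the figure; the container layout and method names are assumptions for illustration, not the paper's implementation.

```cpp
#include <cassert>
#include <unordered_map>

// Item states as labeled in Figure 1.
enum class ItemState { Loaded, PreLoad, PreUnLoad };

// The three sections of the memory cache, keyed by data item id.
struct MemoryCache {
    std::unordered_map<int, ItemState> rendering;     // Loaded: in use this frame
    std::unordered_map<int, ItemState> preRendering;  // PreLoad: needed soon
    std::unordered_map<int, ItemState> elimination;   // PreUnLoad: awaiting disk write

    // An item dispatched near the viewpoint enters the pre-rendering cache.
    void preload(int id) { preRendering[id] = ItemState::PreLoad; }

    // An item needed for the current frame is promoted to the rendering cache.
    void promote(int id) {
        if (preRendering.erase(id)) rendering[id] = ItemState::Loaded;
    }

    // An item leaving the visible scene moves to the elimination cache,
    // where it waits to be flushed to the disk cache.
    void markInvisible(int id) {
        if (rendering.erase(id)) elimination[id] = ItemState::PreUnLoad;
    }
};
```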