

Example Dataset

A scientifically significant dataset has been provided by the Materials Science users with whom we are working. At this time we have been requested not to publish any images of the dataset until their work has been considered for publication elsewhere (it is being submitted to a high-impact publication); the images are considered key to the impact and novelty of the paper. We intend to update this report with images once they have been published. However, details of the dataset size and performance figures can be given.

The dataset is a uniform volume of dimensions 7150$\times$7150$\times$7369 containing byte data (density values in the range 0-255). The dataset is to be volume rendered, and its large size makes it a useful test case because the parallel volume render module is particularly memory hungry. Each pstmpunode(m) render process takes a copy of the sub-domain from the pstmpunode(p) process (which typically only executes the parallel file reader code). It also allocates another volume, $1/32$ the size, for voxel lighting calculations; for gigabyte volumes this can be significant. It is therefore memory usage that dictates how many processes are needed to render the dataset. Volume rendering takes place at image resolution, so performance is mainly influenced by the size of the final image and by whether the transfer function produces semi-transparent regions in the data (which increases the execution time of the volume render algorithm). However, the AVS volume renderer does not tile the image, so adding more processors does not necessarily improve the frame rate. It does, however, reduce the size of the sub-domain that each pstmpunode render process must store.
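
As a rough illustration of the memory pressure, the sketch below estimates the per-process footprint from the figures quoted above. It assumes the $1/32$-size lighting volume is allocated relative to each process's sub-domain copy and ignores any other overheads of the AVS/Express parallel volume render module; the domain counts (127 and 255) are those used in the table below.

    # Rough per-process memory estimate for the 7150 x 7150 x 7369 byte volume.
    # Assumption: each render process holds a copy of its sub-domain plus an
    # extra volume 1/32 of that size for voxel lighting; other overheads of the
    # AVS/Express parallel volume render module are not modelled.

    GIB = 2 ** 30
    volume_bytes = 7150 * 7150 * 7369           # one byte per voxel (~351 GB)

    def footprint_gb(num_domains):
        sub = volume_bytes / num_domains        # sub-domain copy
        lighting = sub / 32.0                   # voxel lighting volume
        return sub / GIB, (sub + lighting) / GIB

    for domains in (127, 255):
        sub_gb, total_gb = footprint_gb(domains)
        print(f"{domains} domains: {sub_gb:.1f} GB sub-domain, "
              f"~{total_gb:.1f} GB with lighting volume")

With 127 domains this gives roughly 2.8 GB per render process and with 255 domains roughly 1.4 GB, in line with the table below.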


Table: Volume rendering a 351GB dataset for two image sizes. The number of domains used is the minimum number required to volume render this dataset for the given mppnppn value.
                         mppnppn = 2               mppnppn = 4
                      $512^2$    $1024^2$      $512^2$    $1024^2$
  Domains                127        127           255        255
  Total procs            255        255           511        511
  GB per domain          2.8        2.8           1.4        1.4
  Build+Distrib (s)   0.044900   0.045211      0.186890   0.050382
  Render (s)          0.469087   0.695153      0.201461   0.672819
  Total (s)           0.513987   0.740364      0.388351   0.723201
  Frames per sec         1.9        1.3           2.5        1.3


Table [*] shows various statistics obtained when rendering the volume dataset. Rendering was performed for image sizes of $512^2$ and $1024^2$. The number of domains into which the data is divided is the minimum required to volume render this dataset; dropping down to a smaller queue size (and hence fewer sub-domains) resulted in the pstmpunode rendering processes running out of memory. The times given are for operations performed by the express process on the login node. The Build+Distrib operation is the construction of the scene graph and its distribution to all rendering processes, as discussed in section [*]. The render time is the total time from sending out the scene graph to receiving a composited image back from the render processes; the render processes will have rendered their data during this operation, so this time is dependent on the AVS parallel volume render module. The total time is the sum of the two. The frames per second figure is the best rate that can be achieved when rendering the data if no transfer of the image to the user's remote desktop is required.
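
As a quick sanity check, the frame rates in the table follow directly from the total times (the tabulated figures appear to be truncated to one decimal place). A minimal sketch using the table values:

    # Frames per second derived from the timing table:
    # fps = 1 / (Build+Distrib + Render), since no transfer of the image to
    # the user's remote desktop is included.

    timings = {                                  # (build+distrib, render) in seconds
        ("mppnppn=2", "512^2"):  (0.044900, 0.469087),
        ("mppnppn=2", "1024^2"): (0.045211, 0.695153),
        ("mppnppn=4", "512^2"):  (0.186890, 0.201461),
        ("mppnppn=4", "1024^2"): (0.050382, 0.672819),
    }

    for (config, image), (build, render) in timings.items():
        total = build + render
        print(f"{config} {image}: total {total:.6f} s -> {1.0 / total:.2f} fps")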

Despite the low frame rate we are able to manipulate the visualization interactively, including manipulation of the volume render transfer function to reveal details within the volume. We had not been able to render this dataset prior to running AVS/Express on HECToR.


