A scientifically significant dataset has been provided by the Materials Science users with whom we are working. At present we have been asked not to publish any images of that dataset until their work has been considered for publication elsewhere (it is being submitted to a high-impact journal); the images are considered key to the impact and novelty of the paper. We intend to update this report with images once they have been published. However, the size of the dataset and performance figures can be given.
The dataset is a uniform volume of dimensions 7150 × 7150 × 7369 containing byte data (density values in the range 0-255). The dataset is to be volume rendered, and its large size makes it a useful test case because the parallel volume render module is particularly memory hungry. Each pstmpunode(m) render process takes a copy of its sub-domain from the pstmpunode(p) process (which typically executes only the parallel file reader code). It also allocates a second volume, the same size as its sub-domain, for voxel lighting calculations. For gigabyte volumes this overhead can be significant, and it is memory usage that dictates how many processes are needed to render the dataset; a rough estimate is sketched below.
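As an illustration of that constraint, the following back-of-envelope sketch (in Python) estimates the minimum number of render processes from the figures above. The factor of two comes from the sub-domain copy plus the equally sized lighting volume; the 2 GiB per-process budget is an assumed figure for illustration only, not a measured HECToR value.

    import math

    # Per-process memory model described above: each render process holds a
    # copy of its sub-domain plus an equally sized volume for voxel lighting.
    dims = (7150, 7150, 7369)            # voxels, one byte each (0-255)
    volume_bytes = dims[0] * dims[1] * dims[2]
    overhead_factor = 2                  # sub-domain copy + lighting volume

    # Assumed usable memory per render process (illustrative only).
    budget_per_process = 2 * 2**30       # 2 GiB

    min_processes = math.ceil(overhead_factor * volume_bytes / budget_per_process)
    print(f"volume size: {volume_bytes / 2**30:.1f} GiB")
    print(f"minimum render processes at this budget: {min_processes}")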
Volume rendering takes place at image resolution, and so performance is mainly influenced by the size of the final image and by whether the transfer function produces semi-transparent regions in the data, which increase the execution time of the volume render algorithm; the sketch below illustrates why.
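The report does not describe the renderer's internals, but the cost of semi-transparency is easy to see in a generic front-to-back ray caster; the Python sketch below is a minimal illustration, not AVS/Express code. With an opaque transfer function the accumulated opacity saturates after a few samples and the ray terminates early; with a semi-transparent one each ray must traverse many more samples.

    def composite_ray(samples, transfer_function, threshold=0.99):
        """Front-to-back compositing of one ray; returns (colour, samples used)."""
        colour, alpha, used = 0.0, 0.0, 0
        for value in samples:
            c, a = transfer_function(value)
            colour += (1.0 - alpha) * a * c   # standard "over" operator
            alpha += (1.0 - alpha) * a
            used += 1
            if alpha >= threshold:            # early ray termination
                break
        return colour, used

    opaque = lambda v: (v / 255.0, 0.9)       # opacity saturates immediately
    translucent = lambda v: (v / 255.0, 0.02) # semi-transparent mapping

    ray = [128] * 1000                        # 1000 samples along one ray
    print(composite_ray(ray, opaque)[1])      # terminates after 2 samples
    print(composite_ray(ray, translucent)[1]) # runs for 228 samples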
Note, however, that the AVS volume renderer does not tile the image, so adding more processors does not necessarily improve the frame rate. It does reduce the size of the sub-domain that each pstmpunode render process must store.
The table shows various statistics obtained when rendering the 7150 × 7150 × 7369 volume dataset. Rendering was performed at two image sizes. The number of domains into which the data is divided is the minimum required to volume render this dataset: dropping to a smaller queue size (and hence fewer sub-domains) resulted in the pstmpunode rendering processes running out of memory. The times given are for operations performed by the express process on the login node. The Build+Distrib time covers the construction of the scene graph and its distribution to all rendering processes, as discussed in an earlier section. The render time is the total time taken
from sending out the scene graph to receiving a composited image back from the
render processes. The render processes will have rendered their data during this
operation, so this time depends on the AVS parallel volume render module. The total time is the sum of the two. The frames-per-second figure is the best rate that can be achieved when rendering the data if no transfer of the image to the user's remote desktop is required; the relationship between these quantities is sketched below.
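Read literally, the table's derived columns follow from simple arithmetic. A minimal sketch, assuming the best-case frame rate is the reciprocal of the total time; the timing values below are placeholders, not figures from the table.

    # Placeholder timings (seconds); NOT values from the report's table.
    build_distrib_s = 1.5    # scene-graph construction + distribution
    render_s = 4.0           # scene graph out -> composited image back

    total_s = build_distrib_s + render_s   # "Total" column
    best_fps = 1.0 / total_s               # best case: no image transfer to
                                           # the user's remote desktop
    print(f"total {total_s:.2f} s -> {best_fps:.2f} frames per second")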
Despite the low frame rate we are able to manipulate the visualization interactively, including manipulation of the volume render transfer function to reveal details within the volume. We had not been able to render this dataset at all prior to running AVS/Express on HECToR.