

Introduction

The AVS/Express visualization product [1] is a scientific visualization application that can render many forms of data. It uses the visualization pipeline method [2], in which data is read into the system, possibly filtered, and then mapped to geometry for rendering. At each stage the user controls how the data is processed: choosing an appropriate file reader, deciding how the data is filtered (for example, whether it is down-sampled or cropped), and specifying how the filtered data is converted to geometry. This last stage usually employs a visualization technique such as isosurfacing or volume rendering to produce an image of the dataset. AVS/Express provides a user interface in which these stages are represented by modules, and allows the user to connect the modules together to form a data-flow network. Figure [*] shows an example network in which data is read in via a file reader module, cropped (filtered), and then mapped to geometry via several isosurface modules and an orthoslice module. All geometry is sent to the viewer module at the bottom of the network. A module may have its own parameters, which are controlled in the user interface panel on the left, and the user can interact with the visualization in the viewer window.

Figure: AVS/Express Network Editor and Visualization Window.
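To make the data-flow idea concrete, the short sketch below (in Python, using hypothetical module names rather than the AVS/Express API or its V language) chains a reader, a crop filter, an isosurface-like mapper and a viewer stage in the same way the network editor connects modules:

# Minimal sketch of the data-flow pipeline idea behind an AVS/Express network.
# The module names and interfaces here are hypothetical illustrations, not the
# AVS/Express API: each "module" is a callable stage whose output feeds the next.

import numpy as np

def read_volume(shape=(64, 64, 64)):
    """Stand-in for a file reader module: produce a scalar volume."""
    z, y, x = np.indices(shape)
    return np.sin(0.2 * x) + np.cos(0.2 * y) + 0.1 * z

def crop(volume, bounds):
    """Filter stage: crop the volume to a sub-domain (like a crop module)."""
    (z0, z1), (y0, y1), (x0, x1) = bounds
    return volume[z0:z1, y0:y1, x0:x1]

def isosurface_cells(volume, level):
    """Mapping stage: flag cells straddling the iso-level, a crude stand-in
    for an isosurface module that would emit triangle geometry."""
    inside = volume >= level
    return np.argwhere(inside[:-1, :-1, :-1] != inside[1:, 1:, 1:])

def render(geometry):
    """Viewer stage: here we simply report how much geometry reached the viewer."""
    print(f"{len(geometry)} cells would be turned into triangles and drawn")

# Chain the modules in the same order the network editor connects them.
render(isosurface_cells(crop(read_volume(), ((8, 56), (8, 56), (8, 56))), level=0.5))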

The particular version of AVS/Express to be ported is the AVS/Express Distributed Data Renderer (DDR) Edition, version 7.2.1. This product provides parallel module computation by allowing the AVS modules to execute on decomposed datasets: the dataset is split into smaller domains that are distributed to a number of compute processes, each of which processes its domain according to the AVS modules in use. In addition to the standard AVS Software and OpenGL renderers, the DDR edition provides a parallel renderer (referred to as the MPU renderer for historical reasons) that is capable of rendering the geometry generated by the parallel modules. Separate rendering processes receive geometry from particular compute processes, render an image of that geometry, and then composite the images together to form a final visualization of the entire dataset. This technique is referred to as sort-last rendering [3] and allows datasets to be rendered that exceed the capabilities of a single rendering process (whether a GPU or a software renderer). This version of the product can render data on distributed compute nodes where no GPU hardware is available by using the MesaGL [4] software implementation of OpenGL [5].
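The sketch below illustrates the depth-compositing step that underlies sort-last rendering. It assumes each rendering process has produced an RGB image plus a depth buffer for its share of the geometry; it is an illustration of the technique only, not the MPU renderer's actual implementation:

# Minimal sketch of the depth-compositing step behind sort-last rendering.
# Each rendering process is assumed to contribute an RGB image and a depth
# buffer covering its portion of the geometry.

import numpy as np

def composite_pair(rgb_a, depth_a, rgb_b, depth_b):
    """Keep, per pixel, the fragment closest to the viewer (smallest depth)."""
    closer_a = depth_a <= depth_b
    rgb = np.where(closer_a[..., None], rgb_a, rgb_b)
    depth = np.where(closer_a, depth_a, depth_b)
    return rgb, depth

def composite_all(images):
    """Fold the per-process images into one final image, as a compositing
    tree or reduction would do across the rendering processes."""
    rgb, depth = images[0]
    for next_rgb, next_depth in images[1:]:
        rgb, depth = composite_pair(rgb, depth, next_rgb, next_depth)
    return rgb

# Two hypothetical rendering processes, each covering part of the scene.
h, w = 4, 4
img0 = (np.full((h, w, 3), 200, np.uint8), np.full((h, w), 0.3))  # near geometry
img1 = (np.full((h, w, 3), 50, np.uint8), np.full((h, w), 0.7))   # far geometry
final = composite_all([img0, img1])
print(final[0, 0])  # pixel taken from the nearer image: [200 200 200]

In a real sort-last renderer this per-pixel depth test is performed across many processes, typically via a compositing tree or an MPI reduction, so the cost of producing the final image grows with image size rather than with the size of the dataset.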

A motivation for this project is that visualizing large datasets has long been a bottleneck in applications where data acquired from scientific equipment must be validated at an early stage. Such validation allows the correctness of methods (such as the set-up of a physical experiment) to be determined before further computational or imaging resources are spent. This is particularly important to our target end users in Materials Science. Despite advances in GPU hardware, researchers can produce datasets that are too large to visualize on modern graphics workstations and clusters. An alternative to multi-GPU systems is to use the large memory and core counts offered by supercomputers such as HECToR, even when no GPU hardware is available.

