The existing visualization code is an MPI application that comprises a number of executables. The main executable is express, which provides the AVS network editor, the module user interface and the visualization window; it is always rank 0 in the MPI job. The other components are two types of MPI executable, namely the pstnode and mpunode executables, described below.
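As a rough sketch (not the actual AVS/Express source), the rank convention means that each executable in the MPMD job can confirm its role from its rank in the shared MPI_COMM_WORLD; the express executable, launched first, would see rank 0:

```c
#include <assert.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* By convention express occupies rank 0 of the shared MPI_COMM_WORLD;
       the pstnode and mpunode executables occupy the remaining ranks. */
    assert(rank == 0);

    /* ... run the network editor, module UI and visualization window ... */

    MPI_Finalize();
    return 0;
}
```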
Figure shows the scheme now described: the pstnode processes execute the parallel implementations of the modules in the visualization network. A key concept is that a dataset is never accessed directly by the express process. Instead, the dataset is decomposed into a number of smaller sub-domains,
one for each pstnode process. The express process will instruct the
pstnode processes on how they should process their sub-domain of data. For
example, a min/max filtering module may pass its parameters (minimum and maximum
values) to the pstnode processes. They will filter their sub-domains of data
accordingly. A small amount of information may need to be returned to the
express process from each pstnode process so that it can update the user
interface. For example, a reduction on the actual minimum and maximum data values present in each sub-domain can be used to display the dataset's overall minimum and maximum values in the user interface.
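A minimal sketch of this exchange, assuming a flat array of floats per sub-domain and hypothetical names throughout (this is not the AVS/Express module API), might broadcast the parameters from rank 0 and reduce the observed range back to it:

```c
#include <float.h>
#include <mpi.h>

typedef struct { float min_cut, max_cut; } FilterParams;  /* hypothetical */

void run_minmax_filter(float *sub, int n, MPI_Comm comm)
{
    FilterParams p;                        /* rank 0 fills this from the UI */
    MPI_Bcast(&p, 2, MPI_FLOAT, 0, comm);  /* same parameters for every rank */

    /* Each pstnode filters its own sub-domain and tracks the actual range;
       express (rank 0) owns no data, so n == 0 and the sentinels stand. */
    float lmin = FLT_MAX, lmax = -FLT_MAX;
    for (int i = 0; i < n; i++) {
        if (sub[i] < p.min_cut || sub[i] > p.max_cut)
            continue;                      /* filtered out of the sub-domain */
        if (sub[i] < lmin) lmin = sub[i];
        if (sub[i] > lmax) lmax = sub[i];
    }

    float gmin, gmax;
    MPI_Reduce(&lmin, &gmin, 1, MPI_FLOAT, MPI_MIN, 0, comm);
    MPI_Reduce(&lmax, &gmax, 1, MPI_FLOAT, MPI_MAX, 0, comm);
    /* rank 0 can now show [gmin, gmax] in the user interface */
}
```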
Similarly, a parallel isosurface module will receive its parameters from the user interface (e.g., the isosurface level to compute), but the computation will take place within the pstnode processes on their current sub-domain of data.
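The same broadcast-then-compute pattern would apply here; a sketch, with a hypothetical local extraction routine standing in for the real module code:

```c
#include <mpi.h>

/* hypothetical: extracts triangles from the local sub-domain only */
extern int extract_isosurface(const float *sub, int nx, int ny, int nz,
                              float level, float **tri_verts);

void run_isosurface(const float *sub, int nx, int ny, int nz, MPI_Comm comm)
{
    float level;                        /* rank 0 fills this from the UI */
    MPI_Bcast(&level, 1, MPI_FLOAT, 0, comm);

    float *verts = 0;
    int nfloats = extract_isosurface(sub, nx, ny, nz, level, &verts);
    /* the resulting triangles are later shipped to the assigned mpunode */
    (void)nfloats;
}
```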
The sub-domain of data within a pstnode process remains fixed. It is this decomposition of the data, and its encapsulation within the pstnode processes, that allows AVS/Express to work with large datasets. At no point should sub-domains be gathered and recomposed in the main express process: doing so would almost certainly exceed the memory resources of the node on which the express process is running.
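To make the fixed decomposition concrete, one possible scheme (an assumption; the actual AVS/Express decomposition may differ) is a 1-D slab split along the slowest axis, one slab per pstnode:

```c
/* Half-open slab extent [z0, z1) for a given pstnode; the mapping never
   changes, so each pstnode keeps the same sub-domain for the whole session. */
void slab_extent(int nz, int npstnodes, int pst_index, int *z0, int *z1)
{
    int base = nz / npstnodes, rem = nz % npstnodes;
    *z0 = pst_index * base + (pst_index < rem ? pst_index : rem);
    *z1 = *z0 + base + (pst_index < rem ? 1 : 0);
}
```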
The visualization network will specify which modules should produce renderable geometry. Any geometry produced by the pstnode processes will be passed directly to an assigned mpunode process.
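The hand-off is domain-specific, so point-to-point messages fit; a sketch of the sending side, with the pairing rule and message layout as assumptions (AVS/Express's real protocol is internal):

```c
#include <mpi.h>

#define TAG_GEOM_COUNT 101   /* hypothetical message tags */
#define TAG_GEOM_VERTS 102

/* pstnode side: ship the triangle vertices it produced to its mpunode. */
void send_geometry(float *verts, int nfloats, int mpunode_rank, MPI_Comm comm)
{
    MPI_Send(&nfloats, 1, MPI_INT, mpunode_rank, TAG_GEOM_COUNT, comm);
    MPI_Send(verts, nfloats, MPI_FLOAT, mpunode_rank, TAG_GEOM_VERTS, comm);
}
```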
The mpunode MPI processes execute the AVS/Express rendering methods in parallel. They receive global scene graph data from express and insert into the scene graph the geometry received from their assigned pstnode. Hence each mpunode process renders only a fraction of the total geometry in the scene.
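The receiving side of the same hypothetical hand-off would collect the count and then the vertices from its paired pstnode before splicing them into its local copy of the scene graph:

```c
#include <mpi.h>
#include <stdlib.h>

#define TAG_GEOM_COUNT 101   /* must match the hypothetical sender tags */
#define TAG_GEOM_VERTS 102

/* mpunode side: take delivery of the domain-specific geometry. */
float *recv_geometry(int pstnode_rank, MPI_Comm comm, int *nfloats)
{
    MPI_Recv(nfloats, 1, MPI_INT, pstnode_rank, TAG_GEOM_COUNT, comm,
             MPI_STATUS_IGNORE);
    float *verts = malloc(*nfloats * sizeof(float));
    MPI_Recv(verts, *nfloats, MPI_FLOAT, pstnode_rank, TAG_GEOM_VERTS, comm,
             MPI_STATUS_IGNORE);
    return verts;   /* inserted into the local scene graph for rendering */
}
```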
The images produced by the mpunode processes are composited together (using either depth testing or alpha blending) and the final image is sent back to the express process for display in the user interface.
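For opaque geometry the merge step reduces to a per-pixel depth test; a minimal sketch, assuming each mpunode contributes an RGBA colour buffer plus a float depth buffer (the real compositor, discussed next, is considerably more involved):

```c
typedef struct { unsigned char r, g, b, a; } Pixel;  /* hypothetical */

/* Merge one partial image into the accumulated one: nearer fragment wins. */
void depth_composite(Pixel *dst, float *dst_z,
                     const Pixel *src, const float *src_z, int npix)
{
    for (int i = 0; i < npix; i++) {
        if (src_z[i] < dst_z[i]) {
            dst[i] = src[i];
            dst_z[i] = src_z[i];
        }
    }
}
```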
All communication between the various MPI processes is performed using MPI point-to-point or collective communication facilities, depending on whether the message is domain-specific or common across all domains. However, at this stage the compositor uses TCP/IP communication, as no MPI layer exists in the open-source compositing library used by AVS (see section ).