Parallel Implementation with PETSc

Following further discussions with NAG CSE, it was suggested that the PETSc library [3] would offer significant benefits, both for performance and for future development. After reviewing the functionality PETSc provides, it was agreed to adopt the library.

PETSc provides a distributed parallel layer with support for the data types MicroMag requires, such as vectors and sparse matrices, together with methods for manipulating them. Critically, the SUNDIALS CVODE library [1] can be called from PETSc. There is also extensive support for the Krylov subspace solvers that are required to determine the RHS. Moreover, the finite element partitioner METIS, and its parallel variant ParMETIS, can be called from PETSc. Future development work could therefore allow the partitioning of the unstructured grid to be done at run time rather than in a preprocessing phase, as originally planned in WP 2.1-2.2.
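
As a rough illustration of the programming model, the sketch below assembles a distributed PETSc vector and sparse matrix and solves a linear system with a Krylov method. It is a minimal example rather than MicroMag code: the global size n and the tridiagonal test matrix are placeholders, error checking is omitted for brevity, and the three-argument form of KSPSetOperators follows the C API of PETSc 3.5 and later.

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Vec      x, b;
      Mat      A;
      KSP      ksp;
      PetscInt i, rstart, rend, n = 1000;   /* global problem size (placeholder) */

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* Distributed vector: PETSc chooses the local size on each process */
      VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, n, &x);
      VecDuplicate(x, &b);
      VecSet(b, 1.0);

      /* Distributed sparse matrix in AIJ (CSR) format, with preallocation */
      MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n,
                   3, NULL, 2, NULL, &A);

      /* Each process assembles only the rows it owns (a simple tridiagonal
         stencil here, standing in for the real operator) */
      MatGetOwnershipRange(A, &rstart, &rend);
      for (i = rstart; i < rend; i++) {
        PetscScalar diag = 2.0, off = -1.0;
        PetscInt    j;
        MatSetValues(A, 1, &i, 1, &i, &diag, INSERT_VALUES);
        if (i > 0)     { j = i - 1; MatSetValues(A, 1, &i, 1, &j, &off, INSERT_VALUES); }
        if (i < n - 1) { j = i + 1; MatSetValues(A, 1, &i, 1, &j, &off, INSERT_VALUES); }
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      /* Krylov solve of A x = b; the method, preconditioner and tolerances
         can all be selected at run time, e.g. -ksp_type gmres -pc_type jacobi */
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A);
      KSPSetFromOptions(ksp);
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp);
      MatDestroy(&A);
      VecDestroy(&x);
      VecDestroy(&b);
      PetscFinalize();
      return 0;
    }

CVODE itself is reached through PETSc's time-stepping component TS, which wraps SUNDIALS when PETSc is built with it. The sketch below, under the same caveats, integrates a placeholder system dx/dt = -x using the "sundials" TS type; the real MicroMag right-hand side would replace RHSFunction, and TSSetMaxTime follows the current API (older releases use TSSetDuration).

    #include <petscts.h>

    /* Placeholder RHS: f(t,x) = -x, standing in for the MicroMag RHS */
    static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec x, Vec f, void *ctx)
    {
      VecCopy(x, f);
      VecScale(f, -1.0);
      return 0;
    }

    int main(int argc, char **argv)
    {
      Vec x;
      TS  ts;

      PetscInitialize(&argc, &argv, NULL, NULL);
      VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 1000, &x);
      VecSet(x, 1.0);                       /* placeholder initial condition */

      TSCreate(PETSC_COMM_WORLD, &ts);
      TSSetProblemType(ts, TS_NONLINEAR);
      TSSetType(ts, TSSUNDIALS);            /* CVODE, via the SUNDIALS wrapper;
                                               requires a PETSc build configured
                                               with SUNDIALS */
      TSSetRHSFunction(ts, NULL, RHSFunction, NULL);
      TSSetTimeStep(ts, 0.01);
      TSSetMaxTime(ts, 1.0);                /* placeholder integration interval */
      TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP);
      TSSetFromOptions(ts);
      TSSolve(ts, x);

      TSDestroy(&ts);
      VecDestroy(&x);
      PetscFinalize();
      return 0;
    }

ParMETIS is reached in the same spirit through PETSc's MatPartitioning interface, which is what would make run-time partitioning of the unstructured grid feasible.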

At this stage, approximately half of the development time had been spent performing three tasks:

  1. Re-implementing the calls to CVODE, in serial, in the refactored code.
  2. Attempting to implement the semi-parallel strategy, which revealed its flaws and led to its abandonment.
  3. Evaluating the functionality of the PETSc library and deciding on a new strategy.

With less than three months of development effort remaining in which to re-write the code to use the PETSc distributed data structures and calls to the various solvers, there was an uncontrolled risk that there would not be sufficient time. The risk was uncontrolled in the sense that no time was left for contingency if any unforeseen problems arose during development. However, the benefits for future development, including performance, portability and additional functionality such as the use of ParMETIS, were felt to be significant enough that it was decided to adopt PETSc despite the small amount of time remaining.


