The HECToR Service is now closed and has been superseded by ARCHER.

Massively Parallel Computing for Incompressible Smoothed Particle Hydrodynamics (ISPH)

This Distributed Computational Science and Engineering (dCSE) project concerned the optimisation of an efficient fluid solver that uses the incompressible smoothed particle hydrodynamics (ISPH) method for violent free-surface flows impacting offshore and coastal structures. Previous ISPH benchmarks showed that simulation costs are dominated by the neighbour-searching algorithm and the pressure Poisson solver. For parallelisation, a Hilbert space-filling curve and the Zoltan package are used to perform domain decomposition, where preservation of spatial locality is critical for the performance of neighbour searching. The project aimed to develop the ISPH software into an attractive engineering tool for complex full-scale engineering problems, capable of handling more than 100 million particles.

The overall aims of this project were:

  • Optimise the particle mapping functions for the 2D and 3D versions of ISPH by using MPI_PUT instead of MPI_GATHERV.
  • Optimise the neighbour-list search module, first by updating the original data structures of the search kernel with a preconditioned dynamic vector approach, and second by adding a new search kernel based on a linked-list approach.
  • Improve the pressure Poisson solver by making better use of PETSc sparse matrix formats and matrix renumbering.

The individual achievements of the project are summarised below:

  • The mapping functions were optimised by implementing MPI one-sided communications, with the Zoltan distributed directory utility.
  • For simulations with evenly distributed particles, less than 20% of the total time is now consumed by the mapping functions.
  • A preconditioned dynamic vector and linked-list approach was developed for the nearest-neighbour list search. This option, which may be selected at run time, uses optimised smoothing kernel functions and recalculated particle distances.
  • This gives better cache performance and a smaller memory footprint. The search is also around 10 times faster for small numbers of particles, and up to 3 times faster for larger numbers of particles.
  • Use of PETSc for the pressure Poisson solver was improved. The CSR format is now used and the particles are renumbered before assembling the global matrix. This has reduced both the memory footprint and the number of global communications.
  • These developments have been incorporated back into the main ISPH source code.
  • This work was presented at the 8th International SPHERIC Workshop, Trondheim, Norway, June 4-6, 2013 and a conference paper was also published.
  • The work has improved the ISPH code performance dramatically. The code now scales well to 8,000 cores with up to 100 million particles.

Please see the PDF for a report which summarises this project.