*This software development project will serve two separate purposes: it will enable more efficient use of I-O within the unstructured Cfoam-CABARET code, and it will improve the scalability of the structured PEQUOD code. Here, the terms unstructured/structured refer to the spatial decomposition used within each model.
*

*The main functional difference between Cfoam-CABARET and PEQUOD is that PEQUOD models incompressible flow and therefore requires an additional parallel elliptic PDE (Helmholtz) solver for the pressure. The data decomposition in PEQUOD is intrinsically far easier to work with than that in Cfoam-CABARET, and the relationships between neighbouring cells and partitions are generally more complex to manage in the latter code. The consequence is that considerable effort is needed to manage and update the data associated with unstructured grid layouts, since the numerical methods determine new values from other values in their physical proximity.
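The contrast can be illustrated with a minimal sketch (names and data are illustrative, not taken from either code base): on a structured grid a cell's neighbours follow from pure index arithmetic, whereas an unstructured grid must carry an explicit connectivity table, and it is that stored table which makes halo relationships harder to manage.

```python
def structured_neighbours(i, j, ni, nj):
    """Neighbours of cell (i, j) on an ni x nj structured grid:
    pure index arithmetic, nothing needs to be stored."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(p, q) for p, q in cand if 0 <= p < ni and 0 <= q < nj]

def unstructured_neighbours(cell, connectivity):
    """On an unstructured grid the same query needs an explicit,
    pre-built adjacency table (cell id -> list of neighbour ids)."""
    return connectivity[cell]

# structured: derived on the fly
corner = structured_neighbours(0, 0, 3, 3)        # [(1, 0), (0, 1)]

# unstructured: the adjacency must have been stored when the grid was read
connectivity = {0: [3, 1]}                        # hypothetical table entry
same_corner = unstructured_neighbours(0, connectivity)
```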
*

* The original objectives for this project were as follows:
WP1: Enhanced Parallel I-O within Cfoam-CABARET
Update the I-O in Cfoam-CABARET. The original method used single-process I-O, whereby an unstructured grid was read in by the master process and then broadcast to all other processes. A similar approach was used for output, where checkpoint/Tecplot360 binary files would be gathered and then written by the master process at intervals specified by the user. An MPI-IO model will be implemented such that a cell grid can be read in less than 5 minutes for a typical simulation with at least 1000 cores.
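The essential bookkeeping behind such an MPI-IO read is that each rank computes its own file offset, so that all ranks can issue a single collective read (e.g. MPI_File_read_at_all) at their own offsets instead of the master broadcasting the whole grid. A minimal sketch of that offset calculation, with illustrative names and assuming fixed-size cell records:

```python
def io_extent(nrecords, record_bytes, rank, nprocs, header_bytes=0):
    """Byte offset and byte count of this rank's contiguous slice of a
    file holding `nrecords` fixed-size cell records (illustrative names,
    not the Cfoam-CABARET file layout)."""
    base, rem = divmod(nrecords, nprocs)
    nlocal = base + (1 if rank < rem else 0)     # spread the remainder
    first = rank * base + min(rank, rem)         # records owned by lower ranks
    return header_bytes + first * record_bytes, nlocal * record_bytes
```

Each rank would then pass its (offset, count) pair to one collective MPI-IO call, replacing the gather/broadcast pattern of the original single-process scheme.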
WP2: Parallelised quasi-geostrophic CABARET code
Develop a scalable multi-layer quasi-geostrophic solver and benchmark it on a double-gyre quasi-geostrophic model. This model will include 3 layers on a 1025 x 1025 uniform grid, with forcing and dissipative processes: a wind-forcing parametrisation and the parameters of lateral viscosity and bottom friction. This will be achieved in the following steps:
WP2.1: Implementation of a structured spatial decomposition for the computational grid
The global domain will be partitioned such that each process is assigned a sub-section.
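For a structured grid this partitioning reduces to splitting each direction as evenly as possible across a 2D process grid. A minimal sketch of the index bookkeeping (illustrative only; the actual PEQUOD decomposition may differ):

```python
def block_range(n, nblocks, b):
    """Global index range [lo, hi) of block b when n grid points are
    split as evenly as possible into nblocks blocks."""
    base, rem = divmod(n, nblocks)
    lo = b * base + min(b, rem)
    return lo, lo + base + (1 if b < rem else 0)

def subdomain(nx, ny, px, py, rank):
    """(x-range, y-range) owned by `rank` on a px x py process grid,
    assuming a row-major rank -> process-coordinate mapping."""
    ix, iy = rank % px, rank // px
    return block_range(nx, px, ix), block_range(ny, py, iy)
```

For the 1025 x 1025 grid on a 4 x 4 process grid, for example, each process owns a 256- or 257-point range in each direction.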
WP2.2: Implement parallel structured CABARET (for incompressible flow)
Develop a quasi-geostrophic parallel solver by implementing asynchronous MPI calls at the conservative predictor step, where cell-centred conservative variables are interpolated to the cell vertices, and at the extrapolation step, where the local cell-based characteristic splitting for the updated conservative and flux variables is performed. This code will be tested against a simple "classical" parallel implementation of the central leapfrog and central finite difference schemes, for which the results are well known.
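As a reminder of what the reference scheme looks like, here is a minimal serial sketch of the central leapfrog scheme for linear advection on a periodic grid (illustrative only; the project's parallel reference implementation is not reproduced here):

```python
def leapfrog_advection(u0, c, dx, dt, nsteps):
    """Central leapfrog for u_t + c u_x = 0 on a periodic grid.
    Stability requires |c*dt/dx| <= 1; nsteps >= 1 is assumed."""
    n = len(u0)
    nu = c * dt / dx                 # Courant number
    um = list(u0)                    # solution at step m-1
    # first step: forward Euler in time, centred in space
    u = [um[i] - 0.5 * nu * (um[(i + 1) % n] - um[i - 1]) for i in range(n)]
    for _ in range(nsteps - 1):      # leapfrog: centred in time and space
        up = [um[i] - nu * (u[(i + 1) % n] - u[i - 1]) for i in range(n)]
        um, u = u, up
    return u
```

The centred differences conserve the discrete sum of u exactly on a periodic domain, which is one of the well-known properties against which the parallel CABARET results can be checked.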
WP3: Parallelisation of the Helmholtz solver
Develop a scalable layered quasi-geostrophic solver, including a performant standard elliptic solver that inverts potential vorticity in a closed domain or zonally periodic channel to obtain the velocity streamfunction. This method uses Fourier analysis and cyclic reduction, direction by direction. The code will be benchmarked on the double-gyre quasi-geostrophic model.
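After the Fourier transform in one direction, the solve in the other direction reduces to independent tridiagonal systems, which cyclic reduction handles by repeatedly eliminating every other unknown. A minimal serial sketch of cyclic reduction for one tridiagonal system (illustrative only, assuming a system size of 2**k - 1; the project's parallel solver is not reproduced here):

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal,
    d: right-hand side) by cyclic reduction.  Requires len(b) == 2**k - 1,
    with a[0] == 0 and c[-1] == 0."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    stride = 1
    while 2 * stride <= n:                         # forward elimination
        for i in range(2 * stride - 1, n, 2 * stride):
            al = a[i] / b[i - stride]              # eliminate left neighbour
            b[i] -= al * c[i - stride]
            d[i] -= al * d[i - stride]
            a[i] = -al * a[i - stride]
            if i + stride < n:                     # eliminate right neighbour
                ar = c[i] / b[i + stride]
                b[i] -= ar * a[i + stride]
                d[i] -= ar * d[i + stride]
                c[i] = -ar * c[i + stride]
        stride *= 2
    x = [0.0] * n                                  # back substitution
    while stride >= 1:
        for i in range(stride - 1, n, 2 * stride):
            xl = x[i - stride] if i - stride >= 0 else 0.0
            xr = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        stride //= 2
    return x
```

Because each level's eliminations are independent, the method exposes parallelism within each grid line, which is what makes the direction-by-direction approach attractive for a distributed solver.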
*

* The overall aim of WP2 and WP3 is to enable models with a horizontal grid interval of 1km, which with the CABARET advection scheme implies that turbulence scales of 6km and longer will be represented quite accurately. Historically, wind-driven ocean circulation models with 1km resolution have never been used for long-time integrations, even with poor vertical resolution. The study of important nonlinear interactions of relatively high-order vertical modes and their large-scale consequences, using 10 layers, will also be enabled. Finally, runs on 2km, 4km, and 8km grids will become possible. The aim of WP1 is to reduce the time spent performing I-O by a factor of 5 for a typical simulation.
*

Phil Ridley 2012-10-01