The CC Block

\includegraphics[width=18cm]{water.eps}
Figure 4: A schematic of how the CC block is inserted into the PETSc Mat object. Note that only the master process inserts the CC block elements into the PETSc Mat object. Note also that PETSc only accepts the upper-triangular part of the symmetric matrix and so row and column indices are interchanged upon insertion.

During the PETSc assembly stage, CC elements may be communicated between processes so that each element ends up on the MPI process that owns it for the diagonalization stage. This message passing is carried out implicitly by PETSc. Because the CC block is small, and because the master process usually ends up being assigned the whole CC block for the diagonalization, this is not a communication-intensive stage.

From an algorithmic viewpoint the construction of the CC block is the most complicated phase but, owing to its small dimension, the least computationally demanding. The construction is complicated because the block is built up in terms of so-called ``classes'', with what essentially amounts to a separate algorithm for each class. This makes parallelizing the construction of the CC block non-trivial. However, because the serial compute time for the CC block is minimal compared with that of the BB block, the approach taken here is for only a single MPI process to insert the CC elements into the PETSc Mat object. Although only one process performs the insertion, every process constructs the CC block in its entirety. The reason is that arrays filled during the CC block construction are needed by all processes involved in the construction of the BC block, i.e., each MPI process involved in the construction of the BC block must have these arrays. In summary, the construction of the CC block is not partitioned among MPI processes; it is instead carried out in full on every process, with a single process subsequently sorting the CC arrays and inserting the sorted elements into the PETSc Mat object. Because PETSc accepts only the upper-triangular part of a symmetric matrix, indices that refer to column numbers in scatci_serial now refer to row numbers, and indices that refer to row numbers now refer to column numbers. This amounts to a simple transpose of the sparse matrix. A schematic of how the CC elements are inserted into the PETSc Mat object can be seen in figure 4.

Paul Roberts 2012-06-01