There are many pre-processing routines, such as blockMesh, which must be run in serial. However, HECToR's `serial' queue employs shared nodes and, as such, users receive only a share of a node's total memory. During this work it was found that, for the larger simulations, the memory available in the `serial' queue was insufficient. When this occurred, we instead ran the serial job in a small parallel queue, thereby gaining access to a node's full memory. This is described in more detail, and an example batch file is presented, in Section 4.3.
However, some of the benchmark cases were simply too large to pre-process even on HECToR. This issue was circumvented by running OpenFOAM on another platform, namely Ness at EPCC. This platform has 2 nodes of 32GB each, where each node has 16 AMD Opterons; effectively, this is a fat HECToR node. Codes such as blockMesh, mapFields and stitchMesh were successfully run serially in a parallel queue of 8 cores, thereby utilising 16GB of memory.
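To illustrate the workaround described above, the following is a minimal sketch of a PBS batch script that requests a small parallel allocation but launches only a single serial instance of a pre-processing tool, so that the job can draw on the whole allocation's memory. The queue size, walltime, budget code and module name are placeholders, not values from this work; the actual batch file used is presented in Section 4.3.

```shell
#!/bin/bash --login
# Hypothetical PBS script: run a serial OpenFOAM pre-processing tool
# inside a small parallel allocation to obtain more memory.
#PBS -N blockMesh_pre
#PBS -l mppwidth=32        # placeholder: reserve a full node's cores
#PBS -l walltime=01:00:00  # placeholder walltime
#PBS -A z01                # placeholder budget code

cd $PBS_O_WORKDIR
module load openfoam       # placeholder module name

# Launch a single instance of the serial tool; the remaining
# reserved cores idle, but their memory share is available to it.
aprun -n 1 blockMesh
```

The same idea applies on Ness: requesting 8 of a node's 16 cores while running the tool serially makes roughly half of the 32GB node (16GB) available to the single process.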