To run the tutorials, copy them from the package account into a new directory named run within your user directory, <username>-1.6, using the following three commands:
cd $WM_PROJECT_USER_DIR
mkdir -p $FOAM_RUN
cp -r $FOAM_TUTORIALS $FOAM_RUN

where the third command may take a few minutes to execute.
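Once the copy has finished, a quick way to confirm that the tutorials are in place is to list the contents of the run directory:

ls $FOAM_RUN/tutorials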
The following batch script tests the installation of the tutorials using HECToR's `serial' queue.
#!/bin/bash --login
#PBS -q serial
#PBS -N testInstall
#PBS -l walltime=01:00:00
#PBS -A z01

. /opt/modules/3.1.6/init/bash
module swap PrgEnv-pgi PrgEnv-gnu
module swap gcc gcc/4.3.3
module swap xt-mpt xt-mpt/3.2.0

source /work/z01/z01/gavin/OpenFOAM/OpenFOAM-1.6/etc/bashrc
export LD_LIBRARY_PATH=$WM_PROJECT_DIR/mylib:$LD_LIBRARY_PATH

cd $FOAM_RUN/tutorials
Allclean
Alltest

NB All serial queues employ dual-core nodes; thus, if users employ the quad-core version of OpenFOAM, the results may not be correct.
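Assuming the script above is saved as, say, testInstall.pbs (the file name here is only illustrative), it can be submitted and monitored with the standard PBS commands:

qsub testInstall.pbs        # submit the test job to the serial queue
qstat -u $USER              # check the job's status in the queue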
The following batch script runs the final part of the Dam Break tutorial, a parallel run of interFoam on 8 cores, for user gavin. (The preparatory stages, namely running blockMesh, setFields and decomposePar, must be run before interFoam; a sketch of these steps is given after the script below.)
#!/bin/bash --login
#PBS -l mppwidth=8
#PBS -l mppnppn=4
#PBS -N dam_tutorial
#PBS -l walltime=01:00:00
#PBS -A z01

export NSLOTS=`qstat -f $PBS_JOBID | awk '/mppwidth/ {print $3}'`
export NTASK=`qstat -f $PBS_JOBID | awk '/mppnppn/ {print $3}'`

. /opt/modules/3.1.6/init/bash
module swap PrgEnv-pgi PrgEnv-gnu
module swap gcc gcc/4.3.3
module swap xt-mpt xt-mpt/3.2.0

source /work/z01/z01/gavin/OpenFOAM/OpenFOAM-1.6/etc/bashrc
export LD_LIBRARY_PATH=$WM_PROJECT_DIR/mylib:$LD_LIBRARY_PATH

cd $FOAM_RUN/tutorials/multiphase/interFoam/laminar/damBreak

export MPICH_PTL_EAGER_LONG=1
aprun -n $NSLOTS -N $NTASK interFoam -parallel
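Before submitting the script above, the preparatory stages mentioned earlier must be completed from the case directory. A minimal sketch, assuming the standard damBreak case layout and that system/decomposeParDict already requests 8 subdomains:

cd $FOAM_RUN/tutorials/multiphase/interFoam/laminar/damBreak
blockMesh        # generate the mesh
setFields        # initialise the phase-fraction field for the water column
decomposePar     # split the case into 8 subdomains for the parallel run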
NB All parallel jobs are run on quad-core nodes, hence we set #PBS -l mppnppn=4. Users can run either the dual- or quad-core versions of OpenFOAM in parallel, but the dual-core version may not perform as well.
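For example, to run the same case on 16 cores of the quad-core nodes, only the resource request needs to change; a sketch of the modified header (the NSLOTS and NTASK lines in the script pick up the new values automatically):

#PBS -l mppwidth=16    # total number of cores (MPI tasks)
#PBS -l mppnppn=4      # cores used per quad-core node

Note that system/decomposeParDict must also be updated to request the matching number of subdomains before decomposePar is run.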
Setting the environment variable MPICH_PTL_EAGER_LONG=1 was found to speed up execution by around 7% for large numbers of cores, and had no adverse effect on performance for small numbers of cores.