Profiling of OpenFOAM solvers
Profiling your HPC applications can be of great use if you want to investigate their performance. This is normally not done routinely when using “off-the-shelf” software. However, if you know that you are going to spend a lot of resources and time on a specific code, it may be worth investigating which settings give the best performance and “value for money”, so that the assigned CPU hours are used efficiently.
This small guide shows the procedure for linking the IPM (Integrated Performance Monitoring) profiler with OpenFOAM; the procedure is identical if you want to link, for example, Darshan to OpenFOAM instead.
Using LD_PRELOAD (does not work at the moment)
The quick and dirty way to link IPM and OpenFOAM together is to set the `LD_PRELOAD` environment variable to the path of IPM’s dynamic library. This tells the dynamic linker to load IPM into OpenFOAM at execution time, without any need for recompiling.
`LD_PRELOAD` will in fact be set as soon as you load the IPM module on Vilje, so the only thing you need to do is to load the package in your job script right before or after you load the OpenFOAM package.
Note: This is BROKEN in combination with the MPT MPI versions currently installed on Vilje (as of August 2012), and setting `LD_PRELOAD` will cause MPT to crash with the following error message:

```
MPI: MPI_COMM_WORLD rank 104 has terminated without calling MPI_Finalize()
MPI: aborting job
MPI: Received signal 9
```
This is not an OpenFOAM-specific issue, and applies to all MPI applications in combination with MPT-MPI and LD_PRELOAD.
Recompiling the OpenFOAM solver
Since the quick and dirty way described above does not work, we have to find another way to tell the system linker that OpenFOAM should link to IPM. This can be done by recompiling the solver we are interested in profiling.
The following guide is based on the guides found at the unofficial OpenFOAM Wiki and the OpenFOAM documentation. We are using the widely used `pisoFoam` solver as an example, but this guide should apply to all solvers and other applications (such as `snappyHexMesh`). We are going to use the most recent OpenFOAM version, 2.1.1 at the time of writing, but again, this should work with other versions as well.
- Load OpenFOAM, MPT, IPM and the compiler from Intel:
```
module load openfoam
module load mpt
module load intelcomp
module load ipm
unset LD_PRELOAD
```
`LD_PRELOAD` is unset to prevent the system linker from linking every application we run against IPM.
- Copy the solver sources from the current location to a new location in your home directory:
```
mkdir -p $WM_PROJECT_USER_DIR/applications/solvers/incompressible
cp -r $FOAM_APP/solvers/incompressible/pisoFoam $WM_PROJECT_USER_DIR/applications/solvers/incompressible/pisoFoamIPM
```
We call the copy `pisoFoamIPM` to distinguish it from the non-IPM version.
- You should rename the solver source `pisoFoam.C` to `pisoFoamIPM.C` (`mv pisoFoam.C pisoFoamIPM.C`), and the old dependency file `pisoFoam.dep` can be deleted (`rm pisoFoam.dep`).
- Enter the `Make` directory, and delete any existing build subdirectories.
- Change the contents of the `files` file to be:

Make/files
```
pisoFoamIPM.C

EXE = $(FOAM_USER_APPBIN)/pisoFoamIPM
```
- Now here comes the real deal: open the `options` file and specify that we shall link against IPM. Keep the solver’s existing `EXE_INC` and `EXE_LIBS` entries (indicated by `...` below) and append the IPM flags:

Make/options
```
EXE_INC = \
    ...

EXE_LIBS = \
    ...
    -L$(IPM_LIBPATH) \
    -lipm
```

If you want to link to something other than IPM, for example Darshan, you must specify the path to the Darshan shared library after the capital `L` (that is, replace `$(IPM_LIBPATH)`) and replace `-lipm` with the name of the Darshan library.
- Go one level up from the `Make` directory, so that you are in the `pisoFoamIPM` directory. We are now ready to compile our solver! This is done by simply running the `wmake` command, and should be fairly quick.
- You can check whether the solver has compiled properly by running `pisoFoamIPM -help`. This command should print some usage output; if it fails, something is wrong.
Now you must remember to also load IPM every time you run `pisoFoamIPM`, and to unset the `LD_PRELOAD` variable to avoid loading IPM twice (or crashing MPT as described above).
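Putting these run-time requirements together, the relevant fragment of a job script might look as follows. This is a sketch only: the launcher name `mpiexec_mpt` and the exact module names are assumptions based on this guide, so adapt them to your setup:

```shell
# Job-script fragment (sketch) for running the instrumented solver
module load openfoam
module load mpt
module load ipm        # libipm must be available at run time
unset LD_PRELOAD       # the ipm module sets it; unset to avoid the MPT crash above
mpiexec_mpt pisoFoamIPM -parallel
```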
Using MPI_Pcontrol to analyze performance
If you are working on improving the performance of a certain piece of a solver, and you want to profile that part in particular, IPM can do that as well. The small (but helpful) IPM user guide shows that we can insert `MPI_Pcontrol` calls with the parameter `1` (to indicate the start of a section) and `-1` (to indicate the end of a section) to let IPM distinguish between the different code sections.
The only problem is that the OpenFOAM solvers do not link to MPI directly, since the MPI implementation is hidden inside the `Pstream` class. Therefore, we must edit the dependencies of our newly created solver. This is done by adding two lines at the top of the `options` file in the `Make` directory.
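The two lines in question are presumably the wmake MPI rule includes that the unofficial OpenFOAM Wiki uses for this purpose. Treat the exact paths and variable names below as an assumption, and check them against your OpenFOAM installation:

```make
# Top of Make/options — pull in wmake's MPI compile/link flags (assumed names)
sinclude $(GENERAL_RULES)/mplib$(WM_MPLIB)
sinclude $(RULES)/mplib$(WM_MPLIB)
```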
Then the time has come to edit the solver source `pisoFoamIPM.C` itself. The first thing you need to do is add `#include "mpi.h"` at the top of the file. Then you can insert `MPI_Pcontrol` calls wherever it suits you; in our editing we created separate regions for the momentum equations, the pressure correctors and the turbulence model (you might of course create different regions).
After this is finished, re-compile the solver with the `wmake` command.
When you now run the `pisoFoamIPM` application, you will get information about how much time the solver spent inside the individual regions, i.e. how long it took to solve the momentum equations, run the pressure correctors and execute the turbulence model. Communication and MPI statistics are also grouped and displayed on a per-region basis, in addition to general statistics for the entire solver. Note that the pieces of code in our solver that are not part of any explicit region are handled together as if they were a separate region of their own.