Running STARCCM+ using OpenMPI on Ubuntu with SLURM and Infiniband

This is a walkthrough of my work running a proprietary computational fluid dynamics code, StarCCM+, on Ubuntu 18.04 using the snap version of SLURM with OpenMPI 4.0.4 over Infiniband.

You can use this to perform scaling studies, track down issues, and optimize performance, or adapt it as you like. Much of this will work on other OSes too.

This is the workbench used here:

* Hardware: 2 hosts with 2×20 cores and 187GB RAM each.
* Infiniband: Mellanox MT28908 Family [ConnectX-6]
* OS: Linux 4.15.0-109-generic (x86_64), Ubuntu 18.04.4
* SLURM: 20.04 (snap)
* OpenMPI: 4.0.4 (ucx, openib)
* StarCCM+: STAR-CCM+14.06.012
* A reference model which is small enough for your computers and large enough to run over 2 nodes.

Let's get started.

Modify ulimits on all nodes.

This is done by editing /etc/security/limits.d/30-slurm.conf

* soft nofile  65000
* hard nofile  65000
* soft memlock unlimited
* hard memlock unlimited
* soft stack unlimited
* hard stack unlimited

Modify the slurm systemd unit startup files to make the ulimits permanent for the slurmd processes.

$ sudo systemctl edit snap.slurm.slurmd.service
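In the editor that opens, a drop-in override raising the limits for slurmd can look something like this (a sketch; the values mirror the limits.conf entries above, adjust to your needs):

```ini
[Service]
LimitNOFILE=65000
LimitMEMLOCK=infinity
LimitSTACK=infinity
```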


* Restart slurm on all nodes.

$ sudo systemctl restart snap.slurm.slurmd.service

* Make sure login nodes have the correct ulimits after a login.

* Validate that all worker nodes also have the correct ulimit values when using slurm. For example:

$ srun -N 1 --pty bash
$ ulimit -a

All ulimit settings must be consistent, or things will go sideways. Remember that slurm propagates ulimits from the submitting node, so make sure those are consistent there too.
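To make the comparison across nodes less error-prone, the limits that matter here can be dumped with a tiny script (a sketch; run it both on the login node and inside an srun shell, then diff the output):

```shell
#!/bin/sh
# Print the three limits that matter for MPI over Infiniband:
# open files (-n), locked memory (-l) and stack size (-s)
for flag in n l s; do
    printf 'ulimit -%s = %s\n' "$flag" "$(ulimit -"$flag")"
done
```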

Compile OpenMPI 4.0.4

At the time of writing, this is the latest version. This is my configure line, but you can compile it differently for your needs.

$ ./configure --prefix=/opt/openmpi-4.0.4 --with-ucx --with-verbs
$ make -j
$ sudo make install

Validate that openmpi can see the correct ucx mca components

In this step I'm mostly concerned that the ucx pml is available in the MCA for openmpi, so after the compilation is done, I check for that and for the openib btl.

$ /opt/openmpi-4.0.4/bin/ompi_info  | grep -E 'btl|ucx'

MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.4)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.4)
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.4)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.4)
MCA fbtl: posix (MCA v2.1.0, API v2.0.0, Component v4.0.4)
MCA osc: ucx (MCA v2.1.0, API v3.0.0, Component v4.0.4)
MCA pml: ucx (MCA v2.1.0, API v2.0.0, Component v4.0.4)

What we are looking for here is:

* MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.4)
* MCA pml: ucx (MCA v2.1.0, API v2.0.0, Component v4.0.4)

The rest are not important at this point (but if you know better, please let me know). You can see in the jobscript later where these modules are referenced.
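If you script your cluster setup, the same check can be automated; here is a small sketch (a hypothetical helper, pointed at saved ompi_info output):

```shell
#!/bin/sh
# have_ucx_pml FILE: succeed only if the saved "ompi_info" output in FILE
# lists both the openib btl and the ucx pml
have_ucx_pml() {
    grep -q 'btl: openib' "$1" && grep -q 'pml: ucx' "$1"
}

# Example usage:
# /opt/openmpi-4.0.4/bin/ompi_info > /tmp/ompi.txt
# have_ucx_pml /tmp/ompi.txt && echo "ready for ucx"
```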

Validate that ucx_info sees your Infiniband device and ib_verbs transports

In my case, I have a Mellanox device (shown with: ibv_devices), so I should see that with ucx_info:

$ ucx_info -d | grep -1 mlx5_0

# Memory domain: mlx5_0
#     Component: ib

#   Transport: rc_verbs
#      Device: mlx5_0:1

#   Transport: rc_mlx5
#      Device: mlx5_0:1

#   Transport: dc_mlx5
#      Device: mlx5_0:1

#   Transport: ud_verbs
#      Device: mlx5_0:1

#   Transport: ud_mlx5
#      Device: mlx5_0:1

Modify the STARCCM+ installation

My version of StarCCM+ ships with an old ucx and calls /usr/bin/ucx_info. At some point during startup, it fails when it is not able to find this while using our custom OpenMPI. Perhaps there is a way to force StarCCM+ to look for ucx_info on the system, but I have not found one.

To have StarCCM+ ignore its own ucx, simply remove the ucx from the installation tree and replace with an empty directory.

$ sudo rm -rf /opt/STAR-CCM+14.06.012/ucx/1.5.0-cda-001/linux-x86_64*
$ sudo mkdir -p /opt/STAR-CCM+14.06.012/ucx/1.5.0-cda-001/linux-x86_64-2.17/gnu7.1/lib

This is not needed on OSes such as CentOS 6 and CentOS 7, because they use the deprecated library.

Time to write the job-script

#!/bin/bash
#SBATCH -J starccmref
#SBATCH -n 80
set -o xtrace
set -e

# StarCCM+
export PATH=$PATH:/opt/STAR-CCM+14.06.012/star/bin

# OpenMPI
export OPENMPI_DIR=/opt/openmpi-4.0.4
export PATH=${OPENMPI_DIR}/bin:$PATH

# Kill any leftovers from previous runs

# Assemble a machinefile using the hostlist utility from the
# python-hostlist package ($hostListbin should point to that binary)
NODE_FILE=nodefile.$SLURM_JOB_ID
$hostListbin --append=: --append-slurm-tasks=$SLURM_TASKS_PER_NODE -e $SLURM_JOB_NODELIST > $NODE_FILE

# Start
starccm+ -machinefile ${NODE_FILE} \
         -power \
         -batch ./ \
         -np $SLURM_NTASKS \
         -ldlibpath $LD_LIBRARY_PATH \
         -classpath $STAR_CLASS_PATH \
         -fabricverbose \
         -mpi openmpi \
         -mpiflags "--mca pml ucx --mca btl openib --mca pml_base_verbose 10 --mca mtl_base_verbose 10"

# Kill off any rogue processes
You will probably need to modify the script above for your own environment, but the general pieces are in there.
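If you don't have python-hostlist around, slurm itself can expand the nodelist; here is a sketch of building an equivalent "host:ntasks" machinefile with scontrol (assuming an even task distribution via --ntasks-per-node, so SLURM_NTASKS_PER_NODE is set):

```shell
#!/bin/sh
# Build a "host:ntasks" machinefile from slurm's own view of the job
for host in $(scontrol show hostnames "$SLURM_JOB_NODELIST"); do
    echo "${host}:${SLURM_NTASKS_PER_NODE}"
done > "nodefile.$SLURM_JOB_ID"
```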

Submit to slurm

You want this job to run on multiple machines, so I use -n 80 to allocate 2×40 cores, which slurm understands as allocating the two nodes used in this example. If you have fewer or more cores than I do, use a 2×N number in your submit.

$ sbatch -p debug -n 80 ./

You can watch your Infiniband counters to see that a significant amount of traffic is sent over the wire, which indicates that you have succeeded.

$ watch -d cat /sys/class/infiniband/mlx5_0/ports/1/counters/port_rcv_packets
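Instead of eyeballing the watch output, you can also sample the counter twice and print the growth (a small sketch; the counter path assumes the mlx5_0 device from earlier):

```shell
#!/bin/sh
# counter_delta FILE SECONDS: print how much a monotonic counter grew
counter_delta() {
    a=$(cat "$1")
    sleep "$2"
    b=$(cat "$1")
    echo $((b - a))
}

# Example:
# counter_delta /sys/class/infiniband/mlx5_0/ports/1/counters/port_rcv_packets 5
```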

I’ve been presenting at Ubuntu Masters about the setup I use to work with my systems which allows me to do things like this easily. Here is a link to that material:

Here is also another similar walkthrough of doing this, with the CFD application Powerflow:
