5. Integration with Slurm

Preview Release

This is an early release of the Slurm plugin for the IPU. As such, the software is subject to change without notice.

The Slurm plugin is available on request from Graphcore support.

This section describes the integration of the V-IPU with Slurm. Slurm is a popular open-source cluster management and job scheduling system. The integration of the V-IPU with Slurm is provided through a custom V-IPU resource selection plugin for Slurm systems. In addition, a helper job submission plugin is provided to validate submitted job requests and to set up a generic resource (GRES) for V-IPU accounting.

For more details about Slurm and its architecture, please refer to the Slurm website.

A Slurm plugin is a dynamically linked code object that provides a customized implementation of well-defined Slurm APIs. Slurm plugins are loaded at runtime by the Slurm libraries, and the customized API callbacks are invoked at the appropriate stages.

Resource selection plugins are a type of Slurm plugin that implements the Slurm resource/node selection APIs. These APIs provide rich interfaces for customized selection of nodes for jobs, for performing any tasks needed to prepare a job run (such as partition creation in our case), and for appropriate clean-up at job termination (such as partition deletion in our case).

5.1. Configuring Slurm to use V-IPU select plugin

Note

This document assumes that you have access to pre-compiled Slurm binaries with the V-IPU plugin support or you have already patched and recompiled your Slurm installation with the V-IPU support.

To enable V-IPU resource selection in Slurm, set SelectType to select/vipu in the Slurm configuration. The V-IPU Slurm plugin is a layered plugin, which means it can add V-IPU support on top of an existing resource selection plugin. Options for the selected secondary resource selection plugin can be specified in SelectTypeParameters.

The following is an example of the Slurm configuration enabling the V-IPU resource selection plugin layered on top of a consumable resource allocation plugin (select/cons_res) with the CPU as a consumable resource:

SelectType=select/vipu
SelectTypeParameters=other_cons_res,CR_CPU

Note the other_ prefix added to the layered resource selection plugin’s name. In the same way, other_cons_tres and other_linear can also be configured. For the SelectTypeParameters values supported by each of the existing resource selection plugins, refer to the Slurm documentation.
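
For example, to layer the V-IPU plugin on top of the consumable trackable resources plugin instead, a sketch (CR_CPU_Memory is just one of the parameter combinations that cons_tres supports):

SelectType=select/vipu
SelectTypeParameters=other_cons_tres,CR_CPU_Memory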

5.2. Configuration parameters

Configuration parameters for the V-IPU resource selection plugin are set in separate configuration files that must be stored in the same directory as slurm.conf. The default configuration file is named vipu.conf. In addition, administrators can configure further GRES models for the V-IPU, each representing a different V-IPU cluster. For these additional GRES models, the configuration files are named after the model: for instance, a GRES model pod1 needs a corresponding configuration file named pod1.conf in the Slurm configuration directory.

The following configuration options are supported:

  • ApiHost: The host name or IP address for the V-IPU controller.

  • ApiPort: The port number for the V-IPU controller. Default port is 8090.

  • IpuofDir: The directory where IPUoF configuration files for user jobs will be stored.

  • MaxIpusPerJob: Maximum number of IPUs allowed per job. The default value is 256.

  • ApiTimeout: Timeout in seconds for the V-IPU client. The default value is 50.

  • ForceDeletePartition: Set to 1 to force deletion of a partition in case of failures. The default value is 0.

  • UseReconfigPartition: Set to 1 to specify that reconfigurable partitions should be created. The default value is 0.
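
As an illustration, a minimal vipu.conf for a non-secure setup might look like the following; the controller host name is a placeholder, IpuofDir matches the shared directory used in the example in Section 5.4, ApiPort and ApiTimeout restate their defaults, and MaxIpusPerJob is shown here lowered to 64:

ApiHost=vipu-ctrl.example.com
ApiPort=8090
IpuofDir=/home/ipuof
MaxIpusPerJob=64
ApiTimeout=50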

The V-IPU resource selection plugin also supports secure communication using mTLS between the plugin and the V-IPU controller. To enable secure communication, set IsSecure in the V-IPU configuration files. Note that the V-IPU controller must be configured to run in secure mode for this option to work.

The following parameters are needed for secure mode:

  • ApiUser: The username of the API user that the plugin will use.

  • ApiAccessKey: Unique access key assigned to the API user.

  • ClientCertFile: Client certificate to be used for TLS.

  • ClientKeyFile: Client key file to be used for TLS.

  • ServerCertFile: Server certificate to be used for TLS.
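
For example, the secure-mode settings might be added to the same configuration file as follows; the user name, access key and certificate paths are placeholders, and IsSecure=1 assumes the same 0/1 convention as the other boolean options:

IsSecure=1
ApiUser=vipu-slurm-user
ApiAccessKey=<access key assigned to the API user>
ClientCertFile=/etc/slurm/certs/client.crt
ClientKeyFile=/etc/slurm/certs/client.key
ServerCertFile=/etc/slurm/certs/server.crt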

In addition, slurm.conf should contain the following configuration option to allow sharing of the IPUoF configuration files needed by the Graphcore Poplar SDK.

  • VipuIpuofDir: Path to a shared storage location writable by the scheduler and readable by all nodes and user accounts.
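
For example, matching the shared directory used in the example configuration in Section 5.4:

VipuIpuofDir=/home/ipuof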

5.3. The V-IPU job submission plugin

The V-IPU job submission plugin can be configured to track V-IPU usage via the flexible GRES plugin mechanism. To enable the V-IPU job submission plugin, add vipu to the list of job submission plugins in the Slurm configuration and add vipu to the list of GRES types defined for the Slurm cluster:

JobSubmitPlugins=vipu
GresTypes=vipu

In addition, for each node that can access a V-IPU resource, the following node GRES configuration must be added:

Gres=vipu:<GRES_MODEL>:no_consume:<max partition size>
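
For example, a node attached to the pod64 GRES model used in Section 5.4, with a maximum partition size of 64 IPUs, would use:

Gres=vipu:pod64:no_consume:64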

5.4. An example Slurm Controller configuration

Note

The following settings take precedence over any values configured in your existing slurm.conf configuration file.

The following outlines an example of using the V-IPU plugin to configure a Slurm cluster containing a single IPU-POD64, with four compute nodes having shared access to the directory /home/ipuof. The GRES model is named pod64, and a V-IPU Controller is running on the first node using the default port without mTLS.

Node names are assumed to be ipu-pod64-001 through ipu-pod64-004.

  1. At the end of the slurm.conf, add the following line:

    Include v-ipu-plugin.conf
    
  2. Create a file called v-ipu-plugin.conf in the same directory as the slurm.conf containing the following parameters:

    SelectType=select/vipu
    SelectTypeParameters=other_cons_res,CR_CPU
    VipuIpuofDir=/home/ipuof
    JobSubmitPlugins=vipu
    GresTypes=vipu
    
    NodeName=ipu-pod64-001 State=UNKNOWN Gres=vipu:pod64:no_consume:64 CPUs=96 Boards=1 SocketsPerBoard=2 CoresPerSocket=24 ThreadsPerCore=2 RealMemory=760000 TmpDisk=4760000
    NodeName=ipu-pod64-002 State=UNKNOWN Gres=vipu:pod64:no_consume:64 CPUs=96 Boards=1 SocketsPerBoard=2 CoresPerSocket=24 ThreadsPerCore=2 RealMemory=760000 TmpDisk=4760000
    NodeName=ipu-pod64-003 State=UNKNOWN Gres=vipu:pod64:no_consume:64 CPUs=96 Boards=1 SocketsPerBoard=2 CoresPerSocket=24 ThreadsPerCore=2 RealMemory=760000 TmpDisk=4760000
    NodeName=ipu-pod64-004 State=UNKNOWN Gres=vipu:pod64:no_consume:64 CPUs=96 Boards=1 SocketsPerBoard=2 CoresPerSocket=24 ThreadsPerCore=2 RealMemory=760000 TmpDisk=4760000
    
    PartitionName=v-ipu Nodes=ipu-pod64-00[1-4] Default=NO MaxTime=INFINITE State=UP
    
  3. Create a file called vipu.conf in the same directory as the slurm.conf containing the following parameters:

    ApiHost=ipu-pod64-001
    IpuofDir=/home/ipuof
    
  4. Create a symbolic link to the vipu.conf file called pod64.conf in the same directory as the slurm.conf.
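
    For example, assuming the Slurm configuration directory is /etc/slurm (adjust the path for your installation):

    cd /etc/slurm
    ln -s vipu.conf pod64.conf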

5.5. Job submission and parameters

The V-IPU resource selection plugin supports the following options:

  • --ipus: Number of IPUs requested for the job.

  • -n / --ntasks: Number of tasks for the job. This will correspond to the number of GCDs requested for the job partition.

  • --num-replicas: Number of model replicas for the job.

These parameters can be set in both sbatch and srun scripts, as well as on the command line:

$ sbatch --ipus=2 --ntasks=1 --num-replicas=1 myjob.batch
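
The same options can be passed directly to srun; a minimal sketch, assuming ./ipu_program.sh is the program wrapper used in the job script examples below:

$ srun --ipus=2 --ntasks=1 --num-replicas=1 ./ipu_program.sh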

Optional:

If a V-IPU GRES has been configured, you can add the following option to the job definition to select a particular GRES model for the V-IPU:

--gres=vipu:<gres name>

You can set the GRES model parameter in both sbatch and srun scripts, as well as on the command line. For example, if the desired GRES model is pod64:

$ sbatch --ipus=2 --ntasks=1 --num-replicas=1 --gres=vipu:pod64 myjob.batch

5.5.1. Job script examples

The following is an example of a single-GCD job script:

#!/bin/bash
#SBATCH --job-name single-gcd-job
#SBATCH --ipus 2
#SBATCH -n 1
#SBATCH --time=00:30:00

srun ./ipu_program.sh

wait

You can configure a multi-GCD job in the same way, except that you indicate the number of GCDs requested by setting the number of tasks:

#!/bin/bash
#SBATCH --job-name multi-gcd-job
#SBATCH --ipus 2
#SBATCH -n 2
#SBATCH --time=00:30:00

srun ./ipu_program.sh

wait
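
To target a specific GRES model from within a job script, the --gres option can also be supplied as an #SBATCH directive. A minimal sketch, assuming the pod64 GRES model configured in the earlier example:

#!/bin/bash
#SBATCH --job-name multi-gcd-pod64-job
#SBATCH --ipus 2
#SBATCH -n 2
#SBATCH --gres=vipu:pod64
#SBATCH --time=00:30:00

srun ./ipu_program.sh

wait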