4. PopLibs API reference

The PopLibs libraries (Table 4.1) provide mathematical and machine-learning functions that can be used in Poplar programs for the IPU.

Table 4.1 PopLibs libraries

Library    | Depends on      | Description
-----------|-----------------|------------------------------------------------------------
popnn      | poplin, poputil | Functions used in neural networks (for example, non-linearities, pooling and loss functions)
poplin     | popops, poputil | Linear algebra functions (matrix multiplications, convolutions)
popops     | poputil         | Operations on tensors in control programs (elementwise functions and reductions)
poprand    | poputil         | Functions for populating tensors with random numbers
popsparse  |                 | Functions for operating on sparse tensors
poputil    |                 | General utility functions for building graphs

Adding PopLibs code to a graph program

To use the PopLibs libraries, you will need to include them on the linker command line (see Linking).
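For example, a program using popops, poplin and poprand might be linked with a command such as the following (an illustrative sketch only; the file names are hypothetical and the exact flags depend on your toolchain):

g++ -std=c++17 my_program.cpp -lpoplar -lpopops -lpoplin -lpoprand -lpoputil -o my_program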

Linking adds the host-side code for the PopLibs libraries to your program. You also need to explicitly add the IPU device code for the libraries to your graph program. Each library includes a function called addCodelets() to do this.

Note

The poputil library only includes host code and so does not have an addCodelets() function.

Your program will need to link and call addCodelets() for each of the libraries used as well as any libraries they depend on. For example, if your program uses poplin and poprand, then you will need to include these and popops (used by poplin) in your program, as shown in Listing 4.1.

Listing 4.1 Adding PopLibs code to a program
#include <popops/codelets.hpp>
#include <poplin/codelets.hpp>
#include <poprand/codelets.hpp>

... // create the graph object

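// Add the IPU device code for each library to the graph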
popops::addCodelets(graph);
poplin::addCodelets(graph);
poprand::addCodelets(graph);

Where graph is the Graph object containing your graph program.
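For reference, a minimal sketch of creating the graph object, assuming the simulated IPUModel target from the poplar library (a real device would instead be acquired through the DeviceManager):

#include <poplar/Graph.hpp>
#include <poplar/IPUModel.hpp>

// Create a simulated IPU device and a graph that targets it
poplar::IPUModel ipuModel;
poplar::Device device = ipuModel.createDevice();
poplar::Graph graph(device.getTarget());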

Utility functions (poputil)

General utility functions for building graphs.
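For example, a sketch using poputil::mapTensorLinearly from poputil/TileMapping.hpp, which spreads a tensor's elements evenly across the tiles of the target (the wrapper function name is hypothetical):

#include <poplar/Graph.hpp>
#include <poputil/TileMapping.hpp>

// Spread the elements of tensor t evenly across the tiles of the target
void layOutTensor(poplar::Graph &graph, const poplar::Tensor &t) {
  poputil::mapTensorLinearly(graph, t);
}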

Tensor operations (popops)

Functions for building operations on tensors in control programs (such as element-wise functions and reductions).
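For example, a minimal sketch of an elementwise addition using popops::add from popops/ElementWise.hpp (the wrapper function and tensor names are hypothetical):

#include <poplar/Graph.hpp>
#include <poplar/Program.hpp>
#include <popops/ElementWise.hpp>

// Compute a + b elementwise, appending the operation to prog
poplar::Tensor addTensors(poplar::Graph &graph,
                          const poplar::Tensor &a,
                          const poplar::Tensor &b,
                          poplar::program::Sequence &prog) {
  return popops::add(graph, a, b, prog, "a_plus_b");
}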

Linear algebra functions (poplin)

Linear algebra functions (matrix multiplications, convolutions).
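For example, a minimal sketch of a matrix multiplication using poplin::matMul from poplin/MatMul.hpp (the wrapper function name is hypothetical; A and B must be 2D tensors with compatible inner dimensions):

#include <poplar/Graph.hpp>
#include <poplar/Program.hpp>
#include <poplin/MatMul.hpp>

// Multiply an [m, k] tensor A by a [k, n] tensor B, giving an [m, n] result
poplar::Tensor multiplyMatrices(poplar::Graph &graph,
                                const poplar::Tensor &A,
                                const poplar::Tensor &B,
                                poplar::program::Sequence &prog) {
  return poplin::matMul(graph, A, B, prog, "A_times_B");
}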

Random number operations (poprand)

Functions for tensor operations using random numbers. These make use of the hardware pseudo-random number generators (PRNGs) on each tile, with a separate PRNG for each worker thread. The PRNGs are designed to allow every vertex to generate a different pseudo-random sequence and, importantly, to ensure that the same sequence can be regenerated when required.

These functions have an optional seed parameter for initialising the tiles’ PRNGs. Because there is no 64-bit integer type in device code, the seed is passed as a tensor of two 32-bit integers. This seed value is common to an entire graph or subgraph.

A “seed modifier” parameter is also used, which enables each vertex to generate a different pseudo-random sequence from the same seed. This is ignored if the seed is not specified.

The pseudo-random sequence is determined by a combination of tile-id, thread-id, seed and seed modifier.

If a seed is provided then, at the end of the operation, the PRNG state is restored to be the same as it was before the operation.

The functions have a reference tensor as a parameter. This is used to define the layout of the output tensor in order to guarantee deterministic results when a seed is specified. It ensures that if the same seed and seed modifier values are used then the same output is obtained.
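Putting these parameters together, a sketch of generating uniform random values with poprand::uniform from poprand/RandomGen.hpp (the seed value, tensor names and wrapper function are hypothetical):

#include <poplar/Graph.hpp>
#include <poplar/Program.hpp>
#include <poprand/RandomGen.hpp>

// Fill a tensor laid out like 'reference' with uniform values in [0.0, 1.0].
// The seed is a tensor of two 32-bit integers; the seed modifier lets each
// use of the same seed produce a different sequence.
poplar::Tensor randomUniform(poplar::Graph &graph,
                             const poplar::Tensor &seed,      // shape {2}
                             const poplar::Tensor &reference,
                             poplar::program::Sequence &prog) {
  const uint32_t seedModifier = 42;  // hypothetical value
  return poprand::uniform(graph, &seed, seedModifier, reference,
                          poplar::FLOAT, 0.0, 1.0, prog, "uniform");
}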

Sparse tensor operations (popsparse)

Functions for operating on block sparse tensors. Both static block sparsity and dynamic sparsity are supported.

Note

In the BlockSparseMatMul API, the sparse weight matrix representing the parameters of the fully-connected layer per group is W, with a dense shape of [outputChannelsPerGroup, inputChannelsPerGroup].

The equivalent dense operations performed for the different passes are listed below, where each multiplication is per group and X’ denotes the transpose of X.

  • Fwd/Inf: Ao = W * Ai

    Where:

    • Ao has shape [outputChannelsPerGroup, batchSize]

    • Ai has shape [inputChannelsPerGroup, batchSize]

  • GradA: Gi = W’ * Go

    Where:

    • Go has shape [outputChannelsPerGroup, batchSize]

    • Gi has shape [inputChannelsPerGroup, batchSize]

  • GradW: Gw = Go * Ai’

    Where:

    • Gw has the same shape as W, [outputChannelsPerGroup, inputChannelsPerGroup]
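As a concrete check with hypothetical sizes, take outputChannelsPerGroup = 4, inputChannelsPerGroup = 8 and batchSize = 16. Then W is [4, 8] and Ai is [8, 16], so Ao = W * Ai is [4, 16]; Gi = W’ * Go is [8, 4] times [4, 16], giving [8, 16]; and Gw = Go * Ai’ is [4, 16] times [16, 8], giving [4, 8], which matches the shape of W.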

Neural network functions (popnn)

Functions used in neural networks (for example, non-linearities, pooling, loss functions).
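As an illustrative sketch, applying a ReLU non-linearity with popnn::nonLinearity from popnn/NonLinearity.hpp (the tensor and wrapper function names are hypothetical):

#include <poplar/Graph.hpp>
#include <poplar/Program.hpp>
#include <popnn/NonLinearity.hpp>

// Apply a ReLU to the activations tensor, appending the operation to prog
poplar::Tensor applyRelu(poplar::Graph &graph,
                         const poplar::Tensor &activations,
                         poplar::program::Sequence &prog) {
  return popnn::nonLinearity(graph, popnn::NonLinearityType::RELU,
                             activations, prog, "relu");
}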