Examples and Tutorials

Overview

Tutorials repository

Simple example programs and tutorials are available in the Graphcore GitHub tutorials repository. This includes:

  • Tutorials to help you get started using the Poplar SDK and Graphcore tools to run code on the IPU.

  • Feature examples: small code examples showing you how to use various software features when developing for IPUs.

  • Simple application examples: basic applications written in different frameworks targeting the IPU.

  • Kernel benchmarks: code for benchmarking the performance of some selected types of neural network layers on the IPU, using TensorFlow or our PopART framework.

The tutorials repository also contains code used in technical notes, videos and blogs.

Examples repository

Sample applications are available in the Graphcore GitHub examples repository. This includes:

  • Applications

    Example applications written in different frameworks targeting the IPU.

  • Code examples

    Smaller models and code examples.

TensorFlow 1

The following examples are provided for TensorFlow 1. These include full models as well as examples of how to measure performance, use multiple IPUs and implement custom ops in Poplar, among other things.

These examples are tested against TensorFlow 1. Some of them will run with TensorFlow 2 without any changes.

Feature examples

These include examples of how to measure performance, use multiple IPUs and implement custom ops in Poplar.

Performance

  • Inspecting tensors

    This example trains simple pipelined and non-pipelined models on the MNIST digit dataset and shows how tensors (containing activations and gradients) can be returned to the host for inspection using outfeed queues, which can be useful for debugging a model. A minimal outfeed sketch follows this list.

  • I/O benchmarking

    The MNIST simple application shows how to use dataset_benchmark() to determine the maximum achievable throughput for a given dataset; a usage sketch follows below.
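
A minimal sketch of calling dataset_benchmark(), assuming the TF1 session API and a toy stand-in for the MNIST input pipeline (check the exact signature in the Graphcore TensorFlow API reference for your SDK version):

    import tensorflow as tf
    from tensorflow.python.ipu import dataset_benchmark

    # Toy dataset standing in for the MNIST input pipeline.
    ds = tf.data.Dataset.from_tensor_slices(tf.zeros([1024, 784]))
    ds = ds.batch(32, drop_remainder=True).repeat()

    # Returns a JSON string tensor reporting the achieved throughput.
    json_stats = dataset_benchmark.dataset_benchmark(
        ds, number_of_epochs=5, elements_per_epochs=32)

    with tf.Session() as sess:
        print(sess.run(json_stats))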

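For the tensor-inspection example, the key mechanism is an outfeed queue. A minimal TF1-style sketch, assuming the ipu.utils configuration API (the feed name and model are our own choices):

    import tensorflow as tf
    from tensorflow.python import ipu

    # Configure a single IPU (TF1-style configuration API).
    cfg = ipu.utils.create_ipu_config()
    cfg = ipu.utils.auto_select_ipus(cfg, 1)
    ipu.utils.configure_ipu_system(cfg)

    # TF1 outfeed queues require a unique feed name.
    outfeed = ipu.ipu_outfeed_queue.IPUOutfeedQueue(feed_name="activations")

    def body(x):
        a = tf.nn.relu(tf.layers.dense(x, 10))
        # Queue the activations for transfer back to the host.
        enqueue = outfeed.enqueue(a)
        with tf.control_dependencies([enqueue]):
            return tf.identity(a)

    with ipu.scopes.ipu_scope("/device:IPU:0"):
        out = ipu.ipu_compiler.compile(body, inputs=[tf.ones([4, 784])])

    dequeue = outfeed.dequeue()
    with tf.Session() as sess:
        sess.run(out)
        print(sess.run(dequeue))  # activations, now on the host
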
Using multiple IPUs

Simple examples demonstrating and explaining different ways of using multiple IPUs are provided; a minimal sharding sketch follows.

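One of those approaches is manual sharding, where each part of the model is explicitly placed on a particular IPU. A sketch, assuming the TF1 ipu.scopes.ipu_shard API (the repository also covers replication and pipelining):

    import tensorflow as tf
    from tensorflow.python import ipu

    cfg = ipu.utils.create_ipu_config()
    cfg = ipu.utils.auto_select_ipus(cfg, 2)  # reserve two IPUs
    ipu.utils.configure_ipu_system(cfg)

    def body(x):
        # Place each stage of the computation on its own IPU.
        with ipu.scopes.ipu_shard(0):
            h = tf.nn.relu(tf.layers.dense(x, 256))
        with ipu.scopes.ipu_shard(1):
            return tf.layers.dense(h, 10)

    with ipu.scopes.ipu_scope("/device:IPU:0"):
        out = ipu.ipu_compiler.compile(body, inputs=[tf.ones([4, 784])])
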
Custom ops

  • Custom op example

    Code that demonstrates how to define your own custom op using Poplar and PopLibs and use it in TensorFlow 1.

  • Custom op example with gradient

    Code that demonstrates how to define your own custom op using Poplar and PopLibs, and use it in TensorFlow 1. This example also shows how to define the gradient of your custom op so that you can use automatic differentiation and operations that depend on it, such as the minimize() method of an optimizer. A sketch of calling a precompiled custom op from TensorFlow follows this list.

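A sketch of calling a precompiled Poplar custom op from TensorFlow 1, assuming the ipu.custom_ops.precompiled_user_op entry point; the shared-object path is hypothetical, and the keyword arguments should be checked against the API reference for your SDK version:

    import tensorflow as tf
    from tensorflow.python import ipu

    with ipu.scopes.ipu_scope("/device:IPU:0"):
        x = tf.placeholder(tf.float32, [128])
        # "libcustom_op.so" is the shared object built from the Poplar code.
        y = ipu.custom_ops.precompiled_user_op(
            [x],
            library_path="libcustom_op.so",
            outs={
                "output_types": [tf.float32],
                "output_shapes": [tf.TensorShape([128])],
            })
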
Other examples

Simple application examples

Kernel benchmarks

  • Kernel benchmarking

    Code for benchmarking the performance of some selected neural network layers.

Code examples

  • UNet Industrial

    This is an image segmentation model for industrial use cases.

  • Markov chain Monte Carlo methods

    These are well-known techniques for solving integration and optimisation problems in large-dimensional spaces, with applications including algorithmic trading and computational biology. This example uses the TensorFlow Probability library; a generic sketch follows this list.

  • CosmoFlow

    This is a deep learning model for calculating cosmological parameters. The model primarily consists of 3D convolutions, pooling operations, and dense layers.

  • Block sparsity

    Examples of performing block-sparse computations on the IPU in TensorFlow. Includes the Poplar code for two block-sparse custom ops (matrix multiplication and softmax), which are used to construct a simple MNIST example.

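As a flavour of what the Markov chain Monte Carlo example above builds on, here is a minimal, generic TensorFlow Probability sketch (not the repository's actual model) that draws samples from a 2D standard normal with Hamiltonian Monte Carlo:

    import tensorflow as tf
    import tensorflow_probability as tfp

    # Log-density of a 2D standard normal distribution.
    def target_log_prob(x):
        return -0.5 * tf.reduce_sum(x * x, axis=-1)

    kernel = tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target_log_prob,
        step_size=0.1,
        num_leapfrog_steps=3)

    samples, _ = tfp.mcmc.sample_chain(
        num_results=1000,
        num_burnin_steps=200,
        current_state=tf.zeros([2]),
        kernel=kernel)
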
TensorFlow 2

Feature examples

Simple application examples

Example applications

  • Adversarial Generalized Method of Moments

    This example is an implementation of Adversarial Generalized Method of Moments, an approach based on generative adversarial networks for solving statistical problems with a wide variety of applications.

  • Graph Neural Network Example

    This example uses the Spektral GNN library to predict the heat capacity of various molecules in the QM9 dataset.

PyTorch

Tutorials

A set of tutorials introducing PyTorch support for the IPU; a minimal PopTorch sketch follows.

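The entry point the tutorials build on is wrapping a standard PyTorch model so that it compiles for, and runs on, the IPU. A minimal PopTorch inference sketch:

    import torch
    import poptorch

    model = torch.nn.Linear(4, 2)
    # Compile and run the unmodified PyTorch model on the IPU.
    ipu_model = poptorch.inferenceModel(model)
    out = ipu_model(torch.randn(8, 4))
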
Feature examples

  • Octave Convolutions

    Octave convolutions are a novel type of convolutional layer for neural networks. This example provides an implementation, showing how to train the model and run it for inference.

  • Using a custom op in a PyTorch model

    This example describes how to use a custom op created in Poplar in a PopTorch model; a sketch follows this list. It does not describe the creation of a custom op with Poplar in C++, only the use of the op within a PyTorch model.

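A sketch of the PopTorch side, assuming the custom op has already been compiled into a shared object; the library path, op name and ONNX domain below are hypothetical, and the exact signature of poptorch.custom_op should be checked in the PopTorch user guide:

    import ctypes
    import torch
    import poptorch

    # Load the shared object containing the compiled Poplar custom op.
    ctypes.cdll.LoadLibrary("build/custom_ops.so")  # hypothetical path

    class ModelWithCustomOp(torch.nn.Module):
        def forward(self, x):
            out = poptorch.custom_op(
                [x],               # inputs
                "LeakyRelu",       # op name registered by the C++ code
                "custom.ops",      # ONNX domain
                1,                 # domain version
                example_outputs=[x])  # tells PopTorch the output shapes/types
            return out[0]

    model = poptorch.inferenceModel(ModelWithCustomOp())
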
Simple application examples

Complete examples of models

Pre-trained models

  • Hugging Face BERT

    A pre-trained BERT model, made available by Hugging Face and implemented in PyTorch. This example runs one of the pre-trained BERT models on an IPU for an inference session; a loading sketch follows.

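Loading one of the pre-trained checkpoints is standard Hugging Face usage. A plain-PyTorch sketch; the actual example additionally wraps the model with PopTorch so the inference session runs on an IPU:

    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased").eval()

    inputs = tokenizer("Hello, IPU!", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
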
Other examples

  • PopART MNIST

    An example of how to export a PyTorch model as an ONNX file and reuse this file with Graphcore’s PopART; an export sketch follows.

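The export step is standard PyTorch. A sketch using a hypothetical stand-in for the tutorial's MNIST network (the tensor names are also our own choices):

    import torch

    # Hypothetical stand-in for the tutorial's MNIST network.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
    dummy = torch.randn(1, 1, 28, 28)

    # Export to ONNX; PopART can then build a session from this file.
    torch.onnx.export(model, dummy, "mnist.onnx",
                      input_names=["image"], output_names=["logits"])
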
Example applications

PopART / ONNX

There are several examples demonstrating how to use the Poplar Advanced Runtime (PopART). These include full models as well as examples of how to use multiple IPUs, implement custom ops in Poplar and other key features provided by PopART.

Feature examples

Multi-IPU examples

Custom operators

  • Custom Operators

    This directory contains two example implementations of custom operators for PopART (Cube and LeakyReLU). Both examples create an operation definition with forward and backward parts, and include a simple inference script to demonstrate using the operators.

Other examples

  • Data Callbacks

    This example creates a simple computation graph and uses callbacks to feed data and retrieve the results. The time between the host-device transfer and receipt of the result on the host is computed and displayed for a range of data sizes.

  • Distributed MNIST with Horovod Training Demo

    This example uses distributed training with a Horovod PopART extension to train a network on the MNIST dataset.

  • Automatic and Manual Recomputing

    This example shows how to use manual and automatic recomputation in PopART with a seven-layer DNN and generated data; a sketch of the relevant session options follows this list.

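Automatic recomputation in PopART is enabled through the session options, while manual recomputation marks individual tensors in the builder. A sketch, assuming the autoRecomputation option and the recomputeOutputInBackwardPass builder method (check both names against the PopART API reference for your SDK version):

    import popart

    # Automatic: let PopART choose the recomputation points.
    opts = popart.SessionOptions()
    opts.autoRecomputation = popart.RecomputationType.Standard

    # Manual: given a popart.Builder and an activation tensor "act",
    # mark it for recomputation in the backward pass instead of storing it:
    # builder.recomputeOutputInBackwardPass(act)
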
Simple application examples

  • Simple MNIST Examples

    Contains two simple models trained on the MNIST dataset: one linear and one using convolutions.

  • PopART MNIST

    An example of how to export a PyTorch model as an ONNX file and reuse this file with Graphcore’s PopART; a sketch of loading the ONNX file in PopART follows.

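Loading an exported ONNX file in PopART looks roughly like this; the tensor names "image" and "logits" are hypothetical and must match the names chosen at export time:

    import numpy as np
    import popart

    # Anchor the output tensor so its values are returned to the host.
    dataFlow = popart.DataFlow(1, {"logits": popart.AnchorReturnType("ALL")})

    session = popart.InferenceSession(
        fnModel="mnist.onnx",
        dataFlow=dataFlow,
        deviceInfo=popart.DeviceManager().acquireAvailableDevice(1))

    session.prepareDevice()
    anchors = session.initAnchorArrays()

    stepio = popart.PyStepIO(
        {"image": np.zeros((1, 1, 28, 28), np.float32)}, anchors)
    session.run(stepio)
    print(anchors["logits"])
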
Kernel benchmarks

  • Kernel Synthetic Benchmarks

    Contains synthetic benchmarks for models with two types of layer (LSTM and 2D convolution), run with generated data in both training and inference.

Example applications

Code examples

  • Block Sparsity

    Examples of using PopART to perform block-sparse computations on the IPU, including the Poplar code for two block-sparse custom ops (matrix multiplication and softmax). These are used to construct a number of examples and test scripts, including a simple MNIST example.

Poplar and PopLibs

Tutorials and examples that show how to use the Poplar and PopLibs libraries that underlie the other frameworks. These are particularly useful if you need to extend the existing frameworks, for example by writing a custom operation.

Feature examples

  • Advanced example

    This example performs a simple computation but demonstrates some of the advanced features of Poplar (for example, using dedicated programs for I/O, using PopLibs, saving and reusing executables, and choosing the number of IPUs). It is a good follow-up to the Poplar tutorials.

  • Prefetch

    This example shows how to implement prefetching of the input stream of a Poplar program using a callback, in order to maximise I/O performance.

Example applications