1. Introduction

The Poplar® SDK is the world’s first complete toolchain specifically designed for creating graph software for machine intelligence applications. Poplar integrates seamlessly with TensorFlow, PyTorch and the Open Neural Network Exchange (ONNX), allowing developers to use their existing development tools and machine learning models.

Poplar enables you to exploit features of the Graphcore Intelligence Processing Unit (IPU), such as parallel execution and efficient floating-point operations. Models written in industry-standard machine learning (ML) frameworks such as TensorFlow and PyTorch are compiled by Poplar to run in parallel on one or more IPUs.

You can import models from other ML frameworks using PopART™, the Poplar advanced runtime, and run them on the IPU. You can also use the Poplar graph library from C++ to create graph programs to run on IPU hardware.

Fig. 1.1 The Poplar SDK software stack

For more information about the IPU and its programming models, refer to the IPU Programmer’s Guide.

2. TensorFlow

The Poplar SDK includes implementations of TensorFlow 1.15 and 2.4 for the IPU. This includes support for distributed TensorFlow, as well as IPU-specific estimators and optimisers.
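For illustration, here is a minimal sketch of targeting the IPU from TensorFlow 2. It assumes the SDK’s TensorFlow 2 wheel is installed and an SDK release that provides the IPUConfig API; the Keras model itself is arbitrary:

    import tensorflow as tf
    from tensorflow.python import ipu

    # Attach to a single IPU (IPUConfig availability is an
    # assumption about the SDK version in use).
    config = ipu.config.IPUConfig()
    config.auto_select_ipus = 1
    config.configure_ipu_system()

    # Models built and called inside an IPUStrategy scope are
    # compiled for, and executed on, the IPU rather than the host.
    strategy = ipu.ipu_strategy.IPUStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10),
        ])
        model.compile(optimizer="adam",
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(
                          from_logits=True))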

Note

There are TensorFlow wheel files built and optimised for both AMD and Intel processors. You must install the appropriate version for your system.

For more information, refer to Targeting the IPU from TensorFlow 1 and Targeting the IPU from TensorFlow 2.

3. PyTorch

The Poplar software stack provides support for running PyTorch training and inference models on the IPU. This requires minimal changes to your existing PyTorch code.

You can create a wrapper for your existing PyTorch model with a single function call. This will create a PopTorch model that runs in parallel on a single IPU.

You can then choose how to split the model across multiple IPUs, to create a pipelined implementation that exploits pipeline parallelism across the IPUs.
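As a hedged sketch (the two-layer model here is hypothetical), poptorch.inferenceModel() wraps an unmodified torch.nn.Module to run on the IPU, and a poptorch.BeginBlock annotation assigns later layers to a second IPU for pipelined execution:

    import torch
    import poptorch

    class Classifier(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = torch.nn.Linear(784, 128)
            head = torch.nn.Linear(128, 10)
            # Everything from `head` onwards runs on a second IPU,
            # giving a two-stage pipeline across IPUs 0 and 1.
            self.head = poptorch.BeginBlock(head, ipu_id=1)

        def forward(self, x):
            return self.head(torch.relu(self.encoder(x)))

    # A single function call wraps the model for execution on the IPU.
    ipu_model = poptorch.inferenceModel(Classifier())
    output = ipu_model(torch.randn(16, 784))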

See the PyTorch for the IPU User Guide for more information.

4. PopART and ONNX

The Poplar advanced runtime (PopART) enables the efficient execution of both inference and training graphs on the IPU. Its main import format is the Open Neural Network Exchange (ONNX), an open format for representing machine learning models.

You can import and execute models created in other industry-standard frameworks, and use PopART’s differentiation and optimisation support to train their parameters. You can also create graphs directly in PopART, and export models in ONNX format, for example after training.

PopART includes Python and C++ APIs.
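As a rough sketch of the Python API (the tensor shapes are illustrative, and the IPU Model emulator is used so no hardware is needed), a small graph can be built and executed directly:

    import numpy as np
    import popart

    # Build a graph directly: y = x @ w
    builder = popart.Builder()
    x = builder.addInputTensor(popart.TensorInfo("FLOAT", [1, 4]))
    w = builder.addInitializedInputTensor(np.ones([4, 4], np.float32))
    y = builder.aiOnnx.matmul([x, w])
    builder.addOutputTensor(y)

    # One batch per step; anchor the output so it is returned to the host.
    dataflow = popart.DataFlow(1, {y: popart.AnchorReturnType("All")})
    device = popart.DeviceManager().createIpuModelDevice({})
    session = popart.InferenceSession(builder.getModelProto(), dataflow, device)
    session.prepareDevice()

    anchors = session.initAnchorArrays()
    session.run(popart.PyStepIO({x: np.ones([1, 4], np.float32)}, anchors))
    print(anchors[y])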

See the PopART User Guide for more information.

5. The Poplar libraries

The Poplar libraries are a set of C++ libraries consisting of the Poplar graph library and the open-source PopLibs™ libraries.

The Poplar graph library provides direct access to the IPU from code written in C++. You can write complete programs using Poplar, or use it to write functions to be called from your application written in a higher-level framework such as TensorFlow.

Poplar enables you to construct graphs, define tensor data and control how the code and data are mapped onto the IPU for execution. The host code and IPU code are both contained in a single program. Poplar compiles code for the IPU and copies it to the device to be executed. You can also pre-compile the device code for faster startup.

The open-source PopLibs library provides a range of higher-level functions commonly used in machine learning applications. This includes highly optimised and parallelised primitives for linear algebra, such as matrix multiplications and convolutions. There are also several functions used in neural networks (for example, non-linearities, pooling and loss functions) and many other operations on tensor data.

The source code for PopLibs is provided so you can use the library as a starting point for implementing your own functions.

For more information, refer to the Poplar and PopLibs User Guide.

Each vertex of a Poplar graph runs a “codelet” that executes code directly on one of the many parallel cores in the IPU. These codelets can be written in C++ or, when more performance is required, in assembly.

6. Running programs on the IPU

The PopRun tool is provided with the SDK to assist with running an application across multiple IPUs. It is a command line utility that launches distributed applications on IPU-POD compute systems. It creates multiple instances of the application; these can be launched on a single host server or spread across several host servers within the same IPU-POD, depending on how many host servers are available on the target IPU-POD.

PopRun is implemented with the PopDist API, which provides functions you can use to write your own distributed applications.
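For illustration, a minimal sketch of the kind of queries the PopDist Python API provides (function names as used in Graphcore’s public examples; treat the exact names as an assumption for your SDK version):

    import popdist

    # These queries are only meaningful when the program has been
    # launched by poprun, which configures each instance's environment.
    if popdist.isPopdistEnvSet():
        print("instance", popdist.getInstanceIndex(),
              "of", popdist.getNumInstances())
        print("total replicas:", popdist.getNumTotalReplicas())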

For more information, see the PopDist and PopRun: User Guide.

7. PopVision™ analysis tools

The PopVision analysis tools enable you to understand how your applications are performing and how they are utilising the IPU.

For more information, see the PopVision User Guide.

7.1. PopVision Graph Analyser

The PopVision Graph Analyser is an analysis tool that helps you gain a deep understanding of how your application is performing and utilising the IPU. By integrating directly with the internal profiling support of the Poplar graph engine and compiler, it enables you to profile and optimise the performance and memory usage of your machine learning models, whether developed in TensorFlow, PyTorch or natively using Poplar and PopLibs.

Fig. 7.1 The PopVision Graph Analyser

The Graph Analyser provides a graphical display of information about your program, including:

  • Summary report: Essential program information.

  • Memory report: Analyse program memory consumption and layout on one or multiple IPUs.

  • Liveness report: Explore temporary peaks in memory and their impact.

  • Execution trace report: View program execution.

For more information, see the PopVision Graph Analyser blog post.

7.2. PopVision System Analyser

The PopVision System Analyser allows you to identify bottlenecks on the host CPU by showing the profiling information collected by the PopVision Trace Instrumentation library for Poplar, frameworks and the user application.

Fig. 7.2 The PopVision System Analyser

It provides information about the behaviour of the host-side application code. It shows an interactive graphical view of the timeline of execution steps, helping you to identify any bottlenecks between the CPUs and IPUs.

For more information, see the PopVision Analysis Tools blog post.

7.3. The PopVision libraries

The PopVision analysis library (libpva) allows programmatic analysis of the IPU profiling information used by the Graph Analyser. The library provides both C++ and Python APIs that can be used to query the Poplar profiling information for your application.
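For example, a short sketch of the Python API (assuming a profile file written by Poplar’s profiling instrumentation; the filename is illustrative):

    import pva

    # Open a profile written by Poplar's profiling instrumentation.
    report = pva.openReport("./profile.pop")
    print("Poplar version:", report.poplarVersion.string)
    print("Number of compute sets:", len(report.compilation.computeSets))
    print("Number of tiles on target:", report.compilation.target.numTiles)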

The PopVision trace instrumentation library (libpvti) provides functions to manage the capture of profiling information for the host code of your IPU application. This data can then be explored with the PopVision System Analyser. The library provides C++ and Python APIs.
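As a minimal sketch of the Python API (the channel and tracepoint names are arbitrary, and time.sleep stands in for real host-side work):

    import time
    import libpvti as pvti

    # Create a named channel that will appear in the System Analyser.
    channel = pvti.createTraceChannel("MyApplication")

    # Mark the start and end of a host-side step so it shows up
    # on the execution timeline.
    pvti.Tracepoint.begin(channel, "preprocess")
    time.sleep(0.1)  # stands in for real host-side work
    pvti.Tracepoint.end(channel, "preprocess")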

8. Contents of the SDK

The Poplar SDK and the PopVision analysis tools can be downloaded from the Graphcore software download portal. Full documentation is included in the SDK package and on the Graphcore documentation portal. Further information and resources can be found on the Graphcore developer site.

You will need a support account to access the software download portal.

See the Getting Started Guide for your IPU system for full installation instructions.

The installed SDK contains the following components:

  • Command line tools for managing the IPU hardware. See the IPU Command Line Tools document for details.

  • The Poplar, PopLibs, PopDist and PopVision libraries.

  • PopART with support for ONNX graphs.

  • PopTorch wheel file for running PyTorch code on the IPU.

  • Python wheel files for installing versions of TensorFlow 1 and 2 that target the IPU.

  • A Horovod wheel file to support distributed training in PopART.

  • The PopRun command line tool for running distributed applications across multiple IPUs and hosts.

  • Documentation.

Note that, on some cloud-based systems, each user gets their own virtual machine, so you may need to install all the necessary software yourself. See the Getting Started Guide for information on checking what software is installed.

8.1. Requirements

To use the Poplar software you will need suitable hardware, such as an IPU-POD, or access to a cloud-based service that supports IPUs, such as Graphcloud.

The Poplar SDK is available for Ubuntu 18.04 or CentOS 7.6.

The tools require Python 3.6 to be installed. Other packages required by TensorFlow or PopTorch will be automatically installed when you install the wheel file. We recommend running TensorFlow and PopTorch in a Python virtual environment.

The Intel build of TensorFlow requires a processor that supports the AVX-512 instruction set extensions (Skylake or later).

The AMD build of TensorFlow requires a Ryzen-class processor.

The SDK also includes a software model of the IPU, so it is possible to develop and test your code even when IPU hardware is not available. The model has limited functionality but can be useful for unit testing, for example.

8.1.1. Docker containers

You can also download pre-configured Docker containers. These provide Poplar SDK images ready for deployment. The following Docker containers are available:

  • Tools: contains the necessary tools to interact with IPU devices

  • Poplar: contains Poplar, PopART and tools to interact with IPU devices

  • TensorFlow 1: contains everything in Poplar, with TensorFlow 1 pre-installed

  • TensorFlow 2: contains everything in Poplar, with TensorFlow 2 pre-installed

  • PyTorch: contains everything in Poplar, with PyTorch pre-installed

These are available for Ubuntu 18.04 only.

There are Intel and AMD variants of the TensorFlow containers. You must use the one that matches your hardware.

9. Support

Support is available from the Graphcore customer engineering team via the Graphcore support portal.

Graphcore also has GitHub repositories with further examples. For more information, see the Examples page.

You can use the tags “ipu”, “poplar” and “popart” when asking questions or looking for answers on Stack Overflow.