Software Documents
Licensed Software
This software is made available under the terms of the Graphcore End User License Agreement (EULA) and the Graphcore Container License Agreement. Please ensure you have read and accepted the terms of the corresponding license before using the software. The Graphcore EULA applies unless indicated otherwise.
There are release notes for each software release.
PyTorch
PyTorch for the IPU: User Guide
User guide and API reference for PyTorch on the IPU
PyTorch Geometric for the IPU: User Guide
User guide and API reference for PyTorch Geometric on the IPU
PyTorch technical notes
Memory and Performance Optimisation on the IPU
Optimising high-performance machine learning models running on the IPU
Creating Custom Operations for the IPU
An overview of the steps for implementing a custom op in each of the frameworks available in the Poplar SDK
Optimising Temporary Memory Usage for Convolutions and Matmuls on the IPU
Using the “available memory proportion” option to optimise memory use or performance
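As an illustration of the option described in the note above, the following is a minimal PopTorch sketch, assuming the poptorch wheel from the Poplar SDK is installed; the single-layer model is only a placeholder.

    import torch
    import poptorch

    # Placeholder model; any torch.nn.Module works here.
    model = torch.nn.Linear(128, 64)

    opts = poptorch.Options()
    # Let convolution/matmul planning use up to ~60% of tile memory
    # for temporary values on IPU 0 (0.6 is the Poplar default).
    opts.setAvailableMemoryProportion({"IPU0": 0.6})

    inference_model = poptorch.inferenceModel(model, opts)
    output = inference_model(torch.randn(4, 128))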
TensorFlow
Warning
The versions of TensorFlow included in Poplar SDK 2.5 and earlier are not compatible with protobuf version 4 (see TensorFlow issue #56077).
When you install a TensorFlow wheel from the Poplar SDK, you must ensure you have a compatible version of protobuf, downgrading if necessary.
See the getting started guides for more information.
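For example, after installing the TensorFlow wheel you can check the installed protobuf version from Python and downgrade with pip if it reports a 4.x release (the version bound shown is only illustrative):

    # Poplar SDK 2.5 and earlier TensorFlow wheels need protobuf 3.x
    # (see TensorFlow issue #56077).
    import google.protobuf

    print(google.protobuf.__version__)
    # If this prints 4.x, downgrade, for example:
    #   python -m pip install "protobuf<4"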
Targeting the IPU from TensorFlow 1
User guide and API reference for the IPU implementation of TensorFlow 1
Targeting the IPU from TensorFlow 2
User guide and API reference for the IPU implementation of TensorFlow 2
The Graphcore implementation of TensorFlow 2 includes Keras support for IPUs
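A minimal sketch of that Keras support, assuming a Graphcore TensorFlow 2 wheel is installed (the model and compile arguments are placeholders):

    import tensorflow as tf
    from tensorflow.python import ipu

    # Attach this process to a single IPU.
    cfg = ipu.config.IPUConfig()
    cfg.auto_select_ipus = 1
    cfg.configure_ipu_system()

    # Building and compiling inside an IPUStrategy scope places the
    # Keras model on the IPU.
    strategy = ipu.ipu_strategy.IPUStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
        model.compile(optimizer="adam", loss="mse", steps_per_execution=4)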
TensorFlow technical notes
Porting TensorFlow 2 Models Quick Start
A short description of how to port TensorFlow 2 models to the IPU, including helpful code snippets
Porting TensorFlow 1 models to the IPU
A practical guide to porting TensorFlow models to the IPU using the Poplar SDK.
Memory and Performance Optimisation on the IPU
Optimising high-performance machine learning models running on the IPU
Creating Custom Operations for the IPU
An overview of the steps for implementing a custom op in each of the frameworks available in the Poplar SDK
Optimising for the IPU: Computational Graph Recompilation and Executable Switching in TensorFlow
Strategies to minimise recompilation when running code on the IPU
Model Parallelism on the IPU with TensorFlow: Sharding and Pipelining
Ways of parallelising TensorFlow models on IPU hardware
Optimising Temporary Memory Usage for Convolutions and Matmuls on the IPU
Using the “available memory proportion” option to optimise memory use or performance
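In TensorFlow the same option is forwarded to Poplar through IPUConfig. A minimal sketch, assuming the convolutions/matmuls poplar_options attributes shown here; check the user guide above for the exact attribute names.

    from tensorflow.python import ipu

    cfg = ipu.config.IPUConfig()
    # Forward the option to Poplar for convolutions and matmuls;
    # the value is a string between 0 and 1 (0.6 is the default).
    cfg.convolutions.poplar_options = {"availableMemoryProportion": "0.6"}
    cfg.matmuls.poplar_options = {"availableMemoryProportion": "0.6"}
    cfg.configure_ipu_system()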
PopART
PopXL User Guide and API (experimental)
User Guide and API reference for working with PopXL
The Poplar Advanced Runtime (PopART) for importing and executing models using the ONNX format
PopART technical notes
Memory and Performance Optimisation on the IPU
Optimising high-performance machine learning models running on the IPU
Creating Custom Operations for the IPU
An overview of the steps for implementing a custom op in each of the frameworks available in the Poplar SDK
Optimising Temporary Memory Usage for Convolutions and Matmuls on the IPU
Using the “available memory proportion” option to optimise memory use or performance
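In PopART the option can be set per operation on the builder. A minimal sketch, assuming the setAvailableMemoryProportion builder call behaves as described in the PopART documentation:

    import numpy as np
    import popart

    builder = popart.Builder()
    x = builder.addInputTensor(popart.TensorInfo("FLOAT", [1, 64]))
    w = builder.addInitializedInputTensor(np.zeros([64, 64], dtype=np.float32))
    y = builder.aiOnnx.matmul([x, w])
    # Limit the temporary memory the planner may use for this matmul
    # to ~30% of tile memory.
    builder.setAvailableMemoryProportion(y, 0.3)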
Poplar graph programming framework
Information on how to use the Poplar graph programming tools to write code for the IPU
Poplar and PopLibs API Reference
Description of the classes and functions in the Poplar and PopLibs libraries
GCL User Guide and API Reference
Description of the classes and functions in the GCL library
Tile Vertex Instruction Set Architecture for Mk2 IPUs
Tile vertex instruction set architecture (ISA) for Mk2 IPUs. This document covers a subset of the instruction set used by the worker threads.
Poplar technical notes
Memory and Performance Optimisation on the IPU
Optimising high-performance machine learning models running on the IPU
Optimising Temporary Memory Usage for Convolutions and Matmuls on the IPU
Using the “available memory proportion” option to optimise memory use or performance
Running code on the IPU
PopDist and PopRun: User Guide
PopRun and PopDist support running applications across multiple IPUs
Tools to monitor and control the IPU hardware
Pre-built Graphcore Docker packages containing components of the Poplar SDK
A file format for exporting and importing models to run on the IPU, and a library for managing those files
A library built on the Poplar runtime to enable loading and running models stored in the Poplar Exchange Format (PopEF) on the IPU.
Poplar Triton Backend: User Guide
Information on the Poplar Triton backend: what it is used for, how to install it and how to use it
IPU TensorFlow Serving 2 User Guide
Information on exporting models from TensorFlow 2 and running them on IPUs using TensorFlow Serving
IPU TensorFlow Serving 1 User Guide
Information on exporting models from TensorFlow 1 and running them on IPUs using TensorFlow Serving
Kubernetes IPU Operator User Guide
Kubernetes Operator support for IPUs.
Using Kubernetes with MACVLAN to provide access to the RDMA network interface.
Profiling and debugging
PopVision Graph Analyser User Guide
The Graph Analyser can be downloaded from the PopVision tools web page
PopVision System Analyser User Guide
The System Analyser can be downloaded from the PopVision tools web page
PopVision Analysis Library (libpva) User Guide
The PopVision analysis library can be used for programmatic analysis of Poplar profiling information
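A minimal sketch of programmatic access with the analysis library, assuming its Python module is named pva and that a profile.pop report has already been generated by a Poplar program:

    import pva

    # Open a report produced by running a program with profiling enabled.
    report = pva.openReport("profile.pop")

    # Example query: the number of tiles on the target the program
    # was compiled for.
    print(report.compilation.target.numTiles)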
PopVision Trace Instrumentation Library
The PopVision trace instrumentation library provides functions to capture data used by the PopVision System Analyser
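A minimal sketch of instrumenting an application with the trace library, assuming the Python module is libpvti and that createTraceChannel and Tracepoint work as named here:

    import time
    import libpvti as pvti

    # Create a named channel that the System Analyser can display.
    channel = pvti.createTraceChannel("Application")

    # Record the duration of a block of work as a trace point.
    with pvti.Tracepoint(channel, "preprocess"):
        time.sleep(0.1)  # placeholder for real work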
System management tools
Information for recovering from ICU error states, for example after a failed firmware update.
User guide describing the functions and interfaces provided by the BMC software on an IPU-Machine (IPU-M2000 and Bow-2000).
User guide for the command line tools that provide information on the current status of the connected hardware. These tools are included with the Poplar SDK.
Graphcore IPU Info Library (gcipuinfo)
User guide for the Graphcore IPU Info library (gcipuinfo). This library provides an API for monitoring and gathering information about the IPUs available in a system, and the applications using them.
PopDist and PopRun: User Guide
User guide for configuring and running distributed applications. The Poplar Distributed Configuration Library (PopDist) provides an API to prepare applications for distributed execution and PopRun is the command line utility to launch distributed applications on Pod systems.
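A minimal sketch of how an application can detect that it was launched by poprun; the popdist function names used here are assumptions based on the PopDist Python API, so check the guide above.

    # Launched with, for example:
    #   poprun --num-instances 2 --num-replicas 2 python app.py
    import popdist

    if popdist.isPopdistEnvSet():
        # Running under poprun: each instance reads its index and the
        # total number of instances from the environment.
        print(popdist.getInstanceIndex(), "of", popdist.getNumInstances())
    else:
        print("Running as a single, non-distributed process")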
User guide for the tool that lets you perform operations on one or more IPU-Machines in a rack.
User guide explaining how to run applications in Docker on a Linux machine using one or more physical IPUs.
User guide for administrators of data centre clusters based on the Graphcore® Virtual-IPU™ (V-IPU). The V-IPU management software provides a control plane for large-scale multi-tenanted deployments of IPUs.
User guide for users of data centre clusters based on the Graphcore® Virtual-IPU™ (V-IPU). The V-IPU management software provides a control plane for large-scale multi-tenanted deployments of IPUs.
Open source software
TensorFlow 2 for IPU
Keras for TensorFlow 2 on the IPU
IPU TensorFlow Addons for TensorFlow 2 on the IPU
TensorFlow 1 for IPU
IPU TensorFlow Addons for TensorFlow 1 on the IPU
PopTorch
PopLibs libraries
Poprithms
A library of graph algorithms used by the ML frameworks