1. Introduction
The Graphcore Intelligence Processing Unit (IPU) is a highly parallel processor, specifically designed for machine learning and artificial intelligence applications. An IPU-based system, such as an IPU-POD™ or a Bow™ Pod, connects to a host computer, which can then execute code on one or more IPUs.
This programmer’s guide describes the architecture of the IPU, the type of programs it runs and how programs can use the features of the hardware.
The document is split into multiple sections:
- IPU hardware overview
This section describes how IPU hardware systems are structured. It discusses the parallel computing structure of the IPU, how the IPU memory is organised, how IPUs execute code and how IPUs transfer data to and from other processors.
- Programming model
This section describes the structure and features of programs that run on IPUs. Such programs underpin the implementations of machine learning frameworks, as well as any other application, on the IPU. This section also describes how programs execute on the IPU, which is particularly useful for debugging or for developing low-level IPU code.
- Programming tools
This section describes the programming tools available in the Poplar SDK to allow you to develop code to run on the IPU.
- Common algorithmic techniques for IPUs
This section describes common algorithms and algorithmic techniques that are used on the IPU. The techniques in this section are particularly applicable to machine learning (ML) algorithms and are used in the various ML frameworks that have been ported to the IPU. It gives an idea of the kind of techniques you can employ to run code efficiently on the IPU when using these frameworks.
A glossary of terms used to describe the IPU architecture and programming model is available on the Graphcore documentation portal at docs.graphcore.ai.