Guides for deploying applications on the IPU.
Command-line tool and API to support running distributed applications across multiple IPUs.
User guide explaining how to run applications on one or more physical IPUs using pre-built Graphcore Docker containers.
A file format for exporting and importing models to run on the IPU, and a library for managing those files.
A library built on the Poplar runtime to enable loading and running models stored in the Poplar Exchange Format (PopEF) on the IPU.
Information on the Poplar Triton backend: what it is used for, how to install it and how to use it.
Information on exporting models from TensorFlow 2 and running them on IPUs using TensorFlow Serving.
Information on exporting models from TensorFlow 1 and running them on IPUs using TensorFlow Serving.
Kubernetes Operator support for IPUs.
Using Kubernetes with MACVLAN to provide access to the RDMA network interface.
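The MACVLAN guide above covers attaching pods to the RDMA network interface. As a generic sketch only (not Graphcore-specific), this is commonly done with a Multus `NetworkAttachmentDefinition` wrapping the standard `macvlan` CNI plugin; the interface name `eth1`, the subnet, and the attachment name `rdma-macvlan` below are illustrative assumptions, not values from the guide:

```yaml
# Hypothetical Multus attachment for a macvlan sub-interface on the RDMA NIC.
# "master" names the host interface to clone; adjust to the actual RDMA interface.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: rdma-macvlan        # assumed name; referenced from the pod annotation below
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.0.0/16"
      }
    }'
```

A pod then requests the extra interface with the standard Multus annotation, e.g. `k8s.v1.cni.cncf.io/networks: rdma-macvlan` in its metadata; consult the guide itself for the interface names and addressing used in a real IPU deployment.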