Technical Notes and White Papers
An overview of the steps for implementing a custom op in each of the frameworks available in the Poplar SDK
A high-level description of the algorithmic design of the dynamic sparse matrix multiplication in the Graphcore PopSparse library
A practical guide to porting TensorFlow models to the IPU using the Poplar SDK
Ways of parallelising TensorFlow models on IPU hardware
Optimising high-performance machine learning models running on the IPU
Using the “available memory proportion” option to optimise memory use or performance
Strategies to minimise recompilation when running code on the IPU
BERT-Large implementation on Graphcore IPU-POD systems, using both TensorFlow and PyTorch
A reference configuration of an IPU-POD64 deployed with OpenStack management software
An example reference architecture, developed with Pure Storage, using FlashBlade storage with the IPU-POD
Using switched GW-Links to connect IPU-Machines in large-scale switched Pod systems
An example reference architecture, developed in partnership with Weka, using the Weka data platform for AI with a Graphcore Pod
A white paper introducing the Graphcore Cloud Native Pod, describing how cloud-native, software-defined as-a-service layers (Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS)) can be overlaid onto Graphcore's Pod systems
The IPU’s hardware and software architectures support fast and efficient training and inference of deep learning models using mixed precision arithmetic
A white paper from Cambrian AI Research examining the growing momentum of the Poplar software stack and ecosystem