Document Updates
The main updates to the documents are summarised below. In addition, there have been many incremental improvements and corrections.
May 2022
Added a set of Quick Starts to help you get started on Graphcloud.
March 2022
Added documents for the Bow range of products.
Extended the IPU Programmer's Guide with substantial new material.
Tutorials and examples
The following tutorials are now available in Markdown and as Jupyter notebooks:
The following simple applications have been turned into tutorials, and are available in Markdown and as Jupyter notebooks:
January 2022
Scaling AI with Graphcore and Pure Storage describes an example reference architecture developed with Pure Storage.
December 2021
New documents
The Memory and Performance Optimisation on the IPU guide describes general techniques for developing high-performance machine learning models running on the IPU.
Graphcore OpenStack reference design for IPU-POD systems illustrates a reference configuration of a Graphcore IPU‑POD64 rack deployed with OpenStack.
SDK 2.4 release
For full details of all the changes in this release, see the SDK 2.4 Release Notes.
New gcipuinfo library for monitoring and gathering information about the IPUs available in a system, and the applications using them.
Tutorials and examples
There are two new examples demonstrating use of the Poplar distributed configuration library (PopDist) for:
The following changes have been made to the Poplar tutorials:
Tutorial 5 has been moved to simple_applications/poplar/mnist.
Tutorials 6 and 7 have been renumbered and can now be found in tut5_matrix_vector and tut6_matrix_vector_opt, respectively.
The PyTorch tutorial on Fine-tuning BERT on the IPU is now available as a Jupyter notebook and a Python script.
The following tutorials are now available in Markdown and as Jupyter notebooks:
The PopART pytorch_api MNIST application has been deleted because it is not compatible with the current version of PopART.
October 2021
SDK 2.3 release
For full details of all the changes in this release, see the SDK 2.3 Release Notes.
Tutorials and examples
The TensorFlow 2 Graph recompilation tutorial was removed, as similar content is available in the technical note on optimising for the IPU in TensorFlow. This now covers both TensorFlow 1 and TensorFlow 2.
The Distributed MNIST with Horovod Training Demo was removed because it uses a feature (HostAllReduce) which is no longer supported. For distributed training, use PopRun and PopDist instead. The example remains compatible with SDK 2.1 and SDK 2.2.
August 2021
SDK 2.2 release
Targeting the IPU from TensorFlow 2 has been revised to reflect the move to TensorFlow 2.4.
The PopVision Analysis Library (libpva) User Guide and PopVision Trace Instrumentation Library documents have been split out from the Poplar and PopLibs User Guide.
The Vertex/Assembler Programming Guide has been merged into the Poplar and PopLibs User Guide.
For full details of all the changes in this release, see the SDK 2.2 Release Notes.
New tutorials and examples
July 2021
Published a new white paper on Graphcore's AI-Float technology, which provides hardware support for mixed precision and stochastic rounding.
June 2021
SDK 2.1 release
For full details of all the changes in this release, see the SDK 2.1 Release Notes.
Other changes
The Graphcore GitHub repositories have been split into a section for tutorials and examples to help you get started, and another with sample applications for the IPU.
New tutorial on using the PopVision analysis library (libpva)
New example of using PopDist for distributed training
New technical note published on using the availableMemoryProportion option to optimise memory use: Optimising Temporary Memory Usage for Convolutions and Matmuls on the IPU.
Updated TensorFlow 1 tutorial on half precision training:
an improved section on loss scaling
a new section on how to avoid numerical issues (overflow, underflow, unstable operations)
and an additional example using the IPUEstimator
Updated PyTorch tutorial on half precision training:
a new section on numerical stability (loss scaling, stochastic rounding)
a section on debugging floating-point exceptions
a section on PopTorch tracing
Updated Poplar profiling tutorial (tutorial 4):
code for the IPU Model and for running on hardware (previously only the IPU Model could be used)
an improved section on out-of-memory errors
an improved section on navigating a profile generated by PopVision Graph Analyser
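As generic background to the half-precision topics mentioned above (overflow, underflow and loss scaling), here is a minimal NumPy sketch. It is illustrative only and not taken from the tutorials themselves; the values used (60000, 1e-8, a loss scale of 1024) are arbitrary examples chosen to show the effects.

```python
import numpy as np

# Overflow: float16 cannot represent magnitudes above 65504,
# so this sum saturates to infinity.
big = np.float16(60000)
print(big + big)                         # inf

# Underflow: values below float16's smallest subnormal (~6e-8)
# round to zero, so very small gradients are lost entirely.
grad = 1e-8
print(np.float16(grad))                  # 0.0

# Loss scaling: multiply the loss (and hence the gradients) by a
# constant so small gradients stay representable in float16, then
# divide by the same constant when applying the weight update.
loss_scale = 1024.0
scaled = np.float16(grad * loss_scale)   # representable after scaling
recovered = float(scaled) / loss_scale
print(recovered)                         # close to the original 1e-8
```

The same mechanism underlies the loss-scaling sections in the TensorFlow 1 and PyTorch tutorials: the scale factor moves gradient values up into the representable range of the 16-bit format, and is divided out before the optimiser step.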
May 2021
Published a new technical note on the implementation and performance of BERT-Large on Graphcore IPU-POD systems: Pre-Training and Fine-Tuning BERT for the IPU
The V-IPU Administrator Guide has been updated with a new chapter on monitoring application metrics using Prometheus and InfluxDB.
April 2021
A new technical note has been published with an overview of how to add custom operations in PyTorch, TensorFlow and PopART: Creating Custom Operations for the IPU.
The technical note Model parallelism with TensorFlow: sharding and pipelining was revised to be consistent with SDK 2.0, and with the addition of content on pipelining Keras models in TensorFlow 2.
The “Writing custom operations” chapter in the TensorFlow 1 and TensorFlow 2 user guides has been completely rewritten.
New tutorials have been added on using half- and mixed-precision floating point in PopTorch and TensorFlow 1, and on efficient data loading with PopTorch.
March 2021
Hardware documents
The IPU-POD Direct Attach Build and Test Guide has been updated.
Published the IPU-POD16 Direct Attach Datasheet.
Updates since SDK 2.0
Since the SDK release, the following documents have been updated:
The Graphcore Command Line Tools document has been updated to 1.0.51. This adds a description of the information returned by the gc-info --tile-overview option and documents the new gc-exchangewritetest command.
Targeting the IPU from TensorFlow 1 has been updated with an improved explanation of the role of XLA.
SDK 2.0 release
For full details of all the changes in this release, see the SDK 2.0 Release Notes.
The main documentation changes are:
New: PopDist and PopRun: User Guide documents the tool for running distributed programs and the associated PopDist library.
New and updated functions in the Poplar and PopLibs API Reference:
New libraries:
poplar/DebugContext.hpp
poplar/TensorRearranger.hpp
poplin/Cholesky.hpp
poplin/TriangularSolve.hpp
popnn/CTCLoss.hpp
popnn/LogSoftmax.hpp
popnn/Rnn.hpp
popops/CollectiveTypes.hpp
popops/SequenceSlice.hpp
popops/SortOrder.hpp
popops/TopK.hpp
popsparse/MatMulParams.hpp
poputil/OptionParsing.hpp
poputil/TensorMetaData.hpp
Changes:
Renamed poplar/CycleEstimateFunc.hpp to poplar/PerfEstimateFunc.hpp
Renamed popops/Collectives.hpp to popops/TensorCollectives.hpp
The PopVision trace instrumentation (PVTI) and PopVision analysis (PVA) libraries are now included in the Poplar documentation.
Other changes
There is now a contents page listing the examples and tutorials available on GitHub.
The getting-started document for the IPU-M2000 DA system has been renamed to Getting Started with IPU-POD4 DA and IPU-POD16 DA, and updated.
November 2020
This release includes Poplar SDK 1.4, the IPU-M2000, the IPU-POD reference design and the supporting software tools.
Poplar SDK 1.4
Updates for new features in the SDK 1.4 release (see the SDK release notes for details of changes and new features).
PopART: New chapter on writing custom ops
PopTorch: extended PyTorch for the IPU: User Guide & new GitHub tutorial & how-to video
Keras: Introduction to using Keras in TensorFlow 2 on the IPU
TensorFlow: Added documentation and GitHub tutorial on Graph Recompilation in TensorFlow 2 (note: this tutorial was removed with SDK 2.3 so is not guaranteed to be compatible with this or later releases)
PopVision User Guide for the System Analyser and the trace instrumentation library (libpvti)
Additional video tutorials: Using Data Feeds & Training Loops, Evaluating Batch Sizes for IPUs and Fundamentals of BSP
New and updated documents for IPU-POD64 reference design
New documents for IPU-M2000 direct attach
Documents for hardware support tools: