The Graphcore Communication Library (GCL) provides functions for executing collective operations over tensor data, for making tensor data collective-friendly, and for allocating IO tiles on the IPUs.
GCL enables high-performance scale-out for IPU systems. It uses the IPU's built-in hardware support for transferring data directly from the memory of one IPU to another over the IPU-Fabric. The result is a low-overhead, high-throughput communication library, specifically targeted at systems such as the Pod128.
The library is used by frameworks such as TensorFlow to implement operations such as data-parallel gradient reduction with all-reduce.
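To illustrate what an all-reduce computes in the data-parallel case, the sketch below models its semantics in plain Python: each replica holds a local gradient tensor, and after the all-reduce every replica sees the element-wise sum of all replicas' gradients. This is a conceptual model only, not the GCL API; the function name `all_reduce_sum` and the list-based tensors are illustrative assumptions.

```python
def all_reduce_sum(replica_gradients):
    """Model of a sum all-reduce: every replica receives the
    element-wise sum of all replicas' local gradients.
    (Illustrative semantics only; not the GCL API.)"""
    summed = [sum(values) for values in zip(*replica_gradients)]
    return [list(summed) for _ in replica_gradients]

# Three replicas, each holding a two-element local gradient.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(all_reduce_sum(grads))  # -> [[9.0, 12.0], [9.0, 12.0], [9.0, 12.0]]
```

In a real data-parallel training step, each replica would then divide the summed gradient by the replica count (or the framework would fold that scaling into the reduction) before applying the optimiser update.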
See also the Poplar and PopLibs User Guide for an introduction to the Poplar Graph Programming Framework.