2. Sharding
Sharding means partitioning a neural network, represented as a computational graph, across multiple IPUs, each of which computes a certain part of this graph.
For example, suppose we plan to train a model on a single Dell DSS8440 server (see the DSS8440 Server white paper) that has a total of eight C2 cards (Graphcore C2 card) and therefore 16 IPUs. As shown in the figure below, the model we are training has 32 layers and cannot fit on a single IPU. We logically shard the model into 16 subgraphs across the eight C2 cards, so that each IPU computes one subgraph of two layers. The direction of the arrows indicates the flow of computation through the model. IPUs communicate with each other through IPU-Links, which have a bidirectional bandwidth of 64 GB/s.
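As a rough sketch of this mapping, using the ipu_shard scope described in Section 2.1 and a hypothetical build_layer() helper standing in for the real layer definitions, the 32 layers could be assigned to the 16 shards like this:

from tensorflow.python import ipu

NUM_LAYERS = 32
NUM_SHARDS = 16   # one shard per IPU across the eight C2 cards
LAYERS_PER_SHARD = NUM_LAYERS // NUM_SHARDS   # two layers per shard

def sharded_model(x):
    for layer_id in range(NUM_LAYERS):
        # Consecutive pairs of layers are placed on the same shard/IPU
        with ipu.scopes.ipu_shard(layer_id // LAYERS_PER_SHARD):
            x = build_layer(layer_id, x)   # build_layer() is hypothetical
    return x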
The figure below shows how we shard a neural network implemented in TensorFlow. Although the model is evenly partitioned across two IPUs, and each subgraph is visible only to its assigned IPU, the two subgraphs are encapsulated within the same session instance and can therefore be trained together in a distributed manner.
In the figure below, on the left is the computational graph we would like to execute. Suppose we assign part of the graph (P0, P1, P2, P3) to the CPU and partition the rest into two shards across two IPUs. The original computational graph (shown on the left) is then transformed into the graph on the right. When the variables required for a computation are distributed across different types of TensorFlow device (such as the CPU and an IPU), TensorFlow adds Send and Recv nodes to the graph. When sharding is used, copy nodes are also added between pairs of IPU shards to exchange variables. These copy nodes are implemented on top of the IPU-Links.
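As a minimal sketch of this behaviour (the operations here are illustrative and not the P0-P3 graph from the figure), placing the input on the CPU and splitting the remaining work across two shards is enough to trigger both kinds of node:

import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.python import ipu
from tensorflow.python.ipu.scopes import ipu_scope

tf.disable_v2_behavior()

# Attach to two IPUs
cfg = ipu.utils.create_ipu_config()
cfg = ipu.utils.auto_select_ipus(cfg, 2)
ipu.utils.configure_ipu_system(cfg)

# The input lives on the CPU, so TensorFlow inserts Send/Recv nodes
# at the CPU/IPU boundary
with tf.device("cpu"):
    x = tf.placeholder(np.float32, [2], name="x")

def two_shard_graph(x):
    with ipu.scopes.ipu_shard(0):
        y = x * 2.            # computed on shard 0
    with ipu.scopes.ipu_shard(1):
        return y + 1.         # y is copied to shard 1 over IPU-Link

with ipu_scope("/device:IPU:0"):
    out = ipu.ipu_compiler.compile(two_shard_graph, [x])

with tf.Session() as sess:
    print(sess.run(out, feed_dict={x: [1., 2.]}))   # prints [3., 5.]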
2.1. Graph sharding
We provide an API for manual graph sharding, allowing you to designate the sharding points of the graph arbitrarily, for maximum flexibility.
2.1.1. API
The manual model sharding API:
tensorflow.python.ipu.scopes.ipu_shard(index)
Shards a group of operations onto the specified IPU.
Parameters:
index
: The index of the IPU onto which this group of operations will be placed.
2.1.2. Code example
A code example is shown below:
 1  import numpy as np
 2  import tensorflow.compat.v1 as tf
 3  from tensorflow.python import ipu
 4  from tensorflow.python.ipu.scopes import ipu_scope
 5
 6  tf.disable_v2_behavior()
 7
 8  NUM_IPUS = 4
 9
10  # Configure the IPU system
11  cfg = ipu.utils.create_ipu_config()
12  cfg = ipu.utils.auto_select_ipus(cfg, NUM_IPUS)
13  ipu.utils.configure_ipu_system(cfg)
14
15  # Create the CPU section of the graph
16  with tf.device("cpu"):
17      pa = tf.placeholder(np.float32, [2], name="a")
18      pb = tf.placeholder(np.float32, [2], name="b")
19      pc = tf.placeholder(np.float32, [2], name="c")
20
21  # Distribute the computation across four shards
22  def sharded_graph(pa, pb, pc):
23      with ipu.scopes.ipu_shard(0):
24          o1 = pa + pb
25      with ipu.scopes.ipu_shard(1):
26          o2 = pa + pc
27      with ipu.scopes.ipu_shard(2):
28          o3 = pb + pc
29      with ipu.scopes.ipu_shard(3):
30          out = o1 + o2 + o3
31      return out
32
33  # Create the IPU section of the graph
34  with ipu_scope("/device:IPU:0"):
35      result = ipu.ipu_compiler.compile(sharded_graph, [pa, pb, pc])
36
37  with tf.Session() as sess:
38      # sharded run
39      result = sess.run(result,
40                        feed_dict={
41                            pa: [1., 1.],
42                            pb: [0., 1.],
43                            pc: [1., 5.]
44                        })
45      print(result)
auto_select_ipus on line 12 selects four independent IPUs for this task. This function queries the IPUs currently in the system, determines which are idle, and attaches to four that are available. sharded_graph() on lines 22-31 defines a simple graph consisting of a few additions. Most importantly, the graph is partitioned into four shards by calling the ipu.scopes.ipu_shard() API four times.
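Running this program should print a single-element list containing [4., 14.]: shard 0 computes o1 = [1., 2.], shard 1 computes o2 = [2., 6.], shard 2 computes o3 = [1., 6.], and shard 3 sums them to give [4., 14.].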
2.2. Limitations of sharding
Sharding simply partitions the model and distributes it across multiple IPUs. The shards execute in series, so they cannot fully utilise the computing resources of the IPUs. As shown in the figure below, the model is partitioned into four parts and executed on four IPUs in series; only one IPU is working at any one time.
Sharding is a valuable method for use cases that need more control in combination with replication, for example random forests, federated averaging and co-distillation. It can also be useful when developing or debugging a model. For best performance, however, pipelining should be used in most cases.
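For reference, below is a minimal sketch of the pipelined alternative. It assumes the pipelining_ops.pipeline API from the same SDK generation as the example above; argument names (for example pipeline_depth) have changed between SDK releases, so treat this as a sketch rather than a drop-in program and check the API documentation for your SDK version.

import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.python import ipu
from tensorflow.python.ipu import ipu_infeed_queue, ipu_outfeed_queue, pipelining_ops
from tensorflow.python.ipu.scopes import ipu_scope

tf.disable_v2_behavior()

cfg = ipu.utils.create_ipu_config()
cfg = ipu.utils.auto_select_ipus(cfg, 2)
ipu.utils.configure_ipu_system(cfg)

# Feed a constant vector repeatedly; each stage runs on its own IPU,
# so successive mini-batches overlap instead of executing in series
dataset = tf.data.Dataset.from_tensors(np.float32([1., 1.])).repeat()
infeed = ipu_infeed_queue.IPUInfeedQueue(dataset, feed_name="infeed")
outfeed = ipu_outfeed_queue.IPUOutfeedQueue(feed_name="outfeed")

def stage1(x):
    return x + 1.          # placed on the first IPU

def stage2(x):
    return x * 2.          # placed on the second IPU

def pipelined_net():
    return pipelining_ops.pipeline(
        computational_stages=[stage1, stage2],
        pipeline_depth=8,  # number of in-flight mini-batches (name varies by SDK)
        infeed_queue=infeed,
        outfeed_queue=outfeed)

with ipu_scope("/device:IPU:0"):
    run_pipeline = ipu.ipu_compiler.compile(pipelined_net, [])

dequeue = outfeed.dequeue()

with tf.Session() as sess:
    sess.run(infeed.initializer)
    sess.run(run_pipeline)
    print(sess.run(dequeue))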