5. Scoping and determining unsupported operations

Porting a model to the IPU usually requires an explicit call to ipu_scope to define which part of the graph runs on the IPU. In the simplest case, this means taking the model definition and scoping it onto the IPU in its entirety:

with ipu_scope('/device:IPU:0'):

or, as shown in lines 73-75 of the ResNext example in Section 3.1, Abridged sample code:

73   # Compiles graph and targets IPU(s)
74   with ipu.scopes.ipu_scope('/device:IPU:0'):
75       res = ipu.ipu_compiler.compile(resnext101_model, inputs=[])

The ipu_compiler.compile wrapper compiles the model graph and optimizes it for the IPU. The IPUEstimator, described in Section 4, Training with the estimator API, uses the same approach internally, so it does not require explicit scoping or compilation.
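As a minimal sketch of this pattern: the model function, shapes, and configuration calls below are illustrative assumptions, not part of the ResNext example; the import path and the ipu.utils configuration helpers follow Graphcore's TensorFlow 1 port and may differ between SDK versions.

```python
import tensorflow.compat.v1 as tf
from tensorflow.python import ipu  # Graphcore's TensorFlow port (requires the Poplar SDK)

# Hypothetical model function; any graph built inside it is compiled for the IPU.
def simple_model(x):
    w = tf.get_variable('w', shape=[4, 2])
    return tf.matmul(x, w)

x = tf.placeholder(tf.float32, shape=[8, 4])

# Compile the graph and scope its execution onto the first IPU device.
with ipu.scopes.ipu_scope('/device:IPU:0'):
    out = ipu.ipu_compiler.compile(simple_model, inputs=[x])

# Configure the IPU system before running the session (SDK-version dependent).
cfg = ipu.utils.create_ipu_config()
cfg = ipu.utils.auto_select_ipus(cfg, 1)
ipu.utils.configure_ipu_system(cfg)
```

Running this requires IPU hardware (or the IPU Model simulator) and Graphcore's TensorFlow wheel; it is a sketch of the scoping and compilation pattern, not a complete training script.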

In general, you can scope most facets of the model definition to run on the IPU, which simplifies deployment. If an operation in the graph is unsupported, however, this approach is not possible: TensorFlow will raise an error identifying the operation that cannot run on the IPU and listing its supported and unsupported data types. In such cases, you need to place that component of the graph on the host:

with tf.device('/device:CPU:0'):
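A sketch of mixing the two placements: the supported part of the graph is compiled for the IPU, while a host-only operation is pinned to the CPU. The string-conversion op here is an illustrative stand-in for an unsupported operation; the model function and shapes are assumptions.

```python
import tensorflow.compat.v1 as tf
from tensorflow.python import ipu  # Graphcore's TensorFlow port

def model(x):
    w = tf.get_variable('w', shape=[4, 4])
    return tf.matmul(x, w)

x = tf.placeholder(tf.float32, shape=[2, 4])

# The supported portion of the graph is compiled for and placed on the IPU.
with ipu.scopes.ipu_scope('/device:IPU:0'):
    out = ipu.ipu_compiler.compile(model, inputs=[x])

# An op the IPU cannot run (string ops are used here as an example) is
# explicitly placed on the host CPU instead.
with tf.device('/device:CPU:0'):
    labels = tf.strings.as_string(tf.cast(out[0], tf.int32))
```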

You can also use the allow_soft_placement option to have TensorFlow place unsupported operations on the CPU automatically. If you use this option when constructing the Session, there is no need to explicitly place operations on the host:

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess: