2. Importing graphs

The PopART Session class creates the runtime environment for executing graphs on IPU hardware. It can read an ONNX graph from a serialised ONNX model protobuf (ModelProto), either directly from a file or from memory. A session object can be constructed either as an InferenceSession or a TrainingSession.

To run the graph, some metadata must be supplied to augment the information present in the ONNX model, as described below.

In the following example of importing a graph for inference, TorchVision is used to create a pre-trained AlexNet graph, with a 4 x 3 x 224 x 224 input. The graph has an ONNX output called output, and the DataFlow object contains an entry to fetch that anchor.

# Copyright (c) 2020 Graphcore Ltd. All rights reserved.
import popart

import torch.onnx
import torchvision

input_ = torch.randn(4, 3, 224, 224)
model = torchvision.models.alexnet(pretrained=True)

output_name = "output"

torch.onnx.export(model, input_, "alexnet.onnx", output_names=[output_name])

# Create a runtime environment
anchors = {output_name: popart.AnchorReturnType("All")}
dataFlow = popart.DataFlow(100, anchors)
device = popart.DeviceManager().createCpuDevice()

session = popart.InferenceSession("alexnet.onnx", dataFlow, device)

The DataFlow object is described in more detail in Executing graphs.

2.1. Creating a session

The Session class takes the name of a protobuf file, or the protobuf itself. It also takes a DataFlow object which has information about how to execute the graph, as illustrated in the sketch after this list:

  • The number of times to conduct a forward pass (and a backward pass, if training) of the graph on the IPU before returning to the host for more data.

  • The names of the tensors in the graph used to return the results to the host.
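
For example, these two items map directly onto the DataFlow constructor: the first argument is the number of batches to process per step, and the second is a dictionary from tensor names to anchor return types. The sketch below is illustrative; the tensor names "output" and "probs" are placeholders and must match outputs that actually exist in your graph.

# Illustrative DataFlow: tensor names are assumed outputs of the imported graph
anchors = {
    "output": popart.AnchorReturnType("All"),    # return the result of every batch
    "probs": popart.AnchorReturnType("Final"),   # return only the final batch of each step
}
dataFlow = popart.DataFlow(100, anchors)         # 100 batches per step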

In some ONNX graphs, the sizes of input tensors might not be specified. In this case, the inputShapeInfo parameter can be used to specify the input shapes. The Poplar framework uses statically allocated memory buffers and so it needs to know the size of every tensor before compilation.
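
The sketch below shows one way this might look. It assumes the imported model is in "model.onnx" and has an input tensor named "data" whose shape is not recorded in the graph; both names are illustrative, and dataFlow and device are created as in the examples above.

# Supply the missing input shape explicitly (names here are illustrative)
inputShapeInfo = popart.InputShapeInfo()
inputShapeInfo.add("data", popart.TensorInfo("FLOAT", [4, 3, 224, 224]))

session = popart.InferenceSession("model.onnx",
                                  dataFlow=dataFlow,
                                  deviceInfo=device,
                                  inputShapeInfo=inputShapeInfo)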

The patterns parameter allows the user to select a set of graph transformation patterns which will be applied to the graph. Without this parameter, a default set of optimisation transformations will be applied.
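
As a rough sketch, a pattern set can be constructed explicitly and passed to the session. Here popart.PatternsLevel.Default selects the default set; other level names (such as All or NoPatterns) may be available, depending on the PopART version.

# Explicitly select the default set of graph transformation patterns
patterns = popart.Patterns(popart.PatternsLevel.Default)

session = popart.InferenceSession("model.onnx",
                                  dataFlow=dataFlow,
                                  deviceInfo=device,
                                  patterns=patterns)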

Other parameters to the Session object are used when you are training the network instead of performing inference. They specify the loss to apply to the network and the optimiser to use.

An example of creating a session object from an ONNX model is shown below.

# Copyright (c) 2020 Graphcore Ltd. All rights reserved.
import popart

import torch.onnx
import torchvision

input_ = torch.randn(4, 3, 224, 224)
model = torchvision.models.alexnet(pretrained=False)

output_name = "output"

torch.onnx.export(model, input_, "alexnet.onnx", output_names=[output_name])

# Create a runtime environment
anchors = {output_name: popart.AnchorReturnType("All")}
dataFlow = popart.DataFlow(100, anchors)

# Append an Nll loss operation to the model
builder = popart.Builder("alexnet.onnx")
labels = builder.addInputTensor("INT32", [4])
nlll = builder.aiGraphcore.nllloss([output_name, labels])

optimizer = popart.ConstSGD(0.001)

# Run session on CPU
device = popart.DeviceManager().createCpuDevice()
session = popart.TrainingSession(builder.getModelProto(),
                                 deviceInfo=device,
                                 dataFlow=dataFlow,
                                 loss=nlll,
                                 optimizer=optimizer)

In this example, an Nll loss node is appended to the end of the graph with the builder, and the TrainingSession uses this loss, together with the ConstSGD optimiser, to update the parameters of the network during training.

2.2. Session control options

The userOptions parameter passes options to the session. The available options are listed in the PopART C++ API Reference. As well as options that control specific features of the PopART session, there are some that allow you to pass options to the underlying Poplar functions, as sketched after this list:

  • engineOptions passes options to the Poplar Engine object created to run the graph.

  • convolutionOptions passes options to the PopLibs convolution functions.

  • reportOptions controls the instrumentation and generation of profiling information.
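
A minimal sketch of passing these options through a SessionOptions object is shown below; the option keys and values are illustrative only, not a recommended configuration.

opts = popart.SessionOptions()
opts.engineOptions = {"debug.instrument": "true"}               # Poplar Engine option (illustrative)
opts.convolutionOptions = {"availableMemoryProportion": "0.6"}  # PopLibs convolution option (illustrative)
opts.reportOptions = {"showExecutionSteps": "true"}             # Poplar report option (illustrative)

session = popart.InferenceSession("alexnet.onnx",
                                  dataFlow=dataFlow,
                                  deviceInfo=device,
                                  userOptions=opts)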

See Retrieving profiling reports for examples of using some of these options.

Full details of the Poplar options can be found in the Poplar and PopLibs API Reference.