17. Exporting precompiled models for TensorFlow Serving

TensorFlow applications compiled for the IPU can be exported to the standard TensorFlow SavedModel format and deployed to a TensorFlow Serving instance. The exported SavedModel contains the executable compiled for the IPU and a simple TensorFlow graph with IPU embedded application runtime operations that allow you to run the executable as part of the TensorFlow graph.

The Graphcore TensorFlow API for exporting models for TensorFlow Serving supports three different use cases:

  1. Models defined inside a function without using pipelining can be exported using the tensorflow.python.ipu.serving.export_single_step() function.

  2. Pipelined models defined as a list of functions can be exported using the tensorflow.python.ipu.serving.export_pipeline() function.

  3. Keras models can be exported using tensorflow.python.ipu.serving.export_keras() or the model’s export_for_ipu_serving() method. Both ways are functionally identical and support both pipelined and non-pipelined models.

General notes about using the Graphcore TensorFlow API for exporting models with TensorFlow Serving:

  1. Since the exported SavedModel contains custom IPU embedded application runtime operations, it can be used only with the Graphcore distribution of TensorFlow Serving.

  2. The exported SavedModel cannot be loaded back into a TensorFlow script and used as a regular model because, in the export stage, the model is compiled into an IPU executable. The exported TensorFlow graph contains only IPU embedded application runtime operations and retains no information about the model's individual layers.

  3. The TensorFlow and TensorFlow Serving versions must always match: you have to use the same version of TensorFlow Serving as the version of TensorFlow that was used to export the model. Moreover, the Poplar version used by the TensorFlow Serving instance must match the Poplar version used by the version of TensorFlow that exported the model.

17.1. Exporting non-pipelined models defined inside a function

Exporting the forward pass of a non-pipelined model can be done with the tensorflow.python.ipu.serving.export_single_step() function. A function that defines the forward pass of the model is required as the first argument. Under the hood, export_single_step() wraps that function in a while loop optimized for the IPU, with the iterations parameter denoting the number of loop iterations. You can use this parameter to tune the model's latency; its optimal value is use-case specific. Additionally, the function adds the infeed and outfeed queues for you, so you do not have to take care of them. The model is then compiled into an executable and included as an asset in the SavedModel stored at the export_dir location.

To export such a model, the function's input signature has to be defined. This can be accomplished in one of three ways (an illustrative sketch of the first option follows this list):

  1. You can set the input_signature argument of the @tf.function decorator on the function that defines the model.

  2. You can pass the input signature directly to the export_single_step() function using its input_signature argument.

  3. You can pass a dataset to the export_single_step() function using its input_dataset argument; the input signature is then inferred from the dataset.

All of the above methods are functionally equivalent and can be used interchangeably based on what you find more convenient.
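For instance, the first option could look like the following sketch (the shape and function body are illustrative, not required by the API):

import numpy as np
import tensorflow as tf

# The input signature is fixed on the @tf.function decorator, so no
# input_signature argument needs to be passed to export_single_step().
@tf.function(input_signature=[tf.TensorSpec(shape=(4,), dtype=np.float32)])
def my_net(x):
  return x * 2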

17.1.1. Example of exporting a non-pipelined model defined inside a function

This example exports a very simple model whose embedded IPU program doubles the input tensor.

import os
import shutil

import numpy as np

import tensorflow as tf
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving
from tensorflow.python.ipu import config

# Directory where SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/001'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The function to export.
@tf.function
def my_net(x):
  # Double the input - replace this with application body.
  result = x * 2
  return result


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
input_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Number of loop iterations compiled into the executable.
iterations = 16

# Export as a SavedModel.
runtime_func = serving.export_single_step(my_net, saved_model_directory,
                                          iterations, input_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the even numbers 0 to 30.
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  for i in range(iterations):
    input_data = np.ones(input_shape, dtype=np.float32) * i
    print(runtime_func(input_data))

17.2. Exporting pipelined models defined as a list of functions

Exporting the forward pass of a pipelined model can be accomplished with the tensorflow.python.ipu.serving.export_pipeline() function.

The use of that function is very similar to the creation of a pipeline op using the tensorflow.python.ipu.pipelining_ops.pipeline() function. You have to provide a list of functions that represent the pipeline’s computational stages.

The tensorflow.python.ipu.serving.export_pipeline() function also has an iterations argument. It denotes the number of times each pipeline stage is executed before the pipeline is restarted. Again, you can use it to tune the model's latency; this argument is sometimes called steps_per_execution, especially for Keras models.

Similarly to exporting non-pipelined models, to export a pipelined model the input signature of the first computational stage has to be known. You can do this in the same three ways as for non-pipelined models. It is worth noting that for the first option (passing the input signature to the @tf.function decorator) you only need to do this for the first computational stage, as sketched below.
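
For example, the first option for a two-stage pipeline could look like the following sketch (shapes and stage bodies are illustrative):

import numpy as np
import tensorflow as tf

# Only the first stage needs its input signature fixed on the decorator;
# the signatures of later stages are inferred from the preceding stages.
@tf.function(input_signature=[tf.TensorSpec(shape=(4,), dtype=np.float32)])
def stage1(x):
  return x * 2

def stage2(x):
  return x + 3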

17.2.1. Pipeline example

This example exports a simple pipelined IPU program that computes the function 2x + 3 on the input.

import os
import shutil

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving
from tensorflow.python.ipu import config

# Directory where SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/002'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The pipeline's stages to export.
def stage1(x):
  # Double the input - replace this with 1st stage body.
  output = x * 2
  return output


def stage2(x):
  # Add 3 to the input - replace this with 2nd stage body.
  output = x + 3
  return output


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 2
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
input_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Number of times each pipeline stage is executed.
iterations = 16

# Export as a SavedModel.
runtime_func = serving.export_pipeline([stage1, stage2],
                                       saved_model_directory,
                                       iterations=iterations,
                                       device_mapping=[0, 1],
                                       input_signature=input_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the odd numbers 3 to 33.
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  for i in range(iterations):
    input_data = np.ones(input_shape, dtype=np.float32) * i
    print(runtime_func(input_data))

17.3. Exporting Keras models

There are two ways of exporting Keras models for TensorFlow Serving, regardless of whether they are pipelined. Keras models can be exported using the tensorflow.python.ipu.serving.export_keras() function or the model's export_for_ipu_serving() method.

See Section 19, Keras with IPUs, for details and examples of exporting precompiled Keras models for TensorFlow Serving.
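
In outline, both routes take the target directory as an argument. The following is a minimal sketch, assuming model is a Keras model built and compiled inside an IPUStrategy scope and reusing the serving import from the earlier examples; see Section 19 for complete, runnable examples:

# Export via the model's method...
model.export_for_ipu_serving(saved_model_directory)
# ...or, equivalently, via the serving module.
serving.export_keras(model, saved_model_directory)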

17.4. Running the model in TensorFlow Serving

To test the exported SavedModel, you can start a TensorFlow Serving instance and point it at the model's location. The Graphcore distribution of TensorFlow Serving can be run directly on the host system:

$ tensorflow_model_server --rest_api_port=8501 --model_name=my_model \
      --model_base_path="$(pwd)/my_saved_model_ipu"

You can then send inference requests, for example:

$ curl -d '{"instances": [1.0, 2.0, 5.0, 7.0]}'   \
    -X POST http://localhost:8501/v1/models/my_model:predict
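
Equivalently, you can send the same request from Python, for example with the requests package (a sketch, assuming the server started above is running locally):

import json
import requests

# Send one prediction request to the REST endpoint and print the reply.
data = json.dumps({"instances": [1.0, 2.0, 5.0, 7.0]})
response = requests.post(
    'http://localhost:8501/v1/models/my_model:predict', data=data)
print(response.json())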

Graphcore does not distribute the TensorFlow Serving API package. If you want to use it, you need to install it from the official distribution using pip.
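
For example (the version placeholder must be replaced with the version matching your TensorFlow installation):

$ pip install tensorflow-serving-api==<TF version>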