17. Exporting precompiled models for TensorFlow Serving

TensorFlow applications compiled for the IPU can be exported to the standard TensorFlow SavedModel format and deployed to a TensorFlow Serving instance. The exported SavedModel contains the executable compiled for the IPU and a TensorFlow graph with IPU embedded application runtime operations, which allow the executable to be run as part of the TensorFlow graph. The exported graph may also contain optional preprocessing and postprocessing parts that are executed on the CPU.

The Graphcore TensorFlow API for exporting models for TensorFlow Serving supports two different use cases:

  1. Models defined inside a function without using pipelining can be exported using the tensorflow.python.ipu.serving.export_single_step() function.

  2. Pipelined models defined as a list of functions can be exported using the tensorflow.python.ipu.serving.export_pipeline() function.

General notes about using the Graphcore TensorFlow API for exporting models with TensorFlow Serving:

  1. Since the exported SavedModel contains custom IPU embedded application runtime operations, it can be used only with the Graphcore distribution of TensorFlow Serving.

  2. The exported SavedModel cannot be loaded back into a TensorFlow script and used as a regular model because, in the export stage, the model is compiled into an IPU executable. The exported TensorFlow graph contains only IPU embedded application runtime operations and has no information about specific layers, and so on.

  3. TensorFlow and TensorFlow Serving versions must always match. This means that you must use the same version of TensorFlow Serving as the version of TensorFlow that was used to export the model. Moreover, the Poplar version used by the TensorFlow Serving instance must match the Poplar version of the TensorFlow installation that was used to export the model.
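
For example, you can check that the versions line up before deploying a model. This assumes the Graphcore tensorflow_model_server binary is on your PATH; --version is a standard TensorFlow Serving flag:

$ python -c "import tensorflow as tf; print(tf.__version__)"
$ tensorflow_model_server --version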

17.1. Exporting non-pipelined models defined inside a function

Exporting the forward pass of a non-pipelined model can be done with the tensorflow.python.ipu.serving.export_single_step() function. A function that defines the forward pass of the model is required as the first argument. Under the hood, export_single_step() wraps that function in a while loop optimized for the IPU, with the iterations parameter denoting the number of loop iterations. You can use this parameter to tweak the model's latency; its optimal value is use-case specific. The function also adds the infeed and outfeed queues, so you do not have to add them yourself. The model is then compiled into an executable and included as an asset in the SavedModel stored at the export_dir location. The export_single_step() function also allows you to pass preprocessing_step and postprocessing_step functions, which are included in the SavedModel graph and executed on the CPU on the server side. If all preprocessing and postprocessing operations are available on the IPU, the preprocessing_step and postprocessing_step functions should instead be called inside the predict_step function; their bodies are then compiled together with the inference model.

To export such a model, the predict_step function's input signature has to be defined. This can be accomplished in one of three ways:

  1. By decorating the predict_step function with the @tf.function decorator with the input_signature argument set.

  2. By passing the predict_step_signature argument directly to the export_single_step() function, as done in the examples below.

  3. By passing the input_dataset argument to the export_single_step() function, in which case the signature is inferred from the elements of the dataset.

All of the above methods are functionally equivalent and can be used interchangeably based on what you find more convenient.
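
For instance, the first two approaches look like this (a minimal sketch; the shape, dtype and iteration count are illustrative):

# Option 1: set the input signature on the @tf.function decorator.
@tf.function(input_signature=[tf.TensorSpec(shape=(4,), dtype=tf.float32)])
def predict_step(x):
  return x * 2

serving.export_single_step(predict_step, saved_model_directory, iterations=16)


# Option 2: pass the signature via the `predict_step_signature` argument.
def predict_step(x):
  return x * 2

serving.export_single_step(
    predict_step,
    saved_model_directory,
    iterations=16,
    predict_step_signature=(tf.TensorSpec(shape=(4,), dtype=tf.float32),))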

You can also specify a variable_initializer function that initializes all the variables in the predict_step graph. This function takes a tf.Session instance as its only argument. The example below shows how it can be used to restore the values of variables from a checkpoint.

# Assumes `import tensorflow as tf` and `from tensorflow.python import ipu`.
def variable_initializer(session):
  # The saver captures all variables present in the graph at this point.
  saver = tf.train.Saver()
  # Make sure variable initialization runs on the CPU.
  ipu.utils.move_variable_initialization_to_cpu()
  init = tf.global_variables_initializer()
  session.run(init)
  # Overwrite the freshly initialized values with those from the checkpoint.
  saver.restore(session, 'path/to/checkpoint')
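
The initializer is then passed to the export function through the variable_initializer argument, for example:

runtime_func = serving.export_single_step(
    predict_step,
    saved_model_directory,
    iterations=16,
    predict_step_signature=predict_step_signature,
    variable_initializer=variable_initializer)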

17.1.1. Example of exporting a non-pipelined model defined inside a function

This example exports a very simple model with an embedded IPU program that doubles the input tensor.

import os
import sys

import numpy as np

import tensorflow as tf
from tensorflow.python.ipu import config
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/001'

if os.path.exists(saved_model_directory):
  sys.exit(f"Directory '{saved_model_directory}' exists! Please delete it "
           "before running the example.")


# The function to export.
@tf.function
def predict_step(x):
  # Double the input - replace this with your application body.
  result = x * 2
  return result


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the `predict_step` function signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)

# Export as a SavedModel.
iters = 16
runtime_func = serving.export_single_step(
    predict_step,
    saved_model_directory,
    iterations=iters,
    predict_step_signature=predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the even numbers 0 to 30.
input_placeholder = tf.placeholder(dtype=np.float32, shape=input_shape)
result_op = runtime_func(input_placeholder)

with tf.Session() as sess:
  for i in range(iters):
    input_data = np.ones(input_shape, dtype=np.float32) * i
    print(sess.run(result_op, {input_placeholder: input_data}))

17.1.2. Example of exporting a non-pipelined model defined inside a function with additional preprocessing and postprocessing steps

This example exports a very simple model with an embedded IPU program that doubles the input tensor. The model also performs a preprocessing step (on the IPU) to compute the absolute value of the input and a postprocessing step (on the IPU) to reduce the output.

import os
import sys

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/005'

if os.path.exists(saved_model_directory):
  sys.exit(f"Directory '{saved_model_directory}' exists! Please delete it "
           "before running the example.")


# The preprocessing step is performed fully on the IPU.
def preprocessing_step(x):
  return tf.abs(x)


# The postprocessing step is performed fully on the IPU.
def postprocessing_step(x):
  return tf.reduce_sum(x)


def application_body(x):
  # Double the input - replace this with your application body.
  return x * 2


# The function to export.
def predict_step(x):
  # Preprocessing and postprocessing will be compiled and exported together
  # with the application body.
  x = preprocessing_step(x)
  x = application_body(x)
  return postprocessing_step(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the `predict_step` function signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)

# Export as a SavedModel.
iters = 10
runtime_func = serving.export_single_step(
    predict_step,
    saved_model_directory,
    iterations=iters,
    predict_step_signature=predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
input_placeholder = tf.placeholder(dtype=tf.float32, shape=input_shape)
result_op = runtime_func(input_placeholder)

with tf.Session() as sess:
  for i in range(iters):
    input_data = np.ones(input_shape, dtype=np.float32) * (-1.0 * i)
    print(sess.run(result_op, {input_placeholder: input_data}))

This example exports a very simple model with an embedded IPU program that doubles the input tensor. The model also performs a preprocessing step (on the CPU) to convert string tensors to floats and a postprocessing step (on the CPU) to compute the absolute value of the outputs.

import os
import sys

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/004'

if os.path.exists(saved_model_directory):
  sys.exit(f"Directory '{saved_model_directory}' exists! Please delete it "
           "before running the example.")


# The preprocessing step is performed fully on the CPU.
def preprocessing_step(x):
  def transform_fn(inp):
    is_gc = lambda: tf.constant(1.0)
    is_oth = lambda: tf.random.uniform(shape=[])
    condition = tf.equal(inp, tf.constant("Graphcore", dtype=tf.string))
    return tf.cond(condition, is_gc, is_oth)

  return tf.stack([transform_fn(elem) for elem in tf.unstack(x)])


# The postprocessing step is performed fully on the CPU.
def postprocessing_step(x):
  return tf.abs(x)


# The function to export.
def predict_step(x):
  # Double the input - replace this with your application body.
  return x * 2


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (6,)
# Prepare the `predict_step` function signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)

# Prepare the `preprocessing_step` function signature.
preprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                              dtype=tf.string),)

# Prepare the `postprocessing_step` function signature.
postprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                               dtype=np.float32),)
# Export as a SavedModel.
iters = 10
runtime_func = serving.export_single_step(
    predict_step,
    saved_model_directory,
    iterations=iters,
    predict_step_signature=predict_step_signature,
    preprocessing_step=preprocessing_step,
    preprocessing_step_signature=preprocessing_step_signature,
    postprocessing_step=postprocessing_step,
    postprocessing_step_signature=postprocessing_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
input_placeholder = tf.placeholder(dtype=tf.string, shape=input_shape)
result_op = runtime_func(input_placeholder)

with tf.Session() as sess:
  input_data = ["make", "AI", "breakthroughs", "with", "Graphcore", "IPUS"]
  print(sess.run(result_op, {input_placeholder: input_data}))

17.2. Exporting pipelined models defined as a list of functions

Exporting the forward pass of a pipelined model can be accomplished using the tensorflow.python.ipu.serving.export_pipeline() function.

Using this function is very similar to creating a pipeline op with the tensorflow.python.ipu.pipelining_ops.pipeline() function: you have to provide a list of functions that represent the pipeline's computational stages.

The tensorflow.python.ipu.serving.export_pipeline() function also has an iterations argument, which denotes the number of times each pipeline stage is executed before the pipeline is restarted. Again, you can use iterations to tweak the model's latency; its optimal value is use-case specific. This argument is sometimes called steps_per_execution, especially for Keras models.

As with non-pipelined models, to export a pipelined model the signature of the first computational stage has to be known. You can do this using the same methods as for non-pipelined models (Section 17.1, Exporting non-pipelined models defined inside a function). Note that for the first option, passing the input signature to the @tf.function decorator, you only need to set it on the first computational stage.
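
For example, with the decorator approach only the first stage carries the input signature (a minimal sketch; shapes, device mapping and the iteration count are illustrative):

@tf.function(input_signature=[tf.TensorSpec(shape=(4,), dtype=tf.float32)])
def stage1(x):
  return x * 2


@tf.function
def stage2(x):
  return x + 3


runtime_func = serving.export_pipeline(
    [stage1, stage2],
    saved_model_directory,
    iterations=16,
    device_mapping=[0, 1])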

17.2.1. Pipeline example

This example exports a simple pipelined IPU program that computes the function 2x+3 on the input.

import os
import sys

import numpy as np
import tensorflow as tf
from tensorflow.python.ipu import config
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/002'

if os.path.exists(saved_model_directory):
  sys.exit(f"Directory '{saved_model_directory}' exists! Please delete it "
           "before running the example.")


# The pipeline stages to export.
@tf.function
def stage1(x):
  # Double the input - replace this with the 1st stage body.
  output = x * 2
  return output


@tf.function
def stage2(x):
  # Add 3 to the input - replace this with the 2nd stage body.
  output = x + 3
  return output


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 2
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)

# Export as a SavedModel.
iters = 16
predict_step = [stage1, stage2]
runtime_func = serving.export_pipeline(
    predict_step,
    saved_model_directory,
    iterations=iters,
    device_mapping=[0, 1],
    predict_step_signature=predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the odd numbers from 3 to 33.
input_placeholder = tf.placeholder(dtype=np.float32, shape=input_shape)
result_op = runtime_func(input_placeholder)

with tf.Session() as sess:
  for i in range(iters):
    input_data = np.ones(input_shape, dtype=np.float32) * i
    print(sess.run(result_op, {input_placeholder: input_data}))

17.2.2. Pipeline example with preprocessing and postprocessing steps

This example exports a simple pipelined IPU program that computes the function 2x+3 on the input. The pipeline includes a preprocessing stage (on the IPU) that computes the absolute value of the input and a postprocessing stage (on the IPU) that reduces the output.

import os
import sys

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/007'

if os.path.exists(saved_model_directory):
  sys.exit(f"Directory '{saved_model_directory}' exists! Please delete it "
           "before running the example.")


# The preprocessing stage is performed fully on the IPU.
def preprocessing_stage(x):
  return tf.abs(x)


# The pipeline stages to export.
def stage1(x):
  # Double the input - replace this with the 1st stage body.
  output = x * 2
  return output


def stage2(x):
  # Add 3 to the input - replace this with the 2nd stage body.
  output = x + 3
  return output


# The postprocessing stage is performed fully on the IPU.
def postprocessing_stage(x):
  return tf.reduce_sum(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 4
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)

# Export as a SavedModel.
iters = 8
predict_step = [preprocessing_stage, stage1, stage2, postprocessing_stage]
runtime_func = serving.export_pipeline(
    predict_step,
    saved_model_directory,
    iterations=iters,
    device_mapping=[0, 1, 2, 3],
    predict_step_signature=predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
input_placeholder = tf.placeholder(dtype=tf.float32, shape=input_shape)
result_op = runtime_func(input_placeholder)

with tf.Session() as sess:
  for i in range(iters):
    input_data = np.ones(input_shape, dtype=np.float32) * (-1.0 * i)
    print(sess.run(result_op, {input_placeholder: input_data}))

This example exports a simple pipelined IPU program that computes the function 2x+3 on the input tensor. The model also performs a preprocessing step (on the CPU) and a postprocessing step (on the CPU), both of which compute the absolute value of their inputs.

import os
import sys

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/006'

if os.path.exists(saved_model_directory):
  sys.exit(f"Directory '{saved_model_directory}' exists! Please delete it "
           "before running the example.")


# The preprocessing step is performed fully on the CPU.
def preprocessing_step(x):
  return tf.abs(x)


# The pipeline stages to export.
def stage1(x):
  # Double the input - replace this with the 1st stage body.
  output = x * 2
  return output


def stage2(x):
  # Add 3 to the input - replace this with the 2nd stage body.
  output = x + 3
  return output


# The postprocessing step is performed fully on the CPU.
def postprocessing_step(x):
  return tf.abs(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 2
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Prepare the `preprocessing_step` function signature.
preprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                              dtype=tf.float32),)
# Prepare the `postprocessing_step` function signature.
postprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                               dtype=np.float32),)

# Export as a SavedModel.
iters = 10
predict_step = [stage1, stage2]
runtime_func = serving.export_pipeline(
    predict_step,
    saved_model_directory,
    iterations=iters,
    device_mapping=[0, 1],
    predict_step_signature=predict_step_signature,
    preprocessing_step=preprocessing_step,
    preprocessing_step_signature=preprocessing_step_signature,
    postprocessing_step=postprocessing_step,
    postprocessing_step_signature=postprocessing_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
input_placeholder = tf.placeholder(dtype=tf.float32, shape=input_shape)
result_op = runtime_func(input_placeholder)

with tf.Session() as sess:
  for i in range(iters):
    input_data = np.ones(input_shape, dtype=np.float32) * (-1.0 * i)
    print(sess.run(result_op, {input_placeholder: input_data}))

17.3. Running the model in TensorFlow Serving

To test the exported SavedModel, you can start a TensorFlow Serving instance and point it at the model's location. The Graphcore distribution of TensorFlow Serving can be run directly on the host system:

$ tensorflow_model_server --rest_api_port=8501 --model_name=my_model \
      --model_base_path="$(pwd)/my_saved_model_ipu"

You can then start sending inference requests, for example:

$ curl -d '{"instances": [1.0, 2.0, 5.0, 7.0]}'   \
    -X POST http://localhost:8501/v1/models/my_model:predict
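
Equivalently, you can send the request from Python, for example with the third-party requests package (a sketch; the URL matches the server started above):

import json

import requests

# Build the same JSON payload as in the curl example above.
data = json.dumps({"instances": [1.0, 2.0, 5.0, 7.0]})
response = requests.post(
    'http://localhost:8501/v1/models/my_model:predict', data=data)
print(response.json())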

Graphcore does not distribute the TensorFlow Serving API package. If you want to use it, you need to install it from the official distribution using pip.
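
For example, using the official package from PyPI (choose the version that matches your TensorFlow Serving instance):

$ pip install tensorflow-serving-api==<version>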