17. Exporting precompiled models for TensorFlow Serving

TensorFlow applications compiled for the IPU can be exported to the standard TensorFlow SavedModel format and deployed to a TensorFlow Serving instance. The exported SavedModel contains the executable compiled for the IPU, and a TensorFlow graph with IPU embedded application runtime operations which allow you to run the executable as part of the TensorFlow graph. The exported graph may contain optional preprocessing and postprocessing parts that are executed on the CPU.

The Graphcore TensorFlow API for exporting models for TensorFlow Serving supports three different use cases:

  1. Models defined inside a function without using pipelining can be exported using the tensorflow.python.ipu.serving.export_single_step() function.

  2. Pipelined models defined as a list of functions can be exported using the tensorflow.python.ipu.serving.export_pipeline() function.

  3. Keras models can be exported using tensorflow.python.ipu.serving.export_keras() or the model’s export_for_ipu_serving() method. Both ways are functionally identical and support both pipelined and non-pipelined models.

General notes about using the Graphcore TensorFlow API for exporting models for TensorFlow Serving:

  1. Since the exported SavedModel contains custom IPU embedded application runtime operations, it can be used only with the Graphcore distribution of TensorFlow Serving.

  2. The exported SavedModel cannot be loaded back into a TensorFlow script and used as a regular model because, in the export stage, the model is compiled into an IPU executable. The exported TensorFlow graph contains only IPU embedded application runtime operations and has no information about specific layers, and so on.

  3. TensorFlow and TensorFlow Serving versions must always match. This means that you must use the same version of TensorFlow Serving as the version of TensorFlow that was used to export the model. Moreover, the Poplar versions must also match between the TensorFlow Serving instance and the version of TensorFlow that was used to export the model.
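For example, you can record the TensorFlow version used to export the model (and therefore the required TensorFlow Serving version) with:

$ python -c "import tensorflow as tf; print(tf.__version__)"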

17.1. Exporting non-pipelined models defined inside a function

Exporting the forward pass of a non-pipelined model can be done with the tensorflow.python.ipu.serving.export_single_step() function. A function that defines the forward pass of the model is required as the first argument. Under the hood, export_single_step() wraps that function in a while loop optimized for the IPU, with the iterations parameter denoting the number of loop iterations. You can use this parameter to tweak the model's latency; its optimal value is use-case specific. Additionally, export_single_step() adds the infeed and outfeed queues for you, so you do not have to manage them. The model is then compiled into an executable and included as an asset in the SavedModel stored at the export_dir location.

The export_single_step() function also allows you to pass preprocessing_step and postprocessing_step functions, which will be included in the SavedModel graph and executed on the CPU on the server side. If all preprocessing and postprocessing operations are available on the IPU, the preprocessing_step and postprocessing_step functions should instead be called inside the predict_step function; their bodies will then be compiled together with the inference model.

To export such a model, the predict_step function's input signature has to be defined. This can be accomplished in one of three ways:

  1. The predict_step function can be decorated with the @tf.function decorator with the input_signature argument set.

  2. The signature can be passed directly to the export function using the predict_step_signature argument, as in the examples below.

  3. An example dataset can be passed to the export function using the input_dataset argument; the signature is then inferred from it.

All of the above methods are functionally equivalent and can be used interchangeably based on what you find more convenient; a minimal sketch of all three follows.
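The sketch below illustrates the three alternatives; only one of them is needed for any given export, and the export calls for options 2 and 3 are shown as comments since the full export flow is covered in the examples below:

import numpy as np
import tensorflow as tf

input_spec = [tf.TensorSpec(shape=(4,), dtype=np.float32)]

# Option 1: attach the signature to the function itself.
@tf.function(input_signature=input_spec)
def predict_step(x):
  return x * 2

# Option 2: pass the signature to the export call:
# serving.export_single_step(predict_step, export_dir, iterations,
#                            predict_step_signature=tuple(input_spec))

# Option 3: pass an example dataset and let the signature be inferred:
# dataset = tf.data.Dataset.from_tensor_slices(
#     np.zeros((8, 4), dtype=np.float32))
# serving.export_single_step(predict_step, export_dir, iterations,
#                            input_dataset=dataset)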

17.1.1. Example of exporting a non-pipelined model defined inside a function

This example exports a very simple model with an embedded IPU program that doubles the input tensor.

import os
import shutil

import numpy as np

import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/001'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The function to export.
@tf.function
def predict_step(x):
  # Double the input - replace this with your application body.
  result = x * 2
  return result


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the `predict_step` function signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Export as a SavedModel.
iterations = 10

runtime_func = serving.export_single_step(predict_step, saved_model_directory,
                                          iterations, predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the even numbers from 0 to 18.
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  for i in range(iterations):
    input_data = np.ones(input_shape, dtype=np.float32) * i
    print(runtime_func(input_data))

17.1.2. Example of exporting a non-pipelined model defined inside a function with additional preprocessing and postprocessing steps

This example exports a very simple model with an embedded IPU program, which doubles the input tensor. The model also performs a preprocessing step (on the IPU) to compute the absolute value of the input and a postprocessing step (on the IPU) to reduce the output.

import os
import shutil

import numpy as np

import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/003'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The preprocessing step is performed fully on the IPU.
def preprocessing_step(x):
  return tf.abs(x)


# The postprocessing step is performed fully on the IPU.
def postprocessing_step(x):
  return tf.reduce_sum(x)


def application_body(x):
  # Double the input - replace this with your application body.
  return x * 2


# The function to export.
@tf.function
def predict_step(x):
  # Preprocessing and postprocessing will be compiled and exported
  # together with the application body.
  x = preprocessing_step(x)
  x = application_body(x)
  return postprocessing_step(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the `predict_step` function signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Export as a SavedModel.
iterations = 10

runtime_func = serving.export_single_step(predict_step, saved_model_directory,
                                          iterations, predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print multiples of 8, from 0 to 72.
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  for i in range(iterations):
    input_data = np.ones(input_shape, dtype=np.float32) * (-1.0 * i)
    print(runtime_func(input_data))

This example exports a very simple model with an embedded IPU program, which doubles the input tensor. The model also performs a preprocessing step (on the CPU) to convert string tensors to floats and a postprocessing step (on the CPU) to compute the absolute value of the outputs.

import os
import shutil

import numpy as np

import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/004'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The preprocessing step is performed fully on the CPU.
@tf.function
def preprocessing_step(x):
  transform_fn = lambda input: tf.constant(
      1.0) if input == "graphcore" else tf.random.uniform(shape=tuple())

  return tf.stack([transform_fn(elem) for elem in tf.unstack(x)])


# The function to export.
@tf.function
def predict_step(x):
  # Double the input - replace this with your application body.
  return x * 2


# The postprocessing step is performed fully on the CPU.
@tf.function
def postprocessing_step(x):
  return tf.abs(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (6,)
# Prepare the `predict_step` function signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Prepare the `preprocessing_step` function signature.
preprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                              dtype=tf.string),)
# Prepare the `postprocessing_step` function signature.
postprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                               dtype=np.float32),)

# Export as a SavedModel.
iterations = 10

runtime_func = serving.export_single_step(
    predict_step,
    saved_model_directory,
    iterations,
    predict_step_signature,
    preprocessing_step=preprocessing_step,
    preprocessing_step_signature=preprocessing_step_signature,
    postprocessing_step=postprocessing_step,
    postprocessing_step_signature=postprocessing_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# Entries matching "graphcore" map to 1.0 and should print 2.0 after
# doubling; the remaining entries are random values in the range [0, 2).
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  print(
      runtime_func(
          tf.constant(
              ["graphcore", "red", "blue", "yellow", "graphcore", "purple"],
              dtype=tf.string)))
  print(
      runtime_func(
          tf.constant([
              "apple", "banana", "graphcore", "orange", "pineapple",
              "graphcore"
          ],
                      dtype=tf.string)))

17.2. Exporting pipelined models defined as a list of functions

Exporting the forward pass of a pipelined model can be accomplished with the tensorflow.python.ipu.serving.export_pipeline() function.

The use of that function is very similar to the creation of a pipeline op using the tensorflow.python.ipu.pipelining_ops.pipeline() function. You have to provide a list of functions that represent the pipeline’s computational stages.

The tensorflow.python.ipu.serving.export_pipeline() function also has an iterations argument, which denotes the number of times each pipeline stage is executed before the pipeline is restarted. Again, you can use it to tweak the model's latency. This argument is sometimes called steps_per_execution, especially for Keras models.

Similarly to exporting non-pipelined models, to export a pipelined model the signature of the first computational stage has to be known. You can do this in the same three ways as for non-pipelined models. It is worth noting that for the first option, passing the input signature to the @tf.function decorator, you only need to do that for the first computational stage, as sketched below.
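A minimal sketch of that first option; the export call is shown as a comment since the full export flow is covered in the examples below:

import numpy as np
import tensorflow as tf

# Only the first stage needs an input signature; the shapes of later
# stages are inferred from the outputs of the preceding stages.
@tf.function(input_signature=[tf.TensorSpec(shape=(4,), dtype=np.float32)])
def stage1(x):
  return x * 2


def stage2(x):
  return x + 3

# serving.export_pipeline([stage1, stage2], export_dir,
#                         iterations=10, device_mapping=[0, 1])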

17.2.1. Pipeline example

This example exports a simple pipelined IPU program that computes the function 2x + 3 on the input.

import os
import shutil

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/002'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The pipeline's stages to export.
def stage1(x):
  # Double the input - replace this with 1st stage body.
  output = x * 2
  return output


def stage2(x):
  # Add 3 to the input - replace this with 2nd stage body.
  output = x + 3
  return output


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 2
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Number of times each pipeline stage is executed.
iterations = 10

# Export as a SavedModel.
predict_step = [stage1, stage2]
runtime_func = serving.export_pipeline(
    predict_step,
    saved_model_directory,
    iterations=iterations,
    device_mapping=[0, 1],
    predict_step_signature=predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the odd numbers from 3 to 21.
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  for i in range(iterations):
    input_data = np.ones(input_shape, dtype=np.float32) * i
    print(runtime_func(input_data))

17.2.2. Pipeline example with preprocessing and postprocessing steps

This example exports a simple pipelined IPU program that computes the function 2x + 3 on the input. The model includes a preprocessing computational stage (on the IPU) which computes the absolute value of the input and a postprocessing stage (on the IPU) which reduces the output.

import os
import shutil

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/005'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The preprocessing stage is performed fully on the IPU.
def preprocessing_stage(x):
  return tf.abs(x)


# The pipeline's stages to export.
def stage1(x):
  # Double the input - replace this with 1st stage body.
  output = x * 2
  return output


def stage2(x):
  # Add 3 to the input - replace this with 2nd stage body.
  output = x + 3
  return output


# The postprocessing stage is performed fully on the IPU.
def postprocessing_stage(x):
  return tf.reduce_sum(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 4
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (4,)
# Prepare the input signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Number of times each pipeline stage is executed.
iterations = 8

# Export as a SavedModel.
predict_step = [preprocessing_stage, stage1, stage2, postprocessing_stage]
runtime_func = serving.export_pipeline(
    predict_step,
    saved_model_directory,
    iterations=iterations,
    device_mapping=[0, 1, 2, 3],
    predict_step_signature=predict_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# This should print the values 12, 20, 28, ..., 68.
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  for i in range(iterations):
    input_data = np.ones(input_shape, dtype=np.float32) * (-1.0 * i)
    print(runtime_func(input_data))

This example exports a simple pipelined IPU program that computes the function 2x+3 on the input tensor. The model also performs a preprocessing step (on the CPU) to convert string tensors to floats and a postprocessing step (on the CPU) to compute the absolute value of the outputs.

import os
import shutil

import numpy as np
import tensorflow as tf

from tensorflow.python.ipu import config
from tensorflow.python.ipu import ipu_strategy
from tensorflow.python.ipu import serving

# Directory where the SavedModel will be written.
saved_model_directory = './my_saved_model_ipu/006'
# Directory should be empty or should not exist.
if os.path.exists(saved_model_directory):
  shutil.rmtree(saved_model_directory)


# The preprocessing stage is performed fully on the CPU.
@tf.function
def preprocessing_step(x):
  transform_fn = lambda input: tf.constant(
      1.0) if input == "graphcore" else tf.random.uniform(shape=tuple())

  return tf.stack([transform_fn(elem) for elem in tf.unstack(x)])


# The pipeline's stages to export.
def stage1(x):
  # Double the input - replace this with 1st stage body.
  output = x * 2
  return output


def stage2(x):
  # Add 3 to the input - replace this with 2nd stage body.
  output = x + 3
  return output


# The postprocessing step is performed fully on the CPU.
@tf.function
def postprocessing_step(x):
  return tf.abs(x)


# Configure the IPU for compilation.
cfg = config.IPUConfig()
cfg.auto_select_ipus = 2
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

input_shape = (6,)
# Prepare the input signature.
predict_step_signature = (tf.TensorSpec(shape=input_shape, dtype=np.float32),)
# Prepare the `preprocessing_step` function signature.
preprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                              dtype=tf.string),)
# Prepare the `postprocessing_step` function signature.
postprocessing_step_signature = (tf.TensorSpec(shape=input_shape,
                                               dtype=np.float32),)

# Number of times each pipeline stage is executed.
iterations = 10

# Export as a SavedModel.
predict_step = [stage1, stage2]
runtime_func = serving.export_pipeline(
    predict_step,
    saved_model_directory,
    iterations=iterations,
    device_mapping=[0, 1],
    predict_step_signature=predict_step_signature,
    preprocessing_step=preprocessing_step,
    preprocessing_step_signature=preprocessing_step_signature,
    postprocessing_step=postprocessing_step,
    postprocessing_step_signature=postprocessing_step_signature)
print(f"SavedModel written to {saved_model_directory}")

# You can test the exported executable using the returned `runtime_func`.
# Entries matching "graphcore" map to 1.0 and should print 5.0 (2 * 1 + 3);
# the remaining entries are random values in the range [3, 5).
strategy = ipu_strategy.IPUStrategy()
with strategy.scope():
  print(
      runtime_func(
          tf.constant(
              ["graphcore", "red", "blue", "yellow", "graphcore", "purple"],
              dtype=tf.string)))
  print(
      runtime_func(
          tf.constant([
              "apple", "banana", "graphcore", "orange", "pineapple",
              "graphcore"
          ],
                      dtype=tf.string)))

17.3. Exporting Keras models

There are two ways of exporting Keras models for TensorFlow Serving, regardless of whether they are pipelined or not. Keras models can be exported using the tensorflow.python.ipu.serving.export_keras() function or the model's export_for_ipu_serving() method.

See Section 19, Keras with IPUs, for details and examples of exporting precompiled Keras models for TensorFlow Serving.
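For orientation, a minimal sketch of the method form follows; the batch_size argument and the use of compile(steps_per_execution=...) to set the iteration count are assumptions here, so refer to Section 19 for the exact API:

import tensorflow as tf
from tensorflow.python import ipu

# Configure the IPU for compilation.
cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.device_connection.enable_remote_buffers = True
cfg.device_connection.type = ipu.config.DeviceConnectionType.ON_DEMAND
cfg.configure_ipu_system()

strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(4,))])
  # `steps_per_execution` plays the role of the `iterations` argument.
  model.compile(steps_per_execution=10)
  model.export_for_ipu_serving('./my_keras_saved_model_ipu/001', batch_size=1)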

17.4. Running the model in TensorFlow Serving

To test the exported SavedModel you can start a TensorFlow Serving instance and point it at the model's location. Graphcore's distribution of TensorFlow Serving can be run directly on the host system:

$ tensorflow_model_server --rest_api_port=8501 --model_name=my_model \
      --model_base_path="$(pwd)/my_saved_model_ipu"

You can then start sending inference requests, for example:

$ curl -d '{"instances": [1.0, 2.0, 5.0, 7.0]}'   \
    -X POST http://localhost:8501/v1/models/my_model:predict
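The same request can be sent from Python, for example with the third-party requests package (installed separately); the expected output assumes the doubling model from Section 17.1.1 is being served:

import json

import requests

# REST endpoint exposed by the TensorFlow Serving instance started above.
url = 'http://localhost:8501/v1/models/my_model:predict'
payload = {"instances": [1.0, 2.0, 5.0, 7.0]}

response = requests.post(url, data=json.dumps(payload))
# For the doubling model, this should print the inputs multiplied by 2.
print(response.json())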

Graphcore does not distribute the TensorFlow Serving API package. If you want to use it, you need to install it from the official distribution using pip.
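For example, choosing a version that matches the TensorFlow version used to export the model:

$ pip install tensorflow-serving-api==<tensorflow-version>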