3. Quick start for beginners
This section provides more detail on the steps described in the Quick start for experts section.
Complete any necessary setup to use your IPU system (see Section 1.1, IPU systems) before starting the following steps.
3.1. Enable the Poplar SDK
Note
We recommend that you use the latest version of the Poplar SDK.
On some systems you must explicitly enable the Poplar SDK before you can use PyTorch or TensorFlow for the IPU, or the Poplar Graph Programming Framework. On other systems, the SDK is enabled as part of the login process.
Table 3.1 defines whether you have to explicitly enable the SDK and where to find it.
System | Enable SDK? | SDK location
---|---|---
Pod system | Yes | The SDK is in the directory where you extracted the SDK tarball.
Graphcloud | Yes | where
Gcore Cloud | No | The SDK has been enabled as part of the login process.
To enable the Poplar SDK:
For SDK versions 2.6 and later, there is a single enable script that determines whether you are using Bash or Zsh and runs the appropriate scripts to enable both Poplar and PopTorch/PopART. Source the single script as follows:
$ source [path_to_SDK]/enable
where [path_to_SDK] is the location of the Poplar SDK on your system.
For SDK versions earlier than 2.6, there are only Bash scripts available and you have to source the Poplar and PopART scripts separately.
Note
You only have to source the PopART enable script if you are using PopTorch or PopART.
Source the scripts as follows:
$ source [path_to_SDK]/poplar-ubuntu_[os_ver]-[poplar_ver]+[build]/enable.sh
$ source [path_to_SDK]/popart-ubuntu_[os_ver]-[poplar_ver]+[build]/enable.sh
where [path_to_SDK] is the location of the Poplar SDK on your system, [os_ver] is the version of Ubuntu on your system, [poplar_ver] is the software version number of the Poplar SDK and [build] is the build information.
Note
You must source the Poplar enable script for each new shell. To do this on a more permanent basis, you can add this source command to your .bashrc (or .zshrc for SDK versions 2.6 and later).
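For example, you could append the source command to your shell startup file (shown here for Bash, using the same [path_to_SDK] placeholder as above):
$ echo 'source [path_to_SDK]/enable' >> ~/.bashrc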
If you attempt to run any Poplar software without having first sourced this script, you will get an error from the C++ compiler similar to the following (the exact message will depend on your code):
fatal error: 'poplar/Engine.hpp' file not found
If you try to source the script after it has already been sourced, then you will get an error similar to:
ERROR: A Poplar SDK has already been enabled.
Path of enabled Poplar SDK: /opt/gc/poplar_sdk-ubuntu_20_04-3.2.0-7cd8ade3cd/poplar-ubuntu_20_04-3.2.0-7cd8ade3cd
If this is not wanted then please start a new shell.
You can verify that Poplar has been successfully set up by running:
$ popc --version
This will display the version of the installed software.
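You can also check that the POPLAR_SDK_ENABLED environment variable has been set; it is defined when the SDK is enabled and is used later in this guide to locate the wheel files:
$ echo $POPLAR_SDK_ENABLED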
3.2. Create and enable a Python virtual environment
It is good practice to work in a different Python virtual environment for each framework or even for each application. This section describes how you create and activate a Python virtual environment.
Note
You must activate the Python virtual environment before you can start using it.
The virtual environment must be created for the Python version you will be using. This cannot be changed after creation. Create a new Python virtual environment with:
$ virtualenv -p python3 [venv_name]
where [venv_name] is the location of the virtual environment.
Note
Make sure that the version of Python that is installed is compatible with the version of the Poplar SDK that you are using. See Supported tools in the Poplar SDK release notes for information about the supported operating systems and versions of tools.
To start using a virtual environment, activate it with:
$ source [venv_name]/bin/activate
where [venv_name] is the location of the virtual environment.
Now all subsequent installations will be local to that virtual environment.
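You can confirm that the virtual environment is active and see which Python interpreter it uses with, for example:
$ which python
$ python --version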
3.3. Install the TensorFlow 2 wheels and validate
To run TensorFlow 2 applications on an IPU, you have to install the Python wheel files for the Graphcore ports of TensorFlow 2 and Keras, as well as the IPU TensorFlow 2 Addons wheel.
3.3.1. TensorFlow 2 wheel
There are two TensorFlow 2 wheels included in the Poplar SDK, one for AMD processors and one for Intel processors. Check which processor is used on your system by running:
$ lscpu | grep name
The wheel file has a name of the form:
tensorflow-[ver]+[platform].whl
where [ver] is the version of the Graphcore port of TensorFlow 2 and [platform] defines the server details (processor and operating system) for the TensorFlow build. An example of the TensorFlow 2 wheel file for an AMD processor for Poplar SDK 3.0 is:
tensorflow-2.6.3+gc3.0.0+236842+d084e493702+amd_znver1-cp38-cp38-linux_x86_64.whl
Install the Graphcore TensorFlow 2 distribution for an AMD processor with:
$ python -m pip install ${POPLAR_SDK_ENABLED?}/../tensorflow-2.*+amd_*.whl
Install the Graphcore TensorFlow 2 distribution for an Intel processor with:
$ python -m pip install ${POPLAR_SDK_ENABLED?}/../tensorflow-2.*+intel_*.whl
POPLAR_SDK_ENABLED is the location of the Poplar SDK defined when the SDK was enabled. The ? ensures that an error message is displayed if Poplar has not been enabled.
To confirm that TensorFlow 2 has been installed, you can use:
$ pip list | grep tensorflow
For the example wheel file, the output will be:
tensorflow 2.6.3
You can also confirm that the correct tensorflow wheel has been installed by attempting to import tensorflow.python.ipu in Python, for example:
$ python3 -c "from tensorflow.python import ipu"
If you get an “illegal instruction” or similar error, then you may have installed the wrong version of TensorFlow for your processor.
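As a further check, a short script such as the following configures a single IPU and lists the IPU devices visible to TensorFlow. This is a minimal sketch that assumes the Poplar SDK is enabled and an IPU is attached to your host:
import tensorflow as tf
from tensorflow.python import ipu

# Configure the IPU system to use one automatically selected IPU.
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()

# List the logical IPU devices that TensorFlow can now see.
print(tf.config.list_logical_devices("IPU"))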
3.3.2. Keras wheel
In the TensorFlow 2.6 release, Keras was moved into a separate pip package. In the Poplar SDK 2.6 release, which includes the Graphcore distribution of TensorFlow 2.6, there is a Graphcore distribution of Keras which includes IPU-specific extensions.
Note
The Keras wheel must be installed after the TensorFlow wheel, but before the TensorFlow Addons wheel.
The Keras wheel file has a name of the form:
keras-[tf-ver]*.whl
where [tf-ver] is the TensorFlow 2 version. An example of the Keras wheel file for TensorFlow 2.6 for the IPU for Poplar SDK 3.0 is:
keras-2.6.0+gc3.0.0+236851+1744557f-py2.py3-none-any.whl
Install the Keras wheel using the following command:
$ python -m pip install --force-reinstall --no-deps ${POPLAR_SDK_ENABLED?}/../keras-2.*.whl
POPLAR_SDK_ENABLED is the location of the Poplar SDK defined when the SDK was enabled. The ? ensures that an error message is displayed if Poplar has not been enabled.
You can confirm that the keras package has been installed by importing it in Python, for example:
$ python3 -c "import keras"
If you get an “illegal instruction” or similar error, then try to install the Keras wheel again.
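You can also check that the installed Keras version matches the version in the wheel file name (2.6.0 in the example above):
$ python3 -c "import keras; print(keras.__version__)"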
3.3.3. TensorFlow 2 Addons wheel
IPU TensorFlow Addons is a collection of add-ons created for the Graphcore port of TensorFlow. These include layers and optimizers for Keras, as well as legacy TensorFlow layers. For more information, refer to the section on IPU TensorFlow Addons in the TensorFlow 2 user guide.
Note
The IPU TensorFlow 2 Addons wheel file is only available in Poplar SDK 2.4 and later.
There are separate Addons wheel files for TensorFlow 1 and TensorFlow 2.
The wheel file has a name of the form:
ipu_tensorflow_addons-[ver]+X+X+X-X-X-X.whl
where [ver] is the version of the Graphcore port of TensorFlow 2. An example of the Addons wheel file for TensorFlow 2.6 for the IPU for Poplar SDK 3.0 is:
ipu_tensorflow_addons-2.6.3+gc3.0.0+236851+2e46901-py3-none-any.whl
Install the IPU TensorFlow 2 Addons wheel using the following command:
$ python -m pip install ${POPLAR_SDK_ENABLED?}/../ipu_tensorflow_addons-2.*.whl
POPLAR_SDK_ENABLED is the location of the Poplar SDK defined when the SDK was enabled. The ? ensures that an error message is displayed if Poplar has not been enabled.
You can confirm that the Addons module has been installed correctly by importing it in Python. For example:
$ python3 -c "from ipu_tensorflow_addons.keras import layers"
If you get an “illegal instruction” or similar error, confirm that you have installed the Addons wheel file for TensorFlow 2.
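As with the other wheels, you can also list the installed package with pip to confirm that it is present in your environment, for example:
$ pip list | grep ipu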
3.4. Clone the Graphcore examples
You may need to clone the Graphcore examples repository on some systems as detailed in Table 3.2.
If you don’t need to clone the examples repository, then go straight to Section 3.5, Define environment variable.
System | Clone repos? | Comment
---|---|---
Pod system | Yes | You can clone the tutorials and examples repos in any location.
Graphcloud | Yes | You can clone the tutorials and examples repos in any location.
Gcore Cloud | No | The tutorials and examples have already been cloned in
You can clone the examples repository into a location of your choice.
To clone the examples repository for the latest version of the Poplar SDK:
$ cd ~/[base_dir]
$ git clone https://github.com/graphcore/examples.git
where [base_dir] is a location of your choice. This will install the contents of the examples repository under ~/[base_dir]/examples. The tutorials are in ~/[base_dir]/examples/tutorials.
Note
If you are using a version of the Poplar SDK prior to version 3.2, then refer to Section A, Install examples and tutorials for older Poplar SDK versions for how to install examples and tutorials.
3.5. Define environment variable
In order to simplify running the tutorials, we define the environment variable POPLAR_TUTORIALS_DIR that points to the location of the cloned tutorials.
If you cloned the examples repository yourself (Pod systems and Graphcloud):
$ export POPLAR_TUTORIALS_DIR=~/[base_dir]/examples/tutorials
where [base_dir] is the location where you installed the Graphcore tutorials.
On Gcore Cloud systems, where the tutorials have already been cloned for you:
$ export POPLAR_TUTORIALS_DIR=~/graphcore/tutorials
3.6. Run the application
This section describes how to run a simple application, the MNIST example, using TensorFlow 2.
Install example requirements
You can now install the requirements that the model needs.
$ cd $POPLAR_TUTORIALS_DIR/simple_applications/tensorflow2/mnist/
$ pip install -r requirements.txt
Run example
You run the code with the command:
$ python3 mnist.py
The example has no command line options.
If the code has run successfully, you should see an output similar to that in Listing 3.1.
2022-01-10 12:20:09.746730: I tensorflow/compiler/plugin/poplar/driver/poplar_platform.cc:44] Poplar version: 2.3.0 (d9e4130346) Poplar package: 88f485e763
2022-01-10 12:20:11.195463: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-01-10 12:20:11.435997: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2022-01-10 12:20:11.436536: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2245780000 Hz
2022-01-10 12:20:12.922858: I tensorflow/compiler/plugin/poplar/driver/poplar_executor.cc:1714] Device /device:IPU:0 attached to IPU: 0
2022-01-10 12:20:13.609918: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/4
Compiling module a_inference_train_function_513__XlaMustCompile_true_config_proto___n_007_n_0...02_001_000__executor_type____.380:
[##################################################] 100% Compilation Finished [Elapsed: 00:00:15.4]
2022-01-10 12:20:29.517778: I tensorflow/compiler/jit/xla_compilation_cache.cc:347] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
2000/2000 [==============================] - 18s 9ms/step - loss: 0.9729
Epoch 2/4
2000/2000 [==============================] - 1s 533us/step - loss: 0.3478
Epoch 3/4
2000/2000 [==============================] - 1s 610us/step - loss: 0.2876
Epoch 4/4
2000/2000 [==============================] - 1s 595us/step - loss: 0.2545
You have run an application that demonstrates how to use the IPU to train a simple 2-layer, fully-connected model on the MNIST dataset using TensorFlow 2.
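To give an idea of what the example does, the following is a simplified sketch of the key IPU-specific pieces of such a program. It is not the actual mnist.py from the examples repository; the layer sizes, optimizer, batch size and step counts are illustrative assumptions.
import tensorflow as tf
from tensorflow import keras
from tensorflow.python import ipu

# Configure the IPU system to use a single, automatically selected IPU.
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()

# Build a dataset with a fixed batch size; the IPU requires fully defined
# shapes, so drop any partial final batch and repeat the data.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
           .shuffle(len(x_train))
           .batch(32, drop_remainder=True)
           .repeat())

# Create and train the model inside an IPUStrategy scope so that it is
# compiled for, and executed on, the IPU.
strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="sgd",
                  steps_per_execution=100)
    model.fit(dataset, epochs=4, steps_per_epoch=2000)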
3.7. Exit the virtual environment
When you are done, exit the Python virtual environment.
$ deactivate
3.8. Try out other applications
The examples repo contains other tutorials and applications you can try. See Section 4, Next steps for more information.