3. Getting started with V-IPU
After installing the
vipu tool, the next step is to establish connectivity
with the V-IPU controller. This assumes that the system administrator has created
a user account for you and allocated some IPUs to that user ID.
You will need the following information from the V-IPU administrator:
V-IPU controller host name or IP address
V-IPU controller port number
User ID and API access key
Run the following command to make the server report its version:
$ vipu --server-version --api-host 10.1.2.3 --api-port 9090 --api-user-id alice --api-access-key 685XK4uCzN
This example assumes a V-IPU controller running on host 10.1.2.3 with port 9090. You should use the details provided by your system administrator.
To avoid having to add options to each command, you can specify the server details in environment variables:
$ export VIPU_CLI_API_HOST=10.1.2.3
$ export VIPU_CLI_API_PORT=9090
$ export VIPU_CLI_API_USER_ID=alice
$ export VIPU_CLI_API_ACCESS_KEY=685XK4uCzN
$ vipu --server-version
Alternatively, you can add them to a configuration file:
$ cat ~/.vipu.hcl
api-host=10.1.2.3
api-port=9090
api-user-id=alice
api-access-key=685XK4uCzN
$ vipu --server-version
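If you prefer to create this file from the command line, the same settings can be written with a heredoc; a sketch using the example values from this guide (substitute the details from your administrator), with the file permissions tightened because it holds your access key:

```shell
# Write the V-IPU client settings shown in this guide to ~/.vipu.hcl.
# The host, port, user ID and access key below are the example values
# from this section; substitute the details from your administrator.
cat > ~/.vipu.hcl <<'EOF'
api-host=10.1.2.3
api-port=9090
api-user-id=alice
api-access-key=685XK4uCzN
EOF

# The file contains an API access key, so make it readable only by you.
chmod 600 ~/.vipu.hcl
```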
The next step is to allocate some IPUs to run your software on.
3.1. Creating a partition
This section explains how to create a usable partition on the IPU system. A partition defines a set of IPUs used for running end-user applications.
The simplest way to get started is to create a “reconfigurable” partition. This makes a set of IPUs available to users flexibly: as a number of single-IPU device IDs and, if the partition is larger than a single IPU, a set of multi-IPU device IDs as well.
You can do this with a command such as the following:
$ vipu create partition pt --size 4 --reconfigurable
This allocates four IPUs to a partition called “pt”.
You can see the IPUs that are now available using the
gc-info command that was
installed with the SDK:
$ gc-info -l
-+- Id: 0, type: [Fabric], PCI Domain:
-+- Id: 1, type: [Fabric], PCI Domain:
-+- Id: 2, type: [Fabric], PCI Domain:
-+- Id: 3, type: [Fabric], PCI Domain:
-+- Id: 4, type: [Multi IPU]
 |--- PCIe Id: , DNC Id: , PCI Domain:
 |--- PCIe Id: , DNC Id: , PCI Domain:
-+- Id: 5, type: [Multi IPU]
 |--- PCIe Id: , DNC Id: , PCI Domain:
 |--- PCIe Id: , DNC Id: , PCI Domain:
-+- Id: 6, type: [Multi IPU]
 |--- PCIe Id: , DNC Id: , PCI Domain:
 |--- PCIe Id: , DNC Id: , PCI Domain:
 |--- PCIe Id: , DNC Id: , PCI Domain:
 |--- PCIe Id: , DNC Id: , PCI Domain:
Here, the four individual IPUs have the IDs 0 to 3.
The multi-IPU devices are listed below that. A multi-IPU device always represents a number of IPUs which is a power of two. Here there are three multi-IPU devices:
ID 4 contains IPUs 0 and 1
ID 5 contains IPUs 2 and 3
ID 6 contains all four IPUs
Different Poplar programs can make use of these devices concurrently. While a Poplar program is attached to a device ID, that ID is removed from the list of available device IDs for as long as the program is running.
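The power-of-two grouping means that a reconfigurable partition of N IPUs (with N a power of two) exposes 2N − 1 device IDs in total: N single-IPU devices, plus N/2 pairs, N/4 quads, and so on up to one device spanning the whole partition. A minimal sketch of that count in plain shell arithmetic (an illustration only, not one of the vipu or gc tools):

```shell
# Count the device IDs exposed by a reconfigurable partition of N IPUs:
# N single-IPU devices, then N/2 two-IPU devices, N/4 four-IPU devices,
# and so on up to a single device spanning the whole partition.
N=4
total=0
group=1
while [ "$group" -le "$N" ]; do
    total=$(( total + N / group ))
    group=$(( group * 2 ))
done
echo "$total device IDs for a $N-IPU partition"
```

For N=4 this prints 7, matching the seven device IDs (0 to 6) in the listing above.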
When you create a partition, a configuration file is created in a directory on the host.
This file contains the information that Poplar needs to connect to the IPUs. Note that there
should only be one file in that directory, so you should delete the partition (with the
vipu remove partition partition-name command) before creating another one.
In a fully deployed system, you may want to define partitions containing sets of IPUs configured in specific ways.
See the V-IPU User Guide for full details of the V-IPU command-line software, allocating IPUs for your program, and running code on the IPU-POD.
3.2. Running a program on the IPU-POD
You can now run one of the example programs from the SDK. The program
adder_ipu.cpp builds a simple graph to add two vectors together and return the result.
Make a copy of the poplar/examples/adder directory in your working directory.
Compile the program:
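The exact compile line depends on your SDK setup; a sketch, assuming you have enabled the Poplar SDK so that the Poplar headers and the poplar library are on your compiler's search paths (the output name adder_ipu is simply the convention used in this section):

```shell
# Assumes the Poplar SDK environment has been enabled so that the
# <poplar/...> headers and libpoplar can be found; adjust paths otherwise.
g++ --std=c++11 adder_ipu.cpp -lpoplar -o adder_ipu
```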
Run the program. It will produce the following output:
$ ./adder_ipu
Creating graph
Building engine
v1 sum = 10
v2 sum = 65
This simple example runs on a single IPU. Further examples and tutorials can be found in the Graphcore examples and tutorials repositories on GitHub, such as a simple pipelining example using TensorFlow.
See Section 4, Next steps for more information.
3.3. Hardware and software monitoring
The IPU hardware, and any programs running on it, can be monitored and analysed in various ways.
3.3.1. Command line tools
The IPU driver software includes a number of software tools that provide information on the current status of the connected hardware. These include:
gc-info: determine which IPU devices are present in the system.
gc-monitor: passively monitor IPU activity and telemetry such as power draw, temperature and clock speed.
gc-reset: reset one or more IPU devices.
These are in the directory
poplar-os-version/bin under the Poplar SDK directory (where
os is the host operating system, and
version is the current software version number).
The use of these tools is described in the IPU Command Line Tools document.
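To run these tools without typing full paths each time, you can add that bin directory to your PATH; a sketch with illustrative locations only (substitute the actual SDK path and the poplar-os-version directory name from your own installation):

```shell
# Illustrative paths only: replace SDK_ROOT and the poplar-<os>-<version>
# directory name with the values from your own installation.
SDK_ROOT=/opt/poplar_sdk
export PATH="$SDK_ROOT/poplar-ubuntu_20_04-3.3.0/bin:$PATH"
```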
3.3.2. Trace output
When running a program, you can output a trace from the running code, which shows the phases the graph compiler goes through when preparing and compiling the graph to execute on the IPU. Returning to the simple adder example in Section 3.2, Running a program on the IPU-POD, try running:
$ POPLAR_LOG_LEVEL=DEBUG ./adder_ipu
You will see each stage of the process listed before the program output appears. The logging options are documented in the Poplar & PopLibs User Guide.
3.3.3. Execution profiling
The PopVision analysis tools are available on the PopVision tools web page.
The Graph Analyser provides a graphical view of the graph execution trace, showing memory use, tile mapping and other vital information to help you optimise your application. For more information see the PopVision Graph Analyser User Guide.
The System Analyser provides information about the behaviour of the host-side application code. It shows an interactive graphical view of the timeline of execution steps, helping you to identify any bottlenecks between the CPUs and IPUs. For more information see the PopVision System Analyser User Guide.