3. Getting started

This section describes how to install and configure the vipu command-line tool. It assumes you have access to suitable IPU hardware, such as a Pod.

If you are developing or running machine learning software, you will also need to install the Poplar SDK. See the Getting Started Guide for your system for more information.

3.1. Installing the V-IPU client

Note

You can skip this step if vipu is already installed on the system. You can check if the software is installed by running vipu --version.

You can download the V-IPU client software from the Graphcore software download portal. It can be installed on any computer that can communicate with the V-IPU controller.

Before installing the software, you will need the following information from the V-IPU administrator:

  • V-IPU controller host name or IP address

  • V-IPU controller port number

  • User ID and API access key (secure servers only)

Extract the contents of the downloaded archive:

$ tar xzvf vipu-$VERSION.tar.gz

Where $VERSION is the version of the software.
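
The archive extracts into a directory named after the release. You can list its contents to check that the vipu executable is present:

$ ls vipu-$VERSION/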

Add the directory containing the extracted package to the PATH environment variable:

$ export PATH=$PWD/vipu-$VERSION:$PATH
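
This export only applies to the current shell session. To make the change permanent, you could append an equivalent line to your shell startup file. For example, with bash, and assuming you run the command from the directory where you extracted the archive (so that $PWD and $VERSION expand to the correct values):

$ echo "export PATH=$PWD/vipu-$VERSION:\$PATH" >> ~/.bashrc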

Now, confirm that the vipu executable is found and that it reports the expected version:

$ vipu --version
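
If you want an overview of the available subcommands and options at this point, the tool's built-in help should print a usage summary:

$ vipu --help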

3.2. V-IPU configuration

After installing the vipu tool, the next step is to establish connectivity with the V-IPU controller.

Run the following command to make the server report its version:

$ vipu --server-version --api-host 10.1.2.3 --api-port 9090

This example assumes a V-IPU controller running on host 10.1.2.3 with port 9090. You should use the details provided by your system administrator.

For servers running in secure mode, you also need to provide the user credentials:

$ vipu --server-version --api-host 10.1.2.3 --api-port 9090 --api-user-id alice --api-access-key 685X_K4uCzN

Important

Working with an HTTP or HTTPS proxy

If the http_proxy and/or https_proxy environment variables are set, you must add the host name or IP address of the V-IPU controller to the no_proxy environment variable so that the V-IPU tools can reach the controller directly.

For example, you could set the no_proxy environment variable using the following command:

$ export no_proxy="10.1.2.3:9090"
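
If no_proxy is already set, append the controller address to the existing value instead of overwriting it:

$ export no_proxy="${no_proxy},10.1.2.3:9090"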

To avoid having to add options to each command, you can specify the server details in environment variables. For example:

$ export VIPU_CLI_API_HOST=10.1.2.3
$ export VIPU_CLI_API_PORT=9090
$ export VIPU_CLI_API_USER_ID=alice
$ export VIPU_CLI_API_ACCESS_KEY=685X_K4uCzN
$ vipu --server-version
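
You can confirm that the variables are set in your current shell (note that this also prints the access key):

$ env | grep VIPU_CLI_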

See Section 8.2.2, Using environment variables for more information.

Alternatively, you can add them to a configuration file. This can be in JSON, TOML, YAML or HCL format. For example:

$ cat ~/.config/.vipu.hcl
api-host=10.1.2.3
api-port=9090
api-user-id=alice
api-access-key=685X_K4uCzN

$ vipu --server-version --config ~/.config/.vipu.hcl
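
The same settings can be written in any of the other supported formats. As a sketch, a YAML version of the file might look like this, assuming the keys mirror the command-line option names used in the HCL example above (the file name is arbitrary, since it is passed explicitly with --config):

$ cat ~/.config/.vipu.yml
api-host: 10.1.2.3
api-port: 9090
api-user-id: alice
api-access-key: 685X_K4uCzN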


3.3. Creating partitions

With the vipu tool set up and connectivity established with the server, it is time to assign some IPUs. In the following examples, we assume that a user, Fernando, has been given the following .hcl configuration file by the system administrator:

$ cat ~/fernando.vipu.hcl
secure = true
api-user-id = "fernando"
api-access-key = "super-secret-password"
api-host = "127.0.0.1"
api-port = "9001"

Make sure that the file has restricted access:

$ chmod 0600 ~/fernando.vipu.hcl
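
You can verify that only the file owner has read and write access (the size and timestamp shown here are illustrative):

$ ls -l ~/fernando.vipu.hcl
-rw------- 1 fernando fernando 123 Jan  1 12:00 /home/fernando/fernando.vipu.hcl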

To avoid having to set the --config option for each command, you can set the configuration path as an environment variable:

$ export VIPU_CLI_CONFIG=~/fernando.vipu.hcl

Alternatively, you can copy the file to the default location used for the V-IPU configuration file:

$ cp --preserve=mode ~/fernando.vipu.hcl ~/.vipu-cli.hcl

Now, let’s inspect what resources Fernando has access to:

$ vipu list allocations
------------------------
 Allocation ID | Agents
------------------------
 al0           | ag0
------------------------

The output shows that this user has access to one allocation, “al0”, containing a single agent, “ag0”. An agent corresponds to one IPU-Machine, so this allocation provides four IPUs.

Confirm that none of the four IPUs are currently in use and that there are no pre-existing partitions:

$ vipu list partitions
list partitions: failed: No partition has been added yet! Use "create partition" command to add partitions.

Since no partitions exist yet, all four IPUs are free and unassigned. You can now assign all four to a partition named “prt0”. This reserves them for this user’s work and prevents other partitions from using them:

$ vipu create partition prt0 --size 4 --reconfigurable
create partition (prt0): success.
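
Listing the partitions again should now show prt0 instead of the earlier error:

$ vipu list partitions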

If the Poplar SDK is installed on the system, you can use the gc-info tool to confirm that the partition is visible and ready to be used by Poplar programs. Because the partition was created as reconfigurable, gc-info lists the individual IPUs as well as the multi-IPU groups that can be selected:

$ gc-info -l
-+- Id: [0], type:    [Fabric], PCI Domain: [3]
-+- Id: [1], type:    [Fabric], PCI Domain: [2]
-+- Id: [2], type:    [Fabric], PCI Domain: [1]
-+- Id: [3], type:    [Fabric], PCI Domain: [0]
-+- Id: [4], type: [Multi IPU]
 |--- PCIe Id: [0], DNC Id: [0], PCI Domain: [3]
 |--- PCIe Id: [1], DNC Id: [1], PCI Domain: [2]
-+- Id: [5], type: [Multi IPU]
 |--- PCIe Id: [2], DNC Id: [2], PCI Domain: [1]
 |--- PCIe Id: [3], DNC Id: [3], PCI Domain: [0]
-+- Id: [6], type: [Multi IPU]
 |--- PCIe Id: [0], DNC Id: [0], PCI Domain: [3]
 |--- PCIe Id: [1], DNC Id: [1], PCI Domain: [2]
 |--- PCIe Id: [2], DNC Id: [2], PCI Domain: [1]
 |--- PCIe Id: [3], DNC Id: [3], PCI Domain: [0]

For more information about using partitions, see Section 5, Partitions.

See the Graphcore Command Line Tools document for details of gc-info and related tools.
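
When you no longer need the IPUs, the partition can be removed so that they become available to other partitions in the allocation. Assuming the remove subcommand mirrors the create subcommand shown above (see Section 5, Partitions for the full syntax):

$ vipu remove partition prt0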