4. Virtualised Pods
Virtualised Pods (vPODs) are implemented as several separate instances in an isolated virtual network. Section 8.9, vPOD logical networks, contains diagrams and further details.
4.1. Poplar instances
Poplar instances are end-user instances where users can run Poplar jobs. Each vPOD has one or more Poplar instances. They can be either bare-metal or virtual instances.
Each Poplar instance is connected to the following networks (see Section 8.4, IP addressing and VLANs for more details; a sketch for checking these attachments from within an instance follows the list):
vPOD IPU data network: Uses 100 GbE VLAN and RoCE.
vPOD storage network: Uses 100 GbE VLAN. The storage appliance has a dedicated VLAN for this network.
vPOD control network: A virtual 1 GbE network which also provides a default route to the internet. External connections come in via this network.
In an Ironic bare-metal node, this is the only 1 GbE network connection, since only one network connection is supported per physical interface.
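As a minimal sketch, the attachments above can be checked from a Linux guest with iproute2 installed. The script only reports the interfaces and addresses that are actually present, so no interface or network names are assumed:

```python
# Minimal sketch: list each interface with its addresses so the three
# vPOD attachments (IPU data, storage, control) can be checked by eye.
# Requires iproute2 on a Linux guest; no interface names are assumed.
import json
import subprocess

links = json.loads(subprocess.check_output(["ip", "-j", "addr"], text=True))
for link in links:
    addrs = [a["local"] for a in link.get("addr_info", [])]
    print(f"{link['ifname']}: {', '.join(addrs) or 'no address'}")
```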
4.1.1. RNIC port configuration
To connect to the IPUs using the Graphcore IPU-over-Fabric (IPUoF) protocol, access must be via an RDMA-enabled network. From within a VM, this is achieved with SR-IOV. The virtual function (VF) that is passed through to a VM only has access to a single IPU VLAN. Moreover, because Mellanox ConnectX-5 NICs are used, the VF can be connected to the RDMA-enabled network via a bond (called VF LAG). In this case, both members of the bond are connected to a single ToR switch in the IPU Pod rack.
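The following is a minimal sketch, not part of the product tooling, for confirming from inside a VM that an RDMA-capable VF is visible before attempting an IPUoF connection. It relies only on the standard Linux sysfs layout for InfiniBand/RoCE devices:

```python
# Minimal sketch: verify that an SR-IOV VF with RDMA support is visible
# inside the VM. Paths follow the standard Linux sysfs layout; device
# names printed are whatever the kernel exposes.
from pathlib import Path

IB_ROOT = Path("/sys/class/infiniband")

def rdma_devices():
    """Return a mapping of RDMA device name -> associated net interfaces."""
    devices = {}
    if not IB_ROOT.exists():
        return devices
    for dev in IB_ROOT.iterdir():
        net_dir = dev / "device" / "net"
        ifaces = [p.name for p in net_dir.iterdir()] if net_dir.exists() else []
        devices[dev.name] = ifaces
    return devices

if __name__ == "__main__":
    devs = rdma_devices()
    if not devs:
        print("No RDMA devices found: check that the VF was passed through")
    for name, ifaces in devs.items():
        print(f"{name}: {', '.join(ifaces) or 'no netdev'}")
```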
4.1.2. Disaggregation
Poplar instances usually run on physical servers that share the 100 GbE and 1 GbE local switches with the IPU-Machines in the vPOD, which minimises traffic shared with other vPODs. However, the network infrastructure is fully connected, so a Poplar instance could be located in any IPU Pod or in the general compute clusters. This allows a fully disaggregated setup (if required) and also allows other hosts to be used in the case of a hardware failure within an individual IPU Pod.
4.2. Control instance
The control instance is for vPOD management and is not normally accessible to end users. It is always a virtual machine and requires only one vCPU. More vCPUs can be provided to speed up activities such as IPU-Machine upgrades, which run in parallel with one IPU-Machine upgrade per available vCPU.
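As an illustration of this parallelism, the sketch below schedules one worker per available vCPU using a thread pool (upgrades are network-bound, so threads are a reasonable fit). The upgrade_ipum() function and the IPU-Machine hostnames are hypothetical placeholders, not the real upgrade command:

```python
# Minimal sketch of the upgrade parallelism described above: one
# IPU-Machine upgrade per available vCPU. upgrade_ipum() is a
# hypothetical stand-in for the real per-machine upgrade step.
import os
from concurrent.futures import ThreadPoolExecutor

def upgrade_ipum(hostname: str) -> str:
    # Placeholder for the real upgrade step (network-bound work).
    return f"{hostname}: upgraded"

ipu_machines = [f"ipum-{i:02d}" for i in range(1, 17)]  # illustrative names

# One worker per available vCPU, as described above.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    for result in pool.map(upgrade_ipum, ipu_machines):
        print(result)
```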
The control instance is connected to the following networks (see Section 8.4, IP addressing and VLANs for more details):
Shared IPU-Machine BMC network.
Shared IPU-Machine IPU-Gateway network.
vPOD control network: A virtual network which also provides a default route to the internet. External connections come in via this network.
vPOD storage network: Uses 100 GbE VLAN, simple TCP/IP only.
vPOD IPU data network: Uses 100 GbE VLAN and RoCE.
The control instance runs:
The vipu-server, accessed over the vPOD control network from the Poplar hosts.
A Prometheus instance which collects data for the whole vPOD (see Section 7.1, Monitoring); a query sketch follows this list.
A standard OS node exporter for monitoring and alerts.
IPU-Machine maintenance and upgrades, when required.
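As a minimal sketch of querying that Prometheus instance, the script below uses the standard Prometheus HTTP API. The server address and the job label are assumptions for illustration, not values from the vPOD configuration:

```python
# Minimal sketch: query the vPOD Prometheus instance for node exporter
# "up" status. The address and job label are hypothetical.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://control-instance:9090"  # hypothetical address

def instant_query(expr: str):
    """Run a PromQL instant query and return the result list."""
    url = f"{PROMETHEUS_URL}/api/v1/query?" + urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    return body["data"]["result"]

for series in instant_query('up{job="node"}'):  # "node" job label assumed
    print(series["metric"].get("instance"), series["value"][1])
```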
The control instance stores Graphcore rack_tool configurations for the IPU-Machines. It is normally located on a general compute cluster, since its connectivity has low performance requirements.