1. Overview

Graphcore’s IPU-M2000 IPU-Machine is designed to support scale-up and scale-out machine intelligence compute. The IPU-POD reference designs, based on the IPU-M2000, deliver scalable building blocks for the IPU-POD systems range of products:

- IPU‑POD16: 4 IPU-M2000 machines directly attached to a single host server
- IPU‑POD64: 16 IPU-M2000 machines in a switched system with 1-4 host servers
- IPU‑POD128: 32 IPU-M2000 machines in a switched system with 2-8 host servers
- IPU‑POD256: 64 IPU-M2000 machines in a switched system with 4-16 host servers
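The sizing of the reference designs above can be sketched as a small lookup table. This is an illustrative sketch, not a Graphcore API: the `POD_CONFIGS` and `total_ipus` names are invented here, and the 4-IPUs-per-machine figure is an assumption consistent with the POD naming (IPU-POD16 = 4 machines).

```python
# Illustrative sketch of the IPU-POD reference-design sizing.
# Assumption: each IPU-M2000 carries 4 IPUs, consistent with the
# POD naming (e.g. IPU-POD16 = 4 machines x 4 IPUs).
IPUS_PER_M2000 = 4

# (IPU-M2000 machines, min host servers, max host servers)
POD_CONFIGS = {
    "IPU-POD16": (4, 1, 1),     # direct-attach, single host server
    "IPU-POD64": (16, 1, 4),    # switched
    "IPU-POD128": (32, 2, 8),   # switched
    "IPU-POD256": (64, 4, 16),  # switched
}

def total_ipus(pod: str) -> int:
    """Total IPUs in a reference design: machines x IPUs per machine."""
    machines, _, _ = POD_CONFIGS[pod]
    return machines * IPUS_PER_M2000

for name in POD_CONFIGS:
    machines, min_hosts, max_hosts = POD_CONFIGS[name]
    print(f"{name}: {machines} machines, {total_ipus(name)} IPUs, "
          f"{min_hosts}-{max_hosts} host servers")
```

Note how each step up the range quadruples or doubles the machine count while the host-server count scales independently, reflecting the disaggregated host architecture described below.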

Virtualization and provisioning software allows the AI compute resources in all IPU-POD systems to be elastically allocated to users and grouped for both model-parallel and data-parallel AI compute, supporting multiple users with mixed workloads as well as a single system dedicated to large models.

IPU-POD system-level products, including IPU-M2000 machines, host servers and network switches, are available globally from Graphcore channel partners. Customers can select their preferred server brand from a range of leading server vendors; multiple host servers from different vendors are approved for use in IPU-POD systems (see the approved server list for details). The disaggregated host architecture allows the server specification to be matched to the workload.

[Figure: IPU-M2000 IPU-Machine]