5.1. PopTorch


New features

  • Upgraded supported torch version from 1.13.1 to 2.0.1.

  • Added support for the following ops:

    • torch.nn.Mish

    • torch.bucketize

    • torch.cdist

    • torch.sort

    • torch.take_along_dim

    • torch.bincount

  • Added support for the following torch_cluster ops:

    • radius

    • grid

    • knn

  • Added drop-in replacements for the following torch_cluster operations (replace the torch_cluster ops with the equivalent poptorch ops):

    • fps

    • nearest

  • Added a cond op for inference models only. This op conditionally executes one of two branches depending on the input condition.

  • Extended the API for copying data structures to the IPU: copyNamedBuffersToDevice allows named buffers to be copied.

  • FixedSizeCollator can now pad to different node and edge types.

  • The PopTorch wheel now has dependencies on torchvision and torchaudio. This will prevent upgrade to an unsupported PyTorch version when installing other third-party packages that depend on torchvision or torchaudio.

  • Added support for the largest=False option in the torch.topk op.

  • Added the compilationTime function to extract the total compilation time from the compiled PopTorch model.
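Several of the newly supported ops have simple reference semantics. For instance, torch.bucketize returns, for each input value, the index of the bucket it falls into among a sorted list of boundaries. A plain-Python sketch (illustrative only, not PopTorch code) using the standard bisect module models the behaviour of the right argument:

```python
import bisect

def bucketize(values, boundaries, right=False):
    """Reference model of torch.bucketize for flat lists: for each value,
    return its bucket index in the sorted boundaries list. right=False
    follows bisect_left semantics, right=True follows bisect_right."""
    pick = bisect.bisect_right if right else bisect.bisect_left
    return [pick(boundaries, v) for v in values]

boundaries = [1, 3, 5, 7, 9]
print(bucketize([3, 6, 9], boundaries))              # right=False
print(bucketize([3, 6, 9], boundaries, right=True))  # right=True
```

With right=False a value equal to a boundary lands in the bucket below it; with right=True it lands in the bucket above.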

Bug Fixes

  • Fixed compilation of models with torch.norm inside the for_loop op.

  • Fixed a dtype mismatch in torch.clamp, which previously raised an incompatible type error.

  • torch.var used to raise an error when a negative dimension index was passed. The fix converts the negative index to its positive equivalent so it can be used to index the input shape vector.

  • Fixed the torch.round behaviour in PopTorch to use a “round half down” method to match the behaviour in PyTorch. PopTorch previously used a “round half up” method.

  • Fixed an issue where ops mixing Int32 and Float32 operands were not processed.
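The torch.round fix above concerns the convention used for values exactly halfway between two integers. The difference between the two methods can be sketched in plain Python (illustrative only, not PopTorch internals):

```python
import math

def round_half_up(x):
    """Exact halves are rounded upwards: 2.5 -> 3 (the old behaviour)."""
    return math.floor(x + 0.5)

def round_half_down(x):
    """Exact halves are rounded downwards: 2.5 -> 2 (the fixed behaviour)."""
    return math.ceil(x - 0.5)

for x in (0.5, 1.5, 2.5):
    print(x, round_half_up(x), round_half_down(x))
```

Non-halfway values round identically under both conventions; only exact .5 inputs differ.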

Other improvements

Known issues


Compatibility changes

The versions of the following dependencies have been updated:

[Dependency version table not recoverable from the source; only the version constraints >= 3.8 and >= 3.7 survive.]


New features

  • Upgraded supported torch version from 1.13.0 to 1.13.1.

  • Added support for automatic fusion of scatter operations into a grouped scatter operation to improve performance.

  • Added support for batch_sampler in poptorch.DataLoader.

  • Added support for torch.linalg.norm operations:

    • torch.linalg.norm: partial support

      2-norm and nuclear norm are unsupported for matrices.

    • torch.linalg.matrix_norm: partial support

      2-norm and nuclear norm are unsupported.

    • torch.linalg.vector_norm: supported

  • Updated support for the latest PyTorch norm op implementations based on torch.linalg.norm.

  • Added support for torch.Tensor.index_reduce.

  • Added the poptorch.dynamic_update function.

  • Added HeteroData support in DataLoaders.

  • The values of poptorch.Options can now be set via an environment variable.
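Of the norm ops listed above, torch.linalg.vector_norm has the simplest definition: the p-norm (sum of |x|^p)^(1/p). A plain-Python reference (illustrative only; the real op also handles dim, keepdim, dtype, and the inf/0 special cases) makes the definition concrete:

```python
def vector_norm(values, ord=2.0):
    """Reference p-norm of a flat list of numbers:
    (sum |x| ** p) ** (1 / p). Does not model torch's
    inf, -inf, or 0 order special cases."""
    return sum(abs(v) ** ord for v in values) ** (1.0 / ord)

print(vector_norm([3.0, 4.0]))               # Euclidean (2-)norm
print(vector_norm([1.0, -2.0, 2.0], ord=1.0))  # 1-norm (sum of absolutes)
```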

Bug Fixes

  • Calling the loadFromFile method twice on the same poptorch.Options object now has well-defined behaviour.

  • Fixed a failure in PopTorch replica-sharded variables when copying optimiser state to the host.

  • Fixed the error “Cannot access the data pointer of a Tensor that doesn’t have storage”.

  • Fixed the DataLoader rebatched size in asynchronous mode when the batch size is equal to 1.

  • Fixed the implementation of scatter_reduce to match the PyTorch implementation on the CPU.
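For reference, scatter_reduce accumulates source values into an output at positions given by an index tensor. A simplified plain-Python model of the sum and amax reductions (illustrative only; it ignores the dim and include_self arguments of the real op) shows the semantics the fix aligns with:

```python
def scatter_reduce(src, index, size, reduce="sum"):
    """Simplified model of scatter-reduce over flat lists:
    out[index[i]] accumulates src[i] under the chosen reduction.
    Positions never written keep 0 (sum) or None (amax)."""
    out = [0 if reduce == "sum" else None] * size
    for pos, val in zip(index, src):
        if reduce == "sum":
            out[pos] += val
        elif reduce == "amax":
            out[pos] = val if out[pos] is None else max(out[pos], val)
    return out

print(scatter_reduce([1, 2, 3, 4], [0, 0, 1, 1], size=2))                 # sums per bucket
print(scatter_reduce([1, 5, 3, 4], [0, 0, 1, 1], size=2, reduce="amax"))  # max per bucket
```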

Other improvements

  • Added torch_scatter to the compatibility table in the PopTorch documentation.

Known issues


Compatibility changes