* `torch` version from 1.13.1 to 2.0.1.
* Added support for the following ops:
* Added support for the following
* Added drop-in replacements for the following `torch_cluster` operations (these replace the `poptorch` ops):
* Added the `cond` op, for inference models only. This op conditionally executes one of two branches depending on the input condition.
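The exact PopTorch call signature for `cond` is not shown here, so the following pure-Python sketch only illustrates the semantics described above: one of two branches runs depending on a condition, with both branches taking the same inputs. The function name and argument order are illustrative assumptions, not the real API.

```python
def cond(condition, then_fn, else_fn, *inputs):
    """Illustrative only: run one of two branches based on `condition`.

    The real PopTorch op traces both branches at compile time; this
    sketch merely mirrors the runtime semantics of picking one branch.
    """
    return then_fn(*inputs) if condition else else_fn(*inputs)

# Both branches accept the same inputs and produce comparable outputs.
result = cond(True, lambda x: x * 2, lambda x: x - 1, 10)
```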
* Extended the API for copying data structures to the IPU: `copyNamedBuffersToDevice` allows a named buffer to be copied.
* `FixedSizeCollator` can now pad to different node and edge types.
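A minimal sketch of per-type padding, using plain dicts keyed by a hypothetical node/edge type name (the real `FixedSizeCollator` operates on PyTorch Geometric heterogeneous data, not dicts of lists):

```python
def pad_per_type(batch, target_sizes, pad_value=0):
    # batch: {type_name: list_of_values}. Pad each type's list to its
    # own target length, mimicking per-type fixed-size padding.
    padded = {}
    for type_name, values in batch.items():
        target = target_sizes[type_name]
        padded[type_name] = values + [pad_value] * (target - len(values))
    return padded

batch = {"paper": [1, 2], "author": [3]}
out = pad_per_type(batch, {"paper": 4, "author": 3})
```

Each type gets its own target size, which is what "pad to different node and edge types" amounts to.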
* The PopTorch wheel now has dependencies on `torchvision` and `torchaudio`. This will prevent an upgrade to an unsupported PyTorch version when installing other third-party packages that depend on those libraries.
* Added support for the `largest=False` option in `torch.topk`.
* Added the `compilationTime` function to extract the total compilation time from the compiled PopTorch model.
* Fixed compilation of models with a `clamp` dtype mismatch error: `torch.clamp` used to raise an incompatible type error.
* Fixed `torch.var`, which used to raise an error when an input dimension was negative. The fix converts the negative integer to a positive one so it can be used as an index into the input shape vector.
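The conversion described above is the standard normalisation of a negative dimension index; a small stand-alone sketch:

```python
def normalise_dim(dim, ndim):
    # Convert a negative dimension (e.g. -1 for the last axis) into
    # the equivalent non-negative index into the shape vector.
    if dim < 0:
        dim += ndim
    if not 0 <= dim < ndim:
        raise IndexError(f"dim {dim} out of range for {ndim}-D input")
    return dim

shape = (4, 8, 16)
size_along_last = shape[normalise_dim(-1, len(shape))]
```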
* Fixed `torch.round` behaviour in PopTorch to use a "round half down" method, matching the behaviour in PyTorch. PopTorch previously used a "round half up" method.
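To make the difference concrete, here is a plain-Python illustration of the two tie-breaking rules named above (illustrative arithmetic only, not PopTorch code):

```python
import math

def round_half_up(x):
    # Ties at .5 go towards +infinity: 2.5 -> 3
    return math.floor(x + 0.5)

def round_half_down(x):
    # Ties at .5 go towards -infinity: 2.5 -> 2
    return math.ceil(x - 0.5)
```

Only exact halfway values differ between the two methods; all other inputs round identically.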
* Fixed the Int32-Float32 op not being processed.
* `torch.compile` is not supported in PopTorch. This has been documented in the PyTorch for the IPU: User Guide.
* The versions of the following dependencies have been updated:
  * `torch` version from 1.13.0 to 1.13.1.
* Added support for automatic fusion of scatter operations into a grouped scatter operation to improve performance.
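As a rough illustration of what such a fusion buys, assuming hypothetical helper names: several scatter-adds that share an index tensor can be served by a single pass over that index instead of one pass each.

```python
def scatter_add(src, index, out_size):
    # Unfused form: out[index[i]] += src[i], one pass per input.
    out = [0] * out_size
    for value, idx in zip(src, index):
        out[idx] += value
    return out

def grouped_scatter_add(srcs, index, out_size):
    # Fused form: a single traversal of the shared index serves every
    # input, which is the kind of grouping the fusion pass performs.
    outs = [[0] * out_size for _ in srcs]
    for i, idx in enumerate(index):
        for out, src in zip(outs, srcs):
            out[idx] += src[i]
    return outs

index = [0, 1, 0]
a, b = [1, 2, 3], [10, 20, 30]
fused = grouped_scatter_add([a, b], index, 2)
```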
* `torch.linalg.norm`: partial support.
  * 2-norm and nuclear norm are unsupported for matrices.
* `torch.linalg.matrix_norm`: partial support.
  * 2-norm and nuclear norm are unsupported.
* Updated the `norm` op implementation to support the latest PyTorch.
* Added `HeteroData` support in DataLoaders.
* Allow the values of `poptorch.Options` to be set via an environment variable.
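A minimal sketch of the pattern, assuming a JSON-encoded variable named `POPTORCH_DEFAULT_OPTIONS` (both the variable name and the format are assumptions here; check the PopTorch user guide for the exact convention):

```python
import json
import os

# Assumed variable name and JSON format -- consult the PopTorch
# user guide for the real convention.
os.environ["POPTORCH_DEFAULT_OPTIONS"] = '{"deviceIterations": 4}'

# Sketch of how such a variable could be parsed into option defaults
# before being applied to a poptorch.Options object.
defaults = json.loads(os.environ.get("POPTORCH_DEFAULT_OPTIONS", "{}"))
```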
* Calling the `loadFromFile` method twice on the same `poptorch.Options` object now has well-defined behaviour.
* PopTorch replica-sharded variables fail when copying optimiser state to host.
* Cannot access the data pointer of a Tensor that doesn't have storage.
* Fixed the DataLoader rebatched size in async mode when the batch size is equal to 1.
* Fixed the implementation of `scatter_reduce` to match the PyTorch implementation on the CPU.
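For reference, a simplified pure-Python model of the `scatter_reduce` semantics that the fix targets, covering only the "sum" and "amax" reductions over 1-D inputs with the default include-self behaviour:

```python
def scatter_reduce(out, src, index, reduce="sum"):
    # Simplified 1-D model: reduce src[i] into out[index[i]].
    # Existing values in `out` participate in the reduction.
    out = list(out)
    for value, idx in zip(src, index):
        if reduce == "sum":
            out[idx] += value
        elif reduce == "amax":
            out[idx] = max(out[idx], value)
        else:
            raise ValueError(f"unsupported reduction: {reduce}")
    return out
```

Matching these per-element reduction rules is what "match the PyTorch implementation on the CPU" refers to.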
* Added `torch_scatter` to the compatibility table in the PopTorch documentation.