5.1. PopTorch
3.3.0
New features
* Upgraded the supported `torch` version from 1.13.1 to 2.0.1.
* Added support for the following ops:
  * `torch.nn.Mish`
  * `torch.bucketize`
  * `torch.cdist`
  * `torch.sort`
  * `torch.take_along_dim`
  * `torch.bincount`
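As an illustration of one of the newly supported ops, `torch.bucketize` maps each input value to the index of the boundary bucket it falls into. A minimal pure-Python sketch of the same semantics (standard library only, not PopTorch code):

```python
from bisect import bisect_left, bisect_right

def bucketize(values, boundaries, right=False):
    # Mirrors torch.bucketize on 1-D inputs: boundaries must be sorted.
    # right=False returns the first index i with boundaries[i] >= v
    # (bisect_left); right=True the first with boundaries[i] > v
    # (bisect_right).
    pick = bisect_right if right else bisect_left
    return [pick(boundaries, v) for v in values]

boundaries = [1, 3, 5, 7, 9]
print(bucketize([3, 6, 9], boundaries))              # [1, 3, 4]
print(bucketize([3, 6, 9], boundaries, right=True))  # [2, 3, 5]
```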
* Added support for the following `torch_cluster` ops:
  * `radius`
  * `grid`
  * `knn`
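Of these, `knn` is the classic k-nearest-neighbour search: for each query point it returns the indices of the k closest source points. A brute-force pure-Python sketch of the idea (Euclidean distance, ties broken by index; not the `torch_cluster` tensor API):

```python
import math

def knn(x, y, k):
    # For each query point q in y, return the indices of the k points
    # in x closest to q. Sorting (distance, index) pairs breaks
    # distance ties by the lower index.
    result = []
    for q in y:
        dists = sorted((math.dist(q, p), i) for i, p in enumerate(x))
        result.append([i for _, i in dists[:k]])
    return result

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(knn(points, [(0.1, 0.1)], k=2))  # [[0, 1]] (tie between 1 and 2 broken by index)
```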
* Added drop-in replacements for the following `torch_cluster` operations (replace the `torch_cluster` ops with the `poptorch` ones):
  * `fps`
  * `nearest`
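`fps` in `torch_cluster` performs farthest point sampling. The following is a hedged sketch of the underlying algorithm only, not the real op: the actual operator works on tensors and takes a sampling ratio rather than an explicit count, and the fixed `start` index here is an assumption made for determinism.

```python
import math

def fps(points, num_samples, start=0):
    # Farthest point sampling: greedily pick the point farthest from
    # the already-selected set until num_samples points are chosen.
    selected = [start]
    # Minimum distance from each point to the selected set so far.
    min_dist = [math.dist(p, points[start]) for p in points]
    while len(selected) < num_samples:
        nxt = max(range(len(points)), key=lambda i: min_dist[i])
        selected.append(nxt)
        for i, p in enumerate(points):
            min_dist[i] = min(min_dist[i], math.dist(p, points[nxt]))
    return selected

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (0.0, 5.0)]
print(fps(pts, 3))  # [0, 2, 3]
```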
* Added a `cond` op for inference models only. This op conditionally executes one of two branches depending on the input condition.
* Extended the API for copying data structures to the IPU: `copyNamedBuffersToDevice` allows a named buffer to be copied.
* `FixedSizeCollator` can now pad to different node and edge types.
* The PopTorch wheel now has dependencies on `torchvision` and `torchaudio`. This prevents an upgrade to an unsupported PyTorch version when installing other third-party packages that depend on `torchvision` or `torchaudio`.
* Added support for the `largest=False` option in the `torch.topk` op.
* Added the `compilationTime` function to extract the total compilation time from the compiled PopTorch model.
Bug Fixes
* Fixed compilation of models with `torch.norm` inside the `for_loop` op.
* Fixed the `clamp` dtype mismatch error: `torch.clamp` used to raise an incompatible type error.
* Fixed `torch.var`, which used to raise an error when an input dimension was negative. The fix converts the negative integer to a positive integer so that it can be used as an index into the input shape vector.
* Fixed the `torch.round` behaviour in PopTorch to use a "round half down" method to match the behaviour in PyTorch. PopTorch previously used a "round half up" method.
* Fixed the Int32-Float32 op not being processed.
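The two tie-breaking methods named above differ only for values exactly halfway between integers. A small sketch, taking "half up" as ties toward positive infinity and "half down" as ties toward negative infinity (one common interpretation of these terms, assumed here for illustration):

```python
import math

def round_half_up(x):
    # Ties (.5) go toward +infinity: 2.5 -> 3, -0.5 -> 0.
    return math.floor(x + 0.5)

def round_half_down(x):
    # Ties (.5) go toward -infinity: 2.5 -> 2, -0.5 -> -1.
    return math.ceil(x - 0.5)

for v in (0.5, 1.5, 2.5, -0.5):
    print(v, round_half_up(v), round_half_down(v))
```

Non-tie values round identically under both methods; only the `.5` cases diverge.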
Other improvements
* `torch.compile` is not supported in PopTorch. This has been documented in the PyTorch for the IPU: User Guide.
Known issues
None
Compatibility changes
The versions of the following dependencies have been updated:

PopTorch | torch | torchvision | torchaudio | Python
---|---|---|---|---
3.3 | 2.0.1 | 0.15.2 | 2.0.1 | >= 3.8
3.2 | 1.13.1 | 0.14.1 | 0.13.1 | >= 3.7
3.2.0
New features
* Upgraded the supported `torch` version from 1.13.0 to 1.13.1.
* Added support for automatic fusion of scatter operations into a grouped scatter operation to improve performance.
* Added support for `batch_sampler` in `poptorch.DataLoader`.
* Added support for `torch.linalg.norm` operations:
  * `torch.linalg.norm`: partial support (the 2-norm and the nuclear norm are unsupported for matrices)
  * `torch.linalg.matrix_norm`: partial support (the 2-norm and the nuclear norm are unsupported)
  * `torch.linalg.vector_norm`: supported
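For reference, `torch.linalg.vector_norm` computes the vector p-norm. A pure-Python sketch of that formula (not PopTorch code):

```python
import math

def vector_norm(v, ord=2):
    # p-norm of a flat list: (sum |x_i|^p)^(1/p); ord=inf means max |x_i|.
    if ord == math.inf:
        return max(abs(x) for x in v)
    return sum(abs(x) ** ord for x in v) ** (1.0 / ord)

v = [3.0, -4.0]
print(vector_norm(v))            # 5.0 (Euclidean norm)
print(vector_norm(v, ord=1))     # 7.0
print(vector_norm(v, math.inf))  # 4.0
```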
* Updated support for the latest PyTorch `norm` op implementation from `torch.linalg.norm`.
* Added support for `torch.Tensor.index_reduce`.
* Added the `poptorch.dynamic_update` function.
* Added `HeteroData` support in DataLoaders.
* Allowed the values of `poptorch.Options` to be set via an environment variable.
Bug Fixes
* Calling the `loadFromFile` method twice on the same `poptorch.Options` object now has well-defined behaviour.
* Fixed PopTorch replica-sharded variables failing when copying optimiser state to the host.
* Fixed the error "Cannot access the data pointer of a Tensor that doesn't have storage".
* Fixed the DataLoader rebatched size in async mode when the batch size is equal to 1.
* Fixed the implementation of `scatter_reduce` to match the PyTorch implementation on the CPU.
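To make the scatter-reduce semantics concrete, here is a 1-D pure-Python sketch (ignoring tensor details such as `include_self`): each `src[i]` is combined into `out[index[i]]` with the chosen reduction.

```python
def scatter_reduce(src, index, size, reduce="sum"):
    # 1-D sketch of scatter-reduce: combine src[i] into out[index[i]]
    # using the requested reduction, starting from a neutral value.
    ops = {"sum": lambda a, b: a + b, "amax": max, "amin": min}
    init = {"sum": 0.0, "amax": float("-inf"), "amin": float("inf")}
    out = [init[reduce]] * size
    for val, idx in zip(src, index):
        out[idx] = ops[reduce](out[idx], val)
    return out

print(scatter_reduce([1.0, 2.0, 3.0, 4.0], [0, 1, 0, 1], 2))          # [4.0, 6.0]
print(scatter_reduce([1.0, 2.0, 3.0, 4.0], [0, 1, 0, 1], 2, "amax"))  # [3.0, 4.0]
```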
Other improvements
* Added `torch_scatter` to the compatibility table in the PopTorch documentation.
Known issues
None
Compatibility changes
None