5.1.5. PopTorch changelog
2.6.0+74275
New features
Improved performance of torch.gather in some cases where the index tensor has come from an expand or expand_as.
Improved error message when trying to apply bitwise ops to unsupported scalar types.
Support for upsample bicubic mode.
Support for zero_infinity in torch.nn.CTCLoss (see the sketch after this list).
Experimental support for Torch's dispatcher as an alternative to torch.jit.trace() (see Dispatcher support).
Improved performance by compiling built-in custom ops at install time.
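To illustrate the new zero_infinity option, here is a minimal sketch of a CTC loss module wrapped for the IPU with poptorch.inferenceModel. The module, tensor shapes and dtypes are illustrative assumptions, not part of the release notes.

```python
import torch
import poptorch

class CTCModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # zero_infinity=True zeroes out infinite losses (and their gradients)
        # that arise when a target is too long for the given input length.
        self.loss = torch.nn.CTCLoss(zero_infinity=True)

    def forward(self, log_probs, targets, input_lengths, target_lengths):
        return self.loss(log_probs, targets, input_lengths, target_lengths)

# Illustrative shapes: 50 time steps, batch of 4, 20 classes (class 0 is the blank).
T, N, C, S = 50, 4, 20, 10
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.int32)
input_lengths = torch.full((N,), T, dtype=torch.int32)
target_lengths = torch.full((N,), S, dtype=torch.int32)

ipu_model = poptorch.inferenceModel(CTCModel())
loss = ipu_model(log_probs, targets, input_lengths, target_lengths)
```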
Bug Fixes
Fixed remaining in-place operations on slices.
Fixed einsum transpose error.
Fixed floating point exception in torch.Tensor.exponential_ and torch.distributions.Exponential.
Improved support for torch.int16 tensors.
2.5.0
New features
Support for torch.var
Support for torch.std
Support for torch.var_mean (see the sketch after this list)
Support for torch.std_mean
Support for col2im (used by torch.nn.Fold)
Support for torch.argsort
Support for torch.nn.RNN
Support for torch.nn.utils.weight_norm
Support for torch.randperm
Support for torch.nn.functional.cosine_similarity and torch.nn.CosineSimilarity
Support for torch.all, torch.any, torch.Tensor.all and torch.Tensor.any
Support for torch.Tensor.exponential_ and torch.distributions.Exponential
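As a minimal illustration of two of the newly supported ops (torch.var_mean and cosine similarity) running through PopTorch, assuming the usual poptorch.inferenceModel wrapper; the module and tensor shapes are illustrative, not taken from the release notes.

```python
import torch
import poptorch

class Stats(torch.nn.Module):
    def forward(self, x, y):
        # Newly supported: variance and mean computed in a single op.
        var, mean = torch.var_mean(x, dim=1)
        # Newly supported: cosine similarity between matching rows of x and y.
        sim = torch.nn.functional.cosine_similarity(x, y, dim=1)
        return var, mean, sim

ipu_model = poptorch.inferenceModel(Stats())
var, mean, sim = ipu_model(torch.randn(8, 16), torch.randn(8, 16))
```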
Bug fixes
Fix thread safety issue in LogContext
Fix torch.clamp with integer tensors
Fix in-place modification of slices
Fix torch.index_put_ when operating on slices
Fix torch.chunk when dim size is indivisible by the specified number of chunks
Fix cases where tensor.half() was in-place
Fix tracing with half buffers
Fix for loops with in-place ops
Fix torch.flip with negative indices
Fix masked fill when using tensor indexing syntax
Fix some cases where use of serializedMatMul was ignored or resulted in errors
Other improvements
Ignore missing values when reloading an Optimizer state
Support saving Optimizer states when compiling offline
Also save the random number generator’s state and the seed when saving a model
Improve error message of aten::index, aten::index_put_ when indexing with boolean tensor masks
Add support for __repr__ in PoplarExecutor
For models annotated with BeginBlock, show the IPU blocks in repr(model) (see the sketch after this list)
Improve implementation of torch.scatter_add
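A minimal sketch of the repr improvement for models annotated with BeginBlock, assuming the wrapper form poptorch.BeginBlock(layer, ipu_id=...); the layer sizes and IPU assignments are illustrative.

```python
import torch
import poptorch

# Assumed annotation style: wrapping layers in poptorch.BeginBlock to assign
# them to IPUs. Printing the model (or the PoplarExecutor returned by
# poptorch.inferenceModel, which now supports __repr__) includes the IPU
# block annotations.
model = torch.nn.Sequential(
    poptorch.BeginBlock(torch.nn.Linear(16, 32), ipu_id=0),
    torch.nn.ReLU(),
    poptorch.BeginBlock(torch.nn.Linear(32, 4), ipu_id=1),
)
print(repr(model))  # shows which IPU block each annotated layer belongs to
```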