5.1.5. PopTorch changelog
2.6.0+74275
New features
- Improved performance of torch.gather in some cases where the index tensor has come from an expand or expand_as.
- Improved error message when trying to apply bitwise ops to unsupported scalar types.
- Support for upsample bicubic mode.
- Support for zero_infinity in torch.nn.CTCLoss.
- Experimental support for Torch's dispatcher as an alternative to torch.jit.trace() (see Dispatcher support).
- Improved performance by compiling built-in custom ops at install time.
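For illustration, a minimal sketch of the newly supported zero_infinity flag. The shapes and values are arbitrary; this is plain PyTorch, and running it on IPU assumes the computation is wrapped in a model compiled with PopTorch (e.g. poptorch.inferenceModel()):

```python
import torch

# zero_infinity=True zeroes out infinite losses (and their gradients),
# which arise e.g. when a target is longer than the input sequence.
loss_fn = torch.nn.CTCLoss(blank=0, zero_infinity=True)

log_probs = torch.randn(50, 4, 20).log_softmax(2)       # (T, N, C)
targets = torch.randint(1, 20, (4, 30), dtype=torch.long)
input_lengths = torch.full((4,), 50, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

loss = loss_fn(log_probs, targets, input_lengths, target_lengths)
```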
Bug fixes
- Fixed remaining in-place operations on slices.
- Fixed einsum transpose error.
- Fixed floating point exception in torch.Tensor.exponential_ and torch.distributions.Exponential.
- Improved support for torch.int16 tensors.
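As a usage sketch for the exponential sampling fix (rates and shapes here are arbitrary):

```python
import torch

# Two equivalent ways to draw exponential samples; both paths could
# previously raise a floating point exception under PopTorch.
x = torch.empty(8).exponential_(lambd=1.5)        # in-place sampling
y = torch.distributions.Exponential(rate=1.5).sample((8,))
```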
2.5.0
New features
- Support for torch.var
- Support for torch.std
- Support for torch.var_mean
- Support for torch.std_mean
- Support for col2im (used by torch.nn.Fold)
- Support for torch.argsort
- Support for torch.nn.RNN
- Support for torch.nn.utils.weight_norm
- Support for torch.randperm
- Support for torch.nn.functional.cosine_similarity and torch.nn.CosineSimilarity
- Support for torch.all, torch.any, torch.Tensor.all and torch.Tensor.any
- Support for torch.Tensor.exponential_ and torch.distributions.Exponential
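A minimal sketch exercising a few of the ops listed above. This is plain PyTorch with arbitrary shapes; running on IPU assumes the enclosing torch.nn.Module is compiled with PopTorch:

```python
import torch

x = torch.randn(4, 16)
y = torch.randn(4, 16)

std, mean = torch.std_mean(x, dim=1)              # fused std + mean
var, _ = torch.var_mean(x, dim=1, unbiased=False)
order = torch.argsort(x, dim=1)                   # per-row sort indices
sim = torch.nn.functional.cosine_similarity(x, y, dim=1)
ok = torch.all(sim <= 1.0) & torch.any(sim > -1.0)
```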
Bug fixes
- Fix thread safety issue in LogContext
- Fix torch.clamp with integer tensors
- Fix in-place modification of slices
- Fix torch.index_put_ when operating on slices
- Fix torch.chunk when the dim size is indivisible by the specified number of chunks
- Fix cases where tensor.half() was in-place
- Fix tracing with half buffers
- Fix for loops with in-place ops
- Fix torch.flip with negative indices
- Fix masked fill when using tensor indexing syntax
- Fix some cases where use of serializedMatMul was ignored or resulted in errors
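For example, the torch.chunk fix covers indivisible splits like the following (a plain PyTorch sketch):

```python
import torch

t = torch.arange(10)
# 10 is not divisible by 3, so the chunk sizes are 4, 4 and 2; such
# uneven splits are the cases addressed by the fix above.
a, b, c = torch.chunk(t, 3)
```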
Other improvements
- Ignore missing values when reloading an Optimizer state
- Support saving Optimizer states when compiling offline
- Also save the random number generator's state and the seed when saving a model
- Improve error message of aten::index and aten::index_put_ when indexing with boolean tensor masks
- Add support for __repr__ in PoplarExecutor
- For models annotated with BeginBlock, show the IPU blocks in repr(model)
- Improve implementation of torch.scatter_add
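A minimal sketch of the repr(model) improvement for BeginBlock-annotated models. The layer sizes and IPU ids are arbitrary, and a working PopTorch installation is assumed:

```python
import torch
import poptorch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Assign each stage of the model to an IPU block.
        self.fc1 = poptorch.BeginBlock(torch.nn.Linear(16, 16), ipu_id=0)
        self.fc2 = poptorch.BeginBlock(torch.nn.Linear(16, 4), ipu_id=1)

    def forward(self, x):
        return self.fc2(self.fc1(x))

print(repr(Model()))  # the IPU block annotations now appear in the output
```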