5.1.5. PopTorch changelog

2.6.0+74275

New features

  • Improved performance of torch.gather in some cases where the index tensor comes from an expand or expand_as operation.

  • Improved error message when trying to apply bitwise ops to unsupported scalar types.

  • Support for upsample bicubic mode.

  • Support for zero_infinity in torch.nn.CTCLoss.

  • Experimental support for Torch’s dispatcher as an alternative to torch.jit.trace() (see Dispatcher support).

  • Improved performance by compiling built-in custom ops at install time.
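
The zero_infinity option in torch.nn.CTCLoss, listed above, replaces infinite losses (which arise when no valid alignment exists, e.g. a target longer than the input) with zero. A minimal plain-PyTorch sketch of that behaviour on CPU; running it under PopTorch on an IPU is not shown here:

```python
import torch
import torch.nn as nn

# A target longer than the input sequence has no valid CTC alignment,
# so the loss would be infinite; zero_infinity=True zeroes it instead.
T, N, C = 2, 1, 5                                  # input length, batch, classes
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, 10), dtype=torch.long)
input_lengths = torch.tensor([T])
target_lengths = torch.tensor([10])                # 10 > T, impossible target

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
# loss is 0.0 rather than inf
```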

Bug fixes

  • Fixed remaining in-place operations on slices.

  • Fixed einsum transpose error.

  • Fixed floating point exception in torch.Tensor.exponential_ and torch.distributions.Exponential.

  • Improved support for torch.int16 tensors.
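
For reference, the kind of einsum transpose pattern covered by the fix above looks like this (a plain PyTorch sketch, not PopTorch-specific):

```python
import torch

# Transposing with einsum: the subscripts 'ij->ji' swap the two dimensions.
m = torch.arange(6.0).reshape(2, 3)
t = torch.einsum("ij->ji", m)   # same result as m.t()
```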

2.5.0

New features

  • Support for torch.var

  • Support for torch.std

  • Support for torch.var_mean

  • Support for torch.std_mean

  • Support for col2im (used by torch.nn.Fold)

  • Support for torch.argsort

  • Support for torch.nn.RNN

  • Support for torch.nn.utils.weight_norm

  • Support for torch.randperm

  • Support for torch.nn.functional.cosine_similarity and torch.nn.CosineSimilarity

  • Support for torch.all, torch.any, torch.Tensor.all and torch.Tensor.any

  • Support for torch.Tensor.exponential_ and torch.distributions.Exponential
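
A short plain-PyTorch sketch of two of the newly supported ops above, torch.var_mean / torch.std_mean (which return both statistics from one call) and cosine similarity; IPU execution via PopTorch is not shown:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

# var_mean / std_mean compute both statistics together.
var, mean = torch.var_mean(x, dim=1)
std, _ = torch.std_mean(x, dim=1)

# cosine_similarity compares rows pairwise along dim=1.
a = torch.tensor([[1.0, 0.0]])
b = torch.tensor([[0.0, 1.0]])
sim = F.cosine_similarity(a, b)   # orthogonal vectors, similarity 0
```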

Bug fixes

  • Fix thread safety issue in LogContext

  • Fix torch.clamp with integer tensors

  • Fix in-place modification of slices

  • Fix torch.index_put_ when operating on slices

  • Fix torch.chunk when the dimension size is not divisible by the specified number of chunks

  • Fix cases where tensor.half() was in-place

  • Fix tracing with half buffers

  • Fix for loops with in-place ops

  • Fix torch.flip with negative indices

  • Fix masked fill when using tensor indexing syntax

  • Fix some cases where use of serializedMatMul was ignored or resulted in errors
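
The torch.chunk case fixed above concerns splits that don't divide evenly; in standard PyTorch the last chunk is simply smaller, as this CPU sketch shows:

```python
import torch

# 10 elements split into 3 chunks: sizes are 4, 4 and a smaller final 2.
x = torch.arange(10)
chunks = torch.chunk(x, 3)
sizes = [c.numel() for c in chunks]   # [4, 4, 2]
```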

Other improvements

  • Ignore missing values when reloading an Optimizer state

  • Support saving Optimizer states when compiling offline

  • Also save the random number generator’s state and the seed when saving a model

  • Improve error messages for aten::index and aten::index_put_ when indexing with boolean tensor masks

  • Add support for __repr__ in PoplarExecutor

  • For models annotated with BeginBlock, show the IPU blocks in repr(model)

  • Improve implementation of torch.scatter_add
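
For reference, torch.scatter_add (whose implementation the item above improves) accumulates source values into the output at positions given by an index tensor, summing where indices repeat; a minimal plain-PyTorch sketch:

```python
import torch

# Along dim 0: out[index[i]] += src[i], so repeated indices accumulate.
src = torch.ones(5)
index = torch.tensor([0, 1, 0, 2, 1])
out = torch.zeros(3).scatter_add(0, index, src)   # [2., 2., 1.]
```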