5.1. PopTorch
3.1.0
New features
- Upgraded from PyTorch 1.10 to 1.13. 
- Added support for variables being sharded across replicas. 
- `poptorch.set_overlap_for_input` and `poptorch.set_overlap_for_output` can now be applied to tuples, lists, and dicts of tensors (see the sketch after this list).
- PopTorch now catches `aten::lstm` directly when compiling with dispatch for PopART, allowing `set_available_memory` to work with it.
- Added support for `aten::index_fill_.int_Scalar`.
- Added support for dict inputs. 
- Added support for `torch.count_nonzero`.
- Support the `tanh` approximation for GELU.
- Added support for the `torch.scatter_reduce` operation.
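The following is a minimal sketch of how the overlap and LSTM items above might be combined: a model that takes a dict of tensors, marks the whole container for overlapped host input, and caps the available-memory proportion of an LSTM. The `OverlapMode` value shown is illustrative, and overlapping host I/O may require additional `poptorch.Options` settings not shown here; treat this as a sketch rather than a complete recipe.

```python
import torch
import poptorch


class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

    def forward(self, batch):
        # set_overlap_for_input can now be applied to a whole dict of tensors
        # rather than to each tensor individually.
        batch = poptorch.set_overlap_for_input(
            batch, poptorch.OverlapMode.OverlapAccumulationLoop)

        out, _ = self.lstm(batch["sequence"])

        # aten::lstm is now caught directly by the dispatcher, so
        # set_available_memory can be applied to its output.
        out = poptorch.set_available_memory(out, 0.3)

        out = out[:, -1, :] + batch["offset"]
        return poptorch.set_overlap_for_output(
            out, poptorch.OverlapMode.OverlapAccumulationLoop)


opts = poptorch.Options()
model = poptorch.inferenceModel(Model(), opts)
result = model({"sequence": torch.randn(4, 10, 16),
                "offset": torch.randn(4, 32)})
```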
Bug Fixes
- Fixed `clamp_max` in cases where the max is large.
- Fixed shape inference failing on PopART for argsort, GRU and norm ops. 
- Fixed shape inference for strided slices. 
- Fixed casting of groupnorm. 
- Fixed an issue where the alpha and beta arguments were flipped for `torch.addmm` (see the example after this list).
- Fixed a “not representable” error when using `BCEWithLogitsLoss` with a dtype of `half`.
- Fixed intermittent compilation hang caused by tqdm (progress bar). 
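As a quick check of the `torch.addmm` fix, the snippet below compares the IPU result against a CPU reference; `torch.addmm(input, mat1, mat2, beta=b, alpha=a)` computes `b * input + a * (mat1 @ mat2)`. This assumes an attached IPU (or the IPU Model emulator) is available.

```python
import torch
import poptorch


class AddmmModel(torch.nn.Module):
    def forward(self, inp, mat1, mat2):
        # Computes 0.5 * inp + 2.0 * (mat1 @ mat2).
        return torch.addmm(inp, mat1, mat2, beta=0.5, alpha=2.0)


inp = torch.ones(2, 2)
mat1 = torch.full((2, 3), 2.0)
mat2 = torch.full((3, 2), 3.0)

cpu_out = AddmmModel()(inp, mat1, mat2)
ipu_out = poptorch.inferenceModel(AddmmModel())(inp, mat1, mat2)

# Every element should be 0.5 * 1 + 2.0 * (3 * 2.0 * 3.0) = 36.5 on both devices.
print(torch.allclose(cpu_out, ipu_out))
```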
Other improvements
- Fixed in-place modification of slice regions. 
- Documentation typo fixes and clarifications. 
- Improved error message when encountering CPU tensors. 
- Use the IPU `DispatchKey` instead of the XLA `DispatchKey`, which means that error messages now mention IPU rather than XLA.
Known issues
None
Compatibility changes
- Dropped support for Python 3.6 (in order to upgrade to PyTorch 1.13). 
- Removed support for `torch.jit.trace()`. For help on migration issues when using the dispatcher frontend, see the Legacy tracing frontend section in the 3.0.0 version of the PyTorch for the IPU: User Guide. A minimal dispatcher-frontend sketch follows this list.
- Removed support for building on CentOS 7.x. 
- Removed the `Autocast` API (this was only available when using the tracing frontend).
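For code that previously relied on `torch.jit.trace()`, a minimal dispatcher-frontend equivalent is sketched below: the `torch.nn.Module` is wrapped directly and compiled on the first call, with no separate tracing step. The model and option values are placeholders.

```python
import torch
import poptorch


class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 2)

    def forward(self, x):
        return torch.nn.functional.softmax(self.fc(x), dim=-1)


# No tracing step: wrap the module directly and call it.
# Compilation for the IPU happens on the first call.
opts = poptorch.Options()
opts.deviceIterations(1)

ipu_model = poptorch.inferenceModel(Net(), opts)
out = ipu_model(torch.randn(4, 10))
```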
3.0.0
New features
- Enable the dispatcher by default. The dispatcher frontend replaces the tracing frontend and provides many benefits, including better performance and fewer and simpler PopTorch-specific coding requirements. The dispatcher frontend is now enabled by default, so no special changes are required to use it. See the Legacy tracing frontend section in the 3.0.0 version of the PyTorch for the IPU: User Guide for help on specific migration issues.
- Add support for interactive querying of `poptorch.Options`.
- Support `Tensor.unfold` in PyTorch (see the sketch after this list).
- Improve error message for `aten::item` in static graphs.
- Support dictionary input in PopTorch.
- Improve error reporting for lists and tuples not being supported for `set_overlap_for_input`.
- Support for `tensor.new_full`.
- Improved performance of `index_put` when the indices are a one-dimensional vector.
- Support for `torch_scatter`.
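The sketch below exercises two of the items above, `Tensor.unfold` and `tensor.new_full`, inside a PopTorch inference model; the module and shapes are illustrative.

```python
import torch
import poptorch


class SlidingWindow(torch.nn.Module):
    def forward(self, x):
        # Tensor.unfold extracts sliding windows: length 4, stride 2,
        # along the last dimension.
        windows = x.unfold(-1, 4, 2)
        # tensor.new_full creates a constant with matching dtype and device.
        bias = x.new_full((1,), 0.5)
        return windows.mean(dim=-1) + bias


model = poptorch.inferenceModel(SlidingWindow())
out = model(torch.arange(16, dtype=torch.float).reshape(2, 8))
print(out.shape)  # torch.Size([2, 3]): three windows per row of length 8
```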
Bug Fixes
- Fixed `BatchNorm` running statistics. It now uses the unbiased estimator to update `running_var` at training time.
- Scalar tensor inputs to the graph now work.
- Fixed `expand` when the desired shape contained both added dimensions and -1.
- Fixed a bug where `torch.nn.functional.kl_div` would produce NaN (see the example after this list).
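A minimal example of the now-fixed `torch.nn.functional.kl_div` path is sketched below; as in standard PyTorch, the first argument must contain log-probabilities. The shapes and model are illustrative.

```python
import torch
import torch.nn.functional as F
import poptorch


class KLDiv(torch.nn.Module):
    def forward(self, log_probs, target_probs):
        # Input is log-probabilities, target is probabilities (the default).
        return F.kl_div(log_probs, target_probs, reduction="batchmean")


log_probs = F.log_softmax(torch.randn(8, 5), dim=-1)
target_probs = F.softmax(torch.randn(8, 5), dim=-1)

model = poptorch.inferenceModel(KLDiv())
print(model(log_probs, target_probs))  # expect a finite value, not NaN
```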
Other improvements
None
Known issues
None
Compatibility changes
None