13. Changelog
13.1. v2.2 (Poplar SDK 2.2)
13.1.1. New features
Migrated to PyTorch version 1.9.0
Support for torch.roll
Support for torch.clone
Add modelName session option that can be passed to PopART
Support list inputs to a model
Tuples/Lists of constants can now be returned by a model
Add enableProfiling convenience method in poptorch.Options to enable profile report generation
Fix bug with torch.Tensor.repeat when applied to an input during training
Fix bug with aten::to when applied to a constant used as an input to another node
Improved error message when encountering untraceable types during compilation
Support for torch.gather. Please note: this operator is known to cause long compilation times. Consider using a one-hot-based solution instead, or torch.index_select if appropriate (see the sketch after this list).
Using a convolution layer op with a padding value greater than or equal to kernel_size is now supported.
Support for torch.Tensor.new_ones and torch.Tensor.new_zeros
Support for Poplar recoverable and unrecoverable errors.
Support for torch.flip.
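For the common case of selecting whole rows, the sketch below (plain PyTorch, with illustrative tensor values) shows how torch.index_select can stand in for torch.gather and avoid the long compilation times noted above.

    import torch

    x = torch.arange(12.0).reshape(4, 3)
    idx = torch.tensor([0, 2])

    # Selecting whole rows with index_select avoids torch.gather entirely.
    rows = x.index_select(0, idx)

    # Equivalent gather call: broadcast the row indices across the columns.
    rows_via_gather = torch.gather(x, 0, idx.unsqueeze(1).expand(-1, x.size(1)))

    assert torch.equal(rows, rows_via_gather)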
13.1.2. API changes
Removed accumulationReductionType which was deprecated in 2.1 in favour of accumulationAndReplicationReductionType in poptorch.Options.Training
Removed runningVarianceAlwaysFloat which was deprecated in 2.1 and replaced by runningStatisticsAlwaysFloat in poptorch.Options.Precision
13.2. v2.1 (Poplar SDK 2.1)
13.2.1. New features
Support for torch.unbind
Add option to set poptorch.Options using options specified in a config file.
Add mode=poptorch.DataLoaderMode.AsyncRebatched
Support for PopART name scopes via poptorch.NameScope
Add mixed precision automatic casting
Support for torch.cross
Support for torch.functional.one_hot
Support for torch.int8 data types
Support for torch.median
Support for torch.index_select
Support for torch.scatter_add
Add poptorch.Options.Precision.enableFloatingPointExceptions to control floating point exception behavior (see the sketch after this list)
Support for inplace changes to inputs.
Add option to log the number of IPU cycles used in executing the main graph
Support for torch.nn.GRU
Add automatic loss scaling option which can be enabled via poptorch.Options.Training.setAutomaticLossScaling.
Add poptorch.BlockFunction decorator for assigning an existing function to a block.
Add mechanism for inspecting arbitrary tensors
Add custom operator for CTC beam search decoding: poptorch.ctc_beam_search_decoder
Add a separate tensor variant (now default) to the SGD optimiser.
Add a TensorFlow variant to the RMSProp optimiser.
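A minimal sketch of how the two options referenced above might be set together. The method names follow the entries in this list; the True arguments and exact signatures are assumptions, so check the current API reference.

    import poptorch

    opts = poptorch.Options()
    # Raise floating point exceptions instead of silently producing NaN/Inf.
    opts.Precision.enableFloatingPointExceptions(True)
    # Turn on automatic loss scaling for mixed-precision training (assumed flag).
    opts.Training.setAutomaticLossScaling(True)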
13.2.2. API changes
Removed Options.Popart which was deprecated in v2.0 and replaced with Options._Popart
Removed MultiConvPartialsType which was deprecated in v2.0
Deprecated poptorch.Options.Training.accumulationReductionType in favour of poptorch.Options.Training.accumulationAndReplicationReductionType
Deprecated runningVarianceAlwaysFloat in favour of runningStatisticsAlwaysFloat in poptorch.Options.Precision, as this new option computes both the running mean and variance in FP32 when set to True.
Use of SGD via PyTorch's or PopTorch's API now results in use of the new separate tensor variant by default. To revert to the previous default variant, use poptorch.optim.SGD with use_combined_accum=True (see the sketch after this list).
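A sketch of reverting to the previous combined variant of SGD. Apart from use_combined_accum (named above), the model and learning rate are illustrative and the remaining arguments mirror torch.optim.SGD.

    import torch
    import poptorch

    model = torch.nn.Linear(4, 2)
    # use_combined_accum=True restores the previous combined variant;
    # omit it (or pass False) to use the new separate tensor variant.
    optimizer = poptorch.optim.SGD(model.parameters(), lr=0.01,
                                   use_combined_accum=True)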
13.2.3. Known issues
Using a convolution layer op with a padding value greater than or equal to kernel_size results in an error when training. Workaround: use a constant pad layer to apply the excess padding prior to the convolution (see the sketch below).
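One way to apply the workaround in plain PyTorch: move the excess padding into an explicit constant pad layer ahead of the convolution (the layer sizes below are illustrative).

    import torch

    # padding=3 with kernel_size=3 would hit the issue, so pad explicitly instead.
    pad = torch.nn.ConstantPad2d(3, 0.0)
    conv = torch.nn.Conv2d(8, 8, kernel_size=3, padding=0)

    x = torch.randn(1, 8, 16, 16)
    y = conv(pad(x))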
13.3. v2.0 (Poplar SDK 2.0)
13.3.1. New features
Support for the following activation functions:
torch.nn.acosh
torch.nn.asinh
torch.nn.atanh
torch.nn.Hardshrink
torch.nn.SiLU
torch.nn.Softplus
torch.nn.Softshrink
torch.nn.Threshold
Support for the following random sampling operations:
torch.bernoulli
torch.distributions.Bernoulli
Experimental support for torch.nn.CTCLoss
Add Adam optimizer
Support for torch.nn.AdaptiveAvgPool1d, torch.nn.AdaptiveAvgPool3d
Migrated to PyTorch version 1.7.1
Support for aten::index, aten::index_put_
Support for torch.zeros_like, torch.ones_like
Allow the user to specify which Optimizer attributes are constant or not.
Allow the user to specify mode=poptorch.DataLoaderMode.Async in the poptorch.DataLoader constructor instead of explicitly creating an AsynchronousDataAccessor (see the sketch after this list)
Support for torch.nn.EmbeddingBag
Support for torch.clamp_max and torch.clamp_min
Support for torch.min(tensor, dim=.*, keepdim=.*) and torch.max(tensor, dim=.*, keepdim=.*) overloads.
Support for poptorch.isRunningOnIpu. This function returns True when executing on the IPU and False when executing the model outside IPU scope.
Support for torch.amax and torch.amin
Support for attributes in custom ops.
Support for precompilation and reloading exported executables (poptorch.PoplarExecutor.compileAndExport and poptorch.load)
Support for slices with variable start index (slice size must be constant).
Add ipuHardwareVersion function to read the version of the IPU hardware present on the system.
Changed default targeted IPU version for the model and offline compilation to 2.
Changed accumulationReductionType(reduction) option to now apply to replication reduction as well
Add environment variable POPTORCH_CACHE_DIR
Support for torch.fmod and torch.remainder
Support for torch.addcdiv
Support for torch.bitwise_not
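A sketch of requesting asynchronous loading directly from the poptorch.DataLoader constructor, as mentioned above, instead of wrapping the loader in an AsynchronousDataAccessor. The dataset and batch size are placeholders.

    import torch
    import poptorch

    opts = poptorch.Options()
    dataset = torch.utils.data.TensorDataset(torch.randn(128, 4))

    # mode=Async offloads loading to a background process (sketch; see the
    # poptorch.DataLoader reference for the full argument list).
    loader = poptorch.DataLoader(opts, dataset, batch_size=16,
                                 mode=poptorch.DataLoaderMode.Async)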
13.3.2. API changes
Deprecated Options.Popart; Options._Popart may be used experimentally.
13.4. v1.0 (Poplar SDK 1.4)
13.4.1. New features
Support for torch.nn.InstanceNorm1d, torch.nn.InstanceNorm2d and torch.nn.InstanceNorm3d
Fixed issue with torch.nn.GroupNorm where only 4-dimensional inputs could be used
Replaced Adam with AdamW optimizer.
Support for the following loss functions:
torch.nn.KLDivLoss
torch.nn.PoissonNLLLoss
torch.nn.HingeEmbeddingLoss
torch.nn.BCEWithLogitsLoss
torch.nn.SmoothL1Loss
torch.nn.SoftMarginLoss
torch.nn.CosineEmbeddingLoss
torch.nn.MarginRankingLoss
torch.nn.TripletMarginLoss
torch.nn.NLLLoss for aten::nll_loss2d
Support for torch.optim.RMSprop optimizer
Support for bool inputs to models
Improved support for half type models and inputs.
Using a mix of float 16 and float 32 inputs is now supported. Please see the documentation for cases in which a model might use different types compared to when run natively with PyTorch.
Support for serialized matrix multiplications (poptorch.serializedMatMul)
Support for POPTORCH_IPU_MODEL_VERSION environment variable.
Support for torch.cumsum
Support for pipelined / phased / sharded execution.
Add PoplarExecutor.compile() to compile the model without executing it.
Use sphinx-build to generate the documentation.
Use Miniconda as build environment.
Support for torch.meshgrid
Support for torch.cartesian_prod
Optimized torch.matmul implementation with limitations
Fused its input 0’s batch dimensions with the row dimension to avoid a ReduceSum in its backward pass, for performance reasons
Partial support for torch.einsum
Diagonals and ellipsis notation are unsupported
Support for executable caching: poptorch.Options.enableExecutableCaching()
Add optional title argument to poptorch.ipu_print_tensor
Add len() method to poptorch.AsynchronousDataAccessor
Support for LAMB optimizer
Support for recomputationCheckpoint()
Support for torch.tensordot
Support for rounding up the number of IPUs used, to allow models which specify a number of IPUs that is not a power of 2: poptorch.Options.autoRoundNumIPUs(True) (see the sketch after this list). Note that this will reserve, but not use, the extra IPUs, so it is preferable to specify a number of IPUs for the model that is a power of two
Support for multi-convolutions with poptorch.MultiConv
Support for PopART batch serialization settings
These can be set via poptorch.Options().Popart.set()
Added support for the PopVision System Analyser: tracing can be enabled by setting PVTI_OPTIONS='{"enable":"true"}'
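A sketch combining two of the options listed above. The cache directory name is illustrative, and the exact signatures should be confirmed against the API reference.

    import poptorch

    opts = poptorch.Options()
    # Round a non-power-of-two IPU request up to the next power of two.
    opts.autoRoundNumIPUs(True)
    # Cache compiled executables between runs to avoid recompilation.
    opts.enableExecutableCaching("./exe_cache")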
13.4.2. Known issues
Race condition in poptorch.DataLoader when using several workers, resulting in the iteration sometimes finishing one element early. Workaround: set num_workers to 0 or 1.
poptorch.custom_op() doesn’t allow the user to set attributes. Workaround: hardcode the attributes in the custom operation or pass them as regular inputs.
Graphs containing block annotations (poptorch.Block or poptorch.BeginBlock) cannot be exported using torch.save(). Workaround: make a soft copy of the model that doesn’t contain Blocks and use it to save/load the weights. (The weights should be shared between the two models.)
Lists of tensors are not supported as inputs. Workaround: use tuples instead.

    # Use a tuple
    assert inference_model((t1, t2))  # instead of [t1, t2]
13.5. v0.1 (Poplar SDK 1.3)
13.5.1. New features
PopTorch now exposes PopART anchor options to choose how much data to return from a model. These are passed into the model wrapper via anchor_mode. The options are Sum, All, Final and EveryN.
Support for batched LSTM and batch first
An Options object can now be passed to poptorch.trainingModel / poptorch.inferenceModel to configure the session and select IPUs
The ‘profile’ option has been removed; instead, profiling can be enabled by setting the environment variable POPLAR_ENGINE_OPTIONS='{"autoReport.all":"true", "autoReport.directory":"."}'
Support for POPTORCH_IPU_MODEL and POPTORCH_WAIT_FOR_IPU environment variables.
Support for the torch comparison operations:
torch.eq
torch.ge
torch.gt
torch.le
torch.lt
torch.max
torch.min
torch.ne
torch.isnan
torch.topk
torch.min and torch.max only support (tensor, tensor) and (tensor) overloads. They do not support the (tensor, dim=, keepdim=) overload.
torch.topk only supports sorted=False and largest=True
Automatically synchronise the weights back to the host after using the IPU for training (i.e. no need to explicitly call copyWeightsToHost() anymore)
Support for non-linear activations torch.nn.PReLU and torch.nn.Hardtanh
Support for Adam optimizer.
Support for half type models and inputs.
Models that require operations on input tensors of mixed precision are not currently supported. For example:
    def forward(self, x, y):
        # x: half, y: float32
        return x + y  # Not supported
Support for tensor.fill_, torch.full, torch.full_like
Support for user provided custom operations. See the PopART documentation for information on how to write them. They are exposed by poptorch.custom_op; this takes in a list of input tensors, strings for the PopART op name and domain, the domain version, and a list of tensors the same shape and size as the expected output tensors. This ensures the PyTorch trace remains valid: because it traces on the CPU, it won’t actually execute the operation when building the graph (see the sketch at the end of this section).
Support for torch.nn.Conv1d / torch.nn.Conv2d / torch.nn.Conv3d
Support for torch.nn.Upsample (‘nearest’ mode only)
Support for tensor.size
Support for the following random sampling operations:
torch.rand
torch.uniform_
torch.distributions.Uniform
torch.randn
torch.normal
torch.normal_
For repeatable random number generation use the randomSeed method of poptorch.Options
Support for torch.clamp
Adds poptorch.DataLoader
Adds optimized poptorch.AsynchronousDataAccessor which allows for a dataloader to be offloaded to a background thread asynchronously.
Support for torch.norm
Upgraded from torch 1.5.0 to torch 1.6.0
Experimental support for single host distributed execution
Add torch.where and tensor.masked_fill
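A sketch of the poptorch.custom_op call described in the custom operations entry above, assuming a hypothetical PopART operator "Cube" registered in the "com.acme" domain. The example-output tensors only supply shape and type information to keep the CPU trace valid; the keyword name is an assumption and should be checked against the API reference.

    import torch
    import poptorch

    def apply_cube(x):
        # The operator only runs on the IPU; during CPU tracing the
        # example outputs stand in for the real outputs.
        return poptorch.custom_op([x],
                                  "Cube",        # PopART op name (hypothetical)
                                  "com.acme",    # PopART op domain (hypothetical)
                                  1,             # domain version
                                  example_outputs=[x])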