Poplar and PopLibs
GroupNorm.hpp File Reference

Group normalization operations. More...

#include "poplar/Program.hpp"
#include "poplar/Tensor.hpp"
#include <poplar/OptionFlags.hpp>
#include <utility>


Namespaces

namespace  popnn
 Functions used in neural networks.
 

Functions

std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormStatistics (poplar::Graph &graph, const poplar::Tensor acts, float eps, poplar::program::Sequence &prog, unsigned numGroups, bool unbiasedVarEstimate, bool stableAlgo=false, const poplar::Type &partialsType=poplar::FLOAT, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Estimate mean and inverse of standard deviation of activations. More...
 
poplar::Tensor popnn::gn::groupNormWhiten (poplar::Graph &graph, const poplar::Tensor &acts, const poplar::Tensor &mean, const poplar::Tensor &invStdDev, poplar::program::Sequence &prog, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Whiten activations given the mean and standard deviation. More...
 
std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormalise (poplar::Graph &graph, const poplar::Tensor &acts, const poplar::Tensor &gamma, const poplar::Tensor &beta, const poplar::Tensor &mean, const poplar::Tensor &invStdDev, poplar::program::Sequence &prog, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Group normalise activations given the mean, standard deviation and group norm parameters. More...
 
std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormParamGradients (poplar::Graph &graph, const poplar::Tensor &acts, const poplar::Tensor &gradsIn, const poplar::Tensor &mean, const poplar::Tensor &iStdDev, poplar::program::Sequence &prog, const poplar::Type &partialsType=poplar::FLOAT, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Compute gradients with respect to parameters for parameter update. More...
 
std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormParamGradients (poplar::Graph &graph, const poplar::Tensor &actsWhitened, const poplar::Tensor &gradsIn, poplar::program::Sequence &prog, const poplar::Type &partialsType=poplar::FLOAT, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Compute gradients with respect to parameters for parameter update. More...
 
poplar::Tensor popnn::gn::groupNormGradients (poplar::Graph &graph, const poplar::Tensor &acts, const poplar::Tensor &gradsIn, const poplar::Tensor &mean, const poplar::Tensor &invStdDev, const poplar::Tensor &gamma, poplar::program::Sequence &prog, const poplar::Type &partialsType=poplar::FLOAT, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Compute gradients with respect to input activations for the group norm layer. More...
 
poplar::Tensor popnn::gn::groupNormGradients (poplar::Graph &graph, const poplar::Tensor &actsWhitened, const poplar::Tensor &gradsIn, const poplar::Tensor &invStdDev, const poplar::Tensor &gamma, poplar::program::Sequence &prog, const poplar::Type &partialsType=poplar::FLOAT, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Compute gradients with respect to input activations for the group norm layer. More...
 
void popnn::gn::groupNormParamUpdate (poplar::Graph &graph, const poplar::Tensor &gammaDelta, const poplar::Tensor &betaDelta, float scale, poplar::Tensor &gamma, poplar::Tensor &beta, poplar::program::Sequence &prog, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Update parameters for the group norm layer. More...
 
void popnn::gn::groupNormParamUpdate (poplar::Graph &graph, const poplar::Tensor &gammaDelta, const poplar::Tensor &betaDelta, const poplar::Tensor &scale, poplar::Tensor &gamma, poplar::Tensor &beta, poplar::program::Sequence &prog, const poplar::DebugContext &debugContext={}, const poplar::OptionFlags &options={})
 Update parameters for the group norm layer. More...
 

Detailed Description

Group normalization operations.

Function Documentation

◆ groupNormalise()

std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormalise ( poplar::Graph &graph,
const poplar::Tensor &acts,
const poplar::Tensor &gamma,
const poplar::Tensor &beta,
const poplar::Tensor &mean,
const poplar::Tensor &invStdDev,
poplar::program::Sequence &prog,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Group normalise activations given the mean, standard deviation and group norm parameters.

Group normalisation options

  • groupNormStridedChannelGrouping (true, false) [=true]

    Select groups of channels for group normalisation with a stride between channels. This makes the implementation more efficient but is unconventional. Among other things, it means that pre-trained weights cannot be reused unless they were produced with this same unconventional grouping.

    If there are numGroups groups, then the channels in group groupIdx are given by (see the index sketch below):

    • Strided channel grouping: channelInGroupIdx * numGroups + groupIdx
    • Otherwise: channelInGroupIdx + channelsPerGroup * groupIdx

    In the case of instanceNormalise() and layerNormalise() (which use group norm in their implementation) this option will have no effect.
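
As an illustration only (the helper name is not part of the PopLibs API), the two grouping schemes described above map a (groupIdx, channelInGroupIdx) pair to a channel index as follows. For example, with C = 8 channels and numGroups = 2, strided grouping places channels {0, 2, 4, 6} in group 0, whereas the conventional grouping places channels {0, 1, 2, 3} in group 0.

    // Illustrative sketch only: channel index selection for the two grouping
    // schemes, assuming C channels split into numGroups groups of
    // channelsPerGroup = C / numGroups channels each.
    unsigned channelIndex(unsigned groupIdx, unsigned channelInGroupIdx,
                          unsigned numGroups, unsigned channelsPerGroup,
                          bool stridedChannelGrouping) {
      if (stridedChannelGrouping)
        return channelInGroupIdx * numGroups + groupIdx;      // strided grouping
      return channelInGroupIdx + channelsPerGroup * groupIdx; // contiguous grouping
    }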

Parameters
graph          The graph that the normalisation operation is added to.
acts           The input activations to whiten and normalise, with shape [B][C][..F..] where:
                 • B is the batch size
                 • C is the number of channels
                 • ..F.. are the dimensions of an N-dimensional field.
gamma          The gamma weights to multiply by when normalising the whitened activations.
beta           The beta weights to add when normalising the whitened activations.
mean           The mean to subtract when whitening the activations.
invStdDev      The inverse standard deviation to multiply by when whitening the activations.
prog           The program sequence to add the operation to.
debugContext   Optional debug information.
options        Group normalisation options.
Returns
Two tensors containing:
  • normalised activations
  • whitened activations
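
A minimal usage sketch (not from the PopLibs documentation; the helper name, eps and numGroups values are illustrative assumptions) showing how groupNormStatistics() and groupNormalise() are typically combined for a forward pass:

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>

    // Hypothetical helper: group norm forward pass. Returns the normalised
    // activations; the whitened activations can also be kept for the backward pass.
    poplar::Tensor groupNormForward(poplar::Graph &graph,
                                    const poplar::Tensor &acts,  // [B][C][..F..]
                                    const poplar::Tensor &gamma, // per-channel scale
                                    const poplar::Tensor &beta,  // per-channel offset
                                    unsigned numGroups,
                                    poplar::program::Sequence &prog) {
      // Per-group mean and inverse standard deviation (eps value is illustrative).
      auto stats = popnn::gn::groupNormStatistics(graph, acts, 1e-5f, prog,
                                                  numGroups,
                                                  false /* unbiasedVarEstimate */);
      // Whiten, then apply gamma and beta.
      auto result = popnn::gn::groupNormalise(graph, acts, gamma, beta,
                                              stats.first, stats.second, prog);
      return result.first; // result.second holds the whitened activations
    }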

◆ groupNormGradients() [1/2]

poplar::Tensor popnn::gn::groupNormGradients ( poplar::Graph &graph,
const poplar::Tensor &acts,
const poplar::Tensor &gradsIn,
const poplar::Tensor &mean,
const poplar::Tensor &invStdDev,
const poplar::Tensor &gamma,
poplar::program::Sequence &prog,
const poplar::Type &partialsType = poplar::FLOAT,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Compute gradients with respect to input activations for the group norm layer.

Gradients are propagated through the complete layer including statistics computation.

Parameters
graph          The graph that the normalisation operation is added to.
acts           The forward-pass activation inputs to this layer.
gradsIn        The gradient with respect to the output of this layer.
mean           The mean of the acts tensor, typically calculated using groupNormStatistics().
invStdDev      The inverse standard deviation of the acts tensor, typically calculated using groupNormStatistics().
gamma          The gamma weights to multiply by when normalising the whitened activations.
prog           The program sequence to add the operation to.
partialsType   Poplar type used for intermediate values. If the type specified is smaller than the input/output type then partialsType is ignored and the input/output type is used instead.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
Returns
A tensor containing the gradients with respect to the input activations for this layer.
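
A brief sketch (the helper name and the saved-tensor bookkeeping are assumptions, not part of the API) of computing the input gradient from tensors kept from the forward pass:

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>

    // Hypothetical helper: gradient w.r.t. the layer input, using the original
    // activations plus the statistics from groupNormStatistics().
    poplar::Tensor groupNormInputGrad(poplar::Graph &graph,
                                      const poplar::Tensor &acts,
                                      const poplar::Tensor &gradsIn,
                                      const poplar::Tensor &mean,
                                      const poplar::Tensor &invStdDev,
                                      const poplar::Tensor &gamma,
                                      poplar::program::Sequence &prog) {
      return popnn::gn::groupNormGradients(graph, acts, gradsIn, mean, invStdDev,
                                           gamma, prog);
    }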

◆ groupNormGradients() [2/2]

poplar::Tensor popnn::gn::groupNormGradients ( poplar::Graph &graph,
const poplar::Tensor &actsWhitened,
const poplar::Tensor &gradsIn,
const poplar::Tensor &invStdDev,
const poplar::Tensor &gamma,
poplar::program::Sequence &prog,
const poplar::Type &partialsType = poplar::FLOAT,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Compute gradients with respect to input activations for the group norm layer.

Gradients are propagated through the complete layer including statistics computation.

Parameters
graph          The graph that the normalisation operation is added to.
actsWhitened   The forward-pass whitened activation inputs to this layer.
gradsIn        The gradient with respect to the output of this layer.
invStdDev      The inverse standard deviation of the acts tensor, typically calculated using groupNormStatistics().
gamma          The gamma weights to multiply by when normalising the whitened activations.
prog           The program sequence to add the operation to.
partialsType   Poplar type used for intermediate values. If the type specified is smaller than the input/output type then partialsType is ignored and the input/output type is used instead.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
Returns
A tensor containing the gradients with respect to the input activations for this layer.
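
The same computation can start from the whitened activations returned by groupNormalise() or groupNormWhiten(), which avoids re-whitening the activations in the backward pass. Sketch only; the helper name is an assumption:

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>

    // Hypothetical helper: input gradient from the saved whitened activations.
    poplar::Tensor groupNormInputGradFromWhitened(poplar::Graph &graph,
                                                  const poplar::Tensor &actsWhitened,
                                                  const poplar::Tensor &gradsIn,
                                                  const poplar::Tensor &invStdDev,
                                                  const poplar::Tensor &gamma,
                                                  poplar::program::Sequence &prog) {
      return popnn::gn::groupNormGradients(graph, actsWhitened, gradsIn, invStdDev,
                                           gamma, prog);
    }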

◆ groupNormParamGradients() [1/2]

std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormParamGradients ( poplar::Graph &graph,
const poplar::Tensor &acts,
const poplar::Tensor &gradsIn,
const poplar::Tensor &mean,
const poplar::Tensor &iStdDev,
poplar::program::Sequence &prog,
const poplar::Type &partialsType = poplar::FLOAT,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Compute gradients with respect to parameters for parameter update.

Parameters
graph          The graph that the normalisation operation is added to.
acts           The forward-pass activation inputs to this layer.
gradsIn        The gradient with respect to the output of this layer.
mean           The mean of the acts tensor, typically calculated using groupNormStatistics().
iStdDev        The inverse standard deviation of the acts tensor, typically calculated using groupNormStatistics().
prog           The program sequence to add the operation to.
partialsType   Poplar type used for intermediate values. If the type specified is smaller than the input/output type then partialsType is ignored and the input/output type is used instead.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
Returns
A pair of tensors, gammaDelta and betaDelta which are the gradients with respect to gamma and beta.
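
A sketch of the call (the helper name is an assumption); the returned pair is (gammaDelta, betaDelta), which can then be fed to groupNormParamUpdate():

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>
    #include <utility>

    // Hypothetical helper: gradients w.r.t. gamma and beta, from the original
    // activations plus the saved statistics.
    std::pair<poplar::Tensor, poplar::Tensor>
    groupNormParamGrads(poplar::Graph &graph, const poplar::Tensor &acts,
                        const poplar::Tensor &gradsIn, const poplar::Tensor &mean,
                        const poplar::Tensor &iStdDev,
                        poplar::program::Sequence &prog) {
      return popnn::gn::groupNormParamGradients(graph, acts, gradsIn, mean,
                                                iStdDev, prog);
    }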

◆ groupNormParamGradients() [2/2]

std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormParamGradients ( poplar::Graph &graph,
const poplar::Tensor &actsWhitened,
const poplar::Tensor &gradsIn,
poplar::program::Sequence &prog,
const poplar::Type &partialsType = poplar::FLOAT,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Compute gradients with respect to parameters for parameter update.

Parameters
graph          The graph that the normalisation operation is added to.
actsWhitened   The forward-pass whitened activation inputs to this layer.
gradsIn        The gradient with respect to the output of this layer.
prog           The program sequence to add the operation to.
partialsType   Poplar type used for intermediate values. If the type specified is smaller than the input/output type then partialsType is ignored and the input/output type is used instead.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
Returns
A pair of tensors, gammaDelta and betaDelta which are the gradients with respect to gamma and beta.
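
If the whitened activations were kept from the forward pass, the statistics are not needed. Sketch only; the helper name is an assumption:

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>
    #include <utility>

    // Hypothetical helper: parameter gradients from the saved whitened activations.
    std::pair<poplar::Tensor, poplar::Tensor>
    groupNormParamGradsFromWhitened(poplar::Graph &graph,
                                    const poplar::Tensor &actsWhitened,
                                    const poplar::Tensor &gradsIn,
                                    poplar::program::Sequence &prog) {
      return popnn::gn::groupNormParamGradients(graph, actsWhitened, gradsIn, prog);
    }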

◆ groupNormParamUpdate() [1/2]

void popnn::gn::groupNormParamUpdate ( poplar::Graph &graph,
const poplar::Tensor &gammaDelta,
const poplar::Tensor &betaDelta,
const poplar::Tensor &scale,
poplar::Tensor &gamma,
poplar::Tensor &beta,
poplar::program::Sequence &prog,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Update parameters for the group norm layer.

Gradients are propagated through the complete layer including statistics computation.

The gamma and beta parameters are updated as follows:

  • gamma += gammaDelta * scale
  • beta += betaDelta * scale

Because scale is a tensor, its value can change at run time.

Parameters
graph          The graph that the normalisation operation is added to.
gammaDelta     Value used to update gamma.
betaDelta      Value used to update beta.
scale          Scale factor for gammaDelta and betaDelta.
gamma          The gamma weights to multiply by when normalising the activations.
beta           The beta weights to add when normalising the activations.
prog           The program sequence to add the operation to.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
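
A sketch of a gradient-descent style update where the scale lives in a tensor (for example a learning-rate value that changes at run time). The helper name and the use of a negative learning rate are assumptions:

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>

    // Hypothetical helper: gamma += gammaDelta * scale, beta += betaDelta * scale,
    // with scale held in a tensor so it can vary between runs.
    void updateGroupNormParams(poplar::Graph &graph,
                               const poplar::Tensor &gammaDelta,
                               const poplar::Tensor &betaDelta,
                               const poplar::Tensor &scale, // e.g. -learningRate
                               poplar::Tensor &gamma, poplar::Tensor &beta,
                               poplar::program::Sequence &prog) {
      popnn::gn::groupNormParamUpdate(graph, gammaDelta, betaDelta, scale,
                                      gamma, beta, prog);
    }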

◆ groupNormParamUpdate() [2/2]

void popnn::gn::groupNormParamUpdate ( poplar::Graph &graph,
const poplar::Tensor &gammaDelta,
const poplar::Tensor &betaDelta,
float scale,
poplar::Tensor &gamma,
poplar::Tensor &beta,
poplar::program::Sequence &prog,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Update parameters for the group norm layer.

Gradients are propagated through the complete layer including statistics computation.

The gamma and beta parameters are updated as follows:

  • gamma += gammaDelta * scale
  • beta += betaDelta * scale

Because scale is a float, its value is fixed when the graph is constructed.

Parameters
graph          The graph that the normalisation operation is added to.
gammaDelta     Value used to update gamma.
betaDelta      Value used to update beta.
scale          Scale factor for gammaDelta and betaDelta.
gamma          The gamma weights to multiply by when normalising the activations.
beta           The beta weights to add when normalising the activations.
prog           The program sequence to add the operation to.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
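
With a float scale the factor is fixed at graph-construction time; a plain SGD step might pass the negative learning rate. Sketch only, the helper name is an assumption:

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>

    // Hypothetical helper: SGD-style update with a compile-time constant scale.
    void updateGroupNormParamsSGD(poplar::Graph &graph,
                                  const poplar::Tensor &gammaDelta,
                                  const poplar::Tensor &betaDelta,
                                  float learningRate,
                                  poplar::Tensor &gamma, poplar::Tensor &beta,
                                  poplar::program::Sequence &prog) {
      popnn::gn::groupNormParamUpdate(graph, gammaDelta, betaDelta, -learningRate,
                                      gamma, beta, prog);
    }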

◆ groupNormStatistics()

std::pair< poplar::Tensor, poplar::Tensor > popnn::gn::groupNormStatistics ( poplar::Graph &graph,
const poplar::Tensor acts,
float eps,
poplar::program::Sequence &prog,
unsigned numGroups,
bool unbiasedVarEstimate,
bool stableAlgo = false,
const poplar::Type &partialsType = poplar::FLOAT,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Estimate mean and inverse of standard deviation of activations.

Parameters
graph                The graph that the normalisation operation is added to.
acts                 The activations for which the mean and variance are estimated.
eps                  The epsilon value added to the variance to avoid division by zero.
prog                 The program sequence to add the operation to.
numGroups            The number of groups to split the channel dimension into when calculating group norm statistics. The groupNormStridedChannelGrouping option defines how the split is made.
unbiasedVarEstimate  If true, an unbiased variance estimate will be computed.
stableAlgo           If true, the mean is computed first and subtracted from the activations before the variance is computed. This implementation is slower than with the flag set to false.
partialsType         Poplar type used for intermediate values. If the type specified is smaller than the input/output type then partialsType is ignored and the input/output type is used instead.
debugContext         Optional debug information.
options              Group normalisation options. See groupNormalise().
Returns
A pair of tensors containing the mean and the inverse standard deviation.
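
A sketch with the option arguments spelled out (the helper name, numGroups and eps values are illustrative assumptions):

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>
    #include <utility>

    // Hypothetical helper: per-group mean and inverse standard deviation using
    // the slower stable algorithm and an unbiased variance estimate.
    std::pair<poplar::Tensor, poplar::Tensor>
    groupNormStats(poplar::Graph &graph, const poplar::Tensor &acts,
                   poplar::program::Sequence &prog) {
      const unsigned numGroups = 4;  // assumes the channel count divides by 4
      const float eps = 1e-5f;
      return popnn::gn::groupNormStatistics(graph, acts, eps, prog, numGroups,
                                            true /* unbiasedVarEstimate */,
                                            true /* stableAlgo */);
    }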

◆ groupNormWhiten()

poplar::Tensor popnn::gn::groupNormWhiten ( poplar::Graph &graph,
const poplar::Tensor &acts,
const poplar::Tensor &mean,
const poplar::Tensor &invStdDev,
poplar::program::Sequence &prog,
const poplar::DebugContext &debugContext = {},
const poplar::OptionFlags &options = {}
)

Whiten activations given the mean and standard deviation.

Parameters
graph          The graph that the normalisation operation is added to.
acts           The input activations that will be whitened.
mean           The previously calculated mean to subtract from the activations. Typically calculated using groupNormStatistics().
invStdDev      The previously calculated inverse standard deviation to multiply the activations by. Typically calculated using groupNormStatistics().
prog           The program sequence to add the operation to.
debugContext   Optional debug information.
options        Group normalisation options. See groupNormalise().
Returns
A new tensor with the whitened activations.
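
A sketch of whitening without applying gamma and beta, reusing statistics from groupNormStatistics() (the helper name is an assumption):

    #include <poplar/Graph.hpp>
    #include <popnn/GroupNorm.hpp>

    // Hypothetical helper: (acts - mean) * invStdDev, with no gamma/beta applied.
    poplar::Tensor whitenGroupNormActs(poplar::Graph &graph,
                                       const poplar::Tensor &acts,
                                       const poplar::Tensor &mean,
                                       const poplar::Tensor &invStdDev,
                                       poplar::program::Sequence &prog) {
      return popnn::gn::groupNormWhiten(graph, acts, mean, invStdDev, prog);
    }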