PyTorch for the IPU: User Guide
Version: 1.0.0
1. Introduction
2. Installation
2.1. Using a Python virtual environment
2.2. Setting the environment variables
2.3. Validating the setup
3. Features
3.1. Options
3.2. Model wrapping functions
3.2.1. poptorch.trainingModel
3.2.2. poptorch.inferenceModel
3.2.3. poptorch.PoplarExecutor
3.3. Parallel execution
3.3.1. Annotation tools
poptorch.Block and poptorch.BeginBlock
poptorch.Stage and poptorch.AutoStage
poptorch.Phase
Advanced annotation with strings
3.3.2. Parallel execution strategies
poptorch.ShardedExecution
poptorch.PipelinedExecution
Phased execution
3.4. Optimizers
3.4.1. Loss scaling
3.4.2. Velocity scaling (SGD only)
3.5. Custom ops
3.5.1. poptorch.ipu_print_tensor
3.5.2. poptorch.identity_loss
3.5.3. poptorch.MultiConv
3.5.4. poptorch.custom_op
3.5.5. poptorch.nop
3.5.6. poptorch.serializedMatMul
3.5.7. poptorch.set_available_memory
3.6. Miscellaneous functions
3.7. Half / float 16 support
3.8. Profiling
3.9. Environment variables
3.9.1. Logging level
3.9.2. Profiling
3.9.3. IPU Model
3.9.4. Wait for an IPU to become available
4. Efficient data batching
4.1. poptorch.DataLoader
4.2. poptorch.AsynchronousDataAccessor
4.2.1. Example
4.3. poptorch.Options.deviceIterations
4.3.1. Example
4.4. poptorch.Options.replicationFactor
4.5. poptorch.Options.Training.gradientAccumulation
4.5.1. Example with parallel execution
5. IPU supported operations
5.1. Torch operations
5.1.1. Tensor operations
Creation Ops
Indexing, Slicing, Joining, Mutating Ops
Random Samplers
5.1.2. Math operations
Pointwise Ops
Reduction Ops
Comparison Ops
Other Ops
BLAS and LAPACK Operations
5.2. torch.nn operations
5.2.1. Containers
5.2.2. Convolution layers
5.2.3. Pooling layers
5.2.4. Padding layers
5.2.5. Activations
5.2.6. Normalization layers
5.2.7. Recurrent layers
5.2.8. Linear layers
5.2.9. Dropout
5.2.10. Sparse layers
5.2.11. Loss functions
5.2.12. Vision layers
5.3. Float 16 operations
5.3.1. Casting
5.3.2. Creation functions
6. Examples
6.1. MNIST example
7. Experimental features
7.1. Distributed execution
8. Index
9. Trademarks & copyright
10. Changelog
10.1. v1.0 (Poplar SDK 1.4)
10.1.1. New features
10.1.2. Known issues
10.2. v0.1 (Poplar SDK 1.3)
10.2.1. New features