Targeting the IPU from TensorFlow 2
Version: 2.0.0

22. Trademarks & copyright

Graphcore® and Poplar® are registered trademarks of Graphcore Ltd.

AI-Float™, Colossus™, Exchange Memory™, Graphcloud™, In-Processor-Memory™, IPU-Core™, IPU-Exchange™, IPU-Fabric™, IPU-Link™, IPU-M2000™, IPU-Machine™, IPU-POD™, IPU-Tile™, PopART™, PopLibs™, PopVision™, PopTorch™, Streaming Memory™ and Virtual-IPU™ are trademarks of Graphcore Ltd.

All other trademarks are the property of their respective owners.

Copyright © 2016-2021 Graphcore Ltd. All rights reserved.


Revision f4540921.