Switching from GPUs to IPUs for Machine Learning Models
Version: latest
  • 1. Using IPUs for machine learning
  • 2. Poplar SDK
  • 3. Training on IPUs
    • 3.1. PyTorch models
    • 3.2. PyTorch Geometric models
    • 3.3. TensorFlow and Keras models
    • 3.4. ONNX models
  • 4. Fine-tuning on IPUs
  • 5. Performance profiling on IPUs
  • 6. Inference on IPUs
    • 6.1. PyTorch models
    • 6.2. PyTorch Geometric models
    • 6.3. Triton Inference Server
    • 6.4. TensorFlow Serving
    • 6.5. ONNX models
  • 7. Distributed systems
  • 8. Using IPUs in the Cloud
  • 9. Hugging Face models
  • 10. Out of memory errors
  • 11. CUDA code
  • 12. Tutorials and examples
  • 13. Performance benchmarks
  • 14. Useful resources
    • 14.1. GitHub repositories
    • 14.2. Documentation
    • 14.3. Other resources
    • 14.4. IPU-powered Jupyter notebooks
  • 15. Trademarks and copyright

Search help

Note: Searching from the top-level index page will search all documents. Searching from a specific document will search only that document.

  • Find an exact phrase: Wrap your search phrase in "" (double quotes) to only get results where the phrase is exactly matched. For example "PyTorch for the IPU" or "replicated tensor sharding"
  • Prefix query: Add an * (asterisk) at the end of any word to indicate a prefix query. This will return results containing all words with the specified prefix. For example tensor*
  • Fuzzy search: Use ~N (tilde followed by a number) at the end of any word for a fuzzy search. This will return results that are similar to the search word. N specifies the “edit distance” (fuzziness) of the match. For example Polibs~1
  • Words close to each other: ~N (tilde followed by a number) after a phrase (in quotes) returns results where the words are close to each other. N is the maximum number of positions allowed between matching words. For example "ipu version"~2
  • Logical operators: You can use the following logical operators in a search (a combined example is shown after this list):
    • + signifies AND operation
    • | signifies OR operation
    • - negates a single word or phrase (returns results without that word or phrase)
    • () controls operator precedence
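    For example, combining these operators, the query (pytorch | tensorflow) +training -cuda should return results that contain "training" and either "pytorch" or "tensorflow", but exclude "cuda".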

Revision 0bad11a6.