Mixed-Precision Arithmetic for AI: A Hardware Perspective
1. Introduction
2. Floating-point number representations
3. The IPU 16-bit floating point format
4. Lower precision for higher power efficiency
5. Mixed-precision arithmetic
6. Mixed-precision training
7. Deterministic versus stochastic rounding
8. Loss scaling
9. Summary
10. Trademarks & copyright