2.3. BERT Fine-tuning on the IPU
This tutorial demonstrates how to fine-tune a pre-trained BERT model with PyTorch on the Graphcore IPU-POD16 system. It fine-tunes a BERT-Large model on the SQuADv1 question-answering task. The tutorial is in Fine-tuning-BERT.ipynb.
The tutorial directory contains the following files:

- Fine-tuning-BERT.ipynb: the tutorial Jupyter notebook
- Fine-tuning-BERT.py: a Python script conversion of the Jupyter notebook
- squad_preprocessing.py: utility functions to prepare the data
- tests/test_finetuning_notebook.py: a script for testing this tutorial
- tests/requirements.txt: the packages required by the tests
- LICENSE: Apache 2.0 license file (applies only to squad_preprocessing.py)
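The exact helpers inside squad_preprocessing.py are not reproduced on this page. As a hedged illustration of one job such preprocessing typically does, the sketch below maps a SQuAD answer's character span in the context onto token indices, using a toy whitespace tokenizer; all function names here are hypothetical, and a real pipeline would use BERT's WordPiece tokenizer instead.

```python
# Hypothetical sketch, not the actual squad_preprocessing.py API.
# SQuAD annotates answers as character offsets into the context; BERT
# needs token start/end positions, so preprocessing must translate
# between the two.

def whitespace_tokenize(text):
    """Toy tokenizer: split on whitespace, recording each token's character offsets."""
    tokens, offsets = [], []
    pos = 0
    for tok in text.split():
        start = text.index(tok, pos)
        tokens.append(tok)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    return tokens, offsets

def char_span_to_token_span(offsets, answer_start, answer_end):
    """Find the token indices covering the character span [answer_start, answer_end)."""
    start_token = end_token = None
    for i, (s, e) in enumerate(offsets):
        if start_token is None and e > answer_start:
            start_token = i
        if s < answer_end:
            end_token = i
    return start_token, end_token

context = "BERT was published in 2018 by researchers at Google."
answer = "2018"
answer_start = context.index(answer)  # character offset, as in a SQuAD annotation
tokens, offsets = whitespace_tokenize(context)
span = char_span_to_token_span(offsets, answer_start, answer_start + len(answer))
print(span)  # token indices of the answer within the context
```

A production version must also handle answers split across sub-word tokens and contexts longer than BERT's maximum sequence length, which is where most of the real preprocessing complexity lives.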
How to use this demo
Prepare the PopTorch environment.
Install the Poplar SDK following the instructions in the Getting Started guide for your IPU system. Make sure to run the enable.sh script for Poplar and activate a Python virtualenv with a PopTorch wheel from the Poplar SDK installed (use the version appropriate to your operating system).
Install the required packages:
pip install -r requirements.txt
Run the Jupyter notebook and connect to it with your browser. You may need an SSH tunnel to forward Jupyter back to your local machine, using:

ssh -L 8888:localhost:8888 [REMOTE-IPU-MACHINE] -N
squad_preprocessing.py is based on code from Hugging Face that is licensed under Apache 2.0, so it is distributed under the same license (see the LICENSE file in this directory for more information).
The rest of the code in this example is licensed under the MIT license - see the LICENSE file at the top level of this repository.