Fine-Tuning with Validation
On this page, you'll learn how to configure and execute a fine-tuning run with upstream validation. As an example, you'll fine-tune a LLaMA3 8B model. By the end, you should be comfortable kicking off your own fine-tuning run for the model of your choice.
Prerequisites
- You must have installed the Cerebras Model Zoo (see the installation guide if you haven't).
- You must be familiar with the Trainer and YAML format.
- Please ensure you have read Checkpointing.
- Please ensure you have read LLaMA3 8B pre-training.
Configuring the Run
This page covers the two main flows you can use to perform fine-tuning: one uses a YAML configuration file together with the training script packaged in the Cerebras ModelZoo, and the other uses pure Python that you run on your own. They are presented side by side so that you can compare the two flows as you progress through this tutorial.
Start with the Trainer configured in LLaMA3 8B pre-training. You will only need to make a few changes to this configuration to accommodate a fine-tuning run.
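For reference, that starting configuration has roughly the shape sketched below. The section names are based on the Model Zoo Trainer YAML schema covered in the Trainer and YAML format prerequisite; the contents of each section are elided here and come from the pre-training tutorial.

```yaml
trainer:
  init:
    backend:     # CSX backend / cluster settings
    model:       # LLaMA3 8B model settings from the pre-training tutorial
    optimizer:   # optimizer and learning rate settings
    loop:        # max_steps, eval_frequency, etc.
    checkpoint:  # checkpointing cadence
    callbacks:   # logging and other callbacks
  fit:
    train_dataloader:   # pre-training dataset
    val_dataloader:     # upstream validation dataset(s)
```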
Fine-Tuning Using a Pre-trained Checkpoint
To perform fine-tuning, a checkpoint from a previous training run is required. These checkpoints can come from your own earlier runs or be downloaded from online repositories. For more information on porting a checkpoint from Hugging Face, see Port a Hugging Face model to Cerebras Model Zoo. This tutorial assumes a checkpoint has already been generated by completing Pretraining with Upstream Validation. For simplicity, let's assume the checkpoint saved after the final step has a known, fixed path.
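For illustration only, that path might look like the line below; the directory and step number are hypothetical and depend on your model_dir and how many steps the pre-training run took.

```
model_dir/checkpoint_10000.mdl
```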
Configure Checkpoint State Loading
To enable fine-tuning, you want to load only the model state from the checkpoint. Other checkpoint states, such as the optimizer state and the training step, should be reset.
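In the YAML flow, this is expressed as a callback on the Trainer. The sketch below assumes the Model Zoo's LoadCheckpointStates callback and its load_checkpoint_states argument; check the callbacks reference for your installed Model Zoo version, as the exact name and fields may differ.

```yaml
trainer:
  init:
    # ... rest of the init section from the pre-training configuration ...
    callbacks:
      # Restore only the model weights from the checkpoint; the optimizer
      # state and the global step are reset for the fine-tuning run.
      - LoadCheckpointStates:
          load_checkpoint_states: "model"
```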
Load From a Checkpoint
You now need to configure the trainer to load a checkpoint from a given path.
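In the YAML flow, the checkpoint path is passed to the fit call. The ckpt_path field and the path shown below are illustrative assumptions; substitute the checkpoint you want to fine-tune from.

```yaml
trainer:
  fit:
    # ... train_dataloader / val_dataloader from the pre-training configuration ...
    # Checkpoint to initialize the model weights from (illustrative path).
    ckpt_path: model_dir/checkpoint_10000.mdl
```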
Putting It All Together
After the above adjustments, you should have a configuration that combines the pre-training settings with the two checkpoint-related changes described above.
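As a rough sketch in the YAML flow, the overall shape is shown below. The model, optimizer, loop, and dataloader sections are elided; take their contents from your LLaMA3 8B pre-training configuration. The LoadCheckpointStates callback, the ckpt_path field, and the checkpoint path itself are the same assumptions discussed above, so verify them against your installed Model Zoo version.

```yaml
trainer:
  init:
    backend:
      backend_type: CSX
    model_dir: ./model_dir
    model:
      # LLaMA3 8B model settings, unchanged from the pre-training tutorial.
    optimizer:
      # Optimizer and learning rate settings, unchanged from the pre-training tutorial.
    loop:
      # max_steps / eval_frequency sized for the fine-tuning run.
    checkpoint:
      # Checkpointing cadence, unchanged from the pre-training tutorial.
    callbacks:
      # New for fine-tuning: restore only the model weights.
      - LoadCheckpointStates:
          load_checkpoint_states: "model"
  fit:
    train_dataloader:
      # Fine-tuning dataset configuration.
    val_dataloader:
      # Upstream validation dataset(s), as in the pre-training tutorial.
    # New for fine-tuning: the pre-trained checkpoint to start from
    # (illustrative path).
    ckpt_path: model_dir/checkpoint_10000.mdl
```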
Start Fine-Tuning
Now that you have a fully configured Trainer, all that's left to do is kick off the run and start fine-tuning.
Monitoring the Run
Once compilation finishes and the Wafer-Scale Cluster is programmed for execution, you should start seeing progress logs that report the current step, loss, and throughput.
Note
The performance numbers that you get will vary depending on how many Cerebras systems you are using and which generation of systems you are using.
If you open up TensorBoard, you can monitor the run more closely by observing the trends in the graphs of the various logged metrics.
The screenshots above were taken at around step 8000. At this point you can observe that, so far, the run seems to be progressing well: the losses appear to be trending downwards and the model-wise gradient norms don't appear abnormal.
Porting the Model to Hugging Face
Once the fine-tuning run has finished, you can port the model and checkpoint to Hugging Face.
To learn more about how to do this, see Port a trained and fine-tuned model to Hugging Face.
Conclusion
With that, you have completed your first fine-tuning run with validation on the Cerebras Wafer-Scale Cluster using the ModelZoo Trainer!
By now, you should understand how to write your own Trainer configuration and how to kick off a training job from a checkpoint on the Cerebras Wafer-Scale Cluster. You can now take this knowledge and fine-tune your very own model.