Pretraining With Upstream Validation
On this page, you’ll learn how to configure and execute a pre-training run with upstream validation. More specifically, you’ll pre-train a LLaMA3 8B model as an example.
By the end, you should be comfortable kicking off your own pre-training run for the model of your choice.
Prerequisites
Please ensure that you have installed the Cerebras Model Zoo package by going through the installation guide.
Make sure you have read through Trainer Overview and Trainer Configuration Overview, which provide a basic overview of how to run Model Zoo models. In this document, you will be using the tools and configurations outlined in those pages.
Configuring the Run
This page covers the two main flows you can use to perform pre-training: one using a YAML configuration file together with the training script packaged in the Cerebras ModelZoo, and the other using pure Python that you run on your own. They are presented side-by-side so that you can compare the two flows as you progress through this tutorial.
If you aren’t interested in seeing the breakdown of the configuration, you can skip ahead to the Putting It All Together section to see the full configuration.
Configure the Wafer-Scale Cluster
Let’s first decide how many resources you’ll want to use for this pre-training job.
In this example, let’s use a 16-node Wafer-Scale Cluster. To configure this, you specify the number of Cerebras systems to use.
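As a point of reference, here is a minimal sketch of what that might look like in the YAML flow. Aside from `num_csx`, the surrounding key names (`backend`, `backend_type`, `cluster_config`) are assumptions about the configuration schema rather than an authoritative spec, so check them against the Trainer Configuration Overview.

```yaml
backend:
  backend_type: CSX   # target the Cerebras Wafer-Scale Cluster
  cluster_config:
    num_csx: 16       # number of Cerebras systems to use for this job
```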
Notice how you can change cluster configuration parameters like `num_csx` to scale the run without making any changes to the model itself.
Configure the Model
Here you will be pre-training the LLaMA3 model class that comes packaged inside of the Cerebras ModelZoo.
By default, the LLaMA3 model computes accuracy and perplexity metrics during upstream validation.
- YAML: To configure the LLaMA3 8B model, you specify the following parameters under the `model` key.
- Python: To configure the LLaMA3 8B model, you construct the model inside a lambda to take advantage of the Trainer’s efficient weight initialization feature. LLaMA3 is just a configuration of GPT2, which is why you import and initialize a `Gpt2Model` class.
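For the YAML flow, the `model` section might look roughly like the sketch below. The key names and the LLaMA3-8B-style values are illustrative assumptions only; the packaged LLaMA3 8B configuration in ModelZoo is the authoritative source.

```yaml
model:
  # Illustrative LLaMA3-8B-like architecture values; key names are assumed.
  vocab_size: 128256
  hidden_size: 4096
  num_hidden_layers: 32
  num_heads: 32
  max_position_embeddings: 8192
  position_embedding_type: rotary
```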
Configure the Optimizer
Here you will be using the `AdamW` optimizer to optimize the model during pre-training.
- YAML: To configure the `AdamW` optimizer, you specify the following parameters under the `optimizer` key. Note, you don’t specify a learning rate here, as you will configure a learning rate scheduler just below.
- Python: Note, you specify a placeholder learning rate of `0.01` here, as you will configure a learning rate scheduler just below.
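For the YAML flow, a hedged sketch of the optimizer section might look like the following, assuming the optimizer class name is nested under the `optimizer` key; the hyperparameter values are placeholders, not recommendations.

```yaml
optimizer:
  AdamW:
    betas: [0.9, 0.95]    # placeholder values
    correct_bias: True
    weight_decay: 0.1     # placeholder value
    # No learning rate here: it is supplied by the scheduler configured below.
```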
Configure a Learning Rate Scheduler
Here you will be using a `CosineDecayLR` learning rate scheduler.
To configure the `CosineDecayLR` scheduler, you can specify the following parameters.
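In the YAML flow, this could look like the sketch below, assuming schedulers are listed under a `schedulers` key and that `CosineDecayLR` accepts initial and end learning rates plus a total iteration count; all names and values here are illustrative assumptions.

```yaml
schedulers:
  - CosineDecayLR:
      initial_learning_rate: 3.0e-4   # assumed peak learning rate
      end_learning_rate: 3.0e-5       # assumed final learning rate
      total_iters: 10000              # decay across the full 10k training steps
```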
Configure Mixed Precision and Gradient Scaling
To get better performance, let’s use mixed precision in the run. More specifically, let’s configure the cluster to use `cbfloat16` as the lower precision type (see CB16 Half-Precision for more details on the `cbfloat16` data format).
Since a lower precision is being used for activations, you’ll want to scale the gradients to prevent underflowing. Let’s use dynamic loss scaling for this run.
In addition, to prevent gradients from exploding, let’s also clip them based on their norm.
- YAML: To configure the precision type and gradient scaling, you can specify the following parameters under the `precision` key.
- Python: To configure the precision type and gradient scaling, you can construct a `MixedPrecision` object as follows.
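For the YAML flow, a sketch of the `precision` section might look like this; the exact parameter names (`fp16_type`, `loss_scaling_factor`, `max_gradient_norm`) are assumptions to verify against the Trainer documentation.

```yaml
precision:
  fp16_type: cbfloat16            # use cbfloat16 as the lower precision type
  loss_scaling_factor: dynamic    # dynamic loss scaling to avoid gradient underflow
  max_gradient_norm: 1.0          # clip gradients by global norm
```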
Configure the Training/Validation Loop
For this tutorial, let’s pre-train the model for 10k steps and run validation every 1k steps.
- YAML: To configure the number of training and validation steps, you can specify the following parameters under the `loop` key.
- Python: To configure the number of training and validation steps, you can construct a `TrainingLoop` object as follows.
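A hedged YAML sketch of the `loop` section follows; the parameter names and the number of validation steps per evaluation are assumptions.

```yaml
loop:
  num_steps: 10000        # total number of training steps
  eval_frequency: 1000    # run upstream validation every 1k training steps
  eval_steps: 100         # assumed number of validation steps per evaluation
```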
Configure Checkpointing
In case you want to restart training from some intermediate point with different hyperparameters, let’s save a checkpoint every 1000 steps of training. This lines up nicely with the validation frequency you specified above, so you’ll know how well the model was performing at each checkpoint.
- YAML: To configure how often checkpoints are taken, you specify the following parameters under the `checkpoint` key.
- Python: To configure how often checkpoints are taken, you can construct a `Checkpoint` object as follows.
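In YAML this might be as simple as the sketch below; the `steps` parameter name is an assumption.

```yaml
checkpoint:
  steps: 1000   # save a checkpoint every 1000 training steps
```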
Configure Callbacks
The following steps are completely optional.
For this pre-training run, let’s keep track of the gradient norms to make sure that the model numerics are stable.
In addition, let’s ensure that the loss values that the model is outputting are valid (i.e. not `NaN` or `inf`).
Finally, since upstream validation is being run, let’s make sure that the validation metrics that are computed are being logged.
- YAML: To configure these checks, you can specify the following callbacks under the `callbacks` key.
- Python: To configure these checks, you can construct the following callbacks and pass them to the Trainer.
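For the YAML flow, the `callbacks` section could look like the sketch below. The callback class names (`ComputeNorm`, `CheckLoss`, `ModelEvalMetrics`) are assumptions based on the behaviors described above; consult the ModelZoo callbacks reference for the actual names.

```yaml
callbacks:
  - ComputeNorm: {}        # track model-wise gradient norms for numeric stability
  - CheckLoss: {}          # flag loss values that are NaN or inf
  - ModelEvalMetrics: {}   # log the upstream validation metrics
```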
Configure Loggers
To keep track of the progress of the run, let’s also use the progress logger as well as the TensorBoard logger.
- YAML: To configure these loggers, you can specify the following under the `loggers` key.
- Python: To configure these loggers, you can construct the following and pass them to the Trainer.
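In YAML, a minimal sketch might look like this; the logger class names are assumptions mirroring the progress and TensorBoard loggers mentioned above.

```yaml
loggers:
  - ProgressLogger: {}      # periodic progress updates in the console logs
  - TensorBoardLogger: {}   # write logged metrics as TensorBoard event files
```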
Reproducibility
In order to make the pre-training run reproducible, you must set the Trainer’s seed.
- YAML: You can do this by specifying the [seed](/model-zoo/trainer-configuration-overview) key.
- Python: You can do this by specifying the seed argument to the Trainer’s constructor as follows.
Setting different seeds across different runs of the same model may cause multiple compiles.
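For example, a one-line YAML sketch (any fixed integer works):

```yaml
seed: 2024   # keep the same value across runs to avoid unnecessary recompiles
```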
Configuring the DataLoaders
Now that you’ve constructed the `Trainer` object, you’re almost ready to start the pre-training run.
One of the only things left to do is to configure the training and validation dataloaders you’ll be using.
- YAML: To configure the training and validation dataloaders, you can specify the following to the [train_dataloader](/model-zoo/trainer-configuration-overview) key.
- Python: To configure the training and validation dataloaders, you can construct `DataLoader` objects and pass them into the Trainer’s `fit` method as follows.
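For the YAML flow, a hedged sketch of the dataloader configuration is shown below; the `data_processor` class name, batch size, and directory paths are illustrative assumptions that you should replace with the values for your dataset.

```yaml
fit:
  train_dataloader:
    data_processor: GptHDF5MapDataProcessor    # assumed processor class name
    data_dir: ./train_data/llama_v3            # replace with your training data directory
    batch_size: 80                             # illustrative value
    shuffle: True
  val_dataloader:
    - data_processor: GptHDF5MapDataProcessor
      data_dir: ./valid_data/llama_v3          # replace with your validation data directory
      batch_size: 80
      shuffle: False
```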
As can be seen, the specifications of the training and validation dataloaders are very similar. The only difference is that you have the option of specifying multiple validation dataloaders to run validation over multiple datasets.
Please make sure to change the `data_dir` arguments to point to the actual directories containing the data.
Putting It All Together
That is all there is to configuring the pre-training run!
Let’s take a moment to step back and look at the full configuration that you’ve put together thus far.
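Assembling the hedged YAML snippets from the sections above into a single file, the result might look roughly like the sketch below. The overall `trainer`/`init`/`fit` nesting, the key names, and the values all remain illustrative assumptions rather than a verified configuration.

```yaml
trainer:
  init:
    seed: 2024
    backend:
      backend_type: CSX
      cluster_config:
        num_csx: 16
    model:
      vocab_size: 128256
      hidden_size: 4096
      num_hidden_layers: 32
      num_heads: 32
      max_position_embeddings: 8192
      position_embedding_type: rotary
    optimizer:
      AdamW:
        betas: [0.9, 0.95]
        correct_bias: True
        weight_decay: 0.1
    schedulers:
      - CosineDecayLR:
          initial_learning_rate: 3.0e-4
          end_learning_rate: 3.0e-5
          total_iters: 10000
    precision:
      fp16_type: cbfloat16
      loss_scaling_factor: dynamic
      max_gradient_norm: 1.0
    loop:
      num_steps: 10000
      eval_frequency: 1000
      eval_steps: 100
    checkpoint:
      steps: 1000
    callbacks:
      - ComputeNorm: {}
      - CheckLoss: {}
      - ModelEvalMetrics: {}
    loggers:
      - ProgressLogger: {}
      - TensorBoardLogger: {}
  fit:
    train_dataloader:
      data_processor: GptHDF5MapDataProcessor
      data_dir: ./train_data/llama_v3
      batch_size: 80
      shuffle: True
    val_dataloader:
      - data_processor: GptHDF5MapDataProcessor
        data_dir: ./valid_data/llama_v3
        batch_size: 80
        shuffle: False
```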
Start Pre-Training
Now that you have a fully configured Trainer, all there is to do now is to kick off the run and start pre-training.
- YAML: Let’s assume that the YAML configuration that you put together above is written to a file called `./pretrain_llama_8b.yaml`. Then, to run pre-training using the training script that comes packaged as part of ModelZoo, you can run the following on the command line.
- Python: Let’s assume that the Python code that you put together above is written to a file called `./pretrain_llama_8b.py`. Then, to run pre-training, all there is to do is to execute that Python script.
Monitor the Run
Once compilation finishes and the Wafer-Scale Cluster is programmed for execution, you should start seeing progress logs like the following.
The performance numbers that you get will vary depending on how many Cerebras systems you are using and which generation systems you are using.
If you open up TensorBoard, you can more closely monitor the run by observing the trends in the graphs of the various logged metrics.
As can be seen above, the screenshots were taken at around step 5800. At this point you can observe that, so far, the run seems to be progressing well. The losses appear to be trending downwards, and the model-wise gradient norms don’t appear overly abnormal.
Porting the Model to Hugging Face
Once the pre-training run has finished, you can port the model and checkpoint to Hugging Face.
To learn more about how to do this, see Port a trained and fine-tuned model to Hugging Face.
Conclusion
With that, you have completed your first pre-training run with validation on the Cerebras Wafer-Scale Cluster using the ModelZoo Trainer!
By now, you should understand how to write your own Trainer configuration and how to kick off a training job on the Cerebras Wafer-Scale Cluster. You can now take this knowledge and pre-train your very own model.