On this page, you will learn how to set up Multi-Phase training using the Trainer class. Multi-Phase training allows you to combine multiple training phases with different batch sizes or max sequence lengths in a single config file or Python script.

Prerequisites

Please ensure that you have read through the prerequisite tutorials beforehand, in particular Pretraining with Upstream Validation.

The rest of this page assumes that you already have at least a cursory understanding of what the Cerebras Model Zoo Trainer is and how to use the Python API.

Multi-Phase Training

Multi-Phase training lets you define several distinct training phases within a single run. For example, the training pipeline for the Llama-3 model might involve varying batch sizes or max sequence lengths across different phases. Each of these phases is defined by its own instance of the Trainer.

Let’s consider an example. In Pretraining with Upstream Validation, you learned how to construct the Trainer for the Llama-3 model. Now, let’s add a new training phase with a different batch size and a new max sequence length.

To define each phase, you construct a separate Trainer instance. For example:
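
Below is a minimal Python sketch of two such phases. The helper functions make_model, make_optimizer, and make_dataloader, as well as the specific batch sizes and sequence lengths, are illustrative placeholders rather than the exact Model Zoo API; in a real run, use the constructor arguments from Pretraining with Upstream Validation.

import cerebras.pytorch as cstorch
from cerebras.modelzoo import Trainer  # the Trainer import path may differ across Model Zoo versions

# One backend shared by every phase (see the Caveats section below).
backend = cstorch.backend("CSX")

# Phase 1: shorter sequences with a larger batch size.
trainer_phase1 = Trainer(
    backend=backend,
    model=make_model,          # placeholder: your model constructor
    optimizer=make_optimizer,  # placeholder: your optimizer constructor
)
trainer_phase1.fit(
    train_dataloader=make_dataloader(batch_size=256, max_sequence_length=2048),
)

# Phase 2: longer sequences with a smaller batch size.
trainer_phase2 = Trainer(
    backend=backend,
    model=make_model,
    optimizer=make_optimizer,
)
trainer_phase2.fit(
    train_dataloader=make_dataloader(batch_size=128, max_sequence_length=8192),
)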

The number of Trainer instances is not limited, and each Trainer can have different parameters, so you can construct arbitrary training/validation pipelines that combine different models, dataloaders, etc.

For each phase, we define a different batch size and a different max sequence length.

It’s important to note that when using YAML, you have to construct a Trainer instance for each phase, which adds some overhead to your run due to the time spent on compilation and weight transfer for every new Trainer. If you are using the Python API, you can instead construct a single Trainer object and call fit multiple times with different DataLoader objects.
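
For instance, the single-Trainer pattern might look like the following sketch, reusing the placeholder helpers from the example above:

# A single Trainer object; each phase simply calls fit with a different DataLoader.
trainer = Trainer(
    backend=backend,
    model=make_model,
    optimizer=make_optimizer,
)
trainer.fit(train_dataloader=make_dataloader(batch_size=256, max_sequence_length=2048))
trainer.fit(train_dataloader=make_dataloader(batch_size=128, max_sequence_length=8192))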

Multi-Phase Training (Advanced)

A more advanced example of Multi-Phase training involves changing model parameters between training phases. For instance, you might want to switch the learning rate scheduler from CosineDecayLR to ConstantLR. To accomplish this, you need to create two Trainer instances and carefully manage checkpoint loading between the phases to account for the changed parameters.

In the example below, note that the model, optimizer, and other parameters are the same as in the previous example; they have been omitted to keep the example short.
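
A Python sketch of the two phases might look like the following. The scheduler classes are the ones named above, but the checkpoint-related arguments (ckpt_path and the selection of which states to load) are hypothetical names used for illustration only; see Checkpointing for the actual mechanism.

# Phase 1: train with a cosine decay schedule and save a checkpoint at the end.
trainer_phase1 = Trainer(
    backend=backend,
    model=make_model,                # placeholders, as in the previous example
    optimizer=make_optimizer,
    schedulers=[make_cosine_decay],  # constructs a CosineDecayLR scheduler
)
trainer_phase1.fit(
    train_dataloader=make_dataloader(batch_size=256, max_sequence_length=2048),
)

# Phase 2: switch the scheduler to ConstantLR. Because the scheduler changed,
# we load only the model and optimizer states from the phase-1 checkpoint
# rather than the full training state (argument names are hypothetical).
trainer_phase2 = Trainer(
    backend=backend,
    model=make_model,
    optimizer=make_optimizer,
    schedulers=[make_constant_lr],   # constructs a ConstantLR scheduler
)
trainer_phase2.fit(
    train_dataloader=make_dataloader(batch_size=128, max_sequence_length=8192),
    ckpt_path="path/to/phase1_checkpoint.mdl",  # hypothetical checkpoint argument
)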

In this example, each Trainer constructs and compiles its own model. Since the second phase changes the scheduler to ConstantLR, we explicitly specify which parameters need to be loaded from the checkpoint to avoid any issues during checkpoint loading. For further reading, please see Checkpointing.

Caveats

When running Multi-Phase training using the Python API, you may hit the following error:

RuntimeError: Cannot instantiate multiple backends. A backend with type CSX has already been instantiated.

Please ensure that when you construct your Trainers, you instantiate only a single backend and share it across all of them. For example:

# Instantiate the CSX backend exactly once.
backend = cstorch.backend(
    "CSX",
    ...
)

# Every Trainer reuses the same backend instance.
trainer1 = Trainer(
    backend=backend,
    ...
)

trainer2 = Trainer(
    backend=backend,
    ...
)

Conclusion

This tutorial showcases some of the use cases where Multi-Phase training can be applied. However, you are not limited to these examples and can construct as many Trainers as you need, combining different models, schedulers, optimizers, dataloaders, and more.