On this page, you will learn how to configure the checkpointing behavior of the Trainer with a Checkpoint object. By the end, you should have a cursory understanding of how to use the Checkpoint class in conjunction with the Trainer class.
Prerequisites
- You must have installed the Cerebras Model Zoo (see the installation guide if you haven’t).
- You must be familiar with the Trainer.
Configure Trainer Checkpoint Behavior
Primary checkpointing functionality is provided by the Checkpoint core callback. You can control the cadence at which checkpoints are saved, the naming convention of saved checkpoints, and various other useful behaviors. For details on all options, see Checkpoint.
An example of a checkpoint configuration is shown here:
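A minimal sketch of such a configuration is shown below, following the YAML layout described in the Trainer YAML Overview. The key names and the cadence value are illustrative assumptions; consult the Checkpoint API reference for the full set of supported options.

```yaml
trainer:
  init:
    checkpoint:
      # Save a checkpoint every 1000 training steps (illustrative cadence)
      steps: 1000
```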
Automatically Loading from the Most Recent Checkpoint
The autoload_last_checkpoint option can be used to automatically load the most recent checkpoint from model_dir. If model_dir contains multiple checkpoints and you set autoload_last_checkpoint, the run will automatically load from the checkpoint with the largest step value, for example "checkpoint_20000.mdl".
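A sketch of enabling this behavior in the checkpoint section of the Trainer YAML might look as follows; the exact placement of the flag is an assumption, so verify it against the Checkpoint API reference.

```yaml
trainer:
  init:
    checkpoint:
      steps: 1000
      # Resume from the checkpoint in model_dir with the largest step value
      autoload_last_checkpoint: True
```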
Checkpoint Loading Strictness
The disable_strict_checkpoint_loading option can be used to loosen the validation done when loading a checkpoint. If True, the model will not raise an error if the checkpoint contains keys that are not present in the model.
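Assuming the option lives alongside the other checkpoint settings in the Trainer YAML (an assumption worth checking against the Checkpoint API reference), a sketch might look like:

```yaml
trainer:
  init:
    checkpoint:
      steps: 1000
      # Do not error on checkpoint keys that are missing from the model
      disable_strict_checkpoint_loading: True
```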
Selective Checkpoint State Saving
You can specify which individual checkpoint states to save using the SaveCheckpointState callback, which allows you to:
- Save an alternative checkpoint with a subset of states (for example, only the "model" state) to conserve storage space.
- Bypass checkpoint deletion policies.
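As a sketch, configuring SaveCheckpointState as a callback to save an alternative checkpoint containing only the "model" state might look like the following; the parameter names are assumptions inferred from the description above, so check the SaveCheckpointState API reference.

```yaml
trainer:
  init:
    callbacks:
      - SaveCheckpointState:
          # Take an alternative checkpoint every 5th checkpoint
          k: 5
          # Save only the model state in the alternative checkpoint
          checkpoint_states: "model"
```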
k in SaveCheckpointState refers to taking an alternative checkpoint every k checkpoint steps, not every k steps.

Selective Checkpoint State Loading
You can specify which individual checkpoint states to load using the LoadCheckpointStates callback, which allows you to:
- Perform fine-tuning by loading the model state but starting the optimizer state from scratch and the global step from 0.
- Load only a subset of states, such as the "model" state, from any checkpoint.
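A sketch of loading only the "model" state via the callback is shown below; the parameter name is an assumption based on the description above, so verify it against the LoadCheckpointStates API reference.

```yaml
trainer:
  init:
    callbacks:
      - LoadCheckpointStates:
          # Load only the model state; optimizer state and global step start fresh
          load_checkpoint_states: "model"
```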
Checkpoint Deletion Policy
For long runs with limited storage space, it is important to control how checkpoints are deleted or retained. To control the number of checkpoints retained, use KeepNCheckpoints. The KeepNCheckpoints callback allows you to:
- Constrain the amount of storage space checkpoints take up while still retaining recent restart points in case a run is interrupted.
- Keep long-term checkpoints over a larger cadence for validation purposes: checkpoints generated by SaveCheckpointState are ignored by KeepNCheckpoints (see Selective Checkpoint State Saving for more details).
In the example below, only the 5 most recent checkpoints will be retained.
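A sketch of such a retention policy is shown here; the parameter name n is an assumption, so check the KeepNCheckpoints API reference for the exact option.

```yaml
trainer:
  init:
    callbacks:
      - KeepNCheckpoints:
          # Retain only the 5 most recent checkpoints; older ones are deleted
          n: 5
```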
What’s next?
To learn how to use advanced checkpointing to do a fine-tuning run, see Fine-Tuning with Validation.

Further Reading
To learn about how you can configure a Trainer instance using a YAML configuration file, you can check out:
- Trainer YAML Overview
To learn about how you can use the Trainer in some core workflows, you can check out:
To learn more about how you can extend the capabilities of the Trainer class, you can check out: