On this page, you’ll build on the Pretraining with Upstream Validation guide to also configure downstream validation as part of your pre-training run.

The example covers pre-training the Llama-3-8B model. For downstream validation, you will use the external frameworks Eleuther Eval Harness (EEH) and BigCode Eval Harness (BCEH).

By the end of this guide, you should be comfortable kicking off your own pre-training run for the model of your choice, combining both upstream and downstream validation.

Prerequisites

Please ensure that you have installed the Cerebras Model Zoo package by going through the installation guide.

Make sure to have read through Trainer Overview and Trainer Configuration Overview which provide the basic overview of how to run Model Zoo models.

Please also make sure to read Pretraining with Upstream Validation since this page directly builds on the walkthrough there.

Lastly, please read through Downstream Validation using Eleuther Eval Harness and Downstream Validation using BigCode Eval Harness.

Specifically, this guide presupposes your understanding of the EleutherEvalHarness and BigCodeEvalHarness callbacks.

Configuring the Run

Similar to Pretraining with Upstream Validation, this page will present the YAML configuration file as well as the equivalent pure Python setup side-by-side for your ease of comparison.

You will add downstream validation to the pre-training configuration set up in Pretraining with Upstream Validation for Llama-3-8B. Recall the full configuration you put together in that tutorial.

Configure EEH

Let’s add downstream validation on a single EEH multiple-choice task winogrande as part of the pre-training run. To do this, you will need to augment the configuration with the EleutherEvalHarness callback as such:

Simply add the callback to the list of callbacks in the YAML (an equivalent pure Python sketch follows the configuration below).

trainer:
  init:
    backend:  # CSX
      ...
    model:  # llama
      ...
    optimizer:  # AdamW
      ...
    schedulers:  # CosineDecayLR
      ...
    precision:  # DLS
      ...
    loop:
      ...
    checkpoint:
      ...
    callbacks:
      ...
      - EleutherEvalHarness:
          # Eleuther Eval Harness settings
          eeh_args:
            tasks: winogrande
            num_fewshot: 0
          # CSX-specific eval harness settings
          keep_data_dir: false
          # Dataloader settings
          batch_size: 4
          shuffle: false
          max_sequence_length: 8192
          num_workers: 1
          data_dir: <path_to_mounted_dir>
          tokenizer_file_path: <path_to_llama3_tokenizer_json_file>
          eos_id: 128001
          pretrained_model_name_or_path: null
    loggers:
      ...
    seed: 2024
    ...
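
If you are following the equivalent pure Python setup instead, a minimal sketch of the same callback is shown below. The import path and the EleutherCLIArgs class are assumptions modeled on the Downstream Validation using Eleuther Eval Harness guide; the keyword arguments simply mirror the YAML keys above.

# Minimal sketch; import path and EleutherCLIArgs are assumed -- see the
# Downstream Validation using Eleuther Eval Harness guide for the exact API.
from cerebras.modelzoo.trainer.extensions.eleuther import (
    EleutherCLIArgs,
    EleutherEvalHarness,
)

eeh_callback = EleutherEvalHarness(
    # Eleuther Eval Harness settings
    eeh_args=EleutherCLIArgs(tasks="winogrande", num_fewshot=0),
    # CSX-specific eval harness settings
    keep_data_dir=False,
    # Dataloader settings
    batch_size=4,
    shuffle=False,
    max_sequence_length=8192,
    num_workers=1,
    data_dir="<path_to_mounted_dir>",
    tokenizer_file_path="<path_to_llama3_tokenizer_json_file>",
    eos_id=128001,
    pretrained_model_name_or_path=None,
)

# Append the callback to the callbacks list you already pass to the Trainer's
# constructor in the Pretraining with Upstream Validation setup:
#   trainer = Trainer(..., callbacks=[..., eeh_callback], ...)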

And that is all! As part of your pre-training run’s configuration, you have now set up downstream validation on EEH task winogrande.

  1. The eval_frequency specified as part of the trainer’s loop (YAML) or in the TrainingLoop object (Python) also controls the frequency of downstream validation; i.e., for the example above, validation on the EEH task winogrande will run every 1K steps (see the sketch after these notes).

  2. Update the tasks argument to configure downstream validation for more EEH tasks. Note that only a single generative EEH task may be specified per callback.
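
In the pure Python setup, the same cadence is controlled by the eval_frequency argument of the TrainingLoop object. A minimal sketch, assuming the import path below and reusing the 1K-step cadence from the upstream guide (num_steps is only a placeholder):

# Minimal sketch; the import path is assumed and num_steps is a placeholder --
# keep the loop values from your Pretraining with Upstream Validation config.
from cerebras.modelzoo.trainer.callbacks import TrainingLoop

loop = TrainingLoop(
    num_steps=10_000,      # placeholder; use your upstream config's value
    eval_frequency=1_000,  # EEH validation on winogrande also runs every 1K steps
)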

Configure BCEH

Configuring downstream validation using BCEH is no different than it is for EEH. For example, if you want to configure the pre-training run on the code generative task humaneval, augment the YAML configuration file with the BigCodeEvalHarness callback as such:

  • YAML: Simply add the callback to the list of callbacks in the YAML. Don’t forget to include the inference settings under model configuration!

  • Python: Construct a BigCodeEvalHarness callback object and pass it to the Trainer’s constructor, as sketched below. Note that the BCEH arguments are passed to the callback via the BigCodeCLIArgs object, which comprises the supported BCEH command-line arguments.
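
A minimal sketch of that construction is shown below. The module path matches the BigCodeEvalHarness reference above; the BigCodeCLIArgs fields and the dataloader keyword arguments mirror the EEH example and are assumptions, so consult the Downstream Validation using BigCode Eval Harness guide for the exact set your version supports.

# Minimal sketch; BigCodeCLIArgs fields and dataloader kwargs are assumptions.
from cerebras.modelzoo.trainer.extensions.bigcode import (
    BigCodeCLIArgs,
    BigCodeEvalHarness,
)

bceh_callback = BigCodeEvalHarness(
    # BigCode Eval Harness settings
    bigcode_args=BigCodeCLIArgs(tasks="humaneval"),
    # CSX-specific eval harness settings
    keep_data_dir=False,
    # Dataloader settings (mirroring the EEH example above)
    batch_size=4,
    max_sequence_length=8192,
    num_workers=1,
    data_dir="<path_to_mounted_dir>",
    tokenizer_file_path="<path_to_llama3_tokenizer_json_file>",
    eos_id=128001,
    pretrained_model_name_or_path=None,
)

# Pass it to the Trainer's constructor along with your other callbacks:
#   trainer = Trainer(..., callbacks=[..., bceh_callback], ...)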

And that is all! As part of your pre-training run’s configuration, you have now set up downstream validation on BCEH task humaneval.

  1. Since only running one generative eval harness task is supported per callback, please create a separate BigCodeEvalHarness callback to run downstream validation for more BCEH tasks.

  2. To obtain the final eval metrics for BCEH, please run the code execution and evaluation flow separately, as described in the Downstream Validation using BigCode Eval Harness guide.

Configure EEH and BCEH

Configuring downstream validation for both EEH and BCEH together is also straightforward: simply use both the EleutherEvalHarness and BigCodeEvalHarness callbacks.

Let’s augment the full YAML configuration file to run downstream validation on EEH tasks hellaswag, gsm8k and winogrande, and BCEH task mbpp with the callbacks as follows:

  • YAML: Simply add both callbacks to the list of callbacks in the YAML. Since you are running generative eval harness tasks, don’t forget to include the inference settings under model configuration!

  • Python: Construct the EleutherEvalHarness and BigCodeEvalHarness callback objects, respectively, and pass both to the Trainer’s constructor, as sketched below.
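
A minimal combined sketch, reusing the assumed module paths and argument names from the sketches above (the comma-separated tasks string follows the EEH CLI convention):

# Minimal sketch; module paths and argument names are assumptions carried over
# from the earlier sketches.
from cerebras.modelzoo.trainer.extensions.bigcode import BigCodeCLIArgs, BigCodeEvalHarness
from cerebras.modelzoo.trainer.extensions.eleuther import EleutherCLIArgs, EleutherEvalHarness

# Dataloader settings shared by both callbacks (same values as before).
dataloader_args = dict(
    batch_size=4,
    max_sequence_length=8192,
    num_workers=1,
    data_dir="<path_to_mounted_dir>",
    tokenizer_file_path="<path_to_llama3_tokenizer_json_file>",
    eos_id=128001,
    pretrained_model_name_or_path=None,
)

# A single EEH callback can carry all three tasks: only gsm8k is generative,
# and at most one generative EEH task is allowed per callback.
eeh_callback = EleutherEvalHarness(
    eeh_args=EleutherCLIArgs(tasks="hellaswag,gsm8k,winogrande", num_fewshot=0),
    keep_data_dir=False,
    shuffle=False,
    **dataloader_args,
)

bceh_callback = BigCodeEvalHarness(
    bigcode_args=BigCodeCLIArgs(tasks="mbpp"),
    keep_data_dir=False,
    **dataloader_args,
)

# Add both to the callbacks list passed to the Trainer's constructor:
#   trainer = Trainer(..., callbacks=[..., eeh_callback, bceh_callback], ...)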

And that is all! As part of your pre-training run’s configuration, you have now set up downstream validation on both BCEH and EEH tasks.

Start Pre-Training

Once you have a fully configured Trainer, with your choice of downstream validation, all you need to do now is to kick off the run and start pre-training.

  • YAML: Let’s assume that the YAML configuration that you put together above is written to a file called ./pretrain_downstream_llama_8b.yaml. Then, to run pre-training, invoke the training script that comes packaged as part of ModelZoo from the command line, passing it this configuration file.

  • Python: Let’s assume that the Python code that you put together above is written to a file called ./pretrain_downstream_llama_8b.py. Then, to run pre-training, all that’s left is to execute that Python script, as shown below.
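
For the Python setup, for example, kicking off pre-training is a plain script invocation (for the YAML setup, use the packaged ModelZoo training script as described in its documentation; the exact command is not reproduced here):

# Run the Python Trainer script you assembled above to start pre-training.
python ./pretrain_downstream_llama_8b.py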

Conclusion

With that, you have augmented your pre-training run with downstream validation on the Cerebras Wafer-Scale Cluster using the ModelZoo Trainer!

Now you have what it takes to write your own Trainer configuration to set up training jobs on your choice of models as well as downstream validation tasks on the Cerebras Wafer-Scale Cluster.