This document covers the logs and artifacts that are written by the Trainer in the model directory.
Prerequisites
Make sure you have read through the Trainer Overview and Trainer Configuration Overview, which provide a basic overview of how to run Model Zoo models. This document uses the tools and configurations outlined in those pages.

Configure the Model Directory
Configuring the model directory to which the Trainer writes its artifacts is as simple as passing the model_dir argument to the Trainer's constructor.
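For example, in the YAML configuration format covered in the Trainer Configuration Overview, this might look like the sketch below (the `trainer.init` layout follows that page's schema, and `./model_dir` is just a placeholder path):

```yaml
trainer:
  init:
    # Directory where the Trainer writes logs, artifacts, and checkpoints
    model_dir: "./model_dir"
```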
Model Directory Structure
The following is an overview of the structure of the model directory.

The Trainer's TensorBoardLogger writes TensorBoard event files into a subdirectory named after the datetime at which the run started. If you open TensorBoard and point it at the model directory, the runs will be nicely grouped together by run.
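As an illustration, a model directory after a couple of runs might look roughly like this (the datetime format and checkpoint file names here are hypothetical placeholders; the exact names in your model directory may differ):

```
model_dir/
├── 20240101_120000/               # per-run subdirectory named by datetime
│   └── events.out.tfevents.*      # TensorBoard event files for this run
├── cerebras_logs/
│   └── 20240101_120000/           # compile/execution logs for the same run
├── checkpoint_0.mdl               # checkpoints live in the base directory
└── checkpoint_1000.mdl
```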
There is also a cerebras_logs directory, in which various logs and artifacts from compilation and execution are stored. These logs and artifacts are likewise divided up by datetime (the same datetime as the subdirectory mentioned above) so that you know which logs and artifacts belong to which run.
Finally, checkpoints taken during the run are saved in the base model directory rather than in a per-run subdirectory, so that future runs with checkpoint autoloading enabled can easily pick them up (see Checkpointing for more details).
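To see why keeping checkpoints in the base directory makes autoloading straightforward, here is a minimal sketch of how the most recent checkpoint could be located. This is an illustration, not the Trainer's actual implementation, and it assumes a hypothetical `checkpoint_<step>.mdl` naming scheme:

```python
import re
from pathlib import Path
from typing import Optional


def find_latest_checkpoint(model_dir: str) -> Optional[Path]:
    """Return the checkpoint with the highest step number, or None.

    Assumes a hypothetical `checkpoint_<step>.mdl` naming scheme;
    adjust the pattern to match your run's actual checkpoint names.
    """
    pattern = re.compile(r"checkpoint_(\d+)\.mdl$")
    best_step, best_path = -1, None
    # Only the base model directory is scanned, which is why checkpoints
    # are not tucked away inside per-run datetime subdirectories.
    for path in Path(model_dir).glob("checkpoint_*.mdl"):
        match = pattern.search(path.name)
        if match and int(match.group(1)) > best_step:
            best_step, best_path = int(match.group(1)), path
    return best_path
```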
Conclusion
That covers the various logs and artifacts that are output by the Trainer. Hopefully, you now have a better understanding of what the model directory contains and how to find the logs and artifacts you need to monitor your run.

Further Reading
To learn more about how you can use the Trainer in some core workflows, you can check out:
To learn more about how you can extend the capabilities of the Trainer class, you can check out: