The cerebras.pytorch.backend encapsulates the configuration of the device and other settings used during a run. The device specifies which hardware the workflow runs on.
Prerequisites
Make sure you have read through Trainer Overview and Trainer Configuration Overview, which provide a basic overview of how to run Model Zoo models. This document uses the tools and configurations outlined in those pages.
Configure the Device
Configuring the device used by the Trainer can be done by simply specifying one of "CSX", "CPU", or "GPU".
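As a sketch, in a Trainer YAML configuration this might look like the following (assuming the trainer.init layout described in the Trainer Configuration Overview):

```yaml
trainer:
  init:
    device: "CSX"  # or "CPU" / "GPU"
```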
Setting device still creates a cerebras.pytorch.backend instance, just with default settings. To configure anything else about the backend, you must specify those parameters via the backend key instead.
Limitations
Once a device is set, any other Trainer instances must use the same device type. You cannot mix device types.
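For instance, a configuration along these lines (a hedged sketch; the exact multi-trainer YAML layout may differ) would be rejected because the two Trainer instances request different device types:

```yaml
trainer:
- init:
    device: "CSX"  # first Trainer targets the Cerebras system
- init:
    device: "CPU"  # invalid: device types cannot be mixed across Trainers
```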
Configure the Backend
Configuring the backend used by the Trainer can be done by creating a cerebras.pytorch.backend instance.
The configuration is expected to be a dictionary whose keys will be used to construct a cerebras.pytorch.backend instance.
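For example, a backend configuration might look like the following sketch, where backend_type selects the device (the cluster_config block shown here is an assumed illustration of an additional backend parameter):

```yaml
trainer:
  init:
    backend:
      backend_type: "CSX"
      cluster_config:  # assumed example of an extra backend setting
        num_csx: 1
```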
In the Python script, construct a cerebras.pytorch.backend instance and pass it to the backend argument.
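A minimal Python sketch, assuming the Cerebras Model Zoo is installed (the Trainer import path and the remaining constructor arguments are assumptions elided here):

```python
import cerebras.pytorch as cstorch
from cerebras.modelzoo.trainer import Trainer  # import path assumed

# Create the backend explicitly rather than passing a bare device string.
backend = cstorch.backend("CSX")

trainer = Trainer(
    backend=backend,
    # ... model, optimizer, and other Trainer arguments ...
)
```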
Limitations
Multiple backend instantiations with different device types are not supported. When using multiple Trainer instances, you must ensure you only instantiate backends of a single device type. You can, however, change other backend parameters between Trainer instances.
Mutual Exclusivity
The device and backend arguments are mutually exclusive. When initializing a Trainer, it is expected that you set one of them but not both. If both are set, you will see an error.
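For example, a configuration like the following sketch would fail, since it sets both keys:

```yaml
trainer:
  init:
    device: "CSX"
    backend:  # invalid: device and backend are mutually exclusive
      backend_type: "CSX"
```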
Further Reading
To learn more about how you can use the Trainer in some core workflows, you can check out:
To learn more about how you can extend the capabilities of the Trainer class, you can check out:

