We provide Cerebras-compatible metrics that can be used during evaluation to measure how well the model has been trained. These metrics can be found in the cerebras.pytorch.metrics module.
Writing Custom Metrics
To define a Cerebras-compliant metric, create a subclass of cerebras.pytorch.metrics.Metric.
The Metric class constructor expects a single argument: the metric's name.
In addition, there are three abstract methods that must be overridden:
- reset: This method resets (or defines, on the first call) the metric's internal state. States can be registered via calls to register_state.
- update: This method updates the metric's registered states. Note that to remain Cerebras compliant, no tensor may be evaluated or inspected here; the update call is intended to be fully traced.
- compute: This method computes the final accumulated metric value using the state that was updated in update.
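As a minimal sketch of the reset/update/compute pattern above: the real base class is cerebras.pytorch.metrics.Metric, but so that this example runs without the Cerebras SDK, a small stand-in base class is defined here. The stand-in's behavior (register_state simply storing an attribute, the constructor taking the metric name and calling reset) is an assumption for illustration only; in the real API, register_state registers traced tensor state.

```python
class Metric:
    """Stand-in for cerebras.pytorch.metrics.Metric (illustration only)."""

    def __init__(self, name):
        self.name = name
        self.reset()  # define the initial state

    def register_state(self, name, value):
        # Assumption: the real register_state registers traced tensor state;
        # here we simply store the value as an attribute.
        setattr(self, name, value)


class AccuracyMetric(Metric):
    """Example custom metric following the reset/update/compute contract."""

    def reset(self):
        # Define (or reset) the metric's internal state.
        self.register_state("correct", 0)
        self.register_state("total", 0)

    def update(self, predictions, labels):
        # Update the registered states. In a real Cerebras metric this body
        # must remain fully traceable: no tensor may be evaluated or
        # inspected here.
        for pred, label in zip(predictions, labels):
            self.correct += int(pred == label)
            self.total += 1

    def compute(self):
        # Compute the final accumulated value from the updated states.
        return self.correct / self.total


accuracy = AccuracyMetric("accuracy")
accuracy.update([1, 0, 1], [1, 1, 1])
print(accuracy.compute())  # 2 of 3 correct
```

In the real flow, update is called once per evaluation batch inside the traced step, and compute is evaluated only after all batches have been accumulated.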