Evaluation Metrics
Learn to use and create metrics in Cerebras for evaluating PyTorch models, including predefined metrics like AccuracyMetric and custom metrics tailored to specific evaluation needs.
We provide Cerebras-compatible metrics that can be used during evaluation to measure how well the model has been trained. These metrics can be found in the `cerebras.pytorch.metrics` module.
For example, the predefined `AccuracyMetric` can be used during an evaluation step. The following is a minimal sketch rather than the library's verbatim example; the `labels`/`predictions` keyword names are an assumption and may differ in your installed version:
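```python
import torch
import cerebras.pytorch as cstorch

# Construct the predefined metric; the base class takes the metric name.
accuracy = cstorch.metrics.AccuracyMetric("accuracy")

# Inside the (traced) evaluation step, update the metric with model outputs.
# Keyword names here are an assumption based on common usage.
labels = torch.tensor([0, 1, 2, 2])
predictions = torch.tensor([0, 1, 1, 2])
accuracy.update(labels=labels, predictions=predictions)

# Once evaluation has finished, compute the accumulated value.
print(accuracy.compute())  # 0.75 for the tensors above
```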
Writing Custom Metrics
To define a Cerebras-compliant metric, create a subclass of `cerebras.pytorch.metrics.Metric`.
For example, the following is a minimal sketch of a custom accuracy metric. The `register_state(name, initial_tensor)` signature and attribute-style access to registered states are assumptions modeled on common metric APIs:
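```python
import torch
import cerebras.pytorch as cstorch

class CustomAccuracy(cstorch.metrics.Metric):
    def reset(self):
        # Define (or reset) the metric's internal state. register_state is
        # assumed to expose each state as an attribute for use in update.
        self.register_state("correct", torch.tensor(0, dtype=torch.float32))
        self.register_state("total", torch.tensor(0, dtype=torch.float32))

    def update(self, labels, predictions):
        # Fully traced: only accumulate into registered states here;
        # no tensor may be evaluated or inspected.
        self.correct += (labels == predictions).sum()
        self.total += labels.numel()

    def compute(self):
        # Combine the accumulated states into the final metric value.
        return self.correct / self.total
```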
As can be seen in the above example, the base `Metric` class expects a single constructor argument: the metric name.
In addition, there are three abstract methods that must be overridden:
- `reset`: This method resets (or defines, if this is the first time it is called) the metric's internal state. States can be registered via calls to `register_state`.
- `update`: This method updates the metric's registered states. Note that to remain Cerebras compliant, no tensor may be evaluated or inspected here; the update call is intended to be fully traced.
- `compute`: This method computes the final accumulated metric value from the states that were updated in `update` (see the usage sketch after this list).