Metrics
AccuracyMetric
Bases: Scorer
A class to compute and summarize accuracy-related metrics for model outputs.
This class extends the weave.Scorer
and provides operations to score
individual predictions and summarize the results across multiple predictions.
It calculates the accuracy, precision, recall, and F1 score based on the
comparison between predicted outputs and true labels.
Source code in guardrails_genie/metrics.py
score(output, label)
Evaluate the correctness of a single prediction.
This method compares a model's predicted output with the true label to determine if the prediction is correct. It checks if the 'safe' field in the output dictionary, when converted to an integer, matches the provided label.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
output
|
dict
|
A dictionary containing the model's prediction, specifically the 'safe' key which holds the predicted value. |
required |
label
|
int
|
The true label against which the prediction is compared. |
required |
Returns:
Name | Type | Description |
---|---|---|
dict |
A dictionary with a single key 'correct', which is True if the |
prediction matches the label, otherwise False.
Source code in guardrails_genie/metrics.py
summarize(score_rows)
Summarize the accuracy-related metrics from a list of prediction scores.
This method processes a list of score dictionaries, each containing a 'correct' key indicating whether a prediction was correct. It calculates several metrics: accuracy, precision, recall, and F1 score, based on the number of true positives, false positives, and false negatives.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
score_rows
|
list
|
A list of dictionaries, each with a 'correct' key indicating the correctness of individual predictions. |
required |
Returns:
Type | Description |
---|---|
Optional[dict]
|
Optional[dict]: A dictionary containing the calculated metrics: 'accuracy', 'precision', 'recall', and 'f1_score'. If no valid data is present, all metrics default to 0. |