lenskit.metrics.MAE#
- class lenskit.metrics.MAE(missing_scores='error', missing_truth='error')#
Bases: `PredictMetric`, `ListMetric`

Compute MAE (mean absolute error). This is computed as:

\[\frac{1}{|R|} \sum_{r_{ui} \in R} \left|r_{ui} - s(i|u)\right|\]

This metric does not do any fallbacks; if you want to compute MAE with fallback predictions (e.g. using a bias model when a collaborative filter cannot predict), generate predictions with
FallbackScorer.
- Stability:
- Caller (see Stability Levels).
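To make the formula concrete, here is a minimal pure-Python sketch of the computation. It is independent of LensKit, and the rating/score pairs are made up for illustration:

```python
# Mean absolute error over (truth rating, predicted score) pairs.
# The pairs below are illustrative, not from any real dataset.
pairs = [
    (4.0, 3.5),  # r_ui = 4.0, s(i|u) = 3.5
    (3.0, 3.2),
    (5.0, 4.1),
    (2.0, 2.6),
]

# Average the absolute prediction errors over all rated items.
mae = sum(abs(r - s) for r, s in pairs) / len(pairs)
print(round(mae, 3))  # → 0.55
```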
- Parameters:
  - missing_scores – disposition for items whose prediction scores are missing (default: `'error'`).
  - missing_truth – disposition for items whose truth ratings are missing (default: `'error'`).
- __init__(missing_scores='error', missing_truth='error')#
Methods
- `__init__([missing_scores, missing_truth])`
- `align_scores(predictions[, truth])` – Align prediction scores and rating values, applying the configured missing dispositions.
- `extract_list_metrics(data, /)` – Return the given per-list metric result.
- `measure_list(predictions[, test])` – Compute measurements for a single list.
- `summarize(values, /)` – Summarize per-list metric values.
Attributes
- `default`
- `label` – The metric's default label in output.
- `missing_scores`
- `missing_truth`
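The missing-value dispositions set in the constructor can be sketched as follows. This is an illustrative re-implementation, not LensKit's actual code: the function name is hypothetical, and the `'ignore'` option is an assumption based on the `'error'` default shown above.

```python
import math


def mae_with_disposition(scores, truth, missing_scores="error"):
    """Illustrative MAE with a disposition for missing (NaN) scores.

    Hypothetical sketch mimicking the constructor defaults above; the
    'ignore' option is an assumption, not confirmed LensKit behavior.
    """
    pairs = list(zip(truth, scores))
    missing = [(r, s) for r, s in pairs if math.isnan(s)]
    if missing:
        if missing_scores == "error":
            # Default disposition: refuse to compute over missing scores.
            raise ValueError(f"{len(missing)} items have missing scores")
        elif missing_scores == "ignore":
            # Drop missing-score items before averaging.
            pairs = [(r, s) for r, s in pairs if not math.isnan(s)]
    return sum(abs(r - s) for r, s in pairs) / len(pairs)


# One of three items has no prediction; 'ignore' drops it.
print(round(mae_with_disposition([3.5, float("nan"), 4.2],
                                 [4.0, 3.0, 5.0], "ignore"), 2))  # → 0.65
```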