lenskit.metrics.MAE#
- class lenskit.metrics.MAE(missing_scores='error', missing_truth='error')#
Bases: PredictMetric, ListMetric, DecomposedMetric

Compute MAE (mean absolute error). This is computed as:

\[\frac{1}{|R|} \sum_{r_{ui} \in R} \left|r_{ui} - s(i|u)\right|\]

This metric does not do any fallbacks; if you want to compute MAE with fallback predictions (e.g. using a bias model when a collaborative filter cannot predict), generate predictions with FallbackScorer.

- Stability:
  - Caller (see Stability Levels).
- Parameters:
  - missing_scores – disposition for predictions with missing scores (default: 'error').
  - missing_truth – disposition for test items with missing truth ratings (default: 'error').
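As a quick illustration of the formula above, here is a minimal NumPy sketch of the same computation over already-aligned score and truth arrays; the array names are hypothetical and this is not the LensKit implementation.

```python
import numpy as np

# Hypothetical aligned arrays: truth ratings r_ui and predicted scores s(i|u)
# over the rated items R.
ratings = np.array([4.0, 3.5, 5.0, 2.0])
scores = np.array([3.6, 3.9, 4.4, 2.5])

# MAE is the mean of the absolute prediction errors.
mae = np.mean(np.abs(ratings - scores))
print(mae)  # 0.475
```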
- __init__(missing_scores='error', missing_truth='error')#
Methods

- __init__([missing_scores, missing_truth])
- align_scores(predictions[, truth]): Align prediction scores and rating values, applying the configured missing dispositions.
- compute_list_data(output, test): Compute measurements for a single list.
- extract_list_metric(metric): Extract a single-list metric from the per-list measurement result (if applicable).
- extract_list_metrics(data, /): Return the given per-list metric result.
- global_aggregate(values): Aggregate list metrics to compute a global value.
- measure_list(predictions[, test]): Compute measurements for a single list.
- summarize(values, /): Summarize per-list metric values.
Attributes

- default
- label: The metric's default label in output.
- missing_scores
- missing_truth

- measure_list(predictions, test=None, /)#
Compute measurements for a single list.
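For concreteness, here is a hedged sketch of scoring one prediction list with measure_list; the ItemList import path and field names (item_ids, scores, rating) are assumptions about LensKit's data API, not documented on this page.

```python
import numpy as np
from lenskit.data import ItemList  # assumed import path
from lenskit.metrics import MAE

# Hypothetical single-user list: predicted scores plus truth ratings.
preds = ItemList(item_ids=[10, 20, 30], scores=np.array([3.6, 4.1, 2.8]))
truth = ItemList(item_ids=[10, 20, 30], rating=np.array([4.0, 4.5, 2.0]))

mae = MAE()
# Arguments are positional-only, per the signature above.
print(mae.measure_list(preds, truth))
```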
- compute_list_data(output, test)#
Compute measurements for a single list.
Use Metric.measure_list() in new implementations.
- extract_list_metric(metric)#
Extract a single-list metric from the per-list measurement result (if applicable).
- Returns:
  The per-list metric, or None if this metric does not compute per-list metrics.

Implement Metric.extract_list_metrics() in new implementations.
- global_aggregate(values)#
Aggregate list metrics to compute a global value.
Implement Metric.summarize() in new implementations.
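To show how the decomposed pieces fit together, here is a sketch of the protocol these methods imply: compute_list_data produces intermediate data per list, extract_list_metric turns it into a per-list value, and global_aggregate combines the intermediates into one overall value. Only the method names and signatures come from this page; the ItemList construction and the exact shape of the intermediate data are assumptions.

```python
import numpy as np
from lenskit.data import ItemList  # assumed import path
from lenskit.metrics import MAE

mae = MAE()

# Hypothetical (predictions, truth) pairs for two users.
lists = [
    (ItemList(item_ids=[1, 2], scores=np.array([3.5, 4.0])),
     ItemList(item_ids=[1, 2], rating=np.array([4.0, 4.5]))),
    (ItemList(item_ids=[3], scores=np.array([2.0])),
     ItemList(item_ids=[3], rating=np.array([1.0]))),
]

# Decomposed flow: per-list intermediate data, then per-list and global results.
intermediates = [mae.compute_list_data(out, test) for out, test in lists]
per_list = [mae.extract_list_metric(x) for x in intermediates]  # may be None
overall = mae.global_aggregate(intermediates)
print(per_list, overall)
```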