lenskit.metrics.RMSE#

class lenskit.metrics.RMSE(missing_scores='error', missing_truth='error')#

Bases: PredictMetric, ListMetric

Compute RMSE (root mean squared error). This is computed as:

\[\sqrt{\frac{\sum_{r_{ui} \in R} \left(r_{ui} - s(i|u)\right)^2}{|R|}}\]

This metric does not apply any fallbacks; if you want to compute RMSE with fallback predictions (e.g., using a bias model when a collaborative filter cannot predict), generate the predictions with FallbackScorer.
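For intuition, the formula can be evaluated directly with NumPy. This is a minimal sketch of the math on made-up ratings, not the class's internal implementation:

    import numpy as np

    # Hypothetical data: true ratings r_ui and predicted scores s(i|u).
    truth = np.array([4.0, 3.0, 5.0, 2.0])
    scores = np.array([3.5, 3.0, 4.0, 2.5])

    # RMSE: square root of the mean squared prediction error.
    rmse = np.sqrt(np.mean((truth - scores) ** 2))
    print(rmse)  # ~0.61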

Stability:
Caller (see Stability Levels).
Parameters:
  • missing_scores (Literal['error', 'ignore'])

  • missing_truth (Literal['error', 'ignore'])

__init__(missing_scores='error', missing_truth='error')#
Parameters:
  • missing_scores (Literal['error', 'ignore'])

  • missing_truth (Literal['error', 'ignore'])
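The constructor signature above is the documented API; the exact effect of 'ignore' is an assumption here, inferred from the align_scores summary below (unmatched items are presumably dropped rather than raising an error). A minimal construction sketch:

    from lenskit.metrics import RMSE

    # Default dispositions: raise an error when a test item has no
    # score or a scored item has no ground-truth rating.
    strict = RMSE()

    # Presumed 'ignore' behavior: drop unmatched items, so RMSE is
    # computed over the aligned score/rating pairs only.
    lenient = RMSE(missing_scores="ignore", missing_truth="ignore")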

Methods

__init__([missing_scores, missing_truth])

align_scores(predictions[, truth])

Align prediction scores and rating values, applying the configured missing dispositions.

extract_list_metrics(data, /)

Return the given per-list metric result.

measure_list(predictions[, test])

Compute measurements for a single list.

summarize(values, /)

Summarize per-list metric values.

Attributes

default

label

The metric's default label in output.

missing_scores

missing_truth

measure_list(predictions, test=None, /)#

Compute measurements for a single list.

Returns:

  • A float for simple metrics

  • Intermediate data for decomposed metrics

  • A dict mapping metric names to values for multi-metric classes

Return type:

float
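A usage sketch for a single list. The ItemList construction is an assumption (item_ids plus a scores array and a rating field); consult lenskit.data.ItemList for the exact constructor in your version:

    import numpy as np
    from lenskit.data import ItemList
    from lenskit.metrics import RMSE

    # Predicted scores for three items (assumed ItemList construction).
    preds = ItemList(item_ids=[10, 20, 30], scores=np.array([3.5, 4.0, 2.0]))
    # Ground-truth ratings for the same items.
    test = ItemList(item_ids=[10, 20, 30], rating=np.array([4.0, 4.0, 3.0]))

    metric = RMSE()
    print(metric.measure_list(preds, test))  # a single float, ~0.65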