lenskit.metrics.Precision

class lenskit.metrics.Precision(n=None, *, k=None)

Bases: ListMetric, RankingMetricBase

Compute recommendation precision: the fraction of the recommendation list \(L\) that appears in the user's test items \(I_u^{\mathrm{test}}\). It is computed as:

\[\frac{|L \cap I_u^{\mathrm{test}}|}{|L|}\]

In the uncommon case that k is specified and len(recs) < k, this metric uses len(recs) as the denominator.
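For concreteness, here is a small usage sketch that scores a single recommendation list. It assumes ItemList (from lenskit.data) accepts item IDs via the item_ids keyword and an ordered flag; verify both against your LensKit version's ItemList signature.

    # Usage sketch (assumed constructor keywords; check your version).
    from lenskit.data import ItemList
    from lenskit.metrics import Precision

    recs = ItemList(item_ids=["a", "b", "c", "d"], ordered=True)  # ranked recommendations
    test = ItemList(item_ids=["b", "d", "e"])                     # held-out test items

    metric = Precision(k=10)
    # Only 4 items were recommended, so per the note above the denominator
    # is len(recs) = 4: |L ∩ I_u^test| / |L| = 2 / 4 = 0.5
    print(metric.measure_list(recs, test))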

Stability:
Caller (see Stability Levels).
Parameters:
  • n (int | None)

  • k (int | None)

__init__(n=None, *, k=None)
Parameters:
  • n (int | None)

  • k (int | None)

Methods

__init__([n, k])

extract_list_metrics(data, /)
    Return the given per-list metric result.

measure_list(recs, test)
    Compute the metric value for a single result list.

summarize(values, /)
    Summarize per-list metric values.

truncate(items)
    Truncate an item list if it is longer than n.

Attributes

default

k

label
    The metric's default label in output.

n
    The maximum length of rankings to consider.

set_k

property label

The metric's default label in output. The base implementation returns the class name by default.
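For illustration, a one-line check of the default label (a sketch; whether subclasses decorate the name, for example by appending the configured k, may vary by LensKit version):

    from lenskit.metrics import Precision

    # Base behavior per the docs above: the label defaults to the class name.
    # Whether a configured k is appended (e.g. "Precision@10") is an
    # assumption to verify against your version.
    print(Precision().label)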

measure_list(recs, test)

Compute the metric value for a single result list.

Individual metric classes need to implement this method.

Parameters:
  • recs (ItemList)

  • test (ItemList)

Return type:

float
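Since individual metric classes implement this method, a subclass sketch may help. This is a hypothetical example: it assumes ListMetric and RankingMetricBase can be imported from lenskit.metrics (as the Bases line above suggests) and that ItemList exposes an ids() accessor; verify both against your version.

    import numpy as np

    from lenskit.metrics import ListMetric, RankingMetricBase


    class HitRate(ListMetric, RankingMetricBase):
        """Hypothetical metric: 1.0 if any recommended item is a test item."""

        def measure_list(self, recs, test):
            recs = self.truncate(recs)  # honor the configured ranking length
            hits = np.isin(recs.ids(), test.ids())  # assumes ItemList.ids()
            return float(np.any(hits))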