lenskit.flexmf.FlexMFImplicitConfig#

class lenskit.flexmf.FlexMFImplicitConfig#

Bases: lenskit.flexmf._base.FlexMFConfigBase

Configuration for FlexMFImplicitScorer. It inherits base model options from FlexMFConfigBase.
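As a sketch of typical usage, the configuration can be constructed directly and passed to the scorer. The field names follow the documentation below, but the `loss` value and the scorer constructor signature are assumptions; check the `FlexMFImplicitScorer` API for the exact interface:

```python
from lenskit.flexmf import FlexMFImplicitConfig, FlexMFImplicitScorer

# Configure WARP-style training. Field names are documented below;
# "warp" as an ImplicitLoss value is assumed from the WARP discussion.
config = FlexMFImplicitConfig(
    loss="warp",
    negative_count=5,   # sample 5 negatives per positive
    item_bias=True,
)
scorer = FlexMFImplicitScorer(config)
```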

Stability:

Experimental

preset: Literal['bpr', 'warp', 'lightgcn'] | None = None#

Select preset defaults that mimic a particular model's original presentation.


loss: ImplicitLoss = 'logistic'#

The loss to use for model training.

negative_strategy: NegativeStrategy | None = None#

The negative sampling strategy. The default is "misranked" for WARP loss and "uniform" for other losses.

negative_count: pydantic.PositiveInt = 1#

The number of negative items to sample for each positive item in the training data. With BPR loss, the positive item is compared to each sampled negative; with logistic loss, the positive item is used once per learning round, so this setting effectively trains on n negatives per positive rather than weighting positive and negative examples equally.

positive_weight: pydantic.PositiveFloat = 1.0#

A weighting multiplier to apply to the positive item’s loss, to adjust the relative importance of positive and negative classifications. Only applies to logistic loss.
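The interaction between negative_count and positive_weight under logistic loss can be sketched as follows. This is an illustrative stand-in, not LensKit source: the helper names and the exact loss form are assumptions, but it shows the documented structure of one weighted positive term plus one term per sampled negative.

```python
import math


def logistic_loss(score, label):
    """Binary cross-entropy on a raw score (illustrative)."""
    p = 1.0 / (1.0 + math.exp(-score))
    return -math.log(p) if label else -math.log(1.0 - p)


def example_round_loss(pos_score, neg_scores, positive_weight=1.0):
    # One positive term scaled by positive_weight, plus one term per
    # sampled negative -- so negative_count negatives contribute
    # negative_count terms against a single positive term.
    return positive_weight * logistic_loss(pos_score, True) + sum(
        logistic_loss(s, False) for s in neg_scores
    )
```

Raising positive_weight above 1.0 is one way to rebalance the loss when several negatives are sampled per positive.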

user_bias: bool | None = None#

Whether to learn a user bias term. If unspecified, the default depends on the loss function (False for pairwise and True for logistic).
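The loss-dependent defaults for negative_strategy and user_bias described above can be sketched as a small resolution function. This is illustrative only, not the LensKit implementation; the loss and strategy names are taken from the field descriptions, and the actual internal logic (see selected_negative_strategy() below) may differ.

```python
def resolve_defaults(loss, negative_strategy=None, user_bias=None):
    """Resolve unspecified fields to their documented loss-dependent defaults."""
    if negative_strategy is None:
        # "misranked" for WARP loss, "uniform" for other losses.
        negative_strategy = "misranked" if loss == "warp" else "uniform"
    if user_bias is None:
        # False for pairwise losses, True for logistic loss.
        user_bias = loss == "logistic"
    return negative_strategy, user_bias
```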

item_bias: bool = True#

Whether to learn an item bias term.

convolution_layers: pydantic.NonNegativeInt = 0#

The number of LightGCN convolution layers to use. 0 (the default) configures for standard matrix factorization.

selected_negative_strategy()#

Return type:

NegativeStrategy

classmethod apply_preset(data)#

check_strategies()#