lenskit.flexmf.FlexMFExplicitConfig

class lenskit.flexmf.FlexMFExplicitConfig

Bases: lenskit.flexmf._base.FlexMFConfigBase

Configuration for FlexMFExplicitScorer. This class overrides certain base class defaults for better explicit-feedback performance.

Stability:

Experimental

regularization: float = 0.1

The regularization strength.

Note

The explicit-feedback model uses a different default regularization strength than the base configuration.

reg_method: Literal['AdamW', 'L2'] | None = 'L2'

The regularization method to use.

With AdamW regularization (the base-class default), training will use the AdamW optimizer with weight decay. With L2 regularization (the default for this class), training will use sparse gradients and the torch.optim.SparseAdam optimizer.

Note

The explicit-feedback model defaults this setting to "L2".

None

Use no regularization.

"L2"

Use L2 regularization on the parameters used in each training batch. The strength is applied to the mean of the norms in a batch, so that the scale of the regularization term does not depend on the batch size.

"AdamW"

Use torch.optim.AdamW with the specified regularization strength. This configuration does not use sparse gradients, but training time is often comparable.
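The batch-size invariance of the "L2" option can be illustrated with a small sketch. This is not LensKit code; `l2_reg_term` is a hypothetical helper showing the averaging described above, assuming the penalty is the mean of squared parameter norms over the rows touched by a batch:

```python
def l2_reg_term(batch_params, strength):
    # Hypothetical sketch: the penalty is the *mean* of the squared L2
    # norms of the parameter rows in the batch, so its scale does not
    # grow when the batch gets larger.
    sq_norms = [sum(x * x for x in row) for row in batch_params]
    return strength * sum(sq_norms) / len(sq_norms)
```

Doubling the batch (e.g. concatenating it with itself) leaves the term unchanged, whereas a summed penalty would double it.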

Note

Regularization values do not necessarily have the same range or meaning for the different regularization methods.
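One reason the values do not transfer: with an adaptive optimizer, an L2 penalty folded into the gradient is rescaled by the adaptive step size, while AdamW's decoupled weight decay is applied directly to the parameter. The sketch below is a deliberately simplified single-step illustration (gradient-magnitude normalization only, no momentum or second-moment state; both function names are hypothetical), not the actual optimizer math:

```python
def adam_like_l2_step(p, grad, lr, reg, eps=1e-8):
    # L2 penalty folded into the gradient: the adaptive normalization
    # (simplified here to dividing by the gradient's magnitude)
    # rescales the penalty together with the loss gradient.
    g = grad + reg * p
    return p - lr * g / (abs(g) + eps)

def adamw_like_step(p, grad, lr, reg, eps=1e-8):
    # Decoupled weight decay: the decay term bypasses the adaptive
    # normalization and shrinks the parameter directly.
    return p - lr * grad / (abs(grad) + eps) - lr * reg * p
```

For the same numeric strength, the two updates move the parameter by different amounts, which is why a strength tuned for one `reg_method` should be re-tuned when switching to the other.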