lenskit.flexmf.FlexMFConfigBase
- class lenskit.flexmf.FlexMFConfigBase
Bases: lenskit.config.common.EmbeddingSizeMixin, pydantic.BaseModel

Common configuration for all FlexMF scoring components.
- Stability:
Experimental
- embedding_size: pydantic.PositiveInt = 64
The dimension of user and item embeddings (number of latent features to learn).
- regularization: float = 0.01
The regularization strength.
Note
The explicit-feedback model uses a different default strength.
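For reference, the documented fields and their defaults correspond roughly to the sketch below. This is an illustrative plain-dataclass stand-in, not the actual pydantic.BaseModel subclass, and it omits pydantic's validation (e.g. the PositiveInt constraint); the reg_method field is documented below.

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class FlexMFConfigSketch:
    """Illustrative stand-in for FlexMFConfigBase (not the real class)."""

    # Dimension of user and item embeddings; must be positive.
    embedding_size: int = 64
    # Regularization strength (the explicit-feedback model uses a
    # different default).
    regularization: float = 0.01
    # Regularization method: 'AdamW' (default), 'L2', or None.
    reg_method: Optional[Literal['AdamW', 'L2']] = 'AdamW'

# Overriding selected fields while keeping the remaining defaults:
cfg = FlexMFConfigSketch(embedding_size=32, regularization=0.05)
```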
- reg_method: Literal['AdamW', 'L2'] | None = 'AdamW'
The regularization method to use.
With the default AdamW regularization, training will use the AdamW optimizer with weight decay. With L2 regularization, training will use sparse gradients and the torch.optim.SparseAdam optimizer.
Note
The explicit-feedback model defaults this setting to "L2".
- None: Use no regularization.
- "L2": Use L2 regularization on the parameters used in each training batch. The strength is applied to the _mean_ norms in a batch, so that the regularization term scale is not dependent on the batch size.
- "AdamW": Use torch.optim.AdamW with the specified regularization strength. This configuration does not use sparse gradients, but training time is often comparable.
Note
Regularization values do not necessarily have the same range or meaning for the different regularization methods.
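The batch-size independence of the "L2" method described above can be sketched in plain Python. This is a hypothetical helper for illustration, not LensKit code: averaging the squared norms over the rows of a batch keeps the penalty's scale constant as the batch grows.

```python
def mean_l2_penalty(batch_embeddings, strength):
    """Hypothetical sketch: L2 penalty averaged over the rows of a batch.

    Because the squared norms are averaged (divided by the batch size),
    enlarging the batch, e.g. by repeating rows, does not inflate the
    penalty term the way a plain sum would.
    """
    sq_norms = [sum(x * x for x in row) for row in batch_embeddings]
    return strength * sum(sq_norms) / len(sq_norms)

small = [[1.0, 2.0], [3.0, 4.0]]
large = small * 8  # 8x the rows, same distribution

# Both batches yield the same penalty; a plain sum would differ by 8x.
p_small = mean_l2_penalty(small, 0.01)
p_large = mean_l2_penalty(large, 0.01)
```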