class lenskit.flexmf.FlexMFConfigBase

Bases: lenskit.config.common.EmbeddingSizeMixin, pydantic.BaseModel

Common configuration for all FlexMF scoring components.

Stability:

Experimental

embedding_size: pydantic.PositiveInt = 64

The dimension of user and item embeddings (number of latent features to learn).
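Because the field is typed as pydantic.PositiveInt, invalid sizes are rejected at model construction. A minimal sketch using a stand-in model (not the real class, which mixes in additional behavior) that mirrors just this field:

```python
from pydantic import BaseModel, PositiveInt, ValidationError

class EmbeddingConfig(BaseModel):
    """Stand-in mirroring only the embedding_size field."""
    embedding_size: PositiveInt = 64

print(EmbeddingConfig().embedding_size)                   # default: 64
print(EmbeddingConfig(embedding_size=32).embedding_size)  # override: 32

try:
    EmbeddingConfig(embedding_size=0)  # not positive: raises ValidationError
except ValidationError:
    print("rejected: embedding_size must be positive")
```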

batch_size: int = 8192

The training batch size.

learning_rate: float = 0.01

The learning rate for training.

epochs: int = 10

The number of training epochs.

regularization: float = 0.01

The regularization strength.

Note

The explicit-feedback model uses a different default strength.

reg_method: Literal['AdamW', 'L2'] | None = 'AdamW'

The regularization method to use.

With the default "AdamW" method, training uses the torch.optim.AdamW optimizer with weight decay. With "L2" regularization, training uses sparse gradients and the torch.optim.SparseAdam optimizer.

Note

The explicit-feedback model defaults this setting to "L2".

None

Use no regularization.

"L2"

Use L2 regularization on the parameters used in each training batch. The strength is applied to the mean of the parameter norms in a batch, so that the scale of the regularization term does not depend on the batch size.

"AdamW"

Use torch.optim.AdamW with the regularization strength as its weight decay. This configuration does not use sparse gradients, but training time is often comparable.

Note

Regularization values do not necessarily have the same range or meaning for the different regularization methods.
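The fields above can be illustrated with a small stand-in pydantic model. This is a sketch of the documented fields only, not the real class (FlexMFConfigBase also inherits EmbeddingSizeMixin behavior not shown here); it demonstrates the defaults, overriding them, and the Literal constraint on reg_method:

```python
from typing import Literal, Optional
from pydantic import BaseModel, PositiveInt, ValidationError

class FlexMFConfigSketch(BaseModel):
    """Stand-in mirroring the documented FlexMFConfigBase fields."""
    embedding_size: PositiveInt = 64
    batch_size: int = 8192
    learning_rate: float = 0.01
    epochs: int = 10
    regularization: float = 0.01
    reg_method: Optional[Literal["AdamW", "L2"]] = "AdamW"

# All defaults:
cfg = FlexMFConfigSketch()
print(cfg.reg_method, cfg.batch_size)  # AdamW 8192

# Explicit-feedback-style override (0.1 is just an illustrative value;
# the actual explicit-feedback defaults are not given in this reference):
cfg = FlexMFConfigSketch(reg_method="L2", regularization=0.1)

# The Literal type rejects unsupported methods:
try:
    FlexMFConfigSketch(reg_method="L1")
except ValidationError:
    print("L1 is not a supported reg_method")

# None disables regularization entirely:
cfg = FlexMFConfigSketch(reg_method=None)
```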