lenskit.flexmf.FlexMFExplicitConfig
===================================

.. py:class:: lenskit.flexmf.FlexMFExplicitConfig
   :canonical: lenskit.flexmf._explicit.FlexMFExplicitConfig

   Bases: :py:obj:`lenskit.flexmf._base.FlexMFConfigBase`

   Configuration for :class:`FlexMFExplicitScorer`.  This class overrides
   certain base-class defaults for better explicit-feedback performance.

   :Stability: Experimental

   .. py:attribute:: regularization
      :type: float
      :value: 0.1

      The regularization strength.

      .. note::

         The explicit-feedback model uses a different default strength
         than the base configuration.

   .. py:attribute:: reg_method
      :type: Literal['AdamW', 'L2'] | None
      :value: 'L2'

      The regularization method to use.

      With AdamW regularization, training uses the
      :class:`~torch.optim.AdamW` optimizer with weight decay.  With L2
      regularization, training uses sparse gradients and the
      :class:`torch.optim.SparseAdam` optimizer.

      ``None``
         Use no regularization.

      ``"L2"``
         Use L2 regularization on the parameters used in each training
         batch.  The strength is applied to the *mean* norms in a batch,
         so that the scale of the regularization term does not depend on
         the batch size.

      ``"AdamW"``
         Use :class:`torch.optim.AdamW` with the specified regularization
         strength.  This configuration does *not* use sparse gradients,
         but training time is often comparable.

      .. note::

         The explicit-feedback model defaults this setting to ``"L2"``.

      .. note::

         Regularization values do not necessarily have the same range or
         meaning for the different regularization methods.
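For orientation, a minimal configuration sketch.  This assumes the usual
LensKit component convention of constructing a scorer from its config
object; the specific keyword values shown here are illustrative, not
recommended settings:

.. code-block:: python

   from lenskit.flexmf import FlexMFExplicitConfig, FlexMFExplicitScorer

   # Override the defaults: switch from the explicit model's default L2
   # regularization to AdamW weight decay with a weaker strength.
   # (Remember that strength values are not directly comparable across
   # regularization methods.)
   config = FlexMFExplicitConfig(regularization=0.05, reg_method="AdamW")

   # Construct the scorer from the config (assumed component pattern).
   scorer = FlexMFExplicitScorer(config)

Because ``reg_method="AdamW"`` does not use sparse gradients, this
variant trades the :class:`torch.optim.SparseAdam` path for standard
dense weight decay; the source notes the training times are often
comparable.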