lenskit.graphs.lightgcn.LightGCNTrainer#

class lenskit.graphs.lightgcn.LightGCNTrainer(scorer, data, options)#

Bases: lenskit.training.ModelTrainer

Protocol implemented by iterative trainers for models. Models that implement UsesTrainer will return an object implementing this protocol from their create_trainer() method.

This protocol only defines the core aspects of training a model. Trainers should also implement ParameterContainer to allow training to be checkpointed and resumed.

It is also a good idea for the trainer to be pickleable, but the parameter container interface is the primary mechanism for checkpointing.

Stability:
Full (see Stability Levels).
Parameters:
scorer: LightGCNScorer#
data: lenskit.data.Dataset#
options: lenskit.training.TrainingOptions#
log: structlog.stdlib.BoundLogger#
rng: numpy.random.Generator#
device: str#
model: torch_geometric.nn.LightGCN#
matrix: lenskit.data.MatrixRelationshipSet#
coo: lenskit.data.matrix.COOStructure#
user_base: int#
edges: torch.Tensor#
optimizer: torch.optim.Optimizer#
epochs_trained: int = 0#
train_epoch()#

Perform one epoch of the training process, optionally returning metrics on the training behavior. After each training epoch, the model must be usable.

Return type:

dict[str, float] | None

finalize()#

Finish the training process, cleaning up any unneeded data structures and doing any finalization steps to the model.

The default implementation does nothing.

abstractmethod batch_loss(mb_edges, scores)#

Compute the training loss for a minibatch of edges.

Parameters:

mb_edges – the edges in the minibatch.
scores – the model's scores for those edges.

Return type:

torch.Tensor
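One common choice for a recommendation batch loss is BPR (Bayesian Personalized Ranking). The sketch below shows the arithmetic only: it is a hedged, torch-free illustration, not the loss LensKit's LightGCNTrainer subclasses actually use, and the function name and list-based signature are invented (a real batch_loss operates on torch.Tensors and returns one).

```python
import math

def bpr_batch_loss(pos_scores: list[float], neg_scores: list[float]) -> float:
    """Illustrative BPR loss: -mean(log(sigmoid(s_pos - s_neg)))."""
    total = 0.0
    for sp, sn in zip(pos_scores, neg_scores):
        # sigmoid of the score margin between the positive and
        # negative item, penalizing poorly-ranked pairs.
        total += -math.log(1.0 / (1.0 + math.exp(-(sp - sn))))
    return total / len(pos_scores)

# Larger positive-vs-negative margins yield a smaller loss.
loss = bpr_batch_loss([2.0, 1.5], [0.5, 1.0])
```

The loss is always positive and shrinks as positive items are scored further above negative ones, which is the property a pairwise ranking trainer optimizes.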