SequentialLR
class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1, verbose=False)
[source]
Receives a list of schedulers that are expected to be called sequentially during the optimization process, together with a list of milestone points that gives the exact intervals determining which scheduler is to be called at a given epoch.
Parameters
- optimizer (Optimizer) – Wrapped optimizer.
- schedulers (list) – List of chained schedulers.
- milestones (list) – List of integers reflecting the milestone points.
- last_epoch (int) – The index of the last epoch. Default: -1.
- verbose (bool) – If True, prints a message to stdout for each update. Default: False.
Example
>>> # Assuming optimizer uses lr = 1. for all groups
>>> # lr = 0.1     if epoch == 0
>>> # lr = 0.1     if epoch == 1
>>> # lr = 0.9     if epoch == 2
>>> # lr = 0.81    if epoch == 3
>>> # lr = 0.729   if epoch == 4
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[2])
>>> for epoch in range(100):
...     train(...)
...     validate(...)
...     scheduler.step()
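For experimentation, here is a self-contained variant of the example above. The toy Linear model and SGD optimizer are illustrative assumptions, and the train(...)/validate(...) placeholders are replaced by a bare optimizer.step():

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

# Toy model and optimizer (assumptions for illustration only).
model = torch.nn.Linear(2, 1)
optimizer = SGD(model.parameters(), lr=1.0)

# Phase 1: hold lr at 0.1 * base_lr for two epochs; phase 2: decay by 0.9 per epoch.
scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
scheduler2 = ExponentialLR(optimizer, gamma=0.9)
scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2],
                         milestones=[2])

for epoch in range(5):
    print(epoch, scheduler.get_last_lr())  # lr in effect for this epoch
    optimizer.step()                       # stand-in for a real training step
    scheduler.step()                       # advance the schedule by one epoch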
get_last_lr()
Return the last learning rate computed by the current scheduler.
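The return value is a list with one learning rate per parameter group. Continuing the sketch above:

print(scheduler.get_last_lr())  # e.g. [0.1] during the ConstantLR phase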
load_state_dict(state_dict)
[source]
Load the scheduler's state.
Parameters
- state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
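A typical use is restoring a schedule from a checkpoint. A minimal sketch, assuming the optimizer and scheduler from the example above and a hypothetical checkpoint.pt path:

# Save optimizer and scheduler state together.
torch.save({
    "optimizer": optimizer.state_dict(),
    "scheduler": scheduler.state_dict(),
}, "checkpoint.pt")

# Later: rebuild the optimizer and schedulers as above, then restore their state.
checkpoint = torch.load("checkpoint.pt")
optimizer.load_state_dict(checkpoint["optimizer"])
scheduler.load_state_dict(checkpoint["scheduler"])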
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.