SequentialLR

class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1, verbose=False) [source]

Contains a list of schedulers expected to be called sequentially during the optimization process, along with milestone points that give the exact epochs at which control passes from one scheduler to the next.

Parameters
  • optimizer (Optimizer) – Wrapped optimizer.
  • schedulers (list) – List of schedulers to be called sequentially.
  • milestones (list) – List of integers reflecting the milestone points; expects exactly one fewer entry than schedulers.
  • last_epoch (int) – The index of the last epoch. Default: -1.
  • verbose (bool) – Deprecated; has no effect.

Example

>>> from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR
>>> # Assuming optimizer uses lr = 1. for all groups
>>> # lr = 0.1     if epoch == 0
>>> # lr = 0.1     if epoch == 1
>>> # lr = 0.9     if epoch == 2
>>> # lr = 0.81    if epoch == 3
>>> # lr = 0.729   if epoch == 4
>>> scheduler1 = ConstantLR(optimizer, factor=0.1, total_iters=2)
>>> scheduler2 = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = SequentialLR(optimizer, schedulers=[scheduler1, scheduler2], milestones=[2])
>>> for epoch in range(100):
...     train(...)
...     validate(...)
...     scheduler.step()
get_last_lr()

Return the last learning rate computed by the current scheduler, as a list with one value per parameter group.
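
A minimal sketch, assuming a freshly constructed scheduler from the example above; stepping through the first few epochs should print the rates listed in that example's comments (0.1, 0.1, 0.9, 0.81, 0.729):

>>> for epoch in range(5):
...     print(epoch, scheduler.get_last_lr())
...     scheduler.step()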

load_state_dict(state_dict) [source]

Load the scheduler's state, including the states of the wrapped schedulers.

Parameters

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
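
A minimal checkpointing sketch, assuming a hypothetical checkpoint.pt path; pairing state_dict() with load_state_dict() restores the wrapped schedulers as well:

>>> torch.save({"scheduler": scheduler.state_dict()}, "checkpoint.pt")
>>> # ... later, after recreating the optimizer and an identical scheduler ...
>>> scheduler.load_state_dict(torch.load("checkpoint.pt")["scheduler"])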

print_lr(is_verbose, group, lr, epoch=None)

Display the current learning rate.
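
This helper is normally called internally by the scheduler; invoking it directly is possible, as in this sketch (the printed message format is an assumption based on the base LRScheduler class):

>>> scheduler.print_lr(True, 0, 0.1)  # prints a message like 'Adjusting learning rate of group 0 to 1.0000e-01.'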

state_dict() [source]

Return the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ that is not the optimizer. The states of the wrapped schedulers are also saved.
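
A small sketch illustrating the exclusion of the optimizer, grounded in the description above:

>>> sd = scheduler.state_dict()
>>> "optimizer" in sd  # the optimizer itself is never part of the scheduler state
False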
