Rprop
class torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50), foreach=None, maximize=False)
Implements the resilient backpropagation algorithm.
For further details regarding the algorithm, we refer to the paper A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm.
Parameters:
- params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
- lr (float, optional) – learning rate (default: 1e-2)
- etas (Tuple[float, float], optional) – pair of (etaminus, etaplus), which are multiplicative decrease and increase factors (default: (0.5, 1.2))
- step_sizes (Tuple[float, float], optional) – a pair of minimal and maximal allowed step sizes (default: (1e-6, 50))
- foreach (bool, optional) – whether the foreach implementation of the optimizer is used (default: None)
- maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
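A minimal usage sketch follows; the model, data, and loss function are placeholders and not part of the Rprop API.

```python
import torch
from torch import nn

model = nn.Linear(10, 1)  # placeholder model
optimizer = torch.optim.Rprop(
    model.parameters(), lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-6, 50)
)

inputs = torch.randn(32, 10)   # placeholder data
targets = torch.randn(32, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
```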
add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.

Parameters:
- param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
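A sketch of this fine-tuning pattern; the two modules and the per-group learning rate below are illustrative, not part of the API.

```python
import torch
from torch import nn

backbone = nn.Linear(10, 10)   # illustrative "pre-trained" module
head = nn.Linear(10, 1)        # illustrative task-specific module

# Start with the backbone frozen; only the head is registered.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Rprop(head.parameters(), lr=0.01)

# Later in training: unfreeze the backbone and add it as a new param group,
# here with its own (hypothetical) learning rate.
for p in backbone.parameters():
    p.requires_grad = True
optimizer.add_param_group({"params": backbone.parameters(), "lr": 1e-3})
```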
load_state_dict(state_dict)

Loads the optimizer state.

Parameters:
- state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
state_dict()

Returns the state of the optimizer as a dict.

It contains two entries:
- state – a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups – a list containing all parameter groups where each parameter group is a dict.
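A checkpointing sketch that exercises both state_dict() and load_state_dict(); the model and file name are placeholders.

```python
import torch
from torch import nn

model = nn.Linear(10, 1)       # placeholder model
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)

# Save model and optimizer state side by side (file name is arbitrary).
torch.save(
    {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
    "checkpoint.pt",
)

# Restore: rebuild the objects first, then load the saved state into them.
model = nn.Linear(10, 1)
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
```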
step(closure=None)
Performs a single optimization step.
Parameters:
- closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
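A sketch of passing a closure to step(); the model and data are placeholders, and the closure is optional for Rprop.

```python
import torch
from torch import nn

model = nn.Linear(10, 1)                                   # placeholder model
optimizer = torch.optim.Rprop(model.parameters(), lr=0.01)
inputs, targets = torch.randn(8, 10), torch.randn(8, 1)    # placeholder data

def closure():
    # Re-evaluate the model and return the loss, as step() expects.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    return loss

loss = optimizer.step(closure)
```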
zero_grad(set_to_none=False)

Sets the gradients of all optimized torch.Tensor s to zero.

Parameters:
- set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient. 3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
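A sketch contrasting the two modes; the model is a placeholder.

```python
import torch
from torch import nn

model = nn.Linear(10, 1)       # placeholder model
optimizer = torch.optim.Rprop(model.parameters())

model(torch.randn(4, 10)).sum().backward()
optimizer.zero_grad()                   # default: grads become zero-filled Tensors
print(model.weight.grad)                # tensor of zeros

model(torch.randn(4, 10)).sum().backward()
optimizer.zero_grad(set_to_none=True)   # grads are reset to None instead
print(model.weight.grad)                # None
```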