
Adagrad

class torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10, foreach=None, *, maximize=False) [source]

Implements Adagrad algorithm.

\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \gamma \text{ (lr)}, \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)}, \: \lambda \text{ (weight decay)}, \\
    &\hspace{12mm} \tau \text{ (initial accumulator value)}, \: \eta \text{ (lr decay)} \\
    &\textbf{initialize} : state\_sum_0 \leftarrow \tau \\
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t = 1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{5mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
    &\hspace{5mm} \tilde{\gamma} \leftarrow \gamma / (1 + (t-1)\eta) \\
    &\hspace{5mm} \textbf{if} \: \lambda \neq 0 \\
    &\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{5mm} state\_sum_t \leftarrow state\_sum_{t-1} + g_t^2 \\
    &\hspace{5mm} \theta_t \leftarrow \theta_{t-1} - \tilde{\gamma} \, \frac{g_t}{\sqrt{state\_sum_t} + \epsilon} \\
    &\rule{110mm}{0.4pt} \\
    &\textbf{return} \: \theta_t \\
    &\rule{110mm}{0.4pt}
\end{aligned}

For further details regarding the algorithm we refer to Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.
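As a plain-tensor sketch of the update rule above (an illustration of the math only, not the library's actual implementation):

>>> import torch
>>> def adagrad_step(param, grad, state_sum, step, lr=0.01, lr_decay=0.0,
...                  weight_decay=0.0, eps=1e-10):
...     # decayed learning rate: gamma~ = gamma / (1 + (t - 1) * eta)
...     clr = lr / (1 + (step - 1) * lr_decay)
...     # weight decay: g_t <- g_t + lambda * theta_{t-1}
...     if weight_decay != 0:
...         grad = grad + weight_decay * param
...     # accumulate squared gradients: state_sum_t <- state_sum_{t-1} + g_t**2
...     state_sum = state_sum + grad * grad
...     # theta_t <- theta_{t-1} - gamma~ * g_t / (sqrt(state_sum_t) + eps)
...     param = param - clr * grad / (state_sum.sqrt() + eps)
...     return param, state_sum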

Parameters:
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
  • lr (float, optional) – learning rate (default: 1e-2)
  • lr_decay (float, optional) – learning rate decay (default: 0)
  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
  • initial_accumulator_value (float, optional) – initial value of the sum-of-squared-gradients accumulator (default: 0)
  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-10)
  • foreach (bool, optional) – whether the foreach implementation of the optimizer is used (default: None)
  • maximize (bool, optional) – maximize the params based on the objective, instead of minimizing (default: False)
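A minimal usage sketch (the model and data here are placeholders):

>>> import torch
>>> from torch import nn
>>> model = nn.Linear(10, 1)  # placeholder model
>>> optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01, lr_decay=1e-4)
>>> inputs, targets = torch.randn(32, 10), torch.randn(32, 1)  # placeholder batch
>>> for _ in range(100):
...     optimizer.zero_grad()
...     loss = nn.functional.mse_loss(model(inputs), targets)
...     loss.backward()
...     optimizer.step()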
add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.

Parameters:

param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
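For example, to unfreeze a previously frozen submodule mid-training and give it its own learning rate (model.backbone and the running optimizer are hypothetical names assumed from context):

>>> for p in model.backbone.parameters():  # hypothetical frozen submodule
...     p.requires_grad = True
>>> optimizer.add_param_group({"params": model.backbone.parameters(), "lr": 1e-3})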

load_state_dict(state_dict)

Loads the optimizer state.

Parameters:

state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().

state_dict()

Returns the state of the optimizer as a dict.

It contains two entries:

  • state - a dict holding current optimization state. Its content differs between optimizer classes.
  • param_groups - a list containing all parameter groups, where each parameter group is a dict.
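A typical checkpointing sketch that uses state_dict() together with load_state_dict() (the file path is a placeholder):

>>> # save optimizer state alongside the model weights
>>> torch.save({"model": model.state_dict(),
...             "optimizer": optimizer.state_dict()}, "checkpoint.pt")
>>> # later: rebuild the objects, then restore their state
>>> checkpoint = torch.load("checkpoint.pt")
>>> model.load_state_dict(checkpoint["model"])
>>> optimizer.load_state_dict(checkpoint["optimizer"])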

step(closure=None) [source]

Performs a single optimization step.

Parameters:

closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
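A sketch of step() with a closure that recomputes the loss (model, inputs, and targets are the placeholders from the earlier sketch):

>>> def closure():
...     optimizer.zero_grad()
...     loss = nn.functional.mse_loss(model(inputs), targets)
...     loss.backward()
...     return loss
>>> loss = optimizer.step(closure)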

zero_grad(set_to_none=False)

Sets the gradients of all optimized torch.Tensor objects to zero.

Parameters:

set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
  1. When the user tries to access a gradient and perform manual ops on it, a None attribute and a Tensor full of 0s behave differently.
  2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.
  3. torch.optim optimizers behave differently if the gradient is 0 or None (in one case the step is taken with a gradient of 0, and in the other the step is skipped altogether).
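A short sketch contrasting the two modes (reusing the placeholder model from the earlier sketch; a backward pass has already populated the gradients):

>>> loss = nn.functional.mse_loss(model(inputs), targets)
>>> loss.backward()
>>> optimizer.zero_grad()  # default set_to_none=False: grads become zero tensors
>>> model.weight.grad.abs().sum()
tensor(0.)
>>> optimizer.zero_grad(set_to_none=True)  # grads are set to None
>>> model.weight.grad is None
True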
