torch.nn.utils.clip_grad_norm_
torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0, error_if_nonfinite=False)
Clips the gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
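For illustration only (not part of the original page): a minimal sketch of the equivalent computation for a finite p-norm, assuming every parameter already has a gradient; the function name clip_grad_norm_sketch is hypothetical.

    import torch

    def clip_grad_norm_sketch(parameters, max_norm, norm_type=2.0):
        # Gather the gradients of all parameters that have one.
        grads = [p.grad for p in parameters if p.grad is not None]
        # Norm over all gradients together, as if concatenated into a single
        # vector: for a p-norm this equals the p-norm of the per-gradient p-norms.
        total_norm = torch.norm(
            torch.stack([torch.norm(g.detach(), norm_type) for g in grads]),
            norm_type,
        )
        # Scale every gradient in place when the total norm exceeds max_norm.
        clip_coef = float(max_norm) / (float(total_norm) + 1e-6)
        if clip_coef < 1.0:
            for g in grads:
                g.detach().mul_(clip_coef)
        return total_norm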
- Parameters:
  - parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients normalized
  - max_norm (float or int) – max norm of the gradients
  - norm_type (float or int) – type of the used p-norm. Can be 'inf' for infinity norm.
  - error_if_nonfinite (bool) – if True, an error is thrown if the total norm of the gradients from parameters is nan, inf, or -inf. Default: False (will switch to True in the future)
- Returns:
  Total norm of the parameter gradients (viewed as a single vector).
- Return type:
  Tensor
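Usage sketch (not from the original page; the model, optimizer, and data below are hypothetical): clipping is typically applied after backward() and before the optimizer step.

    import torch
    import torch.nn as nn

    # Hypothetical model and data, for illustration only.
    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    inputs = torch.randn(8, 10)
    targets = torch.randn(8, 1)

    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    # Rescale gradients in place so their total 2-norm is at most 1.0,
    # then take the optimizer step with the clipped gradients.
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()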