no_grad
class torch.no_grad
[source]
Context-manager that disables gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.

This context manager is thread local; it will not affect computation in other threads.
Also functions as a decorator. (Make sure to instantiate with parentheses.)
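A minimal sketch (not part of the original docstring) of the thread-local behavior; the tensor and thread setup are illustrative assumptions:

>>> import threading
>>> x = torch.ones(3, requires_grad=True)
>>> results = []
>>> def worker():
...     # A new thread starts with gradients enabled; the no_grad()
...     # entered in the main thread does not apply here.
...     results.append((x * 2).requires_grad)
>>> with torch.no_grad():
...     t = threading.Thread(target=worker)
...     t.start()
...     t.join()
...     print((x * 2).requires_grad)
False
>>> results[0]
True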
Note
No-grad is one of several mechanisms that can enable or disable gradients locally; see Locally disabling gradient computation for more information on how they compare.
Note
This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.
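A hedged sketch of this point using the torch.autograd.forward_ad API: under no_grad(), a dual tensor's tangent still propagates, and unpacking the dual tensor (keeping only its primal) is what excludes a computation from forward AD. The values below are illustrative assumptions:

>>> import torch.autograd.forward_ad as fwAD
>>> primal = torch.tensor([1.0])
>>> tangent = torch.tensor([1.0])
>>> with fwAD.dual_level():
...     dual = fwAD.make_dual(primal, tangent)
...     with torch.no_grad():
...         out = dual * 2  # the tangent still propagates here
...     print(fwAD.unpack_dual(out).tangent)
...     # Keeping only the primal drops the tangent, so this
...     # computation is excluded from forward AD.
...     plain = fwAD.unpack_dual(dual).primal
...     print(fwAD.unpack_dual(plain * 2).tangent)
tensor([2.])
None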
Example:
>>> x = torch.tensor([1.], requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @torch.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
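As a further usage sketch (not from the original page), a typical inference pass wraps the forward call in no_grad(); the model and input shapes here are assumptions. Note that Module.eval() toggles layers such as dropout and batch norm but does not disable autograd, so the two are usually combined:

>>> import torch.nn as nn
>>> model = nn.Linear(10, 2).eval()  # hypothetical model, for illustration
>>> inp = torch.randn(4, 10)
>>> with torch.no_grad():
...     out = model(inp)  # no autograd graph is recorded for this pass
>>> out.requires_grad
False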