torch.autograd.forward_ad.make_dual
torch.autograd.forward_ad.make_dual(tensor, tangent, *, level=None)
Associates a tensor value with a forward gradient, the tangent, to create a "dual tensor", which is used to compute forward AD gradients. The result is a new tensor aliased to tensor, with tangent embedded as an attribute as-is if it has the same storage layout, or copied otherwise. The tangent attribute can be recovered with unpack_dual(). This function is backward differentiable.
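For illustration only (not part of the original page), a minimal sketch assuming a tensor x and a tangent v of the same shape, showing the tangent being embedded by make_dual() and recovered by unpack_dual() inside a dual level:

>>> import torch
>>> from torch.autograd.forward_ad import dual_level, make_dual, unpack_dual
>>> x = torch.randn(3)
>>> v = torch.randn(3)  # tangent, same shape as x
>>> with dual_level():
...     dual = make_dual(x, v)               # dual aliases x and carries v
...     primal, tangent = unpack_dual(dual)  # recover both parts
...     assert torch.equal(primal, x)        # primal holds the same values as x
...     assert torch.equal(tangent, v)       # tangent values match what was passed in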
Given a function f whose Jacobian is J, it allows one to compute the Jacobian-vector product (jvp) between J and a given vector v as follows.

Example:
>>> with dual_level():
...     inp = make_dual(x, v)
...     out = f(inp)
...     y, jvp = unpack_dual(out)
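As a concrete, hypothetical instance of the sketch above (f, x, and v are chosen here for illustration), taking f to be torch.sin lets the jvp be checked against the analytical derivative cos(x) * v:

>>> import torch
>>> from torch.autograd.forward_ad import dual_level, make_dual, unpack_dual
>>> x = torch.randn(4)
>>> v = torch.randn(4)  # direction of the Jacobian-vector product
>>> with dual_level():
...     inp = make_dual(x, v)
...     out = torch.sin(inp)  # f(inp); forward AD carries the tangent through sin
...     y, jvp = unpack_dual(out)
...     assert torch.allclose(y, torch.sin(x))        # primal output equals f(x)
...     assert torch.allclose(jvp, torch.cos(x) * v)  # J v = diag(cos(x)) v for elementwise sin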
Please see the forward-mode AD tutorial for detailed steps on how to use this API.