
torch.autograd.forward_ad.make_dual

torch.autograd.forward_ad.make_dual(tensor, tangent, *, level=None) [source]

Associates a tensor value with its forward gradient, the tangent, to create a “dual tensor”, which is used to compute forward AD gradients. The result is a new tensor aliased to tensor, with tangent embedded as an attribute as-is if it has the same storage layout, or copied otherwise. The tangent attribute can be recovered with unpack_dual().

This function is backward differentiable.
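
For illustration, here is a minimal, self-contained sketch (not part of the original page) showing that the tangent attached by make_dual() can later be recovered from the dual tensor with unpack_dual():

import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(2, 3)
tangent = torch.randn(2, 3)

with fwAD.dual_level():
    dual = fwAD.make_dual(primal, tangent)
    # unpack_dual returns a namedtuple (primal, tangent)
    p, t = fwAD.unpack_dual(dual)
    assert torch.equal(p, primal)   # same values as the original tensor
    assert torch.equal(t, tangent)  # the embedded tangent is recovered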

Given a function f whose Jacobian is J, this function allows one to compute the Jacobian-vector product (jvp) between J and a given vector v as follows.

Example:

>>> # x, v: tensors of matching shape; f: a function built from torch ops
>>> from torch.autograd.forward_ad import dual_level, make_dual, unpack_dual
>>> with dual_level():
...     inp = make_dual(x, v)       # dual tensor: primal x with tangent v
...     out = f(inp)                # forward AD propagates the tangent through f
...     y, jvp = unpack_dual(out)   # y = f(x), jvp = J @ v

Please see the forward-mode AD tutorial for detailed steps on how to use this API.
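
As a more complete, runnable sketch (assuming, for illustration, f(x) = sin(x), whose Jacobian is diag(cos(x))), the jvp returned by unpack_dual() can be checked against the analytical product cos(x) * v:

import torch
import torch.autograd.forward_ad as fwAD

x = torch.randn(3)
v = torch.randn(3)

with fwAD.dual_level():
    inp = fwAD.make_dual(x, v)      # dual tensor: primal x carrying tangent v
    out = inp.sin()                 # forward AD propagates the tangent through sin
    y, jvp = fwAD.unpack_dual(out)  # primal f(x) and the Jacobian-vector product J @ v

# For f(x) = sin(x), J = diag(cos(x)), so J @ v == cos(x) * v elementwise.
assert torch.allclose(y, x.sin())
assert torch.allclose(jvp, x.cos() * v)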
