torch.nn.modules.module.register_module_full_backward_pre_hook
torch.nn.modules.module.register_module_full_backward_pre_hook(hook)
Registers a backward pre-hook common to all the modules.
Warning
This adds global state to the nn.module module and it is only intended for debugging/profiling purposes.

The hook will be called every time the gradients for the module are computed. The hook should have the following signature:
hook(module, grad_output) -> Tensor or None
The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly, the caller will receive a view of each Tensor returned by the Module's forward function.
Global hooks are called before hooks registered with register_full_backward_pre_hook.
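As a minimal sketch, the following registers a global full backward pre-hook that inspects grad_output without modifying it; the hook name log_grad_output and the toy model are illustrative, not part of the API:

import torch
import torch.nn as nn

# Illustrative hook: prints grad_output shapes. Returning None leaves the
# incoming gradients unchanged; returning a tuple of new Tensors would
# replace grad_output in subsequent computations.
def log_grad_output(module, grad_output):
    shapes = [None if g is None else tuple(g.shape) for g in grad_output]
    print(f"{type(module).__name__}: grad_output shapes = {shapes}")

handle = torch.nn.modules.module.register_module_full_backward_pre_hook(log_grad_output)

model = nn.Linear(4, 2)
out = model(torch.randn(3, 4))
out.sum().backward()  # the hook fires as the module's gradients are computed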
Returns
a handle that can be used to remove the added hook by calling handle.remove()
Return type
torch.utils.hooks.RemovableHandle
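Continuing the sketch above, the returned handle detaches the global hook when it is no longer needed:

handle.remove()  # subsequent backward passes no longer invoke the hook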