torch.nn.modules.module.register_module_full_backward_hook(hook)[source]
Registers a backward hook common to all the modules.
Warning
This adds global state to the nn.module module and it is only intended for debugging/profiling purposes.

The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> Tensor or None

The grad_input and grad_output are tuples. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments; kwargs will not appear in the hook. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly, the caller will receive a view of each Tensor returned by the Module's forward function.
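As a minimal sketch of this signature in practice, the snippet below registers a global logging hook; the hook name grad_logger and the toy model are illustrative assumptions, not part of the API:

    import torch
    import torch.nn as nn
    from torch.nn.modules.module import register_module_full_backward_hook

    def grad_logger(module, grad_input, grad_output):
        # grad_input and grad_output are tuples; entries are None for
        # non-Tensor arguments. Returning None leaves grad_input unchanged.
        print(f"{module.__class__.__name__}: "
              f"{len(grad_input)} grad_input(s), {len(grad_output)} grad_output(s)")

    handle = register_module_full_backward_hook(grad_logger)

    model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
    x = torch.randn(2, 4, requires_grad=True)
    model(x).sum().backward()  # the hook fires for each module whose output gradients are computed

    handle.remove()  # detach the global hook when done debugging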
Global hooks are called before hooks registered with register_backward_hook.

Returns
    a handle that can be used to remove the added hook by calling handle.remove()

Return type
    torch.utils.hooks.RemovableHandle
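Because the hook can return replacement gradients and the call returns a RemovableHandle, both behaviors can be exercised together. A minimal sketch, assuming a hypothetical halve_input_grads hook and a single toy layer:

    import torch
    import torch.nn as nn
    from torch.nn.modules.module import register_module_full_backward_hook

    def halve_input_grads(module, grad_input, grad_output):
        # Returning a tuple shaped like grad_input replaces grad_input in
        # subsequent computations; non-Tensor (None) entries stay None.
        return tuple(g * 0.5 if g is not None else None for g in grad_input)

    handle = register_module_full_backward_hook(halve_input_grads)

    layer = nn.Linear(3, 3)
    x = torch.ones(1, 3, requires_grad=True)
    layer(x).sum().backward()
    print(x.grad)  # half of the unhooked gradient

    handle.remove()  # the RemovableHandle detaches the global hook
    x.grad = None
    layer(x).sum().backward()
    print(x.grad)  # unmodified gradient again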
https://pytorch.org/docs/2.1/generated/torch.nn.modules.module.register_module_full_backward_hook.html