torch.nn.parallel.data_parallel
torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)
Evaluates module(input) in parallel across the GPUs given in device_ids.
This is the functional version of the DataParallel module.
- Parameters
  - module (Module) – the module to evaluate in parallel
  - inputs (Tensor) – inputs to the module
  - device_ids (list of int or torch.device) – GPU ids on which to replicate module
  - output_device (list of int or torch.device) – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0])
- Returns
  - a Tensor containing the result of module(input) located on output_device
- Return type
  - Tensor
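
A minimal usage sketch, assuming a machine with at least two CUDA devices; the device ids [0, 1] below are illustrative:

```python
import torch
import torch.nn as nn
from torch.nn.parallel import data_parallel

# Module to evaluate in parallel; its parameters must live on device_ids[0].
model = nn.Linear(16, 4).to("cuda:0")

# A batch of inputs; data_parallel scatters it along dim 0 (the default).
x = torch.randn(8, 16, device="cuda:0")

# Replicates model on GPUs 0 and 1, splits x between them, runs the
# forward passes in parallel, and gathers the result on output_device.
out = data_parallel(model, x, device_ids=[0, 1], output_device=0)
print(out.shape)  # torch.Size([8, 4])
```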