torch.cuda.get_allocator_backend
torch.cuda.get_allocator_backend()
[source]
Returns a string describing the active allocator backend as set by PYTORCH_CUDA_ALLOC_CONF. Currently available backends are native (PyTorch's native caching allocator) and cudaMallocAsync (CUDA's built-in asynchronous allocator).

Note
See Memory management for details on choosing the allocator backend.
- Return type: str
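
Example (a minimal sketch; assumes a CUDA-capable PyTorch build, with the backend optionally selected through the PYTORCH_CUDA_ALLOC_CONF environment variable):

    import torch

    if torch.cuda.is_available():
        # Returns "native" (the default caching allocator) or "cudaMallocAsync",
        # depending on PYTORCH_CUDA_ALLOC_CONF (e.g. backend:cudaMallocAsync).
        print(torch.cuda.get_allocator_backend())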