
torch.cuda.max_memory_reserved

torch.cuda.max_memory_reserved(device=None)

Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.

By default, this returns the peak cached memory since the beginning of the program. reset_peak_memory_stats() can be used to reset the starting point for tracking this metric. For example, these two functions can together measure the peak cached memory of each iteration in a training loop, as in the sketch below.
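A minimal sketch of this pattern (the model and training step here are hypothetical; any CUDA workload works the same way):

import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(3):
    # Reset the peak so the statistic covers only this iteration.
    torch.cuda.reset_peak_memory_stats()

    x = torch.randn(64, 1024, device="cuda")
    loss = model(x).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Peak memory reserved by the caching allocator during this iteration.
    peak = torch.cuda.max_memory_reserved()
    print(f"step {step}: peak reserved memory = {peak / 1024**2:.1f} MiB")

Note that max_memory_reserved() reports memory reserved by the caching allocator, which is typically larger than the memory actually occupied by tensors (see max_memory_allocated() for the latter).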

Parameters

device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).

Return type

int

Note

See Memory management for more details about GPU memory management.
