torch.cuda.max_memory_allocated
torch.cuda.max_memory_allocated(device=None)
Returns the maximum GPU memory occupied by tensors in bytes for a given device.
By default, this returns the peak allocated memory since the beginning of this program.
reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak allocated memory usage of each iteration in a training loop, as sketched below.

Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).

Return type

int
Note
See Memory management for more details about GPU memory management.
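
A minimal sketch of measuring per-iteration peak usage, assuming a CUDA device is available (the model, batch, and optimizer below are placeholder choices; only the two torch.cuda calls reflect this API):

    import torch

    assert torch.cuda.is_available()
    device = torch.device("cuda")

    # Placeholder model and optimizer for illustration only.
    model = torch.nn.Linear(1024, 1024).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(3):
        # Restart peak tracking so the reading below covers only this iteration.
        torch.cuda.reset_peak_memory_stats(device)

        x = torch.randn(256, 1024, device=device)
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Peak bytes occupied by tensors since the reset above.
        peak = torch.cuda.max_memory_allocated(device)
        print(f"step {step}: peak allocated = {peak / 1024**2:.1f} MiB")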
 