torch.Storage
torch.Storage is an alias for the storage class that corresponds to the default data type (torch.get_default_dtype()). For instance, if the default data type is torch.float, torch.Storage resolves to torch.FloatStorage.
The torch.<type>Storage and torch.cuda.<type>Storage classes, like torch.FloatStorage, torch.IntStorage, etc., are never actually instantiated. Calling their constructors creates a torch.TypedStorage with the appropriate torch.dtype and torch.device. torch.<type>Storage classes have all of the same class methods that torch.TypedStorage has.
A torch.TypedStorage is a contiguous, one-dimensional array of elements of a particular torch.dtype. It can be given any torch.dtype, and the internal data will be interpreted appropriately. torch.TypedStorage contains a torch.UntypedStorage which holds the data as an untyped array of bytes.
Every strided torch.Tensor contains a torch.TypedStorage, which stores all of the data that the torch.Tensor views.
Warning
All storage classes except for torch.UntypedStorage will be removed in the future, and torch.UntypedStorage will be used in all cases.
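
For example (a minimal sketch; accessing a TypedStorage may emit a deprecation warning, per the note above, and the exact output is indicative):

>>> import torch
>>> t = torch.arange(4, dtype=torch.float32)
>>> s = t.storage()          # the torch.TypedStorage viewed by t
>>> s.dtype, s.size()
(torch.float32, 4)
>>> s.tolist()
[0.0, 1.0, 2.0, 3.0]
>>> s.element_size()         # bytes per float32 element
4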
class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]

bfloat16()[source]
Casts this storage to bfloat16 type

bool()[source]
Casts this storage to bool type

byte()[source]
Casts this storage to byte type

char()[source]
Casts this storage to char type

clone()[source]
Returns a copy of this storage

complex_double()[source]
Casts this storage to complex double type

complex_float()[source]
Casts this storage to complex float type

copy_(source, non_blocking=None)[source]

cpu()[source]
Returns a CPU copy of this storage if it's not already on the CPU
 
cuda(device=None, non_blocking=False, **kwargs)[source]
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
- device (int) – The destination GPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

Return type
T
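
Example (a minimal sketch; assumes a CUDA device is available):

>>> import torch
>>> s = torch.TypedStorage(4, dtype=torch.float32)
>>> s.device
device(type='cpu')
>>> s_cuda = s.cuda()        # copy to the current CUDA device
>>> s_cuda.device
device(type='cuda', index=0)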
 
 
data_ptr()[source]
property device
double()[source]
Casts this storage to double type

dtype: dtype
element_size()[source]
fill_(value)[source]

float()[source]
Casts this storage to float type
 
classmethod from_buffer(*args, **kwargs)[source]

classmethod from_file(filename, shared=False, size=0) → Storage[source]
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file. size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.
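
Example (a minimal sketch; 'weights.bin' is a hypothetical file name, and with shared=True it is created if it does not exist):

>>> import torch
>>> s = torch.FloatStorage.from_file('weights.bin', shared=True, size=10)
>>> s.size()
10
>>> s[0] = 1.0    # with shared=True, the write is reflected in the file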
half()[source]
Casts this storage to half type
 
hpu(device=None, non_blocking=False, **kwargs)[source]
Returns a copy of this object in HPU memory.
If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
- device (int) – The destination HPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.

Return type
T
 
 
int()[source]
Casts this storage to int type
 
property is_cuda
property is_hpu
is_pinned(device='cuda')[source]
Determine whether the CPU TypedStorage is already pinned on device.

Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns
A boolean variable.
 
 
is_sparse = False
long()[source]
Casts this storage to long type
 
nbytes()[source]
pickle_storage_type()[source]
pin_memory(device='cuda')[source]
Copies the CPU TypedStorage to pinned memory, if it's not already pinned.

Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns
A pinned CPU storage.
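
Example (a minimal sketch; pinning requires a working CUDA runtime):

>>> import torch
>>> s = torch.TypedStorage(1024, dtype=torch.float32)
>>> s.is_pinned()
False
>>> s_pinned = s.pin_memory()    # page-locked copy; enables asynchronous host-to-GPU transfers
>>> s_pinned.is_pinned()
True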
 
 
resize_(size)[source]

share_memory_()[source]
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
Returns: self
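
Example (a minimal sketch; is_shared(), used here to check the result, is not listed on this page but is assumed to be available on storages):

>>> import torch
>>> s = torch.TypedStorage(4, dtype=torch.float32)
>>> s.share_memory_() is s       # moves the data to shared memory and returns self
True
>>> s.is_shared()
True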
 
short()[source]
Casts this storage to short type
 
size()[source]
tolist()[source]
Returns a list containing the elements of this storage
 
type(dtype=None, non_blocking=False)[source]
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
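
Example (a minimal sketch; the exact string returned depends on the storage's dtype and device):

>>> import torch
>>> s = torch.TypedStorage(3, dtype=torch.float32)
>>> s.type()                 # no dtype given: report the type as a string
'torch.FloatStorage'
>>> s.double().dtype         # the cast methods above return a new storage instead
torch.float64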
 
 
untyped()[source]
Returns the internal torch.UntypedStorage
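
Example (a minimal sketch showing that the typed and untyped views share the same memory):

>>> import torch
>>> typed = torch.tensor([1.0, 2.0]).storage()   # torch.TypedStorage with dtype torch.float32
>>> raw = typed.untyped()                        # the underlying torch.UntypedStorage
>>> raw.nbytes()                                 # 2 elements * 4 bytes each
8
>>> raw.data_ptr() == typed.data_ptr()           # same buffer, no copy
True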
 
class torch.UntypedStorage(*args, **kwargs)[source]

bfloat16()
Casts this storage to bfloat16 type

bool()
Casts this storage to bool type

byte()
Casts this storage to byte type

byteswap(dtype)
Swaps bytes in underlying data

char()
Casts this storage to char type

clone()
Returns a copy of this storage

complex_double()
Casts this storage to complex double type

complex_float()
Casts this storage to complex float type

copy_()

cpu()
Returns a CPU copy of this storage if it's not already on the CPU
 
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
- device (int) – The destination GPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
 
 
data_ptr()
device: device
double()
Casts this storage to double type

element_size()
fill_()

float()
Casts this storage to float type
 
static from_buffer()

static from_file(filename, shared=False, size=0) → Storage
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file. size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.

get_device()
 
half()
Casts this storage to half type

hpu(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in HPU memory.
If this object is already in HPU memory and on the correct device, then no copy is performed and the original object is returned.

Parameters
- device (int) – The destination HPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
 
 
int()
Casts this storage to int type
 
property is_cuda
property is_hpu
is_pinned(device='cuda')
Determine whether the CPU storage is already pinned on device.

Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns
A boolean variable.
 
 
is_sparse: bool = False
is_sparse_csr: bool = False
long()
Casts this storage to long type

mps()
Returns an MPS copy of this storage if it's not already on the MPS device
 
nbytes()
new()
pin_memory(device='cuda')
Copies the CPU storage to pinned memory, if it's not already pinned.

Parameters
device (str or torch.device) – The device to pin memory on. Default: 'cuda'.

Returns
A pinned CPU storage.
 
 
resize_()
short()
Casts this storage to short type

size()

tolist()
Returns a list containing the elements of this storage
 
type(dtype=None, non_blocking=False, **kwargs)
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
 
 
untyped()
 
class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.float64[source]

class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.float32[source]

class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.float16[source]

class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int64[source]

class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int32[source]

class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int16[source]

class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.int8[source]

class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.uint8[source]

class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.bool[source]

class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.bfloat16[source]

class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.complex128[source]

class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.complex64[source]

class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.quint8[source]

class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.qint8[source]

class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.qint32[source]

class torch.QUInt4x2Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.quint4x2[source]

class torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)[source]
dtype: dtype = torch.quint2x4[source]
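
For example (a minimal sketch; per the note at the top of this page, a legacy constructor hands back a torch.TypedStorage rather than an instance of the named class):

>>> import torch
>>> s = torch.ComplexFloatStorage(2)
>>> isinstance(s, torch.TypedStorage)
True
>>> s.dtype
torch.complex64
>>> s.device
device(type='cpu')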
 