torch.Storage

torch.Storage is an alias for the storage class that corresponds with the default data type (torch.get_default_dtype()). For instance, if the default data type is torch.float, torch.Storage resolves to torch.FloatStorage.
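A minimal sketch of the alias, assuming the stock default dtype of torch.float32 (so torch.Storage(4) and torch.FloatStorage(4) should behave the same here):

import torch

# With the stock default dtype, torch.Storage behaves like torch.FloatStorage.
print(torch.get_default_dtype())   # torch.float32
s = torch.Storage(4)               # storage for 4 elements of the default dtype
print(s.dtype)                     # torch.float32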
The torch.<type>Storage and torch.cuda.<type>Storage classes, like torch.FloatStorage, torch.IntStorage, etc., are never actually instantiated. Calling their constructors creates a torch.TypedStorage with the appropriate torch.dtype and torch.device. The torch.<type>Storage classes have all of the same class methods that torch.TypedStorage has.
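For example, the sketch below constructs a legacy class directly; per the note above, the result should be a torch.TypedStorage with the matching dtype and device:

import torch

s = torch.FloatStorage(3)                  # legacy constructor
print(isinstance(s, torch.TypedStorage))   # True
print(s.dtype)                             # torch.float32
print(s.device)                            # cpu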
A torch.TypedStorage is a contiguous, one-dimensional array of elements of a particular torch.dtype. It can be given any torch.dtype, and the internal data will be interpreted appropriately. A torch.TypedStorage contains a torch.UntypedStorage, which holds the data as an untyped array of bytes.
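A minimal sketch of the typed/untyped relationship; it assumes the documented wrap_storage and dtype arguments can be used to view raw bytes as a particular dtype:

import torch

u = torch.UntypedStorage(12)                               # 12 raw bytes
t = torch.TypedStorage(wrap_storage=u, dtype=torch.int32)  # view the bytes as int32
t.fill_(7)
print(t.size())              # 3 elements (12 bytes / 4 bytes per int32)
print(t.untyped().nbytes())  # 12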
Every strided torch.Tensor contains a torch.TypedStorage, which stores all of the data that the torch.Tensor views.
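For instance, two views of the same tensor share one flat storage:

import torch

x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
y = x.view(3, 2)                                          # a different view of the same data

s = x.storage()                                           # the tensor's TypedStorage
print(isinstance(s, torch.TypedStorage))                  # True
print(s.size())                                           # 6: storage is flat, regardless of view shape
print(x.storage().data_ptr() == y.storage().data_ptr())   # True: both views share the same data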
class torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None)
bfloat16()
Casts this storage to bfloat16 type
bool()
Casts this storage to bool type
byte()
Casts this storage to byte type
char()
Casts this storage to char type
clone()
Returns a copy of this storage
complex_double()
Casts this storage to complex double type
complex_float()
Casts this storage to complex float type
copy_(source, non_blocking=None)
cpu()
Returns a CPU copy of this storage if it’s not already on the CPU
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
Parameters:
- device (int) – The destination GPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
Return type: T
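A short sketch of a round trip between host and device; it only does anything when a CUDA device is actually available:

import torch

if torch.cuda.is_available():
    s = torch.FloatStorage(4).cuda()   # copy the CPU storage to the current GPU
    print(s.device)                    # e.g. cuda:0
    s_cpu = s.cpu()                    # bring a copy back to host memory
    print(s_cpu.device)                # cpu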
data_ptr()
property device
double()
Casts this storage to double type
dtype: dtype
element_size()
fill_(value)
float()
Casts this storage to float type
classmethod from_buffer(*args, dtype=None, device=None, **kwargs)
classmethod from_file(filename, shared=False, size=0) → Storage
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True, the file will be created if needed.
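A minimal sketch of a shared, file-backed storage; the temporary path is purely illustrative:

import os
import tempfile
import torch

path = os.path.join(tempfile.mkdtemp(), "storage.bin")        # illustrative file name
s = torch.FloatStorage.from_file(path, shared=True, size=8)   # file is created because shared=True
s.fill_(1.0)                                                  # changes are written through to the file
print(s.size())                # 8
print(os.path.getsize(path))   # 32: 8 float32 elements * 4 bytes each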
half()
Casts this storage to half type
int()
Casts this storage to int type
property is_cuda
is_pinned()
is_sparse = False
long()
Casts this storage to long type
nbytes()
pickle_storage_type()
pin_memory()
Copies the storage to pinned memory, if it’s not already pinned.
resize_(size)
share_memory_()
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
Returns: self
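A minimal sketch; sharing is typically used so that tensors backed by the storage can be passed between processes (e.g. with torch.multiprocessing) without copying:

import torch

s = torch.FloatStorage(16)
shared = s.share_memory_()   # moves the data into shared memory; returns the storage itself
print(shared is s)           # True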
short()
Casts this storage to short type
size()
tolist()
Returns a list containing the elements of this storage
type(dtype=None, non_blocking=False)
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
Parameters:
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
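For example, calling type() with no dtype reports the storage's type, while the cast helpers above perform conversions:

import torch

s = torch.IntStorage(3)
print(s.type())    # no dtype given: returns the type string, e.g. 'torch.IntStorage'
d = s.double()     # cast to double via the helper documented above
print(d.dtype)     # torch.float64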
untyped()
Returns the internal torch.UntypedStorage
class torch.UntypedStorage(*args, **kwargs)
bfloat16()
Casts this storage to bfloat16 type
bool()
Casts this storage to bool type
byte()
Casts this storage to byte type
char()
Casts this storage to char type
clone()
Returns a copy of this storage
complex_double()
Casts this storage to complex double type
complex_float()
Casts this storage to complex float type
copy_()
cpu()
Returns a CPU copy of this storage if it’s not already on the CPU
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
Parameters:
- device (int) – The destination GPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
data_ptr()
device: device
double()
Casts this storage to double type
element_size()
fill_()
float()
Casts this storage to float type
static from_buffer()
static from_file(filename, shared=False, size=0) → Storage
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True, the file will be created if needed.
get_device()
Return type: int
half()
Casts this storage to half type
int()
Casts this storage to int type
property is_cuda
is_pinned()
is_sparse: bool = False
is_sparse_csr: bool = False
long()
Casts this storage to long type
mps()
Returns an MPS copy of this storage if it’s not already on MPS
nbytes()
new()
pin_memory()
Copies the storage to pinned memory, if it’s not already pinned.
resize_()
share_memory_()
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
Returns: self
short()
Casts this storage to short type
size()
Return type: int
tolist()
Returns a list containing the elements of this storage
type(dtype=None, non_blocking=False, **kwargs)
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
Parameters:
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
untyped()
class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.float64
class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.float32
class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.float16
class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.int64
class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.int32
class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.int16
class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.int8
class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.uint8
class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.bool
class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.bfloat16
class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.complex128
class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.complex64
class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.quint8
class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.qint8
class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: dtype = torch.qint32