torch.Storage

A torch._TypedStorage is a contiguous, one-dimensional array of elements of a particular torch.dtype. It can be given any torch.dtype, and the internal data will be interpreted appropriately.

Every strided torch.Tensor contains a torch._TypedStorage, which stores all of the data that the torch.Tensor views.
For backward compatibility, there are also torch.<type>Storage classes (like torch.FloatStorage, torch.IntStorage, etc.). These classes are not actually instantiated, and calling their constructors creates a torch._TypedStorage with the appropriate torch.dtype. torch.<type>Storage classes have all of the same class methods that torch._TypedStorage has.
Also for backward compatibility, torch.Storage is an alias for the storage class that corresponds with the default data type (torch.get_default_dtype()). For instance, if the default data type is torch.float, torch.Storage resolves to torch.FloatStorage.
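A minimal sketch of the relationship between tensors, the legacy classes, and torch._TypedStorage (assuming the default data type is torch.float32):

    >>> import torch
    >>> s = torch.FloatStorage([1.0, 2.0, 3.0, 4.0])  # constructor creates a _TypedStorage with dtype float32
    >>> s.dtype
    torch.float32
    >>> t = torch.tensor([1.0, 2.0, 3.0, 4.0])
    >>> t.storage().dtype  # the tensor's underlying storage has the same dtype
    torch.float32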
class torch._TypedStorage(*args, wrap_storage=None, dtype=None, device=None)
cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters:
- device (int) – The destination GPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
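For example, a minimal sketch of copying a storage to the current CUDA device (assumes a CUDA-capable GPU is available):

    >>> s = torch.FloatStorage([1.0, 2.0, 3.0])
    >>> s_cuda = s.cuda()        # copy to the current CUDA device
    >>> s_cuda.is_cuda
    True
    >>> s_cuda0 = s.cuda(device=0)  # explicitly target GPU 0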
property device

dtype: torch.dtype
classmethod from_file(filename, shared=False, size=0) → Storage
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then changes to the storage do not affect the file.
size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of the storage). If shared is True, the file will be created if needed.
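For example, a minimal sketch that maps a shared storage onto a file (the path /tmp/example_storage.bin is only an illustration; the file is created because shared is True):

    >>> s = torch.ShortStorage.from_file('/tmp/example_storage.bin', shared=True, size=10)
    >>> len(s)
    10
    >>> s[0] = 7    # with shared=True, the change is written through to the file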
property is_cuda

is_sparse = False
share_memory_()
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
Returns: self
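A minimal usage sketch (the return value is the storage itself, as described above):

    >>> s = torch.DoubleStorage([1.0, 2.0])
    >>> s.share_memory_() is s   # moved to shared memory; self is returned
    True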
type(dtype=None, non_blocking=False, **kwargs)
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.

Parameters:
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and the destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
class torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.float64

class torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.float32

class torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.float16

class torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.int64

class torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.int32

class torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.int16

class torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.int8

class torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.uint8

class torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.bool

class torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.bfloat16

class torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.complex128

class torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.complex64

class torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.quint8

class torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.qint8

class torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.qint32

class torch.QUInt4x2Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.quint4x2

class torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None)
dtype: torch.dtype = torch.quint2x4
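Each of these legacy classes fixes a single dtype, and constructing one yields a torch._TypedStorage of that dtype. A minimal sketch using torch.ByteStorage:

    >>> torch.ByteStorage.dtype
    torch.uint8
    >>> s = torch.ByteStorage(4)   # uninitialized storage holding 4 uint8 elements
    >>> s.dtype
    torch.uint8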