PackedSequence
class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]
Holds the data and list of batch_sizes of a packed sequence.

All RNN modules accept packed sequences as inputs.
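Because RNN modules consume a PackedSequence directly, the usual round trip is pack, run the module, then unpack. A minimal sketch (the GRU sizes and sequence lengths here are arbitrary illustrations, not part of the API):

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Two sequences padded to length 5; the second is really length 3.
rnn = nn.GRU(input_size=4, hidden_size=8, batch_first=True)
x = torch.randn(2, 5, 4)
packed = pack_padded_sequence(x, lengths=[5, 3], batch_first=True)

out_packed, h = rnn(packed)            # the RNN accepts the PackedSequence as-is
out, lengths = pad_packed_sequence(out_packed, batch_first=True)
print(out.shape)                       # torch.Size([2, 5, 8])
```

Packing this way lets the RNN skip computation on padding positions entirely.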
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence().

Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1].

Variables
- data (Tensor) – Tensor containing packed sequence
- batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
- sorted_indices (Tensor, optional) – Tensor of integers holding how this PackedSequence is constructed from sequences.
- unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences with correct order.
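The abc/x example above can be reproduced numerically; the integer token values below are arbitrary stand-ins for a, b, c, and x:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# "abc" -> [1, 2, 3] and "x" -> [4], padded to a common length of 3.
padded = torch.tensor([[1, 2, 3],
                       [4, 0, 0]])
packed = pack_padded_sequence(padded, lengths=[3, 1], batch_first=True)

print(packed.data)         # tensor([1, 4, 2, 3])  i.e. a, x, b, c
print(packed.batch_sizes)  # tensor([2, 1, 1])
```

Step 0 holds two elements (a and x), while steps 1 and 2 hold only one each, which is exactly what batch_sizes=[2,1,1] records.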
Note
data can be on an arbitrary device and of an arbitrary dtype. sorted_indices and unsorted_indices must be torch.int64 tensors on the same device as data. However, batch_sizes should always be a CPU torch.int64 tensor.

This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).

batch_sizes: Tensor
Alias for field number 1
count(value, /)
Return number of occurrences of value.
data: Tensor
Alias for field number 0
index(value, start=0, stop=9223372036854775807, /)
Return first index of value.
Raises ValueError if the value is not present.
property is_cuda
Returns true if self.data is stored on a GPU.
is_pinned()[source]
Returns true if self.data is stored in pinned memory.
sorted_indices: Optional[Tensor]
Alias for field number 2
to(*args, **kwargs)[source]
Performs dtype and/or device conversion on self.data.

It has a similar signature to torch.Tensor.to(), except that optional arguments like non_blocking and copy should be passed as kwargs, not args, or they will not apply to the index tensors.

Note
If the self.data Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, returns a copy with the desired configuration.
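A short sketch of the conversion behavior, also illustrating the CPU/int64 invariant on batch_sizes noted earlier (the input sizes are arbitrary):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Time-major input of shape (T=3, B=2); sequence lengths 3 and 1.
packed = pack_padded_sequence(torch.randn(3, 2), lengths=[3, 1])
converted = packed.to(torch.float64)   # converts self.data

print(converted.data.dtype)            # torch.float64
print(converted.batch_sizes.dtype)     # torch.int64, still on CPU
```

Only data (and, when present, the index tensors' device) is affected; batch_sizes remains a CPU torch.int64 tensor regardless of the requested dtype or device.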
unsorted_indices: Optional[Tensor]
Alias for field number 3
© 2024, PyTorch Contributors
PyTorch has a BSD-style license, as found in the LICENSE file.
https://pytorch.org/docs/2.1/generated/torch.nn.utils.rnn.PackedSequence.html