GRU
class torch.nn.GRU(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, device=None, dtype=None)[source]
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:

$$
\begin{array}{ll}
r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{t-1} + b_{hr}) \\
z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{t-1} + b_{hz}) \\
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{t-1} + b_{hn})) \\
h_t = (1 - z_t) \odot n_t + z_t \odot h_{t-1}
\end{array}
$$

where $h_t$ is the hidden state at time $t$, $x_t$ is the input at time $t$, $h_{t-1}$ is the hidden state of the layer at time $t-1$ or the initial hidden state at time $0$, and $r_t$, $z_t$, $n_t$ are the reset, update, and new gates, respectively. $\sigma$ is the sigmoid function, and $\odot$ is the Hadamard product.

In a multilayer GRU, the input $x^{(l)}_t$ of the $l$-th layer ($l \ge 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by dropout $\delta^{(l-1)}_t$, where each $\delta^{(l-1)}_t$ is a Bernoulli random variable which is $0$ with probability `dropout`.
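The following is a minimal sketch, not PyTorch's internal implementation, of computing one time step directly from these equations. It assumes the documented `(W_ir|W_iz|W_in)` stacking order of `nn.GRUCell`'s weights; all sizes are illustrative:

```python
import torch
import torch.nn as nn

def gru_step(x_t, h_prev, cell):
    # weight_ih: (3*hidden_size, input_size), stacked as (W_ir|W_iz|W_in)
    W_ir, W_iz, W_in = cell.weight_ih.chunk(3, dim=0)
    W_hr, W_hz, W_hn = cell.weight_hh.chunk(3, dim=0)
    b_ir, b_iz, b_in = cell.bias_ih.chunk(3, dim=0)
    b_hr, b_hz, b_hn = cell.bias_hh.chunk(3, dim=0)

    r_t = torch.sigmoid(x_t @ W_ir.T + b_ir + h_prev @ W_hr.T + b_hr)
    z_t = torch.sigmoid(x_t @ W_iz.T + b_iz + h_prev @ W_hz.T + b_hz)
    # note: r_t gates (W_hn h_{t-1} + b_hn), not h_{t-1} itself (see the note below)
    n_t = torch.tanh(x_t @ W_in.T + b_in + r_t * (h_prev @ W_hn.T + b_hn))
    return (1 - z_t) * n_t + z_t * h_prev

cell = nn.GRUCell(10, 20)
x, h = torch.randn(3, 10), torch.randn(3, 20)
assert torch.allclose(gru_step(x, h, cell), cell(x, h), atol=1e-6)
```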
Parameters
- input_size – The number of expected features in the input `x`
- hidden_size – The number of features in the hidden state `h`
- num_layers – Number of recurrent layers. E.g., setting `num_layers=2` would mean stacking two GRUs together to form a *stacked GRU*, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
- bias – If `False`, then the layer does not use bias weights `b_ih` and `b_hh`. Default: `True`
- batch_first – If `True`, then the input and output tensors are provided as `(batch, seq, feature)` instead of `(seq, batch, feature)`. Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details. Default: `False`
- dropout – If non-zero, introduces a `Dropout` layer on the outputs of each GRU layer except the last layer, with dropout probability equal to `dropout`. Default: 0
- bidirectional – If `True`, becomes a bidirectional GRU. Default: `False`. (A shape sketch illustrating `batch_first` and `bidirectional` follows this list.)
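A short illustrative sketch, not from the original page, of how the `batch_first` and `bidirectional` flags change the expected tensor shapes; all sizes are arbitrary:

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 5, 3, 10, 20

# batch_first=True expects and returns (batch, seq, feature) tensors
gru_bf = nn.GRU(input_size, hidden_size, batch_first=True)
out, h_n = gru_bf(torch.randn(batch, seq_len, input_size))
print(out.shape)   # torch.Size([3, 5, 20])  -> (N, L, H_out)

# bidirectional=True doubles the output feature dimension (D = 2)
gru_bi = nn.GRU(input_size, hidden_size, num_layers=2, bidirectional=True)
out, h_n = gru_bi(torch.randn(seq_len, batch, input_size))
print(out.shape)   # torch.Size([5, 3, 40])  -> (L, N, D * H_out)
print(h_n.shape)   # torch.Size([4, 3, 20])  -> (D * num_layers, N, H_out)
```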
Inputs: input, h_0

- input: tensor of shape $(L, H_{in})$ for unbatched input, $(L, N, H_{in})$ when `batch_first=False` or $(N, L, H_{in})$ when `batch_first=True`, containing the features of the input sequence. The input can also be a packed variable-length sequence. See `torch.nn.utils.rnn.pack_padded_sequence()` or `torch.nn.utils.rnn.pack_sequence()` for details. (A packed-input sketch follows this section.)
- h_0: tensor of shape $(D * \text{num\_layers}, H_{out})$ for unbatched input or $(D * \text{num\_layers}, N, H_{out})$ containing the initial hidden state for the input sequence. Defaults to zeros if not provided.

where:

$$
\begin{aligned}
N &= \text{batch size} \\
L &= \text{sequence length} \\
D &= 2 \text{ if bidirectional=True otherwise } 1 \\
H_{in} &= \text{input\_size} \\
H_{out} &= \text{hidden\_size}
\end{aligned}
$$
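A hedged sketch of feeding a padded batch as a packed variable-length sequence; the sizes and per-sequence lengths are assumptions for illustration:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

gru = nn.GRU(input_size=10, hidden_size=20)
padded = torch.randn(5, 3, 10)      # (L, N, H_in), padded to length 5
lengths = torch.tensor([5, 3, 2])   # true length of each sequence

# by default, lengths must be sorted in decreasing order (enforce_sorted=True)
packed = pack_padded_sequence(padded, lengths)
packed_out, h_n = gru(packed)       # output is also a PackedSequence
out, out_lengths = pad_packed_sequence(packed_out)
print(out.shape, h_n.shape)         # torch.Size([5, 3, 20]) torch.Size([1, 3, 20])
```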
Outputs: output, h_n

- output: tensor of shape $(L, D * H_{out})$ for unbatched input, $(L, N, D * H_{out})$ when `batch_first=False` or $(N, L, D * H_{out})$ when `batch_first=True`, containing the output features $(h_t)$ from the last layer of the GRU, for each $t$. If a `torch.nn.utils.rnn.PackedSequence` has been given as the input, the output will also be a packed sequence.
- h_n: tensor of shape $(D * \text{num\_layers}, H_{out})$ for unbatched input or $(D * \text{num\_layers}, N, H_{out})$ containing the final hidden state for the input sequence. (A sketch relating `output` and `h_n` follows.)
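Not from the original page: a small check, assuming an unpacked, unidirectional input, that the last entry of `h_n` matches the final output step of the last layer:

```python
import torch
import torch.nn as nn

# For a unidirectional GRU, the last layer's final hidden state should equal
# the output at the last time step.
gru = nn.GRU(10, 20, num_layers=2)
output, h_n = gru(torch.randn(5, 3, 10))
assert torch.allclose(output[-1], h_n[-1])   # both are (N, H_out) slices
```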
Variables

- weight_ih_l[k] – the learnable input-hidden weights of the $k^\text{th}$ layer (W_ir|W_iz|W_in), of shape `(3*hidden_size, input_size)` for `k = 0`. Otherwise, the shape is `(3*hidden_size, num_directions * hidden_size)`
- weight_hh_l[k] – the learnable hidden-hidden weights of the $k^\text{th}$ layer (W_hr|W_hz|W_hn), of shape `(3*hidden_size, hidden_size)`
- bias_ih_l[k] – the learnable input-hidden bias of the $k^\text{th}$ layer (b_ir|b_iz|b_in), of shape `(3*hidden_size)`
- bias_hh_l[k] – the learnable hidden-hidden bias of the $k^\text{th}$ layer (b_hr|b_hz|b_hn), of shape `(3*hidden_size)`. (A shape-inspection sketch follows this list.)
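These shapes can be verified directly on a module instance; an illustrative check with arbitrary sizes:

```python
import torch.nn as nn

gru = nn.GRU(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
print(gru.weight_ih_l0.shape)          # torch.Size([60, 10])  -> (3*hidden_size, input_size)
print(gru.weight_ih_l1.shape)          # torch.Size([60, 40])  -> (3*hidden_size, num_directions * hidden_size)
print(gru.weight_hh_l0.shape)          # torch.Size([60, 20])  -> (3*hidden_size, hidden_size)
print(gru.bias_ih_l0.shape)            # torch.Size([60])      -> (3*hidden_size)
print(gru.weight_ih_l0_reverse.shape)  # torch.Size([60, 10])  reverse-direction weights
```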
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Note
For bidirectional GRUs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when `batch_first=False`: `output.view(seq_len, batch, num_directions, hidden_size)`.
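A brief sketch, with illustrative sizes, applying that view to separate the two directions:

```python
import torch
import torch.nn as nn

seq_len, batch, hidden_size, num_directions = 5, 3, 20, 2
gru = nn.GRU(10, hidden_size, bidirectional=True)
output, _ = gru(torch.randn(seq_len, batch, 10))

# split the concatenated feature dimension into (direction, hidden)
split = output.view(seq_len, batch, num_directions, hidden_size)
forward, backward = split[..., 0, :], split[..., 1, :]
print(forward.shape)   # torch.Size([5, 3, 20])
```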
Note

The `batch_first` argument is ignored for unbatched inputs.
The calculation of the new gate $n_t$ subtly differs from the original paper and other frameworks. In the original implementation, the Hadamard product $(\odot)$ between $r_t$ and the previous hidden state $h_{t-1}$ is done before the multiplication with the weight matrix `W` and addition of bias:

$$
n_t = \tanh(W_{in} x_t + b_{in} + W_{hn} (r_t \odot h_{t-1}) + b_{hn})
$$

This is in contrast to the PyTorch implementation, which is done after $W_{hn} h_{t-1}$:

$$
n_t = \tanh(W_{in} x_t + b_{in} + r_t \odot (W_{hn} h_{t-1} + b_{hn}))
$$
This implementation differs on purpose for efficiency.
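A small numeric sketch, using random stand-in weights rather than a GRU module, contrasting the two formulations; in general they produce different values:

```python
import torch

torch.manual_seed(0)
x, h = torch.randn(4, 10), torch.randn(4, 20)
W_in, W_hn = torch.randn(20, 10), torch.randn(20, 20)
b_in, b_hn = torch.randn(20), torch.randn(20)
r = torch.rand(4, 20)  # stand-in reset-gate values in (0, 1)

# original paper: r gates h_{t-1} before the matrix multiply
n_paper = torch.tanh(x @ W_in.T + b_in + (r * h) @ W_hn.T + b_hn)
# PyTorch: r gates the already-transformed (W_hn h_{t-1} + b_hn)
n_torch = torch.tanh(x @ W_in.T + b_in + r * (h @ W_hn.T + b_hn))
print(torch.allclose(n_paper, n_torch))  # False: the formulations differ
```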
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype `torch.float16`, 4) a V100 GPU is used, 5) input data is not in `PackedSequence` format, the persistent algorithm can be selected to improve performance.

Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)