
RNN

class torch.nn.RNN(input_size, hidden_size, num_layers=1, nonlinearity='tanh', bias=True, batch_first=False, dropout=0.0, bidirectional=False, device=None, dtype=None) [source]

Applies a multi-layer Elman RNN with $\tanh$ or $\text{ReLU}$ non-linearity to an input sequence.

For each element in the input sequence, each layer computes the following function:

$$h_t = \tanh(x_t W_{ih}^T + b_{ih} + h_{t-1} W_{hh}^T + b_{hh})$$

where $h_t$ is the hidden state at time $t$, $x_t$ is the input at time $t$, and $h_{t-1}$ is the hidden state of the previous layer at time $t-1$ or the initial hidden state at time $0$. If nonlinearity is 'relu', then $\text{ReLU}$ is used instead of $\tanh$.
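As a quick sanity check, this recurrence can be replayed by hand using a module's own weights. The following is a minimal sketch; the local names (W_ih, h_t, and so on) are chosen here for illustration only:

import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=4, hidden_size=3)  # single layer, tanh non-linearity

x = torch.randn(5, 1, 4)    # (L, N, H_in): length-5 sequence, batch of 1
h0 = torch.zeros(1, 1, 3)   # (num_layers, N, H_out)
output, h_n = rnn(x, h0)

# Replay h_t = tanh(x_t W_ih^T + b_ih + h_{t-1} W_hh^T + b_hh) step by step.
W_ih, W_hh = rnn.weight_ih_l0, rnn.weight_hh_l0
b_ih, b_hh = rnn.bias_ih_l0, rnn.bias_hh_l0
h_t = h0[0]
for t in range(x.size(0)):
    h_t = torch.tanh(x[t] @ W_ih.T + b_ih + h_t @ W_hh.T + b_hh)

print(torch.allclose(h_t, h_n[0], atol=1e-6))  # True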

Parameters
  • input_size – The number of expected features in the input x
  • hidden_size – The number of features in the hidden state h
  • num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
  • nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
  • bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details, and the combined sketch after this list. Default: False
  • dropout – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
  • bidirectional – If True, becomes a bidirectional RNN. Default: False
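
Putting the constructor arguments together, here is a minimal sketch of a stacked, bidirectional, batch-first configuration; the sizes are arbitrary:

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2,
             nonlinearity='relu', batch_first=True,
             dropout=0.2, bidirectional=True)

x = torch.randn(8, 5, 10)   # (N, L, H_in) because batch_first=True
output, h_n = rnn(x)        # h_0 defaults to zeros
print(output.shape)         # torch.Size([8, 5, 40])  i.e. (N, L, D * H_out)
print(h_n.shape)            # torch.Size([4, 8, 20])  i.e. (D * num_layers, N, H_out)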
Inputs: input, h_0
  • input: tensor of shape $(L, H_{in})$ for unbatched input, $(L, N, H_{in})$ when batch_first=False or $(N, L, H_{in})$ when batch_first=True containing the features of the input sequence. The input can also be a packed variable-length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details; a packing sketch follows the Outputs list below.
  • h_0: tensor of shape $(D * \text{num\_layers}, H_{out})$ for unbatched input or $(D * \text{num\_layers}, N, H_{out})$ containing the initial hidden state for the input sequence batch. Defaults to zeros if not provided.

where:

$$\begin{aligned}
N &= \text{batch size} \\
L &= \text{sequence length} \\
D &= 2 \text{ if bidirectional=True otherwise } 1 \\
H_{in} &= \text{input\_size} \\
H_{out} &= \text{hidden\_size}
\end{aligned}$$
Outputs: output, h_n
  • output: tensor of shape $(L, D * H_{out})$ for unbatched input, $(L, N, D * H_{out})$ when batch_first=False or $(N, L, D * H_{out})$ when batch_first=True containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
  • h_n: tensor of shape $(D * \text{num\_layers}, H_{out})$ for unbatched input or $(D * \text{num\_layers}, N, H_{out})$ containing the final hidden state for each element in the batch.
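
Since both the input and the output can be PackedSequence objects, a round trip through the packing utilities looks roughly like this (a minimal sketch with made-up sizes; lengths must be sorted in descending order when enforce_sorted=True):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

rnn = nn.RNN(input_size=4, hidden_size=3, batch_first=True)

# Two sequences of lengths 5 and 3, zero-padded to a common length.
padded = torch.randn(2, 5, 4)
lengths = torch.tensor([5, 3])

packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=True)
packed_out, h_n = rnn(packed)                  # output is also packed
output, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(output.shape, out_lengths)               # torch.Size([2, 5, 3]) tensor([5, 3])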
Variables
  • weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)
  • weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size, hidden_size)
  • bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
  • bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
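
These per-layer parameters are exposed as attributes following the naming scheme above. For example (sizes arbitrary; note that bidirectional layers additionally carry a _reverse suffix for the backward direction):

import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)

print(rnn.weight_ih_l0.shape)          # torch.Size([20, 10])  k = 0
print(rnn.weight_ih_l1.shape)          # torch.Size([20, 40])  k > 0: (hidden_size, num_directions * hidden_size)
print(rnn.weight_hh_l0.shape)          # torch.Size([20, 20])
print(rnn.bias_ih_l0.shape)            # torch.Size([20])
print(rnn.weight_ih_l0_reverse.shape)  # torch.Size([20, 10])  backward direction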

Note

All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{1}{\text{hidden\_size}}$.
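
A quick empirical check of this initialization, as a sketch:

import math
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20)
bound = math.sqrt(1 / rnn.hidden_size)  # sqrt(k) with k = 1 / hidden_size
within = all(p.min().item() >= -bound and p.max().item() <= bound
             for p in rnn.parameters())
print(within)  # True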

Note

For bidirectional RNNs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view(seq_len, batch, num_directions, hidden_size).
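
For instance, a minimal sketch of that split, with arbitrarily chosen shapes:

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=20, bidirectional=True)
x = torch.randn(7, 3, 10)         # (L, N, H_in) with batch_first=False
output, _ = rnn(x)                # (L, N, 2 * H_out)

both = output.view(7, 3, 2, 20)   # (seq_len, batch, num_directions, hidden_size)
forward, backward = both[..., 0, :], both[..., 1, :]
print(forward.shape)              # torch.Size([7, 3, 20])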

Note

The batch_first argument is ignored for unbatched inputs.

Warning

There are known non-determinism issues for RNN functions on some versions of cuDNN and CUDA. You can enforce deterministic behavior by setting the following environment variables:

On CUDA 10.1, set environment variable CUDA_LAUNCH_BLOCKING=1. This may affect performance.

On CUDA 10.2 or later, set environment variable (note the leading colon symbol) CUBLAS_WORKSPACE_CONFIG=:16:8 or CUBLAS_WORKSPACE_CONFIG=:4096:2.

See the cuDNN 8 Release Notes for more information.
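
One way to set the variable from Python, assuming it is exported before CUDA is initialized (setting it in the shell before launching the process is the safer route). The torch.use_deterministic_algorithms() call is a related, optional knob not required by this warning:

import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:2"  # must happen before CUDA init

import torch
torch.use_deterministic_algorithms(True)  # optionally enforce determinism globally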

Note

If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, and 5) input data is not in PackedSequence format, then the persistent algorithm can be selected to improve performance.

Examples:

>>> import torch
>>> import torch.nn as nn
>>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
