
Dropout1d

class torch.nn.Dropout1d(p=0.5, inplace=False) [source]

Randomly zero out entire channels (a channel is a 1D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 1D tensor input[i, j]). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Usually the input comes from nn.Conv1d modules.
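
For illustration, a minimal sketch of this pattern with arbitrary layer sizes and dropout probability: a convolution produces an (N, C, L) feature map, and Dropout1d then zeroes whole channels of it.

>>> import torch
>>> import torch.nn as nn
>>> conv = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
>>> drop = nn.Dropout1d(p=0.5)
>>> x = torch.randn(4, 8, 32)            # (N, C, L) input to the conv
>>> features = conv(x)                   # (4, 16, 30) feature maps
>>> out = drop(features)                 # whole channels zeroed with probability 0.5
>>> dropped = (out == 0).all(dim=-1)     # (N, C) boolean mask of the dropped channels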

As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers), then i.i.d. dropout will not regularize the activations and will instead just result in an effective learning rate decrease.

In this case, nn.Dropout1d() will help promote independence between feature maps and should be used instead.
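
The contrast with element-wise dropout can be checked directly. The sketch below (arbitrary shapes, modules in their default training mode) shows that nn.Dropout zeroes individual elements i.i.d., while nn.Dropout1d either keeps a channel intact or zeroes it entirely.

>>> import torch
>>> import torch.nn as nn
>>> x = torch.randn(2, 4, 10)                 # (N, C, L)
>>> elementwise = nn.Dropout(p=0.5)(x)        # zeroes individual elements i.i.d.
>>> channelwise = nn.Dropout1d(p=0.5)(x)      # zeroes entire (sample, channel) rows
>>> # with Dropout1d, each channel is either all zeros or fully kept
>>> # (kept values are scaled by 1 / (1 - p) during training)
>>> fully_zero = (channelwise == 0).all(dim=-1)
>>> fully_kept = (channelwise != 0).all(dim=-1)
>>> assert bool((fully_zero | fully_kept).all())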

Parameters
  • p (float, optional) – probability of a channel to be zeroed. Default: 0.5
  • inplace (bool, optional) – If set to True, will do this operation in-place. Default: False
Shape:
  • Input: (N, C, L) or (C, L).
  • Output: (N, C, L) or (C, L) (same shape as input).

Examples:

>>> import torch
>>> import torch.nn as nn
>>> m = nn.Dropout1d(p=0.2)
>>> input = torch.randn(20, 16, 32)
>>> output = m(input)
