
Conv3d

class torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]

Applies a 3D convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size $(N, C_{in}, D, H, W)$ and output $(N, C_{out}, D_{out}, H_{out}, W_{out})$ can be precisely described as:

$$\text{out}(N_i, C_{out_j}) = \text{bias}(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{out_j}, k) \star \text{input}(N_i, k)$$

where $\star$ is the valid 3D cross-correlation operator.
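
A minimal sketch checking this definition channel by channel, using torch.nn.functional.conv3d on one input channel at a time (shapes chosen arbitrarily for illustration):

>>> import torch.nn.functional as F
>>> x = torch.randn(1, 2, 4, 4, 4)
>>> m = nn.Conv3d(2, 3, 3)
>>> # bias(C_out_j) plus the sum over input channels k of
>>> # weight(C_out_j, k) cross-correlated with input(N_i, k)
>>> manual = m.bias.view(1, 3, 1, 1, 1) + sum(
...     F.conv3d(x[:, k:k+1], m.weight[:, k:k+1]) for k in range(2))
>>> torch.allclose(m(x), manual, atol=1e-6)
True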

This module supports TensorFloat32.

  • stride controls the stride for the cross-correlation.
  • padding controls the amount of implicit padding: padding points are added on both sides of each dimension.
  • dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe in words, but the animations at https://github.com/vdumoulin/conv_arithmetic give a nice visualization of what dilation does.
  • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

    • At groups=1, all inputs are convolved to all outputs.
    • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with the two outputs subsequently concatenated (see the sketch after this list).
    • At groups= in_channels, each input channel is convolved with its own set of filters (of size $\frac{\text{out\_channels}}{\text{in\_channels}}$).
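
For instance, the groups=2 equivalence can be sketched directly (channel counts chosen arbitrarily): a grouped convolution matches two independent convolutions over each half of the input channels, concatenated along the channel dimension.

>>> x = torch.randn(1, 8, 5, 5, 5)
>>> grouped = nn.Conv3d(8, 6, 3, groups=2, bias=False)
>>> half_a = nn.Conv3d(4, 3, 3, bias=False)
>>> half_b = nn.Conv3d(4, 3, 3, bias=False)
>>> # reuse the grouped weights so both computations are comparable
>>> _ = half_a.weight.data.copy_(grouped.weight[:3])
>>> _ = half_b.weight.data.copy_(grouped.weight[3:])
>>> side_by_side = torch.cat([half_a(x[:, :4]), half_b(x[:, 4:])], dim=1)
>>> torch.allclose(grouped(x), side_by_side, atol=1e-6)
True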

The parameters kernel_size, stride, padding, dilation can either be:

  • a single int – in which case the same value is used for the depth, height and width dimension
  • a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
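
As a quick check of the expansion rule (values chosen for illustration), a single int produces the same kernel as the corresponding triple:

>>> nn.Conv3d(4, 8, 3).weight.shape
torch.Size([8, 4, 3, 3, 3])
>>> nn.Conv3d(4, 8, (3, 3, 3)).weight.shape
torch.Size([8, 4, 3, 3, 3])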

Note

When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.

In other words, for an input of size $(N, C_{in}, D_{in}, H_{in}, W_{in})$, a depthwise convolution with a depthwise multiplier K can be performed with the arguments $(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times K, ..., \text{groups}=C_\text{in})$.
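
A minimal sketch of such a depthwise convolution, with in_channels = 16 and a depthwise multiplier K = 2 (values chosen for illustration):

>>> dw = nn.Conv3d(16, 32, 3, groups=16)
>>> dw.weight.shape  # one single-channel filter bank per input channel
torch.Size([32, 1, 3, 3, 3])
>>> dw(torch.randn(1, 16, 8, 8, 8)).shape
torch.Size([1, 32, 6, 6, 6])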

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.
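
Concretely, the flag is set once before running the model; disabling the cuDNN autotuner alongside it is a common companion step (see the Reproducibility notes for the full recipe):

>>> # trade speed for run-to-run determinism in cuDNN convolutions
>>> torch.backends.cudnn.deterministic = True
>>> torch.backends.cudnn.benchmark = False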

Parameters
  • in_channels (int) – Number of channels in the input image
  • out_channels (int) – Number of channels produced by the convolution
  • kernel_size (int or tuple) – Size of the convolving kernel
  • stride (int or tuple, optional) – Stride of the convolution. Default: 1
  • padding (int or tuple, optional) – Zero-padding added to all three sides of the input. Default: 0
  • padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Shape:
  • Input: $(N, C_{in}, D_{in}, H_{in}, W_{in})$
  • Output: $(N, C_{out}, D_{out}, H_{out}, W_{out})$ where

    $$D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor$$
    $$H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor$$
    $$W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor$$
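
These expressions can be evaluated with a small helper (hypothetical, for illustration) and checked against an actual module:

>>> import math
>>> def conv_out(size, kernel, stride=1, padding=0, dilation=1):
...     return math.floor((size + 2 * padding
...                        - dilation * (kernel - 1) - 1) / stride + 1)
>>> conv_out(16, 3, stride=2, padding=1, dilation=2)
7
>>> m = nn.Conv3d(3, 6, 3, stride=2, padding=1, dilation=2)
>>> m(torch.randn(1, 3, 16, 16, 16)).shape
torch.Size([1, 6, 7, 7, 7])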
Variables
  • ~Conv3d.weight (Tensor) – the learnable weights of the module of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1], \text{kernel\_size}[2])$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} \cdot \prod_{i=0}^{2}\text{kernel\_size}[i]}$
  • ~Conv3d.bias (Tensor) – the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} \cdot \prod_{i=0}^{2}\text{kernel\_size}[i]}$
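
A quick sanity check of the stated bound (parameter values chosen arbitrarily): with groups=1, $k = \frac{1}{C_\text{in} \cdot \prod_{i}\text{kernel\_size}[i]}$, and every weight and bias entry should lie within $(-\sqrt{k}, \sqrt{k})$:

>>> m = nn.Conv3d(16, 33, (3, 5, 2))
>>> bound = (1 / (16 * 3 * 5 * 2)) ** 0.5
>>> bool(m.weight.abs().max() <= bound), bool(m.bias.abs().max() <= bound)
(True, True)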

Examples:

>>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
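
For the second module above, the shape formulas give $D_{out} = 8$, $H_{out} = 50$, and $W_{out} = 99$, which can be confirmed directly:

>>> output.shape
torch.Size([20, 33, 8, 50, 99])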
