
leaky_relu

torch.ao.nn.quantized.functional.leaky_relu(input, negative_slope=0.01, inplace=False, scale=None, zero_point=None)

Quantized version of leaky_relu(input, negative_slope=0.01, inplace=False, scale, zero_point) -> Tensor

Applies element-wise: LeakyReLU(x) = max(0, x) + negative_slope * min(0, x)

Parameters
  • input (Tensor) – Quantized input
  • negative_slope (float) – The slope of the negative input
  • inplace (bool) – Inplace modification of the input tensor
  • scale (Optional[float]) – Scale of the output tensor. If given together with zero_point, the output is quantized with these parameters instead of reusing the input's.
  • zero_point (Optional[int]) – Zero point of the output tensor. Must be given together with scale.

See LeakyReLU for more details.
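A minimal usage sketch. The scale and zero_point values below are arbitrary choices for illustration, not values the docs prescribe:

import torch
import torch.ao.nn.quantized.functional as qF

# Quantize a float tensor (illustrative scale/zero_point)
x = torch.randn(4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)

# Apply quantized leaky_relu; without scale/zero_point the output
# reuses the input tensor's quantization parameters
out = qF.leaky_relu(qx, negative_slope=0.01)

# Supplying both scale and zero_point requantizes the output
# (not allowed together with inplace=True)
out2 = qF.leaky_relu(qx, negative_slope=0.01, scale=0.05, zero_point=64)

print(out.dequantize())
print(out2.dequantize())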
