LinearReLU
class torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)
[source]
A LinearReLU module fused from Linear and ReLU modules that can be used for dynamic quantization. Supports both FP16 and INT8 quantization.
We adopt the same interface as torch.ao.nn.quantized.dynamic.Linear.

Variables
    Same as torch.ao.nn.quantized.dynamic.Linear
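Because the constructor takes a dtype argument, the FP16 variant can be constructed directly. A minimal sketch, assuming torch.float16 is the value that selects FP16 dynamic quantization (a directly constructed module starts with zero-initialized weights, so this only illustrates construction and output shape):

>>> import torch
>>> from torch.ao.nn.intrinsic.quantized.dynamic import LinearReLU
>>> m_fp16 = LinearReLU(20, 30, dtype=torch.float16)  # FP16 variant; assumes float16 selects the FP16 path
>>> input = torch.randn(128, 20)
>>> output = m_fp16(input)
>>> print(output.size())
torch.Size([128, 30])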
Examples:
>>> m = nn.intrinsic.quantized.dynamic.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
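In practice this fused module is usually produced by the eager-mode quantization workflow rather than constructed by hand. The sketch below fuses a Linear/ReLU pair with torch.ao.quantization.fuse_modules and then applies torch.ao.quantization.quantize_dynamic; it assumes the default dynamic quantization mappings swap the fused float nni.LinearReLU for this class, which may vary by PyTorch version:

>>> import torch
>>> import torch.nn as nn
>>> import torch.ao.nn.intrinsic as nni
>>> from torch.ao.quantization import fuse_modules, quantize_dynamic
>>> class SmallModel(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.fc = nn.Linear(20, 30)
...         self.relu = nn.ReLU()
...     def forward(self, x):
...         return self.relu(self.fc(x))
>>> model = SmallModel().eval()
>>> # Fuse the adjacent Linear and ReLU into a single float nni.LinearReLU
>>> fused = fuse_modules(model, [["fc", "relu"]])
>>> # Swap the fused module for its dynamic quantized counterpart
>>> quantized = quantize_dynamic(fused, {nni.LinearReLU}, dtype=torch.qint8)
>>> output = quantized(torch.randn(128, 20))
>>> print(output.size())
torch.Size([128, 30])
>>> print(bool((output >= 0).all()))  # ReLU is fused in, so outputs are non-negative
True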