torch.fake_quantize_per_channel_affine
torch.fake_quantize_per_channel_affine(input, scale, zero_point, axis, quant_min, quant_max) → Tensor
Returns a new tensor with the data in `input` fake quantized per channel using `scale`, `zero_point`, `quant_min` and `quant_max`, across the channel specified by `axis`.

- Parameters
  - input (Tensor) – the input value(s), in `torch.float32`
  - scale (Tensor) – quantization scale, per channel in `torch.float32`
  - zero_point (Tensor) – quantization zero_point, per channel in `torch.int32` or `torch.half` or `torch.float32`
  - axis (int32) – channel axis
  - quant_min (int64) – lower bound of the quantized domain
  - quant_max (int64) – upper bound of the quantized domain
- Returns
  A newly fake_quantized per channel `torch.float32` tensor
- Return type
  Tensor
Example:
>>> x = torch.randn(2, 2, 2)
>>> x
tensor([[[-0.2525, -0.0466],
         [ 0.3491, -0.2168]],

        [[-0.5906,  1.6258],
         [ 0.6444, -0.0542]]])
>>> scales = (torch.randn(2) + 1) * 0.05
>>> scales
tensor([0.0475, 0.0486])
>>> zero_points = torch.zeros(2).to(torch.int32)
>>> zero_points
tensor([0, 0], dtype=torch.int32)
>>> torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, 0, 255)
tensor([[[0.0000, 0.0000],
         [0.3405, 0.0000]],

        [[0.0000, 1.6134],
         [0.6323, 0.0000]]])
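To make the per-channel affine computation concrete, here is a minimal reference sketch of what the function computes: quantize each value as `round(input / scale) + zero_point`, clamp to `[quant_min, quant_max]`, then dequantize. The helper name `fake_quantize_per_channel_ref` and the fixed sample inputs are illustrative assumptions, not part of the PyTorch API; this is a sketch, not the library's actual implementation.

```python
import torch

def fake_quantize_per_channel_ref(x, scale, zero_point, axis, quant_min, quant_max):
    # Hypothetical reference helper (not part of torch).
    # Reshape scale/zero_point so they broadcast along the channel axis.
    shape = [1] * x.dim()
    shape[axis] = -1
    scale = scale.reshape(shape)
    zero_point = zero_point.reshape(shape).to(torch.float32)
    # Quantize: divide by scale, round, shift by zero_point,
    # clamp to the quantized domain [quant_min, quant_max].
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    # Dequantize back to torch.float32.
    return (q - zero_point) * scale

# Fixed sample inputs (illustrative, 2 channels along axis 1).
x = torch.tensor([[0.10, -0.20],
                  [0.30,  0.41]])
scales = torch.tensor([0.05, 0.10])            # one scale per channel
zero_points = torch.zeros(2, dtype=torch.int32)

ref = fake_quantize_per_channel_ref(x, scales, zero_points, 1, 0, 255)
out = torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, 0, 255)
print(torch.allclose(ref, out))
```

Note how the negative value `-0.20` maps to `quant_min` (here 0) during clamping and therefore dequantizes to `0.0`, matching the zeros visible in the example output above.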
© 2024, PyTorch Contributors
PyTorch has a BSD-style license, as found in the LICENSE file.
https://pytorch.org/docs/2.1/generated/torch.fake_quantize_per_channel_affine.html