torch.fake_quantize_per_tensor_affine
torch.fake_quantize_per_tensor_affine(input, scale, zero_point, quant_min, quant_max) → Tensor
Returns a new tensor with the data in input fake quantized using scale, zero_point, quant_min and quant_max.

Parameters
- input (Tensor) – the input value(s), torch.float32 tensor
- scale (double scalar or float32 Tensor) – quantization scale
- zero_point (int64 scalar or int32 Tensor) – quantization zero_point
- quant_min (int64) – lower bound of the quantized domain
- quant_max (int64) – upper bound of the quantized domain
Returns
- A newly fake_quantized torch.float32 tensor

Return type
- Tensor
Example:
>>> x = torch.randn(4)
>>> x
tensor([ 0.0552, 0.9730, 0.3973, -1.0780])
>>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)
tensor([0.1000, 1.0000, 0.4000, 0.0000])
>>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)
tensor([0.1000, 1.0000, 0.4000, 0.0000])
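For intuition, here is a minimal sketch of the per-tensor affine fake-quantization math in plain PyTorch: scale the input onto the integer grid, round, shift by zero_point, clamp to [quant_min, quant_max], and dequantize back to float. The helper name fake_quantize_reference is made up for illustration; this is a sketch of the described behavior, not the library's implementation.

import torch

def fake_quantize_reference(x, scale, zero_point, quant_min, quant_max):
    # Quantize: round to the nearest integer step and shift by the zero point.
    q = torch.round(x / scale) + zero_point
    # Clamp into the quantized domain [quant_min, quant_max].
    q = torch.clamp(q, quant_min, quant_max)
    # Dequantize: "fake" quantization returns a float32 tensor, not integers.
    return (q - zero_point) * scale

x = torch.randn(4)
print(fake_quantize_reference(x, 0.1, 0, 0, 255))
print(torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255))  # should agree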