tf.raw_ops.QuantizedMatMulWithBiasAndDequantize
  
  tf.raw_ops.QuantizedMatMulWithBiasAndDequantize(
      a, b, bias, min_a, max_a, min_b, max_b, min_freezed_output, max_freezed_output,
      Toutput, transpose_a=False, transpose_b=False, input_quant_mode='MIN_FIRST',
      name=None
  )
  
| Args | |
|---|---|
| a | A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. |
| b | A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. |
| bias | A Tensor. Must be one of the following types: float32, qint32. |
| min_a | A Tensor of type float32. |
| max_a | A Tensor of type float32. |
| min_b | A Tensor of type float32. |
| max_b | A Tensor of type float32. |
| min_freezed_output | A Tensor of type float32. |
| max_freezed_output | A Tensor of type float32. |
| Toutput | A tf.DType from: tf.float32. |
| transpose_a | An optional bool. Defaults to False. |
| transpose_b | An optional bool. Defaults to False. |
| input_quant_mode | An optional string from: "MIN_FIRST", "SCALED". Defaults to "MIN_FIRST". |
| name | A name for the operation (optional). |
| Returns |
|---|
| A Tensor of type Toutput. |
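
The reference gives no usage notes, so the arithmetic this op fuses (quantized matmul, bias add, dequantize to float32) can be sketched in NumPy. This is a minimal sketch assuming SCALED-mode qint8 inputs; it ignores min_freezed_output/max_freezed_output (which the real oneDNN kernel uses for its internal requantization) as well as rounding details, and the helper names are illustrative, not TensorFlow's.

```python
import numpy as np

def scaled_dequant_scale(min_v, max_v, num_bits=8):
    # SCALED mode for a signed type: a symmetric scale mapping the
    # integer range [-(2^(n-1)-1), 2^(n-1)-1] onto [min_v, max_v].
    return max(abs(min_v), abs(max_v)) / (2 ** (num_bits - 1) - 1)

def quantized_matmul_bias_dequantize(a_q, b_q, bias,
                                     min_a, max_a, min_b, max_b):
    """Emulate the fusion: int32-accumulated matmul, bias add, float32 out."""
    scale_a = scaled_dequant_scale(min_a, max_a)
    scale_b = scaled_dequant_scale(min_b, max_b)
    # Accumulate the integer product in int32, as quantized kernels do.
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    # Dequantize the accumulator, then add the float32 bias.
    return acc.astype(np.float32) * np.float32(scale_a * scale_b) + bias

out = quantized_matmul_bias_dequantize(
    np.array([[10, 20]], dtype=np.int8),   # quantized a
    np.array([[1], [2]], dtype=np.int8),   # quantized b
    np.array([0.5], dtype=np.float32),     # float32 bias
    -1.27, 1.27, -1.27, 1.27)              # scale 0.01 per input
```

With MIN_FIRST inputs (typically quint8) the mapping is asymmetric, so a zero-point offset enters the accumulation and the dequantization formula changes accordingly.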