
tf.nn.relu6

Computes Rectified Linear 6: min(max(features, 0), 6).

In comparison with tf.nn.relu, the relu6 activation has been shown empirically to perform better under low-precision conditions (e.g. fixed-point inference) by encouraging the model to learn sparse features earlier. Source: Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010.

For example:

x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)
y = tf.nn.relu6(x)
y.numpy()
array([0., 0., 0., 6., 6.], dtype=float32)
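
Functionally, relu6 is equivalent to clipping a standard ReLU at 6. The sketch below composes the same result from tf.minimum and tf.nn.relu; this is purely illustrative and is not how the fused op is implemented:

x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)
manual = tf.minimum(tf.nn.relu(x), 6.0)  # clip a plain ReLU at 6
manual.numpy()
array([0., 0., 0., 6., 6.], dtype=float32)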
Args
  features: A Tensor with type float, double, int32, int64, uint8, int16, or int8.
  name: A name for the operation (optional).

Returns
  A Tensor with the same type as features.
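
Because the output is clamped at both 0 and 6, gradients are zero for inputs outside the open interval (0, 6) and 1 inside it. A minimal sketch using tf.GradientTape (the input values here are illustrative, not part of the original example):

x = tf.constant([-3.0, 2.0, 10.0], dtype=tf.float32)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = tf.nn.relu6(x)
tape.gradient(y, x).numpy()
array([0., 1., 0.], dtype=float32)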

References:

Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010.
