tf.distribute.experimental.CommunicationImplementation
Cross-device communication implementation.
AUTO
: Automatically chosen by TensorFlow.

RING
: TensorFlow's ring algorithms for all-reduce and all-gather.

NCCL
: NVIDIA®'s NCCL library. This is currently only used for all-reduce on GPUs; all-reduce on CPU, all-gather, and broadcast fall back to RING.
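For illustration, a minimal sketch of selecting an implementation via `tf.distribute.experimental.CommunicationOptions` and passing it to a strategy. Running with multiple workers additionally requires a `TF_CONFIG` cluster definition, which is omitted here; without one the strategy falls back to a single local worker.

```python
import tensorflow as tf

impl = tf.distribute.experimental.CommunicationImplementation

# Choose the collective communication implementation explicitly.
# AUTO lets TensorFlow pick; NCCL applies only to all-reduce on GPUs.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=impl.AUTO
)

# Pass the options to a multi-worker strategy (single-worker here,
# since no TF_CONFIG cluster is configured).
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options
)
print(strategy.num_replicas_in_sync)
```

Because NCCL is GPU-only, `AUTO` or `RING` is the safer default when some collectives may run on CPU.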
| Class Variables | Value |
|---|---|
| AUTO | `<CommunicationImplementation.AUTO: 'AUTO'>` |
| NCCL | `<CommunicationImplementation.NCCL: 'NCCL'>` |
| RING | `<CommunicationImplementation.RING: 'RING'>` |
© 2022 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 4.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/distribute/experimental/CommunicationImplementation