tf.VariableSynchronization
Indicates when a distributed variable will be synced.
* `AUTO`: Indicates that the synchronization will be determined by the current `DistributionStrategy` (e.g. with `MirroredStrategy` this would be `ON_WRITE`).
* `NONE`: Indicates that there will only be one copy of the variable, so there is no need to sync.
* `ON_WRITE`: Indicates that the variable will be updated across devices every time it is written.
* `ON_READ`: Indicates that the variable will be aggregated across devices when it is read (e.g. when checkpointing or when evaluating an op that uses the variable).

Example:
>>> temp_grad = [tf.Variable([0.], trainable=False,
...                          synchronization=tf.VariableSynchronization.ON_READ,
...                          aggregation=tf.VariableAggregation.MEAN)]
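For context, a minimal sketch of creating such a variable inside a distribution strategy scope, where the synchronization mode takes effect. The `strategy` object and variable name `v` are illustrative assumptions for this sketch, not part of the original example; the `synchronization` and `aggregation` arguments are the documented API:

>>> strategy = tf.distribute.MirroredStrategy()
>>> with strategy.scope():
...   # ON_READ variables are typically non-trainable state (e.g. metrics);
...   # each replica keeps a local copy that is aggregated on read.
...   v = tf.Variable(0., trainable=False,
...                   synchronization=tf.VariableSynchronization.ON_READ,
...                   aggregation=tf.VariableAggregation.SUM)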
Class Variables

| Member | Value |
|---|---|
| `AUTO` | `<VariableSynchronization.AUTO: 0>` |
| `NONE` | `<VariableSynchronization.NONE: 1>` |
| `ON_READ` | `<VariableSynchronization.ON_READ: 3>` |
| `ON_WRITE` | `<VariableSynchronization.ON_WRITE: 2>` |
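Since `tf.VariableSynchronization` is a standard Python `enum.Enum`, the members in the table above can be inspected directly; a minimal sketch assuming an interactive session:

>>> tf.VariableSynchronization.AUTO
<VariableSynchronization.AUTO: 0>
>>> tf.VariableSynchronization.ON_WRITE.value  # plain enum value access
2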
© 2022 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 4.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/VariableSynchronization