# tf.raw_ops.ConfigureDistributedTPU

Sets up the centralized structures for a distributed TPU system.
```python
tf.raw_ops.ConfigureDistributedTPU(
    embedding_config='', tpu_embedding_config='', is_global_init=False,
    enable_whole_mesh_compilations=False, compilation_failure_closes_chips=True,
    name=None
)
```
| Args | |
|---|---|
| `embedding_config` | An optional `string`. Defaults to `""`. Reserved. Do not use. |
| `tpu_embedding_config` | An optional `string`. Defaults to `""`. Serialized `tensorflow.tpu.TPUEmbeddingConfiguration` that describes the embedding lookups of the program. |
| `is_global_init` | An optional `bool`. Defaults to `False`. Reserved. Do not use. |
| `enable_whole_mesh_compilations` | An optional `bool`. Defaults to `False`. |
| `compilation_failure_closes_chips` | An optional `bool`. Defaults to `True`. |
| `name` | A name for the operation (optional). |
| Returns | |
|---|---|
| A `Tensor` of type `string`. | |
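In practice this raw op is rarely called directly: the supported entry point is `tf.tpu.experimental.initialize_tpu_system`, which issues `ConfigureDistributedTPU` internally and returns the resulting TPU topology. Below is a minimal sketch, assuming a standard TensorFlow 2.x install; the helper name `initialize_tpu_if_available` and the `tpu=""` (local TPU) argument are illustrative choices, and the code falls back gracefully when no TPU is reachable.

```python
# Hypothetical helper: initialize the TPU system when one is available.
# tf.tpu.experimental.initialize_tpu_system runs ConfigureDistributedTPU
# under the hood and returns a tf.tpu.experimental.Topology.
def initialize_tpu_if_available():
    """Return the TPU topology, or None when TensorFlow/TPU is unavailable."""
    try:
        import tensorflow as tf
        # tpu="" targets a locally attached TPU; pass a name/address otherwise.
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
        tf.config.experimental_connect_to_cluster(resolver)
        return tf.tpu.experimental.initialize_tpu_system(resolver)
    except Exception:  # no TensorFlow install or no TPU reachable
        return None

topology = initialize_tpu_if_available()
```

Note that the op must run before any other TPU computation in the session; higher-level APIs such as `tf.distribute.TPUStrategy` arrange this for you after `initialize_tpu_system` has been called.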
     
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
 https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/raw_ops/ConfigureDistributedTPU