tf.raw_ops.ApplyAdadelta
Update '*var' according to the adadelta scheme.
```python
tf.raw_ops.ApplyAdadelta(
    var, accum, accum_update, lr, rho, epsilon, grad, use_locking=False, name=None
)
```
```
accum = rho * accum + (1 - rho) * grad.square();
update = (accum_update + epsilon).sqrt() * (accum + epsilon).rsqrt() * grad;
accum_update = rho * accum_update + (1 - rho) * update.square();
var -= update;
```
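The same arithmetic can be sketched with ordinary tensors. This is a minimal illustration of one update step, not a call to the op itself; the values of `lr`, `rho`, and `epsilon` are illustrative assumptions, and applying `lr` as a multiplier on the update is inferred from its "Scaling factor" description rather than from the rule above.

```python
import tensorflow as tf

# Illustrative state; a real call would use mutable variables.
var = tf.constant([1.0, 2.0])
accum = tf.constant([0.0, 0.0])
accum_update = tf.constant([0.0, 0.0])
grad = tf.constant([0.1, -0.2])
lr, rho, epsilon = 1.0, 0.95, 1e-7  # assumed values, not from this page

# One Adadelta step, following the rule above.
accum = rho * accum + (1 - rho) * tf.square(grad)
update = tf.sqrt(accum_update + epsilon) * tf.math.rsqrt(accum + epsilon) * grad
accum_update = rho * accum_update + (1 - rho) * tf.square(update)
var = var - lr * update  # lr assumed to scale the update, per its description
```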
| Args | |
|---|---|
| `var` | A mutable `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `complex64`, `int64`, `qint8`, `quint8`, `qint32`, `bfloat16`, `uint16`, `complex128`, `half`, `uint32`, `uint64`. Should be from a `Variable()`. |
| `accum` | A mutable `Tensor`. Must have the same type as `var`. Should be from a `Variable()`. |
| `accum_update` | A mutable `Tensor`. Must have the same type as `var`. Should be from a `Variable()`. |
| `lr` | A `Tensor`. Must have the same type as `var`. Scaling factor. Must be a scalar. |
| `rho` | A `Tensor`. Must have the same type as `var`. Decay factor. Must be a scalar. |
| `epsilon` | A `Tensor`. Must have the same type as `var`. Constant factor. Must be a scalar. |
| `grad` | A `Tensor`. Must have the same type as `var`. The gradient. |
| `use_locking` | An optional `bool`. Defaults to `False`. If `True`, updating of the `var`, `accum` and `accum_update` tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
| `name` | A name for the operation (optional). |
| Returns | |
|---|---|
| A mutable `Tensor`. Has the same type as `var`. | |
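A hedged usage sketch follows. `ApplyAdadelta` takes legacy reference-type variables, which are not available under TF2 eager execution, so this sketch calls the resource-variable sibling `tf.raw_ops.ResourceApplyAdadelta` with `tf.Variable` handles, assuming it shares the argument list documented here; all values are illustrative.

```python
import tensorflow as tf

var = tf.Variable([1.0, 2.0])
accum = tf.Variable([0.0, 0.0])
accum_update = tf.Variable([0.0, 0.0])

# Assumed resource-variable variant of the op documented above.
tf.raw_ops.ResourceApplyAdadelta(
    var=var.handle,
    accum=accum.handle,
    accum_update=accum_update.handle,
    lr=tf.constant(1.0),
    rho=tf.constant(0.95),
    epsilon=tf.constant(1e-7),
    grad=tf.constant([0.1, -0.2]),
    use_locking=False,
)
print(var.numpy())  # var has been updated in place
```

In most programs `tf.keras.optimizers.Adadelta` is the intended entry point; the raw op is useful mainly when wiring a custom training step at the graph level.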