
tf.keras.optimizers.legacy.Nadam

Optimizer that implements the NAdam algorithm.

Inherits From: Nadam, Optimizer

Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
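As a rough illustration, a single simplified Nadam update can be sketched in NumPy as follows. This omits the momentum-cache schedule used by the Keras implementation, and the function and variable names are illustrative only:

import numpy as np

def nadam_step(theta, grad, m, v, t,
               lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    # Exponentially decayed first and second moment estimates, as in Adam.
    m = beta_1 * m + (1.0 - beta_1) * grad
    v = beta_2 * v + (1.0 - beta_2) * grad ** 2
    # Bias-corrected estimates.
    m_hat = m / (1.0 - beta_1 ** t)
    v_hat = v / (1.0 - beta_2 ** t)
    # Nesterov look-ahead: blend the corrected momentum with the current gradient.
    m_bar = beta_1 * m_hat + (1.0 - beta_1) * grad / (1.0 - beta_1 ** t)
    # Parameter update.
    theta = theta - lr * m_bar / (np.sqrt(v_hat) + epsilon)
    return theta, m, v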

Args
learning_rate: A Tensor or a floating point value. The learning rate.
beta_1: A float value or a constant float tensor. The exponential decay rate for the 1st moment estimates.
beta_2: A float value or a constant float tensor. The exponential decay rate for the 2nd moment estimates.
epsilon: A small constant for numerical stability.
name: Optional name for the operations created when applying gradients. Defaults to "Nadam".
**kwargs: Keyword arguments. Allowed arguments are clipvalue, clipnorm, and global_clipnorm. If clipvalue (float) is set, the gradient of each weight is clipped to be no higher than this value. If clipnorm (float) is set, the gradient of each weight is individually clipped so that its norm is no higher than this value. If global_clipnorm (float) is set, the gradient of all weights is clipped so that their global norm is no higher than this value. (See the brief example below.)
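For example, the clipping kwargs can be passed at construction time (the values here are illustrative):

import tensorflow as tf

# Clip each weight's gradient so its norm is at most 1.0.
opt = tf.keras.optimizers.legacy.Nadam(learning_rate=0.001, clipnorm=1.0)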

Usage Example:

import tensorflow as tf

opt = tf.keras.optimizers.Nadam(learning_rate=0.2)
var1 = tf.Variable(10.0)
loss = lambda: (var1 ** 2) / 2.0  # d(loss)/d(var1) == var1
step_count = opt.minimize(loss, [var1]).numpy()
"{:.1f}".format(var1.numpy())
'9.8'
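
The optimizer can also be passed to Model.compile in the usual way; the model below is only a minimal illustration:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.legacy.Nadam(learning_rate=0.001),
              loss='mse')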

Reference:

Dozat, 2015: Incorporating Nesterov Momentum into Adam (http://cs229.stanford.edu/proj2015/054_report.pdf).
Raises
ValueError: In case of any invalid argument.

© 2022 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 4.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/keras/optimizers/legacy/Nadam