What is Adam Optimization Algorithm?

Written by Dr. Jagreet Kaur Gill | 13 Mar 2023

Overview of Adam Optimization Algorithm

Adam is an optimization algorithm that can be used to update network weights iteratively based on training data, as an alternative to the classical stochastic gradient descent procedure. The name Adam is derived from adaptive moment estimation. The algorithm is widely used in deep learning.
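
For concreteness, the update rule from the original Adam paper can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's own code; the toy objective, variable names, and hyperparameter defaults (step size 0.001, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8) are assumptions taken from the paper's suggested defaults.

```python
import numpy as np

def adam_update(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: update the biased first and second moment estimates,
    bias-correct them, then take a scaled gradient step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta = np.array([5.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * theta
    theta, m, v = adam_update(theta, grad, m, v, t, lr=0.1)
print(theta)  # close to 0 after 500 steps
```

Because the step is scaled by the square root of the second moment estimate, each parameter effectively gets its own adaptive step size, which is what makes the method robust to gradients of very different magnitudes.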

Adam Optimization Algorithm Features

Listed below are the features of using Adam on non-convex optimization problems:

  • Implementation is straightforward.
  • Computationally efficient.
  • Requires little memory.
  • Invariant to diagonal rescaling of the gradients.
  • Well suited for problems that are large in terms of data and/or parameters.
  • Suitable for non-stationary objectives.
  • Suitable for problems with very noisy or sparse gradients.
  • Hyperparameters have intuitive interpretations and typically require little tuning.
  • Adam combines the best properties of the AdaGrad and RMSProp algorithms to provide an optimization algorithm that can handle sparse gradients on noisy problems.
  • Adam is relatively easy to configure; the default configuration parameters do well on most problems (see the sketch below).
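
To illustrate that last point, the paper's suggested defaults (step size 0.001, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8) are also the defaults exposed by common deep learning frameworks. The snippet below is a minimal PyTorch sketch; the toy model, the random data, and the choice of PyTorch itself are illustrative assumptions, not part of the article.

```python
import torch

model = torch.nn.Linear(10, 1)  # toy model, for illustration only

# These keyword arguments simply restate PyTorch's defaults for Adam.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,             # step size (alpha)
    betas=(0.9, 0.999),  # decay rates for the first and second moment estimates
    eps=1e-8,            # small constant to avoid division by zero
)

# One training step on random data.
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```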

Comparison with Other Algorithms

Adam combines the advantages of two other extensions of stochastic gradient descent. The Adaptive Gradient Algorithm (AdaGrad) maintains a per-parameter learning rate that improves performance on problems with sparse gradients (e.g., natural language and computer vision problems). Root Mean Square Propagation (RMSProp) also maintains per-parameter learning rates, adapted based on the average of recent gradient magnitudes for each weight, which means it does well on online and non-stationary problems.
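
The relationship is easiest to see in the per-parameter scaling each method maintains: AdaGrad accumulates all past squared gradients, RMSProp replaces that sum with an exponential moving average, and Adam adds a bias-corrected moving average of the gradients themselves. The NumPy sketch below is illustrative only; the function names and hyperparameter values are assumptions, not from the article.

```python
import numpy as np

def adagrad_direction(g, acc, eps=1e-8):
    # AdaGrad: accumulate the sum of squared gradients, so each parameter's
    # effective learning rate shrinks as its gradients accumulate.
    acc = acc + g ** 2
    return g / (np.sqrt(acc) + eps), acc

def rmsprop_direction(g, avg, rho=0.9, eps=1e-8):
    # RMSProp: exponential moving average of squared gradients, so the
    # scaling tracks recent gradient magnitudes (good for non-stationary problems).
    avg = rho * avg + (1 - rho) * g ** 2
    return g / (np.sqrt(avg) + eps), avg

def adam_direction(g, m, v, t, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: an RMSProp-style second-moment average plus a momentum-like
    # first-moment average, both bias-corrected.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return m_hat / (np.sqrt(v_hat) + eps), m, v

# Example: the update direction each method produces for the same gradient.
g = np.array([0.5, -0.01])  # one large and one small gradient component
print(adagrad_direction(g, np.zeros(2))[0])
print(rmsprop_direction(g, np.zeros(2))[0])
print(adam_direction(g, np.zeros(2), np.zeros(2), t=1)[0])
```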