Investigation into the Training Dynamics of Learned Optimizers

Abstract

Optimization is an integral part of modern deep learning. Recently, learned optimizers have emerged as a way to accelerate this optimization process by replacing traditional, hand-crafted algorithms with meta-learned functions. Despite initially promising results, these methods still suffer from stability and generalization issues that limit their practical use. Moreover, their inner workings and behavior under different conditions are not yet fully understood, making it difficult to devise improvements. For this reason, our work examines their optimization trajectories from the perspective of network architecture symmetries and parameter update distributions. Furthermore, by contrasting learned optimizers with their manually designed counterparts, we identify several key insights that demonstrate how each approach can benefit from the strengths of the other.
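To make the contrast concrete, the following is a minimal sketch (not the paper's actual method) of the difference between a hand-crafted update rule like SGD and a learned optimizer, where a small meta-learned network maps per-parameter gradient features to updates. The feature choice and network shape here are illustrative assumptions; in practice the meta-network's weights are found by meta-training over many optimization tasks.

```python
import numpy as np

def sgd_update(params, grads, lr=0.01):
    """Hand-crafted optimizer: a fixed, human-designed update rule."""
    return params - lr * grads

def learned_update(params, grads, meta_weights):
    """Learned optimizer sketch: a tiny MLP maps each parameter's
    gradient features to its update. `meta_weights` stands in for
    weights that would normally be obtained by meta-training."""
    W1, W2 = meta_weights
    # Per-parameter input features: the gradient and its sign
    # (an illustrative choice of features, not the paper's).
    features = np.stack([grads, np.sign(grads)], axis=-1)  # shape (n, 2)
    hidden = np.tanh(features @ W1)                        # shape (n, 8)
    updates = (hidden @ W2).squeeze(-1)                    # shape (n,)
    return params + updates

# Tiny usage example with random parameters and gradients.
rng = np.random.default_rng(0)
params = rng.normal(size=5)
grads = rng.normal(size=5)
meta_weights = (rng.normal(scale=0.1, size=(2, 8)),
                rng.normal(scale=0.1, size=(8, 1)))

new_params_sgd = sgd_update(params, grads)
new_params_learned = learned_update(params, grads, meta_weights)
```

The key point is that `learned_update` applies the same small function independently to every parameter, so analyzing the distribution of its outputs, and how they interact with architecture symmetries, is a natural way to probe its behavior.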

Publication
16th International Conference on Agents and Artificial Intelligence (ICAART 2024)

Published in the proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, 135–146. https://doi.org/10.5220/0012317000003636.

Preprint available on arXiv.

Jan Sobotka
CS Master’s Student & AI/ML Research Assistant

I am a master’s student in computer science at EPFL and a research assistant at the MLBio Lab. I am interested in representation learning, (mechanistic) interpretability, meta-learning, reasoning, test-time training, and machine consciousness.