Category: Preprint

Conservation laws for neural network training: two papers at @NeurIPS23 & @ICML24

We study conservation laws during the (Euclidean or not) gradient or momentum flow of neural networks. "Keep the Momentum: Conservation Laws beyond Euclidean Gradient Flows", accepted at ICML24. 1/ We define the concept of conservation laws for momentum flows and show how to extend the framework from our previous paper (Abide by the Law and …


New preprint “Abide by the Law and Follow the Flow”

New preprint "Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows", with @SibylleMarcotte and @GabrielPeyre: we define and study "conservation laws" for the optimization of over-parameterized models. https://arxiv.org/abs/2307.00144
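A classical instance of such a conservation law, which gives the flavor of the concept: for a toy two-layer linear model f(u, v) = u * v trained with squared loss, the "balancedness" quantity u^2 - v^2 stays constant along the gradient flow. The sketch below (not code from the paper; all names are illustrative) approximates the flow with small-step gradient descent, so the quantity is conserved only up to discretization error.

```python
# Minimal sketch of a conservation law under gradient flow, assuming the
# toy model f(u, v) = u * v fit to a scalar target y with squared loss
# L = 0.5 * (u*v - y)**2. Along the exact flow, c = u**2 - v**2 is
# conserved: dc/dt = 2u*u' - 2v*v' = -2uv*r + 2uv*r = 0 with r = u*v - y.

def train(u=1.5, v=0.5, y=1.0, lr=1e-3, steps=10_000):
    c0 = u ** 2 - v ** 2  # conserved quantity along the exact flow
    for _ in range(steps):
        r = u * v - y  # residual
        # simultaneous update (sequential updates would break the law)
        u, v = u - lr * r * v, v - lr * r * u
    return u, v, c0, u ** 2 - v ** 2

u, v, c0, c1 = train()
print(abs(c1 - c0))  # small drift, due only to time discretization
```

With a small step size the drift in u^2 - v^2 is negligible while u*v converges to the target, illustrating that gradient descent trajectories of over-parameterized models are confined to level sets of such conserved quantities.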