Closed
Labels: `enhancement` (Not as big of a feature, but technically not a bug. Should be easy to fix), `module: optimizer` (Related to torch.optim), `triaged` (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
There is a line of work out of Michael Jordan’s lab on perturbed stochastic gradient descent that reportedly has advantages over plain SGD:
- Gradient Descent Can Take Exponential Time to Escape Saddle Points
- How to Escape Saddle Points Efficiently
- Stochastic Gradient Descent Escapes Saddle Points Efficiently
Is there an implementation of perturbed SGD as a PyTorch optimizer? I looked through the available optimizers in `torch.optim`, and the answer appears to be no.
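For anyone landing here: the core idea in the cited papers is simple enough to prototype outside `torch.optim`. Below is a minimal NumPy sketch, not a PyTorch implementation and not the exact algorithm from the papers; the function names (`f_grad`, `perturbed_gd`) and all hyperparameters (`lr`, `radius`, `grad_tol`) are illustrative choices. It takes plain gradient steps, but when the gradient norm is small (a possible saddle point), it adds a small random perturbation so the iterate can escape:

```python
import numpy as np

def f_grad(x):
    # Gradient of the saddle function f(x, y) = x**2 - y**2: (2x, -2y).
    # The origin is a saddle point where plain GD gets stuck forever.
    return np.array([2 * x[0], -2 * x[1]])

def perturbed_gd(x0, lr=0.1, radius=1e-2, grad_tol=1e-3, steps=200, seed=0):
    # Illustrative sketch of perturbed gradient descent: ordinary GD steps,
    # plus a uniform perturbation inside a small box whenever the gradient
    # is nearly zero (i.e. we may be sitting at a saddle point).
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = f_grad(x)
        if np.linalg.norm(g) < grad_tol:
            # Near a stationary point: kick the iterate off it.
            x = x + rng.uniform(-radius, radius, size=x.shape)
        else:
            x = x - lr * g
    return x

# Starting exactly at the saddle (0, 0), plain GD would never move;
# the perturbation pushes the iterate off, and |y| then grows while
# |x| contracts toward the minimizing direction.
x_final = perturbed_gd([0.0, 0.0])
```

The same control flow could be wrapped in a `torch.optim.Optimizer` subclass whose `step()` adds noise to each parameter when the gradient norm falls below a threshold, which is roughly what this feature request asks for.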