Title: A Stochastic Subgradient Method for Distributionally Robust Non-convex and Non-smooth Learning
Award ID(s): 1814888, 2053485, 1907522
NSF-PAR ID: 10399563
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Journal of Optimization Theory and Applications
Volume: 194
Issue: 3
ISSN: 0022-3239
Page Range / eLocation ID: 1014 to 1041
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a (δ, ϵ)-stationary point from O(ϵ^(-4) δ^(-1)) stochastic gradient queries to O(ϵ^(-3) δ^(-1)), which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to online learning, after which our results follow from standard regret bounds in online learning. For deterministic, second-order-smooth objectives, applying more advanced optimistic online learning techniques yields a new complexity of O(ϵ^(-1.5) δ^(-0.5)). Our techniques also recover all optimal or best-known results for finding ϵ-stationary points of smooth or second-order-smooth objectives in both stochastic and deterministic settings.
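The abstract above describes the online-to-non-convex reduction only at a high level. The sketch below is a minimal, hypothetical Python illustration of that idea, not the authors' algorithm: it uses projected online gradient descent as the inner online learner, samples gradients at random points of each outer step, and certifies a Goldstein-style (δ, ϵ)-stationarity criterion via averaged gradients. The names grad_oracle, eta, T, and K, and all parameter choices, are assumptions made for illustration.

```python
import numpy as np

def online_to_nonconvex_sketch(grad_oracle, x0, delta=0.1, eta=0.01, T=1000, K=50, seed=0):
    """Hypothetical sketch of an online-to-non-convex reduction.

    grad_oracle(x) should return a stochastic (sub)gradient of the objective at x.
    The inner online learner here is projected online gradient descent over the
    ball of radius delta (an illustrative choice, not the paper's exact method).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    delta_step = np.zeros_like(x)      # the online learner's prediction (update direction)
    window_grads, candidates = [], []

    for t in range(T):
        x_prev = x.copy()
        x = x + delta_step             # outer iterate moves by the learner's prediction
        s = rng.uniform()              # random point on the segment [x_prev, x]
        g = grad_oracle(x_prev + s * (x - x_prev))
        window_grads.append(g)

        # Online gradient descent step on the linear loss <g, .>, projected
        # back onto the ball of radius delta so every outer step stays small.
        delta_step = delta_step - eta * g
        nrm = np.linalg.norm(delta_step)
        if nrm > delta:
            delta_step *= delta / nrm

        # Every K steps, record the averaged gradient over the window; a small
        # averaged-gradient norm indicates approximate Goldstein stationarity
        # near the points visited in that window.
        if (t + 1) % K == 0:
            candidates.append((np.linalg.norm(np.mean(window_grads, axis=0)), x.copy()))
            window_grads = []

    best_norm, best_x = min(candidates, key=lambda c: c[0])
    return best_x, best_norm
```

As a usage example, calling this sketch with a stochastic subgradient oracle for a non-smooth objective (say, an L1-regularized loss with minibatch noise) and growing T should drive the returned averaged-gradient norm down, mirroring the query-complexity guarantees discussed in the abstract, though the rates quoted there are specific to the authors' analysis rather than to this simplified learner.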