Search for: All records
Total Resources: 4
- We introduce Mechanic, a technique for automatically tuning the learning rate scale factor of any base optimization algorithm and schedule. Our method provides a practical realization of recent theoretical reductions that accomplish a similar goal in online convex optimization. We rigorously evaluate Mechanic on a range of large-scale deep learning tasks with varying batch sizes, schedules, and base optimization algorithms. These experiments demonstrate that, depending on the problem, Mechanic either comes very close to, matches, or even improves upon manual tuning of learning rates.
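  The wrapper pattern is easier to see in code. Below is a hedged, heavily simplified single-accumulator sketch of the idea (not the authors' released implementation): the base optimizer produces a cumulative unscaled update Δ_t, the iterate is x_t = x_ref + s_t Δ_t, and the scalar s_t is tuned by a coin-betting-style 1-D online learner whose gradient is h_t = ⟨g_t, Δ_t⟩. The function name mechanic_like_sgd, the use of plain SGD as the base optimizer, and the single decay-free accumulator are illustrative assumptions.
  ```python
  import numpy as np

  def mechanic_like_sgd(grad_fn, x0, base_lr=0.01, steps=1000, s_init=1e-8, eps=1e-8):
      """Simplified sketch of a Mechanic-style learning rate scale tuner."""
      x_ref = np.asarray(x0, dtype=float).copy()  # reference (initial) point
      delta = np.zeros_like(x_ref)                # cumulative unscaled base update
      s = s_init                                  # learned scale factor
      reward = 0.0                                # coin-betting "wealth"
      v = 0.0                                     # sum of squared 1-D gradients
      x = x_ref.copy()
      for _ in range(steps):
          g = grad_fn(x)
          delta -= base_lr * g          # base optimizer step (plain SGD here)
          h = float(np.dot(g, delta))   # 1-D gradient for the scale variable
          v += h * h
          reward = max(0.0, reward - s * h)           # wealth update, clipped at 0
          s = (s_init + reward) / (np.sqrt(v) + eps)  # betting-fraction-style scale
          x = x_ref + s * delta         # iterate = reference + learned scale * update
      return x

  # toy usage: minimize (x - 3)^2 without hand-tuning the learning rate scale
  x_star = mechanic_like_sgd(lambda x: 2.0 * (x - 3.0), np.array([0.0]))
  ```
  The design point to notice is that the base optimizer is untouched: only the scalar multiplying its accumulated update is learned, which is why the method composes with any algorithm and schedule.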
- Cutkosky, Ashok; Mehta, Harsh; Orabona, Francesco (International Conference on Machine Learning). We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a (δ,ϵ)-stationary point from O(ϵ^(-4) δ^(-1)) stochastic gradient queries to O(ϵ^(-3) δ^(-1)), which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to online learning, after which our results follow from standard regret bounds in online learning. For deterministic and second-order-smooth objectives, applying more advanced optimistic online learning techniques enables a new complexity of O(ϵ^(-1.5) δ^(-0.5)). Our techniques also recover all optimal or best-known results for finding ϵ-stationary points of smooth or second-order-smooth objectives in both stochastic and deterministic settings.
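  As a concrete illustration of the reduction, here is a hedged sketch of one reading of the online-to-non-convex conversion, with projected online gradient descent as the inner learner. The window length K, ball radius D, step size, and restart schedule are illustrative choices, not the paper's prescribed constants: the learner proposes bounded increments, the iterate takes them, and the learner suffers the linear loss ⟨g_t, Δ_t⟩ with g_t sampled at a random point along the step; low regret forces the window-averaged gradient to be small, which is what a (δ,ϵ)-stationary point requires.
  ```python
  import numpy as np

  def o2nc_ogd(grad_fn, x0, D=0.01, K=50, T=5000, seed=0):
      """Sketch: an online learner (projected OGD on a radius-D ball) proposes the steps."""
      rng = np.random.default_rng(seed)
      x = np.asarray(x0, dtype=float).copy()
      w = np.zeros_like(x)             # learner's iterate = proposed increment
      lr = D / np.sqrt(K)              # standard OGD tuning, gradient-norm constant absorbed
      for t in range(T):
          g = grad_fn(x + rng.uniform() * w)  # stochastic gradient along the segment
          x = x + w                           # non-convex iterate takes the increment
          w = w - lr * g                      # OGD step on the linear loss <g_t, w>
          norm = np.linalg.norm(w)
          if norm > D:
              w *= D / norm                   # project back onto the ball ||w|| <= D
          if (t + 1) % K == 0:
              w = np.zeros_like(x)            # restart the learner each window
      return x
  ```
  The point of the construction is that no property of the non-convex objective is used beyond gradient access; all the analysis burden moves to the online learner's regret bound.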
- Cutkosky, Ashok; Mehta, Harsh; Orabona, Francesco (Proceedings of Machine Learning Research)
- Srivastava, Aarohi; Rastogi, Abhinav; Rao, Abhishek; Shoeb, Abu Awal; Abid, Abubakar; Fisch, Adam; Brown, Adam R.; Santoro, Adam; Gupta, Aditya; Garriga-Alonso, Adrià; et al. (Transactions on Machine Learning Research)