Cho, M.; Balu, A.; Joshi, A.; Deva Prasad, A.; Khara, B.; Sarkar, S.; Ganapathysubramanian, B.; Krishnamurthy, A.; Hegde, C. (Advances in Neural Information Processing Systems). The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis. We also open-source the code at https://github.com/idealab-isu/DSA.
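The block-sparse Jacobian structure described in this abstract can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); it uses a piecewise-linear spline as the simplest case, where each query point falls in one knot interval and therefore depends on only two neighboring control values, so each Jacobian row has exactly two nonzeros:

```python
import numpy as np

def linear_spline(x, knots, c):
    """Evaluate a piecewise-linear spline at points x and return the
    Jacobian of the outputs with respect to the control values c."""
    # Locate the interval containing each query point.
    i = np.clip(np.searchsorted(knots, x) - 1, 0, len(knots) - 2)
    # Barycentric weight of each point within its interval.
    w = (x - knots[i]) / (knots[i + 1] - knots[i])
    y = (1 - w) * c[i] + w * c[i + 1]
    # Each output depends on only two control values, so the Jacobian
    # is sparse: two nonzeros per row. For higher-order splines this
    # generalizes to a block-sparse pattern with (degree + 1) nonzeros.
    J = np.zeros((len(x), len(c)))
    rows = np.arange(len(x))
    J[rows, i] = 1 - w
    J[rows, i + 1] = w
    return y, J

knots = np.linspace(0.0, 1.0, 5)
c = np.array([0.0, 1.0, 0.5, 2.0, 1.5])   # learnable control values
x = np.array([0.1, 0.4, 0.9])
y, J = linear_spline(x, knots, c)
```

Because the sparsity pattern is known from the knot intervals alone, the Jacobian never needs to be materialized densely in practice; Jacobian-vector products can be applied implicitly, which is the efficiency the abstract refers to.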
Esfendiari, Y.; Tan, S.; Balu, A.; Jiang, Z.; Herron, E.; Hegde, C.; Sarkar, S. (International Conference on Machine Learning)