In this paper, we study the finite-sum convex optimization problem, focusing on the general convex case. Recently, the study of variance-reduced (VR) methods and their accelerated variants has made exciting progress. However, the step size used in existing VR algorithms typically depends on the smoothness parameter, which is often unknown and requires tuning in practice. To address this problem, we propose two novel adaptive VR algorithms: Adaptive Variance Reduced Accelerated Extra-Gradient (AdaVRAE) and Adaptive Variance Reduced Accelerated Gradient (AdaVRAG). Our algorithms do not require knowledge of the smoothness parameter. AdaVRAE uses $$\mathcal{O}\left(n\log\log n+\sqrt{\frac{n\beta}{\epsilon}}\right)$$ and AdaVRAG uses $$\mathcal{O}\left(n\log\log n+\sqrt{\frac{n\beta\log\beta}{\epsilon}}\right)$$ gradient evaluations to attain an $$\mathcal{O}(\epsilon)$$-suboptimal solution, where $$n$$ is the number of functions in the finite sum and $$\beta$$ is the smoothness parameter. This result matches the best-known convergence rate of non-adaptive VR methods and improves upon that of the state-of-the-art adaptive VR method, AdaSVRG. We demonstrate the superior performance of our algorithms over previous methods in experiments on real-world datasets.
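For context on the finite-sum setting, the sketch below implements a plain SVRG-style variance-reduced gradient estimator with a fixed step size on a hypothetical least-squares instance. It only illustrates the kind of method being made adaptive; it is not the AdaVRAE or AdaVRAG update, whose adaptive step-size rules (and extra-gradient/acceleration machinery) are the paper's contribution, and all names here are illustrative.

```python
import numpy as np

# Toy finite-sum instance: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
n, d = 200, 10
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]   # gradient of the i-th component

def vr_gradient(i, x, x_snap, mu):
    """SVRG-style estimator: unbiased for the full gradient at x, with variance
    that shrinks as x approaches the snapshot point x_snap."""
    return grad_i(i, x) - grad_i(i, x_snap) + mu

x = np.zeros(d)
# A conservative fixed step of order 1/beta; removing this dependence on the
# (usually unknown) smoothness parameter is exactly what adaptive VR methods target.
step = 1.0 / (10.0 * np.max((A ** 2).sum(axis=1)))
for epoch in range(30):
    x_snap = x.copy()
    mu = A.T @ (A @ x_snap - b) / n              # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        x -= step * vr_gradient(i, x, x_snap, mu)
print("objective:", 0.5 * np.mean((A @ x - b) ** 2))
```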
Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis
Bilevel optimization has recently attracted considerable attention due to its abundant applications in machine learning problems. However, existing methods rely on prior knowledge of problem parameters to determine stepsizes, which requires significant tuning effort when these parameters are unknown. In this paper, we propose two novel tuning-free algorithms, D-TFBO and S-TFBO. D-TFBO employs a double-loop structure with stepsizes adaptively adjusted by the "inverse of cumulative gradient norms" strategy. S-TFBO features a simpler, fully single-loop structure that updates three variables simultaneously with a theory-motivated joint design of adaptive stepsizes for all variables. We provide a comprehensive convergence analysis for both algorithms and show that D-TFBO and S-TFBO respectively require $$\mathcal{O}(\frac{1}{\epsilon})$$ and $$\mathcal{O}(\frac{1}{\epsilon}\log^4(\frac{1}{\epsilon}))$$ iterations to find an $$\epsilon$$-accurate stationary point, (nearly) matching the rates of their well-tuned counterparts that use knowledge of the problem parameters. Experiments on various problems show that our methods achieve performance comparable to existing well-tuned approaches, while being more robust to the selection of initial stepsizes. To the best of our knowledge, our methods are the first to completely eliminate the need for stepsize tuning while achieving theoretical guarantees.
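The "inverse of cumulative gradient norms" idea can be illustrated on a single-level toy problem. The sketch below is a hedged, generic version of that stepsize rule (in the spirit of AdaGrad-norm), not the paper's D-TFBO or S-TFBO updates, which jointly adapt stepsizes for three coupled variables in a bilevel loop; the function, the constant c0, and the problem instance are hypothetical.

```python
import numpy as np

def tuning_free_descent(grad, x0, iters=1000, c0=1.0):
    """Gradient descent whose stepsize is the inverse of the cumulative gradient
    norms observed so far, so no smoothness constant is needed up front.
    (Illustrative single-variable sketch; c0 only sets the initial scale.)"""
    x = np.asarray(x0, dtype=float)
    accum = c0
    for _ in range(iters):
        g = grad(x)
        accum += np.linalg.norm(g)   # cumulative gradient norms
        x = x - g / accum            # stepsize 1 / (c0 + sum_s ||g_s||)
    return x

# Minimize an ill-conditioned quadratic without knowing its smoothness constant.
Q = np.diag([1.0, 10.0, 100.0])
x_out = tuning_free_descent(lambda x: Q @ x, np.ones(3))
print(x_out)   # approaches the minimizer at the origin
```

The point of the rule is that large early gradients automatically shrink the stepsize, so a poor initial scale c0 is corrected by the accumulated norms rather than by manual tuning.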
- PAR ID: 10587849
- Publisher / Repository: International Conference on Learning Representations
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We investigate the behavior of higher-form symmetries at various quantum phase transitions. We consider discrete 1-form symmetries, which can be either part of the generalized concept of "categorical symmetry" (labelled $$\tilde{Z}_N^{(1)}$$) introduced recently, or an explicit $$Z_N^{(1)}$$ 1-form symmetry. We demonstrate that for many quantum phase transitions involving a $$Z_N^{(1)}$$ or $$\tilde{Z}_N^{(1)}$$ symmetry, the expectation value $$\langle (O_\mathcal{C})^2 \rangle$$ takes the form $$\langle (\log O_\mathcal{C})^2 \rangle \sim -\frac{A}{\epsilon} P + b \log P$$, where $$O_\mathcal{C}$$ is an operator associated with a loop $$\mathcal{C}$$ (or its interior $$\mathcal{A}$$), which reduces to the Wilson loop operator for cases with an explicit $$Z_N^{(1)}$$ 1-form symmetry. Here $$P$$ is the perimeter of $$\mathcal{C}$$, and the $$b \log P$$ term arises from the sharp corners of the loop $$\mathcal{C}$$, which is consistent with recent numerics on a particular example. The coefficient $$b$$ is a universal number independent of microscopic details, which in $$(2+1)d$$ is related to the universal conductivity at the quantum phase transition; $$b$$ can be computed exactly for certain transitions using the dualities between $$(2+1)d$$ conformal field theories developed in recent years. We also compute the "strange correlator" of $$O_\mathcal{C}$$: $$S_{\mathcal{C}} = \langle 0 | O_\mathcal{C} | 1 \rangle / \langle 0 | 1 \rangle$$, where $$|0\rangle$$ and $$|1\rangle$$ are many-body states with different topological nature.
- Dasgupta, Sanjoy; Mandt, Stephan; Li, Yingzhen (Ed.) Average reward reinforcement learning (RL) provides a suitable framework for capturing the objective (i.e. long-run average reward) for continuing tasks, where there is often no natural way to identify a discount factor. However, existing average reward RL algorithms with sample complexity guarantees are not feasible, as they take as input the (unknown) mixing time of the Markov decision process (MDP). In this paper, we make initial progress towards addressing this open problem. We design a feasible average-reward $$Q$$-learning framework that requires no knowledge of any problem parameter as input. Our framework is based on discounted $$Q$$-learning, while we dynamically adapt the discount factor (and hence the effective horizon) to progressively approximate the average reward. In the synchronous setting, we solve three tasks: (i) learn a policy that is $$\epsilon$$-close to optimal, (ii) estimate the optimal average reward with $$\epsilon$$-accuracy, and (iii) estimate the bias function (similar to the $$Q$$-function in the discounted case) with $$\epsilon$$-accuracy. We show that with carefully designed adaptation schemes, (i) can be achieved with $$\tilde{O}(\frac{SA t_{\mathrm{mix}}^{8}}{\epsilon^{8}})$$ samples, (ii) with $$\tilde{O}(\frac{SA t_{\mathrm{mix}}^5}{\epsilon^5})$$ samples, and (iii) with $$\tilde{O}(\frac{SA B}{\epsilon^9})$$ samples, where $$t_{\mathrm{mix}}$$ is the mixing time and $$B > 0$$ is an MDP-dependent constant. To our knowledge, we provide the first finite-sample guarantees that are polynomial in $$S, A, t_{\mathrm{mix}}, \epsilon$$ for a feasible variant of $$Q$$-learning. That said, the sample complexity bounds have tremendous room for improvement, which we leave for the community's best minds. Preliminary simulations verify that our framework is effective without prior knowledge of parameters as input.
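The abstract above centers on running discounted Q-learning while pushing the discount factor toward 1 to approximate the average-reward objective. The snippet below is a minimal hedged sketch of that generic mechanism on a random synthetic MDP in the synchronous setting; the discount and stepsize schedules here are arbitrary placeholders, not the paper's carefully designed adaptation schemes, and the final estimate relies on the standard relation that (1 - gamma) times the discounted value approaches the average reward as gamma tends to 1.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A = 3, 2
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
R = rng.random((S, A))                       # rewards in [0, 1]

Q = np.zeros((S, A))
T = 20000
for t in range(1, T + 1):
    gamma = 1.0 - 1.0 / (1.0 + t) ** 0.5     # placeholder schedule: effective horizon grows with t
    alpha = 1.0 / (1.0 + t) ** 0.6           # Robbins-Monro stepsize
    # Synchronous update: every (s, a) pair draws one sampled next state.
    s_next = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)] for s in range(S)])
    Q += alpha * (R + gamma * Q[s_next].max(axis=-1) - Q)

# (1 - gamma) * V_gamma approximates the long-run average reward as gamma -> 1.
print("estimated average reward:", (1.0 - gamma) * Q.max(axis=1).mean())
```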