In the problem of online portfolio selection as formulated by Cover (1991), the trader repeatedly distributes her capital over d assets in each of T > 1 rounds, with the goal of maximizing the total return. Cover proposed an algorithm, termed Universal Portfolios, that performs nearly as well as the best (in hindsight) static assignment of a portfolio, with an O(d log(T)) regret in terms of the logarithmic return. Without imposing any restrictions on the market, this guarantee is known to be worst-case optimal, and no other algorithm attaining it has been discovered so far. Unfortunately, Cover's algorithm crucially relies on computing a certain d-dimensional integral which must be approximated in any implementation; this results in a prohibitive O(d^4 (T+d)^14) per-round runtime for the fastest known implementation due to Kalai and Vempala (2002). We propose an algorithm for online portfolio selection that admits essentially the same regret guarantee as Universal Portfolios -- up to a constant factor and replacement of log(T) with log(T+d) -- yet has a drastically reduced runtime of Õ(d^2 (T+d)) per round. The selected portfolio minimizes the current logarithmic loss regularized by the log-determinant of its Hessian -- equivalently, the hybrid logarithmic-volumetric barrier of the polytope specified by the asset return vectors. As such, our work reveals surprising connections of online portfolio selection with two classical topics in optimization theory: cutting-plane and interior-point algorithms.
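The following is a minimal Python sketch of the selection rule as stated in the abstract: minimize the cumulative logarithmic loss plus a log-determinant regularizer on its Hessian, over the simplex. The regularization weight gamma, the numerical floor, and the use of a generic constrained optimizer are illustrative assumptions, not the paper's exact algorithm (which attains the stated runtime with specialized machinery).

```python
# Sketch of log-loss minimization with log-det (volumetric) regularization.
import numpy as np
from scipy.optimize import minimize

def select_portfolio(R, gamma=1.0):
    """R: (t, d) array of per-round gross asset returns observed so far."""
    t, d = R.shape

    def objective(x):
        m = R @ x                                    # (t,) portfolio returns
        log_loss = -np.log(m).sum()                  # cumulative logarithmic loss
        # Hessian of the log loss at x: sum_s r_s r_s^T / (r_s . x)^2
        G = R / m[:, None]
        H = G.T @ G
        sign, logdet = np.linalg.slogdet(H + 1e-12 * np.eye(d))
        return log_loss + gamma * logdet             # log-det regularization

    x0 = np.full(d, 1.0 / d)                         # start at the uniform portfolio
    cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]
    bnds = [(1e-9, 1.0)] * d                         # stay inside the simplex
    res = minimize(objective, x0, bounds=bnds, constraints=cons, method="SLSQP")
    return res.x

# Example: three rounds of returns over two assets
R = np.array([[1.02, 0.99], [0.97, 1.05], [1.01, 1.00]])
print(select_portfolio(R))
```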
Portfolio Selection Problem
Given a set of securities or assets, it is of interest to find an optimal way of investing in them. What counts as optimal has to be specified; the aim is then to optimize the return consistent with that objective. When there are several correlated assets, it is unlikely that all of them will increase in value at once, so it is necessary to diversify one's holdings for a secure return. A combination of the assets should therefore be considered, with constraints added as needed. One approach is the Markowitz mean-variance model, in which the portfolio variance is minimized subject to constraints. In this paper neural networks and machine learning are used to extend the ways of dealing with portfolio asset allocation and to solve the portfolio selection problem in an efficient way. The use of heuristic algorithms in this case is imperative; in the past, heuristic methods based mainly on evolutionary algorithms, tabu search and simulated annealing have been developed. The purpose of this paper is to consider a particular neural network model, the Hopfield network, which has been used to solve some other optimisation problems, and to apply it to the portfolio selection problem, comparing the new results to those obtained with previous heuristic algorithms.

Although great success has been achieved for portfolio analysis with the birth of the Markowitz model, the demand for timely decision making has increased significantly in recent years with the advancement of high-frequency trading (HFT), which combines powerful computing servers and the fastest Internet connections to trade at extremely high speeds. This demand poses new challenges to portfolio solvers for real-time processing in the face of time-varying parameters. Neural networks, one of the most powerful machine learning tools, have seen great progress in recent years in financial data analysis and signal processing ([1], [14]). Using computational methods, e.g., machine learning and data analytics, to empower conventional finance is becoming a trend widely adopted in leading investment companies ([3]).
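As a concrete reference point for the Markowitz model mentioned above, here is a minimal Python sketch of long-only mean-variance selection; the expected returns, covariance matrix, and target return are illustrative placeholders, not data from the paper.

```python
# Markowitz mean-variance: minimize portfolio variance subject to a target
# expected return and full investment, with no short sales.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])          # expected asset returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.04],      # return covariance matrix (assumed)
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.09]])
target = 0.10                              # required expected portfolio return

variance = lambda x: x @ Sigma @ x
cons = [{"type": "eq",   "fun": lambda x: x.sum() - 1.0},     # fully invested
        {"type": "ineq", "fun": lambda x: mu @ x - target}]   # hit the target
bnds = [(0.0, 1.0)] * len(mu)                                 # long-only

res = minimize(variance, np.full(len(mu), 1 / 3), bounds=bnds,
               constraints=cons, method="SLSQP")
print("weights:", res.x.round(3), "variance:", round(variance(res.x), 4))
```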
- Award ID(s): 2305470
- PAR ID: 10631279
- Publisher / Repository: Dynamic Publishers Inc.
- Date Published:
- Journal Name: Neural Parallel and Scientific Computations
- Volume: 33
- ISSN: 1061-5369
- Page Range / eLocation ID: 85-100
- Subject(s) / Keyword(s): Portfolio selection, Multiobjective problems, Neural networks.
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In this paper, we study a class of time-inconsistent terminal Markovian control problems in discrete time subject to model uncertainty. We combine the concept of sub-game perfect strategies with the adaptive robust stochastic control method to tackle the theoretical aspects of the considered stochastic control problem. Consequently, as an important application of the theoretical results, and by applying a machine learning algorithm, we solve numerically the mean-variance portfolio selection problem under model uncertainty.
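A toy illustration of the adaptive robust idea in this abstract might look as follows: at each step the investor re-estimates the drift, shrinks its uncertainty interval as data accumulate, and allocates against the worst-case drift in that interval. The dynamics, the interval width, and the myopic equilibrium rule below are all simplifying assumptions for illustration, not the paper's construction.

```python
# Toy adaptive robust mean-variance allocation with a shrinking drift interval.
import numpy as np

rng = np.random.default_rng(0)
T, sigma, lam = 10, 0.2, 2.0             # horizon, volatility, risk aversion
mu_true = 0.05                           # unknown true drift (for simulation)
returns = []

for t in range(T):
    # Adaptive robust step: the drift uncertainty interval shrinks with data.
    if returns:
        mu_hat = float(np.mean(returns))
        half_width = 2 * sigma / np.sqrt(len(returns))
    else:
        mu_hat, half_width = 0.0, 0.5
    # Worst case over the interval: the adversary pushes the drift against
    # the sign of the position the investor would take.
    worst_mu = mu_hat - half_width if mu_hat >= 0 else mu_hat + half_width
    u = worst_mu / (lam * sigma ** 2)    # myopic equilibrium allocation
    r = mu_true + sigma * rng.standard_normal()
    returns.append(r)
    print(f"t={t}: worst-case drift {worst_mu:+.3f}, allocation {u:+.3f}")
```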
-
This paper revisits building machine learning algorithms that involve interactions between entities, such as those between financial assets in an actively managed portfolio, or interactions between users in a social network. Our goal is to forecast the future evolution of ensembles of multivariate time series in such applications (e.g., the future return of a financial asset or the future popularity of a Twitter account). Designing ML algorithms for such systems requires addressing the challenges of high-dimensional interactions and non-linearity. Existing approaches usually adopt an ad-hoc approach to integrating high-dimensional techniques into non-linear models, and recent studies have shown these approaches have questionable efficacy in time-evolving interacting systems. To this end, we propose a novel framework, which we dub the additive influence model. Under our modeling assumption, we show that it is possible to decouple the learning of high-dimensional interactions from the learning of non-linear feature interactions. To learn the high-dimensional interactions, we leverage kernel-based techniques, with provable guarantees, to embed the entities in a low-dimensional latent space. To learn the non-linear feature-response interactions, we generalize prominent machine learning techniques, including designing a new statistically sound non-parametric method and an ensemble learning algorithm optimized for vector regressions. Extensive experiments on two common applications demonstrate that our new algorithms deliver significantly stronger forecasting power compared to standard and recently proposed methods.
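A minimal sketch of the decoupling described in this abstract, under assumed details: entities are first embedded in a low-dimensional latent space computed from a kernel (Gram) matrix of their interactions, and a separate non-linear map is then fit from the embedded features to responses. Random Fourier features with ridge regression stand in here for the paper's non-parametric method; all dimensions and data are illustrative.

```python
# Decoupled learning: spectral embedding of interactions, then a non-linear fit.
import numpy as np

rng = np.random.default_rng(0)
n_entities, T, k = 50, 200, 5                  # entities, time steps, latent dim
X = rng.standard_normal((n_entities, T))       # toy multivariate time series

# Step 1: high-dimensional interactions via a kernel embedding.
K = X @ X.T / T                                # linear-kernel Gram matrix
vals, vecs = np.linalg.eigh(K)                 # eigenvalues in ascending order
Z = vecs[:, -k:] * np.sqrt(np.abs(vals[-k:]))  # top-k spectral embedding

# Step 2: non-linear feature-response map on the embedded coordinates.
W = rng.standard_normal((k, 64))
Phi = np.cos(Z @ W)                            # random non-linear features
y = X[:, -1]                                   # toy response: last observation
alpha = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(64), Phi.T @ y)
print("train RMSE:", np.sqrt(np.mean((Phi @ alpha - y) ** 2)))
```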
-
We revisit Markowitz's mean-variance portfolio selection model by considering a distributionally robust version, in which the region of distributional uncertainty is around the empirical measure and the discrepancy between probability measures is dictated by the Wasserstein distance. We reduce this problem to an empirical variance minimization problem with an additional regularization term. Moreover, we extend the recently developed inference methodology to our setting in order to select the size of the distributional uncertainty as well as the associated robust target return rate in a data-driven way. Finally, we report extensive back-testing results on S&P 500 that compare the performance of our model with those of several well-known models, including the Fama–French and Black–Litterman models.
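A minimal sketch of the reduction described in this abstract: the distributionally robust problem is solved as empirical variance minimization with a norm regularization term whose weight reflects the Wasserstein radius. The exact regularizer form, the radius delta, and the toy data below are assumptions for illustration, not the paper's calibrated specification.

```python
# Wasserstein-robust mean-variance as regularized empirical variance minimization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.standard_normal((250, 4)) * 0.02 + 0.0005   # toy daily returns
mu, Sigma = R.mean(axis=0), np.cov(R.T)
delta = 1e-4                                        # Wasserstein radius (assumed)
target = mu.mean()                                  # feasible target return

def robust_obj(x):
    # empirical risk plus a norm penalty scaled by the uncertainty radius
    return np.sqrt(x @ Sigma @ x) + np.sqrt(delta) * np.linalg.norm(x)

cons = [{"type": "eq",   "fun": lambda x: x.sum() - 1.0},
        {"type": "ineq", "fun": lambda x: mu @ x - target}]
res = minimize(robust_obj, np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4,
               constraints=cons, method="SLSQP")
print("robust weights:", res.x.round(3))
```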