Background and Objective: Higuchi’s method of determining fractal dimension (HFD) occupies a valuable place in the study of a wide variety of physical signals. In comparison to other methods, it provides more rapid, accurate estimations for the entire range of possible fractal dimensions. However, a major difficulty in using the method is the correct choice of the tuning parameter (kmax) needed to compute the most accurate results. In the past, researchers have used various ad hoc methods to determine the appropriate kmax for their particular data. We provide a more objective method of determining, a priori, the best value of the tuning parameter for a data set of a given length. Methods: We generate numerous realizations of fractional Brownian motion and use them in Monte Carlo simulations of the distribution of the calculated HFD. Results: Experimental results show that HFD depends not only on kmax but also on the length of the time series, which enables derivation of an expression for the appropriate kmax for an input time series of unknown fractal dimension. Conclusion: The Higuchi method should not be used indiscriminately, without reference to the type of data whose fractal dimension is being examined. Monte Carlo simulations with different fractional Brownian motions increase the confidence in the evaluation results.
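For concreteness, the sketch below implements the standard Higuchi (1988) curve-length estimator in Python/NumPy; the function name `higuchi_fd`, the log–log least-squares fit, and the white-noise sanity check are illustrative choices, not code from the paper.

```python
import numpy as np

def higuchi_fd(x, kmax):
    """Estimate the Higuchi fractal dimension (HFD) of a 1-D time series.

    Minimal sketch of the standard Higuchi algorithm; kmax is the tuning
    parameter whose choice the abstract above studies.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = np.empty(kmax)
    for idx, k in enumerate(ks):
        lengths = []
        for m in range(k):                        # curve offsets m = 0 .. k-1
            n_max = (n - 1 - m) // k              # number of increments for this offset
            if n_max < 1:
                continue
            idxs = m + np.arange(n_max + 1) * k
            dist = np.abs(np.diff(x[idxs])).sum() # total absolute increment
            norm = (n - 1) / (n_max * k)          # Higuchi normalization factor
            lengths.append(dist * norm / k)
        lk[idx] = np.mean(lengths)
    # HFD is the slope of log L(k) versus log(1/k)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return slope

# Sanity check: white noise should give an HFD near 2
rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(2000), kmax=8))
```

For white noise the estimate should land near 2, and near 1.5 for ordinary Brownian motion; reference values of this kind are what the Monte Carlo study with fractional Brownian motion relies on.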
NEO: NEuro-Inspired Optimization—A Fractional Time Series Approach
Solving optimization problems is a recurrent theme across different fields, including large-scale machine learning systems and deep learning. Often in practical applications, we encounter objective functions where the Hessian is ill-conditioned, which precludes us from using optimization algorithms utilizing second-order information. In this paper, we propose to use fractional time series analysis methods that have successfully been used to model neurophysiological processes in order to circumvent this issue. In particular, the long memory property of fractional time series exhibiting non-exponential power-law decay of trajectories seems to model behavior associated with the local curvature of the objective function at a given point. Specifically, we propose a NEuro-inspired Optimization (NEO) method that leverages this behavior, which contrasts with the short memory characteristics of currently used methods (e.g., gradient descent and heavy-ball). We provide evidence of the efficacy of the proposed method on a wide variety of settings implicitly found in practice.
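The abstract does not spell out the update rule, but the contrast it draws can be illustrated with a generic long-memory gradient step whose weights decay as a power law rather than geometrically; everything below (the function names, the exponent `alpha`, and the test problem) is a hypothetical sketch, not the NEO algorithm itself.

```python
import numpy as np

def power_law_weights(t, alpha):
    """Normalized power-law weights w_j ~ (j+1)^(-alpha) over the last t+1 gradients.

    Illustrative only: a generic long-memory weighting, not the exact NEO rule.
    """
    w = (np.arange(t + 1) + 1.0) ** (-alpha)
    return w / w.sum()

def long_memory_gd(grad, x0, lr=0.1, alpha=0.6, iters=200):
    """Gradient descent whose step aggregates the full gradient history with
    power-law (non-exponential) weights, in contrast to the geometrically
    decaying momentum of heavy-ball methods."""
    x = np.asarray(x0, dtype=float)
    history = []
    for t in range(iters):
        history.append(grad(x))
        w = power_law_weights(t, alpha)
        # the most recent gradient receives the largest weight w[0]
        step = sum(wj * gj for wj, gj in zip(w, reversed(history)))
        x = x - lr * step
    return x

# Ill-conditioned quadratic: f(x) = 0.5 * x^T diag(1, 1e-3) x
grad = lambda x: np.array([1.0, 1e-3]) * x
print(long_memory_gd(grad, x0=[5.0, 5.0]))
```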
- Award ID(s): 1936624
- NSF-PAR ID: 10318758
- Journal Name: Frontiers in Physiology
- ISSN: 1664-042X
- Sponsoring Org: National Science Foundation
More Like this
-
Tasks across diverse application domains can be posed as large-scale optimization problems; these include graphics, vision, machine learning, imaging, health, scheduling, planning, and energy system forecasting. Independently of the application domain, proximal algorithms have emerged as a formal optimization method that successfully solves a wide array of existing problems, often exploiting problem-specific structures in the optimization. Although model-based formal optimization provides a principled approach to problem modeling with convergence guarantees, at first glance this seems to be at odds with black-box deep learning methods. A recent line of work shows that, when combined with learning-based ingredients, model-based optimization methods are effective, interpretable, and allow for generalization to a wide spectrum of applications with little or no extra training data. However, experimenting with such hybrid approaches for different tasks by hand requires domain expertise in both proximal optimization and deep learning, which is often error-prone and time-consuming. Moreover, naively unrolling these iterative methods produces lengthy compute graphs, which, when differentiated via autograd techniques, result in exploding memory consumption, making batch-based training challenging. In this work, we introduce ∇-Prox, a domain-specific modeling language and compiler for large-scale optimization problems using differentiable proximal algorithms. ∇-Prox allows users to specify optimization objective functions of unknowns concisely at a high level, and intelligently compiles the problem into compute- and memory-efficient differentiable solvers. One of the core features of ∇-Prox is its full differentiability, which supports hybrid model- and learning-based solvers integrating proximal optimization with neural network pipelines. Example applications of this methodology include learning-based priors and/or sample-dependent inner-loop optimization schedulers, learned with deep equilibrium learning or deep reinforcement learning. With a few lines of code, we show ∇-Prox can generate performant solvers for a range of image optimization problems, including end-to-end computational optics, image deraining, and compressive magnetic resonance imaging. We also demonstrate that ∇-Prox can be used in a completely orthogonal application domain of energy system planning, an essential task in the energy crisis and the clean energy transition, where it outperforms state-of-the-art CVXPY and commercial Gurobi solvers.
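As a point of reference for the kind of iteration ∇-Prox unrolls and differentiates through, here is a plain NumPy sketch of proximal gradient descent (ISTA) on a LASSO problem; it uses none of the ∇-Prox API, and all names and values in it are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, step, iters=100):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Generic illustration of an unrolled proximal iteration; not the ∇-Prox API.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth data term
        x = soft_threshold(x - step * grad, step * lam)   # proximal (shrinkage) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
print(np.round(ista(A, b, lam=0.1, step=step)[:8], 3))
```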
-
Variable order structures model situations in which the comparison between two points depends on a point-to-cone map. In this paper, inexact projected gradient methods for solving smooth constrained vector optimization problems on variable ordered spaces are presented. It is shown that every accumulation point of the generated sequences satisfies the first-order necessary optimality condition. Moreover, under suitable convexity assumptions for the objective function, it is proved that all accumulation points of any generated sequence are weakly efficient points. The convergence results are also derived in the particular case in which the problem is unconstrained, even if inexact directions are taken as descent directions. Furthermore, we investigate the application of the proposed method to optimization models where the domain of the variable order map coincides with the image of the objective function. In this case, similar concepts and convergence results are presented. Finally, some computational experiments designed to illustrate the behavior of the proposed inexact methods versus the exact ones (in terms of CPU time) are performed.
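A scalar-objective toy version of the constrained, inexact setting can be sketched as follows; the vector-valued objective and the variable ordering cone are omitted, and the box constraint, noise model, and function names are assumptions made purely for illustration.

```python
import numpy as np

def project_box(y, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(y, lo, hi)

def inexact_projected_gradient(grad, x0, lo, hi, lr=0.1, noise=1e-2, iters=300, seed=0):
    """Projected gradient descent in which the search direction is only
    computed inexactly (here: the exact gradient perturbed by small noise)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = grad(x) + noise * rng.standard_normal(x.shape)   # inexact descent direction
        x = project_box(x - lr * d, lo, hi)                   # projected step
    return x

# min 0.5*||x - c||^2 subject to x in [0, 1]^2, with c outside the box
c = np.array([1.5, -0.5])
print(inexact_projected_gradient(lambda x: x - c, x0=[0.5, 0.5], lo=0.0, hi=1.0))
```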
-
A plethora of complex dynamical systems, from disordered media to biological systems, exhibit mathematical characteristics (e.g., long-range dependence, self-similar and power-law magnitude increments) that are well fitted by fractional partial differential equations (PDEs). For instance, some biological systems displaying anomalous diffusion behavior, which is characterized by a non-linear mean-square displacement relation, can be mathematically described by fractional PDEs. In general, the PDEs represent various physical laws or rules governing complex dynamical systems. Since prior knowledge about the mathematical equations describing complex dynamical systems in biology, healthcare, disaster mitigation, transportation, or environmental sciences may not be available, we aim to provide algorithmic strategies to discover the integer or fractional PDEs and their parameters from a system's evolution data. Toward deciphering non-trivial mechanisms driving a complex system, we propose a data-driven approach that estimates the parameters of a fractional PDE model. We study the space-time fractional diffusion model that describes a complex stochastic process, where the magnitude and the time increments are stable processes. Starting from limited time-series data recorded while the system is evolving, we develop a fractional-order moments-based approach to determine the parameters of a generalized fractional PDE. We formulate two optimization problems that allow us to estimate the arguments of the fractional PDE. Employing extensive simulation studies, we show that the proposed approach is effective at retrieving the relevant parameters of the space-time fractional PDE. The presented mathematical approach can be further enhanced and generalized to include additional operators that may help to identify the dominant rule governing the measurements or to determine the degree to which multiple physical laws contribute to the observed dynamics.
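One concrete instance of a moments-based estimate, assuming simulated trajectories are available, is fitting the power-law scaling of a low-order absolute moment of the displacements; the function below is a simplified stand-in for the paper's estimator, and its name and defaults are illustrative.

```python
import numpy as np

def scaling_exponent(trajectories, dt, q=0.5):
    """Estimate the scaling exponent H from fractional-order moments,
    using E[|X(t) - X(0)|^q] ~ t^(q*H) and a log-log least-squares fit.

    Simplified stand-in for the moments-based approach in the abstract
    (which targets the full set of space-time fractional PDE parameters).
    Low-order moments (q < 1) remain finite even for heavy-tailed increments.
    """
    n_steps = trajectories.shape[1]
    lags = np.arange(1, n_steps)
    moments = [np.mean(np.abs(trajectories[:, k] - trajectories[:, 0]) ** q)
               for k in lags]
    slope, _ = np.polyfit(np.log(lags * dt), np.log(moments), 1)
    return slope / q

# Example: ordinary Brownian motion should give an exponent near 0.5
rng = np.random.default_rng(0)
dt = 0.01
paths = np.cumsum(np.sqrt(dt) * rng.standard_normal((500, 200)), axis=1)
paths = np.hstack([np.zeros((500, 1)), paths])   # start every path at 0
print(scaling_exponent(paths, dt))
```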
-
Motivated by gradient methods in optimization theory, we give methods based on ψ-fractional derivatives of order α to solve unconstrained optimization problems. The convergence of these methods is analyzed in detail. This paper also presents an Adams–Bashforth–Moulton (ABM) method for the estimation of solutions to equations involving ψ-fractional derivatives. Numerical examples using the ABM method show that the fractional order α and weight ψ are tunable parameters, which can be helpful for improving the performance of gradient descent methods.
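For reference, the classical Diethelm–Ford–Freed ABM predictor–corrector for a Caputo fractional ODE is sketched below; the ψ-fractional setting of the paper reduces to this case when ψ(t) = t, and the function name and test problem are illustrative, not taken from the paper.

```python
import numpy as np
from math import gamma

def abm_fractional_ode(f, y0, alpha, T, n_steps):
    """Adams–Bashforth–Moulton predictor–corrector for the Caputo fractional ODE
    D^alpha y(t) = f(t, y(t)),  y(0) = y0,  0 < alpha <= 1.

    Classical Diethelm–Ford–Freed scheme; a special case (psi(t) = t) of the
    psi-fractional equations treated in the abstract.
    """
    h = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    y = np.zeros(n_steps + 1)
    fval = np.zeros(n_steps + 1)
    y[0] = y0
    fval[0] = f(t[0], y[0])
    for n in range(n_steps):
        j = np.arange(n + 1)
        # predictor: fractional Adams-Bashforth weights
        b = (h ** alpha / alpha) * ((n + 1 - j) ** alpha - (n - j) ** alpha)
        y_pred = y0 + (b * fval[: n + 1]).sum() / gamma(alpha)
        # corrector: fractional Adams-Moulton weights
        a = np.empty(n + 1)
        a[0] = n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha
        m = n - j[1:]
        a[1:] = (m + 2) ** (alpha + 1) + m ** (alpha + 1) - 2 * (m + 1) ** (alpha + 1)
        y[n + 1] = y0 + (h ** alpha / gamma(alpha + 2)) * (
            f(t[n + 1], y_pred) + (a * fval[: n + 1]).sum()
        )
        fval[n + 1] = f(t[n + 1], y[n + 1])
    return t, y

# Sanity check: alpha = 1 recovers the classical solution y(t) = exp(-t)
t, y = abm_fractional_ode(lambda t, y: -y, y0=1.0, alpha=1.0, T=2.0, n_steps=200)
print(y[-1], np.exp(-2.0))
```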