In this study, we explore the use of low-rank and sparse constraints for the noninvasive estimation of epicardial and endocardial extracellular potentials from body-surface electrocardiographic data to locate the focus of premature ventricular contractions (PVCs). The proposed strategy formulates the dynamic spatiotemporal distribution of cardiac potentials by means of a low-rank and sparse decomposition, where the low-rank term represents the smooth background and the anomalous potentials are extracted into the sparse matrix. Compared to most previous potential-based approaches, the proposed low-rank and sparse constraints are batch spatiotemporal constraints that capture the underlying relationship of the dynamic potentials. The resulting optimization problem is solved using the alternating direction method of multipliers (ADMM). Three sets of simulation experiments with eight different ventricular pacing sites demonstrate that the proposed model outperforms the existing Tikhonov regularization (zero-order and second-order) and L1-norm-based methods at accurately reconstructing the potentials and locating the ventricular pacing sites. Experiments on a total of 39 cases of real PVC data also validate the ability of the proposed method to correctly locate ectopic pacing sites.
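As a rough numerical companion to the decomposition described above, the sketch below shows a generic robust-PCA-style split of a spatiotemporal potential matrix D into a low-rank background L and a sparse anomaly term S, solved by ADMM. This is only a minimal sketch under simplifying assumptions: the paper's actual model also involves the body-surface-to-heart transfer operator, which is omitted here, and all function names, defaults, and the toy data are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_admm(D, lam=None, rho=1.0, n_iter=200):
    """Split a (nodes x time) matrix D into low-rank L plus sparse S, D ~ L + S, via ADMM."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # common default weight on the sparse term
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / rho, 1.0 / rho)    # low-rank (background) update
        S = soft(D - L + Y / rho, lam / rho)   # sparse (anomaly) update
        Y = Y + rho * (D - L - S)              # dual / multiplier update
    return L, S

# toy usage: smooth low-rank background plus a few localized spikes
rng = np.random.default_rng(0)
D = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 200))
D[10, 50:60] += 25.0
L, S = lowrank_sparse_admm(D)
```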
The typicality principle and its implications for statistics and data science
A central focus of data science is the transformation of empirical evidence into knowledge. As such, the key insights and scientific attitudes of deep thinkers like Fisher, Popper, and Tukey are expected to inspire exciting new advances in machine learning and artificial intelligence in years to come. Along these lines, the present paper advances a novel "typicality principle" which states, roughly, that if the observed data is sufficiently "atypical" in a certain sense relative to a posited theory, then that theory is unwarranted. This emphasis on typicality brings familiar but often overlooked background notions like model-checking to the inferential foreground. One instantiation of the typicality principle is in the context of parameter estimation, where we propose a new typicality-based regularization strategy that leans heavily on goodness-of-fit testing. The effectiveness of this new regularization strategy is illustrated in three non-trivial examples where ordinary maximum likelihood estimation fails miserably. We also demonstrate how the typicality principle fits within a bigger picture of reliable and efficient uncertainty quantification.
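The abstract does not spell out the typicality-based regularization itself, so the following is only a loose, self-contained illustration of the general idea of coupling estimation with goodness-of-fit checking: candidate parameter values under which the observed data looks "atypical" (rejected by a goodness-of-fit test) are excluded, and the estimate is regularized within the surviving "typical" set. The normal location model, the Kolmogorov-Smirnov test, the level alpha, and the shrink-toward-default rule are all assumptions made for this sketch, not the paper's construction.

```python
import numpy as np
from scipy import stats

def typicality_constrained_estimate(x, mu_grid, alpha=0.05, mu_default=0.0):
    """Illustrative only: keep candidate means mu whose fitted N(mu, 1) model is not
    rejected by a KS goodness-of-fit test at level alpha (the 'typical' set),
    then shrink toward a default value within that set."""
    typical = []
    for mu in mu_grid:
        pval = stats.kstest(x, stats.norm(loc=mu, scale=1.0).cdf).pvalue
        if pval >= alpha:                      # data looks typical under this mu
            typical.append(mu)
    if not typical:                            # nothing survives: fall back to the sample mean (MLE)
        return float(np.mean(x))
    # among typical values, pick the one closest to the default (a crude regularizer)
    return float(min(typical, key=lambda mu: abs(mu - mu_default)))

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=50)
mu_hat = typicality_constrained_estimate(x, np.linspace(-2.0, 2.0, 401))
```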
- Award ID(s): 2412629
- PAR ID: 10582696
- Publisher / Repository: arXiv.org
- Format(s): Medium: X
- Institution: arXiv.org
- Sponsoring Org: National Science Foundation
More Like this
-
Phase estimation plays a central role in communications, sensing, and information processing. Quantum-correlated states, such as squeezed states, enable phase estimation beyond the shot-noise limit and, in principle, approach the ultimate quantum limit in precision when paired with optimal quantum measurements. However, physical realizations of optimal quantum measurements for optical phase estimation with quantum-correlated states are still unknown. Here we address this problem by introducing an adaptive Gaussian measurement strategy for optical phase estimation with squeezed vacuum states that, by construction, approaches the quantum limit in precision. This strategy builds from a comprehensive set of locally optimal POVMs through rotations and homodyne measurements and uses the Adaptive Quantum State Estimation framework for optimizing the adaptive measurement process, which, under certain regularity conditions, guarantees asymptotic optimality for this quantum parameter estimation problem. As a result, the adaptive phase estimation strategy based on locally optimal homodyne measurements achieves the quantum limit within a restricted phase interval. Furthermore, we generalize this strategy by including heterodyne measurements, enabling phase estimation across the full range of phases over which squeezed vacuum allows for unambiguous phase encoding. Remarkably, for this phase interval, which is the maximum range of phases that can be encoded in squeezed vacuum, this estimation strategy maintains asymptotically quantum-optimal performance, representing a significant advancement in quantum metrology. (A toy sketch of such an adaptive measurement loop appears after this list.)
-
In distributed second-order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data. However, the local estimates on each machine are typically biased relative to the full solution on all of the data, and this can limit the effectiveness of averaging. Here, we introduce a new technique for debiasing the local estimates, which leads to both theoretical and empirical improvements in the convergence rate of distributed second-order methods. Our technique has two novel components: (1) modifying standard sketching techniques to obtain what we call a surrogate sketch; and (2) carefully scaling the global regularization parameter for local computations. Our surrogate sketches are based on determinantal point processes, a family of distributions for which the bias of an estimate of the inverse Hessian can be computed exactly. Based on this computation, we show that when the objective being minimized is l2-regularized with parameter λ and individual machines are each given a sketch of size m, then to eliminate the bias, local estimates should be computed using a shrunk regularization parameter given by λ′ = λ(1 − dλ/m), where dλ is the λ-effective dimension of the Hessian (or, for quadratic problems, the data matrix). (A short sketch of this rescaling appears after this list.)
-
Discrete ill-posed inverse problems arise in various areas of science and engineering. The presence of noise in the data often makes it difficult to compute an accurate approximate solution. To reduce the sensitivity of the computed solution to the noise, one replaces the original problem by a nearby well-posed minimization problem whose solution is less sensitive to the noise in the data than the solution of the original problem. This replacement is known as regularization. We consider the situation when the minimization problem consists of a fidelity term, defined in terms of a p-norm, and a regularization term, defined in terms of a q-norm, with 0 < p, q ≤ 2. The relative importance of the fidelity and regularization terms is determined by a regularization parameter. This paper develops an automatic strategy for determining the regularization parameter for these minimization problems. The proposed approach is based on a new application of generalized cross validation. Computed examples illustrate the performance of the proposed method. (A sketch of classical generalized cross validation for the p = q = 2 case appears after this list.)
-
This work presents a two-stage adaptive framework for progressively developing deep neural network (DNN) architectures that generalize well for a given training dataset. In the first stage, a layerwise training approach is adopted in which a new layer is added each time and trained independently by freezing the parameters in the previous layers. We impose desirable structures on the DNN by employing manifold regularization, sparsity regularization, and physics-informed terms. We introduce an ε-δ stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields an ε-δ stability-promoting algorithm. We also derive the necessary conditions for the trainability of a newly added layer and investigate the training saturation problem. In the second stage of the algorithm (post-processing), a sequence of shallow networks is employed to extract information from the residual produced in the first stage, thereby improving the prediction accuracy. Numerical investigations on prototype regression and classification problems demonstrate that the proposed approach can outperform fully connected DNNs of the same size. Moreover, by equipping the physics-informed neural network (PINN) with the proposed adaptive architecture strategy to solve partial differential equations, we show numerically that adaptive PINNs are not only superior to standard PINNs but also produce interpretable hidden layers with provable stability. We also apply our architecture design strategy to solve inverse problems governed by elliptic partial differential equations. (A minimal sketch of the layerwise growth step appears after this list.)
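For the adaptive phase-estimation entry above, the following toy sketch shows only the generic adaptive loop: pick a locally optimal homodyne angle from the current estimate, simulate an outcome, and update a grid-based posterior. It assumes a simplified Gaussian quadrature-variance model for squeezed vacuum and makes no claim to reproduce the POVM construction or the optimality guarantees described in that abstract; the squeezing strength, grids, and iteration count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
r, phi_true = 1.0, 0.35                      # squeezing strength and unknown phase (toy values)
phis = np.linspace(0.0, np.pi / 2, 600)      # candidate phases on a grid

def quad_var(theta, phi, r):
    """Toy model: quadrature variance of squeezed vacuum measured at homodyne angle theta."""
    d = theta - phi
    return np.exp(-2 * r) * np.cos(d) ** 2 + np.exp(2 * r) * np.sin(d) ** 2

def fisher(theta, phi, r):
    """Fisher information about phi carried by one Gaussian homodyne outcome at angle theta."""
    dv = (np.exp(2 * r) - np.exp(-2 * r)) * np.sin(2 * (theta - phi))
    return dv ** 2 / (2 * quad_var(theta, phi, r) ** 2)

log_post = np.zeros_like(phis)               # flat prior over the grid
phi_hat = np.pi / 4                          # initial guess
thetas = np.linspace(0.0, np.pi, 360)        # candidate measurement settings
for _ in range(500):
    theta = thetas[np.argmax(fisher(thetas, phi_hat, r))]          # locally optimal setting
    x = rng.normal(0.0, np.sqrt(quad_var(theta, phi_true, r)))     # simulated homodyne outcome
    v = quad_var(theta, phis, r)
    log_post += -0.5 * np.log(v) - x ** 2 / (2 * v)                # grid posterior update
    phi_hat = phis[np.argmax(log_post)]

print(f"estimate {phi_hat:.3f} vs true {phi_true:.3f}")
```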
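For the distributed second-order optimization entry, here is a minimal sketch of the scaled-regularization rule λ′ = λ(1 − dλ/m) quoted in that abstract, assuming the effective dimension is computed from a plain data matrix via its singular values; the surrogate, determinantal-point-process-based sketch itself is not implemented, and the matrix sizes are illustrative.

```python
import numpy as np

def effective_dimension(A, lam):
    """lambda-effective dimension d_lambda = trace(A^T A (A^T A + lam I)^{-1})."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(s**2 / (s**2 + lam)))

def shrunk_lambda(A, lam, m):
    """Scaled regularization for local sketched estimates of size m."""
    d_lam = effective_dimension(A, lam)
    return lam * (1.0 - d_lam / m)

rng = np.random.default_rng(3)
A = rng.standard_normal((2000, 50))
lam, m = 1.0, 200                     # global ridge parameter and per-machine sketch size
lam_local = shrunk_lambda(A, lam, m)  # regularization to use on each machine
```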
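For the regularization-parameter entry, the sketch below shows only the classical generalized cross validation function for the familiar p = q = 2 (Tikhonov) case, evaluated via the SVD; the paper's extension to general p- and q-norms is not reproduced, and the test problem is synthetic.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Classical GCV for min ||Ax - b||_2^2 + lam^2 ||x||_2^2, via the SVD.
    Returns the value in `lambdas` minimizing the GCV function (up to a constant factor)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best_lam, best_g = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                                     # Tikhonov filter factors
        resid = np.sum(((1 - f) * beta) ** 2) + (b @ b - beta @ beta)  # ||A x_lam - b||^2
        g = resid / (m - np.sum(f)) ** 2                               # GCV(lam)
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 40))
x_true = np.zeros(40); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(100)
lam = gcv_tikhonov(A, b, np.logspace(-4, 1, 60))
```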
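For the adaptive-architecture entry, here is a minimal PyTorch-style sketch of the first-stage layerwise growth: add a hidden layer, freeze the earlier ones, and train only the new layer together with a temporary output head. The manifold, sparsity, and physics-informed regularizers and the second-stage residual networks from that abstract are omitted, and all widths and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

def train_layerwise(x, y, widths, epochs=200, lr=1e-2):
    """Grow a network one hidden layer at a time; previously trained layers are frozen
    and only the new layer plus a fresh output head are optimized."""
    layers = []
    in_dim = x.shape[1]
    for width in widths:
        new_layer = nn.Sequential(nn.Linear(in_dim, width), nn.Tanh())
        head = nn.Linear(width, y.shape[1])
        for layer in layers:                       # freeze everything trained so far
            for p in layer.parameters():
                p.requires_grad_(False)
        model = nn.Sequential(*layers, new_layer, head)
        opt = torch.optim.Adam(list(new_layer.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        layers.append(new_layer)                   # keep the trained layer for the next stage
        in_dim = width
    return nn.Sequential(*layers, head)            # final model reuses the last head

x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = torch.sin(3 * x)
model = train_layerwise(x, y, widths=[16, 16, 16])
```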