This content will become publicly available on April 11, 2026
Bernoulli-Gaussian Scale Mixture Model and BP Method for Multi-Snapshot Sparse Signal Recovery
We present a general Bernoulli-Gaussian scale mixture approach for modeling priors that can represent a large class of random signals. For inference, we introduce belief propagation (BP) to multi-snapshot signal recovery under the minimum mean square error (MMSE) estimation criterion. Our method relies on intra-snapshot messages that update the signal vector for each snapshot and inter-snapshot messages that share probabilistic information about the common sparsity structure across snapshots. Despite the generality of the model, our BP method efficiently computes accurate approximations of the marginal posterior PDFs. Preliminary numerical results illustrate the faster convergence and improved performance of the proposed method compared to approaches based on sparse Bayesian learning (SBL).
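As a minimal illustration of the prior's scalar building block, the sketch below computes the MMSE estimate of a single coefficient under a Bernoulli-Gaussian prior with a fixed Gaussian scale. It is not the paper's multi-snapshot BP algorithm; the function name and the parameters `p`, `sig_x`, and `sig_w`, as well as the single-scale simplification, are assumptions made here for illustration.

```python
import numpy as np
from scipy.stats import norm

def bg_mmse(y, p=0.1, sig_x=1.0, sig_w=0.3):
    """Scalar MMSE estimate of x from y = x + w under the prior
    x ~ (1 - p) * delta_0 + p * N(0, sig_x^2), noise w ~ N(0, sig_w^2).
    All parameter values are illustrative assumptions."""
    # Likelihood of y under the "active" and "inactive" hypotheses.
    lik_on = norm.pdf(y, scale=np.sqrt(sig_x**2 + sig_w**2))
    lik_off = norm.pdf(y, scale=sig_w)
    # Posterior probability that the coefficient is active.
    pi = p * lik_on / (p * lik_on + (1 - p) * lik_off)
    # Given activity, the conditional mean is Wiener shrinkage of y.
    return pi * sig_x**2 / (sig_x**2 + sig_w**2) * y

# Example: denoise one noisy coefficient observed across four snapshots.
x_hat = bg_mmse(np.array([0.9, 1.1, 0.1, 1.0]))
```

In the multi-snapshot setting described above, the activity probability `pi` would not be computed per snapshot in isolation; the inter-snapshot messages share exactly this kind of support information across snapshots.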
- Award ID(s):
- 2146261
- PAR ID:
- 10564933
- Publisher / Repository:
- IEEE
- Date Published:
- Subject(s) / Keyword(s):
- Sparse signal recovery, belief propagation, MMSE estimation
- Format(s):
- Medium: X
- Location:
- Hyderabad, India
- Sponsoring Org:
- National Science Foundation
More Like this
- Belief propagation (BP) is a classical algorithm that approximates the marginal distributions associated with a factor graph by passing messages between adjacent nodes in the graph. It gained popularity in the 1990s as a powerful decoding algorithm for LDPC codes. In 2016, Renes introduced belief propagation with quantum messages (BPQM) and described how it could be used to decode classical codes defined by tree factor graphs that are sent over the classical-quantum pure-state channel. In this work, we propose an extension of BPQM to general binary-input symmetric classical-quantum (BSCQ) channels based on the implementation of a symmetric "paired measurement". While this new paired-measurement BPQM (PMBPQM) approach is suboptimal in general, it provides a concrete BPQM decoder that can be implemented with local operations. Finally, we demonstrate that density evolution can be used to analyze the performance of PMBPQM on tree factor graphs. As an application, we compute noise thresholds of some LDPC codes with BPQM decoding for a class of BSCQ channels. (A sketch of the classical message-update rules that BPQM generalizes follows after this list.)
- With great potential for being applied to Internet of Things (IoT) applications, the concept of cloud-based Snapshot Real Time Kinematics (SRTK) was proposed, and its feasibility under a zero-baseline configuration was recently confirmed by the authors. This article first introduces the general workflow of the SRTK engine, along with a discussion of the challenges of achieving an SRTK fix using actual snapshot data. This work also describes a novel solution that ensures nanosecond-level absolute timing accuracy in order to compute the highly precise satellite coordinates required for SRTK. Parameters such as signal bandwidth, integration time and baseline distance affect SRTK performance. To characterize this impact, different combinations of these settings are analyzed through experimental tests. The results show that higher signal bandwidths and longer integration times yield higher SRTK fix rates, while the most significant impact on performance comes from the baseline distance. The results also show that the SRTK fix rate can exceed 93% using snapshots with a data size as small as 255 kB. The positioning accuracy is at the centimeter level when phase ambiguities are resolved at a baseline distance less than or equal to 15 km.
- We explore the potential of utilizing distributed acoustic sensing (DAS) for back-projection (BP) to image earthquake rupture processes. Synthetic tests indicate that sensor geometry, azimuthal coverage and velocity model are key factors controlling the quality of DAS-based BP images. We show that mitigation strategies and data processing modifications effectively stabilize the BP image in less optimal scenarios, such as asymmetric geometry, narrow azimuthal coverage and poorly constrained velocity structures. We apply our method to the $M_w$ 7.6 2022 Michoacán earthquake recorded by a DAS array in Mexico City. We also conduct a BP analysis with teleseismic data as a reference. We identify three subevents from the DAS-based BP image, which exhibit a rupture direction consistent with the teleseismic results despite minor differences caused by uncertainties of BP with DAS data. We analyse the sources of the associated uncertainties and propose a transferable analysis scheme for a preliminary assessment of the feasibility of BP with known source–receiver geometries. Our findings demonstrate that integrating DAS recordings into BP can help image earthquake rupture processes over a broad magnitude range at regional distances. It can enhance seismic hazard assessment, especially in regions with limited conventional seismic coverage. (A minimal delay-and-stack back-projection sketch follows after this list.)
- We propose a novel family of connectionist models based on kernel machines and consider the problem of learning, layer by layer, a compositional hypothesis class (i.e., a feedforward, multilayer architecture) in a supervised setting. In terms of the models, we present a principled method to "kernelize" (partly or completely) any neural network (NN). With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons. In terms of learning, when learning a feedforward deep architecture in a supervised setting, one normally needs to train all the components simultaneously using backpropagation (BP), since there are no explicit targets for the hidden layers (Rumelhart, Hinton, & Williams, 1986). We consider, without loss of generality, the two-layer case and present a general framework that explicitly characterizes a target for the hidden layer that is optimal for minimizing the objective function of the network. This characterization then makes possible a purely greedy training scheme that learns one layer at a time, starting from the input layer. We provide instantiations of the abstract framework under certain architectures and objective functions. Based on these instantiations, we present a layer-wise training algorithm for an l-layer feedforward network for classification, where l ≥ 2 can be arbitrary. This algorithm admits an intuitive geometric interpretation that makes the learning dynamics transparent. Empirical results are provided to complement our theory. We show that the kernelized networks, trained layer-wise, compare favorably with classical kernel machines as well as other connectionist models trained by BP. We also visualize the inner workings of the greedy kernelized models to validate our claim on the transparency of the layer-wise algorithm. (A toy greedy kernel stack appears after this list.)
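For the BPQM paper above: the classical sum-product rules on a tree factor graph, which BPQM generalizes to quantum messages, are easy to state in code. This is a minimal sketch of the standard LLR-domain updates for binary codes; it is not the quantum paired-measurement decoder, and the function names are assumptions made here.

```python
import numpy as np

def check_node(in_llrs):
    """Check-node update of classical sum-product decoding:
    combine incoming log-likelihood ratios with the tanh rule."""
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(in_llrs) / 2.0)))

def variable_node(channel_llr, in_llrs):
    """Variable-node update: channel LLR plus incoming check messages."""
    return channel_llr + np.sum(in_llrs)

# On a tree factor graph, sweeping these updates from the leaves to the
# root yields the exact root marginal (toy LLR values shown here).
root_llr = variable_node(0.4, [check_node([1.2, -0.3]),
                               check_node([0.8, 0.5])])
```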
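For the DAS back-projection paper: the core of BP imaging is delay-and-stack beamforming over a grid of candidate sources. The sketch below is a minimal version under strong assumptions (2-D geometry, homogeneous velocity `v`, straight rays); real DAS processing needs travel times from a velocity model plus the stabilization steps discussed in the abstract.

```python
import numpy as np

def back_project(traces, dt, rec_xy, grid_xy, v=3.0):
    """Delay-and-stack back-projection: for each candidate source,
    shift every trace by its travel time (distance / velocity) and
    stack; beam power peaks near the true source. Inputs: traces of
    shape (n_receivers, n_samples), sample interval dt (s), receiver
    and grid coordinates as (x, y) arrays (km), velocity v (km/s)."""
    nrec, nt = traces.shape
    image = np.zeros(len(grid_xy))
    for g, src in enumerate(grid_xy):
        stack = np.zeros(nt)
        for r, rec in enumerate(rec_xy):
            shift = int(round(np.linalg.norm(rec - src) / v / dt))
            if shift < nt:
                stack[:nt - shift] += traces[r, shift:]  # align, then sum
        image[g] = np.max(stack ** 2)  # beam power at this grid point
    return image
```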
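For the layer-wise kernel paper: the sketch below is a deliberately naive greedy stack of kernel ridge "layers", each fit on the previous layer's outputs with the final label as its target. It illustrates training one layer at a time without backpropagation, but it does not implement the paper's optimal hidden-target construction; the kernel choice and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

class GreedyKernelStack:
    """Naive layer-wise stack of kernel machines: each 'layer' is a
    kernel ridge regressor fit greedily on the previous layer's
    outputs, so no backpropagation through layers is needed."""

    def __init__(self, n_layers=2, gamma=0.5, alpha=1e-2):
        self.layers = [KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
                       for _ in range(n_layers)]

    def fit(self, X, y):
        h = X
        for layer in self.layers:
            layer.fit(h, y)                      # greedy: target is y itself
            h = layer.predict(h).reshape(-1, 1)  # output feeds the next layer
        return self

    def predict(self, X):
        h = X
        for layer in self.layers:
            h = layer.predict(h).reshape(-1, 1)
        return h.ravel()
```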