Shared Information for a Markov Chain on a Tree
Abstract—Shared information is a measure of mutual dependence among multiple jointly distributed random variables with finite alphabets. For a Markov chain on a tree with a given joint distribution, we give a new proof of an explicit characterization of shared information. The Markov chain on a tree is shown to possess a global Markov property based on graph separation; this property plays a key role in our proofs. When the underlying joint distribution is not known, we exploit the special form of this characterization to provide a multiarmed bandit algorithm for estimating shared information, and analyze its error performance.
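The paper's explicit tree characterization is not reproduced in this record; as a hedged illustration only, the sketch below evaluates what is commonly taken as the definition of shared information — the minimum over partitions π of the variables with |π| ≥ 2 of [Σ_{A∈π} H(X_A) − H(X_all)] / (|π| − 1) — by brute force for a hypothetical three-variable binary Markov chain on a path (a special case of a tree). The flip probability eps = 0.1 and the chain itself are assumptions for illustration, not the paper's setup.

```python
import math

# Hypothetical toy model (not from the paper): binary Markov chain
# X0 -> X1 -> X2 on a path, each step flipping the value w.p. eps.
eps = 0.1
joint = {}
for x0 in (0, 1):
    for x1 in (0, 1):
        for x2 in (0, 1):
            p = 0.5
            p *= (1 - eps) if x1 == x0 else eps
            p *= (1 - eps) if x2 == x1 else eps
            joint[(x0, x1, x2)] = p

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, coords):
    """Marginalize the joint pmf onto the given coordinate indices."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in coords)
        out[key] = out.get(key, 0.0) + p
    return out

def partitions(items):
    """All set partitions of `items` (as lists of blocks)."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing block, or into a new block
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

H_all = entropy(joint)
si = min(
    (sum(entropy(marginal(joint, block)) for block in part) - H_all)
    / (len(part) - 1)
    for part in partitions([0, 1, 2])
    if len(part) >= 2  # the definition requires at least two blocks
)
```

For this toy chain every minimizing partition yields 1 − h(eps), where h is the binary entropy, so si ≈ 0.531 bits; brute-force enumeration scales super-exponentially in the number of variables, which is precisely why an explicit characterization (as in the paper) is valuable.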
- Award ID(s):
- 1910497
- PAR ID:
- 10486816
- Editor(s):
- Veeravalli, V. V.
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- IEEE Transactions on Information Theory
- Edition / Version:
- 1.0
- Volume:
- 1
- Issue:
- 1
- ISSN:
- 0018-9448
- Page Range / eLocation ID:
- 1-13
- Format(s):
- Medium: X
- Size(s):
- 478KB
- Sponsoring Org:
- National Science Foundation
More Like this
-
Classical distribution testing assumes access to i.i.d. samples from the distribution that is being tested. We initiate the study of Markov chain testing, assuming access to a single trajectory of a Markov chain. In particular, we observe a single trajectory X0,...,Xt,... of an unknown, symmetric, finite-state Markov chain M. We do not control the starting state X0, and we cannot restart the chain. Given our single trajectory, the goal is to test whether M is identical to a model Markov chain M0, or far from it under an appropriate notion of difference. We propose a measure of difference between two Markov chains, motivated by the early work of Kazakos [Kaz78], which captures the scaling behavior of the total variation distance between trajectories sampled from the Markov chains as the length of these trajectories grows. We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if the hitting times of the model chain M0 are O(n) in the size of the state space n.
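The authors' tester and their trajectory-based difference measure are more subtle than a naive plug-in comparison; as a hedged illustration of the single-trajectory setting only, the sketch below simulates one trajectory of a hypothetical symmetric 3-state chain (no restarts), estimates the transition matrix from transition counts, and compares rows to a model chain in total variation. All matrices are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric model chain M0 (not from the paper).
M0 = np.array([[0.6, 0.2, 0.2],
               [0.2, 0.6, 0.2],
               [0.2, 0.2, 0.6]])
n = M0.shape[0]
cum = np.cumsum(M0, axis=1)  # row-wise CDFs for sampling

# Simulate a single trajectory X0, ..., XT from M0; we never restart.
T = 50_000
u = rng.random(T)
traj = np.empty(T + 1, dtype=int)
traj[0] = 0
for t in range(T):
    traj[t + 1] = np.searchsorted(cum[traj[t]], u[t])

# Empirical transition frequencies from the one observed trajectory.
counts = np.zeros((n, n))
np.add.at(counts, (traj[:-1], traj[1:]), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)

def max_row_tv(P, Q):
    """Worst-case total variation distance between corresponding rows."""
    return 0.5 * np.abs(P - Q).sum(axis=1).max()

d_model = max_row_tv(P_hat, M0)  # small: trajectory came from M0

# A mismatched alternative chain for comparison (also hypothetical).
M1 = np.array([[0.4, 0.3, 0.3],
               [0.3, 0.4, 0.3],
               [0.3, 0.3, 0.4]])
d_alt = max_row_tv(P_hat, M1)  # roughly 0.2: far from the trajectory
```

This row-wise comparison works only because the trajectory is long enough to visit every state many times; the hitting-time condition in the abstract plays an analogous role in the authors' sample-complexity bounds.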
-
A well-reasoned model of the data movie in single-molecule localization microscopy (SMLM) is desired. A model of the data movie can be decomposed into a model of the emitter activation process and a model of the data frame. In this paper, we focus on Markov chain modeling and analysis of the emitter activation process for both cycled and continuous illumination. First, a two-phase Markov chain is proposed to model the activation process for a pair of conjugated activator and emitter under cycled illumination. By converting the frame-based Markov chain into several cycle-based Markov chains, the stationary state distribution in the photoactivatable period is derived. Several formulas that characterize the two-phase Markov chain are further obtained. Second, the Markov chain and the analytical results are extended to continuous illumination, where an emitter is excited continuously in all frames. Finally, by incorporating the model of the emitter activation process with our previous model of the data frame, data movies for both cycled and continuous illumination in 3D and 2D imaging are simulated with custom code. It is shown that the model synthesizes data movies well and that the analytical formulas predict the simulation results accurately. The models provide a means of generating well-reasoned data movies for training neural networks and evaluating localization algorithms.
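The paper's two-phase, cycle-based derivation is not reproduced here; as a generic hedged illustration of the kind of stationary-distribution computation involved, the sketch below solves π P = π for a hypothetical 2-state per-frame emitter chain with states {dark, active}, where the emitter activates with probability p per frame and deactivates with probability q. The rates p = 0.05 and q = 0.30 are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

# Hypothetical per-frame activation chain (not the paper's model):
# state 0 = dark, state 1 = active.
p, q = 0.05, 0.30
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1,
# i.e. stack (P^T - I) pi = 0 with the normalization constraint and
# solve the (consistent) overdetermined system by least squares.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For a 2-state chain this recovers the closed form π = (q/(p+q), p/(p+q)), so with the assumed rates the emitter is active in about 1/7 of the frames at stationarity; the same linear-system approach extends to larger state spaces where no closed form is at hand.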