Convergent cross-mapping (CCM) has recently attracted increased attention due to its ability to detect causality in nonseparable, deterministic systems, a setting that traditional Granger causality may not cover. From an information-theoretic perspective, causality is often characterized as the directed information (DI) flowing from one process to the other. Since information is essentially nondeterministic, a natural question arises: does CCM measure DI flow? Here, we first causalize CCM so that it aligns with the basic presumption in causality analysis, namely that the future values of one process cannot influence the past of the other, and then establish and validate the approximate equivalence of causalized CCM (cCCM) and DI under Gaussian variables through both theoretical derivation and fMRI-based brain network causality analysis. Our simulation results indicate that, in general, cCCM tends to be more robust than DI in causality detection. The underlying argument is that DI relies heavily on probability estimation, which is sensitive to data size as well as to the digitization procedure; cCCM, on the other hand, circumvents this problem through geometric cross-mapping between the manifolds involved. Overall, our analysis demonstrates that cross-mapping provides an alternative way to evaluate DI and is potentially an effective technique for identifying both linear and nonlinear causal coupling in brain neural networks and other settings, whether random, deterministic, or both.
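As a concrete reference point, the sketch below shows the basic geometric cross-mapping step that CCM builds on: a delay embedding of one time series and simplex-style nearest-neighbor weighting used to estimate the other series. It is a minimal illustration assuming numpy and scipy; the function names, parameter defaults, and the simple forward-lag alignment are our own choices, and it does not implement the causalization step (restricting the mapping so future samples cannot be used) that distinguishes cCCM in the abstract above.

```python
import numpy as np
from scipy.spatial import cKDTree

def delay_embed(x, dim, tau):
    """Time-delay embedding: each row is a point on the reconstructed (shadow) manifold."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def ccm_skill(x, y, dim=3, tau=1):
    """Cross-map skill for the hypothesis 'Y drives X' (CCM convention):
    if Y influences X, the shadow manifold of X carries enough information
    to estimate Y via weighted nearest-neighbor averaging."""
    Mx = delay_embed(np.asarray(x, float), dim, tau)
    y_aligned = np.asarray(y, float)[(dim - 1) * tau :]  # align Y with embedded points
    k = dim + 1                                  # simplex projection uses dim+1 neighbors
    tree = cKDTree(Mx)
    dist, idx = tree.query(Mx, k=k + 1)          # +1: the query point itself comes back first
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop the self-match
    w = np.exp(-dist / (dist[:, :1] + 1e-12))    # exponential weights scaled by nearest distance
    w /= w.sum(axis=1, keepdims=True)
    y_hat = (w * y_aligned[idx]).sum(axis=1)     # cross-mapped estimate of Y
    return np.corrcoef(y_hat, y_aligned)[0, 1]   # correlation = cross-map skill
```

In the full method, this skill is evaluated over increasing library sizes and convergence toward a high value is taken as evidence of causal coupling; the causalized variant additionally restricts the neighbor search to past indices only.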
Fisher, Ares; Rao, Rajesh P (PNAS Nexus). Abbott, Derek (Ed.)
Abstract: Human vision, thought, and planning involve parsing and representing objects and scenes using structured representations based on part-whole hierarchies. Computer vision and machine learning researchers have recently sought to emulate this capability using neural networks, but a generative model formulation has been lacking. Generative models that leverage compositionality, recursion, and part-whole hierarchies are thought to underlie human concept learning and the ability to construct and represent flexible mental concepts. We introduce Recursive Neural Programs (RNPs), a neural generative model that addresses the part-whole hierarchy learning problem by modeling images as hierarchical trees of probabilistic sensory-motor programs. These programs recursively reuse learned sensory-motor primitives to model an image within different spatial reference frames, enabling hierarchical composition of objects from parts and implementing a grammar for images. We show that RNPs can learn part-whole hierarchies for a variety of image datasets, allowing rich compositionality and intuitive parts-based explanations of objects. Our model also suggests a cognitive framework for understanding how human brains can potentially learn and represent concepts in terms of recursively defined primitives and their relations with each other.
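To make the part-whole idea concrete, the toy sketch below shows only the structural skeleton of recursive composition within nested reference frames: a primitive is reused by a part, and the part is reused by a whole, with each child drawn relative to its parent's frame. The class and function names are illustrative and nothing here is learned or probabilistic; the actual RNP model replaces these hard-coded placements with probabilistic sensory-motor programs learned by neural networks.

```python
import numpy as np

class Program:
    """A node in a part-whole hierarchy: either a primitive image patch or a
    composite that places sub-programs within its own spatial reference frame."""
    def __init__(self, primitive=None, children=None):
        self.primitive = primitive        # 2D array (a "part"), or None for a composite
        self.children = children or []    # list of (sub_program, (dy, dx)) placements

def render(program, canvas, origin=(0, 0)):
    """Recursively compose an image: every child is drawn relative to its parent's
    frame, so the same primitive can be reused at many positions and depths."""
    oy, ox = origin
    if program.primitive is not None:
        h, w = program.primitive.shape
        canvas[oy:oy + h, ox:ox + w] += program.primitive
        return canvas
    for child, (dy, dx) in program.children:
        render(child, canvas, (oy + dy, ox + dx))
    return canvas

# Usage: one stroke primitive reused by a "part" program, which is itself reused
# twice by a higher-level program -- a two-level part-whole hierarchy.
stroke = np.ones((3, 1))
part = Program(children=[(Program(primitive=stroke), (0, 0)),
                         (Program(primitive=stroke), (0, 3))])
whole = Program(children=[(part, (1, 1)), (part, (6, 1))])
image = render(whole, np.zeros((12, 8)))
```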