Today’s thermodynamics is largely based on the combined law for equilibrium systems and statistical mechanics, derived by Gibbs in 1873 and 1901, respectively, while irreversible thermodynamics for nonequilibrium systems rests essentially on the Onsager Theorem, developed in the 1930s as a separate branch of thermodynamics. Between them, quantum mechanics was invented and made quantitatively solvable through density functional theory (DFT) in the 1960s. These three scientific domains operate on different principles and remain largely separated from one another. In analogy to the parable of the blind men and the elephant articulated by Perdew, each represents only a portion of a complex system and is thus incomplete on its own, resulting in a lack of quantitative agreement between their predictions and experimental observations. Over the last two decades, the author’s group has developed a multiscale entropy approach (recently termed zentropy theory) that integrates DFT-based quantum mechanics and Gibbs statistical mechanics and is capable of accurately predicting the entropy and free energy of complex systems. Furthermore, in combination with the combined law for nonequilibrium systems presented by Hillert, the author developed the theory of cross phenomena, which goes beyond the phenomenological Onsager Theorem. Together, the zentropy theory and the theory of cross phenomena provide quantitative predictive theories for systems from the electronic scale to any observable scale, as reviewed in the present work.
Entropy and Information are key concepts not only in Information Theory but also in Physics: historically in the fields of Thermodynamics, Statistical and Analytical Mechanics, and, more recently, in the field of Information Physics. In this paper we argue that Information Physics reconciles and generalizes statistical, geometric, and mechanistic views on information. We start by demonstrating how the use and interpretation of Entropy and Information coincide in Information Theory, Statistical Thermodynamics, and Analytical Mechanics, and how this can be taken advantage of when addressing Earth Science problems in general and hydrological problems in particular. In the second part we discuss how Information Physics provides ways to quantify Information and Entropy from fundamental physical principles. This extends their use to cases where the preconditions to calculate Entropy in the classical manner as an aggregate statistical measure are not met. Indeed, these preconditions are rarely met in the Earth Sciences due either to limited observations or the far‐from‐equilibrium nature of evolving systems. Information Physics therefore offers new opportunities for improving the treatment of Earth Science problems.
NSF PAR ID: 10376026
Publisher / Repository: DOI prefix 10.1029
Journal Name: Water Resources Research
Volume: 56
Issue: 2
ISSN: 0043-1397
Sponsoring Org: National Science Foundation

Abstract 
The basis for all knowledge is “information” that we compile about the world, expressed through models that support understanding, prediction, and decision making. This overview paper provides a contextual basis for the four papers that make up the “debate series” compiled under the above title. We briefly introduce Information Theory, discuss how “information” can be considered to be both a “physical” quantity and a “probabilistic” basis for representing incompleteness in knowledge, discuss the core motivation for this debate series, and briefly summarize the major arguments advanced by each of the debate papers. Our purpose is to facilitate an understanding of how these papers are related and how they approach the debate series question from different perspectives, while pointing to future directions for research. Finally, we invite further discourse and debate to advance the understanding and prediction of natural system dynamics using Information Theory, including the assessment of its limitations and complementarity to existing physics and machine learning approaches. Ultimately, our goal is to press for the development of philosophical and methodological advances that will enable the Earth science community to address some of the compelling unsolved problems in our field.
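The information-theoretic quantity underlying this debate is Shannon entropy, which measures incompleteness of knowledge as the average surprise of a probability distribution. A minimal sketch (the function name and example distributions are illustrative, not from the paper):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits.

    Zero-probability outcomes contribute nothing (lim p->0 of p*log p = 0).
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain for two outcomes: 1 bit per toss.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A certain outcome carries no information.
print(shannon_entropy([1.0]))        # 0.0
```

Note that this "classical manner" of computing entropy presupposes a known probability distribution over outcomes, which is exactly the precondition the abstract says is rarely met in Earth Science applications.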

Thermodynamic systems typically conserve quantities (known as charges) such as energy and particle number. The charges are often implicitly assumed to commute with each other. Yet quantum phenomena such as uncertainty relations rely on the failure of observables to commute. How do noncommuting charges affect thermodynamic phenomena? This question, which arose at the intersection of quantum information theory and thermodynamics, has recently spread across many-body physics. Noncommutation of charges has been found to invalidate derivations of the form of the thermal state, decrease entropy production, conflict with the eigenstate thermalization hypothesis, and more. This Perspective surveys key results in, opportunities for, and work adjacent to the quantum thermodynamics of noncommuting charges. Open problems include a conceptual puzzle: evidence suggests that noncommuting charges may hinder thermalization in some ways while enhancing it in others.
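The failure to commute can be made concrete with the standard textbook example of two spin components as stand-in charges. A minimal sketch (the Pauli matrices σx and σz here are an illustration of noncommuting observables, not the specific charges studied in the Perspective):

```python
# Two "charges" modeled as the Pauli matrices sigma_x and sigma_z (2x2, real).
def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sx = [[0, 1], [1, 0]]    # sigma_x
sz = [[1, 0], [0, -1]]   # sigma_z

# Commutator [sx, sz] = sx*sz - sz*sx; a nonzero result means the
# two conserved quantities cannot be measured simultaneously.
ab = matmul2(sx, sz)
ba = matmul2(sz, sx)
comm = [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]
print(comm)  # [[0, -2], [2, 0]] -- nonzero, so the charges do not commute
```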

Abstract Statistical thermodynamics is valuable as a conceptual structure that shapes our thinking about equilibrium thermodynamic states. A cloud of unresolved questions surrounding the foundations of the theory, however, could lead an impartial observer to conclude that statistical thermodynamics is in a state of crisis. Indeed, the discussion about the microscopic origins of irreversibility has continued in the scientific community for more than a hundred years. This paper considers these questions while beginning to develop a statistical thermodynamics for finite nonequilibrium systems. Definitions are proposed for all of the extrinsic variables of the fundamental thermodynamic relation that are consistent with existing results in the equilibrium thermodynamic limit. The probability density function on the phase space is interpreted as a subjective uncertainty about the microstate, and the Gibbs entropy formula is modified to allow for entropy creation without introducing additional physics or modifying the phase space dynamics. Resolutions are proposed to the mixing paradox, Gibbs’ paradox, Loschmidt’s paradox, and Maxwell’s demon thought experiment. Finally, the extrinsic variables of the fundamental thermodynamic relation are evaluated as functions of time and space for a diffusing ideal gas, and the initial and final values are shown to coincide with the expected equilibrium values.
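The unmodified Gibbs entropy formula that the paper starts from is S = -k_B Σ p ln p over microstate probabilities. A minimal sketch showing why some modification is needed for entropy creation: the formula is concave, so coarse mixing of sharp distributions raises S, yet S is constant under the exact Hamiltonian dynamics (the function and example distributions below are illustrative, not the paper's construction):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def gibbs_entropy(probs, k=KB):
    """Gibbs entropy S = -k * sum(p * ln p) over microstate probabilities."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# A sharply known microstate has zero entropy; averaging two sharp
# distributions gives a broader one with S = k ln 2 > 0. Concavity of S
# is the statistical face of irreversible mixing.
p1 = [1.0, 0.0]
p2 = [0.0, 1.0]
mix = [0.5 * (a + b) for a, b in zip(p1, p2)]
print(gibbs_entropy(p1))   # 0.0
print(gibbs_entropy(mix))  # k_B * ln 2
```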

Abstract We develop and demonstrate a new interpretable deep learning model specifically designed for image analysis in Earth system science applications. The neural network is designed to be inherently interpretable, rather than explained via post hoc methods. This is achieved by training the network to identify parts of training images that act as prototypes for correctly classifying unseen images. The new network architecture extends the interpretable prototype architecture of a previous study in computer science to incorporate absolute location. This is useful for Earth system science, where images are typically the result of physics-based processes and the information is often geolocated. Although the network is constrained to only learn via similarities to a small number of learned prototypes, it can be trained to exhibit only a minimal reduction in accuracy relative to non-interpretable architectures. We apply the new model to two Earth science use cases: a synthetic dataset that loosely represents atmospheric high and low pressure systems, and atmospheric reanalysis fields to identify the state of tropical convective activity associated with the Madden–Julian oscillation. In both cases, we demonstrate that considering absolute location greatly improves testing accuracies when compared with a location-agnostic method. Furthermore, the network architecture identifies specific historical dates that capture multivariate, prototypical behavior of tropical climate variability.
Significance Statement Machine learning models are incredibly powerful predictors but are often opaque “black boxes.” The how and why of a model’s predictions are inscrutable; the model is not interpretable. We introduce a new machine learning model specifically designed for image analysis in Earth system science applications. The model is designed to be inherently interpretable and extends previous work in computer science to incorporate location information. This is important because images in Earth system science are typically the result of physics-based processes, and the information is often map-based. We demonstrate its use for two Earth science use cases and show that the interpretable network exhibits only a small reduction in accuracy relative to black-box models.