This content will become publicly available on February 1, 2026
Effect of doping with carbon dots on the alignment and dielectric properties of nematic liquid crystal 4-cyano-4′-pentylbiphenyl in ITO sample cells without conventional alignment layers for low-cost display applications
- Award ID(s):
- 2211347
- PAR ID:
- 10616067
- Editor(s):
- Jelsch, Christian
- Publisher / Repository:
- Elsevier
- Date Published:
- Journal Name:
- Journal of Molecular Structure
- Volume:
- 1321
- Issue:
- P2
- ISSN:
- 0022-2860
- Page Range / eLocation ID:
- 139894
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
A defining feature of three-dimensional hydrodynamic turbulence is that the rate of energy dissipation is bounded away from zero as viscosity is decreased (Reynolds number increased). This phenomenon, anomalous dissipation, is sometimes called the 'zeroth law of turbulence' as it underpins many celebrated theoretical predictions. Another robust feature observed in turbulence is that velocity structure functions S_p(ℓ) := ⟨|δ_ℓ u|^p⟩ exhibit persistent power-law scaling in the inertial range, namely S_p(ℓ) ∼ |ℓ|^{ζ_p} for exponents ζ_p > 0 over an ever increasing (with Reynolds) range of scales. This behaviour indicates that the velocity field retains some fractional differentiability uniformly in the Reynolds number. The Kolmogorov 1941 theory of turbulence predicts that ζ_p = p/3 for all p, and Onsager's 1949 theory establishes the requirement that ζ_p ≤ p/3 for p ≥ 3 for consistency with the zeroth law. Empirically, ζ_2 ⪆ 2/3 and ζ_3 ⪅ 1, suggesting that turbulent Navier–Stokes solutions approximate dissipative weak solutions of the Euler equations possessing (nearly) the minimal degree of singularity required to sustain anomalous dissipation. In this note, we adopt an experimentally supported hypothesis on the anti-alignment of velocity increments with their separation vectors and demonstrate that the inertial dissipation provides a regularization mechanism via the Kolmogorov 4/5-law. This article is part of the theme issue 'Mathematical problems in physical fluid dynamics (part 2)'.
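The scaling relation quoted in the abstract above, S_p(ℓ) ∼ |ℓ|^{ζ_p} with the Kolmogorov prediction ζ_2 = 2/3, can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it synthesizes a periodic 1D signal with a Kolmogorov-like k^(-5/3) energy spectrum and estimates ζ_2 from the log-log slope of the second-order structure function. The grid size, separation range, and fitting window are illustrative choices.

```python
import numpy as np

# Synthetic 1D "velocity" field with energy spectrum E(k) ~ k^(-5/3):
# Fourier amplitudes |u_k| ~ k^(-5/6) with random phases.
rng = np.random.default_rng(0)
N = 2**16
k = np.fft.rfftfreq(N, d=1.0 / N)          # integer wavenumbers 0 .. N/2
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)
phases = np.exp(2j * np.pi * rng.random(k.size))
u = np.fft.irfft(amp * phases, n=N)

# Second-order structure function S_2(l) = <|u(x + l) - u(x)|^2>,
# using periodic shifts since the synthesized field is periodic.
seps = np.unique(np.logspace(0, 3, 20).astype(int))
S2 = np.array([np.mean((np.roll(u, -s) - u) ** 2) for s in seps])

# Log-log slope over this scaling range; K41 predicts zeta_2 = 2/3.
zeta2 = np.polyfit(np.log(seps), np.log(S2), 1)[0]
print(f"estimated zeta_2 = {zeta2:.2f}")
```

The fitted slope comes out close to the K41 value of 2/3, since the synthetic spectrum was built to that exponent; for real turbulence data the measured ζ_2 is slightly above 2/3, as the abstract notes.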
-
Many imitation learning (IL) algorithms use inverse reinforcement learning (IRL) to infer a reward function that aligns with the demonstration. However, the inferred reward functions often fail to capture the underlying task objectives. In this paper, we propose a novel framework for IRL-based IL that prioritizes task alignment over conventional data alignment. Our framework is a semi-supervised approach that leverages expert demonstrations as weak supervision to derive a set of candidate reward functions that align with the task rather than only with the data. It then adopts an adversarial mechanism to train a policy with this set of reward functions to gain a collective validation of the policy's ability to accomplish the task. We provide theoretical insights into this framework's ability to mitigate task-reward misalignment and present a practical implementation. Our experimental results show that our framework outperforms conventional IL baselines in complex and transfer learning scenarios.
