This paper introduces a new 2D representation of the orientation distribution function for an arbitrary material texture. The approach is based on the isometric square torus mapping of the Clifford torus, which allows points on the unit quaternion hypersphere (each corresponding to a 3D orientation) to be represented in a periodic 2D square map. The combination of three such orthogonal mappings into a single RGB (red–green–blue) image provides a compact periodic representation of any set of orientations. Square torus representations of five different orientation sampling methods are compared and analyzed in terms of the Riesz energies that quantify the uniformity of the samplings. The effect of crystallographic symmetry on the square torus map is analyzed in terms of the Rodrigues fundamental zones for the rotational symmetry groups. The paper concludes with example representations of important texture components in cubic and hexagonal materials. The new RGB representation provides a convenient and compact way of generating training data for the automated analysis of material textures by means of neural networks.
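To make the construction concrete, the sketch below is a minimal illustration of such a mapping: it projects a set of unit quaternions onto three square torus density maps, one per orthogonal component pairing, and stacks them as RGB channels. The specific pairings, histogram resolution, and per-channel normalization are assumptions chosen for illustration, not the paper's exact conventions.

```python
import numpy as np

def square_torus_angles(q, pair=((0, 1), (2, 3))):
    """Project unit quaternions (N, 4) onto a square torus.

    Each of the two torus angles comes from one pair of quaternion
    components; this pairing is illustrative, not the paper's exact
    convention.
    """
    (i, j), (k, l) = pair
    phi1 = np.arctan2(q[:, j], q[:, i])  # first torus angle, in [-pi, pi)
    phi2 = np.arctan2(q[:, l], q[:, k])  # second torus angle, in [-pi, pi)
    return phi1, phi2

def rgb_square_torus_map(q, bins=256):
    """Stack three orthogonal square torus density maps as an RGB image."""
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    channels = []
    for pair in pairings:
        phi1, phi2 = square_torus_angles(q, pair)
        hist, _, _ = np.histogram2d(
            phi1, phi2, bins=bins,
            range=[[-np.pi, np.pi], [-np.pi, np.pi]])
        channels.append(hist / hist.max())  # normalize channel to [0, 1]
    return np.stack(channels, axis=-1)      # (bins, bins, 3) image

# Example: 10,000 orientations drawn uniformly on the quaternion sphere S^3;
# a uniform sampling should produce a roughly featureless map.
rng = np.random.default_rng(0)
q = rng.normal(size=(10_000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)
img = rgb_square_torus_map(q)
```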
Applications of the Clifford torus texture representation to disorientations in single and multi-phase materials
In this paper we apply the concept of the Clifford torus and the derived square torus maps to the study of disorientations in microstructures. First, we interpret the Clifford torus in terms of the more commonly used orientation representations (Rodrigues-Frank vectors, 3D stereographic vectors, and homochoric vectors) and show representations of the torus in those spaces. This leads to a simple graphical interpretation of the generation and meaning of the square torus maps. Then we apply this approach to the study of disorientations in polycrystalline materials (coincident site lattice (CSL) boundaries in grain-boundary-engineered nickel) as well as to intervariant boundaries in martensitic and bainitic steels. We show that pre-computed theoretical square torus maps can be used to determine population fractions of specific boundaries.
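As a rough sketch of the disorientation step that feeds such maps, the code below computes the cubic-symmetry disorientation angle between two orientations using SciPy's built-in rotation groups; the subsequent projection onto the square torus is omitted, and the choice of cubic (432) symmetry is illustrative rather than the paper's full treatment.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# The 24 rotational symmetry operators of the cubic (432) point group
sym = Rotation.create_group("O")

def disorientation_angle(ra, rb):
    """Smallest misorientation angle between two orientations under cubic
    symmetry. Minimizing over one-sided products s * rb suffices for the
    angle, since trace(s1 * m * s2) = trace(m * s2 * s1) sweeps the same
    set of values as the two-sided equivalents.
    """
    mis = ra.inv() * (sym * rb)   # 24 symmetry-equivalent misorientations
    return mis.magnitude().min()  # rotation angle in radians

# Example: disorientation between two random grain orientations
pair = Rotation.random(2, random_state=0)
theta = disorientation_angle(pair[0], pair[1])
print(np.degrees(theta))  # at most ~62.8 degrees for cubic symmetry
```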
- Award ID(s): 2203378
- PAR ID: 10600893
- Publisher / Repository: Elsevier BV
- Date Published:
- Journal Name: Materials Characterization
- Volume: 224
- Issue: C
- ISSN: 1044-5803
- Page Range / eLocation ID: 114982
- Subject(s) / Keyword(s): Crystallographic texture; Disorientation; Quaternion representation; Clifford torus; Texture visualization
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Many daily activities and psychophysical experiments involve keeping multiple items in working memory. When items take continuous values (e.g., orientation, contrast, length, loudness) they must be stored in a continuous structure of appropriate dimensions. We investigate how this structure is represented in neural circuits by training recurrent neural networks (RNNs) to report two previously shown stimulus orientations. We find the activity manifold for the two orientations resembles a Clifford torus. Although a Clifford and standard torus (the surface of a donut) are topologically equivalent, they have important functional differences. A Clifford torus treats the two orientations equally and keeps them in orthogonal subspaces, as demanded by the task, whereas a standard torus does not. We find and characterize the connectivity patterns that support the Clifford torus. Moreover, in addition to attractors that store information via persistent activity, our networks also use a dynamic code where units change their tuning to prevent new sensory input from overwriting the previously stored one. We argue that such dynamic codes are generally required whenever multiple inputs enter a memory system via shared connections. Finally, we apply our framework to a human psychophysics experiment in which subjects reported two remembered orientations. By varying the training conditions of the RNNs, we test and support the hypothesis that human behavior is a product of both neural noise and reliance on the more stable and behaviorally relevant memory of the ordinal relationship between the two orientations. This suggests that suitable inductive biases in RNNs are important for uncovering how the human brain implements working memory. Together, these results offer an understanding of the neural computations underlying a class of visual decoding tasks, bridging the scales from human behavior to synaptic connectivity.
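The functional difference the authors describe can be seen directly in the two embeddings. The sketch below gives illustrative parameterizations (not the trained networks' actual activity): on the Clifford torus each stored angle occupies its own orthogonal 2D subspace with equal radius, whereas on a standard torus the second angle's subspace is carried around by the first.

```python
import numpy as np

def clifford_torus(theta1, theta2):
    """Embed two remembered angles on the Clifford torus in R^4: each
    angle lives in its own orthogonal 2D subspace, and both circles
    have the same radius, so the two items are treated equally."""
    r = 1 / np.sqrt(2)
    return np.stack([r * np.cos(theta1), r * np.sin(theta1),
                     r * np.cos(theta2), r * np.sin(theta2)], axis=-1)

def standard_torus(theta1, theta2, R=2.0, a=0.5):
    """Embed the same angles on a standard (donut) torus in R^3:
    theta2's circle is dragged around by theta1, so the two angles are
    entangled in the embedding rather than kept in orthogonal subspaces."""
    return np.stack([(R + a * np.cos(theta2)) * np.cos(theta1),
                     (R + a * np.cos(theta2)) * np.sin(theta1),
                     a * np.sin(theta2)], axis=-1)
```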
Chaudhuri, Kamalika; Jegelka, Stefanie; Song, Le; Szepesvari, Csaba; Niu, Gang; Sabato, Sivan (Eds.) Recent work has found that adversarially-robust deep networks used for image classification are more interpretable: their feature attributions tend to be sharper, and are more concentrated on the objects associated with the image’s ground-truth class. We show that smooth decision boundaries play an important role in this enhanced interpretability, as the model’s input gradients around data points will more closely align with boundaries’ normal vectors when they are smooth. Thus, because robust models have smoother boundaries, the results of gradient-based attribution methods, like Integrated Gradients and DeepLift, will capture more accurate information about nearby decision boundaries. This understanding of robust interpretability leads to our second contribution: boundary attributions, which aggregate information about the normal vectors of local decision boundaries to explain a classification outcome. We show that by leveraging the key factors underpinning robust interpretability, boundary attributions produce sharper, more concentrated visual explanations, even on non-robust models.
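As a generic illustration of the quantity involved (not the paper's boundary-attribution aggregation itself), the sketch below computes the input gradient of a target logit; for models with smooth boundaries this direction tends to align with the normal vector of the nearby decision boundary.

```python
import torch

def input_gradient_attribution(model, x, target_class):
    """Gradient of the target-class logit with respect to the input.
    For smooth (e.g., adversarially robust) models this direction tends
    to align with the normal of the nearest decision boundary."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                        # (batch, num_classes)
    logits[:, target_class].sum().backward()
    return x.grad                            # same shape as x
```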
Recent work in interpretability shows that large language models (LLMs) can be adapted for new tasks in a learning-free way: it is possible to intervene on LLM representations to elicit desired behaviors for alignment. For instance, adding certain bias vectors to the outputs of certain attention heads is reported to boost the truthfulness of models. In this work, we show that localized fine-tuning serves as an effective alternative to such representation intervention methods. We introduce a framework called Localized Fine-Tuning on LLM Representations (LoFiT), which identifies a subset of attention heads that are most important for learning a specific task, then trains offset vectors to add to the model's hidden representations at those selected heads. LoFiT localizes to a sparse set of heads (3%-10%) and learns the offset vectors from limited training data, comparable to the settings used for representation intervention. For truthfulness and reasoning tasks, we find that LoFiT's intervention vectors are more effective for LLM adaptation than vectors from representation intervention methods such as Inference-time Intervention. We also find that the localization step is important: selecting a task-specific set of attention heads can lead to higher performance than intervening on heads selected for a different task. Finally, across 7 tasks we study, LoFiT achieves comparable performance to other parameter-efficient fine-tuning methods such as LoRA, despite modifying 20x-200x fewer parameters than these methods.
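A hypothetical minimal sketch of the core mechanism, with illustrative names and wiring rather than the paper's implementation: a single trainable offset vector is added to one selected head's output while all pretrained weights stay frozen.

```python
import torch

class HeadOffset(torch.nn.Module):
    """Hypothetical sketch of the LoFiT idea: a learned bias added to
    the output of one selected attention head; only this offset vector
    is trained, the pretrained model is frozen."""

    def __init__(self, head_dim):
        super().__init__()
        self.offset = torch.nn.Parameter(torch.zeros(head_dim))

    def forward(self, head_output):  # (batch, seq, head_dim)
        return head_output + self.offset
```

In practice one such module would be attached at each of the small set of selected heads, and only the offsets optimized on the task data.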
Various realizations of Kitaev’s surface code perform surprisingly well for biased Pauli noise. Attracted by these potential gains, we study the performance of Clifford-deformed surface codes (CDSCs) obtained from the surface code by the application of single-qubit Clifford operators. We first analyze CDSCs on the 3×3 square lattice and find that, depending on the noise bias, their logical error rates can differ by orders of magnitude. To explain the observed behavior, we introduce the effective distance d′, which reduces to the standard distance for unbiased noise. To study CDSC performance in the thermodynamic limit, we focus on random CDSCs. Using the statistical mechanical mapping for quantum codes, we uncover a phase diagram that describes random CDSC families with 50% threshold at infinite bias. In the high-threshold region, we further demonstrate that typical code realizations outperform the thresholds and subthreshold logical error rates, at finite bias, of the best-known translationally invariant codes. We demonstrate the practical relevance of these random CDSC families by constructing a translation-invariant CDSC belonging to a high-performance random CDSC family. We also show that our translation-invariant CDSC outperforms well-known translation-invariant CDSCs, such as the XZZX and XY codes.
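A simplified illustration of what a Clifford deformation does at the stabilizer level (ignoring Pauli phases, e.g. the sign Y picks up under conjugation): a single-qubit Hadamard on chosen qubits relabels X and Z in every stabilizer string. The helper below is hypothetical.

```python
# Pauli relabeling under conjugation by a Hadamard (phases ignored;
# Y actually maps to -Y, but the sign does not affect the code structure)
H_MAP = {"I": "I", "X": "Z", "Y": "Y", "Z": "X"}

def deform(stabilizer, qubits):
    """Apply a Hadamard deformation on the given qubit indices of a
    stabilizer written as a Pauli string."""
    return "".join(H_MAP[p] if i in qubits else p
                   for i, p in enumerate(stabilizer))

# Deforming alternating qubits of a surface-code plaquette in this way
# is how the XZZX code is obtained from the standard surface code.
print(deform("XXXX", {1, 3}))  # -> 'XZXZ'
```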