Search results: All records where Creators/Authors contains "Miolane, Nina"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

  1. Image datasets in specialized fields of science, such as biomedicine, are typically smaller than traditional machine learning datasets, which makes training many models difficult. To address this challenge, researchers often incorporate priors, i.e., external knowledge, to help the learning procedure. Geometric priors, for example, restrict the learning process to the manifold on which the data lie. However, learning on manifolds is sometimes computationally intensive to the point of being prohibitive. Here, we ask a provocative question: is machine learning on manifolds really so much more accurate than its linear counterpart that it justifies sacrificing a significant computational speedup? We answer this question through an extensive theoretical and experimental study of one of the most common learning methods for manifold-valued data: geodesic regression.
    Free, publicly-accessible full text available June 30, 2025
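As a concrete illustration of the trade-off discussed in this abstract, the sketch below implements the "linear counterpart" in its simplest form for data on the unit sphere: map the observations to the tangent space at their Fréchet mean with the Log map, run ordinary least squares there, and map predictions back with the Exp map. This is a minimal NumPy sketch written for this listing, not code from the paper; the toy data, the choice of the 2-sphere, and all function names are assumptions made here.

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: walk along the geodesic from p in direction v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v

def sphere_log(p, q):
    """Logarithm map on the unit sphere: the tangent vector at p pointing toward q."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros_like(p)
    return theta / np.sin(theta) * (q - cos_theta * p)

def frechet_mean(points, n_iter=50):
    """Riemannian (Fréchet) mean of points on the sphere by fixed-point iteration."""
    mean = points[0]
    for _ in range(n_iter):
        tangent = np.mean([sphere_log(mean, q) for q in points], axis=0)
        mean = sphere_exp(mean, tangent)
    return mean

def linearized_regression(t, points):
    """'Linear counterpart' of geodesic regression: map the data to the tangent
    space at the Fréchet mean, run ordinary least squares there, map back with Exp."""
    mean = frechet_mean(points)
    V = np.stack([sphere_log(mean, q) for q in points])   # (n, 3) tangent vectors
    A = np.column_stack([np.ones_like(t), t])             # design matrix [1, t]
    coeffs, *_ = np.linalg.lstsq(A, V, rcond=None)        # rows: intercept, slope
    return lambda s: sphere_exp(mean, coeffs[0] + s * coeffs[1])

# Toy data: noisy points drifting along a geodesic of the 2-sphere as t grows.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
base, direction = np.array([0.0, 0.0, 1.0]), np.array([0.8, 0.0, 0.0])
points = []
for s in t:
    v = s * direction + 0.05 * rng.normal(size=3)
    v -= np.dot(v, base) * base        # project the noisy step onto the tangent space at base
    points.append(sphere_exp(base, v))
points = np.stack(points)

predict = linearized_regression(t, points)
print(predict(0.5))                    # predicted point on the sphere at t = 0.5
```

Full geodesic regression would instead optimize the base point and tangent vector jointly against the Riemannian sum of squared residuals, which is what makes it costlier than this linearized shortcut.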
  2. We introduce the shape module of the Python package Geomstats to analyze shapes of objects represented as landmarks, curves and surfaces across fields of natural sciences and engineering. The shape module first implements widely used shape spaces, such as the Kendall shape space, as well as elastic spaces of discrete curves and surfaces. The shape module further implements the abstract mathematical structures of group actions, fiber bundles, quotient spaces and associated Riemannian metrics which allow users to build their own shape spaces. The Riemannian geometry tools enable users to compare, average, and interpolate between shapes inside a given shape space. These essential operations can then be leveraged to perform statistics and machine learning on shape data. We present the object-oriented implementation of the shape module along with illustrative examples and show how it can be used to perform statistics and machine learning on shape spaces.
    Free, publicly-accessible full text available June 14, 2025
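For the simplest objects the module covers, landmark configurations, the underlying geometry can be sketched by hand: project each configuration onto the Kendall pre-shape sphere by centering and rescaling, then quotient out rigid motions (here, rotations and reflections, for brevity) by Procrustes alignment to obtain a shape distance. The NumPy sketch below is an independent illustration of that computation written for this listing, not the Geomstats shape-module API, and the triangle data are made up.

```python
import numpy as np

def to_preshape(x):
    """Project a landmark configuration (k, m) onto the Kendall pre-shape sphere:
    remove translation by centering, remove scale by normalizing to unit Frobenius norm."""
    x = x - x.mean(axis=0)
    return x / np.linalg.norm(x)

def shape_distance(x, y):
    """Geodesic shape distance between two pre-shapes, quotienting out
    orthogonal transformations (rotations and reflections) via Procrustes alignment."""
    a = y.T @ x                                    # (m, m) cross-covariance
    singular_values = np.linalg.svd(a, compute_uv=False)
    cos_rho = np.clip(singular_values.sum(), -1.0, 1.0)
    return np.arccos(cos_rho)                      # distance in the quotient space

# Two triangles in the plane: one is a rotated, scaled, translated copy of the other,
# so their shape distance should be (numerically) zero.
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
angle = 0.7
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
same_shape = 3.0 * triangle @ rotation.T + np.array([5.0, -2.0])
other = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.1]])   # a much flatter triangle

print(shape_distance(to_preshape(triangle), to_preshape(same_shape)))  # ~0.0
print(shape_distance(to_preshape(triangle), to_preshape(other)))       # > 0
```

In practice one would rely on the package's shape spaces and Riemannian metrics rather than re-deriving these operations by hand.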
  3. We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks (G-CNNs), which we call the G-triple-correlation (G-TC) layer. The approach leverages the theory of the triple correlation on groups, which is the unique lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps, such as the max, are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the G-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max G-Pooling in G-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for G-CNNs defined on both commutative and non-commutative groups (SO(2), O(2), SO(3), and O(3), discretized as the cyclic C8, dihedral D16, chiral octahedral O, and full octahedral Oh groups) acting on ℝ² and ℝ³, on both the G-MNIST and G-ModelNet10 datasets.
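Since the abstract notes that the implementation needs only a table defining the group's product structure, the triple correlation itself is straightforward to write down from that table. The following NumPy sketch, written for this listing rather than taken from the authors' code, computes the triple correlation of a signal on the cyclic group C8 and checks its invariance under translation of the signal by a group element.

```python
import numpy as np

def triple_correlation(f, product_table):
    """Triple correlation of a signal f on a finite group G:
    T[g1, g2] = sum_g f(g) * f(g·g1) * f(g·g2), where product_table[i, j]
    gives the index of the product of group elements i and j."""
    n = len(f)
    T = np.zeros((n, n))
    for g1 in range(n):
        for g2 in range(n):
            T[g1, g2] = sum(f[g] * f[product_table[g, g1]] * f[product_table[g, g2]]
                            for g in range(n))
    return T

# Product table of the cyclic group C8: element i·j has index (i + j) mod 8.
n = 8
product_table = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n

rng = np.random.default_rng(0)
f = rng.normal(size=n)                      # a signal on C8
h = 3                                       # a group element acting on the signal
f_translated = f[product_table[h]]          # f'(g) = f(h·g)

T = triple_correlation(f, product_table)
T_translated = triple_correlation(f_translated, product_table)
print(np.allclose(T, T_translated))         # True: the triple correlation is G-invariant
```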
  4. Free, publicly-accessible full text available July 21, 2025
  5. Free, publicly-accessible full text available July 21, 2025
  6. A fundamental principle of neural representation is to minimize wiring length by spatially organizing neurons according to the frequency of their communication [Sterling and Laughlin, 2015]. A consequence is that nearby regions of the brain tend to represent similar content. This has been explored in the context of the visual cortex in recent works [Doshi and Konkle, 2023, Tong et al., 2023]. Here, we use the notion of cortical distance as a baseline to ground, evaluate, and interpret measures of representational distance. We compare several popular methods, both second-order methods (Representational Similarity Analysis, Centered Kernel Alignment) and first-order methods (Shape Metrics), and calculate how well the representational distance reflects 2D anatomical distance along the visual cortex (the anatomical stress score). We evaluate these metrics on a large-scale fMRI dataset of human ventral visual cortex [Allen et al., 2022b] and observe that the three types of Shape Metrics produce representational-anatomical stress scores with the smallest variance across subjects (Z score = -1.5), which suggests that first-order representational scores quantify the relationship between representational and cortical geometry in a way that is more invariant across subjects. Our work establishes a criterion with which to compare methods for quantifying representational similarity, with implications for studying the anatomical organization of high-level ventral visual cortex.
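To make the comparison concrete, the sketch below computes one of the second-order measures mentioned above, linear Centered Kernel Alignment, between pairs of regions and then relates the resulting representational dissimilarities to pairwise cortical distances. The rank-correlation score at the end is a generic stand-in, since the paper's exact anatomical stress score is not defined in this abstract, and all data here are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two response matrices of shape
    (n_stimuli, n_units); higher values mean more similar representations."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2
    return cross / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

# Placeholder data: responses of 5 cortical regions to 100 stimuli, plus made-up
# pairwise anatomical (cortical) distances between the regions.
rng = np.random.default_rng(0)
n_regions, n_stimuli = 5, 100
responses = [rng.normal(size=(n_stimuli, 50)) for _ in range(n_regions)]
anatomical = rng.uniform(1.0, 10.0, size=(n_regions, n_regions))
anatomical = np.triu(anatomical, 1) + np.triu(anatomical, 1).T   # symmetric, zero diagonal

# Representational dissimilarity between every pair of regions: 1 - CKA.
rep_dist, anat_dist = [], []
for i in range(n_regions):
    for j in range(i + 1, n_regions):
        rep_dist.append(1.0 - linear_cka(responses[i], responses[j]))
        anat_dist.append(anatomical[i, j])

# A stand-in "stress"-style score: how well representational distance tracks
# cortical distance across region pairs (rank correlation).
rho, _ = spearmanr(rep_dist, anat_dist)
print(rho)
```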
  7. Free, publicly-accessible full text available June 24, 2025
  8. Free, publicly-accessible full text available July 1, 2025
  9. We propose a hierarchical neural network architecture for unsupervised learning of equivariant part-whole decompositions of visual scenes. In contrast to the global equivariance of group-equivariant networks, the proposed architecture exhibits equivariance to part-whole transformations throughout the hierarchy, which we term hierarchical equivariance. The model achieves these structured internal representations via hierarchical Bayesian inference, which gives rise to rich bottom-up, top-down, and lateral information flows, hypothesized to underlie the mechanisms of perceptual inference in visual cortex. We demonstrate these useful properties of the model on a simple dataset of scenes with multiple objects under independent rotations and translations. 
  10. Women are at higher risk of Alzheimer's and other neurological diseases after menopause, yet research connecting female brain health to sex hormone fluctuations is limited. We seek to investigate this connection by developing tools that quantify 3D shape changes that occur in the brain during sex hormone fluctuations. Geodesic regression on the space of 3D discrete surfaces offers a principled way to characterize the evolution of a brain's shape. However, in its current form, this approach is too computationally expensive for practical use. In this paper, we propose approximation schemes that accelerate geodesic regression on shape spaces of 3D discrete surfaces. We also provide rules of thumb for when each approximation can be used. We test our approach on synthetic data to quantify the speed-accuracy trade-off of these approximations and show that practitioners can expect a very significant speed-up while sacrificing little accuracy. Finally, we apply the method to real brain shape data and produce the first characterization of how the female hippocampus changes shape during the menstrual cycle as a function of progesterone: a characterization made (practically) possible by our approximation schemes. Our work paves the way for comprehensive, practical shape analyses in the fields of biomedicine and computer vision. Our implementation is publicly available on GitHub.
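For intuition about the cheapest end of the speed-accuracy trade-off described above, the sketch below fits an ordinary least-squares trend of mesh vertex coordinates against a scalar progesterone value, i.e., it treats the space of discrete surfaces as flat. This is an illustrative Euclidean baseline written for this listing, not one of the paper's approximation schemes, and the meshes, vertex correspondence, and hormone values are synthetic placeholders.

```python
import numpy as np

def fit_linear_shape_trend(progesterone, meshes):
    """Ordinary least squares of flattened vertex coordinates on a scalar covariate:
    a 'flat' (Euclidean) stand-in for geodesic regression on a shape space.
    meshes: array (n_subjects, n_vertices, 3) with a fixed vertex correspondence."""
    n, k, d = meshes.shape
    Y = meshes.reshape(n, k * d)
    A = np.column_stack([np.ones_like(progesterone), progesterone])
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)             # (2, k*d)
    intercept, slope = coeffs
    return lambda p: (intercept + p * slope).reshape(k, d)     # predicted mesh at level p

# Synthetic placeholder data: 20 meshes with 500 corresponding vertices whose
# coordinates drift linearly with progesterone level, plus noise.
rng = np.random.default_rng(0)
n_subjects, n_vertices = 20, 500
progesterone = rng.uniform(0.0, 15.0, size=n_subjects)         # made-up hormone levels
template = rng.normal(size=(n_vertices, 3))
drift = 0.01 * rng.normal(size=(n_vertices, 3))
meshes = (template[None] + progesterone[:, None, None] * drift[None]
          + 0.05 * rng.normal(size=(n_subjects, n_vertices, 3)))

predict = fit_linear_shape_trend(progesterone, meshes)
print(predict(10.0).shape)    # (500, 3): predicted vertex positions at progesterone = 10
```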