NSF Public Access Repository (NSF-PAR) search: all records where Creators/Authors contains "Chien, Edward"


  1. We build upon the stripes-based knit planning framework of [Mitra et al. 2023], and view the resultant stripe pattern through the lens of singular foliations. This perspective views the stripes, and thus the candidate course rows or wale columns, as integral curves of a vector field specified by the spinning form of [Knöppel et al. 2015]. We show how to tightly control the topological structure of this vector field with linear level set constraints, preventing helicing of any integral curve. Practically speaking, this obviates the stripe placement constraints of [Mitra et al. 2023] and allows for shifting and variation of the stripe frequency without introducing additional helices. En route, we make the first explicit algebraic characterization of spinning form level set structure within singular triangles, and replace the standard interpolant with an “effective” one that improves the robustness of knit graph generation. We also extend the model of [Mitra et al. 2023] to surfaces with genus, via a Morse-based cylindrical decomposition, and implement automatic singularity pairing on the resulting components. (A sketch of the spinning-form setup follows this entry.)
    Free, publicly-accessible full text available July 13, 2025
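
    The entry above leans on the stripe-pattern machinery of [Knöppel et al. 2015], in which stripes arise as periodic level sets of a coordinate integrated from the spinning form. The display below is a minimal sketch of that setup, not drawn from either paper: the symbol α for the local stripe coordinate is an assumption, as is the simplification of treating the spinning form ω as locally integrable.

        \[
          d\alpha = \omega \quad \text{(away from singular triangles)}, \qquad
          \text{stripes} = \{\, p : \alpha(p) \equiv 0 \pmod{2\pi} \,\}.
        \]

    Each such level set is an integral curve of the direction field orthogonal to ∇α, which is how the candidate course rows and wale columns arise; a helix is a level set that winds around a cylindrical component without closing up, and the linear level set constraints mentioned above are what rule such curves out.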
  2. Mixup is a popular regularization technique for training deep neural networks that improves generalization and increases robustness to certain distribution shifts. It perturbs input training data in the direction of other randomly chosen instances in the training set. To better leverage the structure of the data, we extend mixup in a simple, broadly applicable way to k-mixup, which perturbs k-batches of training points in the direction of other k-batches. The perturbation is done with displacement interpolation, i.e., interpolation under the Wasserstein metric. We demonstrate theoretically and in simulations that k-mixup preserves cluster and manifold structures, and we extend the theory studying the efficacy of standard mixup to the k-mixup case. Our empirical results show that training with k-mixup further improves generalization and robustness across several network architectures and benchmark datasets of differing modalities. For the wide variety of real datasets considered, the performance gains of k-mixup over standard mixup are similar to or larger than the gains of mixup itself over standard empirical risk minimization (ERM) after hyperparameter optimization. In fact, in several instances k-mixup achieves gains in settings where standard mixup offers negligible or no improvement over ERM. (A sketch of the k-batch matching step follows this entry.)
    Free, publicly-accessible full text available November 14, 2024
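
    As a concrete illustration of the displacement-interpolation step described above, here is a minimal sketch of k-mixup on two k-batches. It is not the authors' implementation: the function name, the squared-Euclidean ground cost, and the Beta(alpha, alpha) mixing weight (carried over from standard mixup) are all assumptions.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def k_mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
            # Mix two k-batches along their optimal-transport matching.
            # x1, x2: (k, d) inputs; y1, y2: (k, c) one-hot labels.
            # With uniform weights on k points each, displacement interpolation
            # under the 2-Wasserstein metric reduces to linearly interpolating
            # optimally assigned pairs.
            if rng is None:
                rng = np.random.default_rng()
            # Pairwise squared-Euclidean ground costs between the two k-batches.
            cost = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(axis=-1)
            rows, cols = linear_sum_assignment(cost)  # exact OT assignment
            lam = rng.beta(alpha, alpha)  # mixing weight, as in standard mixup
            x_mix = lam * x1[rows] + (1.0 - lam) * x2[cols]
            y_mix = lam * y1[rows] + (1.0 - lam) * y2[cols]
            return x_mix, y_mix

    With k = 1 the assignment is trivial and this reduces to standard mixup; during training, a minibatch would presumably be partitioned into k-batches that are then matched and mixed pairwise.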