
Search for: All records

Creators/Authors contains: "Li, T."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Unlike traditional structural materials, soft solids can often sustain very large deformation before failure, and many exhibit nonlinear viscoelastic behavior. Modeling nonlinear viscoelasticity is challenging for a number of reasons. In particular, a large number of material parameters are needed to capture the material response, and model validation can be hindered by the limited experimental data available. We have developed a Gaussian Process (GP) approach to determine the material parameters of a constitutive model describing the mechanical behavior of a soft, viscoelastic PVA hydrogel. A large number of stress histories generated by the constitutive model constitute the training sets. The low-rank representations of the stress histories obtained by Singular Value Decomposition (SVD) are taken to be random variables which can be modeled via Gaussian Processes with respect to the material parameters of the constitutive model. We obtain optimal material parameters by minimizing an objective function over the input set. We find that there are many good sets of parameters. Further, the process reveals relationships between the model parameters. Results so far show that GP has great potential in fitting constitutive models.
    Free, publicly-accessible full text available December 14, 2022
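The GP-surrogate workflow in abstract 1 can be illustrated with a minimal sketch. The toy standard-linear-solid relaxation law, the parameter ranges, the RBF kernel, and the candidate-set search below are all illustrative assumptions, not the authors' PVA hydrogel model or fitting setup:

```python
# Sketch: SVD-compress simulated stress histories, model the SVD coefficients
# with Gaussian Processes over the material parameters, then minimize misfit.
import numpy as np
from numpy.linalg import svd
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)                 # time grid for stress histories

def stress_history(E_inf, E_delta, tau):
    """Toy relaxation law: sigma(t) = E_inf + E_delta * exp(-t / tau)."""
    return E_inf + E_delta * np.exp(-t / tau)

# Training set: stress histories generated over sampled material parameters.
params = rng.uniform([0.5, 0.5, 0.5], [2.0, 2.0, 5.0], size=(200, 3))
S = np.array([stress_history(*p) for p in params])   # (n_samples, n_times)

# Low-rank representation: project onto the first r right-singular vectors.
S_mean = S.mean(axis=0)
U, s, Vt = svd(S - S_mean, full_matrices=False)
r = 3
coeffs = (S - S_mean) @ Vt[:r].T                     # (n_samples, r)

# One GP per SVD coefficient, as a function of the material parameters.
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                normalize_y=True).fit(params, coeffs[:, j])
       for j in range(r)]

def predict_stress(p):
    c = np.array([gp.predict(p.reshape(1, -1))[0] for gp in gps])
    return S_mean + c @ Vt[:r]

# "Experimental" target generated from known parameters; recover them by
# minimizing the GP-predicted misfit over a random candidate set.
p_true = np.array([1.2, 1.5, 2.0])
target = stress_history(*p_true)
candidates = rng.uniform([0.5, 0.5, 0.5], [2.0, 2.0, 5.0], size=(2000, 3))
errors = [np.mean((predict_stress(p) - target) ** 2) for p in candidates]
p_best = candidates[int(np.argmin(errors))]
print(p_best)
```

Because many parameter sets fit nearly equally well (as the abstract notes), the minimizer here is one of many near-optimal candidates rather than a unique solution.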
  2. Free, publicly-accessible full text available December 1, 2022
  3. In this work, we explore the unique challenges, and opportunities, of unsupervised federated learning (FL). We develop and analyze a one-shot federated clustering scheme, k-FED, based on the widely-used Lloyd's method for k-means clustering. In contrast to many supervised problems, we show that the issue of statistical heterogeneity in federated networks can in fact benefit our analysis. We analyze k-FED under a center separation assumption and compare it to the best known requirements of its centralized counterpart. Our analysis shows that in heterogeneous regimes where the number of clusters per device (k') is smaller than the total number of clusters over the network k ($k' \le \sqrt{k}$), we can use heterogeneity to our advantage, significantly weakening the cluster separation requirements for k-FED. From a practical viewpoint, k-FED also has many desirable properties: it requires only a single round of communication, can run asynchronously, and can handle partial participation or node/network failures. We motivate our analysis with experiments on common FL benchmarks, and highlight the practical utility of one-shot clustering through use-cases in personalized FL and device sampling.
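The one-shot scheme in abstract 3 can be sketched in a few lines: each device runs Lloyd's method locally on its own k' clusters and ships only its centers, and the server clusters the collected centers into k global centers. The synthetic data, device count, and cluster counts below are illustrative assumptions, not the paper's benchmarks:

```python
# Sketch of one-shot federated clustering in the spirit of k-FED.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
k = 4                                   # total clusters across the network
true_centers = np.array([[0., 0.], [8., 0.], [0., 8.], [8., 8.]])

# Heterogeneous devices: each sees points from only k' = 2 of the k clusters.
devices = []
for _ in range(10):
    local_ids = rng.choice(k, size=2, replace=False)
    pts = np.vstack([true_centers[i] + rng.normal(scale=0.5, size=(50, 2))
                     for i in local_ids])
    devices.append(pts)

# The single communication round: local Lloyd's with k' clusters per device.
local_centers = np.vstack([
    KMeans(n_clusters=2, n_init=10, random_state=0).fit(d).cluster_centers_
    for d in devices
])

# Server side: cluster the collected device centers into k global centers.
global_centers = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit(local_centers).cluster_centers_
print(global_centers)
```

Heterogeneity helps here exactly as the abstract describes: because each device sees few, locally well-separated clusters, its local centers are accurate, and the server only has to group a small set of clean center estimates.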
  4. Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
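The core of the Ditto framework in abstract 4 is a personalized objective: each device trains a personal model on its local loss plus a proximal term pulling it toward the global model. A minimal sketch follows; the toy linear-regression devices, step sizes, and the value of the regularization weight lam are illustrative assumptions, not the paper's solver or datasets:

```python
# Sketch of Ditto-style personalization: v_k minimizes local loss
# plus (lam / 2) * ||v_k - w_global||^2.
import numpy as np

rng = np.random.default_rng(2)

# Heterogeneous devices: shared linear trend, device-specific intercept shift.
def make_device(shift):
    X = rng.normal(size=(100, 2))
    y = X @ np.array([1.0, -1.0]) + shift + 0.1 * rng.normal(size=100)
    return X, y

devices = [make_device(s) for s in (-1.0, 0.0, 1.0)]

def grad(w, X, y):
    # Gradient of mean squared error with w = [w1, w2, intercept].
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 2.0 * Xb.T @ (Xb @ w - y) / len(y)

# Global model via simple averaged gradient descent (FedAvg-like stand-in).
w_global = np.zeros(3)
for _ in range(200):
    g = np.mean([grad(w_global, X, y) for X, y in devices], axis=0)
    w_global -= 0.1 * g

# Personal models: local gradient plus proximal pull toward w_global.
lam = 1.0
personal = []
for X, y in devices:
    v = w_global.copy()
    for _ in range(200):
        v -= 0.1 * (grad(v, X, y) + lam * (v - w_global))
    personal.append(v)

print(w_global, [p[2] for p in personal])
```

The knob lam interpolates between the extremes: lam = 0 recovers purely local training, while large lam collapses every personal model onto the global one; intermediate values trade robustness (leaning on the global model) against per-device fit.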
  5. Abstract: We use a recent census of the Milky Way (MW) satellite galaxy population to constrain the lifetime of particle dark matter (DM). We consider two-body decaying dark matter (DDM), in which a heavy DM particle decays with lifetime τ comparable to the age of the universe to a lighter DM particle (with mass splitting ϵ) and to a dark radiation species. These decays impart a characteristic "kick velocity," V_kick = ϵc, on the DM daughter particles, significantly depleting the DM content of low-mass subhalos and making them more susceptible to tidal disruption. We fit the suppression of the present-day DDM subhalo mass function (SHMF) as a function of τ and V_kick using a suite of high-resolution zoom-in simulations of MW-mass halos, and we validate this model on new DDM simulations of systems specifically chosen to resemble the MW. We implement our DDM SHMF predictions in a forward model that incorporates inhomogeneities in the spatial distribution and detectability of MW satellites, uncertainties in the mapping between galaxies and DM halos, the properties of the MW system, and the disruption of subhalos by the MW disk, using an empirical model for the galaxy–halo connection. By comparing to the observed MW satellite population, we conservatively exclude DDM models with τ < 18 Gyr (29 Gyr) for V_kick = 20 km s^−1 (40 km s^−1) at 95% confidence. These constraints are among the most stringent and robust small-scale structure limits on the DM particle lifetime and strongly disfavor DDM models that have been proposed to alleviate the Hubble and S_8 tensions.
    Free, publicly-accessible full text available June 1, 2023
  6. Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline. Hyperparameter optimization is even more challenging in federated learning, where models are learned over a distributed network of heterogeneous devices; here, the need to keep data on device and perform local training makes it difficult to efficiently train and evaluate configurations. In this work, we investigate the problem of federated hyperparameter tuning. We first identify key challenges and show how standard approaches may be adapted to form baselines for the federated setting. Then, by making a novel connection to the neural architecture search technique of weight-sharing, we introduce a new method, FedEx, to accelerate federated hyperparameter tuning that is applicable to widely-used federated optimization methods such as FedAvg and recent variants. Theoretically, we show that a FedEx variant correctly tunes the on-device learning rate in the setting of online convex optimization across devices. Empirically, we show that FedEx can outperform natural baselines for federated hyperparameter tuning by several percentage points on the Shakespeare, FEMNIST, and CIFAR-10 benchmarks, obtaining higher accuracy using the same training budget.
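The weight-sharing connection in abstract 6 can be sketched as maintaining a distribution over candidate on-device hyperparameters and updating it with an exponentiated-gradient step from observed validation losses. This is a loose analogue of the idea, not FedEx itself; the candidate learning rates, the toy quadratic "client", and the step size eta are all illustrative assumptions:

```python
# Sketch: exponentiated-gradient update of a categorical distribution over
# candidate client learning rates, driven by a score-function loss estimate.
import numpy as np

rng = np.random.default_rng(3)
lrs = np.array([0.001, 0.01, 0.1, 0.5])     # candidate on-device learning rates
theta = np.full(len(lrs), 1.0 / len(lrs))   # distribution over candidates

def client_val_loss(lr):
    # Toy signal: a few local gradient steps on f(w) = w^2 from w0 = 1;
    # the remaining loss stands in for post-training validation loss.
    w = 1.0
    for _ in range(5):
        w -= lr * 2.0 * w
    return w * w

eta = 1.0                                    # exponentiated-gradient step size
for _ in range(100):
    i = rng.choice(len(lrs), p=theta)        # sample a config for this round
    loss = client_val_loss(lrs[i])
    g = np.zeros(len(lrs))
    g[i] = loss / theta[i]                   # unbiased score-function gradient
    theta = theta * np.exp(-eta * g)         # exponentiated-gradient update
    theta /= theta.sum()

print(lrs[np.argmax(theta)])
```

The appeal in the federated setting is that each round's ordinary local training doubles as the evaluation of one sampled configuration, so no extra training runs are needed to tune the on-device hyperparameters.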
  7. Free, publicly-accessible full text available June 1, 2023