
Search for: All records

Creators/Authors contains: "Yang, Xin"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. The synthesis, photophysics, and electrochemiluminescence (ECL) of four water-soluble dinuclear Ir(III) and Ru(II) complexes (1–4), terminally capped by 4′-phenyl-2,2′:6′,2′′-terpyridine (tpy) or 1,3-di(pyrid-2-yl)-4,6-dimethylbenzene (N^C^N) ligands and linked by a 2,7-bis(2,2′:6′,2′′-terpyridyl)fluorene bearing oligoether chains on C9, are reported. The impact of the tpy or N^C^N ligands and of the metal centers on the photophysical properties of 1–4 was assessed by spectroscopic methods, including UV-vis absorption, emission, and transient absorption, and by time-dependent density functional theory (TDDFT) calculations. These complexes exhibited distinct singlet and triplet excited-state properties upon variation of the terminal-capping terdentate ligands and the metal centers. The ECL properties of complexes 1–3, which have better water solubility, were investigated in neutral phosphate buffer solution (PBS) with tripropylamine (TPA) added as a co-reactant; the observed ECL intensity followed the descending order 3 > 1 > 2. Complex 3, bearing the [Ru(tpy)2]2+ units, displayed the most pronounced ECL signals, suggesting great potential for further ECL studies of this complex and its analogues.
    Free, publicly-accessible full text available September 20, 2023
  2. Free, publicly-accessible full text available June 19, 2023
  3. Free, publicly-accessible full text available July 8, 2023
  4. We implement the numerical unified transform method to solve the nonlinear Schrödinger equation on the half-line. For the so-called linearizable boundary conditions, the method solves half-line problems with complexity comparable to that of the numerical inverse scattering transform for whole-line problems. In particular, the method computes the solution at any x and t without spatial discretization or time stepping. Contour deformations based on the method of nonlinear steepest descent are used so that the method’s computational cost does not increase for large x, t, and the method becomes more accurate as x and t increase. Our ideas also apply to some cases where the boundary conditions are not linearizable.
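    For context, a minimal illustrative restatement of the kind of initial-boundary value problem this entry concerns: the focusing NLS posed on the half-line with a Robin boundary condition, one of the well-known linearizable cases. The notation below is assumed for illustration and is not drawn from the paper itself.

    \[
      \begin{aligned}
        & i q_t + q_{xx} + 2\lvert q\rvert^2 q = 0, && x > 0,\; t > 0,\\
        & q(x,0) = q_0(x), && x \ge 0 \quad \text{(initial data)},\\
        & q_x(0,t) - \chi\, q(0,t) = 0, && t \ge 0,\ \chi \in \mathbb{R} \quad \text{(linearizable Robin condition)}.
      \end{aligned}
    \]

    The homogeneous Neumann condition $q_x(0,t)=0$ corresponds to $\chi = 0$, and the homogeneous Dirichlet condition $q(0,t)=0$ arises formally in the limit $\chi \to \infty$.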
  5. Meila, Marina; Zhang, Tong (Eds.)
    Federated Learning (FL) is an emerging learning scheme that allows different distributed clients to train deep neural networks together without data sharing. Neural networks have become popular due to their unprecedented success. To the best of our knowledge, the theoretical guarantees of FL concerning neural networks with explicit forms and multi-step updates are unexplored. Nevertheless, training analysis of neural networks in FL is non-trivial for two reasons: first, the objective loss function we are optimizing is non-smooth and non-convex, and second, we are not even updating in the gradient direction. Existing convergence results for gradient-descent-based methods rely heavily on the fact that the gradient direction is used for updating. The current paper presents a new class of convergence analysis for FL, Federated Neural Tangent Kernel (FL-NTK), which corresponds to overparameterized ReLU neural networks trained by gradient descent in FL and is inspired by the analysis of the Neural Tangent Kernel (NTK). Theoretically, FL-NTK converges to a global-optimal solution at a linear rate with properly tuned learning parameters. Furthermore, with proper distributional assumptions, FL-NTK can also achieve good generalization. The proposed theoretical analysis scheme can be generalized to more complex neural networks.
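    To make the multi-step local updates concrete, here is a minimal NumPy sketch (an illustrative assumption, not the paper's implementation) of federated averaging with several local gradient steps on a two-layer ReLU network in which only the hidden-layer weights are trained; it also shows why the aggregated server update is generally not a single global gradient step. Names such as local_sgd and fed_avg_round are hypothetical.

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def local_sgd(W, a, X, y, steps, lr):
        # `steps` plain gradient steps on one client's squared loss.
        # Two-layer net f(x) = a^T relu(W x); only W is trained (a fixed),
        # mirroring the usual NTK-style setup. Illustrative sketch only.
        for _ in range(steps):
            H = relu(X @ W.T)                     # (n, m) hidden activations
            err = H @ a - y                       # (n,) residuals
            # Gradient of 0.5 * mean squared error with respect to W
            grad = ((err[:, None] * (X @ W.T > 0)) * a[None, :]).T @ X / len(y)
            W = W - lr * grad
        return W

    def fed_avg_round(W, a, clients, local_steps, lr):
        # One FedAvg round: each client starts from the shared W, runs local
        # SGD, and the server averages the resulting weight matrices.
        local_W = [local_sgd(W, a, X, y, local_steps, lr) for X, y in clients]
        return np.mean(local_W, axis=0)           # not a global gradient step

    # Toy run: 3 clients, 5-dimensional inputs, 64 hidden units.
    rng = np.random.default_rng(0)
    d, m = 5, 64
    W = rng.normal(size=(m, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)
    clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(3)]
    for _ in range(5):
        W = fed_avg_round(W, a, clients, local_steps=10, lr=0.1)

    Because each client moves W for several steps before averaging, the averaged update generally differs from any single gradient of the global loss, which is the difficulty the FL-NTK analysis addresses.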
  6. Federated Learning (FL) is an emerging learning scheme that allows different distributed clients to train deep neural networks together without data sharing. Neural networks have become popular due to their unprecedented success. To the best of our knowledge, the theoretical guarantees of FL concerning neural networks with explicit forms and multi-step updates are unexplored. Nevertheless, training analysis of neural networks in FL is non-trivial for two reasons: first, the objective loss function we are optimizing is non-smooth and non-convex, and second, we are not even updating in the gradient direction. Existing convergence results for gradient-descent-based methods rely heavily on the fact that the gradient direction is used for updating. This paper presents a new class of convergence analysis for FL, Federated Learning Neural Tangent Kernel (FL-NTK), which corresponds to over-parameterized ReLU neural networks trained by gradient descent in FL and is inspired by the analysis of the Neural Tangent Kernel (NTK). Theoretically, FL-NTK converges to a global-optimal solution at a linear rate with properly tuned learning parameters. Furthermore, with proper distributional assumptions, FL-NTK can also achieve good generalization.