
This content will become publicly available on July 9, 2026

Title: TDR-Transformer: A transformer neural network model to determine soil relative permittivity variations along a time domain reflectometry sensor waveguide
Award ID(s):
2037504
PAR ID:
10644087
Author(s) / Creator(s):
Publisher / Repository:
Computers and Electronics in Agriculture
Date Published:
Journal Name:
Computers and Electronics in Agriculture
ISSN:
0168-1699
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Hydrogels, which are hydrophilic soft porous networks, are an important class of materials with broad relevance to bioanalytical chemistry, soft robotics, drug delivery, and regenerative medicine. Transformer hydrogels are micro- and mesostructured hydrogels that undergo a dramatic transformation of shape, form, or dimension, with associated changes in function, in response to external controls or environmental stimuli; the transformation arises from engineered local variations, such as in swelling or stiffness. This review describes principles that can be used to fabricate transformer hydrogels, such as layering, patterning, and the generation of anisotropy and gradients. Transformer hydrogels are classified by their responsivity to different stimuli, such as temperature, electromagnetic fields, chemicals, and biomolecules. A survey of current research progress suggests applications of transformer hydrogels in biomimetics, soft robotics, microfluidics, tissue engineering, drug delivery, surgery, and biomedical engineering.
  2. In this study, the design of a 330 kW single-phase transformer (corresponding to 1 MW three-phase) operating at 50 kHz is presented. Candidate core materials and their performance under high-switching-frequency operation are investigated. Core volume, area, configuration, and market availability are studied to arrive at a compact, cost-effective transformer model. Next, the winding type, size, placement, and cost are analyzed. These steps yield a complete electromagnetic design and model of the transformer. A 3D transformer model is then created and simulated using a Finite Element Analysis (FEA) tool: ANSYS Maxwell-3D is used to simulate the magnetics, electrostatics, and transients of the designed transformer, and the model is integrated with a power electronics circuit in ANSYS Simplorer to co-simulate the entire system. The results include the core's maximum flux density, core and copper losses, leakage and magnetizing inductances, winding parasitic capacitances, and input/output voltage, current, and power. Finally, the system's overall efficiency is calculated and presented. (A back-of-the-envelope flux-density check based on the transformer EMF equation appears below.)
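    As a rough sanity check on a design of this class, the standard transformer EMF equation V_rms = 4.44 · f · N · A_c · B_max relates the applied voltage to the peak core flux density. The Python sketch below solves it for B_max; the voltage, turns count, and core cross-section are illustrative assumptions, not values from the paper.

    # Back-of-the-envelope flux-density check for a high-frequency transformer,
    # using the EMF equation V_rms = 4.44 * f * N * A_c * B_max (sinusoidal
    # excitation; the constant is 4.0 for square-wave excitation). All numbers
    # below are illustrative assumptions, not the paper's design data.

    def peak_flux_density(v_rms: float, f_hz: float, turns: int, core_area_m2: float) -> float:
        """Solve the EMF equation for the peak core flux density B_max (tesla)."""
        return v_rms / (4.44 * f_hz * turns * core_area_m2)

    # Assumed 800 V (rms) primary, 10 turns, 20 cm^2 core cross-section, 50 kHz.
    b_max = peak_flux_density(v_rms=800.0, f_hz=50e3, turns=10, core_area_m2=20e-4)
    print(f"B_max = {b_max:.3f} T")  # ~0.180 T, comfortably below ferrite saturation

    In practice, a designer would iterate the turns count and core area until B_max sits safely under the chosen core material's saturation and loss-density limits at 50 kHz.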
  3. In this paper, we apply the self-attention from the state-of-the-art Transformer of "Attention Is All You Need" for the first time to a data-driven operator learning problem related to partial differential equations. An effort is made to explain the heuristics of the attention mechanism and to improve its efficacy. By employing operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved comparable, layer-wise, to a Petrov-Galerkin projection, and the estimate is independent of the sequence length. A new layer-normalization scheme mimicking the Petrov-Galerkin projection is proposed to allow a scaling to propagate through attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments: the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed, simple attention-based operator learner, the Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts. (A minimal sketch of the softmax-free attention update appears below.)
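    The following PyTorch sketch illustrates the kind of softmax-free, Galerkin-type attention the abstract describes: layer normalization is applied to the keys and values, and the update is Q(K^T V)/n, so the cost is linear in the sequence length n. Class and parameter names here are illustrative; this is not the paper's reference implementation.

    import torch
    import torch.nn as nn

    class GalerkinAttention(nn.Module):
        """Minimal sketch of softmax-free, Galerkin-type attention."""

        def __init__(self, d_model: int):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            # Normalizing K and V feature-wise stands in for the
            # Petrov-Galerkin-style layer normalization the abstract mentions.
            self.k_norm = nn.LayerNorm(d_model)
            self.v_norm = nn.LayerNorm(d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, n, d_model)
            n = x.shape[1]
            q = self.q_proj(x)
            k = self.k_norm(self.k_proj(x))
            v = self.v_norm(self.v_proj(x))
            # Q (K^T V) / n: the (d x d) product K^T V is formed first,
            # so there is no n x n attention matrix and no softmax.
            return q @ (k.transpose(-2, -1) @ v) / n

    attn = GalerkinAttention(d_model=64)
    out = attn(torch.randn(4, 1024, 64))   # batch of 4, sequence length 1024
    print(out.shape)                       # torch.Size([4, 1024, 64])

    Because K^T V is only d x d, doubling the sequence length doubles, rather than quadruples, the attention cost, which is what makes the sequence-length-independent estimate practically useful.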