

Search for: All records

Creators/Authors contains: "Ren, H"


  1. Abstract

    Bis‐carbonylimidazolide (BCI) functionalization enables an efficient synthetic strategy to generate high molecular weight segmented nonisocyanate polyurethanes (NIPUs). Melt phase polymerization of ED‐2003 Jeffamine, 4,4′‐methylenebis(cyclohexylamine), and a BCI monomer that mimics a 1,4‐butanediol chain extender enables polyether NIPUs that contain varying concentrations of hard segments ranging from 40 to 80 wt. %. Dynamic mechanical analysis and differential scanning calorimetry reveal thermal transitions for soft, hard, and mixed phases. Hard segment incorporations between 40 and 60 wt. % display up to three distinct phases corresponding to the poly(ethylene glycol) (PEG) soft segment Tg, melting transition, and hard segment Tg, while higher hard segment concentrations prohibit soft segment crystallization, presumably due to restricted molecular mobility imposed by the hard segment. Atomic force microscopy allows for visualization and size determination of nanophase‐separated regimes, revealing a nanoscale rod‐like assembly of hard segments. Small‐angle X‐ray scattering confirms nanophase separation within the NIPU, characterizing both nanoscale amorphous domains and varying degrees of crystallinity. These NIPUs, which are synthesized with BCI monomers, display the expected phase separation, comparable to isocyanate‐derived analogues. This work demonstrates nanophase separation in BCI‐derived NIPUs and the feasibility of this nonisocyanate synthetic pathway for the preparation of segmented PU copolymers.

     
  2. The development of fully autonomous artificial pancreas systems (APS) that independently regulate the glucose levels of patients with Type 1 diabetes has been a long-standing goal of diabetes research. A significant barrier to progress is the difficulty of testing new control algorithms and safety features, since clinical trials are time- and resource-intensive. To facilitate validation, we propose an open-source APS testbed that can integrate state-of-the-art APS controllers and glucose simulators with a novel fault injection engine. The testbed is used to reproduce the blood glucose trajectories of real patients from a clinical trial conducted over six months. We evaluate the performance of two closed-loop control algorithms (OpenAPS and Basal Bolus) using the testbed and find that these control algorithms are able to keep blood glucose in a safe region 93.49% and 79.46% of the time on average, compared with 66.18% of the time for the clinical trial. The fault injection engine simulates the real recalls and adverse events reported to the U.S. Food and Drug Administration (FDA) and demonstrates the resilience of the controller under hazardous conditions. We use the testbed to generate 2.5 years of synthetic data representing 20 different patient profiles with realistic adverse event scenarios, which would have been expensive and risky to collect in a clinical trial.
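A rough sketch of the time-in-range metric quoted above, assuming a conventional 70–180 mg/dL safe range and uniformly sampled readings; the range and the example trace are illustrative assumptions, not values from the paper:

```python
from typing import Sequence

def time_in_range(glucose_mg_dl: Sequence[float],
                  low: float = 70.0, high: float = 180.0) -> float:
    """Percentage of readings inside [low, high].

    Assumes uniformly sampled readings; the 70-180 mg/dL bounds are a common
    clinical convention, not necessarily the paper's exact definition of "safe".
    """
    if not glucose_mg_dl:
        raise ValueError("empty glucose trace")
    in_range = sum(low <= g <= high for g in glucose_mg_dl)
    return 100.0 * in_range / len(glucose_mg_dl)

# Example: a short trace sampled every 5 minutes (made-up values)
trace = [110, 145, 190, 210, 160, 95, 72, 66, 120]
print(f"time in range: {time_in_range(trace):.2f}%")
```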
  3. Hierarchical relations are prevalent and indispensable for organizing human knowledge captured by a knowledge graph (KG). The key property of hierarchical relations is that they induce a partial ordering over the entities, which needs to be modeled in order to allow for hierarchical reasoning. However, current KG embeddings can model only a single global hierarchy (a single global partial ordering) and fail to model the multiple heterogeneous hierarchies that exist in a single KG. Here we present ConE (Cone Embedding), a KG embedding model that is able to simultaneously model multiple hierarchical as well as non-hierarchical relations in a knowledge graph. ConE embeds entities into hyperbolic cones and models relations as transformations between the cones. In particular, ConE uses cone containment constraints in different subspaces of the hyperbolic embedding space to capture multiple heterogeneous hierarchies. Experiments on standard knowledge graph benchmarks show that ConE obtains state-of-the-art performance on hierarchical reasoning tasks as well as on knowledge graph completion on hierarchical graphs. In particular, our approach yields new state-of-the-art Hits@1 of 45.3% on WN18RR and 16.1% on DDB14 (0.231 MRR). On the hierarchical reasoning task, our approach outperforms the previous best results by an average of 20% across the three datasets.
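The cone-containment idea can be illustrated with a simplified 2-D Euclidean sector check. ConE itself learns hyperbolic cones per subspace, so the snippet below only shows how containment between cones induces a partial (hierarchical) order; the angles and entity names are made up:

```python
import math

def sector(center_angle: float, half_aperture: float):
    """A 2-D angular cone (sector) given by its axis angle and half-aperture, in radians."""
    return (center_angle % (2 * math.pi), half_aperture)

def contains(parent, child) -> bool:
    """True if the child sector lies entirely inside the parent sector."""
    pc, pw = parent
    cc, cw = child
    # angular distance between the two axes, wrapped to [0, pi]
    d = abs((pc - cc + math.pi) % (2 * math.pi) - math.pi)
    return d + cw <= pw

# Toy hierarchy: "animal" is a wide cone, "dog" a narrower cone inside it
animal = sector(0.50, 0.60)
dog = sector(0.40, 0.15)
cat = sector(1.30, 0.15)
print(contains(animal, dog))   # True  -> dog is below animal in this toy hierarchy
print(contains(animal, cat))   # False -> cat is not placed under animal here
```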
  4. Abstract

    Vat photopolymerization (VP) and direct ink write (DIW) additive manufacturing (AM) provide complex geometries with precise spatial control, employing a vast array of photo‐reactive polymeric systems. Although VP is recognized for superior resolution and surface finish, DIW provides versatility for higher viscosity systems. However, each AM platform presents specific rheological requirements that are essential for successful 3D printing. First, viscosity requirements constrain VP polymeric materials to viscosities below 10 Pa·s. This requirement presents a challenging paradox: attaining the physical performance of high molecular weight polymers while maintaining viscosities suitable for VP. Second, the rheological complexity required of DIW pastes demands additional rheological measurements to ensure desirable thixotropic behavior. This manuscript describes the importance of rheological measurements when designing polymeric latexes for AM. Latexes effectively decouple viscosity from molecular weight, thus enabling high molecular weight polymers with low viscosities. Photo‐crosslinking of water‐soluble monomers and telechelic oligomeric diacrylates in the presence of the latex enables the fabrication of a scaffold that is restricted to the continuous aqueous phase and effectively surrounds the latex nanoparticles, enabling the printing of otherwise inaccessible high molecular weight polymers. Rheological testing, including both steady and oscillatory shear experiments, provides insights into system properties and predictability of successful printing. This perspective article aims to provide an understanding of both chemical functionality (photo‐ and thermal‐reactivity) and rheological response, and their importance for the successful design and evaluation of VP and DIW processable latex formulations.
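Steady-shear screening of this kind is often summarized with a shear-thinning fit. The sketch below fits an Ostwald–de Waele (power-law) model to illustrative viscosity data and checks the apparent viscosity against the roughly 10 Pa·s vat-photopolymerization guideline mentioned above; the model choice and the numbers are assumptions for illustration, not the paper's analysis:

```python
import numpy as np

# Steady-shear data: shear rate (1/s) vs. apparent viscosity (Pa*s).
# Values are illustrative, not taken from the paper.
shear_rate = np.array([0.1, 1.0, 10.0, 100.0])
viscosity = np.array([50.0, 12.0, 3.0, 0.8])

# Ostwald-de Waele (power-law) model: eta = K * gamma_dot**(n - 1).
# A log-log linear fit gives (n - 1) as the slope and log(K) as the intercept.
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0          # flow-behavior index; n < 1 indicates shear thinning
K = np.exp(intercept)    # consistency index (Pa*s^n)

eta_at_10 = K * 10.0 ** (n - 1.0)
print(f"n = {n:.2f}, K = {K:.1f} Pa*s^n, eta(10 1/s) = {eta_at_10:.1f} Pa*s")
print("below ~10 Pa*s VP guideline at 10 1/s:", eta_at_10 < 10.0)
```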

  5. Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it. However, pretrained language models (LM), the foundation of most modern QA systems, do not robustly represent latent relationships between concepts, which is necessary for reasoning. While knowledge graphs (KG) are often used to augment LMs with structured representations of world knowledge, it remains an open question how to effectively fuse and reason over the KG representations and the language context, which provides situational constraints and nuances. In this work, we propose GreaseLM, a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations. Information from both modalities propagates to the other, allowing language context representations to be grounded by structured world knowledge, and allowing linguistic nuances (e.g., negation, hedging) in the context to inform the graph representations of knowledge. Our results on three benchmarks in the commonsense reasoning (i.e., CommonsenseQA, OpenbookQA) and medical question answering (i.e., MedQA-USMLE) domains demonstrate that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger. 
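One plausible reading of a single modality-interaction step as the abstract describes it: a designated interaction token on the language side and an interaction node on the graph side are jointly transformed so that each side absorbs information from the other. The dimensions, the MLP form, and the random initialization below are assumptions for illustration, not GreaseLM's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_lm, d_gnn = 768, 200   # hidden sizes; illustrative, not the paper's exact values

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP with a tanh nonlinearity."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Parameters of one interaction layer (randomly initialized here).
d = d_lm + d_gnn
w1, b1 = rng.normal(0, 0.02, (d, d)), np.zeros(d)
w2, b2 = rng.normal(0, 0.02, (d, d)), np.zeros(d)

def interaction_step(int_token: np.ndarray, int_node: np.ndarray):
    """Fuse the LM interaction token with the KG interaction node and split back."""
    fused = mlp(np.concatenate([int_token, int_node]), w1, b1, w2, b2)
    return fused[:d_lm], fused[d_lm:]

# One step: both modalities now carry information from the other.
token, node = rng.normal(size=d_lm), rng.normal(size=d_gnn)
new_token, new_node = interaction_step(token, node)
print(new_token.shape, new_node.shape)   # (768,) (200,)
```

Stacking several such steps, interleaved with ordinary LM and GNN layers, is what "multiple layers of modality interaction operations" refers to in the abstract.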
  6. Transformers provide a class of expressive architectures that are extremely effective for sequence modeling. However, the key limitation of transformers is their quadratic memory and time complexity O(L²) with respect to the sequence length L in attention layers, which restricts their application to extremely long sequences. Most existing approaches leverage sparsity or low-rank assumptions in the attention matrix to reduce cost, but sacrifice expressiveness. Instead, we propose Combiner, which provides full attention capability in each attention head while maintaining low computation and memory complexity. The key idea is to treat the self-attention mechanism as a conditional expectation over embeddings at each location, and approximate the conditional distribution with a structured factorization. Each location can attend to all other locations, either via direct attention, or through indirect attention to abstractions, which are again conditional expectations of embeddings from corresponding local regions. We show that most sparse attention patterns used in existing sparse transformers are able to inspire the design of such factorization for full attention, resulting in the same sub-quadratic cost (O(L log L) or O(L√L)). Combiner is a drop-in replacement for attention layers in existing transformers and can be easily implemented in common frameworks. An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach, yielding state-of-the-art results on several image and text modeling tasks.
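A minimal sketch of the direct-plus-summary attention pattern described above: each position attends to the keys in its own block and to one mean-pooled summary per other block, giving roughly O(L√L) work when the block size is about √L. The real Combiner derives the weights as a properly normalized structured factorization of the attention distribution, so this only illustrates the access pattern, with hypothetical shapes:

```python
import numpy as np

def block_summary_attention(q, k, v, block: int):
    """Each query attends to its own block directly and to per-block summaries indirectly.

    Cost is O(L * (block + L/block)) instead of O(L^2). Mean pooling stands in for
    the "abstraction" of a region; Combiner's actual weighting differs.
    """
    L, d = q.shape
    n_blocks = (L + block - 1) // block
    pad = n_blocks * block - L
    k_pad = np.pad(k, ((0, pad), (0, 0)))
    v_pad = np.pad(v, ((0, pad), (0, 0)))
    k_sum = k_pad.reshape(n_blocks, block, d).mean(axis=1)   # summary keys
    v_sum = v_pad.reshape(n_blocks, block, d).mean(axis=1)   # summary values

    out = np.zeros_like(q)
    for i in range(L):
        b = i // block
        lo, hi = b * block, min((b + 1) * block, L)
        keys = np.concatenate([k[lo:hi], np.delete(k_sum, b, axis=0)])
        vals = np.concatenate([v[lo:hi], np.delete(v_sum, b, axis=0)])
        scores = keys @ q[i] / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[i] = w @ vals
    return out

rng = np.random.default_rng(0)
L, d = 64, 16
q, k, v = (rng.normal(size=(L, d)) for _ in range(3))
print(block_summary_attention(q, k, v, block=8).shape)  # (64, 16)
```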
  7. Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point. Furthermore, prior work can only handle queries that use conjunctions (∧) and existential quantifiers (∃). Handling queries with logical disjunctions (∨) remains an open problem. Here we propose QUERY2BOX, an embedding-based framework for reasoning over arbitrary queries with ∧, ∨, and ∃ operators in massive and incomplete KGs. Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query. We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embedding with dimension proportional to the number of KG entities. However, we show that by transforming queries into Disjunctive Normal Form, QUERY2BOX is capable of handling arbitrary logical queries with ∧, ∨, and ∃ in a scalable manner. We demonstrate the effectiveness of QUERY2BOX on three large KGs and show that QUERY2BOX achieves up to 25% relative improvement over the state of the art.
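The geometric intuition behind box embeddings can be sketched directly: a query box is a center plus nonnegative offsets, a conjunction corresponds to intersecting boxes, and candidate answers are entities whose embeddings fall inside (or close to) the box. QUERY2BOX learns the intersection with an attention mechanism and uses a weighted inside/outside distance, so the plain geometric version below, with made-up queries and entities, is only an illustration:

```python
import numpy as np

class Box:
    """Axis-aligned box in embedding space, stored as a center and a nonnegative offset."""
    def __init__(self, center, offset):
        self.center = np.asarray(center, dtype=float)
        self.offset = np.abs(np.asarray(offset, dtype=float))

    @property
    def lower(self):
        return self.center - self.offset

    @property
    def upper(self):
        return self.center + self.offset

def intersect(a: Box, b: Box) -> Box:
    """Geometric intersection of two boxes (models a conjunction of two sub-queries)."""
    lo = np.maximum(a.lower, b.lower)
    hi = np.minimum(a.upper, b.upper)
    hi = np.maximum(hi, lo)          # an empty intersection collapses to a degenerate box
    return Box((lo + hi) / 2, (hi - lo) / 2)

def dist_outside(entity, box: Box) -> float:
    """How far an entity embedding lies outside the box (0 if it is an answer candidate)."""
    e = np.asarray(entity, dtype=float)
    return float(np.linalg.norm(np.maximum(e - box.upper, 0) + np.maximum(box.lower - e, 0)))

q1 = Box(center=[0.0, 0.0], offset=[2.0, 1.0])   # hypothetical query box A
q2 = Box(center=[1.0, 0.0], offset=[2.0, 2.0])   # hypothetical query box B
both = intersect(q1, q2)                          # conjunction of the two queries
print(dist_outside([0.5, 0.2], both))             # 0.0 -> inside the intersection
print(dist_outside([3.0, 0.0], both))             # > 0 -> outside
```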
  8. Abstract

    We report on multiwavelength target-of-opportunity observations of the blazar PKS 0735+178, located 2.2° away from the best-fit position of the IceCube neutrino event IceCube-211208A detected on 2021 December 8. The source was in a high-flux state in the optical, ultraviolet, X-ray, and GeV γ-ray bands around the time of the neutrino event, exhibiting daily variability in the soft X-ray flux. The X-ray data from Swift-XRT and NuSTAR characterize the transition between the low-energy and high-energy components of the broadband spectral energy distribution (SED), and the γ-ray data from Fermi-LAT, VERITAS, and H.E.S.S. require a spectral cutoff near 100 GeV. Both the X-ray and γ-ray measurements provide strong constraints on the leptonic and hadronic models. We analytically explore a synchrotron self-Compton model, an external Compton model, and a lepto-hadronic model. Models that are entirely based on internal photon fields face serious difficulties in matching the observed SED. The existence of an external photon field in the source would instead explain the observed γ-ray spectral cutoff in both the leptonic and lepto-hadronic models and allow a proton jet power that marginally agrees with the Eddington limit in the lepto-hadronic model. We show a numerical lepto-hadronic model with external target photons that reproduces the observed SED and is reasonably consistent with the neutrino event despite requiring a high jet power.
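The comparison of the proton jet power with the Eddington limit rests on the standard Eddington luminosity, L_Edd = 4πGMm_p c/σ_T ≈ 1.26 × 10^38 (M/M_⊙) erg s^-1. A quick sketch of that arithmetic for a few assumed black-hole masses (the masses are placeholders, not values quoted in the paper):

```python
import math

# Physical constants (CGS units)
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.6726e-24      # proton mass, g
c = 2.998e10          # speed of light, cm/s
sigma_T = 6.652e-25   # Thomson cross-section, cm^2
M_sun = 1.989e33      # solar mass, g

def eddington_luminosity(mass_solar: float) -> float:
    """Eddington luminosity in erg/s for a black-hole mass given in solar masses."""
    return 4 * math.pi * G * (mass_solar * M_sun) * m_p * c / sigma_T

# Hypothetical black-hole masses; the abstract does not quote one.
for mass in (1e8, 5e8, 1e9):
    print(f"M = {mass:.0e} M_sun -> L_Edd = {eddington_luminosity(mass):.2e} erg/s")
```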