

Search for: All records

Creators/Authors contains: "Keith, C."


  1. Abstract

    In 2016, the National Oceanic and Atmospheric Administration deployed the first iteration of an operational National Water Model (NWM) to forecast the water cycle in the continental United States. With each version, an hourly, multi-decadal historical simulation is made available to the public. In all versions released to date, the files containing simulated streamflow hold a snapshot of model conditions across the entire domain for a single timestep, which makes accessing time series a technical and resource-intensive challenge. In the most recent release, extracting a complete streamflow time series for a single location requires managing 367,920 files (~16.2 TB). In this work we describe a reproducible process for restructuring a sequential set of NWM streamflow files for efficient time series access and provide restructured datasets for versions 1.2 (1993–2018), 2.0 (1993–2020), and 2.1 (1979–2022). These datasets have been made accessible via an OPeNDAP-enabled THREDDS data server for public use, and a brief analysis highlights that the latest version of the model should not be assumed best for all locations. Lastly, we describe an R package that expedites data retrieval, with examples for multiple use cases.

     
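    Because the restructured archive is served over OPeNDAP, a single reach's time series can in principle be subset remotely without touching the hourly snapshot files. The sketch below illustrates that access pattern in Python with xarray; the server URL, the streamflow variable name, and the feature_id value are placeholders for illustration, not the actual endpoint or the R package described in the paper.

```python
# Hypothetical sketch: remote subsetting of one reach's streamflow series via OPeNDAP.
# The URL, variable name, and feature_id below are placeholders, not the real endpoint.
import xarray as xr

OPENDAP_URL = "https://example-thredds.org/thredds/dodsC/nwm/v2.1/streamflow.nc"  # placeholder
FEATURE_ID = 101  # placeholder reach (feature) identifier

ds = xr.open_dataset(OPENDAP_URL)                # lazy open; no bulk file download
q = ds["streamflow"].sel(feature_id=FEATURE_ID)  # server-side subset of a single reach
series = q.sel(time=slice("1993-01-01", "2018-12-31")).to_series()
print(series.describe())
```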
  2. Abstract

    Objectives

    To compare hospitals that did and did not participate in clinical trials evaluating potential inpatient COVID-19 therapeutics.

    Methods

    We conducted a cross-sectional study of hospitals participating in trials that were registered on clinicaltrials.gov between April and August 2020. Drawing on the 2019 RAND Hospital Dataset and the 2019 American Community Survey, we used logistic regression modeling to compare hospital-level traits, including demographic features, between trial and non-trial hospitals.

    Results

    We included 488 hospitals that were participating in 298 interventional trials and 4232 non-participating hospitals. After controlling for demographic and other hospital traits, we found that teaching status (OR 2.11, 95% CI 1.52–2.95), higher patient acuity (OR 7.48, 95% CI 4.39–13.1), and location in the Northeast (OR 1.83, 95% CI 1.18–2.85) and in wealthier counties (OR 1.32, 95% CI 1.16–1.51) were associated with increased odds of trial participation, while being in counties with more White residents was associated with reduced odds (OR 0.98, 95% CI 0.98–0.99).

    Conclusions

    Hospitals participating and not participating in COVID-19 inpatient treatment clinical trials differed in many ways, with important implications for the generalizability of trial data.

     
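    The odds ratios and confidence intervals above come from hospital-level logistic regression. Below is a minimal sketch of that style of analysis; the column names and simulated data are illustrative stand-ins, not the RAND Hospital Dataset or American Community Survey variables used in the study.

```python
# Minimal sketch of hospital-level logistic regression yielding odds ratios and 95% CIs.
# Column names and the simulated data are illustrative, not the study's actual variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "trial_participation": rng.integers(0, 2, n),  # outcome: hosted a COVID-19 trial?
    "teaching": rng.integers(0, 2, n),
    "patient_acuity": rng.normal(1.0, 0.2, n),
    "northeast": rng.integers(0, 2, n),
    "median_income_10k": rng.normal(6.0, 1.5, n),
    "pct_white": rng.uniform(20, 95, n),
})

model = smf.logit(
    "trial_participation ~ teaching + patient_acuity + northeast"
    " + median_income_10k + pct_white",
    data=df,
).fit(disp=False)

# Exponentiate coefficients and interval bounds to report odds ratios with 95% CIs.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```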
  3. Abstract

    Suzuki–Miyaura cross-coupling reactions are used to modify the tyrosine residues on Bombyx mori silkworm silk proteins using a water-soluble palladium catalyst. First, model reactions using tyrosine derivatives are screened to determine optimal reaction conditions. For these reactions, a variety of aryl boronic acids, solvents, buffers, and temperature ranges are explored. Qualitative information on the reaction progress is collected via high-performance liquid chromatography (HPLC), mass spectrometry (MS), and nuclear magnetic resonance (NMR). Optimized reactions are then applied to silk proteins. The ability to modify silk fibroin in solution is demonstrated by first iodinating the tyrosine residues on the protein and then carrying out Suzuki–Miyaura reactions with a variety of boronic acid derivatives. Modification of silk is confirmed with NMR, ion-exchange chromatography (IEC), UV-vis, and infrared spectroscopy (IR).

     
  4. Abstract

    With an increasing number of continental-scale hydrologic models, the ability to evaluate performance is key to understanding uncertainty and making improvements to the model(s). We hypothesize that any model running a single set of physics cannot be “properly” calibrated for the range of hydroclimatic diversity seen in the continental United States. Here, we evaluate the NOAA National Water Model (NWM) version 2.0 historical streamflow record in over 4,200 natural and controlled basins using the Nash-Sutcliffe Efficiency metric decomposed into relative performance, conditional bias, and unconditional bias. Each of these is evaluated in the context of meteorological, landscape, and anthropogenic characteristics to better understand where the model does poorly, what potentially causes the poor performance, and what similarities systemically poor-performing areas share. The primary objective is to pinpoint traits in places with good/bad performance and low/high bias. NWM relative performance is higher where there is high precipitation, snow coverage (depth and fraction), and barren area. Low relative skill is associated with high potential evapotranspiration, aridity, moisture-and-energy phase correlation, and forest, shrubland, grassland, and impervious area. We see less bias in locations with high precipitation, moisture-and-energy phase correlation, barren, and grassland areas, and more bias in areas with high aridity, snow coverage/fraction, and urbanization. The insights gained can help identify key hydrological factors underpinning NWM predictive skill; reinforce the need for regionalized parameterization and modeling; and help inform heterogeneous modeling systems, like the NOAA Next Generation Water Resource Modeling Framework, to enhance ongoing development and evaluation.

     
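    The decomposition referenced above follows the common rewriting of Nash-Sutcliffe Efficiency (after Murphy 1988 and Gupta et al. 2009) as a correlation term minus conditional and unconditional bias terms. A short sketch of that arithmetic for one basin's paired series is given below; the flow values are illustrative, not NWM output.

```python
# Sketch of the NSE decomposition NSE = r^2 - (r - alpha)^2 - beta^2, where r^2 is the
# relative-performance (correlation) term, (r - alpha)^2 the conditional bias, and
# beta^2 the unconditional bias. Arrays are illustrative, not NWM v2.0 output.
import numpy as np

obs = np.array([10.0, 12.0, 18.0, 25.0, 16.0, 11.0, 9.0, 14.0])   # observed flow
sim = np.array([11.0, 13.0, 15.0, 22.0, 19.0, 12.0, 10.0, 15.0])  # simulated flow

r = np.corrcoef(obs, sim)[0, 1]               # linear correlation
alpha = sim.std() / obs.std()                 # variability ratio
beta = (sim.mean() - obs.mean()) / obs.std()  # normalized mean error

relative_performance = r ** 2
conditional_bias = (r - alpha) ** 2
unconditional_bias = beta ** 2
nse = relative_performance - conditional_bias - unconditional_bias

# Cross-check against the direct NSE formula.
nse_direct = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(round(nse, 4), round(nse_direct, 4))
```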
  5. Chinn, C.; Tan, E.; Chan, C.; Kali, Y. (Eds.)
    Immersive AR technologies can support students’ learning processes and deep engagement with outdoor science pursuits, yet few studies explore these technologies with out-of-school learners. We analyze how immersive AR features built into an outdoor-based mobile app shaped nine families’ learning experiences as they explored pollinator habitats. Preliminary findings revealed that immersive AR scanning tools built into the Pollinator Explorers app guided families’ observational practices of real-world objects through virtual overlays representing pollinator habitats. 
  6. Abstract

    The Neutron Star Interior Composition Explorer (NICER) has a comparatively low background rate, but it is highly variable, and its spectrum must be predicted using measurements unaffected by the science target. We describe an empirical, three-parameter model based on observations of seven pointing directions that are void of detectable sources. Two model parameters track different types of background events, while the third is used to predict a low-energy excess tied to observations conducted in sunlight. An examination of 3556 good time intervals (GTIs), averaging 570 s, yields a median rate (0.4–12 keV; 50 detectors) of 0.87 c s⁻¹, but in 5% (1%) of cases, the rate exceeds 10 (300) c s⁻¹. Model residuals persist at 20%–30% of the initial rate for the brightest GTIs, implying one or more missing model parameters. Filtering criteria are given to flag GTIs likely to have unsatisfactory background predictions. With such filtering, we estimate a detection limit of 1.20 c s⁻¹ (3σ, single GTI) at 0.4–12 keV, equivalent to 3.6 × 10⁻¹² erg cm⁻² s⁻¹ for a Crab-like spectrum. The corresponding limit for soft X-ray sources is 0.51 c s⁻¹ at 0.3–2.0 keV, or 4.3 × 10⁻¹³ erg cm⁻² s⁻¹ for a 100 eV blackbody. These limits would be four times lower if exploratory GTIs accumulate 10 ks of data after filtering at the level prescribed for faint sources. Such filtering selects background GTIs 85% of the time. An application of the model to a 1 s timescale makes it possible to distinguish source flares from possible surges in the background.
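    The stated factor-of-four improvement for 10 ks of accumulated data is consistent with a background-limited detection limit that scales roughly as the inverse square root of exposure time; that scaling is an assumption here, not something the abstract states, but the arithmetic check is short.

```python
# Quick check: if the 3-sigma limit scales ~ 1/sqrt(exposure), going from the median
# 570 s GTI to 10 ks should lower the limits by roughly a factor of four.
# The inverse-square-root scaling is an assumption, not stated in the abstract.
from math import sqrt

median_gti_s = 570.0
deep_exposure_s = 10_000.0
improvement = sqrt(deep_exposure_s / median_gti_s)  # ~4.2

limit_broadband = 1.20 / improvement  # c/s, 0.4-12 keV
limit_soft = 0.51 / improvement       # c/s, 0.3-2.0 keV
print(f"~{improvement:.1f}x deeper -> {limit_broadband:.2f} and {limit_soft:.2f} c/s")
```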
  7.
    A series of complexes with low-energy Fe(II) to Ti(IV) metal-to-metal charge-transfer (MMCT) transitions, Cp₂Ti(C₂Fc)₂, Cp*₂Ti(C₂Fc)₂, and (MeOOC)Cp₂Ti(C₂Fc)₂, was investigated using solvatochromism and resonance Raman spectroscopy (RRS) augmented with time-dependent density functional theory (TDDFT) calculations in order to interrogate the nature of the CT transitions. Computational models were benchmarked against the experimental UV-Vis spectra, and B3LYP/6-31G(d) was found to most faithfully represent the spectra. The energy of the MMCT transition was measured in 15 different solvents and a multivariate fit to the Catalán solvent parameters – solvent polarizability (SP), solvent dipolarity (SdP), solvent basicity (SB), and solvent acidity (SA) – was performed. The effect of SP indicates a greater degree of electron delocalization in the excited state (ES) than the ground state (GS). The small negative solvatochromism with respect to SdP indicates a smaller dipole moment in the ES than the GS. The effect of SB is consistent with charge transfer to Ti. Upon excitation into the MMCT absorption band, the RRS data show enhancement of the alkyne stretching modes and of the out-of-plane bending modes of the cyclopentadienyl ring connected to Fe and the alkyne bridge. This is consistent with changes in the oxidation states of Ti and Fe, respectively. The higher-energy transitions (350–450 nm) show enhancement of vibrational modes consistent with ethynylcyclopentadienyl to Ti ligand-to-metal charge transfer (LMCT). The RRS data are consistent with the TDDFT-predicted character of these transitions. TDDFT suggests that the lowest-energy transition in Cp₂Ti(C₂Fc)₂CuI, where CuI is coordinated between the alkynes, retains its Fe(II) to Ti(IV) MMCT character, in agreement with the RRS data, but that the lowest-energy transitions have significant CuI to Ti character. For Cp₂Ti(C₂Fc)₂CuI, excitation into the low-energy MMCT absorption band results in selective enhancement of the symmetric alkynyl stretching mode.
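    The multivariate solvatochromic analysis described above regresses the measured MMCT transition energy on the four Catalán parameters. A minimal sketch of that kind of linear fit is shown below; the solvent parameter values and transition energies are placeholders, not the measured 15-solvent data set.

```python
# Sketch of a multivariate Catalan-parameter fit: E_MMCT = E0 + a*SP + b*SdP + c*SB + d*SA.
# Solvent parameter values and transition energies are placeholders, not the measured data.
import numpy as np

# columns: SP, SdP, SB, SA for a handful of hypothetical solvents
solvent_params = np.array([
    [0.60, 0.79, 0.97, 0.04],
    [0.68, 0.97, 0.29, 0.00],
    [0.71, 0.65, 0.56, 0.00],
    [0.66, 0.60, 0.03, 0.11],
    [0.63, 0.70, 0.44, 0.40],
    [0.76, 0.97, 0.61, 0.00],
])
e_mmct = np.array([15.4, 15.9, 15.6, 16.1, 15.2, 15.8])  # 10^3 cm^-1, illustrative

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones(len(e_mmct)), solvent_params])
coef, *_ = np.linalg.lstsq(X, e_mmct, rcond=None)
e0, a_sp, b_sdp, c_sb, d_sa = coef
print(f"E0={e0:.2f}  SP={a_sp:.2f}  SdP={b_sdp:.2f}  SB={c_sb:.2f}  SA={d_sa:.2f}")
```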
  8. Abstract

    A digital map of the built environment is useful for a range of economic, emergency-response, and urban-planning exercises, such as helping people find places in app-driven interfaces, helping emergency managers know what locations might be impacted by a flood or fire, and helping city planners proactively identify vulnerabilities and plan for how a city is growing. Since its inception in 2004, OpenStreetMap (OSM) has set the benchmark for open geospatial data and has become a key player in the public, research, and corporate realms. Following the foundations laid by OSM, several open geospatial products describing the built environment have blossomed, including the Microsoft USA building footprint layer and the OpenAddresses project. Each of these products uses different data collection methods, ranging from public contributions to artificial intelligence, and, taken together, they could provide a comprehensive description of the built environment. Yet these projects are still siloed, and their variety makes integration and interoperability a major challenge. Here, we document an approach for merging data from these three major open building datasets and outline a workflow that is scalable to the continental United States (CONUS). We show how the results can be structured as a knowledge graph over which machine learning models are built. These models can help propagate and complete unknown quantities that can then be leveraged in disaster management.

     
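    One ingredient of a merging workflow like the one outlined above is linking records across datasets by spatial relationship, for example attaching address points to building footprints. The GeoPandas sketch below is a hypothetical illustration of that single step; the file paths and layer contents are placeholders, and the paper's actual workflow and schema are not reproduced here.

```python
# Hypothetical sketch of one merge step: attach OpenAddresses-style points to building
# footprints (e.g., OSM or the Microsoft layer) by point-in-polygon spatial join.
# File paths and columns are placeholders, not the paper's schema.
import geopandas as gpd

footprints = gpd.read_file("footprints.gpkg")  # polygon building footprints
addresses = gpd.read_file("addresses.gpkg")    # address points

# Reproject to a common coordinate reference system before joining.
addresses = addresses.to_crs(footprints.crs)

# Each address point inherits the footprint it falls inside; the resulting
# (building, address) pairs could then become edges in a knowledge graph.
joined = gpd.sjoin(addresses, footprints, how="inner", predicate="within")
print(joined.head())
```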