

Title: Near-Ultraviolet to Midwave-Infrared Devices for Quantum Sensing and Information Processing
This talk reviews photonic integrated circuit materials, devices, and integration techniques developed at MIT Lincoln Laboratory to support the needs of next-generation quantum systems across the wavelength spectrum from the near-ultraviolet to the midwave-infrared.
Award ID(s):
2016244
PAR ID:
10589669
Author(s) / Creator(s):
Publisher / Repository:
Optica Publishing Group
Date Published:
ISBN:
978-1-957171-40-1
Page Range / eLocation ID:
NoTh3B.1
Format(s):
Medium: X
Location:
Québec City
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract: FlyBase (www.flybase.org) is the primary online database of genetic, genomic, and functional information about Drosophila melanogaster. The long and rich history of Drosophila research, combined with recent surges in genomic-scale and high-throughput technologies, means that FlyBase now houses a huge quantity of data. Researchers need to be able to query these data rapidly and intuitively, and the QuickSearch tool has been designed to meet these needs. This tool is conveniently located on the FlyBase homepage and is organized into a series of simple tabbed interfaces that cover the major data and annotation classes within the database. This article describes the functionality of all aspects of the QuickSearch tool. With this knowledge, FlyBase users will be equipped to take full advantage of all QuickSearch features and thereby gain improved access to data relevant to their research. © 2023 The Authors. Current Protocols published by Wiley Periodicals LLC.
Basic Protocol 1: Using the “Search FlyBase” tab of QuickSearch
Basic Protocol 2: Using the “Data Class” tab of QuickSearch
Basic Protocol 3: Using the “References” tab of QuickSearch
Basic Protocol 4: Using the “Gene Groups” tab of QuickSearch
Basic Protocol 5: Using the “Pathways” tab of QuickSearch
Basic Protocol 6: Using the “GO” tab of QuickSearch
Basic Protocol 7: Using the “Protein Domains” tab of QuickSearch
Basic Protocol 8: Using the “Expression” tab of QuickSearch
Basic Protocol 9: Using the “GAL4 etc” tab of QuickSearch
Basic Protocol 10: Using the “Phenotype” tab of QuickSearch
Basic Protocol 11: Using the “Human Disease” tab of QuickSearch
Basic Protocol 12: Using the “Homologs” tab of QuickSearch
Support Protocol 1: Managing FlyBase hit lists
  2. This artifact contains the source code for FlakeRake, a tool for automatically reproducing timing-dependent flaky-test failures, along with the raw and processed results produced in the evaluation of FlakeRake.
Contents:
- Timing-related APIs at which FlakeRake considers adding sleeps: timing-related-apis
- Anonymized code for FlakeRake (not runnable in its anonymized state, but included for reference; we will publicly release the non-anonymized code under an open-source license pending double-blind review): flakerake.tgz
- Failure messages extracted from the FlakeFlagger dataset: 10k_reruns_failures_by_test.csv.gz
- Output from running isolated reruns on each flaky test in the FlakeFlagger dataset: 10k_isolated_reruns_all_results.csv.gz (all test results summarized into a CSV), 10k_isolated_reruns_failures_by_test.csv.gz (CSV of just the test failures, including failure messages), 10k_isolated_reruns_raw_results.tgz (all raw rerun results, including the XML files output by Maven)
- Output from running the FlakeFlagger replication study (non-isolated 10k reruns): flakeFlaggerReplResults.csv.gz (all test results summarized into a CSV), 10k_reruns_failures_by_test.csv.gz (CSV of just the failures, including failure messages), flakeFlaggerRepl_raw_results.tgz (all raw rerun results, including the XML files output by Maven; this file is markedly larger than the 10k isolated rerun results because this experiment ran *all* tests, whereas the isolated rerun experiment re-ran only the tests known to be flaky from the FlakeFlagger dataset)
- Output from running FlakeRake on each flaky test in the FlakeFlagger dataset: results-bis.tgz (bisection mode) and results-obo.tgz (one-by-one mode)
- Scripts used to execute FlakeRake on an HPC cluster: execution-scripts.tgz
- Scripts used to execute rerun experiments on an HPC cluster: flakeFlaggerReplScripts.tgz
- Scripts used to parse the "raw" Maven test-result XML files in this artifact into the CSV files it contains: parseSurefireXMLs.tgz
- Output from running FlakeRake in "reproduction" mode, attempting to reproduce each of the failures that matched the FlakeFlagger dataset (collected for bisection mode only): results-repro-bis.tgz
- Analysis of timing-dependent API calls in the failure-inducing configurations that matched FlakeFlagger failures: bis-sleepyline.cause-to-matched-fail-configs-found.csv
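The artifact names a "bisection mode" but does not spell out the search itself. As an illustration only, bisection over candidate sleep-injection sites could look like the following minimal sketch, assuming a single site suffices to reproduce the failure (all function and site names here are hypothetical, not taken from the FlakeRake code):

```python
def find_failing_site(sites, fails_with):
    """Binary-search for one sleep site that reproduces a flaky failure.

    `sites` is an ordered list of candidate timing-related program points;
    `fails_with(subset)` reruns the test with sleeps injected at `subset`
    and returns True if the flaky failure reproduces. Assumes at least one
    single site suffices on its own (a simplification of the real tool).
    """
    assert fails_with(sites), "failure must reproduce with all sleeps enabled"
    while len(sites) > 1:
        mid = len(sites) // 2
        left, right = sites[:mid], sites[mid:]
        # Keep whichever half still reproduces the failure.
        sites = left if fails_with(left) else right
    return sites[0]
```

Each round halves the candidate set, so isolating one site costs O(log n) test reruns rather than the O(n) of a one-by-one sweep, which is presumably the trade-off between the two modes above.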
  3. The data provided here accompany the publication "Drought Characterization with GPS: Insights into Groundwater and Reservoir Storage in California" [Young et al., (2024)], which is currently under review with Water Resources Research (as of 28 May 2024). Please refer to the manuscript and its supplemental materials for full details. (A link will be appended following publication.)
File formatting information is listed below, followed by a sub-section of the text describing the Geodetic Drought Index calculation.
The longitude, latitude, and label for grid points are provided in the file "loading_grid_lon_lat".
Time series for each Geodetic Drought Index (GDI) time scale are provided within "GDI_time_series.zip". The included time scales are the 00- (daily), 1-, 3-, 6-, 12-, 18-, 24-, 36-, and 48-month GDI solutions. Files are formatted as follows:
Title: "grid point label L****"_"time scale"_month
File Format: ["decimal date" "GDI value"]
Gridded, epoch-by-epoch solutions for each time scale are provided within "GDI_grids.zip". Files are formatted as follows:
Title: GDI_"decimal date"_"time scale"_month
File Format: ["longitude" "latitude" "GDI value" "grid point label L****"]
2.2 GEODETIC DROUGHT INDEX CALCULATION
We develop the GDI following Vicente-Serrano et al. (2010) and Tang et al. (2023), such that the GDI mimics the derivation of the SPEI, and utilize the log-logistic distribution (further details below). While we apply hydrologic load estimates derived from GPS displacements as the input for this GDI (Figure 1a-d), we note that alternate geodetic drought indices could be derived using other types of geodetic observations, such as InSAR, gravity, strain, or a combination thereof. Therefore, the GDI is a generalizable drought index framework. A key benefit of the SPEI is that it is a multi-scale index, allowing the identification of droughts that occur across different time scales. For example, flash droughts (Otkin et al., 2018), which may develop over the period of a few weeks, and persistent droughts (>18 months) may not be observed or fully quantified in a uni-scale drought index framework. However, by adopting a multi-scale approach these signals can be better identified (Vicente-Serrano et al., 2010). Similarly, in the case of this GPS-based GDI, hydrologic drought signals are expected to develop at time scales that are characteristic both of the drought and of the source of the load variation (i.e., groundwater versus surface water and their respective drainage basin/aquifer characteristics). Thus, to test a range of time scales, the TWS time series are summarized with a retrospective rolling-average window of D (daily with no averaging), 1, 3, 6, 12, 18, 24, 36, and 48 months width (where one month equals 30.44 days). From these time-scale-averaged time series, representative compilation-window load distributions are identified for each epoch. The compilation-window distributions include all dates that range ±15 days from the epoch in question, per year. This allows a characterization of the estimated loads for each day relative to all past/future loads near that day, in order to bolster the sample size and provide more robust parametric estimates [similar to Ford et al., (2016)]; this is a key difference between our GDI derivation and that presented by Tang et al. (2023). Figure 1d illustrates the representative distribution for 01 December of each year at the grid cell co-located with GPS station P349 for the daily TWS solution. Here all epochs between 16 November and 16 December of each year (red dots) are compiled to form the distribution presented in Figure 1e. This approach allows inter-annual variability in the phase and amplitude of the signal to be retained (which is largely driven by variation in the hydrologic cycle), while removing the primary annual and semi-annual signals.
Solutions converge for compilation windows >±5 days, and show a minor increase in scatter of the GDI time series for windows of ±3-4 days (below which instability becomes more prevalent). To ensure robust characterization of drought characteristics, we opt for an extended ±15-day compilation window. While Tang et al. (2023) found the log-logistic distribution to be unstable and opted for a normal distribution, we find that, by using the extended compiled distribution, the solutions are stable, with negligible differences compared to the use of a normal distribution. Thus, to remain aligned with the SPEI solution, we retain the three-parameter log-logistic distribution to characterize the anomalies. Probability-weighted moments for the log-logistic distribution are calculated following Singh et al. (1993) and Vicente-Serrano et al. (2010). The individual moments are calculated following Equation 3. These are then used to calculate the L-moments for the shape, scale, and location parameters of the three-parameter log-logistic distribution (Equations 4-6). The probability density function (PDF) and the cumulative distribution function (CDF) are then calculated following Equations 7 and 8, respectively. The inverse Gaussian function is used to transform the CDF from estimates of the parametric sample quantiles to standard normal index values that represent the magnitude of the standardized anomaly. Here, positive/negative values represent greater/lower than normal hydrologic storage. Thus, an index value of -1 indicates that the estimated load is approximately one standard deviation drier than the expected average load at that epoch. *Equations can be found in the main text.
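The standardization pipeline described above (pool loads within ±15 days of the target day across all years, estimate a distribution, map the CDF through the inverse Gaussian to a standard-normal score) can be sketched in simplified form. This toy version substitutes an empirical CDF for the paper's three-parameter log-logistic fit, and the `values_by_doy` data layout is a hypothetical stand-in for the gridded TWS series:

```python
from statistics import NormalDist

def gdi_sketch(values_by_doy, target_doy, target_value, window=15):
    """Toy standardized drought index for one epoch.

    values_by_doy: dict mapping day-of-year (1..365) -> list of load
    values from all years (hypothetical layout). Pools every sample
    within +/-window days of target_doy, estimates the empirical
    non-exceedance probability of target_value, and transforms it to a
    standard-normal score. Negative scores = drier than normal.
    """
    pool = []
    for doy in range(target_doy - window, target_doy + window + 1):
        pool.extend(values_by_doy.get((doy - 1) % 365 + 1, []))  # wrap year
    rank = sum(v <= target_value for v in pool)
    p = (rank + 0.5) / (len(pool) + 1)  # plotting position, kept in (0, 1)
    return NormalDist().inv_cdf(p)      # inverse Gaussian transform
```

A parametric version would replace the plotting-position estimate with the log-logistic CDF fitted by probability-weighted moments, as the text describes; the inverse-normal step at the end is the same.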
  4. Abstract: The dynamics of soil phosphorus (P) control its bioavailability, yet it remains a challenge to quantify soil P dynamics. Here we developed a soil P dynamics (SPD) model. We then assimilated eight data sets of 426-day changes in Hedley P fractions into the SPD model, to quantify the dynamics of six major P pools in eight soil samples that are representative of a wide range of soil types. The performance of our SPD model was better for labile P, secondary mineral P, and occluded P than for nonoccluded organic P (Po) and primary mineral P. All parameters describing soil P dynamics were approximately constrained by the data sets. The average turnover rates in the greenhouse environment studied were: labile P, 0.040 g g−1 day−1; nonoccluded Po, 0.051 g g−1 day−1; secondary mineral P, 0.023 g g−1 day−1; primary mineral P, 0.00088 g g−1 day−1; occluded Po, 0.0066 g g−1 day−1; and occluded inorganic P, 0.0065 g g−1 day−1. Labile P was transferred on average more to nonoccluded Po (transfer coefficient of 0.42) and secondary mineral P (0.38) than to plants (0.20). Soil pH and organic C concentration were the key soil properties regulating the competition for P between plants and soil secondary minerals. The turnover rate of labile P was positively correlated with those of nonoccluded Po and secondary mineral P. The pool size of labile P was most sensitive to its turnover rate. Overall, we suggest data assimilation can contribute significantly to an improved understanding of soil P dynamics.
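The reported mean turnover rate and transfer coefficients for labile P imply a simple first-order mass balance. The following is a toy explicit-Euler step under that reading, not the authors' full six-pool SPD model; the pool names and dictionary layout are hypothetical:

```python
def step_labile_p(pools, dt=1.0,
                  k_labile=0.040,   # labile-P turnover, g g^-1 day^-1 (from abstract)
                  to_po=0.42,       # transfer coefficient to nonoccluded Po
                  to_sec=0.38,      # transfer coefficient to secondary mineral P
                  to_plant=0.20):   # transfer coefficient to plant uptake
    """Advance labile P by one time step of first-order turnover.

    The outflux k_labile * P_labile * dt is split among nonoccluded
    organic P, secondary mineral P, and plants using the mean transfer
    coefficients reported in the abstract (0.42 + 0.38 + 0.20 = 1, so
    mass is conserved). Returns a new pools dict; input is unmodified.
    """
    out = k_labile * pools["labile"] * dt
    new = dict(pools)
    new["labile"] -= out
    new["nonoccluded_po"] += to_po * out
    new["secondary_mineral"] += to_sec * out
    new["plant"] += to_plant * out
    return new
```

For example, starting from 100 units of labile P, one daily step moves 4 units out of the labile pool, of which 1.68 go to nonoccluded Po, 1.52 to secondary mineral P, and 0.80 to plants.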
  5. In our increasingly data-driven society, it is critical for high school students to learn to integrate computational thinking with other disciplines in solving real-world problems. To address this need for the life sciences in particular, we have developed the Bio-CS Bridge, a modular computational system coupled with curriculum integrating biology and computer science. Our transdisciplinary team comprises university and high school faculty and students with expertise in biology, computer science, and education. Our approach engages students and teachers in scientific practices using biological data that they can collect themselves, and computational tools that they help to design and implement, to address the real-world problem of pollinator decline. Our modular approach to high school curriculum design provides teachers with the educational flexibility to address national and statewide biology and computer science standards for a wide range of learner types. We are using a teacher-leader model to disseminate the Bio-CS Bridge, whose components will be freely available online.