Title: A general construction method for component orthogonal arrays
Abstract

Order‐of‐addition (OofA) experiments have gained renewed attention in recent years, especially in regard to their design. For these experiments, the response is determined by the order in which components are added. A particularly useful design introduced in OofA experiments is the component orthogonal array (COA). The COA maintains pairwise balance between any two components while also ensuring each component appears equally often in each position. In this paper, we propose an efficient algorithm for constructing COAs which can be naturally split into blocks of Latin squares. These blocks can be run sequentially in a systematic order, potentially requiring fewer runs to identify optimal orderings, while also preserving good properties should the overall design be needed. We also show how to extend this construction method to create designs for any number of components.
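The two balance properties described in the abstract are straightforward to verify on a candidate design. Below is a minimal Python sketch (an illustration only, not the paper's construction algorithm, which requires the full text; the function name and the toy full-permutation design are ours). It checks that each component appears equally often in every position and that every ordered pair of components appears equally often in every pair of positions.

# Hedged sketch: verify the two COA balance properties on a candidate design.
from itertools import combinations, permutations
from collections import Counter

def is_coa(design, m):
    # Property 1: each component appears equally often in every position.
    for pos in range(m):
        counts = Counter(row[pos] for row in design)
        if len(set(counts.values())) != 1:
            return False
    # Property 2: every ordered pair of distinct components appears
    # equally often in every pair of positions.
    for p, q in combinations(range(m), 2):
        counts = Counter((row[p], row[q]) for row in design)
        if len(set(counts.values())) != 1:
            return False
    return True

design = list(permutations(range(3)))  # all 3! = 6 orderings of 3 components
print(is_coa(design, 3))               # True: the full design is trivially a COA

The full design of all m! orderings always passes; the practical interest, as the abstract notes, is in much smaller COAs that retain these properties while splitting into sequentially runnable Latin-square blocks.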

 
NSF-PAR ID: 10484916
Author(s) / Creator(s): ; ;
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Journal Name: Quality and Reliability Engineering International
Volume: 40
Issue: 1
ISSN: 0748-8017
Page Range: p. 712-721
Sponsoring Org: National Science Foundation
More Like this
  1. Obeid, Iyad; Selesnick, I. (Eds.)
    Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve the quality and efficiency of patient care [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and fewer than 1 false alarm (FA) per 24 hours [3]. Some commercial tools, including the Olympic Brainz Monitor [4] and Persyst 14 [5], have recently claimed to reach such performance levels. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must always be causal and must have minimum latency, which is often defined by domain experts.

The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1, or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG channel separately. We use the hypotheses generated by the P1 model to create additional features that carry information about the detected events and their confidence. The P2 model uses these additional features together with the LFCC features to learn the temporal and spatial aspects of the EEG signals using a hybrid convolutional neural network (CNN) and LSTM model. Finally, Phase 3 aggregates the results from both P1 and P2 before applying a final postprocessing step.

The online system implements Phase 1 by taking advantage of the Linux piping mechanism, multithreading techniques, and multi-core processors. To convert Phase 1 into an online system, we divide it into five major modules: signal preprocessor, feature extractor, event decoder, postprocessor, and visualizer. The system reads 0.1-second frames from each EEG channel and sends them to the feature extractor and the visualizer. The feature extractor generates LFCC features in real time from the streaming EEG signal. Next, the system computes seizure and background probabilities using a channel-based LSTM model and applies a postprocessor to aggregate the detected events across channels. The system then displays the EEG signal and the decisions simultaneously using a visualization module. The online system uses C++, Python, TensorFlow, and PyQtGraph in its implementation.

The online system accepts streamed EEG data sampled at 250 Hz as input. The system begins processing the EEG signal by applying a TCP montage [8]. Depending on the type of montage, the EEG signal can have either 22 or 20 channels. To enable online operation, we send 0.1-second (25-sample) frames from each channel of the streamed EEG signal to the feature extractor and the visualizer. Feature extraction is performed sequentially on each channel. The signal preprocessor writes the sample frames into two streams to feed these modules. In the first stream, the feature extractor receives the signals via stdin. In parallel, as a second stream, the visualizer shares a user-defined file with the signal preprocessor. This user-defined file holds raw signal information as a buffer for the visualizer. The signal preprocessor writes into the file while the visualizer reads from it.
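For concreteness, a toy Python sketch of the 0.1-second framing step described above (the wire format here — interleaved little-endian float32 samples and a 22-channel montage — is our assumption; the abstract does not specify the actual stream layout):

# Hedged sketch: read 0.1 s frames (25 samples/channel at 250 Hz) from a pipe
# and forward them, mimicking the signal preprocessor's first stream.
import sys
import numpy as np

FRAME = 25       # 0.1 s at 250 Hz
CHANNELS = 22    # TCP montage yields 22 or 20 channels; 22 assumed here

def stream_frames(source):
    """Yield (CHANNELS, FRAME) arrays from a raw little-endian float32 stream."""
    frame_bytes = CHANNELS * FRAME * 4
    while True:
        buf = source.read(frame_bytes)
        if len(buf) < frame_bytes:   # end of stream
            break
        yield np.frombuffer(buf, dtype="<f4").reshape(CHANNELS, FRAME)

for frame in stream_frames(sys.stdin.buffer):
    sys.stdout.buffer.write(frame.tobytes())  # forward to the feature extractor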
Reading from and writing to the same file poses a challenge: the visualizer can start reading while the signal preprocessor is still writing. To resolve this issue, we use a file locking mechanism in the signal preprocessor and the visualizer. Each process temporarily locks the file, performs its operation, releases the lock, and tries to reacquire the lock after a waiting period. The file locking mechanism ensures that only one process can access the file at a time by preventing other processes from reading or writing while one process is modifying it [9] (a minimal sketch of this pattern follows this passage).

The feature extractor uses circular buffers to save 0.3 seconds, or 75 samples, from each channel for extracting 0.2-second (50-sample) center-aligned windows. The module generates 8 absolute LFCC features, where the zeroth cepstral coefficient is replaced by a temporal-domain energy term. Three pipelines are used to extract the remaining features. The differential energy feature is calculated in a 0.9-second absolute-feature window with a frame size of 0.1 seconds, as the difference between the maximum and minimum temporal energy terms in that range. The first-derivative, or delta, features are then calculated using another 0.9-second window. Finally, the second-derivative, or delta-delta, features are calculated using a 0.3-second window [6]. The differential energy for the delta-delta features is not included. In total, we extract 26 features from the raw sample windows, which adds 1.1 seconds of delay to the system.

We used the Temple University Hospital Seizure Database (TUSZ) v1.2.1 to develop the online system [10]. The statistics for this dataset are shown in Table 1. A channel-based LSTM model was trained using features derived from the train set with the online feature extractor module, and a window-based normalization technique was applied to those features. In the offline model, we scale features by normalizing with the maximum absolute value of a channel [11] before applying a sliding-window approach. Since the online system has access to only a limited amount of data, we instead normalize based on the observed window. The model uses feature vectors with a frame size of 1 second and a window size of 7 seconds. We evaluated the model using the offline P1 postprocessor to determine the efficacy of the delayed features and the window-based normalization technique. As shown by the results of experiments 1 and 4 in Table 2, these changes give performance comparable to the offline model.

The online event decoder module uses this trained model to compute probabilities for the seizure and background classes. These posteriors are then postprocessed to remove spurious detections. The online postprocessor receives and saves 8 seconds of class posteriors in a buffer for further processing. It applies multiple heuristic filters (e.g., a probability threshold) to make an overall decision by combining events across the channels. These filters evaluate the average confidence, the duration of a seizure, and the channels on which the seizures were observed. The postprocessor delivers the label and confidence to the visualizer. The visualizer starts to display the signal as soon as it gains access to the signal file, as shown in Figure 1 by the “Signal File” and “Visualizer” blocks. Once the visualizer receives the label and confidence for the latest epoch from the postprocessor, it overlays the decision and color-codes that epoch.
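As promised above, a minimal writer-side sketch of the file locking pattern. POSIX advisory locks via Python's fcntl module are our assumption — the abstract says only that a file locking mechanism is used [9], not which primitive:

# Hedged sketch: exclusive advisory lock around each write to the shared
# signal buffer file; the visualizer would take the lock the same way to read.
import fcntl
import time

def locked_write(path, frame_bytes, retry_s=0.01):
    """Append one frame to the shared buffer file under an exclusive lock."""
    while True:
        with open(path, "ab") as f:
            try:
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking try
            except BlockingIOError:
                time.sleep(retry_s)        # the reader holds the lock; wait, retry
                continue
            f.write(frame_bytes)           # safe: the reader is excluded
            fcntl.flock(f, fcntl.LOCK_UN)  # release before the next frame
            return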
The visualizer uses red for seizures, with the label SEIZ, and green for the background class, with the label BCKG. Once the streaming finishes, the system saves three files: a signal file, in which the sample frames are saved in the order they were streamed; a time-segmented event (TSE) file with the overall decisions and confidences; and a hypotheses (HYP) file that saves the label and confidence for each epoch. The user can plot the signal and decisions from the signal and HYP files with the visualizer alone by enabling the appropriate options.

To compare the performance of different stages of development, we used the test set of the TUSZ v1.2.1 database, which contains 1015 EEG records of varying duration. The any-overlap performance [12] of the overall system shown in Figure 2 is 40.29% sensitivity with 5.77 FAs per 24 hours. For comparison, the previous state-of-the-art model developed on this database performed at 30.71% sensitivity with 6.77 FAs per 24 hours [3]. The individual performances of the deep learning phases are as follows: Phase 1 (P1) reaches 39.46% sensitivity with 11.62 FAs per 24 hours, and Phase 2 detects seizures with 41.16% sensitivity and 11.69 FAs per 24 hours. For the online system, we trained an LSTM model with the delayed features and the window-based normalization technique. Using the offline decoder and postprocessor, this model performed at 36.23% sensitivity with 9.52 FAs per 24 hours. The trained model was then evaluated with the online modules; the current performance of the overall online system is 45.80% sensitivity with 28.14 FAs per 24 hours. Table 2 summarizes the performances of these systems. The performance of the online system deviates from that of the offline P1 model because the online postprocessor fails to combine events when the seizure probability fluctuates during an event.

The modules in the online system add a total of 11.1 seconds of delay for processing each second of data, as shown in Figure 3. In practice, we must also count the time needed to load the model and start the visualizer block; accounting for this, the system takes about 15 seconds to display the first hypothesis, and it detects seizure onsets with an average latency of 15 seconds. Implementing an automatic seizure detection model in real time is not trivial: we used a variety of techniques, such as the file locking mechanism, multithreading, circular buffers, real-time event decoding, and signal-decision plotting, to realize the system. A video demonstrating the system is available at: https://www.isip.piconepress.com/projects/nsf_pfi_tt/resources/videos/realtime_eeg_analysis/v2.5.1/video_2.5.1.mp4. The final conference submission will include a more detailed analysis of the online performance of each module.

ACKNOWLEDGMENTS
Research reported in this publication was most recently supported by the National Science Foundation Partnership for Innovation award number IIP-1827565 and the Pennsylvania Commonwealth Universal Research Enhancement Program (PA CURE). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.

REFERENCES
[1] A. Craik, Y. He, and J. L. Contreras-Vidal, “Deep learning for electroencephalogram (EEG) classification tasks: a review,” J. Neural Eng., vol. 16, no. 3, p. 031001, 2019. https://doi.org/10.1088/1741-2552/ab0ab5.
[2] A. C. Bridi, T. Q. Louro, and R. C. L. Da Silva, “Clinical Alarms in intensive care: implications of alarm fatigue for the safety of patients,” Rev. Lat. Am. Enfermagem, vol. 22, no. 6, p. 1034, 2014. https://doi.org/10.1590/0104-1169.3488.2513.
[3] M. Golmohammadi, V. Shah, I. Obeid, and J. Picone, “Deep Learning Approaches for Automatic Seizure Detection from Scalp Electroencephalograms,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York, NY, USA: Springer, 2020, pp. 233–274. https://doi.org/10.1007/978-3-030-36844-9_8.
[4] “CFM Olympic Brainz Monitor.” [Online]. Available: https://newborncare.natus.com/products-services/newborn-care-products/newborn-brain-injury/cfm-olympic-brainz-monitor. [Accessed: 17-Jul-2020].
[5] M. L. Scheuer, S. B. Wilson, A. Antony, G. Ghearing, A. Urban, and A. I. Bagic, “Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset,” J. Clin. Neurophysiol., 2020. https://doi.org/10.1097/WNP.0000000000000709.
[6] A. Harati, M. Golmohammadi, S. Lopez, I. Obeid, and J. Picone, “Improved EEG Event Classification Using Differential Energy,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium, 2015, pp. 1–4. https://doi.org/10.1109/SPMB.2015.7405421.
[7] V. Shah, C. Campbell, I. Obeid, and J. Picone, “Improved Spatio-Temporal Modeling in Automated Seizure Detection using Channel-Dependent Posteriors,” Neurocomputing, 2021.
[8] W. Tatum, A. Husain, S. Benbadis, and P. Kaplan, Handbook of EEG Interpretation. New York, NY, USA: Demos Medical Publishing, 2007.
[9] D. P. Bovet and M. Cesati, Understanding the Linux Kernel, 3rd ed. O’Reilly Media, Inc., 2005. https://www.oreilly.com/library/view/understanding-the-linux/0596005652/.
[10] V. Shah et al., “The Temple University Hospital Seizure Detection Corpus,” Front. Neuroinform., vol. 12, pp. 1–6, 2018. https://doi.org/10.3389/fninf.2018.00083.
[11] F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011. https://dl.acm.org/doi/10.5555/1953048.2078195.
[12] J. Gotman, D. Flanagan, J. Zhang, and B. Rosenblatt, “Automatic seizure detection in the newborn: Methods and initial evaluation,” Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 356–362, 1997. https://doi.org/10.1016/S0013-4694(97)00003-9.
  2.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem

Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†

* These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section.

J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

Introduction

This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, which unfortunately shrinks the pool of possible participants and loses experts who could dedicate their time to solving important problems. Participation is restricted even further for any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

The challenge: enabling wide participation

With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (globally), we designed a use case around previous work in epileptic seizure prediction [3]. In this “Deep Learning Epilepsy Detection Challenge”, participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. To provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles: 1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist). 2. No participant should need to be an expert data scientist. 3. No participant should need more than basic programming knowledge (i.e.,
no participant should need to learn how to process fringe data formats and stream data efficiently). 4. No participant should need to provide their own computing resources. In addition to the above, our platform should further:
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards.

The platform

The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components.

(1) A web portal serves as the entry point to challenge participation, providing challenge information such as timelines, challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge.

(2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, allowing users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below.

(3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook alone, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data store to which only the challenge team had access.

(4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage, from which it requested recorded data and to which it stored the participants' code and trained models.

(5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms.

(6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used.

Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
Figure 1: High-level architecture of the challenge platform

Measuring success

The competitive phase of the “Deep Learning Epilepsy Detection Challenge” ran for 6 months. Twenty-five teams, comprising 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran their algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performances that would reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. We therefore expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.

Figure 2: Primary occupation as reported by challenge participants

Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being software engineers, and 2 had expertise in neuroscience. Figure 2 shows that participants had a variety of specialisations, including some in no way related to data science, software engineering, or neuroscience. No single participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.

Conclusion

Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different kinds of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills including sales and design, participated in this highly technical challenge.
  3. Abstract

    The spatial distribution of population affects disease transmission, especially when shelter-in-place orders restrict mobility for a large fraction of the population. The spatial network structure of settlements therefore imposes a fundamental constraint on the spatial distribution of the population through which a communicable disease can spread. In this analysis we use the spatial network structure of lighted development as a proxy for the distribution of ambient population to compare the spatiotemporal evolution of COVID-19 confirmed cases in the USA and China. The Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band sensor on the NASA/NOAA Suomi satellite has been imaging night light at ~700 m resolution globally since 2012. Comparisons with sub-kilometer resolution census observations in different countries across different levels of development indicate that night light luminance scales with population density over ~3 orders of magnitude. Moreover, VIIRS' constant ~700 m resolution can provide a more detailed representation of population distribution in peri-urban and rural areas, where aggregated census blocks lack comparable spatial detail.

By varying the low-luminance threshold of VIIRS-derived night light, we depict spatial networks of lighted development of varying degrees of connectivity within which populations are distributed. The resulting size distributions of spatial network components (connected clusters of nodes) vary with degree of connectivity but maintain consistent scaling over a wide range (5× to 10× in area and number) of network sizes. At continental scales, spatial network rank-size distributions obtained from VIIRS night light brightness are well described by power laws with exponents near −2 (slopes near −1) for a wide range of low-luminance thresholds; a toy version of this rank-size computation is sketched after this abstract. The largest components (10⁴ to 10⁵ km²) represent spatially contiguous agglomerations of urban, suburban, and peri-urban development, while the smallest components represent isolated rural settlements.

Projecting county- and city-level numbers of confirmed COVID-19 cases for the USA and China (respectively) onto the corresponding spatial networks of lighted development allows the spatiotemporal evolution of the epidemic (infection and detection) to be quantified as propagation within networks of varying connectivity. Results for China show rapid nucleation and diffusion in January 2020, followed by rapid decreases in new cases in February. While most of the largest cities in China showed new confirmed cases approaching zero before the end of February, most of these cities also showed distinct second waves of cases in March or April. Whereas new cases in Wuhan did not approach zero until mid-March, as of December 2020 it had not yet experienced a second wave of cases. In contrast, the results for the USA show a wide range of trajectories, with an abrupt transition from slow increases in confirmed cases in a small number of network components in January and February to rapid geographic dispersion to a larger number of components shortly before mobility reductions occurred in March. Results indicate that while most of the upper tail of the network had been exposed by the end of March, the lower tail of the component size distribution has shown steep increases only since mid-June.
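The thresholding and rank-size steps can be illustrated compactly. The following Python sketch uses a random stand-in image rather than VIIRS data, and the threshold value and connectivity choice are our assumptions, not the authors' processing chain:

# Hedged sketch: label connected lighted components above a low-luminance
# threshold and fit the log-log rank-size slope.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
nightlight = rng.lognormal(mean=0.0, sigma=1.0, size=(512, 512))  # stand-in image

threshold = 2.0                      # assumed low-luminance cutoff
lit = nightlight > threshold
labels, n = ndimage.label(lit)       # 4-connected components by default
sizes = np.sort(np.bincount(labels.ravel())[1:])[::-1]  # areas, descending

ranks = np.arange(1, sizes.size + 1)
slope, intercept = np.polyfit(np.log10(ranks), np.log10(sizes), 1)
print(f"rank-size slope ~ {slope:.2f}")  # near -1 would match the reported power law

On real night-light mosaics, lowering the threshold merges components into larger networks; the abstract's finding is that the rank-size slope stays near −1 across that whole range of thresholds.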

     
  4. Abstract

    This paper examines a class of involution-constrained PDEs where some part of the PDE system evolves a vector field whose curl remains zero or grows in proportion to specified source terms. Such PDEs are referred to as curl-free or curl-preserving, respectively. They arise very frequently in equations for hyperelasticity and compressible multiphase flow, in certain formulations of general relativity and in the numerical solution of Schrödinger’s equation. Experience has shown that if nothing special is done to account for the curl-preserving vector field, it can blow up in a finite amount of simulation time. In this paper, we catalogue a class of DG-like schemes for such PDEs. To retain the globally curl-free or curl-preserving constraints, the components of the vector field, as well as their higher moments, must be collocated at the edges of the mesh. They are updated using potentials collocated at the vertices of the mesh. The resulting schemes: (i) do not blow up even after very long integration times, (ii) do not need any special cleaning treatment, (iii) can operate with large explicit timesteps, (iv) do not require the solution of an elliptic system and (v) can be extended to higher orders using DG-like methods. The methods rely on a special curl-preserving reconstruction and they also rely on multidimensional upwinding. The Galerkin projection, highly crucial to the design of a DG method, is now conducted at the edges of the mesh and yields a weak form update that uses potentials obtained at the vertices of the mesh with the help of a multidimensional Riemann solver. A von Neumann stability analysis of the curl-preserving methods is conducted and the limiting CFL numbers of this entire family of methods are catalogued in this work. The stability analysis confirms that with the increasing order of accuracy, our novel curl-free methods have superlative phase accuracy while substantially reducing dissipation. We also show that PNPM-like methods, which only evolve the lower moments while reconstructing the higher moments, retain much of the excellent wave propagation characteristics of the DG-like methods while offering a much larger CFL number and lower computational complexity. The quadratic energy preservation of these methods is also shown to be excellent, especially at higher orders. The methods are also shown to be curl-preserving over long integration times.
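To make the involution concrete, consider a generic model problem (our illustration, not one of the specific systems treated in the paper) in which a vector field \(\mathbf{v}\) is evolved by the gradient of a potential \(\phi\) with a source \(\mathbf{S}\):

\[
\partial_t \mathbf{v} + \nabla \phi = \mathbf{S}
\quad\Longrightarrow\quad
\partial_t \left( \nabla \times \mathbf{v} \right) = \nabla \times \mathbf{S},
\]

since \(\nabla \times \nabla \phi \equiv 0\). The field therefore stays curl-free when the initial data are curl-free and \(\nabla \times \mathbf{S} = 0\), and otherwise its curl grows exactly in proportion to the source — the curl-preserving case described above. The schemes catalogued in this paper mimic this identity discretely by updating edge-collocated components of the vector field with vertex-collocated potentials, so the discrete curl obeys the same telescoping cancellation.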

     
  5. Excessive phosphorus (P) applications to croplands can contribute to eutrophication of surface waters through surface runoff and subsurface (leaching) losses. We analyzed leaching losses of total dissolved P (TDP) from no-till corn, hybrid poplar (Populus nigra × P. maximowiczii), switchgrass (Panicum virgatum), miscanthus (Miscanthus giganteus), native grasses, and restored prairie, all planted in 2008 on former cropland in Michigan, USA. All crops except corn (13 kg P ha⁻¹ year⁻¹) were grown without P fertilization. Biomass was harvested at the end of each growing season except for poplar. Soil water at 1.2 m depth was sampled weekly to biweekly for TDP determination during March–November of 2009–2016 using tension lysimeters. Soil test P (STP; 0–25 cm depth) was measured every autumn. Soil water TDP concentrations were usually below levels at which eutrophication of surface waters is frequently observed (>0.02 mg L⁻¹) but often higher than in deep groundwater or nearby streams and lakes. Rates of P leaching, estimated from measured concentrations and modeled drainage, did not differ statistically among cropping systems across years; 7-year cropping system means ranged from 0.035 to 0.072 kg P ha⁻¹ year⁻¹ with large interannual variation. Leached P was positively related to STP, which decreased over the 7 years in all systems. These results indicate that both P-fertilized and unfertilized cropping systems may leach legacy P from past cropland management.

Experimental details

The Biofuel Cropping System Experiment (BCSE) is located at the W.K. Kellogg Biological Station (KBS) (42.3956° N, 85.3749° W; elevation 288 m asl) in southwestern Michigan, USA. This site is part of the Great Lakes Bioenergy Research Center (www.glbrc.org) and is a Long-Term Ecological Research site (www.lter.kbs.msu.edu). Soils are mesic Typic Hapludalfs developed on glacial outwash [54], with high sand content (76% in the upper 150 cm) intermixed with silt-rich loess in the upper 50 cm [55]. The water table lies approximately 12–14 m below the surface. The climate is humid temperate, with a mean annual air temperature of 9.1 °C and annual precipitation of 1005 mm, 511 mm of which falls between May and September (1981–2010) [56,57]. The BCSE was established as a randomized complete block design in 2008 on preexisting farmland. Prior to BCSE establishment, the field was used for grain crop and alfalfa (Medicago sativa L.) production for several decades. Between 2003 and 2007, the field received a total of ~300 kg P ha⁻¹ as manure, and the southern half, which contains one of four replicate plots, received an additional 206 kg P ha⁻¹ as inorganic fertilizer. The experimental design consists of five randomized blocks, each containing one replicate plot (28 by 40 m) of 10 cropping systems (treatments) (Supplementary Fig. S1; see also Sanford et al. [58]). Block 5 is not included in the present study. Details on the experimental design and site history are provided in Robertson and Hamilton [57] and Gelfand et al. [59]. Leaching of P is analyzed in six of the cropping systems: (i) continuous no-till corn, (ii) switchgrass, (iii) miscanthus, (iv) a mixture of five species of native grasses, (v) a restored native prairie containing 18 plant species (Supplementary Table S1), and (vi) hybrid poplar.

Agronomic management

Phenological cameras and field observations indicated that the perennial herbaceous crops emerged each year between mid-April and mid-May. Corn was planted each year in early May.
Herbaceous crops were harvested at the end of each growing season, with the timing depending on weather: between October and November for corn, and between November and December for the herbaceous perennial crops. Corn stover was harvested shortly after corn grain, leaving approximately 10 cm of stubble above the ground. The poplar was harvested only once, as the culmination of a 6-year rotation, in the winter of 2013–2014. Leaf emergence and senescence based on daily phenological images indicated the beginning and end of the poplar growing season, respectively, in each year. Application of inorganic fertilizers to the different crops followed a management approach typical for the region (Table 1). Corn was fertilized with 13 kg P ha⁻¹ year⁻¹ as starter fertilizer (N-P-K of 19-17-0) at the time of planting, and an additional 33 kg P ha⁻¹ was added as superphosphate in spring 2015. Corn also received N fertilizer around the time of planting and in mid-June at rates typical for the region (Table 1). No P fertilizer was applied to the perennial grassland or poplar systems (Table 1). All perennial grasses (except restored prairie) received 56 kg N ha⁻¹ year⁻¹ of N fertilizer in early summer between 2010 and 2016; an additional 77 kg N ha⁻¹ was applied to miscanthus in 2009. Poplar was fertilized once, with 157 kg N ha⁻¹ in 2010, after the canopy had closed.

Sampling of subsurface soil water and soil for P determination

Subsurface soil water samples were collected beneath the root zone (1.2 m depth) using samplers installed at approximately 20 cm into the unconsolidated sand of the 2Bt2 and 2E/Bt horizons (soils at the site are described in Crum and Collins [54]). Soil water was collected with two kinds of samplers: Prenart samplers constructed of Teflon and silica (http://www.prenart.dk/soil-water-samplers/) in replicate blocks 1 and 2, and Eijkelkamp ceramic samplers (http://www.eijkelkamp.com) in blocks 3 and 4 (Supplementary Fig. S1). The samplers were installed in 2008 at an angle using a hydraulic corer, with the sampling tubes buried underground within the plots and each sampler located about 9 m from the plot edge. There were no consistent differences in TDP concentrations between the two sampler types. Beginning in the 2009 growing season, subsurface soil water was sampled at weekly to biweekly intervals during non-frozen periods (April–November) by applying 50 kPa of vacuum to each sampler for 24 h, during which the extracted water was collected in glass bottles. Samples were filtered using different filter types (all 0.45 µm pore size) depending on the volume of leachate collected: 33-mm diameter cellulose acetate membrane filters when volumes were less than 50 mL, and 47-mm diameter Supor 450 polyethersulfone membrane filters for larger volumes. Total dissolved phosphorus (TDP) in water samples was analyzed by persulfate digestion of filtered samples, which converts all phosphorus forms to soluble reactive phosphorus, followed by colorimetric analysis by long-pathlength spectrophotometry (UV-1800, Shimadzu, Japan) using the molybdate blue method [60], for which the method detection limit was ~0.005 mg P L⁻¹. Between 2009 and 2016, soil samples (0–25 cm depth) were collected each autumn from all plots for determination of soil test P (STP) by the Bray-1 method [61], using as an extractant a dilute hydrochloric acid and ammonium fluoride solution, as recommended for neutral to slightly acidic soils.
The measured STP concentration in mg P kg⁻¹ was converted to kg P ha⁻¹ based on soil sampling depth and soil bulk density (mean 1.5 g cm⁻³).

Sampling of water from lakes, streams, and wells for P determination

In addition to the chemistry of soil and subsurface soil water in the BCSE, waters from lakes, streams, and residential water-supply wells were also sampled during 2009–2016 for TDP analysis, using Supor 450 membrane filters and the same analytical method as for soil water. These water bodies lie within 15 km of the study site, in a landscape mosaic of row crops, grasslands, deciduous forest, and wetlands, with some residential development (Supplementary Fig. S2, Supplementary Table S2). Details of land use and cover change in the vicinity of KBS are given in Hamilton et al. [48], and patterns in nutrient concentrations in local surface waters are further discussed in Hamilton [62].

Leaching estimates, modeled drainage, and data analysis

Leaching was estimated at daily time steps and summarized as total leaching on a crop-year basis, defined from the date of planting or leaf emergence in a given year to the day prior to planting or emergence in the following year. TDP concentrations (mg L⁻¹) of subsurface soil water were linearly interpolated between sampling dates during non-freezing periods (April–November) and over non-sampling periods (December–March) based on the preceding November and subsequent April samples. Daily rates of TDP leaching (kg ha⁻¹) were calculated by multiplying the interpolated concentration (mg L⁻¹) by the drainage rate (m³ ha⁻¹ day⁻¹) modeled by the Systems Approach for Land Use Sustainability (SALUS) model, a crop growth model that is well calibrated for KBS soil and environmental conditions. SALUS simulates yield and environmental outcomes in response to weather, soil, management (planting dates, plant population, irrigation, N fertilizer application, and tillage), and genetics [63]. The SALUS water balance sub-model simulates surface runoff, saturated and unsaturated water flow, drainage, root water uptake, and evapotranspiration during growing and non-growing seasons [63]. The SALUS model has been used in studies of evapotranspiration [48,51,64] and nutrient leaching [20,65,66,67] from KBS soils, and its predictions of growing-season evapotranspiration are consistent with independent measurements based on growing-season soil water drawdown [53] and evapotranspiration measured by eddy covariance [68]. Phosphorus leaching was assumed insignificant on days when SALUS predicted no drainage. Volume-weighted mean TDP concentrations in leachate for each crop-year, and for the entire 7-year study period, were calculated as the total dissolved P leaching flux (kg ha⁻¹) divided by the total drainage (m³ ha⁻¹); a sketch of this arithmetic is given below.

One-way ANOVA with time (crop-year) as the fixed factor was conducted to compare total annual drainage rates, P leaching rates, volume-weighted mean TDP concentrations, and maximum aboveground biomass among the cropping systems over all seven crop-years, as well as with TDP concentrations from local lakes, streams, and groundwater wells. When a significant (α = 0.05) difference was detected among the groups, we used the Tukey honest significant difference (HSD) post-hoc test to make pairwise comparisons among the groups. In the case of maximum aboveground biomass, we used the Tukey–Kramer method to make pairwise comparisons because the absence of poplar data after the 2013 harvest resulted in unequal sample sizes.
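As noted above, a minimal Python sketch of the daily-leaching and volume-weighted-mean arithmetic. The arrays here are hypothetical stand-ins; in the study, concentrations come from interpolated lysimeter samples and drainage from the SALUS model:

# Hedged sketch: daily TDP leaching and volume-weighted mean concentration.
import numpy as np

days = 365
tdp = np.full(days, 0.02)            # interpolated TDP concentration, mg L⁻¹
drainage = np.zeros(days)
drainage[90:240] = 15.0              # modeled drainage, m³ ha⁻¹ day⁻¹ (zero = no leaching)

# mg L⁻¹ × m³ ha⁻¹ = g ha⁻¹, so divide by 1000 to get kg ha⁻¹
daily_leach = tdp * drainage / 1000.0          # kg P ha⁻¹ day⁻¹
annual_leach = daily_leach.sum()               # kg P ha⁻¹ year⁻¹

# Volume-weighted mean concentration = total flux / total drainage
vwm = 1000.0 * annual_leach / drainage.sum()   # back to mg L⁻¹
print(f"leached {annual_leach:.3f} kg P/ha; vol-wtd mean {vwm:.3f} mg/L")

With a constant input concentration, the volume-weighted mean recovers that concentration exactly (0.02 mg/L here), which is a quick sanity check on the unit conversions.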
We also used the Tukey–Kramer method to compare the frequency distributions of TDP concentrations in all of the soil leachate samples with concentrations in lakes, streams, and groundwater wells, since each sample category had very different numbers of measurements.

Data files

Individual spreadsheets in “data table_leaching_dissolved organic carbon and nitrogen.xls”:
1. annual precip_drainage
2. biomass_corn, perennial grasses
3. biomass_poplar
4. annual N leaching_vol-wtd conc
5. Summary_N leached
6. annual DOC leachin_vol-wtd conc
7. growing season length
8. correlation_nh4 VS no3
9. correlations_don VS no3_doc VS don

Each spreadsheet is described below along with an explanation of its variates. Note that 'nan' indicates data that are missing or not available. In each sheet, the first row is the header and the second row gives units.

1. Spreadsheet: annual precip_drainage
Description: Precipitation measured at the nearby Kellogg Biological Station (KBS) Long Term Ecological Research (LTER) weather station over the 2009-2016 study period (data shown in Figure 1; original precipitation data at https://lter.kbs.msu.edu/datatables/7). Drainage was estimated with the SALUS crop model; drainage is percolation out of the root zone (0-125 cm). Annual precipitation and drainage values are calculated separately for growing and non-growing crop periods.
Variates:
- year: year of the observation
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- precip_G: precipitation during the growing period (milliMeter)
- precip_NG: precipitation during the non-growing period (milliMeter)
- drainage_G: drainage during the growing period (milliMeter)
- drainage_NG: drainage during the non-growing period (milliMeter)

2. Spreadsheet: biomass_corn, perennial grasses
Description: Maximum aboveground biomass measurements from corn, switchgrass, miscanthus, native grass, and restored prairie plots in the Great Lakes Bioenergy Research Center (GLBRC) Biomass Cropping System Experiment (BCSE) during 2009-2015 (data shown in Figure 2).
Variates:
- year: year of the observation
- date: day of the observation (mm/dd/yyyy)
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- replicate: each crop has four replicated plots, R1, R2, R3, and R4
- station: sampling stations (S1, S2, and S3) within the plot; for details see https://data.sustainability.glbrc.org/protocols/156
- species: plant species rooted within the quadrat at the time of maximum biomass harvest; see the protocol at http://lter.kbs.msu.edu/datatables/36. For maize, grain and whole biomass are reported (weed biomass and surface litter excluded). Surface litter biomass is not included for any crop; weed biomass is not included for switchgrass and miscanthus but is included for the grass mixture and prairie.
- fraction: fraction of biomass
- biomass_plot: biomass per plot on a dry-weight basis (Grams_Per_SquareMeter)
- biomass_ha: biomass (megaGrams_Per_Hectare), obtained by multiplying biomass_plot by 0.01

3. Spreadsheet: biomass_poplar
Description: Maximum aboveground biomass measurements from poplar plots in the GLBRC BCSE during 2009-2015 (data shown in Figure 2). Poplar biomass was estimated from crop growth curves until the poplar was harvested in the winter of 2013-14.
Variates:
- year: year of the observation
- method: method of poplar biomass sampling
- date: day of the observation (mm/dd/yyyy)
- replicate: each crop has four replicated plots, R1, R2, R3, and R4
- diameter_at_ground: poplar diameter (milliMeter) at the ground
- diameter_at_15cm: poplar diameter (milliMeter) at 15 cm height
- biomass_tree: biomass per tree (Grams_Per_Tree)
- biomass_ha: biomass (megaGrams_Per_Hectare), obtained by multiplying biomass per tree by 0.01

4. Spreadsheet: annual N leaching_vol-wtd conc
Description: Annual leaching rates (kiloGrams_N_Per_Hectare) and volume-weighted mean concentrations (milliGrams_N_Per_Liter) of nitrate (no3) and dissolved organic nitrogen (don) in leachate samples collected from corn, switchgrass, miscanthus, native grass, restored prairie, and poplar plots in the GLBRC BCSE during 2009-2016. Nitrogen leached and volume-weighted mean N concentrations are shown in Figure 3a and Figure 3b, respectively. Note that ammonium (nh4) concentrations were much lower and often undetectable (<0.07 milliGrams_N_Per_Liter), and that data from some replicates are missing for the 2009 and 2010 crop-years.
Variates:
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- crop-year: year of the observation
- replicate: each crop has four replicated plots, R1, R2, R3, and R4
- no3 leached: annual leaching rate of nitrate (kiloGrams_N_Per_Hectare)
- don leached: annual leaching rate of don (kiloGrams_N_Per_Hectare)
- vol-wtd no3 conc.: volume-weighted mean no3 concentration (milliGrams_N_Per_Liter)
- vol-wtd don conc.: volume-weighted mean don concentration (milliGrams_N_Per_Liter)

5. Spreadsheet: Summary_N leached
Description: Summary of the total amount and forms of N leached (kiloGrams_N_Per_Hectare) and the percent of applied N lost to leaching over the seven years for corn, switchgrass, miscanthus, native grass, restored prairie, and poplar plots in the GLBRC BCSE during 2009-2016. The amount of N leached is shown in Figure 4a and the percent of applied N lost in Figure 4b. Note that the fate of unleached N (removal in harvest, accumulation in root biomass or soil organic matter, or gaseous N emissions) was not measured in this study.
Variates:
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- no3 leached: annual leaching rate of nitrate (kiloGrams_N_Per_Hectare)
- don leached: annual leaching rate of don (kiloGrams_N_Per_Hectare)
- N unleached: N not lost to leaching (kiloGrams_N_Per_Hectare); its fate was not studied
- % of applied N lost to leaching: percent of applied N lost to leaching

6. Spreadsheet: annual DOC leachin_vol-wtd conc
Description: Annual leaching rates (kiloGrams_Per_Hectare) and volume-weighted mean concentrations (milliGrams_Per_Liter) of dissolved organic carbon (DOC) in leachate samples collected from corn, switchgrass, miscanthus, native grass, restored prairie, and poplar plots in the GLBRC BCSE during 2009-2016. DOC leached and volume-weighted mean DOC concentrations are shown in Figure 5a and Figure 5b, respectively. Note that water samples were not available for DOC measurements in the 2009 and 2010 crop-years.
Variates:
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- crop-year: year of the observation
- replicate: each crop has four replicated plots, R1, R2, R3, and R4
- doc leached: annual leaching rate of DOC (kiloGrams_Per_Hectare)
- vol-wtd doc conc.: volume-weighted mean doc concentration (milliGrams_Per_Liter)

7. Spreadsheet: growing season length
Description: Growing season length (days) of corn, switchgrass, miscanthus, native grass, restored prairie, and poplar plots in the GLBRC BCSE during 2009-2015 (data shown in Figure S2). The growing season runs from the date of planting or emergence to the date of harvest (or leaf senescence, in the case of poplar).
Variates:
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- year: year of the observation
- growing season length: growing season length (days)

8. Spreadsheet: correlation_nh4 VS no3
Description: Correlation of ammonium (nh4+) and nitrate (no3-) concentrations (milliGrams_N_Per_Liter) in leachate samples from corn, switchgrass, miscanthus, native grass, restored prairie, and poplar plots in the GLBRC BCSE during 2013-2015 (data shown in Figure S3). Note that nh4+ concentrations in the leachates were very low compared to no3- and don concentrations, and often undetectable, in the three crop-years (2013-2015) for which measurements are available.
Variates:
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- date: date of the observation (mm/dd/yyyy)
- replicate: each crop has four replicated plots, R1, R2, R3, and R4
- nh4 conc: nh4 concentration (milliGrams_N_Per_Liter)
- no3 conc: no3 concentration (milliGrams_N_Per_Liter)

9. Spreadsheet: correlations_don VS no3_doc VS don
Description: Correlations of don and nitrate concentrations (milliGrams_N_Per_Liter), and of doc (milliGrams_Per_Liter) and don (milliGrams_N_Per_Liter) concentrations, in leachate samples from corn, switchgrass, miscanthus, native grass, restored prairie, and poplar plots in the GLBRC BCSE during 2013-2015. The don-nitrate correlation is shown in Figure S4a and the doc-don correlation in Figure S4b.
Variates:
- crop: “corn”, “switchgrass”, “miscanthus”, “nativegrass”, “restored prairie”, or “poplar”
- year: year of the observation
- don: don concentration (milliGrams_N_Per_Liter)
- no3: no3 concentration (milliGrams_N_Per_Liter)
- doc: doc concentration (milliGrams_Per_Liter)