{"Abstract":["Evolutionary adaptation can allow a population to persist in the face of a\n new environmental challenge. With many populations now threatened by\n environmental change, it is important to understand whether this process\n of evolutionary rescue is feasible under natural conditions, yet work on\n this topic has been largely theoretical. We used unique long-term data to\n parameterize deterministic and stochastic models of the contribution of\n one trait to evolutionary rescue using field estimates for the subalpine\n plant Ipomopsis aggregata and hybrids with its close relative I.\n tenuituba. In the absence of evolution or plasticity, the two studied\n populations are projected to go locally extinct due to earlier snowmelt\n under climate change, which imposes drought conditions. Phenotypic\n selection on specific leaf area (SLA) was estimated in 12 years and\n multiple populations. Those data on selection and its environmental\n sensitivity to annual snowmelt timing in the spring were combined with\n previous data on heritability of the trait, phenotypic plasticity of the\n trait, and the impact of snowmelt timing on mean absolute fitness.\n Selection favored low values of SLA (thicker leaves). The evolutionary\n response to selection on that single trait was insufficient to allow\n evolutionary rescue by itself, but in combination with phenotypic\n plasticity it promoted evolutionary rescue in one of the two populations.\n The number of years until population size would stop declining and begin\n to rise again was heavily dependent upon stochastic environmental changes\n in snowmelt timing around the trend line. Our study illustrates how field\n estimates of quantitative genetic parameters can be used to predict the\n likelihood of evolutionary rescue. 
Although a complete set of parameter estimates is generally unavailable, it may also be possible to predict the general likelihood of evolutionary rescue based on published ranges for phenotypic selection and heritability and the extent to which early snowmelt impacts fitness."],"Methods":["The study sites consisted of three “Poverty Gulch” sites in Gunnison National Forest and one site, “Vera Falls”, at the Rocky Mountain Biological Laboratory, all in Gunnison County, CO, USA. The focal plants comprised two sets. One set (data from 2009-2019) consisted of plants in common gardens at three sites: an I. aggregata site (hereafter “agg”), an I. tenuituba site (hereafter “ten”), and a site at the center of the natural hybrid zone (hereafter “hyb”). The second set consisted of plants growing in situ at two of the same Poverty Gulch sites (“agg” and “hyb”) and an I. aggregata site at Vera Falls (hereafter “VF”; data from 2017-2023). The common gardens were started from seed in 2007 and 2008. Measurements of SLA in these gardens began when plants were 2 years old, in either 2009 or 2010 depending upon the garden, as plants are only small seedlings during their first summer after seed maturation. By 2018, all but 15 of the 4512 plants originally planted had died, with or without blooming, and we stopped following these gardens. Starting in 2017, in situ vegetative plants at the I. aggregata site and the hybrid site whose longest leaf exceeded 25 mm were marked with metal tags to facilitate identification. In each year of the study, one leaf from each vegetative plant was collected in the field and transported on ice to the RMBL, 8 km distant. There each leaf was scanned with a flatbed scanner and analyzed using ImageJ to measure leaf area. The leaf was dried at 70 °C for 2 hours and then weighed to obtain dry mass and calculate SLA as area/dry mass. 
For plants in the common gardens, SLA was measured on 982 leaves from 383 plants in 2009–2014. For in situ plants, SLA was measured on one leaf from each of 877 plants in 2017–2022. Fitness was estimated as the binary variable of survival to flowering. Plants that were still alive in 2019 in the common gardens, or in 2023 at the end of the study, were assumed to survive to flowering. These data were used to estimate selection differentials on SLA in each of 12 years. We then combined this information with previous information on heritability and on the effect of snowmelt date in the spring on mean absolute fitness, measured as the finite rate of population increase, from a previous demographic study. This information was used to parameterize models of evolutionary rescue that we developed. We developed two models that differed in how snowmelt timing changed, a Step-change model and a Gradual environmental change model, and analyzed both deterministic and stochastic versions. All analysis and modeling were done in R ver. 4.2.2. "],"TechnicalInfo":["# Data for: Predicting the contribution of single trait evolution to rescuing a plant population from demographic impacts of climate change Dataset DOI: [10.5061/dryad.ht76hdrtn](https://doi.org/10.5061/dryad.ht76hdrtn) ## Description of the data and file structure File "mastervegtraitsSLA2023.csv" contains data on specific leaf area for Ipomopsis plants in the field. Files "masterdemography_insitu_2023.csv" and "masterdemography_commongarden.csv" provide the corresponding information on survival to flowering. File "snowmelt.csv" provides dates of snowmelt in the spring. File "selection_vs_snowmelt.csv" provides intermediate results on selection intensities from analysis with the first parts of the code "Campbell-EvolutionLettersMay2025.Rmd". 
File "IPMresults.csv" provides estimates of the finite rate of increase (lambda) as predicted in the publication by Campbell [https://doi.org/10.1073/pnas.1820096116](https://doi.org/10.1073/pnas.1820096116). File "Campbell-EvolutionLettersMay2025.Rmd" provides the R code for the statistical analysis and the deterministic and stochastic models of evolutionary rescue. All data analysis and modeling was done in R ver. 4.4.2 on a Windows machine. All necessary input data files are provided. The R code is annotated to indicate which portions produce the analyses and figures in the manuscript. For the multipart figures 6-9, the code needs to be manually updated to produce each part of the figure before assembling them; in those cases, each part represents a model with a unique set of parameters. ### Files and variables #### File: Data_files_for_EVL_Campbell_2025.zip **Description:** All data files. Blank cells are indicated by "." except in "selection_vs_snowmelt.csv", where they are indicated by "NA". **File:** mastervegtraitsSLA2023.csv * meltday = first day of bare ground at the Rocky Mountain Biological Lab (RMBL) in units of days starting with January 1 * year = year * site = site. agg = site with I. aggregata. hyb = site with natural hybrids. ten = site with I. tenuituba. VF = Vera Falls site containing I. aggregata. * idtag = metal tag used to identify plant * planttype = type of plant. AA = progeny of I. aggregata x I. aggregata. AT = progeny of I. aggregata x I. tenuituba. TA = progeny of I. tenuituba x I. aggregata. TT = progeny of I. tenuituba x I. tenuituba. F2 = progeny of F1 (either AT or TA) x F1. agg = natural I. aggregata. hyb = natural hybrid. * sla = specific leaf area in units of cm2/g * uniqueid = an id used to identify the plant uniquely across all years and sites **File:** masterdemography_insitu_2023.csv * site = site. agg = site with I. aggregata. hyb = site with hybrids. VF = Vera Falls site containing I. aggregata. 
* idtag = metal tag used to identify plant * yeartagged = year the plant was first tagged * flrlabelxx = label for plants flowering in year 20xx * stagexxxx = stage in year xxxx. 0 = dead. 1 = single vegetative rosette. 2 = single inflorescence. 3 = multiple vegetative rosette. 4 = multiple inflorescence. * lengthxx = length of longest leaf in year 20xx in mm * leavesxx = number of leaves in rosette(s) in year 20xx **File:** masterdemography_commongarden.csv * site = site. agg = site with I. aggregata. hyb = site with natural hybrids. ten = site with I. tenuituba. * IDTAG = metal tag used to identify plant * Planttype = type of plant. AA = progeny of I. aggregata x I. aggregata. AT = progeny of I. aggregata x I. tenuituba. TA = progeny of I. tenuituba x I. aggregata. TT = progeny of I. tenuituba x I. tenuituba. F2full = full-sib progeny of F1 (either AT or TA) x F1. F2non = non-full-sib progeny of F1 x F1. * stagexx = stage of plant in year 20xx. 0 = dead. 1 = single vegetative rosette. 2 = single inflorescence. 3 = multiple vegetative rosette. 4 = multiple inflorescence. * lengthxx = length of longest leaf in year 20xx in mm. * leavesxx = number of leaves in rosette(s) in year 20xx. **File:** snowmelt.csv * Year = year * Snowmelt = day of first bare ground at the RMBL in units of days starting with January 1. Values prior to 1975 were estimated. **File:** selection_vs_snowmelt.csv * meltday = day of first bare ground at the RMBL in units of days starting with January 1. * year = year * Sbyyearwithsite = standardized selection differential on SLA in the model that includes site. These values are reproduced with standard errors in Table 1. * bwithsite = regression coefficient for raw survival on raw SLA in the model that includes site. 
* meansurv = mean survival * covwsla = raw selection differential on SLA * bwithsitehyb = regression coefficient for raw survival on SLA at site hyb * meansurvhyb = mean survival at site hyb * covwslahyb = raw selection differential on SLA at site hyb, used in the Gradual environmental change model * covwslaagg = raw selection differential on SLA at site agg, used in the Gradual environmental change model * meansurvagg = mean survival at site agg * melthyb = estimated date of bare ground at site hyb * meltagg = estimated date of bare ground at site agg **File:** IPMresults.csv * site = site. agg = site with I. aggregata. hyb = site with natural hybrids. * day = predicted day of snowmelt (all predictions are from Campbell, D. R. 2019. Early snowmelt projected to cause population decline in a subalpine plant. PNAS (USA) 116(26): 12901-12906.) Units are days starting with January 1. * lambda = predicted finite rate of increase **File:** Campbell-EvolutionLettersMay2025.Rmd Contains R code for data analysis and modeling. All analysis and modeling was done in R ver. 4.2.2."]}
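The record above describes combining annual selection differentials on SLA, trait heritability, and the snowmelt–fitness relationship (lambda) into models of evolutionary rescue. A minimal deterministic sketch of that logic, using the breeder's equation R = h²S; all parameter values (h² = 0.3, the slopes in `selection_differential` and `lam`, the starting snowmelt day) are illustrative assumptions, not the published estimates:

```python
# Minimal deterministic sketch of single-trait evolutionary rescue.
# Every parameter value here is an illustrative assumption, not an
# estimate from the Ipomopsis study.

H2 = 0.3      # assumed narrow-sense heritability of SLA
YEARS = 50

def snowmelt_day(t, start=160.0, trend=-0.5):
    """Gradual-change scenario: snowmelt advances `trend` days per year."""
    return start + trend * t

def selection_differential(melt):
    """Assumed: earlier snowmelt -> stronger selection for lower SLA."""
    return -0.02 * (170.0 - melt)

def lam(trait_dev, melt):
    """Assumed fitness model: earlier snowmelt lowers lambda; a lower
    trait mean (thicker leaves) partially offsets the decline."""
    return 1.0 + 0.01 * (melt - 165.0) - 0.05 * trait_dev

trait = 0.0   # mean SLA as a deviation from its starting value (SD units)
n = 1000.0    # population size
sizes = []
for t in range(YEARS):
    melt = snowmelt_day(t)
    n *= max(lam(trait, melt), 0.0)             # demographic update
    trait += H2 * selection_differential(melt)  # breeder's equation R = h^2 * S
    sizes.append(n)
```

With these assumed values the population first declines and only later turns around, once the evolved trait change offsets the fitness cost of ever-earlier snowmelt; the stochastic versions described above would add year-to-year noise around `snowmelt_day`.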
Dataset for Gelatinous fibers develop asymmetrically to support bends and coils in common bean vines (Phaseolus vulgaris L., Fabaceae)
Internode_lengths_AveragePerGroup_Stage5.csv = averaged internode lengths for each treatment group at stage 5 (plastochron 9). Data represented in Appendix S2. Internode_lengths_Stage5.csv = internode lengths for every internode of each individual plant at stage 5 (plastochron 9). Data represented in Appendix S2. InternodeLengths_allStages.csv = internode lengths through 5 stages (plastochrons 0-9). Data represented in Appendix S1. Associated scripts: https://github.com/angelique-acevedo/Common-Bean-Analysis
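The per-group averages in the first CSV can be reproduced from the per-plant file with a simple groupby. A sketch using hypothetical column names `treatment` and `length_mm` (the actual CSV headers may differ):

```python
import pandas as pd

# Synthetic stand-in for Internode_lengths_Stage5.csv;
# "treatment" and "length_mm" are hypothetical column names.
df = pd.DataFrame({
    "treatment": ["control", "control", "coiled", "coiled"],
    "length_mm": [12.0, 14.0, 20.0, 22.0],
})

# Mean internode length per treatment group, as in the AveragePerGroup file.
group_means = df.groupby("treatment")["length_mm"].mean()
```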
- Award ID(s):
- 2237046
- PAR ID:
- 10660742
- Publisher / Repository:
- Zenodo
- Date Published:
- Edition / Version:
- 2
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
{"Abstract":["This data set contains 194778 quasireaction subgraphs extracted from CHO transition networks with 2-6 non-hydrogen atoms (CxHyOz, 2 <= x + z <= 6).<\/p>\n\nThe complete table of subgraphs (including file locations) is in CHO-6-atoms-subgraphs.csv file. The subgraphs are in GraphML format (http://graphml.graphdrawing.org) and are compressed using bzip2. All subgraphs are undirected and unweighted. The reactant and product nodes (initial and final) are labeled in the "type" node attribute. The nodes are represented as multi-molecule SMILES strings. The edges are labeled by the reaction rules in SMARTS representation. The forward and backward reading of the SMARTS string should be considered equivalent.<\/p>\n\nThe generation and analysis of this data set is described in\nD. Rappoport, Statistics and Bias-Free Sampling of Reaction Mechanisms from Reaction Network Models, 2023, submitted. Preprint at ChemrXiv, DOI: 10.26434/chemrxiv-2023-wltcr<\/p>\n\nSimulation parameters\n- CHO networks constructed using polar bond break/bond formation rule set for CHO.\n- High-energy nodes were excluded using the following rules:\n (i) more than 3 rings, (ii) triple and allene bonds in rings, (iii) double bonds at\n bridge atoms,(iv) double bonds in fused 3-membered rings.\n- Neutral nodes were defined as containing only neutral molecules.\n- Shortest path lengths were determined for all pairs of neutral nodes.\n- Pairs of neutral nodes with shortest-path length > 8 were excluded.\n- Additionally, pairs of neutral nodes connected only by shortest paths passing through\n additional neutral nodes (reducible paths) were excluded.<\/p>\n\nFor background and additional details, see paper above.<\/p>"],"Other":["This work was supported in part by the National Science Foundation under Grant No. CHE-2227112."]}more » « less
{"Abstract":["Modern commercial vehicles are required by law to be equipped with\n Electronic Logging Devices (ELDs) in an effort to make it easier to track,\n manage, and share records of duty status (RODS) data. Research has shown\n that integration of third-party ELDs into commercial trucks can introduce\n a significant cybersecurity risk. This includes the ability of nefarious\n actors to modify firmware on ELDs to gain the ability to arbitrarily write\n messages to the Controller Area Network (CAN) within the vehicle.\n Additionally, a proof-of-concept self-propagating truck-to-truck worm has\n been demonstrated. This dataset was collected during controlled\n testing on a Kenworth T270 Class 6 truck with a commercially available\n ELD, during which the firmware on the ELD was replaced remotely over a\n Wi-Fi connection from an adjacently driving passenger vehicle. The\n compromised ELD then gained the ability to perform arbitrary CAN message\n writes of the attacker’s choice. The dataset contains CAN traffic in the\n `candump` format collected using the Linux `socketcan` tool. \n After taking control of the ELD, the attacker writes Torque Speed control\n messages onto the CAN network, impersonating the Transmission Control\n Module (TCM). These messages command the Engine Control Module (ECM) to\n request 0% torque output, effectively disabling the driver’s control of\n the accelerator and forcing the truck to idle."],"TechnicalInfo":["## Attack data for electronic logging device vulnerability for medium and\n heavy duty vehicles ## Dataset Overview This dataset contains Controller\n Area Network (CAN) logs captured using `candump` from the **SocketCAN**\n framework during a remote drive-by attack on an electronic logging device\n (ELD). The attack is detailed as a public advisory through CISA at:\n [https://www.cisa.gov/news-events/ics-advisories/icsa-24-093-01](https://www.cisa.gov/news-events/ics-advisories/icsa-24-093-01). 
The logs are in a traditional `.log` format, preserving raw CAN messages, timestamps, and metadata. This dataset is intended for research, forensic analysis, anomaly detection, and reverse engineering of vehicular communication networks. ## File Format Each `.log` file follows the standard `candump` output format: ``` (1623847291.123456) can0 0CF00400 [8] FF FF FF FF FF FF FF FF ``` ### Explanation: * **Timestamps** (`(1623847291.123456)`) – Epoch time with microsecond precision. * **CAN Interface** (`can0`) – The name of the CAN bus interface used for capturing. * **CAN ID** (`0CF00400`) – The hexadecimal identifier of the CAN frame. * **DLC** (`[8]`) – Data Length Code, indicating the number of bytes in the data field. * **Data** (`FF FF FF FF FF FF FF FF`) – The payload transmitted in the CAN message. ## Dataset Contents * `Wireless_Pedal_Jam.log` – Raw CAN logs collected on a specific date. ## Capture Environment * **Hardware Used**: SocketCAN * **Software Used**: `candump` from the `can-utils` package on Linux. * **Vehicle/System**: 2014 Kenworth T270 * **Bus Type**: J1939 ## Usage To analyze the dataset, you can use the following tools: * **`candump`** (for live monitoring) * **`canplayer`** (to replay logs) * **`can-utils`** (`cansniffer`, `canbusload`, `canlogserver`, etc.) 
* **Python with `python-can`** (for programmatic parsing) * **Wireshark** (for visualization) ### Example Commands #### Replaying the Log File ``` canplayer -I dataset_YYYYMMDD.log ``` #### Filtering Messages by CAN ID: ``` grep "0CF00400" dataset_YYYYMMDD.log ``` #### Converting Logs to CSV **Using Python:**

```
import pandas as pd

log_file = "dataset_YYYYMMDD.log"
data = []
with open(log_file, "r") as f:
    for line in f:
        parts = line.strip().split()
        if len(parts) >= 5:
            timestamp = parts[0].strip("()")
            interface = parts[1]
            can_id = parts[2]
            dlc = parts[3].strip("[]")
            data_bytes = " ".join(parts[4:])
            data.append([timestamp, interface, can_id, dlc, data_bytes])

df = pd.DataFrame(data, columns=["Timestamp", "Interface", "CAN_ID", "DLC", "Data"])
df.to_csv("dataset.csv", index=False)
```"]}
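Because this bus is J1939, each 29-bit identifier in the log packs together a priority, a parameter group number (PGN), and a source address. A small helper to unpack them, sketching the standard J1939 bit layout (this is not part of the dataset's own tooling):

```python
def decode_j1939_id(can_id):
    """Split an extended 29-bit J1939 CAN identifier into
    (priority, pgn, source_address)."""
    priority = (can_id >> 26) & 0x7
    pgn = (can_id >> 8) & 0x3FFFF
    pf = (pgn >> 8) & 0xFF
    if pf < 240:
        # PDU1 (destination-specific): the PS byte is a destination
        # address, not part of the PGN, so mask it out.
        pgn &= 0x3FF00
    source = can_id & 0xFF
    return priority, pgn, source

# The example frame above, 0CF00400, decodes to PGN 0xF004 (EEC1,
# Electronic Engine Controller 1) sent by source address 0x00.
prio, pgn, sa = decode_j1939_id(0x0CF00400)
```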
{"Abstract":["This data set contains all classifications that the Gravity Spy Machine Learning model for LIGO glitches from the first three observing runs (O1, O2 and O3, where O3 is split into O3a and O3b). Gravity Spy classified all noise events identified by the Omicron trigger pipeline in which Omicron identified that the signal-to-noise ratio was above 7.5 and the peak frequency of the noise event was between 10 Hz and 2048 Hz. To classify noise events, Gravity Spy made Omega scans of every glitch consisting of 4 different durations, which helps capture the morphology of noise events that are both short and long in duration.<\/p>\n\nThere are 22 classes used for O1 and O2 data (including No_Glitch and None_of_the_Above), while there are two additional classes used to classify O3 data.<\/p>\n\nFor O1 and O2, the glitch classes were: 1080Lines, 1400Ripples, Air_Compressor, Blip, Chirp, Extremely_Loud, Helix, Koi_Fish, Light_Modulation, Low_Frequency_Burst, Low_Frequency_Lines, No_Glitch, None_of_the_Above, Paired_Doves, Power_Line, Repeating_Blips, Scattered_Light, Scratchy, Tomte, Violin_Mode, Wandering_Line, Whistle<\/p>\n\nFor O3, the glitch classes were: 1080Lines, 1400Ripples, Air_Compressor, Blip, Blip_Low_Frequency<\/strong>, Chirp, Extremely_Loud, Fast_Scattering<\/strong>, Helix, Koi_Fish, Light_Modulation, Low_Frequency_Burst, Low_Frequency_Lines, No_Glitch, None_of_the_Above, Paired_Doves, Power_Line, Repeating_Blips, Scattered_Light, Scratchy, Tomte, Violin_Mode, Wandering_Line, Whistle<\/p>\n\nIf you would like to download the Omega scans associated with each glitch, then you can use the gravitational-wave data-analysis tool GWpy. 
If you would like to use this tool, please install Anaconda if you have not already and create a virtual environment using the following command:

```
conda create --name gravityspy-py38 -c conda-forge python=3.8 gwpy pandas psycopg2 sqlalchemy
```

After downloading one of the CSV files for a specific era and interferometer, please run the following Python script if you would like to download the data associated with the metadata in the CSV file. We recommend not trying to download too many images at one time. For example, the script below will read data on Hanford glitches from O2 that were classified by Gravity Spy, filter for only glitches that were labelled as Blips with 90% confidence or higher, and then download the first 4 rows of the filtered table.

```
from gwpy.table import GravitySpyTable

H1_O2 = GravitySpyTable.read('H1_O2.csv')
blips = H1_O2[(H1_O2["ml_label"] == "Blip") & (H1_O2["ml_confidence"] > 0.9)]
blips[0:4].download(nproc=1)
```

Each of the columns in the CSV files is taken from various different inputs: ['event_time', 'ifo', 'peak_time', 'peak_time_ns', 'start_time', 'start_time_ns', 'duration', 'peak_frequency', 'central_freq', 'bandwidth', 'channel', 'amplitude', 'snr', 'q_value'] contain metadata about the signal from the Omicron pipeline. ['gravityspy_id'] is the unique identifier for each glitch in the dataset. 
['1400Ripples', '1080Lines', 'Air_Compressor', 'Blip', 'Chirp', 'Extremely_Loud', 'Helix', 'Koi_Fish', 'Light_Modulation', 'Low_Frequency_Burst', 'Low_Frequency_Lines', 'No_Glitch', 'None_of_the_Above', 'Paired_Doves', 'Power_Line', 'Repeating_Blips', 'Scattered_Light', 'Scratchy', 'Tomte', 'Violin_Mode', 'Wandering_Line', 'Whistle'] contain the machine learning confidence for a glitch being in a particular Gravity Spy class (the confidence in all these columns should sum to unity). ['ml_label', 'ml_confidence'] provide the machine-learning predicted label for each glitch and the machine learning confidence in its classification. ['url1', 'url2', 'url3', 'url4'] are the links to the publicly available Omega scans for each glitch. 'url1' shows the glitch for a duration of 0.5 seconds, 'url2' for 1 second, 'url3' for 2 seconds, and 'url4' for 4 seconds. For the most recently uploaded training set used in Gravity Spy machine learning algorithms, please see the Gravity Spy Training Set on Zenodo. For detailed information on the training set used for the original Gravity Spy machine learning paper, please see Machine learning for Gravity Spy: Glitch classification and dataset on Zenodo."]}
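If GWpy is not installed, the same label/confidence selection shown in the record above can be done with plain pandas on the documented `ml_label` and `ml_confidence` columns. A minimal sketch on a synthetic stand-in for one of the CSV files:

```python
import pandas as pd

# Synthetic stand-in for a file such as H1_O2.csv, using the
# documented column names.
H1_O2 = pd.DataFrame({
    "gravityspy_id": ["a1", "b2", "c3"],
    "ml_label": ["Blip", "Blip", "Whistle"],
    "ml_confidence": [0.95, 0.40, 0.99],
})

# High-confidence Blips, mirroring the GravitySpyTable example.
blips = H1_O2[(H1_O2["ml_label"] == "Blip") & (H1_O2["ml_confidence"] > 0.9)]
```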
{"Abstract":["This is an extracted data product for radar bed reflectivity from Whillans Ice Plain, West Antarctica. The original data are hosted by the Center for Remote Sensing of Ice Sheets (CReSIS; see associated citation below). The files here can be recalculate and are meant to be used within a set of computational notebooks here:https://doi.org/10.5281/zenodo.10859135\n\nThere are two csv files included here, each structured as a Pandas dataframe. You can load them in Python like:df = pd.read_csv('./Picked_Bed_Power.csv')\n\nThe first file, 'Picked_Bed_Power.csv' is the raw, uncorrected power from the radar image at the bed pick provided by CReSIS. There are also other useful variables for georeferencing, flight attributes, etc.\n\nThe second file, 'Processed_Reflectivity.csv' is processed from the first file. Processing includes: 1) a spreading correction; 2) an attenuation correction; and, 3) a power adjustment flight days based on compared power at crossover points. This file also has identifiers for regions including "grounded ice", "ungrounded ice", and "subglacial lakes"."]}more » « less