Title: Dataset for: Banwell et al. 2024, 'Observed meltwater-induced flexure and fracture at a doline on George VI Ice Shelf, Antarctica', Journal of Glaciology.
This dataset contains all of the field data (GNSS, weather station data, timelapse camera images) used in the publication 'Banwell et al., 2024', which documents observations of surface meltwater-induced flexure and fracture at a doline on north George VI Ice Shelf, Antarctic Peninsula.
Award ID(s):
1841607
PAR ID:
10594830
Publisher / Repository:
U.S. Antarctic Program (USAP) Data Center
Subject(s) / Keyword(s):
GPS Data; Timelapse Images; AWS; GNSS; Cryosphere; Ice-Shelf Flexure; Ice-Shelf Fracture; Ice-Shelf Melt
Format(s):
Medium: X
Location:
Antarctica; Antarctic Peninsula; (East Bound Longitude:-67.5; North Bound Latitude:-71.1; South Bound Latitude:-71.6; West Bound Longitude:-68.28)
Right(s):
Creative Commons Attribution 4.0 International
Sponsoring Org:
National Science Foundation
More Like this
  1. This Python script queries the USGS StreamStats Service API for a list of available basin characteristics, and the values of those characteristics, for each input site. The script takes as input a matrix of site identifiers and location coordinates and returns 1) a matrix of values for the available basin characteristics obtained from StreamStats for each input location and 2) a matrix of basin characteristic variable names and definitions. To run this script exactly as written, create 3 columns of data in comma-separated format: 1) 'Site', the study site identifiers; 2) 'lonSS', the longitude coordinates; and 3) 'latSS', the latitude coordinates (in decimal degrees). Name the input file 'ssLocs.csv' and store it in a subfolder named 'Data'. Otherwise, the pathnames for input and output files can be modified within the script. The output files 'ssDats.csv' and 'Descriptions.csv' will also be saved to the 'Data' subfolder. Multiple code runs may be necessary to obtain information for all sites; as long as the output file 'ssDats.csv' remains in the 'Data' folder, the script will only query sites with missing information. If the program returns an error or is unable to obtain data for a site after several attempts, the input coordinates may not point to a cell defined as water in the StreamStats application; a solution is to check the coordinates manually in the StreamStats web application (http://streamstats.usgs.gov). This script was developed as part of the analysis described in: URycki DR, Good SP, Crump BC, Chadwick J and Jones GD (2020) River Microbiome Composition Reflects Macroscale Climatic and Geomorphic Differences in Headwater Streams. Front. Water 2:574728. doi: 10.3389/frwa.2020.574728
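     A rough sketch of the query loop follows (not the authors' script; the endpoint, parameter names 'rcode', 'xlocation', 'ylocation', and the response layout are assumptions based on the public StreamStats service and may need checking against the current API documentation):

         import os
         import pandas as pd
         import requests

         # Sketch only: endpoint and parameters are assumptions, not the authors' code.
         URL = "https://streamstats.usgs.gov/streamstatsservices/watershed.geojson"

         def basin_characteristics(rcode, lon, lat):
             """Delineate the basin at (lon, lat) and return its characteristics."""
             params = {"rcode": rcode, "xlocation": lon, "ylocation": lat,
                       "crs": 4326, "includeparameters": "true"}
             resp = requests.get(URL, params=params, timeout=300)
             resp.raise_for_status()
             return resp.json().get("parameters", [])

         sites = pd.read_csv(os.path.join("Data", "ssLocs.csv"))
         out_path = os.path.join("Data", "ssDats.csv")
         # Re-running skips sites already queried, mirroring the behavior described above.
         done = set(pd.read_csv(out_path)["Site"]) if os.path.exists(out_path) else set()

         rows, defs = [], {}
         for _, s in sites.iterrows():
             if s["Site"] in done:
                 continue
             # "CT" is a placeholder region code; set it for the actual study area.
             chars = basin_characteristics("CT", s["lonSS"], s["latSS"])
             rows.append({"Site": s["Site"], **{c["code"]: c.get("value") for c in chars}})
             defs.update({c["code"]: c.get("description", "") for c in chars})

         pd.DataFrame(rows).to_csv(out_path, mode="a",
                                   header=not os.path.exists(out_path), index=False)
         pd.DataFrame(sorted(defs.items()), columns=["Variable", "Definition"]).to_csv(
             os.path.join("Data", "Descriptions.csv"), index=False)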
  2. {"Abstract":["Data files were used in support of the research paper titled "\u201cExperimentation Framework for Wireless\nCommunication Systems under Jamming Scenarios" which has been submitted to the IET Cyber-Physical Systems: Theory & Applications journal. <\/p>\n\nAuthors: Marko Jacovic, Michael J. Liston, Vasil Pano, Geoffrey Mainland, Kapil R. Dandekar\nContact: krd26@drexel.edu<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nTop-level directories correspond to the case studies discussed in the paper. Each includes the sub-directories: logs, parsers, rayTracingEmulation, results. <\/p>\n\n--------------------------------<\/p>\n\nlogs:    - data logs collected from devices under test\n    - 'defenseInfrastucture' contains console output from a WARP 802.11 reference design network. Filename structure follows '*x*dB_*y*.txt' in which *x* is the reactive jamming power level and *y* is the jaming duration in samples (100k samples = 1 ms). 'noJammer.txt' does not include the jammer and is a base-line case. 'outMedian.txt' contains the median statistics for log files collected prior to the inclusion of the calculation in the processing script. \n    - 'uavCommunication' contains MGEN logs at each receiver for cases using omni-directional and RALA antennas with a 10 dB constant jammer and without the jammer. Omni-directional folder contains multiple repeated experiments to provide reliable results during each calculation window. RALA directories use s*N* folders in which *N* represents each antenna state. \n    - 'vehicularTechnologies' contains MGEN logs at the car receiver for different scenarios. 'rxNj_5rep.drc' does not consider jammers present, 'rx33J_5rep.drc' introduces the periodic jammer, in 'rx33jSched_5rep.drc' the device under test uses time scheduling around the periodic jammer, in 'rx33JSchedRandom_5rep.drc' the same modified time schedule is used with a random jammer. <\/p>\n\n--------------------------------<\/p>\n\nparsers:    - scripts used to collect or process the log files used in the study\n        - 'defenseInfrastructure' contains the 'xputFiveNodes.py' script which is used to control and log the throughput of a 5-node WARP 802.11 reference design network. Log files are manually inspected to generate results (end of log file provides a summary). \n        - 'uavCommunication' contains a 'readMe.txt' file which describes the parsing of the MGEN logs using TRPR. TRPR must be installed to run the scripts and directory locations must be updated. \n        - 'vehicularTechnologies' contains the 'mgenParser.py' script and supporting 'bfb.json' configuration file which also require TRPR to be installed and directories to be updated. 
<\/p>\n\n--------------------------------<\/p>\n\nrayTracingEmulation:    - 'wirelessInsiteImages': images of model used in Wireless Insite\n            - 'channelSummary.pdf': summary of channel statistics from ray-tracing study\n            - 'rawScenario': scenario files resulting from code base directly from ray-tracing output based on configuration defined by '*WI.json' file \n            - 'processedScenario': pre-processed scenario file to be used by DYSE channel emulator based on configuration defined by '*DYSE.json' file, applies fixed attenuation measured externally by spectrum analyzer and additional transmit power per node if desired\n            - DYSE scenario file format: time stamp (milli seconds), receiver ID, transmitter ID, main path gain (dB), main path phase (radians), main path delay (micro seconds), Doppler shift (Hz), multipath 1 gain (dB), multipath 1 phase (radians), multipath 1 delay relative to main path delay (micro seconds), multipath 2 gain (dB), multipath 2 phase (radians), multipath 2 delay relative to main path delay (micro seconds)\n            - 'nodeMapping.txt': mapping of Wireless Insite transceivers to DYSE channel emulator physical connections required\n            - 'uavCommunication' directory additionally includes 'antennaPattern' which contains the RALA pattern data for the omni-directional mode ('omni.csv') and directional state ('90.csv')<\/p>\n\n--------------------------------<\/p>\n\nresults:    - contains performance results used in paper based on parsing of aforementioned log files\n <\/p>"]} 
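     The DYSE scenario rows could be read with a small helper like the sketch below (Python; the field order follows the listing above, while the comma-separated layout and all names are assumptions, not part of the archive):

         import csv

         # Field order as given in the DYSE scenario file format listed above.
         DYSE_FIELDS = [
             "timestamp_ms", "rx_id", "tx_id",
             "main_gain_db", "main_phase_rad", "main_delay_us", "doppler_hz",
             "mp1_gain_db", "mp1_phase_rad", "mp1_delay_us",
             "mp2_gain_db", "mp2_phase_rad", "mp2_delay_us",
         ]

         def read_dyse_scenario(path):
             """Yield one dict per emulator row, keyed by the field names above."""
             with open(path, newline="") as f:
                 for row in csv.reader(f):  # comma-separated layout is an assumption
                     yield dict(zip(DYSE_FIELDS, row))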
  3. {"Abstract":["Data files were used in support of the research paper titled \u201cMitigating RF Jamming Attacks at the Physical Layer with Machine Learning<\/em>" which has been submitted to the IET Communications journal.<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nAll data was collected using the SDR implementation shown here: https://github.com/mainland/dragonradio/tree/iet-paper. Particularly for antenna state selection, the files developed for this paper are located in 'dragonradio/scripts/:'<\/p>\n\n'ModeSelect.py': class used to defined the antenna state selection algorithm<\/li>'standalone-radio.py': SDR implementation for normal radio operation with reconfigurable antenna<\/li>'standalone-radio-tuning.py': SDR implementation for hyperparameter tunning<\/li>'standalone-radio-onmi.py': SDR implementation for omnidirectional mode only<\/li><\/ul>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nAuthors: Marko Jacovic, Xaime Rivas Rey, Geoffrey Mainland, Kapil R. Dandekar\nContact: krd26@drexel.edu<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nTop-level directories and content will be described below. Detailed descriptions of experiments performed are provided in the paper.<\/p>\n\n---------------------------------------------------------------------------------------------<\/p>\n\nclassifier_training: files used for training classifiers that are integrated into SDR platform<\/p>\n\n'logs-8-18' directory contains OTA SDR collected log files for each jammer type and under normal operation (including congested and weaklink states)<\/li>'classTrain.py' is the main parser for training the classifiers<\/li>'trainedClassifiers' contains the output classifiers generated by 'classTrain.py'<\/li><\/ul>\n\npost_processing_classifier: contains logs of online classifier outputs and processing script<\/p>\n\n'class' directory contains .csv logs of each RTE and OTA experiment for each jamming and operation scenario<\/li>'classProcess.py' parses the log files and provides classification report and confusion matrix for each multi-class and binary classifiers for each observed scenario - found in 'results->classifier_performance'<\/li><\/ul>\n\npost_processing_mgen: contains MGEN receiver logs and parser<\/p>\n\n'configs' contains JSON files to be used with parser for each experiment<\/li>'mgenLogs' contains MGEN receiver logs for each OTA and RTE experiment described. Within each experiment logs are separated by 'mit' for mitigation used, 'nj' for no jammer, and 'noMit' for no mitigation technique used. File names take the form *_cj_* for constant jammer, *_pj_* for periodic jammer, *_rj_* for reactive jammer, and *_nj_* for no jammer. Performance figures are found in 'results->mitigation_performance'<\/li><\/ul>\n\nray_tracing_emulation: contains files related to Drexel area, Art Museum, and UAV Drexel area validation RTE studies.<\/p>\n\nDirectory contains detailed 'readme.txt' for understanding.<\/li>Please note: the processing files and data logs present in 'validation' folder were developed by Wolfe et al. and should be cited as such, unless explicitly stated differently. \n\tS. Wolfe, S. Begashaw, Y. Liu and K. R. Dandekar, "Adaptive Link Optimization for 802.11 UAV Uplink Using a Reconfigurable Antenna," MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM), 2018, pp. 
1-6, doi: 10.1109/MILCOM.2018.8599696.<\/li><\/ul>\n\t<\/li><\/ul>\n\nresults: contains results obtained from study<\/p>\n\n'classifier_performance' contains .txt files summarizing binary and multi-class performance of online SDR system. Files obtained using 'post_processing_classifier.'<\/li>'mitigation_performance' contains figures generated by 'post_processing_mgen.'<\/li>'validation' contains RTE and OTA performance comparison obtained by 'ray_tracing_emulation->validation->matlab->outdoor_hover_plots.m'<\/li><\/ul>\n\ntuning_parameter_study: contains the OTA log files for antenna state selection hyperparameter study<\/p>\n\n'dataCollect' contains a folder for each jammer considered in the study, and inside each folder there is a CSV file corresponding to a different configuration of the learning parameters of the reconfigurable antenna. The configuration selected was the one that performed the best across all these experiments and is described in the paper.<\/li>'data_summary.txt'this file contains the summaries from all the CSV files for convenience.<\/li><\/ul>"]} 
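     As a small illustration of the filename convention (a hypothetical helper, not part of the archive):

         import re

         # Map the *_cj_*, *_pj_*, *_rj_*, *_nj_* filename tags described above
         # to their jammer scenarios.
         JAMMER_TAGS = {"cj": "constant", "pj": "periodic", "rj": "reactive", "nj": "no jammer"}

         def jammer_from_filename(name):
             m = re.search(r"_(cj|pj|rj|nj)_", name)
             return JAMMER_TAGS[m.group(1)] if m else "unknown"

         print(jammer_from_filename("ota_mit_cj_run1.drc"))  # -> "constant" (hypothetical file name)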
  4. Using the horizontal-to-vertical spectral-ratio (HVSR) method, we infer regolith thickness (i.e., depth to bedrock) throughout the Farmington River Watershed, CT, USA. Between Nov. 2019 and Nov. 2020, MOHO Tromino Model TEP-3C (MOHO, S.R.L.) three-component seismometers collected passive seismic recordings along the Farmington River and the upstream West Branch of Salmon Brook. From these recordings, we derived resonance frequencies using the GRILLA software (MOHO, S.R.L.), and then inferred potential regolith thicknesses based on likely shear-wave velocities, Vs, intrinsic to the underlying sediment. Three potential shear-wave velocities (Vs = 300 m/s, 337 m/s, 362 m/s) were considered for Farmington River watershed sediments, providing a range of potential depth estimates along the Farmington. This release contains raw passive seismic recording data, processed resonance frequency data, and the resulting inferred depth estimates in both tabular and vector form. This dataset currently contains 3 zipped files: 1) 'Processed.zip' is a zipped directory containing .asc text files of processed passive seismic data, individual processed reports, tabulated results, and an associated summary text file, 'readme_Processed.txt'; 2) 'Raw.zip' contains .saf text files of passive seismic recordings and an associated 'readme_Raw.txt'; and 3) 'XYLegacyN_HVSR.zip' contains an ESRI shapefile of HVSR point locations with attribute data and a map image offering a visualization of the depth results (where Vs = 300 m/s). Additionally, the main folder contains LegacyN_HVSR_readme.txt, which describes these sub-directories in further detail.
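     HVSR depth estimates of this kind typically use the quarter-wavelength resonance relation h = Vs / (4 * f0); a minimal sketch of that conversion follows (illustrative only; the exact GRILLA workflow may differ, and the f0 value below is hypothetical):

         # Quarter-wavelength conversion from HVSR resonance frequency to
         # regolith thickness.
         def depth_to_bedrock(f0_hz, vs_m_per_s):
             """Thickness h (m) from resonance frequency f0 (Hz) and shear-wave velocity Vs (m/s)."""
             return vs_m_per_s / (4.0 * f0_hz)

         for vs in (300.0, 337.0, 362.0):  # the three Vs scenarios in this release
             print(f"Vs = {vs} m/s -> h = {depth_to_bedrock(5.0, vs):.1f} m")  # f0 = 5 Hz is hypothetical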
  5. {"Abstract":["The intended use of this archive is to facilitate meta-analysis of the Data Observation Network for Earth (DataONE, [1]). <\/p>\n\nDataONE is a distributed infrastructure that provides information about earth observation data. This dataset was derived from the DataONE network using Preston [2] between 17 October 2018 and 6 November 2018, resolving 335,213 urls at an average retrieval rate of about 5 seconds per url, or 720 files per hour, resulting in a data gzip compressed tar archive of 837.3 MB .  <\/p>\n\nThe archive associates 325,757 unique metadata urls [3] to 202,063 unique ecological metadata files [4]. Also, the DataONE search index was captured to establish provenance of how the dataset descriptors were found and acquired. During the creation of the snapshot (or crawl), 15,389 urls [5], or 4.7% of urls, did not successfully resolve. <\/p>\n\nTo facilitate discovery, the record of the Preston snapshot crawl is included in the preston-ls-* files . There files are derived from the rdf/nquad file with hash://sha256/8c67e0741d1c90db54740e08d2e39d91dfd73566ea69c1f2da0d9ab9780a9a9f . This file can also be found in the data.tar.gz at data/8c/67/e0/8c67e0741d1c90db54740e08d2e39d91dfd73566ea69c1f2da0d9ab9780a9a9f/data . For more information about concepts and format, please see [2]. <\/p>\n\nTo extract all EML files from the included Preston archive, first extract the hashes assocated with EML files using:<\/p>\n\ncat preston-ls.tsv.gz | gunzip | grep "Version" | grep -v "deeplinker" | grep -v "query/solr" | cut -f1,3 | tr '\\t' '\\n' | grep "hash://" | sort | uniq > eml-hashes.txt<\/p>\n\nextract data.tar.gz using:<\/p>\n\n~/preston-archive$$ tar xzf data.tar.gz <\/p>\n\nthen use Preston to extract each hash using something like:<\/p>\n\n~/preston-archive$$ preston get hash://sha256/00002d0fc9e35a9194da7dd3d8ce25eddee40740533f5af2397d6708542b9baa\n<eml:eml xmlns:eml="eml://ecoinformatics.org/eml-2.1.1" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:stmml="http://www.xml-cml.org/schema/stmml_1.1" packageId="doi:10.18739/A24P9Q" system="https://arcticdata.io" scope="system" xsi:schemaLocation="eml://ecoinformatics.org/eml-2.1.1 ~/development/eml/eml.xsd">\n  <dataset>\n    <alternateIdentifier>urn:x-wmo:md:org.aoncadis.www::d76bc3b5-7b19-11e4-8526-00c0f03d5b7c</alternateIdentifier>\n    <alternateIdentifier>d76bc3b5-7b19-11e4-8526-00c0f03d5b7c</alternateIdentifier>\n    <title>Airglow Image Data 2011 4 of 5</title>\n...<\/p>\n\nAlternatively, without using Preston, you can extract the data using the naming convention:<\/p>\n\ndata/[x]/[y]/[z]/[hash]/data<\/p>\n\nwhere x is the first 2 characters of the hash, y the second 2 characters, z the third 2 characters, and hash the full sha256 content hash of the EML file.<\/p>\n\nFor example, the hash hash://sha256/00002d0fc9e35a9194da7dd3d8ce25eddee40740533f5af2397d6708542b9baa can be found in the file: data/00/00/2d/00002d0fc9e35a9194da7dd3d8ce25eddee40740533f5af2397d6708542b9baa/data . For more information, see [2].<\/p>\n\nThe intended use of this archive is to facilitate meta-analysis of the DataONE dataset network. <\/p>\n\n[1] DataONE, https://www.dataone.org\n[2] https://preston.guoda.bio, https://doi.org/10.5281/zenodo.1410543 . 
DataONE was crawled via Preston with "preston update -u https://dataone.org".\n[3] cat preston-ls.tsv.gz | gunzip | grep "Version" | grep -v "deeplinker" | grep -v "query/solr" | cut -f1,3 | tr '\\t' '\\n' | grep -v "hash://" | sort | uniq | wc -l\n[4] cat preston-ls.tsv.gz | gunzip | grep "Version" | grep -v "deeplinker" | grep -v "query/solr" | cut -f1,3 | tr '\\t' '\\n' | grep "hash://" | sort | uniq | wc -l\n[5] cat preston-ls.tsv.gz | gunzip | grep "Version" | grep  "deeplinker" | grep -v "query/solr" | cut -f1,3 | tr '\\t' '\\n' | grep -v "hash://" | sort | uniq | wc -l<\/p>\n\nThis work is funded in part by grant NSF OAC 1839201 from the National Science Foundation.<\/p>"]} 
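     The naming convention above can be expressed as a small helper (hypothetical, for illustration only):

         # Map a Preston content hash to its path inside the extracted data.tar.gz,
         # following the data/[x]/[y]/[z]/[hash]/data convention described above.
         def hash_to_path(hash_uri):
             h = hash_uri.rsplit("/", 1)[-1]  # strip the "hash://sha256/" prefix
             return "data/{}/{}/{}/{}/data".format(h[0:2], h[2:4], h[4:6], h)

         print(hash_to_path("hash://sha256/00002d0fc9e35a9194da7dd3d8ce25eddee40740533f5af2397d6708542b9baa"))
         # -> data/00/00/2d/00002d0f.../data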