
Title: Mitigating RF Jamming Attacks at the Physical Layer with Machine Learning Dataset
Abstract
Data files were used in support of the research paper titled "Mitigating RF Jamming Attacks at the Physical Layer with Machine Learning", which has been submitted to the IET …
Creator(s):
Publisher:
Zenodo
Publication Year:
NSF-PAR ID:
10355685
Award ID(s):
1730140
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract
    Data files were used in support of the research paper titled "Experimentation Framework for Wireless Communication Systems under Jamming Scenarios", which has been submitted to the IET Cyber-Physical Systems: Theory & Applications journal.

    Authors: Marko Jacovic, Michael J. Liston, Vasil Pano, Geoffrey Mainland, Kapil R. Dandekar
    Contact: krd26@drexel.edu

    Top-level directories correspond to the case studies discussed in the paper. Each includes the sub-directories: logs, parsers, rayTracingEmulation, results.

    logs:
    - data logs collected from devices under test
    - 'defenseInfrastucture' contains console output from a WARP 802.11 reference design network. Filenames follow the pattern '*x*dB_*y*.txt', in which *x* is the reactive jamming power level and *y* is the jamming duration in samples (100k samples = 1 ms). 'noJammer.txt' does not include the jammer and serves as a baseline case. 'outMedian.txt' contains the median statistics for log files collected before that calculation was added to the processing script.
    - 'uavCommunication' contains MGEN logs at each receiver for cases using omni-directional and RALA antennas with a 10 dB constant jammer and without the jammer. The omni-directional folder contains multiple repeated experiments to provide reliable results during each calculation window. RALA directories use s*N* folders in which …
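    The '*x*dB_*y*.txt' naming convention can be decoded mechanically. The helper below is a hypothetical sketch (not part of the dataset's own parsers) that extracts the jamming power level and converts the sample count to milliseconds under the stated 100k-samples-per-millisecond rate:

    ```python
    import re

    # Hypothetical helper: parse a 'defenseInfrastucture' log filename of the
    # form '<x>dB_<y>.txt', where x is the reactive jamming power level in dB
    # and y is the jamming duration in samples (100k samples = 1 ms).
    LOG_NAME = re.compile(r"^(?P<power_db>\d+)dB_(?P<samples>\d+)\.txt$")

    def parse_log_name(name):
        """Return (power_db, duration_ms), or None for e.g. 'noJammer.txt'."""
        m = LOG_NAME.match(name)
        if m is None:
            return None
        samples = int(m.group("samples"))
        return int(m.group("power_db")), samples / 100_000  # 100k samples = 1 ms

    print(parse_log_name("10dB_200000.txt"))  # -> (10, 2.0)
    print(parse_log_name("noJammer.txt"))     # -> None
    ```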
  2. Abstract
    Binder is a publicly accessible online service for executing interactive notebooks based on Git repositories. Binder dynamically builds and deploys containers following a recipe stored in the repository, then gives the user a browser-based notebook interface. The Binder group periodically releases a log of container launches from the public Binder service. Archives of launch records are available here. These records do not include identifiable information like IP addresses, but do give the source repo being launched along with some other metadata. The main content of this dataset is in the binder.sqlite file. This SQLite database includes launch records from 2018-11-03 to 2021-06-06 in the events table, which has the following schema.

        CREATE TABLE events(
            version INTEGER,
            timestamp TEXT,
            provider TEXT,
            spec TEXT,
            origin TEXT,
            ref TEXT,
            guessed_ref TEXT
        );
        CREATE INDEX idx_timestamp ON events(timestamp);

    - version indicates the version of the record as assigned by Binder. The origin field became available with version 3, and the ref field with version 4. Older records where this information was not recorded have the corresponding fields set to null.
    - timestamp is the ISO timestamp of the launch
    - provider gives the type of source repo being launched ("GitHub" is by far the most common). The rest of the …
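    Given that schema, the launch records can be queried with Python's standard sqlite3 module. The sketch below assumes you would connect to the downloaded binder.sqlite; an in-memory database with the same schema and one fabricated record stands in for it here, and the aggregation query is an illustration, not part of the dataset:

    ```python
    import sqlite3

    # Stand-in for sqlite3.connect("binder.sqlite"): an in-memory database
    # built from the events schema documented above.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE events(
        version INTEGER, timestamp TEXT, provider TEXT,
        spec TEXT, origin TEXT, ref TEXT, guessed_ref TEXT
    );
    CREATE INDEX idx_timestamp ON events(timestamp);
    """)

    # A fabricated example record, for illustration only.
    conn.execute(
        "INSERT INTO events VALUES (4, '2021-01-15T12:00:00', 'GitHub', "
        "'binder-examples/requirements/master', 'mybinder.org', 'master', NULL)"
    )

    # Count launches per provider in a date range; ISO timestamps sort
    # lexicographically, so idx_timestamp can serve the range filter.
    rows = conn.execute(
        "SELECT provider, COUNT(*) FROM events "
        "WHERE timestamp BETWEEN '2021-01-01' AND '2021-12-31' "
        "GROUP BY provider ORDER BY COUNT(*) DESC"
    ).fetchall()
    print(rows)  # -> [('GitHub', 1)]
    ```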
  3. Abstract
    Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different …
  4. Abstract
    A biodiversity dataset graph: DataONE

    The intended use of this archive is to facilitate (meta-)analysis of the Data Observation Network for Earth (DataONE). DataONE is a distributed infrastructure that provides information about earth observation data.

    This dataset provides versioned snapshots of the DataONE network as tracked by Preston [2] between 2018-11-06 and 2020-05-07 using "preston update -u https://dataone.org".

    The archive consists of 256 individual parts (e.g., preston-00.tar.gz, preston-01.tar.gz, ...) to allow for parallel file downloads. The archive contains three types of files: index files, provenance logs, and data files. In addition, index files have been individually included in this dataset publication to facilitate remote access. Index files provide a way to link provenance files in time to establish a versioning mechanism. Provenance files describe how, when, what, and where the DataONE content was retrieved. For more information, please visit https://preston.guoda.bio or https://doi.org/10.5281/zenodo.1410543 .

    To retrieve and verify the downloaded DataONE biodiversity dataset graph, first concatenate all the downloaded preston-*.tar.gz files (e.g., cat preston-*.tar.gz > preston.tar.gz). Then, extract the archives into a "data" folder. Alternatively, you can use the preston [2] command-line tool to "clone" this dataset using:

    $ java -jar preston.jar clone --remote https://zenodo.org/record/3849494/files

    After that, verify the index …
  5. Abstract
    A biodiversity dataset graph: DataONE

    The intended use of this archive is to facilitate meta-analysis of the Data Observation Network for Earth (DataONE). DataONE is a distributed infrastructure that provides information about earth observation data.

    This dataset provides versioned snapshots of the DataONE network as tracked by Preston [2] between 2018-10-18 and 2019-10-03 using "preston update -u https://dataone.org".

    The archive consists of 256 individual parts (e.g., preston-00.tar.gz, preston-01.tar.gz, ...) to allow for parallel file downloads. The archive contains three types of files: index files, provenance logs, and data files. In addition, index files have been individually included in this dataset publication to facilitate remote access. Index files provide a way to link provenance files in time to establish a versioning mechanism. Provenance files describe how, when, and where the DataONE content was retrieved. For more information, please visit https://preston.guoda.bio or https://doi.org/10.5281/zenodo.1410543 .

    To retrieve and verify the downloaded DataONE biodiversity dataset graph, first concatenate all the downloaded preston-*.tar.gz files (e.g., cat preston-*.tar.gz > preston.tar.gz). Then, extract the archives into a "data" folder. Alternatively, you can use the preston [2] command-line tool to "clone" this dataset using:

    $ java -jar preston.jar clone --remote https://zenodo.org/record/3483218/files

    After that, verify the index of the …