Title: An archaeomagnetic study of the Ishtar Gate, Babylon (Dataset)
Paleomagnetic, rock magnetic, and geomagnetic data in the MagIC data repository from the paper "An archaeomagnetic study of the Ishtar Gate, Babylon".
Award ID(s):
2126298
PAR ID:
10558615
Author(s) / Creator(s):
Publisher / Repository:
Magnetics Information Consortium (MagIC)
Date Published:
Subject(s) / Keyword(s):
Archeologic; Brick; Not Specified; 2512-2555 Years BP
Format(s):
Medium: X
Location:
(Latitude: 32.5; Longitude: 44.4)
Right(s):
Creative Commons Attribution 4.0 International
Institution:
Paleomagnetic Lab, Scripps Institution of Oceanography, UCSD, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. A ubiquitous problem in aggregating data across different experimental and observational data sources is a lack of software infrastructure that enables flexible and extensible standardization of data and metadata. To address this challenge, we developed HDMF, a hierarchical data modeling framework for modern science data standards. With HDMF, we separate the process of data standardization into three main components: (1) data modeling and specification, (2) data I/O and storage, and (3) data interaction and data APIs. To enable standards to support the complex requirements and varying use cases throughout the data life cycle, HDMF provides object mapping infrastructure to insulate and integrate these various components. This approach supports the flexible development of data standards and extensions, optimized storage backends, and data APIs, while allowing the other components of the data standards ecosystem to remain stable. To meet the demands of modern, large-scale science data, HDMF provides advanced data I/O functionality for iterative data write, lazy data load, and parallel I/O. It also supports optimization of data storage via support for chunking, compression, linking, and modular data storage. We demonstrate the application of HDMF in practice to design NWB 2.0, a modern data standard for collaborative science across the neurophysiology community. 
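The three-component separation described in this abstract can be illustrated with a small sketch. The Python below is a generic illustration of the pattern (a specification, a storage backend, and user-facing API objects connected through an object mapper); the class and method names are hypothetical and are not the real hdmf package API.

```python
# Illustrative sketch of the separation the HDMF abstract describes:
# (1) data specification, (2) storage backend, (3) user-facing API objects,
# connected by an object-mapping layer. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TypeSpec:                       # (1) data modeling / specification
    name: str
    fields: dict                      # field name -> expected type

@dataclass
class Container:                      # (3) data API object handed to users
    spec: TypeSpec
    values: dict = field(default_factory=dict)

class ObjectMapper:                   # glue: insulates API objects from storage layout
    def __init__(self, spec: TypeSpec):
        self.spec = spec

    def to_storage(self, obj: Container) -> dict:
        """Flatten an API object into a backend-neutral record."""
        return {f: obj.values[f] for f in self.spec.fields}

    def from_storage(self, record: dict) -> Container:
        """Rebuild an API object from a stored record."""
        return Container(self.spec, dict(record))

class DictBackend:                    # (2) data I/O and storage (in-memory stand-in)
    def __init__(self):
        self.store = {}

    def write(self, key: str, record: dict):
        self.store[key] = record

    def read(self, key: str) -> dict:
        return self.store[key]

# Usage: the spec and API stay stable even if DictBackend is swapped
# for an optimized backend (HDF5, Zarr, ...), which is the point of the
# object-mapping indirection.
spec = TypeSpec("TimeSeries", {"data": list, "rate": float})
mapper = ObjectMapper(spec)
backend = DictBackend()

ts = Container(spec, {"data": [0.1, 0.2, 0.3], "rate": 30.0})
backend.write("ts1", mapper.to_storage(ts))
restored = mapper.from_storage(backend.read("ts1"))
print(restored.values)
```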
  2. Recent studies have shown that several government and business organizations have experienced major data breaches, and breaches are increasing daily. The main target for attackers is an organization's sensitive data, which includes personally identifiable information (PII) such as social security numbers (SSN), dates of birth (DOB), and credit/debit card (CCDC) numbers. The other target is encryption/decryption keys or passwords that grant access to the sensitive data. Cloud computing is emerging as a solution for storing, transferring, and processing data in distributed locations over the Internet. Big data and the Internet of Things (IoT) have increased the possibility of sensitive data exposure. The most common attack methods are hacking, unauthorized access, insider theft, and false data injection on the move. Most attacks happen during the three states of the data life cycle: data-at-rest, data-in-use, and data-in-transit. Hence, protecting sensitive data in all states, particularly when data is moving to a cloud computing environment, needs special attention. The main purpose of this research is to analyze the risks caused by data breaches and the personal and organizational weaknesses that expose sensitive data and privacy. The paper discusses methods such as data classification and data encryption at different states to protect personal and organizational sensitive data. The paper also presents a mathematical analysis that leverages the birthday paradox to demonstrate the encryption key attack. The analysis shows that using the same keys to encrypt sensitive data in different data states makes the data less secure than using different keys. Our results show that to improve the security of sensitive data and to reduce data breaches, different keys should be used in different states of the data life cycle.
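The birthday-paradox analysis mentioned in this abstract can be checked numerically. The Python sketch below uses the standard approximation 1 - exp(-n(n-1)/2N); the key-space size and block counts are illustrative assumptions, not figures taken from the paper.

```python
# Hedged sketch: birthday-paradox estimate of collision probability when one
# key covers all data states versus a separate key per state. Parameter values
# below are illustrative assumptions only.
import math

def collision_probability(num_samples: int, keyspace_size: float) -> float:
    """Probability that at least two of num_samples uniform draws from a space
    of keyspace_size values collide (birthday-paradox approximation)."""
    exponent = -num_samples * (num_samples - 1) / (2.0 * keyspace_size)
    return -math.expm1(exponent)   # 1 - e^exponent, accurate for tiny probabilities

blocks_total = 2**34   # assumed number of blocks encrypted over the data's lifetime
keyspace = 2**128      # illustrative collision space (e.g. a 128-bit block value)

# One key across data-at-rest, data-in-use, and data-in-transit exposes every
# block to the same birthday bound; splitting across per-state keys shrinks
# the number of blocks under any single key.
same_key = collision_probability(blocks_total, keyspace)
per_state = collision_probability(blocks_total // 3, keyspace)

print(f"single key across all states : {same_key:.3e}")
print(f"separate key per data state  : {per_state:.3e}")
```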
  3. Pooling and sharing data increases and distributes its value. But since data cannot be revoked once shared, scenarios that require controlled release of data for regulatory, privacy, and legal reasons default to not sharing. Because selectively controlling what data to release is difficult, the few data-sharing consortia that exist are often built around data-sharing agreements resulting from long and tedious one-off negotiations. We introduce Data Station, a data escrow designed to enable the formation of data-sharing consortia. Data owners share data with the escrow knowing it will not be released without their consent. Data users delegate their computation to the escrow. The data escrow relies on delegated computation to execute queries without releasing the data first. Data Station leverages hardware enclaves to generate trust among participants, and exploits the centralization of data and computation to generate an audit log. We evaluate Data Station on machine learning and data-sharing applications while running on an untrusted intermediary. In addition to important qualitative advantages, we show that Data Station: i) outperforms federated learning baselines in accuracy and runtime for the machine learning application; ii) is orders of magnitude faster than alternative secure data-sharing frameworks; and iii) introduces small overhead on the critical path. 
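The delegated-computation idea in this abstract can be sketched compactly. The Python below is a minimal, hypothetical illustration of the escrow pattern (owners deposit data, consent is recorded, users submit functions, and only derived results plus an audit entry leave the escrow); it does not model the hardware enclaves or the federated-learning evaluation described in the paper.

```python
# Minimal sketch of a data-escrow with delegated computation and an audit log.
# Plain Python stand-in; class and method names are hypothetical.
import datetime

class DataEscrow:
    def __init__(self):
        self._datasets = {}      # owner-supplied data, never returned directly
        self._consents = set()   # (owner, user) pairs allowed to compute
        self.audit_log = []      # centralized record of every access attempt

    def deposit(self, owner: str, name: str, rows: list):
        self._datasets[(owner, name)] = rows

    def grant(self, owner: str, user: str):
        self._consents.add((owner, user))

    def run(self, user: str, owner: str, name: str, func):
        """Execute a delegated computation; only the derived result leaves the escrow."""
        allowed = (owner, user) in self._consents
        self.audit_log.append(
            (datetime.datetime.utcnow().isoformat(), user, owner, name, allowed)
        )
        if not allowed:
            raise PermissionError("owner has not consented to this user")
        return func(self._datasets[(owner, name)])

# Usage: the analyst never sees the raw rows, only the aggregate they requested.
escrow = DataEscrow()
escrow.deposit("hospital_a", "ages", [34, 51, 29, 62])
escrow.grant("hospital_a", "analyst_1")
print(escrow.run("analyst_1", "hospital_a", "ages", lambda rows: sum(rows) / len(rows)))
print(escrow.audit_log)
```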
  4. Deep learning is an important technique for extracting value from big data. However, the effectiveness of deep learning requires large volumes of high-quality training data. In many cases, the training data is not large enough to effectively train a deep learning classifier. Data augmentation is a widely adopted approach for increasing the amount of training data, but the quality of the augmented data may be questionable. Therefore, a systematic evaluation of training data is critical. Furthermore, if the training data is noisy, it is necessary to separate out the noisy data automatically. In this paper, we propose a deep learning classifier for automatically separating good training data from noisy data. To effectively train the deep learning classifier, the original training data need to be transformed to suit the input format of the classifier. Moreover, we investigate different data augmentation approaches to generate a sufficient volume of training data from a limited-size original training set. We evaluate the quality of the training data through cross-validation of the classification accuracy with different classification algorithms. We also check the pattern of each data item and compare the distributions of datasets. We demonstrate the effectiveness of the proposed approach through an experimental investigation of automated classification of massive biomedical images. Our approach is generic and easily adaptable to other big data domains.
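A minimal sketch of the workflow this abstract outlines, using synthetic feature vectors and scikit-learn in place of the paper's deep network and biomedical images, might look like the following; all names and parameter values are illustrative assumptions.

```python
# Hedged sketch: augment a small labeled set, train a classifier to separate
# clean from noisy samples, and judge training-data quality via cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Small "original" dataset: label 1 = clean sample, label 0 = noisy sample.
X_clean = rng.normal(loc=0.0, scale=1.0, size=(40, 16))
X_noisy = rng.normal(loc=0.0, scale=3.0, size=(40, 16))  # noise shows up as higher variance
X = np.vstack([X_clean, X_noisy])
y = np.array([1] * 40 + [0] * 40)

# Simple augmentation: jitter each sample with small Gaussian perturbations.
def augment(X, y, copies=3, jitter=0.05):
    Xs = [X] + [X + rng.normal(scale=jitter, size=X.shape) for _ in range(copies)]
    return np.vstack(Xs), np.tile(y, copies + 1)

X_aug, y_aug = augment(X, y)

# Cross-validate to judge whether the (augmented) training data supports
# a reliable clean-vs-noisy separation.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
scores = cross_val_score(clf, X_aug, y_aug, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```

Note that in a real evaluation the augmentation step would be applied inside each cross-validation fold, so that perturbed copies of the same sample do not leak between training and validation splits.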