SDTA: An Algebra for Statistical Data Transformation
Statistical data manipulation is a crucial component of many data
science analytic pipelines, particularly as part of data ingestion. This
task is generally accomplished by writing transformation scripts in
languages such as SPSS, Stata, SAS, R, and Python (Pandas). The
disparate data models, language representations and transformation
operations supported by these tools make it hard for end users to
understand and document the transformations performed, and for
developers to port transformation code across languages.
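To make the portability gap concrete, the following sketch (variable names and cut-points are illustrative, not drawn from the paper) shows one derivation, a recode of age into three groups, in two unrelated forms: SPSS command syntax, shown as a comment, and an imperative Python equivalent. A tool has no shared structure to align these two representations automatically.

```python
# The same derivation, "recode age into a 3-category age group",
# in two unrelated representations.

# As a declarative SPSS command (shown here as a comment):
#   RECODE age (LO THRU 17 = 1) (18 THRU 64 = 2) (65 THRU HI = 3) INTO agegrp.

# As imperative per-value logic in a Python script:
def recode_age(age):
    """Map an age in years to a 3-category age-group code."""
    if age <= 17:
        return 1
    elif age <= 64:
        return 2
    else:
        return 3

ages = [12, 30, 70]
agegrp = [recode_age(a) for a in ages]
```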
Tackling these challenges, we present a formal paradigm for
statistical data transformation. It consists of a data model, called
Structured Data Transformation Data Model (SDTDM), inspired by
the data models of multiple statistical transformation frameworks;
an algebra, Structural Data Transformation Algebra (SDTA), with the
ability to transform not only data within SDTDM but also metadata
at multiple structural levels; and an equivalent descriptive counterpart,
called Structured Data Transformation Language (SDTL),
recently adopted by the DDI Alliance that maintains international
standards for metadata as part of its suite of products. Experiments
with real statistical transformations on socio-economic data show
that SDTL can successfully represent 86.1% of 4,185 commands in SAS and 91.6% of 9,087 commands in SPSS obtained from a repository.
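As a rough illustration of what a structured, language-neutral representation enables (field names here are hypothetical; the actual SDTL JSON schemas are not reproduced in this abstract), a single recode command can be captured as plain machine-readable data that serializes to JSON and can be traversed to generate documentation or equivalent code in another language:

```python
import json

# Hypothetical, SDTL-flavored record for one recode command.
# Field names are illustrative; the real SDTL JSON schemas differ.
command = {
    "$type": "Recode",
    "source": {"variable": "age"},
    "target": {"variable": "agegrp"},
    "rules": [
        {"range": [None, 17], "value": 1},
        {"range": [18, 64], "value": 2},
        {"range": [65, None], "value": 3},
    ],
}

# Because the record is plain data, it serializes to JSON for metadata
# files and round-trips without loss.
serialized = json.dumps(command)
restored = json.loads(serialized)
```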
We illustrate with examples how SDTA/SDTL could assist with
the documentation of statistical data transformation, an important
aspect often neglected in metadata of datasets. We propose a system
called C2Metadata that automatically captures the transformation
and provenance information in SDTL as a part of the metadata.
Moreover, given the conversion mechanism from a source statistical
language to SDTA/SDTL, we show how a transformation program could be converted to other functionally equivalent programs, in the same or a different language, permitting code reuse and result reproducibility. We also illustrate the possibility of using SDTA to optimize SDTL transformations using
rule-based rewrites similar to SQL optimizations.
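A minimal sketch of one such rule-based rewrite, assuming a simplified `("Compute", target, sources)` encoding of commands that is not SDTA's actual representation: a Compute whose result is overwritten before it is ever read can be eliminated, analogous to dead-store elimination or projection pruning in SQL optimizers.

```python
# Hypothetical rewrite rule: drop a Compute command whose target
# variable is overwritten by a later command before any read of it.
# The tuple encoding (op, target, sources) is an assumption made
# for this sketch, not SDTA's actual representation.

def eliminate_dead_computes(program):
    """Return the program without Computes whose result is never read."""
    result = []
    for i, cmd in enumerate(program):
        op, target, _sources = cmd
        dead = False
        if op == "Compute":
            for _op2, target2, sources2 in program[i + 1:]:
                if target in sources2:
                    break          # value is read later: keep the command
                if target2 == target:
                    dead = True    # overwritten before any read: dead store
                    break
        if not dead:
            result.append(cmd)
    return result
```

The rewritten program computes the same final dataset while avoiding the wasted pass over the data, mirroring how a SQL optimizer prunes redundant work.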
- Award ID(s): 1640575
- PAR ID: 10298544
- Date Published:
- Journal Name: 33rd International Conference on Scientific and Statistical Database Management
- Page Range / eLocation ID: 109 to 120
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation