To better understand our universe, researchers currently run extreme-scale cosmology simulations on leadership supercomputers. However, such simulations generate large amounts of scientific data, which often incur expensive data-movement and storage costs. Lossy compression techniques have become attractive because they can significantly reduce data size while maintaining high data fidelity for post-analysis. In this paper, we propose to use GPU-based lossy compression for extreme-scale cosmological simulations. Our contributions are threefold: (1) we integrate multiple GPU-based lossy compressors into our open-source compression benchmark and analysis framework named Foresight; (2) we use Foresight to comprehensively evaluate the practicality of GPU-based lossy compression on two real-world extreme-scale cosmology simulations, namely HACC and Nyx, using a series of assessment metrics; and (3) we develop a general optimization guideline for determining the best-fit configurations for different lossy compressors and cosmological simulations. Experiments show that GPU-based lossy compression can provide the accuracy necessary for post-analysis of cosmological simulations and high compression ratios of 5~15x on the tested datasets, as well as much higher compression and decompression throughput than CPU-based compressors.
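The two headline metrics in this kind of evaluation are compression ratio and (de)compression throughput. The sketch below shows how they are typically computed for a single buffer; zlib stands in for a GPU lossy compressor purely to keep the harness runnable and is not part of Foresight or the compressors evaluated in the paper.

```python
# Minimal sketch of a compression benchmark harness: measure compression ratio
# and (de)compression throughput. zlib is an illustrative stand-in compressor.
import time
import zlib
import numpy as np

data = np.random.default_rng(0).normal(size=4_000_000).astype(np.float32)
raw = data.tobytes()

t0 = time.perf_counter()
compressed = zlib.compress(raw, 1)     # stand-in for a GPU lossy compressor
t1 = time.perf_counter()
zlib.decompress(compressed)
t2 = time.perf_counter()

ratio = len(raw) / len(compressed)
gib = len(raw) / 2**30
print(f"compression ratio        : {ratio:.2f}x")
print(f"compression throughput   : {gib / (t1 - t0):.2f} GiB/s")
print(f"decompression throughput : {gib / (t2 - t1):.2f} GiB/s")
```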
Understanding Bit-Error Trade-off of Transform-based Lossy Compression on Electrocardiogram Signals
The growing demand for recording longer ECG signals to improve the effectiveness of IoT-enabled remote clinical healthcare is producing large amounts of ECG data. While lossy compression techniques have shown potential to significantly reduce data volume, how to trade off data reduction against data fidelity for ECG data has received relatively little attention. This paper gives insight into the power of lossy compression applied to ECG signals by balancing data quality against compression ratio. We evaluate the performance of transform-based lossy compression on ECG datasets collected from Biosemi ActiveTwo devices. Our experimental results indicate that ECG data exhibit high energy compaction under transforms such as DCT and DWT, so compression ratios can be improved significantly without hurting data fidelity much. More importantly, we evaluate the effect of lossy compression on ECG signals by validating the R-peaks in the QRS complex. Our method obtains low error, measured in PRD (as low as 0.3) and PSNR (up to 67), using only 5% of the transform coefficients. As a result, R-peaks in the reconstructed ECG signals are almost identical to those in the original signals, thus facilitating extended ECG monitoring.
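A minimal sketch of this transform-and-truncate idea follows: keep only the largest 5% of DCT coefficients and report PRD and PSNR. The synthetic "ECG-like" signal, the 5% threshold, and the standard PRD/PSNR formulas are illustrative assumptions, not the paper's datasets or exact pipeline.

```python
# DCT-based lossy compression sketch: keep the largest 5% of coefficients,
# reconstruct, and compute PRD and PSNR against the original signal.
import numpy as np
from scipy.fft import dct, idct

def compress_dct(signal, keep_ratio=0.05):
    """Keep only the largest-magnitude DCT coefficients; zero out the rest."""
    coeffs = dct(signal, norm="ortho")
    k = max(1, int(keep_ratio * len(coeffs)))
    drop = np.argsort(np.abs(coeffs))[:-k]   # indices of coefficients to discard
    coeffs[drop] = 0.0
    return coeffs

def prd(original, reconstructed):
    """Percentage root-mean-square difference (lower is better)."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(np.max(np.abs(original)) ** 2 / mse)

# Synthetic ECG-like signal: a slow baseline plus sharp periodic peaks.
t = np.linspace(0, 10, 5000)
ecg = 0.1 * np.sin(2 * np.pi * t) + np.exp(-((t % 1.0) - 0.5) ** 2 / 0.001)

recon = idct(compress_dct(ecg, keep_ratio=0.05), norm="ortho")
print(f"PRD  = {prd(ecg, recon):.3f}")
print(f"PSNR = {psnr(ecg, recon):.1f} dB")
```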
- Award ID(s):
- 1751143
- PAR ID:
- 10221563
- Date Published:
- Journal Name:
- 2020 IEEE International Conference on Big Data (Big Data)
- Page Range / eLocation ID:
- 3494 to 3499
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Recent years have witnessed an upsurge of interest in lossy compression due to its potential to significantly reduce data volume by exploiting the spatiotemporal properties of IoT datasets. However, striking a balance between compression ratio and data fidelity is challenging, particularly when the loss of fidelity noticeably impacts downstream data analytics. In this paper, we propose a lossy prediction model for binary classification analytics tasks that minimizes the impact of the error introduced by lossy compression. We specifically focus on five classification algorithms for frost prediction in agricultural fields, allowing predictive advisories to provide helpful information for timely preparation and services. While our experimental evaluations reaffirm the nature of lossy compression, where allowing larger errors yields higher compression ratios, we also observe that classification performance in terms of accuracy and F-1 score differs among the algorithms we evaluated. Specifically, random forest is the best lossy prediction model for classifying frost. Lastly, we show the robustness of the lossy prediction model with respect to data fidelity in prediction performance.
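A minimal sketch of this style of study follows: train a random forest on clean features, then evaluate it on features perturbed within increasing error bounds to emulate error-bounded lossy compression. The synthetic dataset and the pointwise uniform-error model are illustrative assumptions, not the paper's frost data or compressor.

```python
# Sketch: how an absolute error bound on lossily compressed features affects
# downstream binary classification (random forest), measured by accuracy/F1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

rng = np.random.default_rng(0)
for error_bound in [0.0, 0.01, 0.1, 0.5]:
    # Emulate an error-bounded lossy compressor: each value may shift by
    # at most +/- error_bound (absolute error model).
    X_lossy = X_test + rng.uniform(-error_bound, error_bound, X_test.shape)
    pred = model.predict(X_lossy)
    print(f"bound={error_bound:<5} accuracy={accuracy_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f}")
```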
-
High-performance computing (HPC) systems that run large scientific simulations produce enormous amounts of data at runtime. Transferring or storing such big datasets causes a severe I/O bottleneck and a considerable storage burden. Applying compression techniques, particularly lossy compressors, can reduce the data size and mitigate such overheads. Unlike lossless compression algorithms, error-controlled lossy compressors can significantly reduce the data size while respecting a user-defined error bound. DCTZ is a transform-based lossy compressor with a highly efficient encoding and a purpose-built error-control mechanism that achieves high compression ratios with high data fidelity. However, because DCTZ quantizes the DCT coefficients in the frequency domain, it may only partially respect the relative error bound defined by the user. In this paper, we aim to improve the compression quality of DCTZ. Specifically, we propose a preconditioning method based on level offsetting and scaling to control the magnitude of the input to the DCTZ framework, thereby enforcing stricter error bounds. We evaluate the performance of our method in terms of compression ratio and rate distortion on real-world HPC datasets. Our experimental results show that our method achieves a higher compression ratio than other state-of-the-art lossy compressors under a tighter error bound while precisely guaranteeing the user-defined error bound.
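The sketch below illustrates the preconditioning idea only: offset and scale a block so its magnitude is controlled before DCT-based quantization, then invert the preconditioning after decompression. The block size, the unit-range target, and the simple uniform quantizer are illustrative assumptions, not DCTZ's internals, and this toy quantizer does not by itself guarantee a pointwise error bound.

```python
# Level offsetting + scaling before DCT quantization, inverted on reconstruction.
import numpy as np
from scipy.fft import dct, idct

def precondition(block):
    """Shift to zero minimum and scale to [0, 1]; return params to invert."""
    offset = block.min()
    scale = max(block.max() - offset, np.finfo(np.float64).tiny)
    return (block - offset) / scale, offset, scale

def compress_block(block, rel_error_bound=1e-3):
    normalized, offset, scale = precondition(block)
    coeffs = dct(normalized, norm="ortho")
    step = 2.0 * rel_error_bound                 # quantizer step tied to the bound
    quantized = np.round(coeffs / step).astype(np.int64)
    return quantized, step, offset, scale

def decompress_block(quantized, step, offset, scale):
    coeffs = quantized.astype(np.float64) * step
    return idct(coeffs, norm="ortho") * scale + offset

data = np.random.default_rng(0).normal(loc=300.0, scale=25.0, size=1024)
q, step, off, sc = compress_block(data, rel_error_bound=1e-3)
recon = decompress_block(q, step, off, sc)
print("max relative error:", np.max(np.abs(recon - data)) / (data.max() - data.min()))
```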
-
Error-bounded lossy compression is a state-of-the-art data reduction technique for HPC applications because it not only significantly reduces storage overhead but also retains high fidelity for post-analysis. Because supercomputers and HPC applications are becoming heterogeneous, adopting accelerator-based architectures and in particular GPUs, several development teams have recently released GPU versions of their lossy compressors. However, existing state-of-the-art GPU-based lossy compressors suffer from either low compression and decompression throughput or low compression quality. In this paper, we present cuSZ, an optimized GPU version of SZ, one of the best error-bounded lossy compressors. To the best of our knowledge, cuSZ is the first error-bounded lossy compressor on GPUs for scientific data. Our contributions are fourfold. (1) We propose a dual-quantization scheme that entirely removes the data dependency in the prediction step of SZ so that this step can be performed very efficiently on GPUs. (2) We develop an efficient customized Huffman coding for the SZ compressor on GPUs. (3) We implement cuSZ in CUDA and optimize its performance by improving the utilization of GPU memory bandwidth. (4) We evaluate cuSZ on five real-world HPC application datasets from the Scientific Data Reduction Benchmarks and compare it with other state-of-the-art methods on both CPUs and GPUs. Experiments show that cuSZ improves SZ's compression throughput by up to 370.1x and 13.1x over the production version running on single and multiple CPU cores, respectively, while producing the same quality of reconstructed data. It also improves the compression ratio by up to 3.48x on the tested data compared with another state-of-the-art GPU-supported lossy compressor.
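The sketch below conveys the dual-quantization idea in serial NumPy form: first quantize every value against the error bound (prequantization), then predict each prequantized value from its already-prequantized neighbor (a 1D Lorenzo-style predictor here) and keep only the integer residual. Because prediction operates on prequantized values rather than on decompressed ones, each element can be processed independently, which is what makes the step GPU-friendly. This is an illustration of the scheme under those assumptions, not cuSZ's implementation.

```python
# Dual-quantization sketch: prequantize, predict on prequantized values,
# store integer residuals; reconstruction respects the absolute error bound.
import numpy as np

def dual_quantize(data, error_bound):
    prequant = np.round(data / (2.0 * error_bound))      # prequantization
    predicted = np.concatenate(([0.0], prequant[:-1]))   # 1D Lorenzo-style predictor
    return (prequant - predicted).astype(np.int64)       # postquantization (residuals)

def reconstruct(residual, error_bound):
    prequant = np.cumsum(residual)                       # undo the prediction chain
    return prequant * (2.0 * error_bound)

data = np.cumsum(np.random.default_rng(0).normal(size=1_000_000))  # smooth 1D field
eb = 1e-2
residual = dual_quantize(data, eb)
recon = reconstruct(residual, eb)
assert np.max(np.abs(recon - data)) <= eb + 1e-9          # error bound holds
print("unique residual codes:", np.unique(residual).size) # small alphabet suits Huffman coding
```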