Reasoning about mixed real and floating-point constraints is essential for developing accurate analysis tools for floating-point programs. This paper presents FPRoCK, a prototype tool for solving mixed real and floating-point formulas. FPRoCK transforms a mixed formula into an equisatisfiable one over the reals, which is then solved using an off-the-shelf SMT solver. FPRoCK is also integrated with the PRECiSA static analyzer, which computes a sound estimate of the round-off error of a floating-point program; there, FPRoCK detects infeasible computational paths, thereby improving the accuracy of PRECiSA.
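As a rough illustration of the transformation idea (this is not FPRoCK's actual encoding; the unit round-off constant and the relative-error abstraction are assumptions made for the sketch), a rounded operation can be modeled over the reals by a bounded error term and handed to an off-the-shelf solver such as Z3:

```python
# Illustrative sketch: a mixed real/floating-point constraint where the
# float result is abstracted over the reals by a bounded relative error.
from z3 import Reals, Solver, And, Or

x, y, fl_sum, e = Reals('x y fl_sum e')
EPS = 2**-53  # unit round-off for IEEE 754 binary64 (assumed for the sketch)

s = Solver()
# Real-valued constraints on the inputs.
s.add(And(0 <= x, x <= 1, 0 <= y, y <= 1))
# fl(x + y) is abstracted as (x + y) * (1 + e) with |e| <= EPS.
s.add(fl_sum == (x + y) * (1 + e), -EPS <= e, e <= EPS)
# Mixed query: can the floating-point sum escape the widened real interval?
s.add(Or(fl_sum < 0, fl_sum > 2 + 4 * EPS))

print(s.check())  # unsat: the path is infeasible under this abstraction
```

An `unsat` answer here is exactly the kind of fact a static analyzer can exploit to discard spurious execution paths.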
Exploiting Community Structure for Floating-Point Precision Tuning
Floating-point types are notorious for their intricate representation. The effective use of mixed precision, i.e., using various precisions in different computations, is critical to achieving a good balance between accuracy and performance. Unfortunately, reasoning about mixed precision is difficult even for numerical experts. Techniques have been proposed to systematically search over floating-point variables and/or program instructions to find a faster, mixed-precision version of a given program. These techniques, however, are characterized by their black-box nature, and face scalability limitations due to the large search space. In this paper, we exploit the community structure of floating-point variables to devise a scalable hierarchical search for precision tuning. Specifically, we perform dependence analysis and edge profiling to create a weighted dependence graph that presents a network of floating-point variables. We then formulate hierarchy construction on the network as a community detection problem, and present a hierarchical search algorithm that iteratively lowers precision with regard to communities. We implement our algorithm in the tool HiFPTuner, and show that it exhibits higher search efficiency than the state of the art for 75.9% of the experiments, taking 59.6% less search time on average. Moreover, HiFPTuner finds more profitable configurations for 51.7% of the experiments, with one known to be as good as the global optimum found through exhaustive search.
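A minimal sketch of such a hierarchical, community-guided search loop, under assumptions not in the abstract: the graph weights and precision levels are illustrative, `run_and_check` is a hypothetical helper standing in for "rebuild, run, and validate a configuration", and greedy modularity substitutes for whatever detection step HiFPTuner actually uses:

```python
# Hypothetical sketch of a community-guided precision-tuning loop.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Weighted dependence graph: nodes are floating-point variables, edge
# weights come from dependence analysis and edge profiling.
G = nx.Graph()
G.add_weighted_edges_from([
    ('a', 'b', 9.0), ('b', 'c', 7.0),   # tightly coupled group
    ('x', 'y', 8.0), ('c', 'x', 1.0),   # weak cross-group link
])

def run_and_check(cfg):
    # Placeholder: rebuild at this precision, run the program, and check
    # the result against a high-precision baseline within a threshold.
    return sum(p == 'float' for p in cfg.values()) <= 3

config = {v: 'double' for v in G.nodes}
for comm in greedy_modularity_communities(G, weight='weight'):
    trial = {**config, **{v: 'float' for v in comm}}  # lower a whole community
    if run_and_check(trial):
        config = trial                                # keep the cheaper config
print(config)
```

Searching over whole communities before individual variables is what shrinks the configuration space relative to flat, per-variable search.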
- Award ID(s):
- 1750983
- PAR ID:
- 10096798
- Date Published:
- Journal Name:
- Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2018, Amsterdam, The Netherlands, July 16-21, 2018
- Page Range / eLocation ID:
- 333 to 343
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Floating-point arithmetic is a loosely standardized approximation of real arithmetic available on many computers today. Architectural and compiler differences can lead to diverse calculations across platforms, for the same input. If left untreated, platform dependence, called volatility in this paper, seriously interferes with result reproducibility and, ultimately, program portability. We present an approach to stabilizing floating-point programs against volatility. Our approach, dubbed provenance analysis, traces volatility observed in a given intermediate expression E back to volatility in preceding statements, and quantifies individual contributions to the volatility in E. Statements contributing the most are then stabilized, by disambiguating the arithmetic using expression rewriting and control pragmas. The benefit of local (as opposed to program-wide) stabilization is that compilers are free to engage performance- or precision-enhancing optimizations across program fragments that do not destabilize E. We have implemented our technique in a dynamic analysis tool that reports both volatility and provenance information. We demonstrate that local program stabilization often suffices to reduce platform dependence to an acceptable level.
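A toy sketch of the provenance idea, under assumptions not in the paper: "platforms" are emulated here by differing intermediate precisions, and a statement's contribution is estimated by pinning it to a fixed value and observing how much of E's volatility disappears:

```python
# Toy illustration (not the paper's tool): attribute the volatility of E
# to individual statements by pinning each one and re-measuring.
import numpy as np

def program(t1_fixed=None, t2_fixed=None, dtype=np.float32):
    a, b, c = dtype(1e8), dtype(1.0), dtype(-1e8)
    t1 = t1_fixed if t1_fixed is not None else (a + b) + c   # volatile sum
    t2 = t2_fixed if t2_fixed is not None else b * b         # stable product
    return t1 + t2                                           # E

def volatility(**pins):
    # "Platforms" differ in intermediate precision (float32 vs float64).
    runs = [program(dtype=d, **pins) for d in (np.float32, np.float64)]
    return float(max(runs) - min(runs))

base = volatility()
for stmt, pin in [('t1', {'t1_fixed': 1.0}), ('t2', {'t2_fixed': 1.0})]:
    print(stmt, 'contribution ~', base - volatility(**pin))
```

Pinning t1 removes all of E's variation while pinning t2 removes none, so a stabilizing rewrite would target t1 only, leaving the rest of the program free for optimization.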
-
Lossy compression techniques are ubiquitous in fields such as imagery and video; however, their adoption in the computational fluid dynamics community has not advanced to the same extent. In this work, lossy compression of high-fidelity direct numerical simulation (DNS) data is evaluated to assess its impact on various parameters of engineering interest. A Mach 2.5, spatially developing turbulent boundary layer (SDTBL) at a moderately high Reynolds number is selected as the subject of the study. The ZFP compression scheme was chosen as the core algorithm because it was carefully crafted for scientific, floating-point data. The resilience of spectral quantities as well as two-point correlations is highlighted. However, we also noted that point-wise values calculated in the physical domain were prone to quantization errors at high compression ratios. We further present the impact on higher-order statistics. In summary, we demonstrate that high-fidelity results are within reach while achieving 1.45x to 9.82x reductions in required storage relative to single-precision, IEEE 754-compliant data.
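For a flavor of the workflow, here is a minimal sketch using ZFP's Python bindings (zfpy) in fixed-accuracy mode; the synthetic field and the tolerance are placeholders, not the paper's DNS setup:

```python
# Minimal sketch: compress a 3-D float field with ZFP, then measure the
# compression ratio and the worst-case point-wise reconstruction error.
import numpy as np
import zfpy

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64, 64)).astype(np.float32)  # stand-in field

buf = zfpy.compress_numpy(field, tolerance=1e-3)  # fixed-accuracy mode
out = zfpy.decompress_numpy(buf)

ratio = field.nbytes / len(buf)
print(f'compression ratio vs float32: {ratio:.2f}x')
print(f'max point-wise error: {np.abs(field - out).max():.2e}')
```

The fixed-accuracy mode bounds the absolute error per value, which is why aggregate quantities such as spectra can survive even when individual point-wise values show quantization effects.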
-
This paper proposes a new meta-learning method, named HARMLESS (HAwkes Relational Meta LEarning method for Short Sequences), for learning heterogeneous point process models from short event sequences along with a relational network. Specifically, we propose a hierarchical Bayesian mixture Hawkes process model, which naturally incorporates the relational information among sequences into point process modeling. Compared with existing methods, our model can capture the underlying mixed-community patterns of the relational network, which simultaneously encourages knowledge sharing among sequences and facilitates adaptive learning for each individual sequence. We further propose an efficient stochastic variational meta expectation-maximization algorithm that can scale to large problems. Numerical experiments on both synthetic and real data show that HARMLESS outperforms existing methods in predicting future events.
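As background (this is the plain univariate Hawkes process, not the paper's hierarchical mixture model), the conditional intensity with an exponential kernel can be computed as below; HARMLESS, roughly speaking, places a Bayesian mixture over parameters like mu, alpha, and beta and shares them across related sequences:

```python
# Conditional intensity of a univariate Hawkes process:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
import numpy as np

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Intensity at time t given past event times (self-excitation decays)."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([0.5, 1.1, 1.3, 2.7])  # toy event sequence
print(hawkes_intensity(3.0, events))
```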
-
The classical universal approximation (UA) theorem for neural networks establishes mild conditions under which a feedforward neural network can approximate a continuous function f with arbitrary accuracy. A recent result shows that neural networks also enjoy a more general interval universal approximation (IUA) theorem, in the sense that the abstract interpretation semantics of the network using the interval domain can approximate the direct image map of f (i.e., the result of applying f to a set of inputs) with arbitrary accuracy. These theorems, however, rest on the unrealistic assumption that the neural network computes over infinitely precise real numbers, whereas their software implementations in practice compute over finite-precision floating-point numbers. An open question is whether the IUA theorem still holds in the floating-point setting. This paper introduces the first IUA theorem for floating-point neural networks that proves their remarkable ability to perfectly capture the direct image map of any rounded target function f, showing no limits exist on their expressiveness. Our IUA theorem in the floating-point setting exhibits material differences from the real-valued setting, which reflects the fundamental distinctions between these two computational models. This theorem also implies surprising corollaries, which include (i) the existence of provably robust floating-point neural networks; and (ii) the computational completeness of the class of straight-line programs that use only floating-point additions and multiplications for the class of all floating-point programs that halt.
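For intuition, interval abstract interpretation of a single affine layer followed by ReLU looks like the following real-valued sketch (the paper's contribution concerns its floating-point counterpart); the weights and input box are arbitrary examples:

```python
# Interval abstract interpretation of x -> relu(W @ x + b) over a box.
import numpy as np

def interval_affine_relu(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer and ReLU."""
    c, r = (lo + hi) / 2, (hi - lo) / 2        # center / radius form
    c2 = W @ c + b
    r2 = np.abs(W) @ r                         # radius grows by |W|
    # ReLU is monotone, so it can be applied endpoint-wise.
    return np.maximum(c2 - r2, 0), np.maximum(c2 + r2, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.3])
lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 1.0])
print(interval_affine_relu(lo, hi, W, b))
```

The returned box over-approximates the direct image of the input set; the IUA question is how tightly networks can realize such image maps, and the paper shows the floating-point setting behaves materially differently from this real-valued picture.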