Parallel and distributed computing (PDC) has become ubiquitous to the extent that even everyday users depend on parallel programming. Every programmer therefore needs to understand how parallelism and distributed programming affect problem solving; teaching only traditional sequential programming is no longer sufficient. To address the rapidly widening gap between emerging highly parallel computer architectures and the sequential programming approach taught in traditional CS/CE courses, the Computer Science Department at Tennessee Technological University has integrated PDC into its introductory programming course sequence. This paper presents our implementation efforts, our experience and lessons learned, and preliminary evaluation results.
Parallel Logic Programming: A Sequel
Abstract: Multi-core and highly connected architectures have become ubiquitous, and this has brought renewed interest in language-based approaches to the exploitation of parallelism. Since its inception, logic programming has been recognized as a programming paradigm with great potential for automated exploitation of parallelism. The comprehensive survey of the first twenty years of research in parallel logic programming, published in 2001, has since served as a fundamental reference for researchers and developers. Its contents remain valid today, but at the same time the field has continued to evolve at a fast pace in the years that followed. Many of these achievements and much ongoing research have been driven by the rapid pace of technological innovation, which has led to advances such as very large clusters, the wide diffusion of multi-core processors, the game-changing role of general-purpose graphics processing units, and the ubiquitous adoption of cloud computing. This has been paralleled by significant advances within logic programming itself, such as tabling, more powerful static analysis and verification, the rapid growth of Answer Set Programming, and, in general, more mature implementations and systems. This survey reviews the research in parallel logic programming covering the period since 2001, providing a natural continuation of the previous survey. To keep the survey self-contained, it restricts its attention to the parallelization of the major logic programming languages (Prolog, Datalog, Answer Set Programming), with an emphasis on automated parallelization and the preservation of the sequential observable semantics of these languages. The goal of the survey is to serve not only as a reference for researchers and developers of logic programming systems but also as engaging reading for anyone interested in logic, and as a useful source for researchers in parallel systems outside logic programming.
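To make the survey's central concern concrete, here is a small Python sketch (my illustration, not code from the survey; the edge facts are invented) of bottom-up evaluation of the classic Datalog transitive-closure program, with each iteration's rule applications fanned out to a process pool. Because an iteration only reads the current fact set and set union is order-independent, the parallel run reaches exactly the fixpoint a sequential run would, which is the kind of preservation of sequential observable semantics the survey emphasizes.

```python
# Minimal sketch: parallel bottom-up Datalog evaluation (illustrative only).
from concurrent.futures import ProcessPoolExecutor

# Hypothetical edge facts for the classic program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
EDGES = {(1, 2), (2, 3), (3, 4), (2, 5)}

def apply_rule(args):
    """Apply path(X,Z) :- path(X,Y), edge(Y,Z) for one partition of sources X."""
    paths, sources = args
    return {(x, z) for (x, y) in paths if x in sources
                   for (y2, z) in EDGES if y2 == y}

def transitive_closure(workers=4):
    paths = set(EDGES)                                  # path(X,Y) :- edge(X,Y).
    while True:
        sources = sorted({x for (x, _) in paths})
        parts = [set(sources[i::workers]) for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            derived = set().union(*pool.map(apply_rule,
                                            [(paths, p) for p in parts]))
        if derived <= paths:                            # least fixpoint reached
            return paths
        paths |= derived            # union order never changes the result

if __name__ == "__main__":
    print(sorted(transitive_closure()))
```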
- Award ID(s): 1914635
- PAR ID: 10387017
- Date Published:
- Journal Name: Theory and Practice of Logic Programming
- Volume: 22
- Issue: 6
- ISSN: 1471-0684
- Page Range / eLocation ID: 905 to 973
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This paper explores parallelism performance for C, C++, Go, Java, Julia, and Rust on N-body simulations. We begin with a basic O(N²) simulation for each language, based on the n-body benchmark in the Benchmark Game. The original benchmark is adjusted to include a larger number of particles and to run in parallel. We also add parallelism to the force calculations using a kD-tree. This work builds on previous work by including parallelism and by adding the Julia programming language to our survey. We find that for straight number-crunching, all of these languages provide similar performance, and all have sufficient support for parallelism that runtimes scale well with thread counts. On the other hand, when a spatial data structure such as the kD-tree is introduced, the runtimes vary dramatically between languages. In that situation, Julia's performance looks more like Python's, taking over 100 times as long as Rust/C/C++ to finish. Rust comes out on top, with an impressive 50% lead over C and C++. (A minimal parallel direct-sum sketch in Python appears after this list.)
- Hicks, Michael (Ed.): Logic programming, as exemplified by Datalog, defines the meaning of a program as its unique smallest model: the deductive closure of its inference rules. However, many problems call for an enumeration of models that vary along some set of choices while maintaining structural and logical constraints; there is no single canonical model. The notion of stable models for logic programs with negation has successfully captured programmer intuition about the set of valid solutions for such problems, giving rise to a family of programming languages and associated solvers known as answer set programming. Unfortunately, the definition of a stable model is frustratingly indirect, especially in the presence of rules containing free variables. We propose a new formalism, finite-choice logic programming, that uses choice, not negation, to admit multiple solutions. Finite-choice logic programming contains all the expressive power of the stable model semantics, gives meaning to a new and useful class of programs, and enjoys a least-fixed-point interpretation over a novel domain. We present an algorithm for exploring the solution space and prove it correct with respect to our semantics. Our implementation, the Dusa logic programming language, has performance that compares favorably with state-of-the-art answer set solvers and exhibits more predictable scaling with problem size. (A toy sketch of choice-driven enumeration appears after this list.)
- In recent years, the pace of innovation in machine learning (ML) has accelerated, and researchers in SysML have created algorithms and systems that parallelize ML training over multiple devices or computational nodes. As ML models become more structurally complex, many systems have struggled to provide all-round performance on a variety of models. In particular, the difficulty of scaling ML up is usually underestimated: mapping a model to an appropriate distribution strategy demands substantial knowledge and time. Applying parallel training systems to complex models adds nontrivial development overhead on top of model prototyping and often results in lower-than-expected performance. This tutorial identifies research and practical pain points in parallel ML training, and discusses the latest developments in algorithms and systems for addressing these challenges in both usability and performance. Notably, this tutorial presents a new perspective that unifies seemingly different distributed ML training strategies and, building on it, introduces new techniques and system architectures to simplify and automate ML parallelization. This tutorial is built upon the authors' years of research and industry experience, a comprehensive literature survey, and several recent tutorials and papers published by the authors and peer researchers. The tutorial consists of four parts. The first part presents a landscape of distributed ML training techniques and systems, and highlights the major difficulties real users face when writing distributed ML code with big models or big data. The second part dives deep into the mainstream training strategies, guided by real use cases. By developing a new, unified formulation to represent the seemingly different data- and model-parallel strategies, we describe a set of techniques and algorithms to achieve ML auto-parallelization, along with compiler system architectures for auto-generating and exercising parallelization strategies based on models and clusters. The third part exposes a hidden layer of practical pain points in distributed ML training, hyper-parameter tuning and resource allocation, and introduces techniques to improve these aspects. The fourth part is designed as a hands-on coding session in which we walk the audience through writing distributed training programs in Python, using the various distributed ML tools and interfaces provided by the Ray ecosystem. (A minimal Ray data-parallel sketch appears after this list.)
- Abstract: Search-optimization problems are plentiful in scientific and engineering domains. Artificial intelligence (AI) has long contributed to the development of search algorithms and declarative programming languages geared toward solving and modeling search-optimization problems. Automated reasoning and knowledge representation are the subfields of AI that are particularly invested in these developments. Many popular automated reasoning paradigms provide users with languages supporting optimization statements: recall integer linear programming, MaxSAT, optimization satisfiability modulo theories, and (constraint) answer set programming. These paradigms vary significantly in their languages and in the ways they express quality conditions on computed solutions. Here we propose a unifying framework of so-called extended weight systems that eliminates syntactic distinctions between paradigms. They allow us to see essential similarities and differences between the optimization statements provided by distinct automated reasoning languages. We also study formal properties of the proposed systems that immediately translate into formal properties of paradigms that can be captured within our framework. (A small worked example contrasting two such paradigms appears after this list.)
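Below are small, self-contained sketches for the items above. First, for the N-body comparison: a minimal Python sketch of the O(N²) direct-sum force pass with the outer loop over bodies split across a process pool, in the spirit of the paper's parallelized force calculation. This is my illustration, not the study's benchmark code; the positions, masses, and constants are made up.

```python
# Minimal sketch: parallel O(N^2) direct-sum accelerations (illustrative only).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N, G, EPS = 1024, 1.0, 1e-9                 # body count, normalized units, softening
rng = np.random.default_rng(0)
POS = rng.standard_normal((N, 3))           # hypothetical positions
MASS = rng.uniform(1.0, 2.0, N)             # hypothetical masses

def accel_chunk(idx):
    """Direct-sum acceleration of the bodies in idx against all N bodies."""
    d = POS[None, :, :] - POS[idx, None, :]     # pairwise offset vectors
    r2 = (d * d).sum(axis=2) + EPS              # softened squared distances
    coef = G * MASS[None, :] / r2 ** 1.5        # per-pair scale; self term is 0
    return (coef[:, :, None] * d).sum(axis=1)

def accelerations(workers=4):
    # Each worker handles one contiguous slice of the outer loop over bodies.
    chunks = np.array_split(np.arange(N), workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.concatenate(list(pool.map(accel_chunk, chunks)))

if __name__ == "__main__":
    print(accelerations()[:3])
```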
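Next, for the finite-choice logic programming paper: a toy illustration of choice-driven enumeration, written to convey the general idea only; it is not Dusa's algorithm or the paper's semantics. Each node must take exactly one color from a finite set, a hard constraint prunes inconsistent choices, and backtracking yields every model rather than a single least model.

```python
# Toy sketch: enumerate all models arising from finite choices (illustrative).
NODES = ["a", "b", "c"]
EDGES = {("a", "b"), ("b", "c")}      # hypothetical graph
COLORS = ["red", "blue"]              # the finite choice for each node

def consistent(assign):
    """Hard constraint: adjacent nodes never share a color."""
    return all(assign.get(x) is None or assign.get(y) is None
               or assign[x] != assign[y]
               for x, y in EDGES)

def models(assign, rest):
    """Backtrack over choices, yielding every consistent total assignment."""
    if not rest:
        yield dict(assign)
        return
    node, tail = rest[0], rest[1:]
    for color in COLORS:              # exactly one color is chosen per node
        assign[node] = color
        if consistent(assign):
            yield from models(assign, tail)
        del assign[node]

for m in models({}, NODES):
    print(m)                          # two models for this graph
```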
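Third, for the hands-on Ray portion of the tutorial item: a minimal data-parallel training sketch using Ray's core task API (ray.init, @ray.remote, and ray.get are real Ray calls; the linear-regression model and synthetic data are my own stand-ins). Each worker computes the gradient on its data shard and the driver averages the shards, the simplest instance of a data-parallel strategy.

```python
# Minimal data-parallel sketch on Ray's task API (illustrative model and data).
import numpy as np
import ray

ray.init()

@ray.remote
def shard_grad(w, X, y):
    """Mean-squared-error gradient on one data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.standard_normal((4096, 8))               # synthetic features
y = X @ np.arange(1.0, 9.0) + 0.01 * rng.standard_normal(4096)

w = np.zeros(8)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
for step in range(200):
    # Fan out one gradient task per shard, then average on the driver.
    grads = ray.get([shard_grad.remote(w, Xs, ys) for Xs, ys in shards])
    w -= 0.05 * np.mean(grads, axis=0)

print(np.round(w, 2))                            # recovers roughly [1, 2, ..., 8]
```

In real use the shards would be placed in Ray's object store once with ray.put rather than re-shipped on every step; the sketch keeps the loop as plain as possible.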
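Finally, for the extended weight systems paper: a standard textbook contrast (my notation, not the paper's formalism) showing one optimization statement phrased in two of the paradigms the abstract lists, integer linear programming and MaxSAT.

```latex
% The same preference, stated in two paradigms (illustrative notation).
% ILP: a linear objective over 0-1 variables with a linear constraint.
\[
  \max\; 3x_1 + 2x_2
  \quad\text{s.t.}\quad x_1 + x_2 \le 1,\qquad x_1, x_2 \in \{0,1\}
\]
% Weighted MaxSAT: one hard clause plus weighted soft unit clauses.
\[
  \text{hard: } (\lnot x_1 \lor \lnot x_2), \qquad
  \text{soft: } (x_1;\ w = 3),\ \ (x_2;\ w = 2)
\]
```

Both formulations admit the same optimal solution, x₁ = 1 and x₂ = 0, with value 3; the paper's weight systems aim to eliminate exactly this kind of syntactic difference between paradigms.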