

Title: Experimenting with robotic intra-logistics domains
Abstract: We introduce the asprilo framework to facilitate experimental studies of approaches addressing complex dynamic applications. For this purpose, we have chosen the domain of robotic intra-logistics. This domain is not only highly relevant in the context of today's fourth industrial revolution, but it moreover combines a multitude of challenging issues within a single uniform framework, including multi-agent planning, reasoning about action, change, resources, and strategies. In return, asprilo allows users to study alternative solutions with respect to effectiveness and scalability. Although asprilo relies on Answer Set Programming and Python, it is readily usable by any system complying with its fact-oriented interface format. This makes it attractive for benchmarking and teaching well beyond logic programming. More precisely, asprilo consists of a versatile benchmark generator, a solution checker, and a visualizer, as well as a collection of reference encodings featuring various ASP techniques. Importantly, the visualizer's animation capabilities are indispensable for inspecting valid as well as invalid solution candidates in complex scenarios like intra-logistics. It also allows for graphically editing benchmark layouts, which can serve as a basis for generating benchmark suites.
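Because the interface is purely fact-oriented, any system that reads and writes plain logic-programming facts can plug into the generator, checker, and visualizer. As a rough illustration, the Python sketch below emits facts in the init/2 and occurs/3 shape the paper describes; the exact object and attribute names are assumptions that should be checked against the asprilo documentation.

```python
# A minimal sketch of asprilo's fact-oriented interface, assuming the
# init/2 and occurs/3 predicate shapes described in the paper; the exact
# object/attribute names are assumptions, to be checked against the docs.

def init_fact(kind, ident, x, y):
    """Render an initial-state fact placing an object on the grid."""
    return f"init(object({kind},{ident}), value(at,({x},{y})))."

def move_fact(robot, dx, dy, t):
    """Render a plan step: robot moves by (dx,dy) at time point t."""
    return f"occurs(object(robot,{robot}), action(move,({dx},{dy})), {t})."

# A toy instance plus a one-step plan candidate, e.g. text that a
# fact-based checker could consume directly.
instance = [init_fact("robot", 1, 1, 1), init_fact("shelf", 1, 2, 1)]
plan = [move_fact(1, 1, 0, 1)]
print("\n".join(instance + plan))
```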
Award ID(s): 1619273
NSF-PAR ID: 10100328
Journal Name: Theory and Practice of Logic Programming
Volume: 18
Issue: 3-4
ISSN: 1471-0684
Page Range / eLocation ID: 502 to 519
Sponsoring Org: National Science Foundation
More Like this
  1. Summary

One of the activities of the Pacific Rim Applications and Grid Middleware Assembly (PRAGMA) is fostering Virtual Biodiversity Expeditions by bringing domain scientists and cyberinfrastructure specialists together as a team. Over the past few years, PRAGMA members have been collaborating on virtualizing the Lifemapper software. Virtualization and cloud computing have introduced great flexibility and efficiency into IT projects. Virtualization refers to technologies that provide a layer of abstraction between the server hardware and the software that runs on it. This abstraction enables a logical view of computing resources and allows multiple servers to run on the same hardware. With this project, we are virtualizing Lifemapper by enabling its installation and configuration on a virtual cluster. Virtualization provides application scalability, maximizes resource utilization, and creates a more efficient, agile, and automated infrastructure. However, there are downsides to the complexity inherent in these environments, including the need for special techniques to deploy cluster hosts, dependence on virtual environments, and challenging application installation, management, and configuration. In this study, we report on progress of the Lifemapper virtualization framework, focused on a reproducible and highly configurable infrastructure capable of fast deployment.

Lifemapper is a distributed software application developed by the Biodiversity Institute at the University of Kansas. Lifemapper creates and maintains a publicly accessible archive of species distribution maps calculated from public specimen data. The software also provides a suite of tools for biodiversity researchers that calculate single- and multi-species distribution predictions and macroecological analyses through application programming interfaces. Our goal is to create a viable solution that can be easily adopted and reused by scientists from multiple institutions or projects. This solution (1) allows fast deployment of ready-made cluster images, (2) reproduces the complete Lifemapper processing pipeline on demand at multiple sites and in different hosting environments, and (3) enables scientists to perform Lifemapper-facilitated data processing on restricted-use data, very large datasets, or other unique data.
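As a hedged illustration of API-mediated access of this kind, the sketch below issues a single HTTP query for one species' modeled distribution. The base URL, endpoint path, and parameter names are hypothetical placeholders, not Lifemapper's documented interface.

```python
# Illustrative only: a sketch of querying a species-distribution service
# over HTTP. The endpoint and parameter names below are hypothetical
# placeholders, not Lifemapper's documented API.
import requests

BASE_URL = "https://example.org/api/v2"  # placeholder, not the real service

def fetch_projection(species_name):
    """Request a modeled distribution summary for one species."""
    resp = requests.get(f"{BASE_URL}/sdmProject",
                        params={"displayName": species_name, "format": "json"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_projection("Ursus arctos"))
```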

A key contribution of this work is describing the practical experience of taking a complex, clustered, domain-specific data analysis and simulation system and enabling its operation on a variety of system configurations. Uses of this portability range from whole-cluster replication to teaching and experimentation on a single laptop. System virtualization is used to practically define and make portable the full application stack, including its complex set of supporting software, and allows Lifemapper deployment in a variety of environments.

     
  2. Research prior to 2005 found that no single framework existed that could fully capture the engineering design process and benchmark each element of the process against a commonly accepted set of reference artifacts. Compounding the construction of a stepwise, artifact-driven framework is that engineering design is typically practiced over time as a complex and iterative process. For both novice and advanced students, learning and applying the design process is often cumulative, with many informal and formal programmatic opportunities to practice essential elements. The Engineering Design Process Portfolio Scoring Rubric (EDPPSR) was designed to apply to any portfolio intended to document an individual- or team-driven process leading to an original attempt to design a product, process, or method that provides an optimal solution to a genuine and meaningful problem. In essence, the portfolio should be a detailed account or "biography" of a project and the thought processes that inform it. Besides narrative and explanatory text, entries may include (but need not be limited to) drawings, schematics, photographs, notebook and journal entries, transcripts or summaries of conversations and interviews, and audio/video recordings. Such entries are likely to be necessary in order to convey accurately and completely the complex thought processes behind the planning, implementation, and self-evaluation of the project. The rubric comprises four main components, each in turn composed of three elements, and each element has its own holistic rubric. The process by which the EDPPSR was created gives evidence of the relevance and representativeness of the rubric and helps to establish validity. The EDPPSR model as originally rendered has a strong theoretical foundation, as it was developed by reference to the literature on the steps of the design process, through focus groups, and through expert review by teachers, faculty, and researchers in performance-based portfolio rubrics and assessments. Using the unified construct validity framework, the EDPPSR's validity was further established through expert reviewers in engineering design, who provided evidence supporting the content relevance and representativeness of the rubric in representing the basic process of engineering design. This manuscript offers empirical evidence that supports the use of the EDPPSR model to evaluate student design-based projects in a reliable and valid manner. Intra-class correlation coefficients (ICCs) were calculated to determine the inter-rater reliability (IRR) of the rubric. Given the small sample size, we also examined 95% confidence intervals to provide a range of values in which the estimate of inter-rater reliability is likely contained.
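For concreteness, here is a self-contained sketch of the reliability statistic named above: ICC(2,1), the two-way random-effects, single-rater form, computed via a standard ANOVA decomposition. The score matrix is made-up placeholder data, not the study's ratings.

```python
# ICC(2,1): two-way random effects, single rater, absolute agreement,
# computed from an n-subjects x k-raters score matrix.
import numpy as np

def icc2_1(x):
    """Return ICC(2,1) for an n x k matrix of rater scores."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares for subjects (rows), raters (columns), and residual error.
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical placeholder scores: 4 portfolios rated by 3 raters.
scores = [[3, 4, 3], [2, 2, 3], [4, 4, 5], [1, 2, 2]]
print(f"ICC(2,1) = {icc2_1(scores):.3f}")
```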
  3. Rank aggregation is widely used in group decision making and many other applications where it is of interest to consolidate heterogeneous ordered lists. Oftentimes, these rankings may involve a large number of alternatives, contain ties, and/or be incomplete, all of which complicate the use of robust aggregation methods. In particular, these characteristics have limited the applicability of the aggregation framework based on the Kemeny-Snell distance, which satisfies key social choice properties that have been shown to engender improved decisions. This work introduces a binary programming formulation for the generalized Kemeny rank aggregation problem, whose ranking inputs may be complete or incomplete, with or without ties. Moreover, it leverages the equivalence of two ranking aggregation problems, namely minimizing the Kemeny-Snell distance and maximizing the Kendall-τ correlation, to compare the newly introduced binary programming formulation to a modified version of an existing integer programming formulation associated with the Kendall-τ distance. The new formulation has fewer variables and constraints, which leads to faster solution times. Moreover, we develop a new social choice property, the nonstrict extended Condorcet criterion, which can be regarded as a natural extension of the well-known Condorcet criterion and the extended Condorcet criterion. Unlike its parent properties, the new property is adequate for handling complete rankings with ties. The property is leveraged to develop a structural decomposition algorithm through which certain large instances of the NP-hard Kemeny rank aggregation problem can be solved exactly in a practical amount of time. To test the practical implications of the new formulation and social choice property, we work with instances constructed from a probabilistic distribution and with benchmark instances from PrefLib, a library of preference data.
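To make the objective concrete, the toy sketch below brute-forces Kemeny aggregation for complete rankings without ties, where the Kemeny-Snell distance reduces to counting pairwise disagreements (the Kendall-τ distance). This is a baseline for intuition only; the paper's binary programming formulation is what makes realistic instance sizes tractable.

```python
# Brute-force Kemeny aggregation for complete rankings without ties.
# Exponential in the number of alternatives; usable only at toy scale.
from itertools import combinations, permutations

def kendall_tau_distance(r1, r2):
    """Count pairwise disagreements between two rankings.

    Each ranking maps alternative -> position (smaller = preferred)."""
    items = list(r1)
    return sum((r1[a] < r1[b]) != (r2[a] < r2[b])
               for a, b in combinations(items, 2))

def kemeny_aggregate(rankings):
    """Return the strict ordering minimizing total distance to all inputs."""
    items = sorted(rankings[0])
    best, best_cost = None, float("inf")
    for perm in permutations(items):
        cand = {item: pos for pos, item in enumerate(perm)}
        cost = sum(kendall_tau_distance(cand, r) for r in rankings)
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

voters = [{"a": 0, "b": 1, "c": 2},
          {"a": 0, "c": 1, "b": 2},
          {"b": 0, "a": 1, "c": 2}]
print(kemeny_aggregate(voters))  # -> (('a', 'b', 'c'), 2)
```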
  4. Programming-by-example (PBE) is a synthesis paradigm that allows users to generate functions by simply providing input-output examples. While PBE is a promising interaction paradigm, synthesis is still too slow for real-time interaction and more widespread adoption. Existing approaches to PBE synthesis have used automated reasoning tools, such as SMT solvers, as well as machine learning techniques. At its core, the automated reasoning approach relies on highly domain-specific knowledge of programming languages. The machine learning approaches, on the other hand, exploit the fact that when working with program code, it is possible to generate arbitrarily large training datasets. In this work, we propose a system that uses machine learning in tandem with automated reasoning techniques to solve Syntax-Guided Synthesis (SyGuS) style PBE problems. By preprocessing SyGuS PBE problems with a neural network, we can use a data-driven approach to reduce the size of the search space, then allow automated reasoning-based solvers to more quickly find a solution analytically. Our system runs atop existing SyGuS PBE synthesis tools, decreasing the runtime of the winner of the 2019 SyGuS Competition for the PBE Strings track by 47.65% and outperforming all of the competing tools.
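The following toy sketch illustrates the two-stage idea only, not the actual system: a stubbed "learned" scorer prunes the operators of a tiny string DSL, then an enumerative search over the surviving operators is checked against the input-output examples.

```python
# Toy neural-guided PBE: prune DSL operators with a (stubbed) scorer,
# then enumerate compositions of the survivors, shortest first.
# Operator names and the scoring heuristic are illustrative assumptions.

def score_ops(examples):
    """Stub for the learned component: rank operators by predicted relevance."""
    # A real model would embed the examples; here, a trivial heuristic:
    # if every output is uppercase, 'upper' is probably relevant.
    likely_upper = all(o == o.upper() for _, o in examples)
    return {"upper": 0.9 if likely_upper else 0.1,
            "lower": 0.1, "strip": 0.3, "first3": 0.3}

OPS = {"upper": str.upper, "lower": str.lower,
       "strip": str.strip, "first3": lambda s: s[:3]}

def synthesize(examples, threshold=0.2, max_depth=2):
    """Search the pruned space for a program matching all examples."""
    kept = [n for n, p in score_ops(examples).items() if p >= threshold]
    frontier = [[]]  # programs represented as operator-name sequences
    for _ in range(max_depth + 1):
        next_frontier = []
        for prog in frontier:
            def run(s, prog=prog):
                for name in prog:
                    s = OPS[name](s)
                return s
            if all(run(i) == o for i, o in examples):
                return prog
            next_frontier += [prog + [n] for n in kept]
        frontier = next_frontier
    return None

print(synthesize([(" hi ", "HI"), (" go ", "GO")]))  # -> ['upper', 'strip']
```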
  5. Conformational polymorphs of organic molecular crystals represent a challenging test for quantum chemistry because they require careful balancing of the intra- and intermolecular interactions. This study examines 54 molecular conformations from 20 sets of conformational polymorphs, along with the relative lattice energies and 173 dimer interactions taken from six of the polymorph sets. These systems are studied with a variety of van der Waals-inclusive density functional theory models; dispersion-corrected spin-component-scaled second-order Møller–Plesset perturbation theory (SCS-MP2D); and domain-based local pair natural orbital coupled cluster singles, doubles, and perturbative triples [DLPNO-CCSD(T)]. We investigate how delocalization error in conventional density functionals impacts monomer conformational energies, systematic errors in the intermolecular interactions, and the nature of the error cancellation that occurs in the overall crystal. The density functionals B86bPBE-XDM, PBE-D4, PBE-MBD, PBE0-D4, and PBE0-MBD are found to exhibit sizable one-body and two-body errors versus DLPNO-CCSD(T) benchmarks, and their success in predicting the relative polymorph energies relies heavily on error cancellation between different types of intermolecular interactions or between intra- and intermolecular interactions. The SCS-MP2D and, to a lesser extent, ωB97M-V models exhibit smaller errors and rely less on error cancellation. Implications for crystal structure prediction of flexible compounds are discussed. Finally, the one-body and two-body DLPNO-CCSD(T) energies taken from these conformational polymorphs establish the CP1b and CP2b benchmark datasets, which could be useful for testing quantum chemistry models in challenging real-world systems with a complex interplay between intra- and intermolecular interactions, a number of which are significantly impacted by delocalization error.
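The one-body/two-body language above refers to a pairwise energy decomposition, roughly E_total ≈ Σ_i E_i + Σ_{i<j} (E_ij − E_i − E_j). As a sketch of that bookkeeping only, the snippet below combines placeholder monomer and dimer energies; the numbers are illustrative, not computed quantum chemistry results.

```python
# Bookkeeping sketch of the one-body/two-body energy decomposition:
# E_total ~ sum_i E_i + sum_{i<j} dE_ij, with dE_ij = E_ij - E_i - E_j.
from itertools import combinations

def two_body_expansion(monomer_E, dimer_E):
    """monomer_E: {i: E_i}; dimer_E: {(i, j): E_ij} for i < j."""
    one_body = sum(monomer_E.values())
    two_body = sum(dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
                   for i, j in combinations(sorted(monomer_E), 2))
    return one_body, two_body, one_body + two_body

# Three-molecule toy cluster with placeholder energies (hartree).
monomers = {0: -76.40, 1: -76.41, 2: -76.39}
dimers = {(0, 1): -152.82, (0, 2): -152.80, (1, 2): -152.81}
one, two, total = two_body_expansion(monomers, dimers)
print(f"one-body={one:.2f}  two-body={two:+.3f}  total={total:.2f}")
```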

     