Title: Jmvx: Fast Multi-threaded Multi-version Execution and Record-Replay for Managed Languages
Multi-Version eXecution (MVX) is a technique that deploys many equivalent versions of the same program — variants — as a single program, with direct applications in important fields such as security, reliability, analysis, and availability. MVX can be seen as “online Record/Replay (RR)”, as RR captures a program’s execution as a log stored on disk that can later be replayed to observe the same execution. Unfortunately, current MVX techniques target programs written in C/C++ and do not support programs written in managed languages, which account for the vast majority of code written nowadays. This paper presents the design, implementation, and evaluation of JMVX — a novel system for performing MVX and RR on programs written in managed languages. JMVX supports programs written in Java by intercepting automatically identified non-deterministic methods, via a novel dynamic analysis technique, and ensuring that all variants execute the same methods and obtain the same data. JMVX supports multi-threaded programs by capturing synchronization operations in one variant and ensuring all other variants follow the same ordering. We validated that JMVX supports MVX and RR by applying it to a suite of benchmarks representative of programs written in Java. Internally, JMVX uses a circular buffer located in shared memory between JVMs to enable fast communication between all variants, averaging 5% (47%) performance overhead when performing MVX with multithreading support disabled (enabled), 8% (25%) when recording, and 13% (73%) when replaying.
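The shared-memory circular buffer mentioned in the abstract can be sketched as follows. This is an illustrative single-producer/single-consumer ring buffer over a memory-mapped file — a plausible way for two JVMs to exchange data through shared memory — not JMVX's actual implementation; the class and method names are hypothetical.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch: a single-producer/single-consumer ring buffer backed
// by a memory-mapped file, so a leader JVM and a follower JVM can exchange
// bytes without sockets or pipes. Not JMVX's real code.
public class SharedRingBuffer {
    private static final int HEADER = 8;  // 4-byte head index + 4-byte tail index
    private final MappedByteBuffer buf;
    private final int capacity;

    public SharedRingBuffer(File backing, int capacity) throws IOException {
        this.capacity = capacity;
        try (RandomAccessFile raf = new RandomAccessFile(backing, "rw")) {
            this.buf = raf.getChannel()
                          .map(FileChannel.MapMode.READ_WRITE, 0, HEADER + capacity);
        }
    }

    // Producer side: publish one byte; returns false if the buffer is full.
    public boolean offer(byte b) {
        int head = buf.getInt(0), tail = buf.getInt(4);
        if ((head + 1) % capacity == tail) return false;  // full
        buf.put(HEADER + head, b);
        buf.putInt(0, (head + 1) % capacity);             // advance head
        return true;
    }

    // Consumer side: take one byte; returns -1 if the buffer is empty.
    public int poll() {
        int head = buf.getInt(0), tail = buf.getInt(4);
        if (tail == head) return -1;                      // empty
        byte b = buf.get(HEADER + tail);
        buf.putInt(4, (tail + 1) % capacity);             // advance tail
        return b;
    }
}
```

In a real multi-JVM setting, both processes would map the same file (or a `/dev/shm` segment, which is why the artifact's `--shm-size` flag matters) and would need memory fences or volatile accesses around the head/tail updates.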
Award ID(s):
2227183
PAR ID:
10600850
Author(s) / Creator(s):
; ;
Publisher / Repository:
Association for Computing Machinery
Date Published:
Journal Name:
Proceedings of the ACM on Programming Languages
Volume:
8
Issue:
OOPSLA2
ISSN:
2475-1421
Page Range / eLocation ID:
1641 to 1669
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Introduction Java Multi-Version Execution (JMVX) is a tool for performing Multi-Version Execution (MVX) and Record/Replay (RR) in Java. Most tools for MVX and RR observe the behavior of a program at a low level, e.g., by looking at system calls. Unfortunately, this approach fails for high-level language virtual machines due to benign divergences (differences in behavior that accomplish the same result) introduced by the virtual machine -- particularly by garbage collection and just-in-time compilation. In other words, the management of the virtual machine creates differing sequences of system calls that lead existing tools to believe a program has diverged when, in practice, the application running on top of the VM has not. JMVX takes a different approach, opting instead to add MVX and RR logic into the bytecode of compiled programs running in the VM, avoiding benign divergences related to VM management.   This artifact is a Docker image that will create a container holding our source code, compiled system, and experiments with JMVX. The image allows you to run the experiments we used to address the research questions from the paper (Section 4). This artifact is designed to show: [Supported] JMVX performs MVX for Java [Supported] JMVX performs RR for Java [Supported] JMVX is performant    In the "Step by Step" section, we will point out how to run experiments to generate data supporting these claims. The third claim is supported; however, it may not be easily reproducible. For the paper, we measured performance on bare metal rather than in a Docker container. When testing the containerized artifact on a MacBook (Sonoma v14.5), JMVX ran slower than expected. See the section "Differences From Experiment" for properties of the artifact that were altered (and could affect runtime results). Thanks for taking the time to explore our artifact.
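The record/replay idea described above — intercept a non-deterministic method, log its result in the recording run, and serve the logged value in the replaying run — can be sketched in plain Java. This is a minimal illustration using delegation and an in-memory log as a stand-in for JMVX's bytecode rewriting and on-disk log; the class and method names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch (not JMVX's real code): an intercepted wrapper around a
// nondeterministic call. In RECORD mode the result is appended to a log; in
// REPLAY mode the logged value is returned instead of re-executing the call,
// so the replaying run observes exactly the recorded execution.
public class RecordReplay {
    public enum Mode { RECORD, REPLAY }

    private final Mode mode;
    private final Deque<Long> log;  // stand-in for the on-disk log / shared buffer

    public RecordReplay(Mode mode, Deque<Long> log) {
        this.mode = mode;
        this.log = log;
    }

    // Instrumented stand-in for a nondeterministic method such as
    // System.currentTimeMillis().
    public long currentTimeMillis() {
        if (mode == Mode.REPLAY) {
            return log.removeFirst();  // replay the recorded value
        }
        long value = System.currentTimeMillis();
        log.addLast(value);            // record the value for later replay
        return value;
    }
}
```

A recorder and a replayer sharing the same log observe identical values, which is what lets the variants (or the replayed run) stay in lockstep despite nondeterminism.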
Hardware Requirements x86 machine running Linux, preferably Ubuntu 22.04 (Jammy) 120 GB of storage About 10 GB of RAM to spare 2+ cores Getting Started Guide This section is broken into two parts: setting up the Docker container and running a quick experiment to test that everything is working.   Container Setup Download the container image (DOI 10.5281/zenodo.12637140). If using Docker Desktop, increase the size of the virtual disk to 120 GB. In the GUI, go to Settings > Resources > Virtual Disk (should be a slider).  From the terminal, modify the `diskSizeMiB` field in Docker's `settings.json` and restart Docker. Linux location: ~/.docker/desktop/settings.json. Mac location: ~/Library/Group Containers/group.com.docker/settings.json. Install with docker load -i java-mvx-image.tar.gz This process can take 30 minutes to 1 hour. Start the container via: docker run --name jmvx -it --shm-size="10g" java-mvx The `--shm-size` parameter is important, as JMVX will crash the JVM if not enough shared memory is available (detected via a SIGBUS error).    Quick Start The container starts you off in an environment with JMVX already prepared, i.e., JMVX has been built and the instrumentation is done. The script test-quick.sh will test all of JMVX's features on DaCapo's avrora benchmark. The script has comments explaining each command. It should take about 10 minutes to run.   The script starts by running our system call tracer tool. This phase of the script will create the directory /java-mvx/artifact/trace, which will contain:   natives-avrora.log -- a (serialized) map from methods that resulted in system calls to the stack traces that generated them. /java-mvx/artifact/scripts/tracer/analyze2.sh is used to analyze this log and generate the other files in this directory. table.txt -- a table showing how many unique stack traces led to the invocation of a native method that called a system call. recommended.txt -- a list of methods JMVX recommends instrumenting for the benchmark. 
dump.txt -- a textual dump of the last 8 methods from every stack trace logged. This allows us to reduce the number of methods we need to instrument by choosing a wrapper that can handle multiple system calls; `FileSystemProvider.checkAccess` is an example of this.   JMVX recommends functions to instrument; these are included in recommended.txt.  If you inspect the file, you'll see some simple candidates for instrumentation, e.g., available, open, and read from FileInputStream. The instrumentation code for FileInputStream can be found in /java-mvx/src/main/java/edu/uic/cs/jmvx/bytecode/FileInputStreamClassVisitor.java. The recommendations work in many cases, but for some, e.g., FileDescriptor.closeAll, we chose a different method (e.g., FileInputStream.close) by manually inspecting dump.txt.   After tracing, runtime data is gathered, starting with measuring the overhead caused by instrumentation. Next, the script moves on to collecting data on MVX, and finally RR. The raw output of the benchmark runs for these phases is saved in /java-mvx/artifact/data/quick. Tables showing the benchmark's runtime performance will be placed in /java-mvx/artifact/tables/quick. That directory will contain:   instr.txt -- measures the overhead of instrumentation. mvx.txt -- performance for multi-version execution mode. rec.txt -- performance for recording. rep.txt -- performance for replaying.   This script captures data for research claims 1-3, albeit for a single benchmark and with a single iteration. Note: data is also captured for the benchmark's memory usage, but the txt tables only display runtime data. For more, see readme.pdf or readme.md. 
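To make the instrumentation targets concrete, here is a hedged sketch of the kind of wrapper the recommendations point at: a `read()` that logs each return value when recording and serves logged values when replaying. JMVX injects equivalent logic directly into `FileInputStream` via bytecode rewriting (see `FileInputStreamClassVisitor`); this delegation-based class and its names are purely illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative stand-in for JMVX's instrumented FileInputStream.read():
// in record mode, delegate to the real stream and log each result;
// in replay mode, return logged results without touching the file system.
public class RecordingInputStream extends InputStream {
    private final InputStream delegate;   // may be null when replaying
    private final Deque<Integer> log;
    private final boolean replay;

    public RecordingInputStream(InputStream delegate, Deque<Integer> log, boolean replay) {
        this.delegate = delegate;
        this.log = log;
        this.replay = replay;
    }

    @Override
    public int read() throws IOException {
        if (replay) {
            return log.removeFirst();     // same byte the recorded run saw
        }
        int b = delegate.read();
        log.addLast(b);                   // record every result, including -1 at EOF
        return b;
    }
}
```

Wrapping `read()` (rather than the lower-level native call) is the same idea the recommendations embody: one method-level wrapper can cover several underlying system calls.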
  2. Managed languages such as Java and Scala are prevalently used in the development of large-scale distributed systems. Under the managed runtime, when performing data transfer across machines, a task frequently conducted in a Big Data system, the system needs to serialize a sea of objects into a byte sequence before sending them over the network. The remote node receiving the bytes then deserializes them back into objects. This process is both performance-inefficient and labor-intensive: (1) object serialization/deserialization makes heavy use of reflection, an expensive runtime operation, and (2) serialization/deserialization functions need to be hand-written and are error-prone. This paper presents Skyway, a JVM-based technique that can directly connect managed heaps of different (local or remote) JVM processes. Under Skyway, objects in the source heap can be directly written into a remote heap without changing their formats. Skyway provides performance benefits to any JVM-based system by completely eliminating the need (1) of invoking serialization/deserialization functions, thus saving CPU time, and (2) of requiring developers to hand-write serialization functions. 
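For readers unfamiliar with the round trip Skyway eliminates, a minimal example of standard Java serialization is below: to move an object between JVM heaps, the sender first flattens it to bytes and the receiver rebuilds it, both steps driven by reflection. The helper class and method names are illustrative, not Skyway's API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Minimal illustration of the serialize/deserialize round trip that standard
// Java data transfer requires and that Skyway's direct heap-to-heap writes avoid.
public class SerializeDemo {
    static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);         // reflection-heavy flattening on the sender
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();      // reflection-heavy rebuilding on the receiver
        }
    }
}
```

Both directions of this round trip consume CPU time proportional to the object graph, which is the overhead the paper targets.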
  3.
    Resource-disaggregated architectures have risen in popularity for large datacenters. However, prior disaggregation systems are designed for native applications; in addition, all of them require applications to possess excellent locality to be efficiently executed. In contrast, programs written in managed languages are subject to periodic garbage collection (GC), which is a typical graph workload with poor locality. Although most datacenter applications are written in managed languages, current systems are far from delivering acceptable performance for these applications. This paper presents Semeru, a distributed JVM that can dramatically improve the performance of managed cloud applications in a memory-disaggregated environment. Its design possesses three major innovations: (1) a universal Java heap, which provides a unified abstraction of virtual memory across CPU and memory servers and allows any legacy program to run without modifications; (2) a distributed GC, which offloads object tracing to memory servers so that tracing is performed closer to data; and (3) a swap system in the OS kernel that works with the runtime to swap page data efficiently. An evaluation of Semeru on a set of widely-deployed systems shows very promising results. 
  4. Coarse-grained reconfigurable arrays (CGRAs) have gained attention in recent years due to their promising power efficiency compared to traditional von Neumann architectures. To program these architectures using ordinary languages such as C, a dataflow compiler must transform the original sequential, imperative program into an equivalent dataflow graph, composed of dataflow operators running in parallel. This transformation is challenging since the asynchronous nature of dataflow graphs allows out-of-order execution of operators, leading to behaviors not present in the original imperative programs. We address this challenge by developing a translation validation technique for dataflow compilers to ensure that the dataflow program has the same behavior as the original imperative program on all possible inputs and schedules of execution. We apply this method to a state-of-the-art dataflow compiler targeting the RipTide CGRA architecture. Our tool uncovers 8 compiler bugs where the compiler outputs incorrect dataflow graphs, including a data race that is otherwise hard to discover via testing. After repairing these bugs, our tool verifies the correct compilation of all programs in the RipTide benchmark suite. 
  5. Aldrich, Jonathan; Salvaneschi, Guido (Ed.)
    Tensor processing infrastructures such as deep learning frameworks and specialized hardware accelerators have revolutionized how computationally intensive code from domains such as deep learning and image processing is executed and optimized. These infrastructures provide powerful and expressive abstractions while ensuring high performance. However, to utilize them, code must be written specifically using the APIs / ISAs of such software frameworks or hardware accelerators. Importantly, given the fast pace of innovation in these domains, code written today quickly becomes legacy as new frameworks and accelerators are developed, and migrating such legacy code manually is a considerable effort. To enable developers to leverage such DSLs while preserving their current programming paradigm, we present Tenspiler, a verified-lifting-based compiler that uses program synthesis to translate sequential programs written in general-purpose programming languages (e.g., C++ or Python code that does not leverage any specialized framework or accelerator) into tensor operations. Central to Tenspiler is our carefully crafted yet simple intermediate language, named TensIR, that expresses tensor operations. TensIR enables efficient lifting, verification, and code generation. Unlike classical pattern-matching-based compilers, Tenspiler uses program synthesis to translate input code into TensIR, which is then compiled to the target API / ISA. Currently, Tenspiler already supports six DSLs, spanning a broad spectrum of software and hardware environments. Furthermore, we show that new backends can be easily supported by Tenspiler by adding simple pattern-matching rules for TensIR. 
Using 10 real-world code benchmark suites, our experimental evaluation shows that by translating code to be executed on 6 different software frameworks and hardware devices, Tenspiler offers on average 105× kernel and 9.65× end-to-end execution time improvement over the fully-optimized sequential implementation of the same benchmarks. 