Title: Human-Centric Programming in the Large - Command Languages to Scalable Cyber Training
Programming in the large allows composition of processes executing code written using programming in the small. Traditionally, systems supporting programming in the large have included interpreters of OS command languages, but today, with the emergence of collaborative “big data” science, these systems also include cyberinfrastructures, which allow computations to be carried out on remote machines in the “cloud”. The rationale for these systems, even the traditional command interpreters, is human-centric computing, as they are designed to support quick, interactive development and execution of process workflows. Some cyberinfrastructures extend this human-centricity by also providing manipulation of visualizations of these workflows. To further increase the human-centricity of these systems, we have started a new project on cyber training: instruction in the use of command languages and visual components of cyberinfrastructures. Our objective is to provide scalable remote awareness of trainees' progress and difficulties, as well as collaborative and automatic resolution of their difficulties. Our current plan is to provide awareness based on a subway workflow metaphor, allow a trainer to collaborate with multiple trainees using a single instance of a command interpreter, and combine research in process and interaction workflows to support automatic help. These research directions can be considered an application of the general principle of integrating programming in the small and large.
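
The shared-interpreter idea can be pictured with a small sketch. The Python below is a hypothetical illustration only (SharedInterpreter and its methods are invented names, not the project's actual API): a single shell process executes each command once, and every attached participant, the trainer and each trainee, observes the same input and output.

    # Hypothetical sketch of the "single command interpreter, multiple participants"
    # idea; SharedInterpreter and its methods are illustrative, not the project's API.
    import subprocess
    from typing import Callable, List

    class SharedInterpreter:
        """Run each command once and broadcast the result to all attached views."""

        def __init__(self) -> None:
            self.observers: List[Callable[[str, str], None]] = []

        def attach(self, observer: Callable[[str, str], None]) -> None:
            # An observer could be the trainer's console or a trainee's terminal widget.
            self.observers.append(observer)

        def run(self, command: str) -> None:
            # The command runs in one place; every participant sees the same output.
            result = subprocess.run(command, shell=True, capture_output=True, text=True)
            for notify in self.observers:
                notify(command, result.stdout + result.stderr)

    if __name__ == "__main__":
        shell = SharedInterpreter()
        shell.attach(lambda cmd, out: print(f"[trainer]  $ {cmd}\n{out}"))
        shell.attach(lambda cmd, out: print(f"[trainee1] $ {cmd}\n{out}"))
        shell.run("echo hello from the shared session")

A real training system would additionally relay trainee input and associate each command with a step in the expected workflow, which is where the subway-style progress awareness described above would attach; this sketch only shows the one-interpreter, many-viewers structure.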
Award ID(s):
1829752
PAR ID:
10104308
Author(s) / Creator(s):
Date Published:
Journal Name:
2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)
Page Range / eLocation ID:
295 to 297
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent advances in Large Language Models (LLMs) have made automatic code generation possible for real-world programming tasks in general-purpose programming languages such as Python. However, there are few human studies on the usability of these tools and how they fit the programming workflow. In this work, we conducted a within-subjects user study with 24 participants to understand how programmers use and perceive Copilot, an LLM-based code generation tool. We found that, while Copilot did not necessarily improve the task completion time or success rate, most participants preferred to use Copilot in daily programming tasks, since Copilot often provided a useful starting point and saved the effort of searching online. However, participants did face difficulties in understanding, editing, and debugging code snippets generated by Copilot, which significantly hindered their task-solving effectiveness. Finally, we highlighted several promising directions for improving the design of Copilot based on our observations and participants’ feedback.
  2. This talk will present a design space based on these dimensions using concrete examples of several systems/frameworks including Gradescope, Web-CAT, the JUnit testing framework, diff-based systems, and a system we have developed at UNC. It will point out future directions that can be pursued to develop a software framework for assessing concurrent programs written in multiple programming languages, one that improves the productivity of trainers and the learning of trainees.
  3. Phylogenetic studies now routinely require manipulating and summarizing thousands of data files. For most of these tasks, currently available software requires considerable computing resources and substantial knowledge of command‐line applications. We develop an ultrafast and memory‐efficient software, SEGUL, that performs common phylogenomic dataset manipulations and calculates statistics summarizing essential data features. Our software is available as standalone command‐line interface (CLI) and graphical user interface (GUI) applications, and as a library for Rust, R and Python, with possible support of other languages. The CLI and library versions run native on Windows, Linux and macOS, including Apple ARM Macs. The GUI version extends support to include mobile iOS, iPadOS and Android operating systems. SEGUL leverages the high performance of the Rust programming language to offer fast execution times and low memory footprints regardless of dataset size and platform choice. The inclusion of a GUI minimizes bioinformatics barriers to phylogenomics while SEGUL's efficiency reduces economic barriers by allowing analysis on inexpensive hardware. Our support for mobile operating systems further enables teaching phylogenomics where access to computing power is limited.
  4. Coordination and situation awareness are among the most important aspects of collaborative analysis in smart buildings. They are especially useful for emergency responders such as firefighters, police, and military personnel. Currently, communication between command centers and crews is often performed via voice, cameras, and possibly hand-held devices, which offer limited and inefficient solutions. This work investigates a mixed-reality platform to improve coordination and situation awareness for multiple users performing real-time operations in smart buildings. Our platform provides a flexible architecture to synchronize crew locations in buildings and vital information across the team and command center in real time. We have also developed several immersive interaction functions to support efficient exchange of useful visual information. The case study and example results demonstrate the advantages of our immersive approach for on-site collaboration in a real physical environment.
  5. Large-scale observatories are shared-use resources that provide open access to data from geographically distributed sensors and instruments. This data has the potential to accelerate scientific discovery. However, seamlessly integrating the data into scientific workflows remains a challenge. In this paper, we summarize our ongoing work in supporting data-driven and data-intensive workflows and outline our vision for how these observatories can improve large-scale science. Specifically, we present programming abstractions and runtime management services to enable the automatic integration of data in scientific workflows. Further, we show how approximation techniques can be used to address network and processing variations by studying constraint limitations and their associated latencies. We use the Ocean Observatories Initiative (OOI) as a driving use case for this work.
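
As a rough illustration of the approximation idea in item 5, the sketch below is hypothetical (StreamRequest and fetch_with_budget are invented names, not the paper's or OOI's API): a workflow step requests observatory data under a latency budget and falls back to an approximate, previously cached value when the fetch cannot finish in time.

    # Hypothetical sketch: fetch observatory data under a latency budget, falling
    # back to an approximate (cached) value when the remote read is too slow.
    # StreamRequest and fetch_with_budget are illustrative names only.
    import time
    from concurrent.futures import ThreadPoolExecutor, TimeoutError
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class StreamRequest:
        stream_id: str        # identifier of a sensor stream (illustrative)
        max_latency_s: float  # how long the workflow step is willing to wait
        fallback: float       # approximate value, e.g. the last cached observation

    def fetch_with_budget(req: StreamRequest, fetch: Callable[[str], float]) -> float:
        """Return fresh data if it arrives within the budget, else the fallback."""
        pool = ThreadPoolExecutor(max_workers=1)
        future = pool.submit(fetch, req.stream_id)
        try:
            return future.result(timeout=req.max_latency_s)
        except TimeoutError:
            return req.fallback  # an approximate result keeps the workflow moving
        finally:
            pool.shutdown(wait=False)  # do not block the step on the slow fetch

    if __name__ == "__main__":
        def slow_sensor_read(stream_id: str) -> float:
            time.sleep(0.2)  # simulate network and processing delay
            return 12.7

        req = StreamRequest("ctd-temperature", max_latency_s=0.05, fallback=12.5)
        print(fetch_with_budget(req, slow_sensor_read))  # prints 12.5, the fallback

The paper described in item 5 places such constraints in workflow runtime services rather than in individual calls; this sketch only shows the latency-versus-accuracy trade at a single step.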