Title: Learning to communicate about shared procedural abstractions
Many real-world tasks require agents to coordinate their behavior to achieve shared goals. Successful collaboration requires not only adopting the same communicative conventions, but also grounding these conventions in the same task-appropriate conceptual abstractions. We investigate how humans use natural language to collaboratively solve physical assembly problems more effectively over time. Human participants were paired up in an online environment to reconstruct scenes containing two block towers. One participant could see the target towers, and sent assembly instructions for the other participant to reconstruct. Participants provided increasingly concise instructions across repeated attempts on each pair of towers, using more abstract referring expressions that captured each scene's hierarchical structure. To explain these findings, we extend recent probabilistic models of ad hoc convention formation with an explicit perceptual learning mechanism. These results shed light on the inductive biases that enable intelligent agents to coordinate upon shared procedural abstractions.
Award ID(s):
1911835
NSF-PAR ID:
10285682
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
ISSN:
1069-7977
Page Range / eLocation ID:
77-83
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Artificial social intelligence (ASI) agents have great potential to aid the success of individuals, human–human teams, and human–artificial intelligence teams. To develop helpful ASI agents, we created an urban search and rescue task environment in Minecraft to evaluate ASI agents’ ability to infer participants’ knowledge training conditions and predict participants’ next victim type to be rescued. We evaluated ASI agents’ capabilities in three ways: (a) comparison to ground truth—the actual knowledge training condition and participant actions; (b) comparison among different ASI agents; and (c) comparison to a human observer criterion, whose accuracy served as a reference point. The human observers and the ASI agents used video data and timestamped event messages from the testbed, respectively, to make inferences about the same participants and topic (knowledge training condition) and the same instances of participant actions (rescue of victims). Overall, ASI agents performed better than human observers in inferring knowledge training conditions and predicting actions. Refining the human criterion can guide the design and evaluation of ASI agents for complex task environments and team composition.

     
  2. We have developed and experimented with an approach to teaching low-level Java concurrency abstractions in our first required course for CS majors, which assumes knowledge of procedural programming. The driving problems are visualized simulations of multiple physical objects in motion that may (a) be confined to a shared space and (b) coordinate with each other. Unlike driving problems and exercises drawn from domains such as sorting or image processing, these simulations require no domain-specific knowledge, and their implementation demonstrates the benefits of object-based programming. They allow focus on both the performance and programmability benefits of concurrency, provide analogies for an abstraction-independent explanation of concurrency concepts, and can be used to incrementally motivate all low-level concurrency abstractions and to visualize the effect of using, or not using, these abstractions. Layered, simulation-based worked examples illustrating the abstractions were presented and readily understood in multiple offerings of a course that implemented this approach. Students implemented non-trivial assignments based on these abstractions even when the assignments were optional, encountered few major obstacles thanks to visual error feedback, and were excited by concurrency, which they felt empowered them to implement arbitrary applications early.
  3. Abstract

    We present CELER (Corpus of Eye Movements in L1 and L2 English Reading), a broad coverage eye-tracking corpus for English. CELER comprises over 320,000 words, and eye-tracking data from 365 participants. Sixty-nine participants are L1 (first language) speakers, and 296 are L2 (second language) speakers from a wide range of English proficiency levels and five different native language backgrounds. As such, CELER has an order of magnitude more L2 participants than any currently available eye movements dataset with L2 readers. Each participant in CELER reads 156 newswire sentences from the Wall Street Journal (WSJ), in a new experimental design where half of the sentences are shared across participants and half are unique to each participant. We provide analyses that compare L1 and L2 participants with respect to standard reading time measures, as well as the effects of frequency, surprisal, and word length on reading times. These analyses validate the corpus and demonstrate some of its strengths. We envision CELER to enable new types of research on language processing and acquisition, and to facilitate interactions between psycholinguistics and natural language processing (NLP).

     
  4. The ACM/IEEE CS 2013 report recommends fifteen hours of parallel & distributed computing (PDC) education for every undergraduate. This workshop illustrates the use of the Raspberry Pi as an inexpensive, multicore platform for teaching shared-memory parallel programming. The inexpensive and tactile nature of the Raspberry Pi enables each student to experience her own parallel multiprocessor through sight and touch. In this hands-on workshop, we will teach attendees how they can leverage the Raspberry Pi and the OpenMP library to teach shared-memory parallel concepts in their own classrooms. All CS educators who are interested in learning about the Raspberry Pi, shared-memory parallelism, and OpenMP are encouraged to attend. In Part I of the workshop, each participant will connect to and learn about the Raspberry Pi's multicore capabilities. In Part II, each participant will engage in self-paced, hands-on exploration of basic parallel computing concepts using the OpenMP "patternlets" from CSinParallel.org. In Part III, participants will investigate more complex applications, such as numeric integration and drug design, and study how these applications can be parallelized using OpenMP; a minimal numeric-integration sketch appears after this list. We will conclude the workshop with a series of lightning talks discussing how the Raspberry Pi has been used to teach parallel computing concepts at different institutions. We will also present a summary of student perceptions of the Raspberry Pi. All materials from this workshop will be freely available from CSinParallel.org. Space is limited to 20 participants. A laptop is required.
  5. Two people looking at the same dataset will create different mental models, prioritize different attributes, and connect with different visualizations. We seek to understand the space of data abstractions associated with mental models and how well people communicate their mental models when sketching. Data abstractions have a profound influence on the visualization design, yet it’s unclear how universal they may be when not initially influenced by a representation. We conducted a study about how people create their mental models from a dataset. Rather than presenting tabular data, we presented each participant with one of three datasets in paragraph form, to avoid biasing the data abstraction and mental model. We observed various mental models, data abstractions, and depictions from the same dataset, and how these concepts are influenced by communication and purpose-seeking. Our results have implications for visualization design, especially during the discovery and data collection phase. 
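To make the OpenMP "patternlet" style referenced in item 4 concrete, below is a minimal sketch of a numeric integration exercise: it approximates pi by integrating 4/(1+x^2) over [0, 1] with the midpoint rule and an OpenMP parallel-for reduction. The integrand, interval count, and timing calls are illustrative choices for this sketch, not the CSinParallel.org materials themselves.

/*
 * Minimal sketch (not the CSinParallel.org patternlet itself): approximate
 * pi by numerically integrating 4/(1+x^2) over [0, 1] with the midpoint
 * rule, parallelized across cores with an OpenMP parallel-for reduction.
 *
 * Build (e.g., on a Raspberry Pi):  gcc -fopenmp -O2 integrate.c -o integrate
 * Run:                              OMP_NUM_THREADS=4 ./integrate
 */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const long n = 10000000;           /* number of subintervals (illustrative) */
    const double h = 1.0 / (double)n;  /* width of each subinterval */
    double sum = 0.0;

    double start = omp_get_wtime();

    /* Each thread accumulates a private partial sum; OpenMP combines them. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;      /* midpoint of the i-th subinterval */
        sum += 4.0 / (1.0 + x * x);
    }

    double pi = h * sum;
    double elapsed = omp_get_wtime() - start;

    printf("pi ~= %.12f (threads: %d, time: %.4f s)\n",
           pi, omp_get_max_threads(), elapsed);
    return 0;
}

The reduction(+:sum) clause gives each thread a private partial sum and combines the partial sums when the loop ends, avoiding a data race on the shared accumulator; running the program with OMP_NUM_THREADS=1 and then OMP_NUM_THREADS=4 on a Raspberry Pi makes the speedup directly observable.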