Title: Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection
Award ID(s): 2315725
PAR ID: 10495480
Author(s) / Creator(s):
Publisher / Repository: NeurIPS Proceedings
Date Published:
Journal Name: Advances in Neural Information Processing Systems
ISSN: 1049-5258
ISBN: 9781713871088
Format(s): Medium: X
Location: New Orleans
Sponsoring Org: National Science Foundation
More Like This
  1. Human decisions are context dependent in ways that violate classical norms of rational choice. However, these norms implicitly depend on idealized descriptive assumptions that are often unrealistic. We focus on one such assumption: that information is constant across contexts. Choice contexts often supply subtle cues—which may be embedded in frames, procedures, or menus—to which human decision makers can be highly sensitive. We review recent evidence that some important context effects reflect dynamically coherent belief and preference updating, in response to ecologically valid cues. This evidence paints a more nuanced picture of human rationality in natural choice environments and opens up prospects for nonpaternalistic forms of choice architecture. 
  2. Over the past two decades, behavioral research in privacy has made considerable progress transitioning from acontextual studies to using contextualization as a powerful sensitizing device for illuminating the boundary conditions of privacy theories. Significant challenges and opportunities remain, however, in elevating and converging individually contextualized studies into a context-contingent theory that explicates the mechanisms through which contexts influence consumers' privacy concerns and their behavioral reactions. This paper identifies the important barriers that this lack of context theorizing poses to the generalizability of privacy research findings and argues for accelerating the transition from the contextualization of individual research studies to an integrative understanding of context effects on privacy concerns. It also takes a first step toward this goal by providing a conceptual framework and an associated methodological instantiation for assessing how context-oriented nuances influence privacy concerns. Empirical evidence demonstrates the value of the framework as a diagnostic device for guiding the selection of contextual contingencies in future research, advancing the pace of convergence toward context-contingent theories in information privacy. This paper was accepted by Anindya Ghose, information systems.
  3. Transformer models using segment-based processing have been an effective architecture for simultaneous speech translation. However, such models create a context mismatch between the training and inference environments, limiting potential translation accuracy. We solve this issue by proposing Shiftable Context, a simple yet effective scheme that ensures consistent segment and context sizes are maintained throughout training and inference, even in the presence of partially filled segments caused by the streaming nature of simultaneous translation. Shiftable Context is also broadly applicable to segment-based transformers for streaming tasks. Our experiments on the English-German, English-French, and English-Spanish language pairs from the MuST-C dataset demonstrate that, when applied to the Augmented Memory Transformer, a state-of-the-art model for simultaneous speech translation, the proposed scheme achieves average BLEU improvements of 2.09, 1.83, and 1.95 across wait-k values for the three language pairs, respectively, with minimal impact on computation-aware Average Lagging.
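The abstract above does not spell out the mechanism, but its core idea (keeping the total segment-plus-context window size constant even when the newest segment is only partially filled) can be illustrated with a small sketch. The following Python snippet is an assumption-laden illustration, not the authors' implementation; the function and parameter names (shiftable_window, segment_size, context_size) are hypothetical.

    # Illustrative sketch of the "consistent segment + context size" idea
    # from the Shiftable Context abstract above. All names and details are
    # assumptions for illustration, not the authors' implementation.

    def shiftable_window(frames, segment_size, context_size):
        """Yield (context, segment) windows over a stream of feature frames.

        When the newest segment is only partially filled (as happens at the
        end of a simultaneous-translation stream), the left context is
        extended so that len(context) + len(segment) stays constant,
        matching the window size seen during training.
        """
        window = segment_size + context_size  # total size kept fixed
        for start in range(0, len(frames), segment_size):
            segment = frames[start:start + segment_size]
            # Shift the context boundary left to compensate for a short segment.
            ctx_len = window - len(segment)
            context = frames[max(0, start - ctx_len):start]
            yield context, segment

    if __name__ == "__main__":
        stream = list(range(10))  # stand-in for 10 audio feature frames
        for ctx, seg in shiftable_window(stream, segment_size=4, context_size=2):
            print("context:", ctx, "segment:", seg)

On a 10-frame stream with segment_size=4 and context_size=2, the final window pairs the 2-frame partial segment with 4 frames of left context, so every window has the same total length the model saw during training.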