This content will become publicly available on August 1, 2026

Title: Critical testing in recognition memory: Selective influence, single-item generalization, and the high-threshold hypothesis.
In recent years, discussions comparing high-threshold and continuous accounts of recognition-memory judgments have increasingly turned their attention toward critical testing. One of the defining features of this approach is its requirement that the relationship between theoretical assumptions and predictions be laid out in a transparent and precise way. One of the (fortunate) consequences of this requirement is that it encourages researchers to debate the merits of the different assumptions at play. The present work addresses a recent attempt to overturn the dismissal of high-threshold models by getting rid of a background selective-influence assumption. However, it can be shown that the contrast process proposed to explain violations of selective influence undermines a more general assumption that we dubbed "single-item generalization." We argue that the case for the dismissal of these assumptions, and the claimed support for the proposed high-threshold contrast account, does not withstand scrutiny of their theoretical properties and empirical implications.
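A minimal numerical sketch may help make the contrast between the two model classes concrete. The snippet below computes hit and false-alarm rates under a textbook two-high-threshold model and an equal-variance signal-detection model; the specific parameterizations and parameter values are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: hit and false-alarm rates predicted by a standard
# two-high-threshold (2HT) model versus an equal-variance signal-detection
# (SDT) model. Parameter values are illustrative assumptions only.
from statistics import NormalDist

def two_high_threshold(d_old: float, d_new: float, g: float) -> tuple[float, float]:
    """2HT: detect an old item with prob d_old, detect a new item with prob
    d_new, otherwise guess 'old' with prob g."""
    hit = d_old + (1 - d_old) * g
    fa = (1 - d_new) * g
    return hit, fa

def signal_detection(d_prime: float, criterion: float) -> tuple[float, float]:
    """Equal-variance SDT: old items ~ N(d', 1), new items ~ N(0, 1);
    respond 'old' when the familiarity value exceeds the criterion."""
    z = NormalDist()
    hit = 1 - z.cdf(criterion - d_prime)
    fa = 1 - z.cdf(criterion)
    return hit, fa

for g in (0.2, 0.5, 0.8):            # sweep the guessing/response-bias parameter
    print("2HT:", two_high_threshold(0.6, 0.6, g))
for c in (1.5, 0.75, 0.0):           # sweep the response criterion
    print("SDT:", signal_detection(1.5, c))
```

Sweeping the bias parameter traces a linear ROC under the two-high-threshold model and a curved ROC under the signal-detection model, the sort of diverging prediction that critical testing aims to tie to explicitly stated assumptions.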
Award ID(s): 2145308
PAR ID: 10632548
Author(s) / Creator(s): ; ; ;
Publisher / Repository: APA
Date Published:
Journal Name: Journal of Experimental Psychology: Learning, Memory, and Cognition
Volume: 51
Issue: 8
ISSN: 0278-7393
Page Range / eLocation ID: 1259 to 1280
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. One of the primary research challenges in Attribute-Based Encryption (ABE) is constructing and proving cryptosystems that are adaptively secure. To date the main paradigm for achieving adaptive security in ABE is dual system encryption. However, almost all such solutions in bilinear groups rely on (variants of) either the subgroup decision problem over composite order groups or the decision linear assumption. Both of these assumptions are decisional rather than search assumptions, and the target of the assumption is a source or bilinear group element. This is in contrast to earlier selectively secure ABE systems, which can be proven secure from either the decisional or search Bilinear Diffie-Hellman assumption. In this work we make progress on closing this gap by giving a new ABE construction for the subset functionality and proving security under the Search Bilinear Diffie-Hellman assumption. We first provide a framework for proving adaptive security in Attribute-Based Encryption systems. We introduce a concept of ABE with deletable attributes where any party can take a ciphertext encrypted under an attribute string x and modify it into a ciphertext encrypted under any string x' derived from x by replacing any bits of x with ⊥ symbols (i.e. "deleting" attributes of x). The semantics of the system are that any private key for a circuit C can be used to decrypt a ciphertext associated with x' if none of the input bits read by circuit C are ⊥ symbols and C accepts the string. We show a pathway for combining ABE with deletable attributes with constrained pseudorandom functions to obtain adaptively secure ABE, building upon the recent work of Tsabary. Our new ABE system will be adaptively secure and be a ciphertext-policy ABE that supports the same functionality as the underlying constrained PRF as long as the PRF is "deletion conforming". Here we also provide a simple constrained PRF construction that gives subset functionality. Our approach enables us to access a broader array of Attribute-Based Encryption schemes that support deletion of attributes. For example, we show that both the Goyal et al. (GPSW) and Boyen ABE schemes can trivially handle a deletion operation. And, by using a hardcore bit variant of the GPSW scheme, we obtain an adaptively secure ABE scheme under the Search Bilinear Diffie-Hellman assumption in addition to pseudorandom functions in NC1. This gives the first adaptively secure ABE from a search assumption, as all prior work relied on decision assumptions over source group elements.
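The deletion semantics described above can be illustrated with a small sketch. The representation below (attribute strings as lists, deleted positions as None, circuits as Python callables) is an assumption made purely for illustration; it models the decryption semantics only, not the cryptographic construction.

```python
# Illustrative sketch of "deletable attributes" semantics: positions of an
# attribute string over {0, 1} can be replaced by a wildcard (None, standing
# in for the ⊥ symbol), and a key for a circuit decrypts only if the circuit
# never reads a deleted position and accepts the string.
from typing import Callable, Optional, Sequence

Attr = Optional[int]  # 0, 1, or None (a "deleted" position)

def delete(x: Sequence[int], positions: set[int]) -> list[Attr]:
    """Derive x' from x by replacing the chosen bit positions with ⊥ (None)."""
    return [None if i in positions else b for i, b in enumerate(x)]

def can_decrypt(circuit: Callable[[Sequence[int]], bool],
                reads: set[int],
                x_prime: Sequence[Attr]) -> bool:
    """A key for `circuit` (which reads the positions in `reads`) decrypts a
    ciphertext for x' iff no read position is deleted and the circuit accepts."""
    if any(x_prime[i] is None for i in reads):
        return False
    # Unread (possibly deleted) positions are filled arbitrarily; the circuit ignores them.
    filled = [b if b is not None else 0 for b in x_prime]
    return circuit(filled)

x = [1, 0, 1, 1]
x_prime = delete(x, {1})                     # delete attribute 1
policy = lambda s: s[0] == 1 and s[2] == 1   # circuit reading positions {0, 2}
print(can_decrypt(policy, {0, 2}, x_prime))  # True: reads avoid the deleted bit
print(can_decrypt(policy, {0, 1}, x_prime))  # False: position 1 was deleted
```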
2. We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-Döttling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-Döttling-Garg-Malavolta (Eprint 2020), and Wee-Wichs (Eprint 2020). We provide explicit counterexamples to the 2-circular shielded randomness leakage assumption with respect to the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and to the homomorphic pseudorandom LWE samples conjecture proposed by Wee-Wichs. Our work suggests a separation between classical circular security of the kind underlying unlevelled fully homomorphic encryption and the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing. Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken; our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.
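The role that "specific implementations of circuits" plays here can be illustrated informally: two circuits can compute the same function yet differ in structure, and an assumption that quantifies over some implementation (or leaves the implementation unspecified) lets an adversary pick a deliberately unhelpful one. The toy example below is only an analogy for that flexibility, not a rendition of the actual counterexamples.

```python
# Toy illustration of "implementation flexibility": two circuits that compute
# the same Boolean function but differ in structure. Evaluating them
# homomorphically performs different operations, so side information derived
# from the evaluation (e.g., noise growth in an FHE scheme) can differ.
# This is an analogy only, not the paper's construction.

def identity_direct(x: int) -> int:
    """Straightforward implementation of the identity on one bit."""
    return x

def identity_padded(x: int, k: int = 1) -> int:
    """Functionally identical, but routes the bit through extra gates:
    (x XOR k) XOR k == x for any constant k baked into the circuit."""
    return (x ^ k) ^ k

assert all(identity_direct(b) == identity_padded(b) for b in (0, 1))
```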
3. There is increasing interest in modeling the relationship between two sets of high-dimensional measurements with potentially high correlations. Canonical correlation analysis (CCA) is a classical tool that explores the dependency of two multivariate random variables and extracts canonical pairs of highly correlated linear combinations. Driven by applications in genomics, text mining, and imaging research, among others, many recent studies generalize CCA to high-dimensional settings. However, most of them either rely on strong assumptions on the covariance matrices or do not produce nested solutions. We propose a new sparse CCA (SCCA) method that recasts high-dimensional CCA as an iterative penalized least squares problem. Thanks to this formulation, our method directly estimates the sparse CCA directions with efficient algorithms. Therefore, in contrast to some existing methods, the new SCCA does not impose any sparsity assumptions on the covariance matrices. The proposed SCCA is also very flexible in the sense that it can easily be combined with properly chosen penalty functions to perform structured variable selection and incorporate prior information. Moreover, SCCA produces nested solutions and thus provides great convenience in practice. Theoretical results show that SCCA can consistently estimate the true canonical pairs with overwhelming probability in ultra-high dimensions. Numerical results also demonstrate the competitive performance of SCCA.
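The iterative penalized least squares idea can be sketched as alternating lasso-type regressions with renormalization. The exact penalties, scaling, and deflation steps used in the paper are not given in the abstract, so the sketch below assumes a simple l1-penalized alternating scheme for the leading canonical pair.

```python
# Minimal sketch of sparse CCA for the leading canonical pair via alternating
# penalized least squares. The specific penalty, scaling, and stopping rule
# are assumptions for illustration; this is not the paper's algorithm.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_cca_pair(X, Y, alpha=0.05, n_iter=50):
    """Estimate sparse direction vectors (u, v) so that corr(Xu, Yv) is large."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(Y.shape[1])
    v /= np.linalg.norm(Y @ v)
    for _ in range(n_iter):
        # Regress the current Y-score on X with an l1 penalty, then renormalize.
        u = Lasso(alpha=alpha, fit_intercept=False).fit(X, Y @ v).coef_
        if np.linalg.norm(X @ u) == 0:
            break
        u /= np.linalg.norm(X @ u)
        # Regress the current X-score on Y with an l1 penalty, then renormalize.
        v = Lasso(alpha=alpha, fit_intercept=False).fit(Y, X @ u).coef_
        if np.linalg.norm(Y @ v) == 0:
            break
        v /= np.linalg.norm(Y @ v)
    return u, v

# Toy usage: two blocks sharing one latent factor through a few variables each.
n = 200
z = np.random.default_rng(1).standard_normal(n)
X = np.outer(z, np.r_[np.ones(3), np.zeros(17)]) + np.random.default_rng(2).standard_normal((n, 20))
Y = np.outer(z, np.r_[np.ones(2), np.zeros(28)]) + np.random.default_rng(3).standard_normal((n, 30))
u, v = sparse_cca_pair(X, Y)
print(np.corrcoef(X @ u, Y @ v)[0, 1])   # estimated leading canonical correlation
```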
  4. We study the problem of out-of-sample risk estimation in the high dimensional regime where both the sample size n and number of features p are large, and n/p can be less than one. Extensive empirical evidence confirms the accuracy of leave-one-out cross validation (LO) for out-of-sample risk estimation. Yet, a unifying theoretical evaluation of the accuracy of LO in high-dimensional problems has remained an open problem. This paper aims to fill this gap for penalized regression in the generalized linear family. With minor assumptions about the data generating process, and without any sparsity assumptions on the regression coefficients, our theoretical analysis obtains finite sample upper bounds on the expected squared error of LO in estimating the out-of-sample error. Our bounds show that the error goes to zero as n,p→∞, even when the dimension p of the feature vectors is comparable with or greater than the sample size n. One technical advantage of the theory is that it can be used to clarify and connect some results from the recent literature on scalable approximate LO. 
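As a concrete instance of the kind of computation being studied, leave-one-out residuals for ridge regression (one member of the penalized family, used here only as an illustration; the shortcut below is a standard identity, not a result of the paper) can be obtained without refitting n times, via the leverage values of the regularized hat matrix.

```python
# Sketch: exact leave-one-out (LO) residuals for ridge regression via the
# leverage shortcut e_i = (y_i - x_i' beta_hat) / (1 - h_ii), checked against
# brute-force refitting. Used only to illustrate the object LO estimates.
import numpy as np

def ridge_loo_residuals(X, y, lam):
    """LO residuals for ridge with penalty lam, computed from one full fit."""
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)
    beta = np.linalg.solve(G, X.T @ y)
    h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(G), X)   # leverages h_ii
    return (y - X @ beta) / (1.0 - h)

def ridge_loo_brute_force(X, y, lam):
    """Refit n times, leaving out one observation each time."""
    n, p = X.shape
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        Xi, yi = X[mask], y[mask]
        beta_i = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
        out[i] = y[i] - X[i] @ beta_i
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 80))          # n < p, as in the regime discussed
y = X[:, :5] @ rng.standard_normal(5) + rng.standard_normal(50)
print(np.allclose(ridge_loo_residuals(X, y, 1.0),
                  ridge_loo_brute_force(X, y, 1.0)))   # True
```

Averaging the squared LO residuals gives the out-of-sample risk estimate whose accuracy the paper bounds, and the single-fit shortcut is the flavor of scalable approximate LO referenced at the end of the abstract.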