Participation in field experiences has been shown to increase students’ confidence, scientific identity, retention, and academic performance (Beltran et al., 2020; Zavaleta et al., 2020). This is particularly true for students from historically excluded groups in ecology and evolutionary biology (EEB). For the purposes of this paper and the novel funding program described herein, field experiences are learning and research opportunities in natural settings that provide students with hands-on, discipline-specific practice and experience (e.g., Morales et al., 2020).
Evolving constraints and rules in Harmonic Grammar
An evolutionary model of pattern learning in the MaxEnt OT/HG framework is described in which constraint induction and constraint weighting are consequences of reproduction with variation and differential fitness. The model is shown to fit human data from published experiments on both unsupervised phonotactic (Moreton et al., 2017) and supervised visual (Nosofsky et al., 1994) pattern learning, and to account for the observed reversal in difficulty order of exclusive-or vs. gang-effect patterns between the two experiments. Different parameter settings are shown to yield gradual, parallel, connectionist-like and abrupt, serial, symbolic-like performance.
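Purely as an illustration of the framework named in the abstract (and not of the paper's specific constraint-induction mechanism), the sketch below computes MaxEnt HG candidate probabilities from weighted constraint violations and then searches for weights with a toy reproduction-with-variation loop whose fitness is the data log-likelihood. All function names, the selection scheme, and the mutation step are assumptions.

```python
# Schematic sketch: MaxEnt Harmonic Grammar probabilities plus a toy
# evolutionary search over constraint weights. Illustrative only; the
# paper's actual model also induces constraints, which is not shown here.
import math
import random

def maxent_probs(weights, candidates):
    """candidates: {name: violation_vector}; returns MaxEnt probabilities."""
    harmonies = {c: -sum(w * v for w, v in zip(weights, viols))
                 for c, viols in candidates.items()}
    z = sum(math.exp(h) for h in harmonies.values())
    return {c: math.exp(h) / z for c, h in harmonies.items()}

def log_likelihood(weights, data):
    """data: list of (candidates, observed_winner) pairs."""
    return sum(math.log(maxent_probs(weights, cands)[winner])
               for cands, winner in data)

def evolve(data, n_constraints, pop_size=50, generations=200, sigma=0.1):
    """Toy search: reproduction with variation, fitness = log-likelihood."""
    pop = [[random.random() for _ in range(n_constraints)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda w: log_likelihood(w, data), reverse=True)
        parents = scored[:pop_size // 5]                      # truncation selection
        pop = [[max(0.0, w + random.gauss(0, sigma))          # mutate a parent copy
                for w in random.choice(parents)]
               for _ in range(pop_size)]
    return max(pop, key=lambda w: log_likelihood(w, data))

# Toy usage: two trials, two candidates each, winner named explicitly.
data = [({"A": [1, 0], "B": [0, 1]}, "A"),
        ({"A": [0, 2], "B": [1, 0]}, "B")]
best_weights = evolve(data, n_constraints=2)
```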
- Award ID(s): 1651105
- PAR ID: 10182723
- Date Published:
- Journal Name: Proceedings of the Society for Computation in Linguistics
- Volume: 3
- Page Range / eLocation ID: Article 8
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable effective cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied. In this paper, we pre-train a customized bilingual BERT, dubbed GigaBERT, that is designed specifically for Arabic NLP and English-to-Arabic zero-shot transfer learning. We study GigaBERT’s effectiveness on zero-shot transfer across four IE tasks: named entity recognition, part-of-speech tagging, argument role labeling, and relation extraction. Our best model significantly outperforms mBERT, XLM-RoBERTa, and AraBERT (Antoun et al., 2020) in both the supervised and zero-shot transfer settings. We have made our pre-trained models publicly available at: https://github.com/lanwuwei/GigaBERT. (A hedged usage sketch follows this list.)
- We introduce a continuous analogue of the Learning with Errors (LWE) problem, which we name CLWE. We give a polynomial-time quantum reduction from worst-case lattice problems to CLWE, showing that CLWE enjoys similar hardness guarantees to those of LWE. Alternatively, our result can also be seen as opening new avenues of (quantum) attacks on lattice problems. Our work resolves an open problem regarding the computational complexity of learning mixtures of Gaussians without separability assumptions (Diakonikolas 2016, Moitra 2018). As an additional motivation, (a slight variant of) CLWE was considered in the context of robust machine learning (Diakonikolas et al. FOCS 2017), where hardness in the statistical query (SQ) model was shown; our work addresses the open question regarding its computational hardness (Bubeck et al. ICML 2019). (A schematic sampler follows this list.)
- Video-based analysis-of-practice models have gained prominence in mathematics and science teacher education inservice professional learning. There is a growing body of evidence that these intensive professional learning (PL) models lead to positive impacts on teacher knowledge, classroom instructional practice, and student learning (Roth et al., 2018; Taylor et al., 2017), but they are expensive and difficult to sustain. An online version would have several benefits, allowing for greater reach to teachers and students across the country, but if online models were substantially less effective, then lower impacts would undercut the benefits of greater accessibility. We designed and studied a fully online version of the face-to-face Science Teachers Learning from Lesson Analysis (STeLLA) PL model (Roth et al., 2011; Roth et al., 2018; Taylor et al., 2017). We conducted a quasi-experimental study comparing online STeLLA to face-to-face STeLLA. Although we found no significant difference in elementary student learning between the online and face-to-face versions (p = .09), the effect size raises questions. Exploratory analyses suggest that the impact of online STeLLA on students is greater than the impact of a similar number of hours of traditional, face-to-face content deepening PL, but less than the impact of the full face-to-face STeLLA program. Differences in student populations, with higher percentages of students from racial and ethnic groups underserved by schools in the online STeLLA program, along with testing of the online STeLLA model during the pandemic, complicate interpretation of the findings.
- Authentication systems are vulnerable to model inversion attacks where an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack. This is because inverting a biometric model allows the attacker to produce a realistic biometric input to spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. (A generic model-inversion sketch follows this list.)
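For the GigaBERT entry above, here is a minimal, hedged sketch of the cross-lingual zero-shot recipe it describes: fine-tune a token-classification head on English NER data only, then run the same model on Arabic text. The checkpoint identifier below is an assumption (the released models are listed at https://github.com/lanwuwei/GigaBERT), and the fine-tuning step itself is elided.

```python
# Hedged sketch of English-to-Arabic zero-shot transfer with a bilingual BERT.
# The checkpoint name is an assumption; see https://github.com/lanwuwei/GigaBERT
# for the actual released models.
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "lanwuwei/GigaBERT-v4-Arabic-and-English"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=9)

# 1) Fine-tune `model` on English NER data (e.g., with the Trainer API) -- omitted here.
# 2) Zero-shot step: apply the English-fine-tuned model directly to Arabic input.
arabic_sentence = "..."  # an Arabic example sentence, elided
inputs = tokenizer(arabic_sentence, return_tensors="pt")
predicted_label_ids = model(**inputs).logits.argmax(dim=-1)
```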
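For the CLWE entry, the sketch below samples from a CLWE-style distribution as I understand its general shape: Gaussian public vectors y, a hidden unit direction w, and noisy labels z = γ⟨y, w⟩ + e mod 1. The exact parameterization and noise width should be checked against the paper; treat this as illustrative only.

```python
# Hedged sketch of CLWE-style samples (secret direction w on the unit sphere,
# Gaussian public vectors y, small Gaussian noise e, label z on the torus).
import numpy as np

def clwe_samples(w, gamma, beta, m, seed=None):
    """Draw m (y, z) pairs for a unit-norm secret w in R^n."""
    rng = np.random.default_rng(seed)
    n = len(w)
    y = rng.standard_normal((m, n))        # public Gaussian vectors
    e = beta * rng.standard_normal(m)      # small Gaussian noise (width beta, assumed)
    z = (gamma * y @ w + e) % 1.0          # labels on [0, 1)
    return y, z

# Example: a random unit secret in dimension 10.
rng = np.random.default_rng(0)
w = rng.standard_normal(10)
w /= np.linalg.norm(w)
ys, zs = clwe_samples(w, gamma=2.0, beta=0.01, m=5, seed=1)
```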
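Finally, for the model-inversion entry: the paper's "structured random with alignment loss" technique is not spelled out in the abstract, so the sketch below shows only a generic model-inversion setup for intuition, in which an attacker trains a decoder to map a target model's embeddings back to plausible inputs. All sizes and names are assumptions.

```python
# Generic model-inversion setup for intuition only -- NOT the paper's
# "structured random with alignment loss" technique.
import torch
import torch.nn as nn

embedding_dim, image_dim = 128, 64 * 64        # assumed template and image sizes

decoder = nn.Sequential(                        # attacker's inverse network
    nn.Linear(embedding_dim, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def inversion_step(target_model, images):
    """One step: reconstruct (N, 64, 64) images from the target's embeddings."""
    with torch.no_grad():
        embeddings = target_model(images)       # treated as black-box outputs
    recon = decoder(embeddings)
    loss = nn.functional.mse_loss(recon, images.flatten(1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```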