Abstract
With predictions of increased frequency of intense hurricanes, it is increasingly crucial to understand how biotic and abiotic components of forests will be affected. This study describes canopy arthropod responses to repeated experimental and natural canopy opening at the Luquillo Experimental Forest Long-Term Ecological Research (LTER) site in Puerto Rico. The canopy trimming experiment treatments were started in 2004 (CTE1), and a second trimming (CTE2) was conducted in 2014, to study the effects of increased hurricane frequency at the site. Paired disturbed plots with the canopy trimmed (trim) and undisturbed plots with no trimming (no trim) were replicated in three experimental blocks. Arthropods were sampled by bagging branches on seven representative early- and late-successional overstory and understory tree species annually from 2004 to 2009 for CTE1 and from 2015 to 2019 for CTE2. In addition to the experimental manipulation, the CTE site was disturbed by Hurricane Maria (Category 4) in September 2017, providing an additional natural canopy opening to the experiment. We evaluated the effect of the second experimental trimming, compared canopy arthropod responses to the three canopy-opening events, and compared the effects of experimental trimming and natural canopy opening by Hurricane Maria. The second experimental canopy trimming produced canopy arthropod responses consistent with hurricane disturbances: sap-sucking herbivores increased in abundance on the trimmed plots, whereas other functional groups generally declined in abundance in disturbed plots. Responses to the first and second trimmings were generally similar. However, Hurricane Maria exacerbated the responses, indicating the likely effect of increased hurricane frequency and intensity.
Doubly robust and heteroscedasticity-aware sample trimming for causal inference
Summary
A popular method for variance reduction in causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Crump et al., 2009; Yang & Ding, 2018), but in modern settings where heteroscedastic data are analysed with nonparametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are threefold. First, we describe novel procedures for selecting which units to trim. Our procedures differ from previous works in that we trim not only units with small propensities but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates. Instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well, and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed subpopulations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007-2008 National Health and Nutrition Examination Survey and on a semisynthetic Medicare dataset, and find promising results in all settings.
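The trimming rule and the downstream estimator can be pictured with a short sketch. This is not the paper's procedure: cross-fitting is omitted, the thresholds are ad hoc placeholders, and the variance criterion (the efficiency-bound integrand v1(x)/e(x) + v0(x)/(1 - e(x))) is only one plausible reading of "extreme conditional variances."

```python
# Hypothetical sketch: trim on both extreme propensities and extreme
# conditional variances, then estimate the ATE on the retained units
# with a standard doubly robust (AIPW) score. X, A, Y are numpy arrays
# with A in {0, 1}; thresholds eps and var_quantile are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def trimmed_aipw(X, A, Y, eps=0.05, var_quantile=0.95):
    # Nuisance estimates (no cross-fitting here, for brevity).
    e_hat = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
    mu1 = GradientBoostingRegressor().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[A == 0], Y[A == 0]).predict(X)

    # Conditional-variance proxies from squared residuals, one per arm.
    v1 = GradientBoostingRegressor().fit(X[A == 1], (Y[A == 1] - mu1[A == 1]) ** 2).predict(X)
    v0 = GradientBoostingRegressor().fit(X[A == 0], (Y[A == 0] - mu0[A == 0]) ** 2).predict(X)

    # Keep units with non-extreme propensities AND non-extreme variances.
    var_score = v1 / np.clip(e_hat, 1e-6, None) + v0 / np.clip(1 - e_hat, 1e-6, None)
    keep = (e_hat > eps) & (e_hat < 1 - eps) & (var_score <= np.quantile(var_score, var_quantile))

    # AIPW estimate of the ATE on the trimmed subpopulation,
    # plus the fraction of the sample retained.
    psi = mu1 - mu0 + A * (Y - mu1) / e_hat - (1 - A) * (Y - mu0) / (1 - e_hat)
    return psi[keep].mean(), keep.mean()
```

Varying `var_quantile` traces out the sample-size/variance trade-off that the paper's simultaneous confidence intervals are designed to navigate.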
- Award ID(s): 2143176
- PAR ID: 10585916
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Biometrika
- Volume: 112
- Issue: 2
- ISSN: 1464-3510
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- A multirotor trim module is developed for the HPCMP CREATE™-AV Helios rotorcraft simulation code. Trimmed free-flight simulation results are presented for two multirotor configurations, using rotor frequencies and aircraft attitudes as the control variables. The loose-coupling procedure is used to achieve trim, in which the aerodynamic loading on the rotor blades and fuselage is computed using a simplified aerodynamic model and corrected at each coupling iteration using the airloads computed by the higher-fidelity CFD-based aerodynamics. Two different optimization methods are tested: a least-squares regression algorithm, with the norm of the loads at the center of gravity as the objective function, and a nonlinear constrained optimization code, with the total power as the objective function and with constraints applied to satisfy trim. First, a commercial small-scale UAV is simulated in forward flight. A reference model for midscale UAM applications is then trimmed in hover to demonstrate the module's ability to model and trim a complex configuration. (A toy sketch of this kind of trim loop appears after this list.)
- Machine learning (ML) methods for causal inference have gained popularity due to their flexibility in predicting the outcome model and the propensity score. In this article, we provide a within-group approach for ML-based causal inference methods in order to robustly estimate average treatment effects (ATEs) in multilevel studies when there is cluster-level unmeasured confounding. We focus on one particular ML-based causal inference method, targeted maximum likelihood estimation (TMLE) with an ensemble learner called SuperLearner. Through our simulation studies, we observe that training TMLE within groups of similar clusters helps remove bias from cluster-level unmeasured confounders. Also, using within-group propensity scores estimated from fixed-effects logistic regression increases the robustness of the proposed within-group TMLE method. Even if the propensity scores are partially misspecified, the within-group TMLE still produces robust ATE estimates due to double robustness with flexible modeling, unlike parametric inverse propensity weighting methods. We demonstrate our proposed methods, and conduct sensitivity analyses with respect to the number of groups and individual-level unmeasured confounding, in evaluating the effect of taking an eighth-grade algebra course on math achievement in the Early Childhood Longitudinal Study. (A simplified sketch of the within-group idea appears after this list.)
- We propose a general method for constructing confidence sets and hypothesis tests that have finite-sample guarantees without regularity conditions. We refer to such procedures as "universal." The method is very simple and is based on a modified version of the usual likelihood-ratio statistic that we call the "split likelihood-ratio test" (split LRT) statistic. The (limiting) null distribution of the classical likelihood-ratio statistic is often intractable when used to test composite null hypotheses in irregular statistical models. Our method is especially appealing for statistical inference in these complex setups. The method we suggest works for any parametric model and also for some nonparametric models, as long as computing a maximum-likelihood estimator (MLE) is feasible under the null. Canonical examples arise in mixture modeling and shape-constrained inference, for which constructing tests and confidence sets has been notoriously difficult. We also develop various extensions of our basic methods. We show that, in settings where computing the MLE is hard, it suffices to upper-bound the maximum likelihood for the purpose of constructing valid tests and intervals. We investigate some conditions under which our methods yield valid inferences under model misspecification. Further, the split LRT can be used with profile likelihoods to deal with nuisance parameters, and it can also be run sequentially to yield anytime-valid P values and confidence sequences. Finally, when combined with the method of sieves, it can be used to perform model selection with nested model classes. (A minimal worked example of the split LRT appears after this list.)
- We propose the use of U-statistics to reduce variance for gradient estimation in importance-weighted variational inference. The key observation is that, given a base gradient estimator that requires m > 1 samples and a total of n > m samples available for estimation, lower variance is achieved by averaging the base estimator over overlapping batches of size m rather than over disjoint batches, as is currently done. We use classical U-statistic theory to analyze the variance reduction, and propose novel approximations with theoretical guarantees to ensure computational efficiency. We find empirically that U-statistic variance reduction can lead to modest to significant improvements in inference performance on a range of models, with little computational cost. (A numerical check of the overlapping-versus-disjoint comparison appears after this list.)
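The trim problem in the first related record can be illustrated in miniature. The following is a toy sketch, not the Helios implementation: a quadrotor-style placeholder with invented constants (k_t, k_q, target_thrust) stands in for the coupled simplified/CFD aerodynamics, and scipy's least_squares drives the residual forces and moments at the center of gravity to zero.

```python
# Toy trim loop: solve for four rotor speeds that zero the net loads at
# the center of gravity. `cg_loads` is a stand-in for the simplified
# aerodynamic model; in a loose-coupling scheme it would be corrected
# each iteration with CFD-computed airloads (not shown here).
import numpy as np
from scipy.optimize import least_squares

def cg_loads(omega, target_thrust=100.0, arm=1.0, k_t=1e-3, k_q=1e-5):
    thrust = k_t * omega ** 2                      # per-rotor thrust model
    fz = thrust.sum() - target_thrust              # vertical force residual
    mx = arm * (thrust[1] - thrust[3])             # roll moment residual
    my = arm * (thrust[2] - thrust[0])             # pitch moment residual
    mz = k_q * (omega[0]**2 - omega[1]**2 + omega[2]**2 - omega[3]**2)  # yaw torque
    return np.array([fz, mx, my, mz])

sol = least_squares(cg_loads, x0=np.full(4, 150.0))
print("trimmed rotor speeds:", sol.x)
print("residual CG loads:", sol.fun)
```

This mirrors the first optimization method in the abstract (minimizing the norm of the CG loads); the power-minimizing constrained variant would swap the objective and move the load balance into constraints.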
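The within-group idea in the second related record can be sketched as follows. This is a simplified stand-in, not the paper's method: plain AIPW replaces TMLE with SuperLearner, KMeans on cluster-level covariate means stands in for whatever grouping of similar clusters is used, and the column names (cluster, x1, x2, A, Y) are hypothetical.

```python
# Hypothetical sketch: partition clusters into groups of similar
# clusters, then estimate a treatment effect inside each group with a
# doubly robust score whose propensity model includes cluster dummies
# (fixed effects). Assumes every group contains both treated and
# control units.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression, LinearRegression

def within_group_ate(df, n_groups=3):
    # Group clusters by their covariate means (a simple similarity proxy).
    cmeans = df.groupby("cluster")[["x1", "x2"]].mean()
    groups = pd.Series(KMeans(n_clusters=n_groups, n_init=10).fit_predict(cmeans),
                       index=cmeans.index)
    df = df.assign(group=df["cluster"].map(groups))

    estimates = []
    for _, g in df.groupby("group"):
        # Fixed-effects propensity model: covariates plus cluster dummies.
        Xp = pd.get_dummies(g[["x1", "x2", "cluster"]], columns=["cluster"])
        e = LogisticRegression(max_iter=1000).fit(Xp, g["A"]).predict_proba(Xp)[:, 1]
        e = np.clip(e, 0.01, 0.99)
        # Outcome regressions per arm (plain linear models for brevity).
        Xo = g[["x1", "x2"]]
        mu1 = LinearRegression().fit(Xo[g["A"] == 1], g.loc[g["A"] == 1, "Y"]).predict(Xo)
        mu0 = LinearRegression().fit(Xo[g["A"] == 0], g.loc[g["A"] == 0, "Y"]).predict(Xo)
        # Doubly robust (AIPW) score within the group.
        psi = mu1 - mu0 + g["A"] * (g["Y"] - mu1) / e - (1 - g["A"]) * (g["Y"] - mu0) / (1 - e)
        estimates.append((psi.mean(), len(g)))
    # Combine group estimates, weighting by group sample size.
    return sum(m * n for m, n in estimates) / sum(n for _, n in estimates)
```

The point of the grouping is that a cluster-level unmeasured confounder is approximately constant within a group of similar clusters, so within-group estimation absorbs much of its bias.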
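The split LRT in the third related record is simple enough to show in full for a toy Gaussian case (H0: mean zero, known unit variance). A minimal sketch: fit the alternative's MLE on one half of the data, evaluate the likelihood ratio on the other half, and reject when it exceeds 1/alpha; Markov's inequality gives finite-sample validity.

```python
# Minimal split likelihood-ratio test for H0: mu = 0 with N(mu, 1) data.
import numpy as np
from scipy.stats import norm

def split_lrt_reject(y, alpha=0.05, seed=0):
    # Randomly split the sample into two halves, D0 and D1.
    idx = np.random.default_rng(seed).permutation(len(y))
    d0, d1 = y[idx[: len(y) // 2]], y[idx[len(y) // 2 :]]
    mu_hat = d1.mean()  # MLE under the alternative, computed on D1 only
    # Split LRT statistic: likelihood of D0 at mu_hat over likelihood at
    # the null value mu = 0 (the null-constrained MLE is trivially 0 here).
    log_ratio = norm.logpdf(d0, loc=mu_hat).sum() - norm.logpdf(d0, loc=0.0).sum()
    # Under H0 the ratio has expectation <= 1, so by Markov's inequality
    # rejecting when it exceeds 1/alpha is a valid level-alpha test.
    return log_ratio > np.log(1.0 / alpha)

y = np.random.default_rng(1).normal(loc=0.5, size=200)
print("reject H0: mu = 0?", split_lrt_reject(y))
```

Because the MLE comes from the independent half D1, no regularity conditions on the model are needed; for a composite null one would replace the fixed null value with the null-constrained MLE computed on D0.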
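The variance-reduction claim in the last related record can be checked numerically. A minimal sketch, with an arbitrary nonlinear batch statistic standing in for an importance-weighted gradient estimate: average the base estimator over n/m disjoint batches versus over all size-m subsets (the complete U-statistic), and compare sampled variances.

```python
# Toy check: averaging a batch-of-m estimator over overlapping batches
# (a U-statistic) has lower variance than averaging over disjoint batches.
import numpy as np
from itertools import combinations

def base_estimator(batch):
    return np.log(np.mean(np.exp(batch)))  # nonlinear in the batch

def disjoint_avg(x, m):
    return np.mean([base_estimator(b) for b in x.reshape(-1, m)])

def overlapping_avg(x, m):
    # All size-m subsets: the complete U-statistic (fine for small n;
    # the paper's approximations handle the general case efficiently).
    return np.mean([base_estimator(x[list(s)]) for s in combinations(range(len(x)), m)])

rng = np.random.default_rng(0)
n, m, reps = 12, 3, 2000
draws = rng.normal(size=(reps, n))
print("disjoint var:   ", np.var([disjoint_avg(x, m) for x in draws]))
print("overlapping var:", np.var([overlapping_avg(x, m) for x in draws]))
```

Both averages are unbiased for the same quantity, and the U-statistic is the symmetrized version of the disjoint average, so Rao-Blackwellization guarantees its variance is no larger.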