

Title: A Central-Limit-Theorem Version of the Periodic Little's Law
We establish a central-limit-theorem (CLT) version of the periodic Little's law (PLL) in discrete time, which complements the sample-path and stationary versions of the PLL we recently established, motivated by data analysis of a hospital emergency department. Our new CLT version of the PLL extends previous CLT versions of the LL. As with the LL, the CLT version of the PLL is useful for statistical applications.
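For context, a common statement of the discrete-time PLL reads as follows; this is a sketch, with the period d, arrival rates, and waiting times assumed here rather than taken from this record:

```latex
% Hedged sketch of the discrete-time periodic Little's law (PLL).
% Assumed notation (not from this record): d = period length,
% \lambda_k = arrival rate in period k, W_k = waiting time of an arrival
% in period k, L_k = time-average number in system in period k,
% with all period indices interpreted modulo d.
\[
  L_k \;=\; \sum_{j=0}^{\infty} \lambda_{k-j}\, P\!\left(W_{k-j} > j\right),
  \qquad k = 0, 1, \ldots, d-1 .
\]
```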
Award ID(s): 1634133
NSF-PAR ID: 10109518
Author(s) / Creator(s):
Date Published:
Journal Name: Queueing Systems
Volume: 91
ISSN: 0257-0130
Page Range / eLocation ID: 15-47
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

Using the global Lagrangian version of the piecewise parabolic method‐magnetohydrodynamic (PPMLR‐MHD) model, we simulate two consecutive storms in December 2015, a moderate storm on 14–15 December and a strong storm on 19–22 December, and calculate the radial diffusion coefficients (D_LL) from the simulated ultralow frequency waves. We find that even though the strong storm leads to more enhanced B_z and E_φ power than the moderate storm, the two storms share many features in common in the azimuthal mode structure and power spectrum of ultralow frequency waves. For both storms, the total B_z and E_φ power is better correlated with the solar wind dynamic pressure in the storm initial phase and more correlated with the AE index in the recovery phase. B_z wave power is shown to be mostly distributed in low mode numbers, while E_φ power spreads over a wider range of modes. Furthermore, the B_z and E_φ power spectral densities are found to be higher at higher L regions, with a stronger L dependence in the B_z spectra. The estimated D_LL based on MHD fields shows that inside the magnetopause, the contribution from electric fields is larger than or comparable to that from magnetic fields, and our event-specific MHD-based D_LL can be smaller than some previous empirical D_LL estimations by more than an order of magnitude. Finally, by validating against in situ observations from the Magnetospheric Multiscale spacecraft, our MHD results are found to reproduce well the total B_z fields and wave power for both storms, while the E_φ power is underestimated in the MHD simulations.
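The abstract does not restate how D_LL follows from the simulated wave power; studies of this kind commonly use the Fei et al. (2006) expressions, sketched here with assumed notation (none of the symbols below are taken from the abstract itself):

```latex
% Hedged sketch: Fei et al. (2006)-style radial diffusion coefficients.
% Assumed notation: P^E_m, P^B_m = power spectral densities of the mode-m
% azimuthal electric field and compressional magnetic field, evaluated at
% the drift frequency m*\omega_d; B_E = equatorial field at Earth's
% surface; R_E = Earth radius; \mu = first adiabatic invariant;
% q = charge; \gamma = relativistic factor.
\[
  D_{LL}^{E} = \frac{L^{6}}{8\,B_E^{2}R_E^{2}}
      \sum_{m} P^{E}_{m}(m\,\omega_d),
  \qquad
  D_{LL}^{B} = \frac{\mu^{2}\,L^{4}}{8\,q^{2}\gamma^{2}B_E^{2}R_E^{4}}
      \sum_{m} m^{2}\,P^{B}_{m}(m\,\omega_d).
\]
```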

     
  2. This paper presents a new and practical approach to lock-free locks based on helping, which allows the user to write code using fine-grained locks but run it in a lock-free manner. Although lock-free locks have been suggested in the past, they are widely viewed as impractical, have some key limitations, and, as far as we know, have never been implemented. The paper presents some key techniques that make lock-free locks practical and more general. The most important technique is an approach to idempotence, i.e., making code that runs multiple times appear as if it ran once. The idea is based on using a shared log among processes running the same protected code. Importantly, the approach can be library-based, requiring little if any change to standard code: code just needs to use the idempotent versions of memory operations (load, store, LL/SC, allocation, free). We have implemented a C++ library called Flock based on these ideas. Flock allows lock-based data structures to run in either lock-free or blocking (traditional locks) mode. We implemented a variety of tree and list-based data structures with Flock and compare the performance of the lock-free and blocking modes under a variety of workloads. The lock-free mode is almost as fast as the blocking mode under almost all workloads, and significantly faster when threads are oversubscribed (more threads than processors). We also compare with several existing lock-based and lock-free alternatives.
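To make the shared-log idempotence idea concrete, the toy sketch below shows the replay mechanism in Python (chosen here for brevity; Flock itself is C++, and none of these names are its API). Each side-effecting operation commits its result to a log shared by all runs of the protected code, so a rerun replays logged results instead of re-executing effects:

```python
# Toy single-threaded sketch of idempotence via a shared log (illustration
# only; this is not the Flock API). In a real lock-free implementation,
# concurrent helpers claim each log slot with an atomic compare-and-swap,
# so all runs of the protected code agree on a single outcome.

class Log:
    """Append-only log shared by every run of one protected code block."""
    def __init__(self):
        self.entries = []

class Run:
    """One (re-)execution of the protected code, replaying the shared log."""
    def __init__(self, log):
        self.log = log
        self.pos = 0  # index of the next operation in the log

    def commit(self, compute):
        # The first run to reach this operation executes it and logs the
        # result; later runs replay the log instead of re-executing.
        if self.pos == len(self.log.entries):
            self.log.entries.append(compute())
        value = self.log.entries[self.pos]
        self.pos += 1
        return value

def protected_code(run, counter):
    """A tiny 'critical section': read a counter, write back an increment."""
    old = run.commit(lambda: counter["value"])
    run.commit(lambda: counter.__setitem__("value", old + 1))
    return old

log, counter = Log(), {"value": 41}
first = protected_code(Run(log), counter)   # executes and logs each step
second = protected_code(Run(log), counter)  # helper rerun: replays the log
assert first == second == 41 and counter["value"] == 42  # appears to run once
```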
  3. Using GUI-based workflows for data analysis is an iterative process. During each iteration, an analyst makes changes to the workflow to improve it, generating a new version each time. The results produced by executing these versions are materialized to help users refer to them in the future. In many cases, a new version of the workflow, when submitted for execution, produces a result equivalent to that of a previous one. Identifying such equivalence can save computational resources and time by reusing the materialized result. One way to optimize the performance of executing a new version is to compare the current version with a previous one and test if they produce the same results using a workflow version equivalence verifier. As the number of versions grows, this testing can become a computational bottleneck. In this paper, we present Raven, an optimization framework to accelerate the execution of a new version request by detecting and reusing the results of previous equivalent versions with the help of a version equivalence verifier. Raven ranks and prunes the set of prior versions to quickly identify those that may produce an equivalent result to the version execution request. Additionally, when the verifier performs computation to verify the equivalence of a version pair, there may be a significant overlap with previously tested version pairs. Raven identifies and avoids such repeated computations by extending the verifier to reuse previous knowledge of equivalence tests. We evaluated the effectiveness of Raven compared to baselines on real workflows and datasets. 
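As a rough sketch of the reuse pipeline described above (the function names, similarity heuristic, and verifier are assumptions for illustration, not Raven's actual interface): rank prior versions by a cheap similarity score, prune to the top candidates, memoize verifier verdicts, and fall back to executing the workflow only when no equivalent version is found.

```python
# Hedged sketch of result reuse with a memoized equivalence verifier.
# All names and the similarity measure are assumptions, not Raven's API.

def similarity(v1, v2):
    """Cheap proxy: Jaccard overlap of the operators in two workflows."""
    a, b = set(v1["ops"]), set(v2["ops"])
    return len(a & b) / max(len(a | b), 1)

def run_workflow(version):
    """Placeholder for actually executing the workflow version."""
    return f"result-of-{version['id']}"

def execute(new, history, verify, cache, top_k=3):
    # Rank prior versions and prune to the top-k most similar candidates.
    ranked = sorted(history, key=lambda old: -similarity(new, old))
    for old in ranked[:top_k]:
        key = (new["id"], old["id"])
        if key not in cache:               # reuse knowledge of earlier tests
            cache[key] = verify(new, old)  # expensive equivalence check
        if cache[key]:
            return old["result"]           # reuse the materialized result
    result = run_workflow(new)             # no equivalent prior version
    history.append(dict(new, result=result))
    return result

# Example: a trivial verifier that calls two versions equivalent when they
# contain exactly the same operators.
history, cache = [], {}
verify = lambda a, b: set(a["ops"]) == set(b["ops"])
v1 = {"id": 1, "ops": ["scan", "filter", "join"]}
v2 = {"id": 2, "ops": ["join", "filter", "scan"]}
print(execute(v1, history, verify, cache))  # result-of-1 (executed)
print(execute(v2, history, verify, cache))  # result-of-1 (reused)
```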
  4. Abstract

    Background

    As technology moves rapidly forward and our world becomes more interconnected, we are seeing increases in the complexity and challenge associated with scientific problems. More than ever before, scientists will need to be resilient and able to cope with challenges and failures en route to success. However, we still understand relatively little about how these skills manifest in STEM contexts broadly, and how they are developed by STEM undergraduate students. While recent studies have begun to explore this area, no measures exist that are specifically designed to assess coping behaviors in STEM undergraduate contexts at scale. Fortunately, multiple measures of coping do exist and have been previously used in more general contexts. Drawing strongly from items used in the COPE and Brief COPE, we gathered a pool of items anticipated to be good measures of undergraduate students’ coping behaviors in STEM. We tested the validity of these items for use with STEM students using exploratory factor analyses, confirmatory factor analyses, and cognitive interviews. In particular, our confirmatory factor analyses and cognitive interviews explored whether the items measured coping for persons excluded due to ethnicity or race (PEERs).

    Results

    Our analyses revealed two versions of what we call the STEM-COPE instrument that accurately measure several dimensions of coping for undergraduate STEM students. One version is more fine-grained. We call this the Coping Behaviors version, since it is more specific in its description of coping actions. The other contains some specific scales and two omnibus scales that describe what we call challenge-engaging and challenge-avoiding coping. This version is designated the Coping Styles version. We confirmed that both versions can be used reliably in PEER and non-PEER populations.

    Conclusions

    The final products of our work are two versions of the STEM-COPE. Each version measures several dimensions of coping that can be used in individual classrooms or across contexts to assess STEM undergraduate students’ coping with challenges or failures. Each version can be used as a whole, or individual scales can be adopted and used for more specific studies. This work also highlights the need to either develop or adapt other existing measures for use with undergraduate STEM students, and more specifically, for use with sub-populations within STEM who have been historically marginalized or minoritized.

     
  5. The matrix completion problem seeks to recover a $d\times d$ ground truth matrix of low rank $r\ll d$ from observations of its individual elements. Real-world matrix completion is often a huge-scale optimization problem, with $d$ so large that even the simplest full-dimension vector operations with $O(d)$ time complexity become prohibitively expensive. Stochastic gradient descent (SGD) is one of the few algorithms capable of solving matrix completion on a huge scale, and can also naturally handle streaming data over an evolving ground truth. Unfortunately, SGD experiences a dramatic slow-down when the underlying ground truth is ill-conditioned; it requires at least $O(\kappa\log(1/\epsilon))$ iterations to get $\epsilon$-close to a ground truth matrix with condition number $\kappa$. In this paper, we propose a preconditioned version of SGD that preserves all the favorable practical qualities of SGD for huge-scale online optimization while also making it agnostic to $\kappa$. For a symmetric ground truth and the Root Mean Square Error (RMSE) loss, we prove that the preconditioned SGD converges to $\epsilon$-accuracy in $O(\log(1/\epsilon))$ iterations, with a rapid linear convergence rate as if the ground truth were perfectly conditioned with $\kappa=1$. In our numerical experiments, we observe a similar acceleration for ill-conditioned matrix completion under the 1-bit cross-entropy loss, as well as pairwise losses such as the Bayesian Personalized Ranking (BPR) loss.
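The abstract does not spell out the preconditioner. A plausible form, sketched below as an assumption rather than the paper's stated method, right-multiplies each stochastic gradient by $(U^\top U)^{-1}$ for a symmetric factorization $M \approx U U^\top$, which rescales the update so that weak directions of an ill-conditioned factor move as fast as strong ones:

```python
# Hedged NumPy sketch of preconditioned SGD for symmetric matrix completion,
# M ~= U @ U.T with U of shape (d, r). The (U.T @ U)^{-1} preconditioner is
# an assumption used for illustration; a true huge-scale implementation
# would maintain this small r-by-r matrix incrementally rather than
# recomputing it at every step.
import numpy as np

def preconditioned_sgd_step(U, i, j, m_ij, lr=0.05):
    """SGD step on the error (u_i . u_j - m_ij)^2; constants folded into lr."""
    res = U[i] @ U[j] - m_ij              # residual on observed entry (i, j)
    P = np.linalg.inv(U.T @ U)            # r x r preconditioner
    gi, gj = res * (U[j] @ P), res * (U[i] @ P)
    U[i] -= lr * gi
    U[j] -= lr * gj

rng = np.random.default_rng(0)
d, rank = 100, 3
Ustar = rng.normal(size=(d, rank)) * [10.0, 1.0, 0.1]  # ill-conditioned factor
M = Ustar @ Ustar.T
U = rng.normal(size=(d, rank))
for _ in range(200_000):                  # simulated stream of observed entries
    i, j = rng.integers(d), rng.integers(d)
    preconditioned_sgd_step(U, i, j, M[i, j])
# Relative error; it should shrink over the stream despite the conditioning.
print(np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```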