
The nonstationary Poisson process (NSPP) is a workhorse tool for modeling and simulating arrival processes with time-dependent rates. In many applications only a single sequence of arrival times is observed. While one sample path is sufficient for estimating the arrival rate or integrated rate function of the process—as we illustrate in this paper—we show that testing for Poissonness is, in the general case, futile. In other words, when only a single sequence of arrival data is observed, one can fit an NSPP to it, but the choice of “NSPP” can only be justified by an understanding of the underlying process physics, or a leap of faith, not by testing the data. This result suggests the need for sensitivity analysis when such a model is used to generate arrivals in a simulation.
Bae, K-H; Feng, B; Kim, S; Lazarova-Molnar, S; Zheng, Z; Roeder, T; Thiesing, R
Journal Name: Proceedings of the Winter Simulation Conference
Sponsoring Org: National Science Foundation
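For readers simulating such arrivals, the standard way to generate an NSPP sample path from a fitted rate function λ(t) is Lewis–Shedler thinning; a minimal sketch, with an illustrative sinusoidal rate function (not one from the paper):

```python
import math
import random

def nspp_thinning(rate, rate_max, horizon, rng=random.Random(0)):
    """Generate NSPP arrival times on [0, horizon) by thinning a
    homogeneous Poisson process of rate rate_max."""
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)           # candidate arrival
        if t >= horizon:
            return arrivals
        if rng.random() < rate(t) / rate_max:    # keep with prob lambda(t)/lambda_max
            arrivals.append(t)

# Illustrative time-dependent rate: oscillates between 0 and 10 arrivals/unit time.
lam = lambda t: 5.0 + 5.0 * math.sin(t)
times = nspp_thinning(lam, rate_max=10.0, horizon=100.0)
```

Thinning requires only an upper bound `rate_max` on λ(t); the accepted points then follow the nonstationary rate exactly.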
More Like this
  1. Biological research often involves testing a growing number of null hypotheses as new data are accumulated over time. We study the problem of online control of the familywise error rate, that is, testing an a priori unbounded sequence of hypotheses (p-values) one by one over time without knowing the future, such that with high probability there are no false discoveries in the entire sequence. This paper unifies algorithmic concepts developed for offline (single-batch) familywise error rate control and online false discovery rate control to develop novel online familywise error rate control methods. Though many offline familywise error rate methods (e.g., Bonferroni, fallback procedures, and Sidak’s method) can trivially be extended to the online setting, our main contribution is the design of new, powerful, adaptive online algorithms that control the familywise error rate when the p-values are independent or locally dependent in time. Our numerical experiments demonstrate substantial gains in power, which are also formally proved in an idealized Gaussian sequence model. A promising application to the International Mouse Phenotyping Consortium is described.
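The simplest online baseline such methods build on is an alpha-spending (online Bonferroni) rule: split the total error budget α across the unbounded stream, e.g. α_i = α/2^i, so the levels sum to at most α. A minimal sketch of this baseline only, not the authors' adaptive algorithms:

```python
def online_bonferroni(pvalues, alpha=0.05):
    """Online FWER control: test p-values one at a time, rejecting the
    i-th hypothesis iff p_i <= alpha / 2**i (levels sum to <= alpha)."""
    rejections = []
    for i, p in enumerate(pvalues, start=1):
        rejections.append(p <= alpha / 2 ** i)
    return rejections

# Example stream: the first and third hypotheses carry strong signals.
decisions = online_bonferroni([0.001, 0.2, 0.004, 0.04])
```

Because each decision uses only past and present p-values, the rule works without knowing how many hypotheses will eventually arrive; its rapidly shrinking levels are exactly the power loss the paper's adaptive methods address.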
  2. Abstract: Pulsar timing is a process of iteratively fitting pulse arrival times to constrain the spindown, astrometric, and possibly binary parameters of a pulsar, by enforcing integer numbers of pulsar rotations between the arrival times. Phase connection is the process of unambiguously determining those rotation numbers between the times of arrival while determining a pulsar timing solution. Pulsar timing currently requires a manual process of step-by-step phase connection performed by individuals. In an effort to quantify and streamline this process, we created the Algorithmic Pulsar Timer (APT), an algorithm that can accurately phase connect and time isolated pulsars. Using the statistical F-test and knowledge of parameter uncertainties and covariances, the algorithm decides what new data to include in a fit, when to add additional timing parameters, and which model to attempt in subsequent iterations. Using these tools, the algorithm can phase-connect timing data that previously required substantial manual effort. We tested the algorithm on 100 simulated systems, with a 99% success rate. APT combines statistical tests and techniques with a logical decision-making process, very similar to the manual one used by pulsar astronomers for decades, and some computational brute force, to automate the often tricky process of isolated-pulsar phase connection, setting the foundation for automated fitting of binary pulsar systems.
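The model-selection step described above rests on the standard nested-model F-test: compare the chi-squared drop from adding a timing parameter against the remaining residual variance. A generic sketch of that statistic (not the APT code; the chi-squared values below are made up for illustration):

```python
def f_statistic(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """F-statistic comparing nested least-squares timing models;
    a large value favors keeping the extra parameter(s)."""
    delta_chi2 = chi2_simple - chi2_complex   # improvement from extra parameters
    delta_dof = dof_simple - dof_complex      # number of parameters added
    return (delta_chi2 / delta_dof) / (chi2_complex / dof_complex)

# Hypothetical fit: adding one spindown term drops chi^2 from 250 to 98.
F = f_statistic(chi2_simple=250.0, dof_simple=100, chi2_complex=98.0, dof_complex=99)
```

In practice the statistic would be compared against an F-distribution quantile (e.g. via `scipy.stats.f`) to decide whether the extra parameter is justified.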
  3. Optical projection tomography (OPT) is a powerful imaging modality for attaining high-resolution absorption and fluorescence imaging in tissue samples and embryos with a diameter of roughly 1 mm. Beyond this 1 mm limit, scattered light becomes the dominant fraction detected, adding significant “blur” to OPT. Time-domain OPT has been used to select out early-arriving photons that have taken a more direct route through the tissue, reducing detection of the scattered photons that cause image-domain blur in these larger samples [1]. In addition, our group recently demonstrated that detection of scattered photons can be further suppressed by running in a “deadtime” regime, where laser repetition rates are selected such that the deadtime incurred by early-arriving photons acts as a shutter to later-arriving scattered photons [2]. By running in this deadtime regime, far greater early-photon count rates are achievable than with standard early-photon OPT. In this work, another advantage of this enhanced early-photon collection approach is demonstrated: specifically, a significant improvement in signal-to-noise ratio (SNR). In single-photon counting detectors, the main source of noise is “afterpulsing,” which is essentially leftover charge from a detected photon that spuriously results in a second photon count. When the arrivals of the photons are time-stamped by the time-correlated single photon counting (TCSPC) module, the rate constant governing afterpulsing is slow compared to the time scale of the detected light pulse, so it is observed as a background signal with very little time correlation. This signal is present in all time gates and so adds noise to the detection of early photons.
However, since the afterpulsing signal is proportional to the total rate of photon detection, our enhanced early-photon approach is uniquely able to increase early-photon counts with no appreciable increase in afterpulsing, because the overall count rate does not change: as the rate of early-photon detection goes up, the rate of late-photon detection decreases commensurately, yielding no net change in the overall rate of photons detected. This hypothesis was tested on a 4 mm diameter tissue-mimicking phantom (μa = 0.02 mm⁻¹, μs′ = 1 mm⁻¹) by varying the power of a 10 MHz pulsed 780-nm laser with pulse spread of <100 fs (Calmar, USA), using an avalanche photodiode (MPD, Picoquant, Germany) and a TCSPC module (HydraHarp, Picoquant, Germany) for light detection. Details of the results are in Fig. 1a; of note, we observed more than a 60-fold improvement in SNR compared to conventional early-photon detection, which would have taken 1000 times longer to achieve the same early-photon count. A demonstration of the achievable resolution is in Fig. 1b, an image of a 4-mm-thick human breast cancer biopsy in which tumor spiculations of less than 100 μm diameter are observable. [1] Fieramonti, L. et al., PLoS ONE (2012). [2] Sinha, L. et al., Optics Letters (2016).
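The noise argument above can be made concrete with a simple shot-noise model: early-photon counts S sit on a time-uncorrelated afterpulsing background B that is proportional to the total count rate, giving SNR = S / sqrt(S + B). A back-of-the-envelope sketch (the counts are illustrative placeholders, not the measured values):

```python
import math

def snr(early_counts, afterpulse_background):
    """Shot-noise-limited SNR for early-photon counts on a
    time-uncorrelated afterpulsing background."""
    return early_counts / math.sqrt(early_counts + afterpulse_background)

# Conventional early-photon gating: few early counts over the background.
conventional = snr(early_counts=1e3, afterpulse_background=1e4)
# Deadtime regime: ~1000x more early counts; total rate, and hence the
# afterpulsing background, is unchanged.
deadtime = snr(early_counts=1e6, afterpulse_background=1e4)
```

Because the background stays fixed while the signal grows, the SNR gain in this toy model is nearly linear in the early-count increase, consistent in spirit with the large improvement reported.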
  4. We consider the problem of accurately recovering a matrix B of size M by M, which represents a probability distribution over M^2 outcomes, given access to an observed matrix of "counts" generated by taking independent samples from the distribution B. How can structural properties of the underlying matrix B be leveraged to yield computationally efficient and information-theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime? This basic problem lies at the core of a number of questions currently being considered by different communities, including building recommendation systems and collaborative filtering in the sparse data regime, community detection in sparse random graphs, learning structured models such as topic models or hidden Markov models, and the efforts of the natural language processing community to compute "word embeddings". Many aspects of this problem---both in terms of learning and property testing/estimation, and on both the algorithmic and information-theoretic sides---remain open. Our results apply to the setting where B has a low-rank structure. For this setting, we propose an efficient (and practically viable) algorithm that accurately recovers the underlying M by M matrix using O(M) samples (where we assume the rank is a constant). This linear sample complexity is optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix (such as whether it has rank 1 or 2) requires Omega(M) samples. Additionally, we provide an even stronger lower bound showing that distinguishing whether a sequence of observations was drawn from the uniform distribution over M observations versus being generated by a well-conditioned hidden Markov model with two hidden states requires Omega(M) observations, while our positive results for recovering B immediately imply that O(M) observations suffice to learn such an HMM.
This lower bound precludes sublinear-sample hypothesis tests for basic properties, such as identity or uniformity, as well as sublinear-sample estimators for quantities such as the entropy rate of HMMs.
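For intuition in the simplest case: when B is exactly rank 1 it factors as an outer product, and the normalized row and column marginals of the count matrix already estimate the factors. A pure-Python sketch of this rank-1 special case (the paper's algorithm handles general constant rank and is not reproduced here):

```python
def rank1_estimate(counts):
    """Rank-1 reconstruction of a probability matrix from a count matrix:
    the outer product of the normalized row and column marginals."""
    n = sum(sum(row) for row in counts)
    row_marg = [sum(row) / n for row in counts]
    col_marg = [sum(row[j] for row in counts) / n
                for j in range(len(counts[0]))]
    return [[r * c for c in col_marg] for r in row_marg]

# Example: counts drawn from a product distribution over 2x2 outcomes.
B_hat = rank1_estimate([[50, 50], [25, 25]])
```

Each marginal is estimated from all M row (or column) sums at once, which is the mechanism behind sample complexity scaling with M rather than M^2.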
  5. Keywords: Green wireless networks; Wake-up radio; Energy harvesting; Routing; Markov decision process; Reinforcement learning.
Abstract: This paper presents G-WHARP (Green Wake-up and HARvesting-based energy-Predictive forwarding), a wake-up radio-based forwarding strategy for wireless networks equipped with energy harvesting capabilities (green wireless networks). Following a learning-based approach, G-WHARP blends energy harvesting and wake-up radio technology to maximize energy efficiency and obtain superior network performance. Nodes autonomously decide on their forwarding availability based on a Markov decision process (MDP) that takes into account a variety of energy-related aspects, including the currently available energy and that harvestable in the foreseeable future. The MDP is solved by a computationally light heuristic based on a simple threshold policy, thus obtaining further computational energy savings. The performance of G-WHARP is evaluated via GreenCastalia simulations, where we accurately model wake-up radios, harvestable energy, and the computational power needed to solve the MDP. Key network and system parameters are varied, including the source of harvestable energy, the network density, the wake-up radio data rate, and the data traffic. We also compare the performance of G-WHARP to that of two state-of-the-art data forwarding strategies, GreenRoutes and CTP-WUR. Results show that G-WHARP limits energy expenditures while achieving low end-to-end latency and high packet delivery ratio. In particular, it consumes up to 34% and 59% less energy than CTP-WUR and GreenRoutes, respectively.
From the introduction: With 14.2 billion connected things in 2019, over 41.6 billion expected by 2025, and total spending on endpoints and services expected to reach well over $1.1 trillion by the end of 2026, the Internet of Things (IoT) is poised to have a transformative impact on the way we live and work [1–3]. The vision of this ‘‘connected continuum’’ of objects and people, however, comes with a wide variety of challenges, especially for those IoT networks whose devices rely on some form of depletable energy supply. This has prompted research on hardware and software solutions aimed at decreasing the dependence of devices on ‘‘pre-packaged’’ energy provision (e.g., batteries), leading to devices capable of harvesting energy from the environment, and to networks – often called green wireless networks – whose lifetime is virtually infinite. Despite the promising advances of energy harvesting technologies, IoT devices are still doomed to run out of energy due to their inherent constraints on resources such as storage, processing, and communication, whose energy requirements often exceed what harvesting can provide. The communication circuitry of prevailing radio technology, in particular, consumes a significant amount of energy even when idle, i.e., even when no transmissions or receptions occur. Even duty cycling, namely operating with the radio in a low-energy-consumption (sleep) mode for pre-set amounts of time, has been shown to only mildly alleviate the problem of making IoT devices durable [4]. An effective way to eliminate all forms of energy consumption not directly related to communication (e.g., idle listening) is provided by ultra-low-power radio triggering techniques, also known as wake-up radios [5,6]. Wake-up radio-based networks allow devices to remain in sleep mode, with their main radio turned off, when no communication is taking place. Devices continuously listen for a trigger on their wake-up radio, namely a wake-up sequence, to activate their main radio and participate in communication tasks. Therefore, devices wake up and turn their main radio on only when data communication is requested by a neighboring device. Further energy savings can be obtained by restricting the number of neighboring devices that wake up when triggered. This is achieved by allowing devices to wake up only when they receive specific wake-up sequences, which correspond to particular protocol requirements, including distance from the destination, current energy status, residual energy, etc. This form of selective awakening is called semantic addressing [7]. Use of low-power wake-up radio with semantic addressing has been shown to remarkably reduce the dominating energy costs of communication and idle listening of traditional radio networking [7–12]. This paper contributes to the research on enabling green wireless networks for long-lasting IoT applications.
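The threshold policy that approximates the MDP solution can be sketched schematically: a node advertises forwarding availability only if its current energy plus predicted harvest, net of transmission cost, clears a threshold. The energy model and threshold below are illustrative placeholders, not G-WHARP's calibrated values:

```python
def forwarding_decision(current_energy, predicted_harvest, tx_cost,
                        threshold=0.5):
    """Threshold policy approximating the MDP solution: advertise
    availability only if energy after one transmission stays above
    a fraction `threshold` of battery capacity (normalized to 1.0)."""
    post_tx = current_energy + predicted_harvest - tx_cost
    return post_tx >= threshold

# A node at 40% charge, expecting a 20% solar top-up, accepts a 5% tx cost.
available = forwarding_decision(0.4, 0.2, 0.05)
```

Evaluating one inequality per decision epoch, instead of solving the MDP online, is what makes the heuristic cheap enough that its own computation does not erode the energy savings.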