- Journal Name: Monthly Notices of the Royal Astronomical Society
- Page Range / eLocation ID: 1137 to 1148
- Sponsoring Org: National Science Foundation

More Like this
Skateboarding as a method of transportation has become prevalent, which has increased the occurrence and likelihood of pedestrian–skateboarder collisions and near-collision scenarios in shared-use roadway areas. Collisions between pedestrians and skateboarders can result in significant injury. New approaches are needed to evaluate shared-use areas prone to hazardous pedestrian–skateboarder interactions and to perform real-time, in situ (i.e., on-device) predictions of pedestrian–skateboarder collisions as road conditions vary due to changes in land usage and construction. Surrogate Safety Measures for skateboarder–pedestrian interaction can be computed to evaluate high-risk conditions on roads and sidewalks using deep learning object detection models. In this paper, we present the first skateboarder–pedestrian safety study leveraging deep learning architectures. We review and analyze state-of-the-art deep learning architectures, namely Faster R-CNN and two variants of the Single Shot Multibox Detector (SSD), to select the model best suited to each of two tasks: automated calculation of Post Encroachment Time (PET) and identification of hazardous conflict zones in real time. We also contribute a new annotated dataset of skateboarder–pedestrian interactions collected for this study. Both selected models detect and classify pedestrians and skateboarders correctly and efficiently. However, given the differences between their architectures and the advantages and disadvantages of each, the two models were assigned to different tasks: the Faster R-CNN model, with its higher accuracy, was used to automate the calculation of PET, whereas the SSD MobileNet V1 model, with its much faster inference rate, was used to determine hazardous regions in real time.
An outcome of this work is a model that can be deployed on low-cost, small-footprint mobile and IoT devices at traffic intersections with existing cameras to perform on-device inferencing for in situ Surrogate Safety Measurement (SSM), such as Time-To-Collision (TTC) and Post Encroachment Time (PET). SSM values that exceed a hazard threshold can be published to a Message Queuing Telemetry Transport (MQTT) broker, where messages are received by an intersection traffic signal controller for real-time signal adjustment, thus contributing to state-of-the-art vehicle and pedestrian safety at hazard-prone intersections.
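As a sketch of the PET-based hazard logic described above — the class names, timestamps, and the 1.5 s threshold below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

# Hypothetical detection record: the timestamps (in seconds) at which a
# tracked road user's bounding box enters and exits a marked conflict zone.
@dataclass
class ZoneTransit:
    user_class: str   # e.g. "pedestrian" or "skateboarder"
    t_enter: float
    t_exit: float

def post_encroachment_time(first: ZoneTransit, second: ZoneTransit) -> float:
    """PET: the gap between the first user leaving the conflict zone and
    the second user entering it. A smaller PET means higher collision risk."""
    return second.t_enter - first.t_exit

# Illustrative event: a pedestrian clears the zone, a skateboarder follows.
ped = ZoneTransit("pedestrian", t_enter=12.4, t_exit=13.1)
sk8 = ZoneTransit("skateboarder", t_enter=13.9, t_exit=14.3)
pet = post_encroachment_time(ped, sk8)

HAZARD_THRESHOLD_S = 1.5  # assumed threshold, not a value from the paper
if pet < HAZARD_THRESHOLD_S:
    # In deployment this event would be published to the MQTT broker,
    # e.g. client.publish("intersection/ssm", payload) with paho-mqtt.
    print(f"hazard: PET = {pet:.2f} s")
```

In a deployed system the enter/exit timestamps would come from the object detector's per-frame tracks, and only threshold-exceeding events would be pushed to the broker.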
We develop a method to compute synthetic kilonova light curves that combines numerical relativity simulations of neutron star mergers with the SNEC radiation–hydrodynamics code. We describe our implementation of initial and boundary conditions, r-process heating, and opacities for kilonova simulations. We validate our approach by carefully checking that energy conservation is satisfied and by comparing the SNEC results with those of two semi-analytic light-curve models. We apply our code to the calculation of colour light curves for three binaries having different mass ratios (equal and unequal mass) and different merger outcomes (short-lived and long-lived remnants). We study the sensitivity of our results to hydrodynamic effects, nuclear physics uncertainties in the heating rates, and the duration of the merger simulations. We find that hydrodynamic effects are typically negligible and that homologous expansion is a good approximation in most cases. However, pressure forces can amplify the impact of uncertainties in the radioactive heating rates. We also study the impact of shocks possibly launched into the outflows by a relativistic jet. None of our models match AT2017gfo, the kilonova in GW170817. This points to possible deficiencies in our merger simulations and in kilonova models that neglect non-LTE effects and additional energy injection from the merger remnant, and to the need to go beyond the assumption of spherical symmetry adopted in this work.
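For context, the r-process heating rates discussed above are commonly parametrised with a Korobkin et al. (2012)-style fitting formula that decays roughly as t^-1.3 at late times. A minimal sketch with illustrative constants (not this paper's calibrated values):

```python
import numpy as np

# Illustrative normalisation and turnover parameters for the specific
# r-process heating rate eps(t); values are representative, not fitted here.
EPS0 = 2e18          # erg / g / s
T0, SIGMA = 1.3, 0.11  # s: turnover time and width
ALPHA = 1.3            # late-time power-law index

def heating_rate(t):
    """Specific r-process heating rate eps(t) in erg/g/s (Korobkin-type fit)."""
    return EPS0 * (0.5 - np.arctan((t - T0) / SIGMA) / np.pi) ** ALPHA

# At t >> T0 the formula reduces to the familiar eps ~ t**-1.3 power law,
# so a factor-of-10 step in time changes eps by roughly 10**1.3 ~ 20.
ratio = heating_rate(1e4) / heating_rate(1e5)
```

Uncertainties in EPS0 and ALPHA are exactly the "nuclear physics uncertainties in the heating rates" whose light-curve impact the abstract describes.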
A novel modeling framework that simultaneously improves accuracy, predictability, and computational efficiency is presented. It embraces the benefits of three modeling techniques integrated together for the first time: surrogate modeling, parameter inference, and data assimilation. The use of polynomial chaos expansion (PCE) surrogates significantly decreases computational time. Parameter inference allows for faster model convergence, reduced uncertainty, and superior accuracy of simulated results. Ensemble Kalman filters assimilate errors that occur during forecasting. To examine the applicability and effectiveness of the integrated framework, we developed 18 approaches according to how surrogate models are constructed, what type of parameter distributions are used as model inputs, and whether model parameters are updated during the data assimilation procedure. We conclude that (1) PCE must be built over various forcing and flow conditions, and, in contrast to previous studies, it does not need to be rebuilt at each time step; (2) model parameter specification that relies on constrained, posterior information of parameters (the so-called Selected specification) can significantly improve forecasting performance and reduce uncertainty bounds compared to the Random specification using prior information of parameters; and (3) no substantial differences in results exist between single and dual ensemble Kalman filters, but the latter better simulates flood peaks. The use of PCE effectively compensates for the computational load added by parameter inference and data assimilation (up to ~80 times faster). Therefore, the presented approach contributes to a shift in modeling paradigm, arguing that complex, high-fidelity hydrologic and hydraulic models should be increasingly adopted for real-time and ensemble flood forecasting.
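A minimal sketch of the stochastic ensemble Kalman filter analysis step that such frameworks use to assimilate forecast errors — the toy discharge values, observation error, and ensemble size below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_err, H):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_ens); obs: (n_obs,); H: (n_obs, n_state)."""
    n_obs, n_ens = len(obs), ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # obs-space anomalies
    # Sample covariances (obs-error covariance assumed diagonal).
    P_hh = HXp @ HXp.T / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
    P_xh = X @ HXp.T / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)                        # Kalman gain
    # Perturbed observations, one realisation per member.
    perturbed = obs[:, None] + obs_err * rng.standard_normal((n_obs, n_ens))
    return ensemble + K @ (perturbed - HX)

# Toy example: 20-member ensemble of a scalar discharge forecast.
ens = 5.0 + rng.standard_normal((1, 20))   # prior mean near 5
analysis = enkf_update(ens, obs=np.array([7.0]), obs_err=0.1, H=np.eye(1))
# The analysis mean is pulled from the prior (~5) toward the observation (7).
```

In the dual-filter variant mentioned above, the state vector is augmented with the model parameters so that both are updated by the same gain.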
Recent advances in computing algorithms and hardware have rekindled interest in developing high-accuracy, low-cost surrogate models for simulating physical systems. The idea is to replace expensive numerical integration of complex coupled partial differential equations at fine time scales, performed on supercomputers, with machine-learned surrogates that efficiently and accurately forecast future system states using data sampled from the underlying system. One particularly popular technique being explored within the weather and climate modelling community is the echo state network (ESN), an attractive alternative to other well-known deep learning architectures. Using the classical Lorenz 63 system and the three-tier multi-scale Lorenz 96 system (Thornes T, Duben P, Palmer T. 2017 Q. J. R. Meteorol. Soc. 143, 897–908. (doi:10.1002/qj.2974)) as benchmarks, we find that previously studied state-of-the-art ESNs operate in two distinct regimes, corresponding to low and high spectral radius (LSR/HSR) of the sparse, randomly generated reservoir recurrence matrix. Using knowledge of the mathematical structure of the Lorenz systems along with systematic ablation and hyperparameter sensitivity analyses, we show that state-of-the-art LSR-ESNs reduce to a polynomial regression model which we call Domain-Driven Regularized Regression (D2R2). Interestingly, D2R2 is a generalization of the well-known SINDy algorithm (Brunton SL, Proctor JL, Kutz JN. 2016 Proc. Natl Acad. Sci. USA 113, 3932–3937. (doi:10.1073/pnas.1517384113)). We also show experimentally that LSR-ESNs (Chattopadhyay A, Hassanzadeh P, Subramanian D. 2019 (http://arxiv.org/abs/1906.08829)) outperform HSR-ESNs (Pathak J, Hunt B, Girvan M, Lu Z, Ott E. 2018 Phys. Rev. Lett. 120, 024102. (doi:10.1103/PhysRevLett.120.024102)), while D2R2 dominates both approaches.
A significant goal in constructing surrogates is to cope with barriers to scaling in weather prediction and simulation of dynamical systems that are imposed by time and energy consumption in supercomputers. Inexact computing has emerged as a novel approach to helping with scaling. In this paper, we evaluate the performance of three models (LSR-ESN, HSR-ESN, and D2R2) by varying the precision, or word size, of the computation as our inexactness-controlling parameter. For precisions of 64, 32, and 16 bits, we show that, surprisingly, the least expensive D2R2 method yields the most robust results and the greatest savings compared to ESNs. Specifically, D2R2 achieves a 68× computational saving, with an additional 2× if precision reductions are also employed, outperforming the ESN variants by a large margin. This article is part of the theme issue 'Machine learning for weather and climate modelling'.
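To illustrate the precision knob on the Lorenz 63 benchmark, the sketch below integrates the system at 64-bit and 32-bit word sizes and measures how far the two trajectories drift apart; this is a generic illustration of reduced-precision sensitivity, not the paper's experimental setup:

```python
import numpy as np

# Classical Lorenz 63 parameters.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz63(x):
    """Right-hand side of the Lorenz 63 ODEs, preserving the input dtype."""
    return np.array([SIGMA * (x[1] - x[0]),
                     x[0] * (RHO - x[2]) - x[1],
                     x[0] * x[1] - BETA * x[2]], dtype=x.dtype)

def rk4_step(x, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(dtype, steps=2000, dt=0.01):
    """Integrate from (1, 1, 1) entirely in the given floating-point dtype."""
    x = np.array([1.0, 1.0, 1.0], dtype=dtype)
    dt_typed = dtype(dt)
    for _ in range(steps):
        x = rk4_step(x, dt_typed)
    return x

x64 = integrate(np.float64)
x32 = integrate(np.float32)
# On a chaotic attractor, rounding differences between word sizes grow
# exponentially, so the two end states separate measurably by t = 20.
drift = np.abs(x64 - x32.astype(np.float64)).max()
```

For D2R2-style polynomial regression the analogous experiment simply casts the feature matrix and weights to the reduced word size before fitting and prediction.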
We present an improved version of the 3D Monte Carlo radiative transfer code possis to model kilonovae from neutron star mergers, wherein nuclear heating rates, thermalization efficiencies, and wavelength-dependent opacities depend on local properties of the ejecta and on time. Using an axially symmetric two-component ejecta model, we explore how simplistic assumptions about heating rates, thermalization efficiencies, and opacities often found in the literature affect kilonova spectra and light curves. Specifically, we compute five models: one (FIDUCIAL) with an appropriate treatment of these three quantities, one (SIMPLE-HEAT) with uniform heating rates throughout the ejecta, one (SIMPLE-THERM) with a constant and uniform thermalization efficiency, one (SIMPLE-OPAC) with grey opacities, and one (SIMPLE-ALL) with all three simplistic assumptions combined. We find that deviations from the FIDUCIAL model reach several (∼1–10) magnitudes and are generally larger for the SIMPLE-OPAC and SIMPLE-ALL models than for the SIMPLE-THERM and SIMPLE-HEAT models. The discrepancies generally increase from a face-on to an edge-on view of the system, from early to late epochs, and from infrared to ultraviolet/optical wavelengths. This work indicates that kilonova studies using any of these simplistic assumptions ought to be treated with caution, and that appropriate systematic uncertainties ought to be added to kilonova light curves when performing inference on ejecta parameters.
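A toy one-zone estimate (not the possis code, and with purely illustrative numbers) can show why replacing wavelength-dependent opacities with a single grey value skews kilonova colours: a uniform shell attenuates band flux by exp(-tau) with tau = kappa * rho * R, so bands whose true opacity exceeds the grey value come out too bright under the grey assumption, and vice versa.

```python
import math

# Illustrative column density rho*R through the ejecta shell (g/cm^2).
COLUMN = 0.1

KAPPA_GREY = 3.0                                              # cm^2/g
# Lanthanide-rich ejecta are far more opaque in the UV/optical than the IR;
# these band-averaged opacities are rough illustrative magnitudes only.
KAPPA_BAND = {"ultraviolet": 30.0, "optical": 10.0, "infrared": 1.0}

def magnitude_shift(kappa_band):
    """Magnitude offset of the grey-opacity flux relative to the
    wavelength-dependent flux in this band (positive = grey is brighter)."""
    tau_band = kappa_band * COLUMN
    tau_grey = KAPPA_GREY * COLUMN
    return 2.5 / math.log(10) * (tau_band - tau_grey)

shifts = {band: magnitude_shift(k) for band, k in KAPPA_BAND.items()}
# Grey opacities over-brighten the UV/optical and under-brighten the IR,
# qualitatively matching the wavelength trend described in the abstract.
```

This static estimate ignores reprocessing of absorbed UV photons into the IR, which Monte Carlo transfer captures and which further amplifies the colour differences.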