
Title: Optimal Algorithms for Computing Average Temperatures
Abstract: A numerical algorithm is presented for computing average global temperature (or other quantities of interest, such as average precipitation) from measurements taken at specified locations and times. The algorithm is proven to be optimal in a certain sense. The analysis of the optimal algorithm provides a sharp a priori bound on the error between the computed value and the true average global temperature. This a priori bound involves a computable compatibility constant that assesses the quality of the measurements for the chosen model. The optimal algorithm is constructed by solving a convex minimization problem and involves a set of functions selected a priori in relation to the model. The solution is shown to promote sparsity and hence to use a smaller number of well-chosen data sites than the full set provided. The algorithm is then applied to canonical data sets and mathematically generic models for the computation of average temperature and average precipitation over given regions and given time intervals. A comparison is provided between the proposed algorithms and existing methods.
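As a rough illustration of the kind of construction described (not the paper's actual algorithm), the sketch below recovers an average over [0, 1] from a handful of hypothetical point measurements by minimizing the L1 norm of the quadrature weights subject to exactness on a small set of model functions (monomials here). The sites, the model, and the subset-enumeration solver are all illustrative assumptions:

```python
import itertools
import numpy as np

# Hypothetical data sites in [0, 1] (stand-ins for measurement locations).
sites = np.array([0.05, 0.3, 0.5, 0.7, 0.95])

# Model functions: monomials 1, x, x^2, with exact integrals over [0, 1].
Phi = np.vstack([sites**k for k in range(3)])   # 3 x 5 collocation matrix
targets = np.array([1.0, 1 / 2, 1 / 3])         # integrals of x^k over [0, 1]

# Minimize ||w||_1 subject to Phi @ w = targets.  An optimal basic
# solution of this linear program uses at most 3 sites (one per model
# function), so enumerating 3-site subsets and keeping the feasible rule
# with the smallest L1 norm is a tiny stand-in for a convex solver.
best_w, best_norm = None, np.inf
for subset in itertools.combinations(range(len(sites)), 3):
    A = Phi[:, subset]
    if abs(np.linalg.det(A)) < 1e-12:
        continue
    w_sub = np.linalg.solve(A, targets)
    if np.abs(w_sub).sum() < best_norm:
        best_norm = np.abs(w_sub).sum()
        best_w = np.zeros(len(sites))
        best_w[list(subset)] = w_sub

# The resulting rule is exact on the model and sparse: it activates no
# more data sites than there are model functions.
print("weights:", np.round(best_w, 4))
print("sites used:", int(np.sum(np.abs(best_w) > 1e-10)))
```

The sparsity the abstract mentions shows up here directly: although five sites are offered, the L1-optimal rule activates at most three of them.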
Journal Name: Mathematics of Climate and Weather Forecasting
Page Range or eLocation-ID: 34 to 44
Sponsoring Org: National Science Foundation
More Like this
  1. The HTTP adaptive streaming technique opened the door to coping with fluctuating network conditions during the streaming process by dynamically adjusting the volume of the future chunks to be downloaded. The bitrate selection in this adjustment inevitably involves predicting the future throughput of a video session, a task for which various heuristic solutions have been explored. The ultimate goal of the present work is to explore the theoretical upper bounds of the QoE that any ABR algorithm can possibly reach, thereby providing an essential step toward benchmarking the performance evaluation of ABR algorithms. In our setting, the QoE is defined as a linear combination of the average perceptual quality and the buffering ratio. The optimization problem is proven to be NP-hard when the perceptual quality is defined by chunk size, and conditions are given under which the problem becomes polynomially solvable. Enriched by a global lower bound, a pseudo-polynomial-time algorithm based on dynamic programming is presented. When minimum buffering is given higher priority than higher perceptual quality, the problem is shown to be NP-hard as well, and the above algorithm is simplified and enhanced by a sequence of lower bounds on the completion time of chunk downloading, which, according to our experiments, brings a 36.0% improvement in computation time. To handle large amounts of data more efficiently, a polynomial-time algorithm is also introduced to approximate the optimal values when minimum buffering is prioritized. Besides its performance guarantee, this algorithm is shown to reach 99.938% of the optimal value while taking only 0.024% of the computation time of the exact dynamic-programming algorithm.
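A minimal sketch of a dynamic program in the spirit described above, with toy constants throughout (constant throughput, three hypothetical quality levels, QoE = quality minus a rebuffering penalty); none of these values come from the paper:

```python
import math

throughput = 1.0          # MB per tick (assumed constant for the sketch)
chunk_sec = 4             # ticks of playback each chunk provides
sizes = [1, 2, 4]         # chunk size in MB per quality level (hypothetical)
quality = [1, 2, 3]       # perceptual quality per level (hypothetical)
n_chunks, max_buf = 5, 20
lam = 2.0                 # weight on rebuffering in the QoE objective

best = {}
def solve(i, buf):
    # Best achievable (quality - lam * rebuffer) from chunk i onward,
    # given `buf` ticks of video currently buffered.
    if i == n_chunks:
        return 0.0
    if (i, buf) in best:
        return best[(i, buf)]
    val = -math.inf
    for lvl, size in enumerate(sizes):
        dl = math.ceil(size / throughput)   # download time in ticks
        rebuffer = max(0, dl - buf)         # stall if the buffer drains
        new_buf = min(max_buf, buf - dl + rebuffer + chunk_sec)
        val = max(val, quality[lvl] - lam * rebuffer + solve(i + 1, new_buf))
    best[(i, buf)] = val
    return val

print("optimal QoE:", solve(0, 0))
```

The state space is (chunk index, discretized buffer level), which is what makes the exact algorithm pseudo-polynomial: its size grows with the numeric range of the buffer, not just the number of chunks.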
  2. Abstract
    Excessive phosphorus (P) applications to croplands can contribute to eutrophication of surface waters through surface runoff and subsurface (leaching) losses. We analyzed leaching losses of total dissolved P (TDP) from no-till corn, hybrid poplar (Populus nigra X P. maximowiczii), switchgrass (Panicum virgatum), miscanthus (Miscanthus giganteus), native grasses, and restored prairie, all planted in 2008 on former cropland in Michigan, USA. All crops except corn (13 kg P ha⁻¹ year⁻¹) were grown without P fertilization. Biomass was harvested at the end of each growing season except for poplar. Soil water at 1.2 m depth was sampled weekly to biweekly for TDP determination during March–November 2009–2016 using tension lysimeters. Soil test P (STP; 0–25 cm depth) was measured every autumn. Soil water TDP concentrations were usually below levels at which eutrophication of surface waters is frequently observed (>0.02 mg L⁻¹) but often higher than in deep groundwater or nearby streams and lakes. Rates of P leaching, estimated from measured concentrations and modeled drainage, did not differ statistically among cropping systems across years; 7-year cropping-system means ranged from 0.035 to 0.072 kg P ha⁻¹ year⁻¹ with large interannual variation. Leached P was positively related to STP, which decreased over the 7 years in all systems. These results indicate that both P-fertilized and unfertilized cropping systems may …
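The abstract's estimation of leaching rates from measured concentrations and modeled drainage reduces to a unit conversion; a minimal sketch, with hypothetical input values chosen to fall in the reported range:

```python
def leaching_load_kg_per_ha(tdp_mg_per_l, drainage_mm):
    """P leaching load from concentration and drainage.

    1 mm of drainage over 1 ha is 10,000 L, so
    load [kg/ha] = C [mg/L] * D [mm] * 10,000 [L/mm/ha] * 1e-6 [kg/mg]
                 = C * D * 0.01
    """
    return tdp_mg_per_l * drainage_mm * 0.01

# Hypothetical values: 0.02 mg/L TDP and 300 mm annual drainage.
print(leaching_load_kg_per_ha(0.02, 300))  # ≈ 0.06 kg P ha⁻¹ yr⁻¹
```

With those assumed inputs the load lands near the 0.035–0.072 kg P ha⁻¹ year⁻¹ range the study reports, which is consistent with concentrations of that order and drainage of a few hundred mm.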
  3. Abstract Land surface processes are vital to the performance of regional climate models in dynamical downscaling applications. In this study, we investigate the sensitivity of simulations with the Weather Research and Forecasting (WRF) model at 10-km resolution to the land surface scheme over Central Asia. The WRF model was run for 19 summers from 2000 to 2018, configured with four different land surface schemes, CLM4, Noah-MP, Pleim-Xiu and SSiB, hereafter referred to as Exp-CLM4, Exp-Noah-MP, Exp-PX and Exp-SSiB, respectively. The initial and boundary conditions for the WRF model simulations were provided by the National Centers for Environmental Prediction Final (NCEP-FNL) Operational Global Analysis data. The ERA-Interim reanalysis (ERAI), the GHCN-CAMS and the CRU gridded data were used to comprehensively evaluate the WRF simulations. Compared with the reanalysis and observational data, the WRF model can reasonably reproduce the spatial patterns of summer mean 2-m temperature, precipitation, and large-scale atmospheric circulation. The simulations, however, are sensitive to the choice of land surface scheme. The performance of Exp-CLM4 and Exp-SSiB is better than that of Exp-Noah-MP and Exp-PX, as assessed by the Multivariable Integrated Evaluation (MVIE) method. To comprehensively understand the dynamic and physical mechanisms behind the WRF model's sensitivity to land surface schemes, the differences in the surface energy balance between Ave-CLM4-SSiB (the ensemble average of Exp-CLM4 and Exp-SSiB) and Ave-NoahMP-PX (the ensemble average of Exp-Noah-MP and Exp-PX) are analyzed in detail. The results demonstrate that the sensible heat flux is lower by 30.42 W·m⁻² and the latent heat flux higher by 14.86 W·m⁻² in Ave-CLM4-SSiB than in Ave-NoahMP-PX. As a result, large differences in geopotential height occur over the simulation domain. The simulated wind fields are subsequently influenced by the geostrophic adjustment process; thus the simulated 2-m temperature, surface skin temperature and precipitation are lower by about 2.08 ℃, 2.23 ℃ and 18.56 mm·month⁻¹, respectively, in Ave-CLM4-SSiB than in Ave-NoahMP-PX over the Central Asian continent.
  4. We study the problem of distributed task allocation by workers in an ant colony in a setting of limited capabilities and noisy environment feedback. We assume that each task has a demand that should be satisfied but not exceeded, i.e., there is an optimal number of ants that should be working on the task at a given time. The goal is to assign a near-optimal number of workers to each task in a distributed manner, without explicit access to the value of the demand or to the number of ants working on the task. We seek to answer the question of how the quality of task allocation depends on the accuracy with which the ants assess whether too many (overload) or not enough (lack of) ants are currently working on a given task. In our model, each ant receives binary feedback that depends on the deficit, defined as the difference between the demand and the current number of workers on the task. The feedback is modeled as a random variable that takes the value lack or overload with probability given by a sigmoid function of the deficit. The higher the overload or lack of workers for a task, the more likely it is that an ant receives the correct feedback from this task; the closer the deficit is to zero, the less reliable the feedback becomes. Each ant receives the feedback independently about one chosen task. We measure the performance of task-allocation algorithms using the notion of inaccuracy, defined as the number of steps in which the deficit of some task is beyond a certain threshold. We propose a simple, constant-memory, self-stabilizing, distributed algorithm that converges from any initial assignment to a near-optimal assignment under noisy feedback and keeps the deficit small for all tasks in almost every step. We also prove a lower bound for any constant-memory algorithm, which matches, up to a constant factor, the accuracy achieved by our algorithm.
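A toy rendering of the feedback model for a single task (the sigmoid noise is from the abstract; the ±1 update rule, the slope, and all constants are illustrative assumptions, not the paper's algorithm):

```python
import math
import random

random.seed(1)

def feedback(deficit, slope=0.5):
    """Binary noisy signal: 'lack' with probability sigmoid(slope * deficit).

    Large positive deficit (too few workers) -> 'lack' almost surely;
    large negative deficit (overload) -> 'overload' almost surely;
    near zero deficit the signal approaches a coin flip.
    """
    p_lack = 1.0 / (1.0 + math.exp(-slope * deficit))
    return "lack" if random.random() < p_lack else "overload"

demand, workers = 50, 10
for _ in range(2000):
    d = demand - workers
    # Toy update rule: one worker joins on 'lack', one leaves on 'overload'.
    workers += 1 if feedback(d) == "lack" else -1
    workers = max(workers, 0)

print("workers after convergence:", workers)
```

Even with the coin-flip behavior near zero deficit, the drift of this random walk pulls the worker count toward the demand and keeps it hovering nearby, which is the qualitative behavior the abstract's self-stabilization result formalizes.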
  5. We study social choice rules under the utilitarian distortion framework, with an additional metric assumption on the agents' costs over the alternatives. In this approach, the costs are given by an underlying metric on the set of all agents and alternatives. Social choice rules have access only to the ordinal preferences of agents, not to the latent cardinal costs that induce them. Distortion is then defined as the ratio between the social cost (typically the sum of agent costs) of the alternative chosen by the mechanism at hand and that of the optimal alternative chosen by an omniscient algorithm. The worst-case distortion of a social choice rule is therefore a measure of how close it always gets to the optimal alternative without any knowledge of the underlying costs. Under this model, it has been conjectured that Ranked Pairs, the well-known weighted-tournament rule, achieves a distortion of at most 3 (Anshelevich et al. 2015). We disprove this conjecture by constructing a sequence of instances showing that the worst-case distortion of Ranked Pairs is at least 5. Our lower bound on the worst-case distortion of Ranked Pairs matches a previously known upper bound for the Copeland rule, proving that in the worst case, the simpler Copeland rule is at least as good as Ranked Pairs. And as long as we are limited to (weighted or unweighted) tournament rules, we demonstrate that randomization cannot help achieve an expected worst-case distortion of less than 3. Using the concept of approximate majorization within the distortion framework, we prove that Copeland and Randomized Dictatorship achieve low constant-factor fairness ratios (5 and 3, respectively), a considerable generalization of similar results for the sum-of-costs and single-largest-cost objectives. In addition, we outline several interesting directions for further research in this space.
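A small worked instance of the distortion measure under the metric assumption (the line metric, agent positions, and the two-alternative majority rule are all hypothetical, chosen so the ordinal rule misses the cardinal optimum):

```python
# Hypothetical 1-D metric: agents and alternatives are points on a line,
# cost(agent, alt) = |position difference|; social cost is the sum.
agents = [0.4, 0.4, 1.0]
alternatives = {"A": 0.0, "B": 1.0}

def social_cost(alt):
    return sum(abs(a - alternatives[alt]) for a in agents)

# An ordinal rule (majority, over two alternatives) sees only rankings:
prefers_A = sum(1 for a in agents
                if abs(a - alternatives["A"]) < abs(a - alternatives["B"]))
chosen = "A" if 2 * prefers_A > len(agents) else "B"

optimal = min(alternatives, key=social_cost)
distortion = social_cost(chosen) / social_cost(optimal)
print("chosen:", chosen, "optimal:", optimal, "distortion:", round(distortion, 3))
```

Here the two agents at 0.4 narrowly prefer A, so majority picks A, but the cardinal optimum is B (cost 1.2 vs. 1.8), giving distortion 1.5. Worst-case bounds such as the 3 and 5 in the abstract come from maximizing this ratio over all metric instances.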