We consider the placement, delivery promise, and fulfillment decisions of an online retailer. We have a set of products with given numbers of units to be placed at capacitated fulfillment centers. Once the placement decisions are made, demands for the products arrive randomly over time from different demand regions. In response to each demand, we pick a delivery promise to offer, which determines the probability that the demand converts into a sale, and we choose a fulfillment center to serve the demand. Our goal is to decide where to place the units at the beginning of the selling horizon and to find a policy for making delivery promise and fulfillment decisions over the selling horizon so as to maximize the total expected profit. We give an approximation strategy that obtains solutions with performance guarantees for this joint placement, delivery promise, and fulfillment problem. In our approximation strategy, we construct a bounding function that upper bounds the total expected profit of the delivery promise and fulfillment policy when viewed as a function of the placement decisions. To make the placement decisions, we maximize the bounding function subject to the capacity constraints at the fulfillment centers. To make the delivery promise and fulfillment decisions, we construct a policy that obtains a constant fraction of the bounding function. Using our approximation strategy with appropriate bounding functions, we obtain a solution with a constant-factor performance guarantee; moreover, if the size of the system, measured by the number of units to be placed and the capacities of the fulfillment centers, is large, then we obtain an asymptotically optimal solution. We compare our approximation strategy with approaches that ignore the interactions between the placement, delivery promise, and fulfillment decisions, as well as with heuristics based on Lagrangian relaxation, and find that our approximation strategy compares quite favorably.
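To illustrate the first step of such a strategy, a deterministic (fluid) relaxation is one standard way to build a bounding function of the placement decisions. The sketch below is a hypothetical single-product toy instance, not the paper's actual construction: expected demands replace random arrivals, each fulfillment-center-to-region pair carries an assumed per-unit profit (reward under the best delivery promise minus shipping cost), and the resulting linear program is maximized subject to capacity and inventory-budget constraints. The placement quantities are read off from the optimal flows.

```python
import numpy as np
from scipy.optimize import linprog

caps = np.array([8.0, 5.0])           # fulfillment-center capacities (assumed)
dem = np.array([4.0, 6.0, 2.0])       # expected demand per region (assumed)
r = np.array([[3.0, 1.0, 2.0],        # per-unit profit of serving region k from FC j
              [1.0, 4.0, 2.5]])
total_units = 10.0                    # units available to place

nj, nk = r.shape
c = -r.ravel()                        # linprog minimizes, so negate the profits
A, b = [], []
for j in range(nj):                   # cannot ship more than the FC's capacity
    row = np.zeros(nj * nk)
    row[j * nk:(j + 1) * nk] = 1.0
    A.append(row); b.append(caps[j])
for k in range(nk):                   # cannot serve more than the expected demand
    row = np.zeros(nj * nk)
    row[k::nk] = 1.0
    A.append(row); b.append(dem[k])
A.append(np.ones(nj * nk))            # total inventory budget
b.append(total_units)

res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b),
              bounds=(0, None), method="highs")
z = res.x.reshape(nj, nk)             # fluid fulfillment flows
placement = z.sum(axis=1)             # units to place at each fulfillment center
upper_bound = -res.fun                # value of the (fluid) bounding function
```

Because the fluid relaxation replaces random demand with its expectation and relaxes integrality, its optimal value upper bounds the achievable expected profit for this toy model, mirroring the role the bounding function plays in the approximation strategy.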
Spectral gaps on complete Riemannian manifolds
In this short note, we survey some basic results related to the New Weyl criterion for the essential spectrum. We then use the language of Gromov-Hausdorff convergence to prove a spectral gap theorem.
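For context, the classical Weyl criterion, which the note's new Weyl criterion refines, characterizes the essential spectrum of a self-adjoint operator $A$ through singular sequences. The following is the standard textbook statement, not a result quoted from the note:

```latex
\lambda \in \sigma_{\mathrm{ess}}(A)
\quad\Longleftrightarrow\quad
\exists\,(u_k)_{k\ge 1}\subset \operatorname{Dom}(A):\;
\|u_k\|=1,\quad u_k \rightharpoonup 0 \text{ weakly},\quad
\|(A-\lambda)u_k\|\to 0 .
```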
- Award ID(s): 1908513
- PAR ID: 10273519
- Date Published:
- Journal Name: Contemporary Mathematics
- Volume: 756
- ISSN: 2705-1064
- Page Range / eLocation ID: 57-67
- Sponsoring Org: National Science Foundation
More Like this
- To advance justice, equity, diversity, and inclusion in science, we must first understand and improve the dominant-culture frameworks that impede progress and, second, intentionally create more equitable models. The present authors call ourselves the ICBOs and Allies Workgroup (ICBOs stands for independent community-based organizations), and we represent communities historically excluded from the sciences. Together with institutional allies and advisors, we began our research because we wanted our voices to be heard, and we hoped to bring a different perspective to doing science with, and not on, communities. We created a community framework to guide our research, and we led all aspects of our work, from creating research protocols to analyzing and interpreting the data to disseminating the results. We share our research framework, methods, and results so that science institutions can better understand how to intentionally create more equitable research partnerships with our communities.
- In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein makes two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic fairness in order to address a wider range of harms not recognized by existing technical or legal definitions. Second, we argue that the epistemic lens helps to identify the epistemic goals of inquiries into algorithmic fairness. There are two distinct contexts within which we examine algorithmic harm: at times, we seek to understand and describe the world as it is, and, at other times, we seek to build a more just future. The epistemic lens can serve to direct our attention to the epistemic frameworks that shape our interpretations of the world as it is and the ways we envision possible futures. Clarity with respect to which epistemic context is relevant in a given inquiry can further help inform choices among the different ways of measuring and addressing algorithmic harms. We introduce this framework with the goal of initiating new research directions bridging philosophical, legal, and technical approaches to understanding and mitigating algorithmic harms.
- For a class of Cyber-Physical Systems (CPSs), we address the problem of performing computations over the cloud without revealing private information about the structure and operation of the system. We model CPSs as a collection of input-output dynamical systems (the system operation modes). Depending on the mode the system is operating in, the output trajectory is generated by one of these systems in response to driving inputs. Output measurements and driving inputs are sent to the cloud for processing purposes. We capture this "processing" through some function (of the input-output trajectory) that we require the cloud to compute accurately, referred to here as the trajectory utility. However, for privacy reasons, we would like to keep the mode private, i.e., we do not want the cloud to correctly identify what mode of the CPS produced a given trajectory. To this end, we distort trajectories before transmission and send the corrupted data to the cloud. We provide mathematical tools (based on output-regulation techniques) to properly design distorting mechanisms so that: 1) the original and distorted trajectories lead to the same utility; and 2) the distorted data leads the cloud to misclassify the mode.
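The core idea of the utility-preserving distortion in the abstract above can be caricatured in a few lines. This is a deliberately simplified stand-in, not the paper's output-regulation construction: the trajectory utility is assumed to be the time average, and the distortion adds a perturbation projected onto the subspace that leaves that average unchanged, so the utility is preserved exactly while the pointwise trajectory is altered.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(y):
    # Toy stand-in for the trajectory utility: the time average.
    return y.mean()

def distort(y, scale=5.0):
    """Add a perturbation with zero time average, so utility(y) is
    preserved while the pointwise trajectory changes substantially."""
    noise = rng.normal(scale=scale, size=y.shape)
    noise -= noise.mean()   # project onto the utility-preserving subspace
    return y + noise

# A trajectory from some hypothetical operating mode.
y = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
y_dist = distort(y)
```

A real mechanism must additionally guarantee that the distorted trajectory is a plausible output of a *different* mode (the misclassification requirement), which is where the output-regulation machinery comes in; this sketch only shows the utility-invariance half.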
- We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to it in addition to another modality. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, ModelNet40, and NVIDIA Dynamic Hand Gesture.
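The conditional utilization rate described in the last abstract above is simple to state as code. The sketch below uses hypothetical accuracy numbers for an assumed audio-visual model; only the definition (accuracy with both modalities minus accuracy with one alone) is taken from the abstract.

```python
def conditional_utilization_rate(acc, modality, other):
    """Gain in accuracy from adding `modality` on top of `other` alone.
    `acc` maps a frozenset of modality names to the model's accuracy."""
    return acc[frozenset({modality, other})] - acc[frozenset({other})]

# Hypothetical accuracies for an audio-visual model (illustrative numbers only).
acc = {
    frozenset({"audio"}): 0.70,
    frozenset({"video"}): 0.55,
    frozenset({"audio", "video"}): 0.74,
}

u_audio = conditional_utilization_rate(acc, "audio", "video")  # 0.74 - 0.55
u_video = conditional_utilization_rate(acc, "video", "audio")  # 0.74 - 0.70
```

In this illustrative instance the rates are badly imbalanced (the model leans on audio and barely uses video), which is exactly the symptom of greedy multi-modal learning the abstract describes.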