We present Whisper, a system for privacy-preserving collection of aggregate statistics. Like prior systems, a Whisper deployment consists of a small set of non-colluding servers; these servers compute aggregate statistics over data from a large number of users without learning the data of any individual user. Whisper's main contribution is that its server-to-server communication and server-side storage costs scale sublinearly with the total number of users. In particular, prior systems required the servers to exchange a few bits of information to verify the well-formedness of each client submission. In contrast, Whisper uses silently verifiable proofs, a new type of proof system on secret-shared data that allows the servers to verify an arbitrarily large batch of proofs by exchanging a single 128-bit string. This improvement comes at the cost of increased client-to-server communication, which in cloud deployments is typically much cheaper (ingress is often free) than the egress traffic that server-to-server communication requires. To reduce server storage, Whisper approximates certain statistics using small-space sketching data structures. Applying randomized sketches in an environment with adversarial clients requires a careful and novel security analysis. In a deployment with two servers and 100,000 clients, of which 1% are malicious, Whisper reduces server-to-server communication for vector sums by three orders of magnitude while increasing each client's communication by only 10%.
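To convey the flavor of batch verification with a single exchanged message, here is a toy sketch in Python (our illustration, not Whisper's actual protocol): suppose each honest client's proof leaves the two servers holding additive shares of a verification value that sum to zero modulo a prime, and the servers compress an entire batch with a random linear combination so that one exchanged field element vouches for every client at once.

```python
import secrets

FIELD = 2**127 - 1  # Mersenne prime M127; a hypothetical field choice

def server_tag(verif_shares: list[int], r: int) -> int:
    """Fold one server's per-client verification shares into a single
    field element via a random linear combination (powers of a jointly
    sampled challenge r weight the clients)."""
    acc, weight = 0, 1
    for share in verif_shares:
        acc = (acc + weight * share) % FIELD
        weight = (weight * r) % FIELD
    return acc

# Toy run: for honest clients, the two servers' verification shares sum
# to 0 mod FIELD per client, so the folded tags also sum to 0.
r = secrets.randbelow(FIELD)
shares_a = [secrets.randbelow(FIELD) for _ in range(100_000)]
shares_b = [(FIELD - s) % FIELD for s in shares_a]  # complementary shares
tag_a = server_tag(shares_a, r)
tag_b = server_tag(shares_b, r)
assert (tag_a + tag_b) % FIELD == 0  # one exchanged value checks the batch
```

A single malicious client perturbs its shares' sum away from zero, and with high probability over the choice of r the folded tags no longer cancel, so the cheating is detected with the same single exchange.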
A Deep Reinforcement Learning Approach for Production Scheduling in Computer Server Industry
The computer server industry is characterized by extensive test processes that ensure the high quality and reliability of servers. Its production systems use a Configure-To-Order (CTO) strategy, also known as fabrication/fulfillment, which balances demand and supply by synchronizing the flow of materials, equipment, and labor throughout the production process. In the fabrication stage, components or sub-assemblies are produced, tested, and assembled based on a projected production plan; they are then kept in stock until an actual order is received from a customer. In the fulfillment stage, final products are assembled according to actual customer orders. Assigning products to test cells during the fulfillment stage can be challenging due to high quality requirements and limited resources. Current practices tend to assign products to test cells based on a single criterion, such as on-time shipment or maximum test cell occupancy, which can result in higher energy consumption or delayed orders. This paper introduces a Deep Reinforcement Learning (DRL) approach that assigns servers to test cells using a multi-objective reward function combining multiple criteria. A proposed simulation model serves as the environment with which the DRL agent interacts, learning a policy that produces a test schedule for the products. The approach is evaluated with a case study from a high-end server manufacturing environment, and a sensitivity analysis examines how different values of the system's variables affect its performance.
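One plausible shape for such a multi-objective reward (the terms, weights, and the AssignmentOutcome fields below are illustrative assumptions, not the paper's actual formulation) is a weighted combination that penalizes late shipment and energy use while rewarding test-cell utilization:

```python
from dataclasses import dataclass

@dataclass
class AssignmentOutcome:
    """Hypothetical per-step outcome of assigning a server to a test cell."""
    tardiness_hours: float   # how late the order would ship; 0 if on time
    energy_kwh: float        # energy the chosen test cell would consume
    cell_utilization: float  # fraction of the cell's capacity in use, 0..1

def reward(o: AssignmentOutcome,
           w_tardy: float = 1.0,
           w_energy: float = 0.5,
           w_util: float = 0.2) -> float:
    """Multi-objective reward: penalize tardiness and energy use,
    reward keeping test cells busy. Weights are illustrative only."""
    return (-w_tardy * o.tardiness_hours
            - w_energy * o.energy_kwh
            + w_util * o.cell_utilization)

# An on-time assignment to an efficient, busy cell scores higher:
print(reward(AssignmentOutcome(0.0, 12.0, 0.9)))  # -5.82
print(reward(AssignmentOutcome(8.0, 15.0, 0.4)))  # -15.42
```

The DRL agent would receive this scalar after each assignment, letting a single policy trade off on-time shipment against energy consumption rather than optimizing one criterion in isolation.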
- Award ID(s): 2038325
- PAR ID: 10553460
- Publisher / Repository: International Manufacturing Science and Engineering Conference
- Date Published:
- Volume: Vol. 88117
- ISBN: 978-0-7918-8811-7
- Page Range / eLocation ID: p. V002T07A011
- Format(s): Medium: X
- Location: Knoxville, Tennessee, USA
- Sponsoring Org: National Science Foundation
More Like this
We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks (GANs). Our Stock-GAN model employs a conditional Wasserstein GAN to capture the history dependence of orders. The generator design includes specially crafted components, including ones that approximate the market's auction mechanism, and augments the order history with order-book constructions to improve the generation task. We perform an ablation study to verify the usefulness of aspects of our network structure, provide a mathematical characterization of the distribution learned by the generator, and propose statistics to measure the quality of generated orders. We test our approach with synthetic and actual market data, compare it with several baseline generative models, and find the generated data to be close to real data.
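As a minimal sketch of the conditional Wasserstein GAN idea behind such a model (dimensions, architectures, and hyperparameters here are illustrative assumptions; the actual Stock-GAN generator also includes auction-mechanism and order-book components not modeled), here is one critic update in PyTorch:

```python
import torch
import torch.nn as nn

ORDER_DIM, HIST_DIM, NOISE_DIM = 4, 16, 8  # hypothetical sizes

# Critic scores an order given a summary of recent order history;
# generator maps noise plus the same history summary to an order.
critic = nn.Sequential(nn.Linear(ORDER_DIM + HIST_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
generator = nn.Sequential(nn.Linear(NOISE_DIM + HIST_DIM, 64), nn.ReLU(),
                          nn.Linear(64, ORDER_DIM))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def critic_step(real_orders: torch.Tensor, history: torch.Tensor) -> float:
    noise = torch.randn(real_orders.size(0), NOISE_DIM)
    fake_orders = generator(torch.cat([noise, history], dim=1)).detach()
    # Wasserstein objective: widen the critic's gap between real and fake
    loss = -(critic(torch.cat([real_orders, history], dim=1)).mean()
             - critic(torch.cat([fake_orders, history], dim=1)).mean())
    opt_c.zero_grad(); loss.backward(); opt_c.step()
    for p in critic.parameters():       # weight clipping approximates the
        p.data.clamp_(-0.01, 0.01)      # Lipschitz constraint of WGAN
    return loss.item()

critic_step(torch.randn(32, ORDER_DIM), torch.randn(32, HIST_DIM))
```

Conditioning both networks on the history summary is what lets generated orders depend on the evolving market state rather than being drawn independently.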
Abstract Coke drums are critical units in the delayed coking process, which produces lightweight oil products from heavy residual oil. The fulfillment of a coke drum's design lifetime is often obstructed by low-cycle fatigue damage from cyclic thermal and mechanical loading. Considering the tremendous cost of drum replacement and the production loss due to shutdown, extending coke drum lifetime is of great economic significance in the oil and gas industry. A research project on coke drum fabrication and repair was initiated in the Manufacturing & Materials Joining Innovation Center (MA2JIC) at The Ohio State University in 2016. The project includes two phases of work: the first phase (2016-2019) focused on the external weld repair of coke drum materials, while the ongoing second phase (2019-2023) addresses coke drum fabrication and repair. A novel low-cycle fatigue testing approach was developed using a Gleeble thermo-mechanical simulator and applied to evaluate the performance of coke drum base materials and welded joints under cyclic deformation. The project's goal is to improve the fundamental understanding of material and joint performance to enable the optimization of coke drum design, fabrication, and repair. This technical paper introduces the key methodologies and achievements of the project and proposes future work for the next step.
Abstract We propose easy-to-implement heuristics for time-constrained applications of a problem referred to in the literature as the facility location problem with immobile servers, stochastic demand, and congestion; the service system design problem; or the immobile server problem (ISP). The problem is typically posed as one of allocating capacity to a set of M/M/1 queues to which customers with stochastic demand are assigned, with the objective of minimizing a cost function composed of a fixed capacity-acquisition cost, a variable customer-assignment cost, and an expected-waiting-time cost. The expected-waiting-time cost introduces a nonlinear term into the objective function of the standard binary programming formulation of the problem, so the solution approaches proposed in the literature are either sophisticated linearization or relaxation schemes, or metaheuristics. In this study, we demonstrate that an ensemble of straightforward, greedy heuristics can rapidly find high-quality solutions. In addition to filling a gap in the literature on ISP heuristics, new stopping criteria for an existing cutting-plane algorithm are proposed and tested, and a new mixed-integer linear model requiring no iterative algorithm is developed. In many cases, our heuristic approach finds solutions of the same or better quality than those found by exact methods implemented with expensive, state-of-the-art mathematical programming software, in particular a commercial mixed-integer nonlinear programming solver, given a five-minute time limit.
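To make the greedy idea concrete, here is a small sketch (our illustration under simplifying assumptions, not the paper's actual ensemble of heuristics): each facility is an M/M/1 queue, so with arrival rate lambda and service rate mu the expected time in the system is 1/(mu - lambda), and each customer is assigned to the facility with the smallest marginal increase in total cost.

```python
def system_time(lam: float, mu: float) -> float:
    """Expected time in an M/M/1 system, 1/(mu - lam); infinite if
    the queue would be unstable (lam >= mu)."""
    return 1.0 / (mu - lam) if lam < mu else float("inf")

def greedy_assign(demands, mus, assign_cost, fixed_cost, wait_weight=1.0):
    """Assign each customer to the facility with the smallest marginal
    increase in total cost: fixed capacity-acquisition cost (paid once
    when a facility opens), assignment cost, and expected congestion
    cost (arrival rate times time in system, i.e., number in system)."""
    load = [0.0] * len(mus)
    opened = [False] * len(mus)
    assignment = []
    for i, d in enumerate(demands):          # customers in given order
        best_j, best_delta = None, float("inf")
        for j, mu in enumerate(mus):
            old = load[j] * system_time(load[j], mu) if opened[j] else 0.0
            new = (load[j] + d) * system_time(load[j] + d, mu)
            delta = (assign_cost[i][j]
                     + wait_weight * (new - old)
                     + (0.0 if opened[j] else fixed_cost[j]))
            if delta < best_delta:
                best_j, best_delta = j, delta
        load[best_j] += d
        opened[best_j] = True
        assignment.append(best_j)
    return assignment

# Three customers, two candidate facilities:
print(greedy_assign(demands=[2.0, 1.5, 3.0],
                    mus=[5.0, 8.0],
                    assign_cost=[[1, 2], [2, 1], [1, 3]],
                    fixed_cost=[10.0, 14.0]))  # -> [0, 0, 1]
```

Running several such greedy passes with different customer orderings and weightings, then keeping the best solution found, is one way an ensemble of cheap heuristics can rival exact methods under a tight time limit.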
Recent research has highlighted the effectiveness of advanced building controls in reducing the energy consumption of heating, ventilation, and air-conditioning (HVAC) systems. Among advanced building control strategies, deep reinforcement learning (DRL) control shows the potential to achieve energy savings for HVAC systems and has emerged as a promising strategy. However, training DRL requires an interactive environment for the agent, which is challenging to provide with real buildings due to time and response-speed constraints. To address this challenge, a simulation environment is needed to serve as the training environment, even though the DRL algorithm does not necessarily need a model. Error between the model and the real building is inevitable in this process and may influence the efficiency of the DRL controller. To investigate the impact of model error, a virtual testbed was established. A high-fidelity Modelica-based model was developed to serve as the virtual building. Three reduced-order models (ROMs), namely 3R2C, Light Gradient Boosting Machine (LightGBM), and artificial neural network (ANN) models, were trained with historical data generated from the virtual building and embedded in the DRL training environments. The sensitivity of the ROMs and the Modelica model to random and periodic actions was tested and compared. Deploying a policy trained in a ROM-based environment (which stands in for a surrogate model in practice) into the Modelica-based virtual building testing environment (which stands in for the real building) is a practical approach to implementing DRL control. The performance of this practical DRL controller is compared with rule-based control (RBC) and with an ideal DRL controller that was both trained and deployed in the virtual building environment. In the final episode with the best rewards of the case study, the 3R2C-, LightGBM-, and ANN-based DRL controllers outperform RBC by 7.4%, 14.4%, and 11.4%, respectively, in terms of the reward, which comprises the weighted sum of energy cost, temperature violations, and the slew rate of the control signal, but they fall short of the ideal Modelica-based DRL controller, which outperforms RBC by 29.5%. The DRL controllers based on data-driven models are highly unstable, with higher maximum rewards but much lower average rewards, which might be caused by significant prediction defects in certain action regions of the data-driven models.
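As a rough illustration of what a 3R2C reduced-order model looks like (the parameter values and the explicit-Euler discretization below are assumptions for this sketch, not taken from the paper), consider a two-state thermal network with three resistances and two capacitances:

```python
from dataclasses import dataclass

@dataclass
class R3C2Params:
    """Illustrative 3R2C model: three resistances (outdoor-wall,
    wall-indoor, direct outdoor-indoor) and two capacitances (wall
    mass, indoor air). All values are assumed for this sketch."""
    r_ow: float = 0.05   # K/W, outdoor to wall mass
    r_wi: float = 0.02   # K/W, wall mass to indoor air
    r_oi: float = 0.20   # K/W, direct outdoor to indoor (e.g., windows)
    c_w: float = 5e6     # J/K, wall thermal capacitance
    c_i: float = 8e5     # J/K, indoor air thermal capacitance

def step(p: R3C2Params, t_wall: float, t_in: float,
         t_out: float, q_hvac: float, dt: float = 60.0):
    """One explicit-Euler step of the 3R2C thermal network; q_hvac is
    the heat input to the zone in watts."""
    d_wall = ((t_out - t_wall) / p.r_ow + (t_in - t_wall) / p.r_wi) / p.c_w
    d_in = ((t_wall - t_in) / p.r_wi + (t_out - t_in) / p.r_oi
            + q_hvac) / p.c_i
    return t_wall + dt * d_wall, t_in + dt * d_in

# Example: cold outdoors, heater delivering 2 kW to the zone
t_w, t_i = 18.0, 21.0
for _ in range(60):  # simulate one hour in one-minute steps
    t_w, t_i = step(R3C2Params(), t_w, t_i, t_out=0.0, q_hvac=2000.0)
print(round(t_w, 2), round(t_i, 2))
```

Fitting the resistances and capacitances to historical data from the high-fidelity model yields a cheap surrogate that the DRL agent can query thousands of times per training episode.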