- Award ID(s): 1635014
- PAR ID: 10086131
- Date Published:
- Journal Name: 2018 IEEE Conference on Decision and Control (CDC)
- Page Range / eLocation ID: 6686 to 6691
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Given a linear dynamical system, we consider the problem of selecting (at design-time) an optimal set of sensors (subject to certain budget constraints) to minimize the trace of the steady state error covariance matrix of the Kalman filter. Previous work has shown that this problem is NP-hard for certain classes of systems and sensor costs; in this paper, we show that the problem remains NP-hard even for the special case where the system is stable and all sensor costs are identical. Furthermore, we show the stronger result that there is no constant-factor (polynomial-time) approximation algorithm for this problem. This contrasts with other classes of sensor selection problems studied in the literature, which typically pursue constant-factor approximations by leveraging greedy algorithms and submodularity of the cost function. Here, we provide a specific example showing that greedy algorithms can perform arbitrarily poorly for the problem of design-time sensor selection for Kalman filtering.
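For a concrete picture of the greedy heuristic this abstract contrasts against, here is a minimal sketch (not the paper's construction): it repeatedly adds the candidate sensor whose inclusion most reduces the trace of the steady-state error covariance, computed from the filtering Riccati equation. The system matrices, noise covariances, and candidate sensors below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

def steady_state_trace(A, Q, C_rows, r_noise):
    """Trace of the steady-state (prediction) error covariance for the selected sensor rows."""
    if not C_rows:
        # No sensors selected: for stable A the covariance solves P = A P A^T + Q.
        return np.trace(solve_discrete_lyapunov(A, Q))
    C = np.vstack(C_rows)
    R = r_noise * np.eye(C.shape[0])
    # Filtering Riccati equation, obtained via duality with the control DARE.
    P = solve_discrete_are(A.T, C.T, Q, R)
    return np.trace(P)

def greedy_select(A, Q, C_all, r_noise, budget):
    """Greedily pick `budget` rows of C_all (identical sensor costs)."""
    chosen, remaining = [], list(range(C_all.shape[0]))
    for _ in range(budget):
        best = min(remaining, key=lambda i: steady_state_trace(
            A, Q, [C_all[j] for j in chosen] + [C_all[i]], r_noise))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy, assumed instance: stable 3-state system, 3 candidate sensors, pick 2.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.7]])
Q = 0.1 * np.eye(3)
C_all = np.eye(3)                     # candidate sensor i measures state i
print(greedy_select(A, Q, C_all, r_noise=0.5, budget=2))
```

Since the cost here is not submodular in general, a loop like this carries no constant-factor guarantee, which is consistent with the arbitrarily-poor greedy example the abstract mentions.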
-
Network cache allocation and management are important aspects of an Information-Centric Network (ICN) design, such as one based on Named Data Networking (NDN). We address the problem of optimal cache size allocation and content placement in an ICN in order to maximize the caching gain resulting from routing cost savings. While prior art assumes a given cache size at each network node and focuses on content placement, we study the problem when a global, network-wide cache storage budget is given and we solve for the optimal per-node cache allocation. This problem arises in cloud-based network settings where each network node is virtualized and housed within a cloud data center node with associated dynamic storage resources acquired from the cloud node as needed. As the offline centralized version of the optimal cache allocation problem is NP-hard, we develop a distributed adaptive algorithm that provides an approximate solution within a constant factor from the optimal. Performance evaluation of the algorithm is carried out through extensive simulations over multiple network topologies, demonstrating that our proposal significantly outperforms existing cache allocation algorithms.
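As an illustration of the global-budget setting (and not the paper's distributed adaptive algorithm), the sketch below allocates a network-wide budget of unit cache slots across nodes by greedy marginal gain, assuming each node reports a diminishing sequence of routing-cost savings per additional cached item; all numbers are made up.

```python
import heapq

def allocate_cache(marginal_gains, budget):
    """marginal_gains[v] lists the routing-cost saving of the 1st, 2nd, ... cached item at node v."""
    alloc = {v: 0 for v in marginal_gains}
    heap = [(-gains[0], v) for v, gains in marginal_gains.items() if gains]
    heapq.heapify(heap)
    for _ in range(budget):
        if not heap:
            break
        _, v = heapq.heappop(heap)           # node with the largest next marginal gain
        alloc[v] += 1
        nxt = alloc[v]
        if nxt < len(marginal_gains[v]):
            heapq.heappush(heap, (-marginal_gains[v][nxt], v))
    return alloc

# Toy topology with assumed, diminishing per-slot savings at each node.
gains = {"a": [10, 6, 2], "b": [8, 5, 1], "c": [7, 3]}
print(allocate_cache(gains, budget=4))       # e.g. {'a': 2, 'b': 1, 'c': 1}
```

With diminishing per-node savings this greedy split of unit slots is a natural centralized baseline; the paper's contribution is a distributed, adaptive algorithm with a constant-factor guarantee that also handles content placement.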
-
Lengauer, Thomas (Ed.)
Summary: The target identification by enzymes (TIE) problem aims to identify the set of enzymes in a given metabolic network such that their inhibition eliminates a given set of target compounds associated with a disease while incurring minimum damage to the rest of the compounds. This is an NP-hard problem, and thus optimal solutions using classical computers fail to scale to large metabolic networks. In this article, we develop the first quantum optimization solution, called QuTIE (quantum optimization for target identification by enzymes), to this NP-hard problem. We do so by developing an equivalent formulation of the TIE problem in quadratic unconstrained binary optimization (QUBO) form. We then map it to a logical graph and embed the logical graph onto a quantum hardware graph. Our experimental results on 27 metabolic networks from Escherichia coli, Homo sapiens, and Mus musculus show that QuTIE yields solutions that are optimal or almost optimal. Our experiments also demonstrate that QuTIE can successfully identify enzyme targets already verified in wet-lab experiments for 14 major disease classes.
Availability and implementation: Code and sample data are available at: https://github.com/ngominhhoang/Quantum-Target-Identification-by-Enzymes.
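To make the QUBO idea concrete, here is a toy sketch (not QuTIE's actual formulation): binary variables mark enzymes to inhibit, linear terms charge an assumed damage cost per inhibited enzyme, and a quadratic penalty forces at least one producing enzyme of each target compound to be inhibited (each target compound is assumed to have at most two producing enzymes so the penalty stays quadratic). The instance is solved by brute force here; QuTIE instead embeds the QUBO on quantum hardware.

```python
from itertools import product

enzymes = ["e1", "e2", "e3"]
damage = {"e1": 2.0, "e2": 1.0, "e3": 3.0}      # assumed damage to non-target compounds
targets = [("e1", "e2"), ("e3",)]               # producing enzymes of each target compound
PENALTY = 10.0                                  # large constant enforcing the constraints

def qubo_energy(x):
    """x maps enzyme -> 0/1 (1 = inhibit). Lower energy = better solution."""
    energy = sum(damage[e] * x[e] for e in enzymes)       # linear damage terms
    for prod_enzymes in targets:
        # Penalty is nonzero only if no producing enzyme of this target is inhibited.
        miss = 1.0
        for e in prod_enzymes:
            miss *= (1 - x[e])
        energy += PENALTY * miss
    return energy

best = min((dict(zip(enzymes, bits)) for bits in product([0, 1], repeat=len(enzymes))),
           key=qubo_energy)
print(best, qubo_energy(best))   # expect e3 plus the cheaper of e1/e2 to be inhibited
```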
-
HTTP adaptive streaming copes with fluctuating network conditions during the streaming process by dynamically adjusting the volume (bitrate) of the future chunks to be downloaded. The bitrate selection in this adjustment inevitably involves predicting the future throughput of a video session, a task for which various heuristic solutions have been explored. The goal of the present work is to characterize the theoretical upper bound of the QoE that any ABR algorithm can possibly reach, thereby providing an essential step toward benchmarking the performance of ABR algorithms. In our setting, the QoE is defined as a linear combination of the average perceptual quality and the buffering ratio. The optimization problem is proven to be NP-hard when the perceptual quality is defined by chunk size, and conditions are given under which the problem becomes polynomially solvable. A pseudo-polynomial-time dynamic programming algorithm, enriched by a global lower bound, is presented. When minimizing buffering is given priority over higher perceptual quality, the problem is shown to be NP-hard as well, and the above algorithm is simplified and enhanced by a sequence of lower bounds on the completion time of chunk downloading, which, according to our experiments, brings a 36.0% improvement in computation time. To handle large amounts of data more efficiently, a polynomial-time algorithm is also introduced to approximate the optimal values when minimum buffering is prioritized. Besides its performance guarantee, this algorithm is shown to achieve 99.938% of the optimal results while taking only 0.024% of the computation time of the exact dynamic programming algorithm.
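The sketch below illustrates the dynamic-programming idea in a heavily simplified form: throughput is assumed known, the buffer level is discretized, and QoE is accumulated as per-chunk quality minus a rebuffering penalty. It is not the paper's pseudo-polynomial algorithm or its lower-bound machinery; all constants are assumptions.

```python
def plan_bitrates(sizes, qualities, throughput, chunk_dur=4.0, buf_max=20.0, step=0.5, lam=4.0):
    """sizes[i][j] / qualities[i][j]: size in bits and perceptual quality of chunk i at level j."""
    levels = int(buf_max / step) + 1
    NEG = float("-inf")
    best = [NEG] * levels            # best[b]: max QoE so far with discretized buffer level b
    best[0] = 0.0
    for i in range(len(sizes)):
        new = [NEG] * levels
        for b in range(levels):
            if best[b] == NEG:
                continue
            buf = b * step
            for j in range(len(sizes[i])):
                t = sizes[i][j] / throughput             # download time of this chunk
                rebuf = max(0.0, t - buf)                # stall time while the buffer runs dry
                nb = min(buf_max, max(buf - t, 0.0) + chunk_dur)
                k = int(round(nb / step))
                val = best[b] + qualities[i][j] - lam * rebuf
                if val > new[k]:
                    new[k] = val
        best = new
    return max(best)                 # best achievable QoE; backtracking would recover the bitrates

# Toy, assumed instance: 3 chunks, two bitrate levels, constant 1 Mbps predicted throughput.
sizes = [[2e6, 4e6]] * 3
qualities = [[1.0, 2.0]] * 3
print(plan_bitrates(sizes, qualities, throughput=1e6))
```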
-
We study the problem of covering barrier points by mobile sensors. Each sensor is represented by a point in the plane with the same covering range, so that any point within that range of a sensor is covered by it. Given a set of points in the plane (called "barrier points") and a set of points representing the sensors, the problem is to move the sensors so that each barrier point is covered by at least one sensor and the maximum movement over all sensors is minimized. The problem is NP-hard. In this paper, we consider two line-constrained variations of the problem and present efficient algorithms that improve on previous work. In the first problem, all sensors are given on a line and are required to move only along that line, while the barrier points can be anywhere in the plane. We propose an [Formula: see text]-time algorithm for this problem. We also consider the weighted case where each sensor has a weight; we give an [Formula: see text]-time algorithm for this case. In the second problem, all barrier points are on the line while the sensors are anywhere in the plane but are required to move onto the line to cover all barrier points. We also solve the weighted case in [Formula: see text] time.
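For intuition on the first line-constrained variant, here is a small brute-force sketch (nothing like the paper's efficient algorithms): each barrier point is mapped to the interval of positions on the line from which a sensor of the given range covers it, and the minimum max-movement is found by binary search over the movement budget with an exhaustive feasibility check over interval-to-sensor assignments; it only works for tiny instances.

```python
from itertools import product
from math import sqrt, inf

def barrier_intervals(barriers, r):
    """x-axis interval from which a range-r sensor on the line covers each barrier point."""
    out = []
    for px, py in barriers:
        if abs(py) > r:
            return None                          # this barrier can never be covered from the line
        w = sqrt(r * r - py * py)
        out.append((px - w, px + w))
    return out

def feasible(sensors, intervals, d):
    """Try every interval -> sensor assignment; a sensor must reach a common point of its intervals."""
    for assign in product(range(len(sensors)), repeat=len(intervals)):
        ok = True
        for i, s in enumerate(sensors):
            lo, hi = s - d, s + d                # positions sensor i can reach with budget d
            for j, (a, b) in enumerate(intervals):
                if assign[j] == i:
                    lo, hi = max(lo, a), min(hi, b)
            if lo > hi:
                ok = False
                break
        if ok:
            return True
    return False

def min_max_movement(sensors, barriers, r, eps=1e-6):
    intervals = barrier_intervals(barriers, r)
    if intervals is None:
        return inf
    hi = max(abs(s - x) for s in sensors for iv in intervals for x in iv)
    if not feasible(sensors, intervals, hi):     # if infeasible at this bound, no coverage exists
        return inf
    lo = 0.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if feasible(sensors, intervals, mid) else (mid, hi)
    return hi

# Toy, assumed instance: two sensors on the x-axis, two barrier points, range 2.
print(min_max_movement(sensors=[0.0, 10.0], barriers=[(3.0, 1.0), (8.0, 0.0)], r=2.0))
```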