

Title: Chained structure of directed graphs with applications to social and transportation networks
Abstract

The need to determine the structure of a graph arises in many applications. This paper studies directed graphs and defines the notions of $$\ell$$-chained and $$\{\ell,k\}$$-chained directed graphs. These notions reveal structural properties of directed graphs that shed light on how the nodes of the graph are connected. Applications include city planning, information transmission, and disease propagation. We also discuss the notion of in-center and out-center vertices of a directed graph, which are vertices at the center of the graph. Computed examples provide illustrations, among which is the investigation of a bus network for a city.
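
The abstract does not spell out how in-center and out-center vertices are computed, but a common convention takes the out-center to be a vertex minimizing its out-eccentricity (the maximum shortest-path distance to any other vertex) and the in-center symmetrically over incoming paths. The sketch below applies that reading to a small strongly connected digraph; the function names and the toy graph are illustrative, not taken from the paper.

```python
# A minimal sketch of one plausible reading of in-/out-center vertices:
# the out-center minimizes the maximum shortest-path distance *to* all
# other vertices, the in-center the maximum distance *from* all others.
# The exact definitions in the paper may differ.
from collections import deque

def eccentricities(adj):
    """Max BFS distance from each vertex in a strongly connected digraph."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc

def centers(adj):
    rev = {u: [] for u in adj}          # reversed graph for in-distances
    for u in adj:
        for v in adj[u]:
            rev[v].append(u)
    out_ecc, in_ecc = eccentricities(adj), eccentricities(rev)
    return min(out_ecc, key=out_ecc.get), min(in_ecc, key=in_ecc.get)

# Tiny directed cycle with a chord: 0->1->2->3->0 and 0->2
adj = {0: [1, 2], 1: [2], 2: [3], 3: [0]}
print(centers(adj))  # (0, 2): vertex 0 reaches every vertex within 2 steps
```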

 
NSF-PAR ID: 10370892
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Applied Network Science
Volume: 7
Issue: 1
ISSN: 2364-8228
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract

    Approximate integer programming is the following: For a given convex body $$K \subseteq {\mathbb{R}}^n$$, either determine whether $$K \cap {\mathbb{Z}}^n$$ is empty, or find an integer point in the convex body $$2\cdot (K - c) + c$$, which is $$K$$ scaled by 2 from its center of gravity $$c$$. Approximate integer programming can be solved in time $$2^{O(n)}$$, while the fastest known methods for exact integer programming run in time $$2^{O(n)} \cdot n^n$$. So far, no efficient methods for integer programming are known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $$x^* \in (K \cap {\mathbb{Z}}^n)$$ can be found in time $$2^{O(n)}$$, provided that the remainders of each component $$x_i^* \bmod \ell$$ of $$x^*$$ are given, for some arbitrarily fixed $$\ell \ge 5(n+1)$$. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $$2^{O(n)} n^n$$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation standard form $$Ax = b,\ 0 \le x \le u,\ x \in {\mathbb{Z}}^n$$. Such a problem can be reduced to the solution of $$\prod_i O(\log u_i + 1)$$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $$0 \le x_i \le p(n)$$ can be solved in time $$(\log n)^{O(n)}$$. For these problems, the best running time so far was $$n^n \cdot 2^{O(n)}$$.
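
    To make the first result concrete: once the remainders $$x_i^* \bmod \ell$$ are fixed to a vector $$r$$, every candidate solution lies on the coarser lattice $$r + \ell\,{\mathbb{Z}}^n$$, which is what the cutting-plane procedure exploits. The toy sketch below replaces that procedure with brute force over a bounded box and assumes a membership oracle `in_body`; it illustrates the lattice restriction only, not the paper's $$2^{O(n)}$$ algorithm.

    ```python
    # Toy illustration (not the paper's algorithm): once the remainders
    # r_i = x_i mod ell of an integer solution are fixed, the search space
    # shrinks to the coarser lattice r + ell * Z^n. Here we brute-force
    # that lattice inside a box instead of running cutting planes.
    import itertools

    def find_point_with_remainders(in_body, r, ell, box):
        """Search x = r + ell*y with each |x_i| <= box for a point in the body."""
        n = len(r)
        span = range(-(box // ell) - 1, box // ell + 2)
        for y in itertools.product(span, repeat=n):
            x = tuple(r[i] + ell * y[i] for i in range(n))
            if all(abs(c) <= box for c in x) and in_body(x):
                return x
        return None

    # Body: a ball of radius 7 centred at (5, 5); remainders fixed to (1, 0) mod 3.
    in_ball = lambda x: (x[0] - 5) ** 2 + (x[1] - 5) ** 2 <= 49
    print(find_point_with_remainders(in_ball, (1, 0), 3, box=12))  # e.g. (1, 0)
    ```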

     
  2. Abstract

    Sparsity finds applications in diverse areas such as statistics, machine learning, and signal processing. Computations over sparse structures are less complex than their dense counterparts and need less storage. This paper proposes a heuristic method for retrieving sparse approximate solutions of optimization problems via minimizing the $$\ell_p$$ quasi-norm, where $$0<p<1$$. An iterative two-block algorithm for minimizing the $$\ell_p$$ quasi-norm subject to convex constraints is proposed. The proposed algorithm requires solving for the roots of a scalar polynomial, as opposed to applying a soft thresholding operator in the case of $$\ell_1$$ norm minimization. The algorithm's merit relies on its ability to solve $$\ell_p$$ quasi-norm minimization subject to any convex constraint set. For the specific case of constraints defined by differentiable functions with Lipschitz continuous gradient, a second, faster algorithm is proposed. Using a proximal gradient step, we mitigate the convex projection step and hence enhance the algorithm's speed while proving its convergence. We present various applications where the proposed algorithm excels, namely sparse signal reconstruction, system identification, and matrix completion. The results demonstrate the significant gains obtained by the proposed algorithm compared to other $$\ell_p$$ quasi-norm based methods presented in previous literature.
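
    As a rough illustration of the second (proximal gradient) variant, the sketch below solves an unconstrained $$\ell_p$$-regularized least-squares problem; the scalar prox subproblem is solved by bisection on its stationarity equation, standing in for the root-finding step the paper describes. The parameter choices and the bisection routine are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def prox_lp(z, lam, p, iters=60):
        """argmin_t 0.5*(t - z)**2 + lam*|t|**p for 0 < p < 1 (a sketch).
        Positive stationary points solve g(t) = t - |z| + lam*p*t**(p-1) = 0;
        g is U-shaped on (0, inf), so the larger root (the local minimum)
        is bracketed and bisected, then compared against the candidate t = 0."""
        a = abs(z)
        g = lambda t: t - a + lam * p * t ** (p - 1.0)
        t_dip = (lam * p * (1.0 - p)) ** (1.0 / (2.0 - p))  # argmin of g
        if a == 0.0 or g(t_dip) > 0.0:   # g never crosses zero: prox is 0
            return 0.0
        lo, hi = t_dip, a                # g(lo) <= 0 < g(hi)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
        t = 0.5 * (lo + hi)
        f = lambda u: 0.5 * (u - a) ** 2 + lam * u ** p
        return float(np.sign(z)) * t if f(t) < f(0.0) else 0.0

    def lp_least_squares(A, b, lam=0.05, p=0.5, steps=1000):
        """Proximal-gradient sketch for 0.5*||Ax - b||^2 + lam*||x||_p^p."""
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            z = x - A.T @ (A @ x - b) / L
            x = np.array([prox_lp(zi, lam / L, p) for zi in z])
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 40))
    x_true = np.zeros(40); x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
    x_hat = lp_least_squares(A, A @ x_true)
    print(np.nonzero(np.round(x_hat, 3))[0])  # ideally recovers {3, 17, 29}
    ```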

     
  3. Abstract

    The double differential cross sections of Drell–Yan lepton pair ($$\ell^+\ell^-$$, dielectron or dimuon) production are measured as functions of the invariant mass $$m_{\ell\ell}$$, transverse momentum $$p_{\textrm{T}}(\ell\ell)$$, and $$\varphi^{*}_{\eta}$$. The $$\varphi^{*}_{\eta}$$ observable, derived from angular measurements of the leptons and highly correlated with $$p_{\textrm{T}}(\ell\ell)$$, is used to probe the low-$$p_{\textrm{T}}(\ell\ell)$$ region in a complementary way. Dilepton masses up to 1 TeV are investigated. Additionally, a measurement is performed requiring at least one jet in the final state. To benefit from partial cancellation of the systematic uncertainty, the ratios of the differential cross sections for various $$m_{\ell\ell}$$ ranges to those in the Z mass peak interval are presented. The collected data correspond to an integrated luminosity of 36.3 fb$$^{-1}$$ of proton–proton collisions recorded with the CMS detector at the LHC at a centre-of-mass energy of 13 TeV. Measurements are compared with predictions based on perturbative quantum chromodynamics, including soft-gluon resummation.
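
    For reference, the $$\varphi^{*}_{\eta}$$ observable is conventionally defined in the literature (Banfi et al.) as $$\varphi^{*}_{\eta} = \tan(\varphi_{\text{acop}}/2)\,\sin\theta^{*}_{\eta}$$, with $$\varphi_{\text{acop}} = \pi - \Delta\varphi$$ and $$\cos\theta^{*}_{\eta} = \tanh((\eta^{-}-\eta^{+})/2)$$; it depends only on the lepton directions, which are measured more precisely than their momenta. A minimal sketch, assuming that standard definition:

    ```python
    import math

    def phi_star_eta(phi_minus, eta_minus, phi_plus, eta_plus):
        """phi*_eta from lepton directions only (standard definition assumed)."""
        dphi = abs(phi_minus - phi_plus)
        if dphi > math.pi:                  # wrap azimuthal difference into [0, pi]
            dphi = 2.0 * math.pi - dphi
        phi_acop = math.pi - dphi           # acoplanarity angle
        sin_theta_star = 1.0 / math.cosh(0.5 * (eta_minus - eta_plus))
        return math.tan(0.5 * phi_acop) * sin_theta_star

    # Nearly back-to-back leptons give a small phi*_eta, mimicking low pT(ll)
    print(phi_star_eta(phi_minus=0.02, eta_minus=0.3, phi_plus=math.pi, eta_plus=-0.1))
    ```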

     
  4. Abstract

    This paper reports a search for Higgs boson pair (hh) production in association with a vector boson ($$W$$ or $$Z$$) using 139 fb$$^{-1}$$ of proton–proton collision data at $$\sqrt{s}=13\,\text{TeV}$$ recorded with the ATLAS detector at the Large Hadron Collider. The search is performed in final states in which the vector boson decays leptonically ($$W\rightarrow \ell\nu,\ Z\rightarrow \ell\ell,\nu\nu$$ with $$\ell = e, \mu$$) and the Higgs bosons each decay into a pair of b-quarks. It targets Vhh signals from both non-resonant hh production, present in the Standard Model (SM), and resonant hh production, as predicted in some SM extensions. A 95% confidence-level upper limit of 183 (87) times the SM cross-section is observed (expected) for non-resonant Vhh production when assuming the kinematics are as expected in the SM. Constraints are also placed on Higgs boson coupling modifiers. For the resonant search, upper limits on the production cross-sections are derived for two specific models: one is the production of a vector boson along with a neutral heavy scalar resonance H, in the mass range 260–1000 GeV, that decays into hh, and the other is the production of a heavier neutral pseudoscalar resonance A that decays into a Z boson and an H boson, where the A boson mass is 360–800 GeV and the H boson mass is 260–400 GeV. Constraints are also derived in the parameter space of two-Higgs-doublet models.

     
  5. Abstract

    We study the sparsity of the solutions to systems of linear Diophantine equations with and without non-negativity constraints. The sparsity of a solution vector is the number of its nonzero entries, which is referred to as the $$\ell_0$$-norm of the vector. Our main results are new improved bounds on the minimal $$\ell_0$$-norm of solutions to systems $$A\varvec{x}=\varvec{b}$$, where $$A\in \mathbb{Z}^{m\times n}$$, $$\varvec{b}\in \mathbb{Z}^m$$, and $$\varvec{x}$$ is either a general integer vector (lattice case) or a non-negative integer vector (semigroup case). In certain cases, we give polynomial time algorithms for computing solutions with $$\ell_0$$-norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over $$\mathbb{R}$$ to other subdomains such as $$\mathbb{Z}$$. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
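
    A brute-force toy makes the lattice/semigroup distinction concrete: search all bounded integer vectors for the smallest support solving $$A\varvec{x}=\varvec{b}$$, with and without non-negativity. The box bound and the instance below are invented for illustration; realistic instances need the lattice techniques the paper develops.

    ```python
    # Brute-force illustration of the quantity being bounded: the least number
    # of nonzero entries over integer solutions of A x = b inside a small box.
    # The box bound B is an assumption made so the enumeration terminates.
    import itertools
    import numpy as np

    def min_l0_solution(A, b, B=3, nonneg=False):
        """Smallest-support x with A x = b and entries in [-B, B] (or [0, B])."""
        n = A.shape[1]
        vals = range(0, B + 1) if nonneg else range(-B, B + 1)
        best = None
        for x in itertools.product(vals, repeat=n):
            if np.array_equal(A @ np.array(x), b):
                nz = sum(v != 0 for v in x)
                if best is None or nz < best[0]:
                    best = (nz, x)
        return best  # (l0 norm, witness) or None if no solution in the box

    A = np.array([[1, 2, 3], [0, 1, 1]])
    b = np.array([3, 1])
    print(min_l0_solution(A, b))               # lattice case
    print(min_l0_solution(A, b, nonneg=True))  # semigroup case
    ```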

     