
Search for: All records

Creators/Authors contains: "Subramanian, Vijay"

  1. ABSTRACT Given a reversible Markov chain $P_n$ on $n$ states, and another chain $\tilde{P}_n$ obtained by perturbing each row of $P_n$ by at most $\alpha_n$ in total variation, we study the total variation distance between the two stationary distributions, $\|\pi_n - \tilde{\pi}_n\|$. We show that for chains with cutoff, $\|\pi_n - \tilde{\pi}_n\|$ converges to 0, is asymptotically at most $1 - e^{-c}$ (with a sequence of perturbations for which convergence to this bound occurs), and converges to 1, respectively, if the product of $\alpha_n$ and the mixing time of $P_n$ converges to 0, $c \in (0, \infty)$, and $\infty$, respectively. This echoes recent results for specific random walks that exhibit cutoff, suggesting that cutoff is the key property underlying such results. Moreover, we show $\|\pi_n - \tilde{\pi}_n\|$ is maximized by restart perturbations, for which $\tilde{P}_n$ "restarts" at a random state with probability $\alpha_n$ at each step. Finally, we show that pre-cutoff is (almost) equivalent to a notion of "sensitivity to restart perturbations," suggesting that chains with sharper convergence to stationarity are inherently less robust.
    Free, publicly-accessible full text available May 1, 2026
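
A minimal numerical sketch of the restart perturbation described in the abstract above. The chain, restart state, and parameter values are illustrative choices, not taken from the paper: each row of a lazy random walk on an n-cycle is moved by at most alpha in total variation toward a point mass at state 0, and the total variation distance between the two stationary laws grows with alpha.

```python
# Illustrative sketch, not the paper's construction: a lazy, reversible
# random walk on an n-cycle, perturbed so that it "restarts" at state 0
# with probability alpha at each step. This moves each row by at most
# alpha in total variation.
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

n = 50
P = np.zeros((n, n))
for i in range(n):                 # lazy walk on a cycle of n states
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
pi = stationary(P)                 # uniform on the cycle

for alpha in (1e-4, 1e-3, 1e-2, 1e-1):
    P_tilde = (1 - alpha) * P
    P_tilde[:, 0] += alpha         # restart at state 0 w.p. alpha
    tv = 0.5 * np.abs(pi - stationary(P_tilde)).sum()
    print(f"alpha={alpha:.0e}  TV(pi, pi_tilde) = {tv:.3f}")
```

The mixing time of this walk is of order $n^2$, so sweeping alpha moves the product $\alpha_n \cdot t_{\mathrm{mix}}$ through the small, moderate, and large regimes of the trichotomy. Note the cycle walk does not itself exhibit cutoff, so this only illustrates the perturbation construction, not the theorem.
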
  2. Free, publicly-accessible full text available December 16, 2025
  3. We consider a long-term average profit-maximizing admission control problem in an M/M/1 queuing system with unknown service and arrival rates. With a fixed reward collected upon service completion and a cost per unit of time enforced on customers waiting in the queue, a dispatcher decides upon arrivals whether to admit the arriving customer or not based on the full history of observations of the queue length of the system. Naor [Naor P (1969) The regulation of queue size by levying tolls. Econometrica 37(1):15–24] shows that, if all the parameters of the model are known, then it is optimal to use a static threshold policy: admit if the queue length is less than a predetermined threshold and otherwise not. We propose a learning-based dispatching algorithm and characterize its regret with respect to optimal dispatch policies for the full-information model of Naor [Naor P (1969) The regulation of queue size by levying tolls. Econometrica 37(1):15–24]. We show that the algorithm achieves an $O(1)$ regret when all optimal thresholds with full information are nonzero and achieves an $O(\ln^{1+\epsilon}(N))$ regret for any specified $\epsilon > 0$ in the case that an optimal threshold with full information is 0 (i.e., an optimal policy is to reject all arrivals), where N is the number of arrivals.
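
For context, the full-information benchmark in the abstract above is straightforward to compute. The sketch below brute-forces Naor's optimal static threshold; it is not the paper's learning algorithm, and the parameter values are made up. Admitting only when the queue length is below L makes the controlled system an M/M/1/L queue with a truncated-geometric stationary law, from which the long-run average profit follows.

```python
# Sketch of the full-information benchmark (Naor 1969), not the paper's
# learning algorithm: brute-force the profit-maximizing static threshold.
import numpy as np

def avg_profit(L, lam, mu, R, c):
    """Long-run average profit when admitting iff queue length < L.

    The controlled system is an M/M/1/L queue; p[k] is the stationary
    probability of k customers in the system (truncated geometric).
    """
    rho = lam / mu
    p = rho ** np.arange(L + 1)
    p /= p.sum()
    throughput = lam * (1 - p[-1])             # admitted arrivals per unit time
    holding = c * np.dot(np.arange(L + 1), p)  # waiting-cost rate
    return R * throughput - holding

lam, mu, R, c = 0.8, 1.0, 5.0, 1.0             # assumed example parameters
L_star = max(range(50), key=lambda L: avg_profit(L, lam, mu, R, c))
print(f"optimal threshold L* = {L_star}, "
      f"profit = {avg_profit(L_star, lam, mu, R, c):.3f}")
```

Note that L = 0 correctly models "reject all arrivals" (zero throughput, zero holding cost); when the reward R is small relative to the holding cost, the maximizer is L* = 0, which is exactly the regime where the paper's O(1) guarantee no longer applies.
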
  4. Altafini, Claudio; Como, Giacomo; Hendrickx, Julien M.; Olshevsky, Alexander; Tahbaz-Salehi, Alireza (Eds.)
    We study a social learning model in which agents iteratively update their beliefs about the true state of the world using private signals and the beliefs of other agents in a non-Bayesian manner. Some agents are stubborn, meaning they attempt to convince others of an erroneous true state (modeling fake news). We show that while agents learn the true state on short timescales, they "forget" it and believe the erroneous state to be true on longer timescales. Using these results, we devise strategies for seeding stubborn agents so as to disrupt learning, which outperform intuitive heuristics and give novel insights regarding vulnerabilities in social learning. 
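
A stylized simulation of the learn-then-forget effect described in the abstract above. It uses plain DeGroot-style averaging rather than the paper's exact update rule, and the graph and parameters are invented: regular agents start near the truth (as if already learned from private signals), agree on it quickly, then are slowly pulled to the stubborn agents' erroneous belief.

```python
# Stylized sketch, not the paper's model: DeGroot averaging on a random
# directed graph with a few stubborn agents pinned to an erroneous belief.
import numpy as np

rng = np.random.default_rng(1)
n, n_stubborn = 100, 2
truth, wrong = 0.0, 1.0

A = rng.random((n, n)) < 0.05          # random neighborhoods
np.fill_diagonal(A, True)              # each agent also listens to itself
W = A / A.sum(axis=1, keepdims=True)   # row-stochastic trust weights

beliefs = truth + 0.1 * rng.random(n)  # regular agents start near the truth
stubborn = rng.choice(n, size=n_stubborn, replace=False)
beliefs[stubborn] = wrong

for t in range(1, 2001):
    beliefs = W @ beliefs              # average neighbors' beliefs
    beliefs[stubborn] = wrong          # stubborn agents never update
    if t in (1, 10, 100, 1000, 2000):
        print(f"t={t:4d}  mean belief = {beliefs.mean():.3f}")
```

Because the stubborn agents are the only ones whose beliefs are externally fixed, the unique fixed point of this dynamic is consensus on the erroneous belief; the quantity of interest, as in the paper, is the timescale on which the regular agents' near-correct consensus is forgotten.
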