Title: Strategic Classification under Unknown Personalized Manipulation
We consider strategic classification, where agents can strategically manipulate their feature vectors to a limited extent in order to be classified as positive. Unlike most prior work, our work treats manipulations as personalized, meaning that agents can have different levels of manipulation ability (e.g., varying radii for ball manipulations) that are unknown to the learner. We formalize the learning problem in an interaction model where the learner first deploys a classifier and the agent then manipulates the feature vector within their manipulation set to game the deployed classifier. We investigate various scenarios in terms of the information available to the learner during the interaction (observing the original feature vector before or after deployment, observing the manipulated feature vector, or seeing neither the original nor the manipulated feature vector), and provide online mistake bounds and PAC sample complexity results in these scenarios for ball manipulations.
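As a concrete illustration of the ball-manipulation best response described above (a minimal sketch under our own simplifying assumptions, not a construction from the paper): an agent with manipulation radius r facing a deployed linear classifier moves just past the decision boundary if it lies within distance r, and otherwise does not bother manipulating.

```python
import numpy as np

def agent_best_response(x, w, b, r):
    """Ball-manipulation best response to the linear classifier
    sign(w.x + b): if x is already classified positive, stay put;
    otherwise move distance at most r toward the boundary, crossing
    it only if it is within reach."""
    score = np.dot(w, x) + b
    if score >= 0:
        return x  # already positive: no need to manipulate
    dist = -score / np.linalg.norm(w)  # distance to the hyperplane
    if dist <= r:
        # smallest sufficient move: project just past the boundary
        return x + (dist + 1e-9) * w / np.linalg.norm(w)
    return x  # boundary out of reach: manipulation is futile
```

The learner's difficulty in the personalized setting is precisely that r varies across agents and is never observed directly.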
Award ID(s):
2212968
PAR ID:
10511445
Author(s) / Creator(s):
Publisher / Repository:
37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Date Published:
Journal Name:
Advances in Neural Information Processing Systems
ISSN:
1049-5258
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in the presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the classifier may not observe the true position of an agent, but rather a position where the agent pretends to be. Unlike in the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its guarantees, and we illustrate examples in which the predictor oscillates between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is a modified Perceptron-style algorithm which makes a bounded number of mistakes in the presence of strategic agents with both $$\ell_2$$ and weighted $$\ell_1$$ manipulation costs. In our baseline model, knowledge of the manipulation costs (i.e., the extent to which an agent may manipulate) is assumed. In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates to achieve good mistake bounds even when manipulation costs are unknown.
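The flavor of a Perceptron-style fix can be sketched as follows (a toy simplification assuming a known $$\ell_2$$ budget r, not the paper's actual algorithm): predict with the current hyperplane, and on a mistaken observed positive, pull the observed point back by r along w before updating, since a gaming agent may have shifted up to r toward the positive side.

```python
import numpy as np

def strategic_perceptron(stream, r, dim):
    """Perceptron-style sketch under l2 ball manipulation with a known
    budget r. Observed positives may have been inflated by up to r
    toward the positive side of the boundary, so on a mistake we update
    with the observed point pulled back by r along w. Illustrative only;
    the cited paper's algorithm and analysis differ."""
    w = np.zeros(dim)
    mistakes = 0
    for z, y in stream:  # z: observed (possibly manipulated) position
        norm = np.linalg.norm(w)
        pred = 1 if norm > 0 and np.dot(w, z) >= 0 else -1
        if pred != y:
            mistakes += 1
            if y == 1 and norm > 0:
                z = z - r * w / norm  # undo the assumed manipulation
            w = w + y * z
    return w, mistakes
```

With r = 0 this reduces to the classical Perceptron, which is exactly the baseline that the abstract shows can fail once r > 0 and agents game the deployed hyperplane.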
  2. We study the problem of online binary classification where strategic agents can manipulate their observable features in predefined ways, modeled by a manipulation graph, in order to receive a positive classification. We show this setting differs in fundamental ways from classic (non-strategic) online classification. For instance, whereas in the non-strategic case, a mistake bound of ln |H| is achievable via the halving algorithm when the target function belongs to a known class H, we show that no deterministic algorithm can achieve a mistake bound o(Δ) in the strategic setting, where Δ is the maximum degree of the manipulation graph (even when |H| = O(Δ)). We complement this with a general algorithm achieving mistake bound O(Δ ln |H|). We also extend this to the agnostic setting, and show that this algorithm achieves a Δ multiplicative regret (mistake bound of O(Δ · OPT + Δ · ln |H|)), and that no deterministic algorithm can achieve o(Δ) multiplicative regret. Next, we study two randomized models based on whether the random choices are made before or after agents respond, and show they exhibit fundamental differences. In the first, fractional model, at each round the learner deterministically chooses a probability distribution over classifiers inducing expected values on each vertex (probabilities of being classified as positive), which the strategic agents respond to. We show that any learner in this model has to suffer linear regret. On the other hand, in the second randomized algorithms model, while the adversary who selects the next agent must respond to the learner's probability distribution over classifiers, the agent then responds to the actual hypothesis classifier drawn from this distribution. Surprisingly, we show this model is more advantageous to the learner, and we design randomized algorithms that achieve sublinear regret bounds against both oblivious and adaptive adversaries. 
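The ln |H| baseline mentioned above comes from the classic (non-strategic) halving algorithm, sketched here in a minimal form: predict the majority vote of the hypotheses still consistent with the history; every mistake eliminates at least half of them, so at most log2 |H| mistakes occur when H contains a perfect classifier.

```python
def halving(hypotheses, stream):
    """Classic halving algorithm for non-strategic online classification.
    Predict the majority vote of the consistent version space; each
    mistake at least halves it, bounding mistakes by log2(|H|) when some
    hypothesis in H labels the whole stream correctly."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        pred = 1 if 2 * votes >= len(version_space) else 0
        if pred != y:
            mistakes += 1
        # keep only hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes
```

The abstract's point is that no such logarithmic bound survives strategic behavior: the unavoidable cost scales with the manipulation graph's maximum degree Δ instead.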
  3. Direct manipulation is a programming paradigm in which the programmer conveys the intended program behavior by modifying program values at runtime. The programming environment then finds a modification of the original program that yields the manipulated values. In this paper, we propose the first framework for direct manipulation of imperative programs. First, we introduce direct state manipulation, which allows programmers to visualize the trace of a buggy program on an input, and modify variable values at a location. Second, we propose a synthesis technique based on program sketching and quantitative objectives to efficiently find the “closest” program to the original one that is consistent with the manipulated values. We formalize the problem and build a tool JDial based on the Sketch synthesizer. We investigate the effectiveness of direct manipulation by using JDial to fix benchmarks from introductory programming assignments. In our evaluation, we observe that direct state manipulations are an effective specification mechanism: even when provided with a single state manipulation, JDial can produce desired program modifications for 66% of our benchmarks while techniques based only on test cases always fail. 
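The "closest consistent program" objective can be illustrated with a deliberately tiny stand-in (the helper below is hypothetical; JDial itself synthesizes over full program sketches via the Sketch solver, not a single constant): enumerate candidate values for one suspicious constant and return the consistent candidate nearest the original.

```python
def repair_constant(program, original_const, manipulated, candidates):
    """Toy version of the 'closest program' idea behind sketch-based
    repair (a drastic simplification of JDial/Sketch). program maps
    (const, input) -> observed state; manipulated is the (input, state)
    pair the user produced by direct state manipulation. Return the
    candidate constant consistent with that state and closest to the
    original, or None if no candidate is consistent."""
    x, wanted = manipulated
    consistent = [c for c in candidates if program(c, x) == wanted]
    if not consistent:
        return None
    return min(consistent, key=lambda c: abs(c - original_const))
```

The quantitative objective (distance to the original constant) mirrors the paper's preference for the repair that changes the buggy program least while matching the manipulated values.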
  4. Life history theory predicts that increased investment in current offspring decreases future fecundity or survival. Avian parental investment decisions have been studied either via brood size manipulation or direct manipulation of parental energetic costs (also known as handicapping). However, we have limited experimental data on the potential interactive effects of these manipulations on parent behavior. Additionally, we know little about how these manipulations affect spatial foraging behavior away from the nest. We simultaneously manipulated brood size and parental costs (via added weight in the form of a GPS tag) in wild female barn swallows (Hirundo rustica). We measured multiple aspects of parent behavior at and away from the nest while controlling for measures of weather conditions. We found no significant interactive effects of manipulated brood size and parental costs. Both sexes increased their visitation rate with brood size, but nestlings in enlarged broods grew significantly less post-brood-size manipulation than those in reduced broods. Foraging range area was highly variable among GPS-tagged females but was unaffected by brood size. As such, increased visitation rate in response to brood size may be more energetically costly for far-ranging females. GPS-tagged females did not alter their visitation rate relative to un-tagged birds, but their mates had higher visitation rates. This suggests that GPS tagging may affect some unmeasured aspect of female behavior, such as prey delivery. Our findings indicate that investigation of foraging tactics alongside visitation rate is critical to understanding parental investment and the benefits and costs of reproduction.
    Significance statement: Avian parental investment decisions have been studied by either brood size manipulation or direct manipulation of parental costs, but rarely both simultaneously. We simultaneously manipulated brood size and parental costs (via addition of a GPS tag) in a wild avian system, allowing us to examine interactive effects of these manipulations. Additionally, studies of parental investment often examine behaviors at the nest, but measurements of parental care behavior away from the nest are rare. Our study is unique in that we measured multiple aspects of parental care, including spatial foraging behavior tracked with GPS tags. We found no interactive effects of manipulated brood size and parental costs on visitation rate or nestling growth, and spatial foraging behavior of females was individually variable. Documenting foraging tactics alongside visitation rate is critical to understanding parental investment because the same visitation rate might be more costly for far-ranging females.
  5. Networks that provide agents with access to a common database of the agents' actions enable an agent to easily learn by observing the actions of others, but are also susceptible to manipulation by “fake” agents. Prior work has studied a model for the impact of such fake agents on ordinary (rational) agents in a sequential Bayesian observational learning framework. That model assumes that ordinary agents do not have an ex-ante bias in their actions and that they follow their private information in case of an ex-post tie between actions. This paper builds on that work to study the effect of fake agents on the welfare obtained by ordinary agents under different ex-ante biases and different tie-breaking rules. We show that varying either of these can lead to cases where, unlike in the prior work, the addition of fake agents leads to a gain in welfare. This implies that in such cases, if fake agents are absent or are not adequately present, an altruistic platform could artificially introduce fake actions to effect improved learning. 
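The sequential observational-learning dynamic can be caricatured in a few lines (a naive-counting toy in the spirit of BHW-style cascades, not the paper's Bayesian model; all names below are illustrative): each ordinary agent treats every prior action as one independent signal, adds its own private signal, and acts on the resulting log-odds, while fake agents blindly play action 1.

```python
import math

def sequential_actions(signals, p, n_fake=0):
    """Toy sequential observational-learning dynamic. Each ordinary
    agent sees all previous actions plus a private binary signal of
    accuracy p, naively counts every prior action as an independent
    signal, and follows its own signal on a tie. The first n_fake
    agents are fake and always play action 1.
    signals: private signals (0/1) for the ordinary agents, in order."""
    actions = [1] * n_fake          # fake agents act first
    llr = math.log(p / (1 - p))     # log-likelihood ratio per signal
    for s in signals:
        d = sum(1 if a == 1 else -1 for a in actions)  # action imbalance
        log_odds = d * llr + (llr if s == 1 else -llr)
        if log_odds > 0:
            actions.append(1)
        elif log_odds < 0:
            actions.append(0)
        else:
            actions.append(s)       # tie-break: follow the private signal
    return actions
```

Even this toy shows the mechanism the abstract studies: a couple of fake agents can trigger a cascade in which ordinary agents abandon their private signals, and whether that helps or hurts welfare depends on the agents' bias and tie-breaking rule.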