Title: What Makes a Rule Complex
We study the complexity of rules by paying experimental subjects to implement a series of algorithms and then eliciting their willingness-to-pay to avoid implementing them again in the future. The design allows us to examine hypotheses from the theoretical “automata” literature about the characteristics of rules that generate complexity costs. We find substantial aversion to complexity and a number of regularities in the characteristics of rules that make them complex and costly for subjects. Experience with a rule, the way a rule is represented, and the context in which a rule is implemented (mentally versus physically) also influence complexity.
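The "automata" framing the abstract invokes typically models a rule as a finite-state machine, with the number of states serving as one measure of the rule's complexity. A minimal sketch in Python, assuming a toy parity rule (this is an illustrative example, not one of the paper's actual experimental tasks; all names are hypothetical):

```python
# A rule as a deterministic finite automaton (DFA). In the automata
# literature, the number of states needed to implement a rule is one
# standard complexity measure.
# Toy rule: "output 1 whenever the count of 'a' inputs so far is even."

def make_parity_rule():
    # Transition table: state -> {input symbol: next state}
    transitions = {
        "even": {"a": "odd", "b": "even"},
        "odd": {"a": "even", "b": "odd"},
    }
    # Output attached to each state
    outputs = {"even": 1, "odd": 0}
    return transitions, outputs

def run_rule(inputs):
    transitions, outputs = make_parity_rule()
    state = "even"  # start state: zero 'a's seen, an even count
    results = []
    for symbol in inputs:
        state = transitions[state][symbol]
        results.append(outputs[state])
    return results

# State count as a complexity proxy: this rule needs 2 states.
print(len(make_parity_rule()[0]))  # 2
print(run_rule("aab"))             # [0, 1, 1]
```

Under this formalization, a rule requiring more states (e.g., tracking a count modulo 5 rather than modulo 2) would be predicted to carry higher complexity costs.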
Award ID(s):
1949366
PAR ID:
10329921
Author(s) / Creator(s):
Date Published:
Journal Name:
American Economic Review
Volume:
110
Issue:
12
ISSN:
2640-2068
Page Range / eLocation ID:
3913-3951
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Part-of-speech (POS) tagging is the foundation of many natural language processing applications. Rule-based POS tagging is a well-known solution, which assigns tags to words using a set of predefined rules. Many researchers favor statistical approaches over rule-based methods for their better empirical accuracy. However, until now, the computational cost of rule-based POS tagging has made it difficult to study whether more complex rules or larger rule sets could achieve accuracy competitive with statistical approaches. In this paper, we leverage two hardware accelerators, the Automata Processor (AP) and Field Programmable Gate Arrays (FPGA), to accelerate rule-based POS tagging by converting rules to regular expressions and exploiting the highly parallel regular-expression-matching ability of these accelerators. We study the relationship between rule set size and accuracy, and observe that adding more rules imposes only minimal overhead on the AP and FPGA. This allows a substantial increase in the number and complexity of rules, leading to improved accuracy. Our experiments on the Treebank and Brown corpora achieve up to 2,600× and 1,914× speedups on the AP and on the FPGA respectively over rule-based methods on the CPU in the rule-matching stage, up to 58× speedup over the Perceptron POS tagger on the CPU in total testing time, and up to 253× speedup over the LSTM tagger on the GPU in total testing time, while showing accuracy competitive with neural-network and statistical solutions.
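The conversion the abstract describes, from tagging rules to regular expressions, can be sketched on an ordinary CPU with Python's `re` module. The rules below are made up for illustration and are far simpler than a real rule set; the paper's speedups come from matching such patterns in parallel on the AP and FPGA, not from code like this:

```python
import re

# Toy rule set: each rule is a regular expression over a word's surface
# form. Rules are tried in order; the first match assigns the tag.
RULES = [
    (re.compile(r".*ing$"), "VBG"),   # gerund / present participle
    (re.compile(r".*ed$"), "VBD"),    # past tense
    (re.compile(r".*ly$"), "RB"),     # adverb
    (re.compile(r"^[0-9]+$"), "CD"),  # cardinal number
    (re.compile(r".*s$"), "NNS"),     # plural noun
]
DEFAULT_TAG = "NN"  # fallback when no rule fires

def tag(words):
    tagged = []
    for word in words:
        for pattern, pos in RULES:
            if pattern.match(word):
                tagged.append((word, pos))
                break
        else:
            tagged.append((word, DEFAULT_TAG))
    return tagged

print(tag(["running", "quickly", "42", "cats", "dog"]))
# [('running', 'VBG'), ('quickly', 'RB'), ('42', 'CD'),
#  ('cats', 'NNS'), ('dog', 'NN')]
```

On a sequential CPU, matching cost grows with the number of rules; on the AP and FPGA, all patterns are evaluated concurrently, which is why the paper finds that enlarging the rule set adds only minimal overhead.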
  2. Learning logical rules is critical to improving reasoning in knowledge graphs (KGs). This is due to their ability to provide logical and interpretable explanations when used for predictions, as well as their ability to generalize to other tasks, domains, and data. While recent methods have been proposed to learn logical rules, the majority of these methods are either restricted by their computational complexity and cannot handle the large search space of large-scale KGs, or show poor generalization when exposed to data outside the training set. In this paper, we propose an end-to-end neural model for learning compositional logical rules called NCRL. NCRL detects the best compositional structure of a rule body and breaks it into small compositions in order to infer the rule head. By recurrently merging compositions in the rule body with a recurrent attention unit, NCRL ultimately predicts a single rule head. Experimental results show that NCRL learns high-quality rules and generalizes well. Specifically, we show that NCRL is scalable, efficient, and yields state-of-the-art results for knowledge graph completion on large-scale KGs. Moreover, we test NCRL for systematic generalization by learning to reason on small-scale observed graphs and evaluating on larger unseen ones.
  3. The governance of many online communities relies on rules created by participants. However, prior work provides limited evidence about how these self-governance efforts compare and relate to one another across communities. Studies tend either to analyze communities as discrete entities or to consider communities that coexist within a hierarchically managed platform. In this paper, we investigate both comparative and relational dimensions of self-governance in similar communities. We use exhaustive trace data from the five largest language editions of Wikipedia over almost 20 years since their founding, and consider both patterns in rule-making and overlaps in rule sets. We find similar rule-making activity across the five communities, replicating and extending prior work on the English-language Wikipedia alone. However, we also find that these Wikipedias have increasingly unique rule sets, even as editing activity concentrates on rules shared between them. Self-governing communities aligned in key ways may share a common core of rules and rule-making practices even as they develop and sustain institutional variations.
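One simple way to quantify the "overlaps in rule sets" the study examines is Jaccard similarity over rule identifiers. A hedged sketch, assuming rules can be matched across language editions by identifier (the rule names below are illustrative, and the paper's actual overlap measure may differ):

```python
# Jaccard similarity between two communities' rule sets:
# |intersection| / |union|, ranging from 0 (disjoint) to 1 (identical).

def jaccard(rules_a, rules_b):
    a, b = set(rules_a), set(rules_b)
    if not a and not b:
        return 1.0  # two empty rule sets are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical rule identifiers for two language editions.
en_rules = {"NPOV", "Verifiability", "No original research", "Notability"}
de_rules = {"NPOV", "Verifiability", "Relevanzkriterien"}

print(jaccard(en_rules, de_rules))  # 2 shared of 5 total -> 0.4
```

Tracking this measure over time would show the divergence the paper reports: overlap shrinking as each edition accumulates unique rules, even while editing concentrates on the shared core.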
  4. We study the design of core-selecting payment rules for combinatorial auctions, a challenging setting where no strategyproof rules exist. We show that the rule most commonly used in practice, the Quadratic rule, can be improved upon in terms of efficiency, incentives, and revenue. We present a new computational search framework for finding good mechanisms, and we apply it toward a search for good core-selecting rules. Within our framework, we use an algorithmic Bayes-Nash equilibrium solver to evaluate 366 rules across 31 settings to identify rules that outperform the Quadratic rule. Our main finding is that our best-performing rules are large-style rules, that is, rules that give bidders with large values better incentives than the Quadratic rule does. Finally, we identify two particularly well-performing rules and suggest that they may be considered for practical implementation in place of the Quadratic rule.
  5. We propose a system that assists a user in constructing transparent information extraction models, consisting of patterns (or rules) written in a declarative language, through program synthesis. Users of our system can specify their requirements through examples, which are collected with a search interface. The rule-synthesis system proposes rule candidates and the results of applying them to a textual corpus; the user has the option to accept a candidate, request another option, or adjust the examples provided to the system. Through an interactive evaluation, we show that our approach generates high-precision rules even in a 1-shot setting. In a second evaluation on a widely used relation extraction dataset (TACRED), our method generates rules that considerably outperform manually written patterns. Our code, demo, and documentation are available at https://clulab.github.io/odinsynth/.