-
Abstract: Public decision-makers incorporate algorithmic decision aids, often developed by private businesses, into the policy process, in part as a method for justifying difficult decisions. Ethicists have worried that over-trust in algorithmic advice, together with fear of punishment for departing from an algorithm’s recommendation, will result in over-reliance and harm democratic accountability. We test these concerns in two pre-registered survey experiments in the judicial context, conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Moreover, algorithms do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to the decision-maker when the decision-maker departs from the algorithm, and they assign more blame when they think the decision-maker is abdicating responsibility by agreeing with an algorithm.
-
Abstract: The use of algorithms and automated systems, especially those leveraging artificial intelligence (AI), has expanded rapidly in the public sector, and that use has been controversial. Ethicists, public advocates, and legal scholars have debated whether biases in AI systems should bar their use, or whether the potential net benefits, especially for traditionally disadvantaged groups, justify even greater expansion. While this debate has become voluminous, no scholars we are aware of have conducted experiments asking the groups affected by these policies how they view the trade-offs. We conduct two conjoint experiments with a high-quality sample of 973 Americans who identify as Black or African American, in which we randomize the level of inter-group disparity in outcomes and the net effect on such adverse outcomes in two highly controversial contexts: pre-trial detention and traffic camera ticketing. The results suggest that respondents are willing to tolerate some disparity in outcomes in exchange for certain net improvements for their community. These results shift the debate from an abstract ethical argument to an empirically grounded evaluation of political feasibility and policy design.
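As a rough illustration of the conjoint design described above, the sketch below independently randomizes each attribute of each profile per task. The attribute names and levels are hypothetical placeholders, not the study's actual instrument.

```python
import random

# Hypothetical attributes and levels; the paper's actual instrument is not
# reproduced here, so these names and values are illustrative only.
ATTRIBUTES = {
    "context": ["pre-trial detention", "traffic camera ticketing"],
    "disparity": ["no gap", "small gap", "large gap"],     # inter-group disparity in outcomes
    "net_effect": ["10% fewer", "no change", "10% more"],  # net change in adverse outcomes
}

def draw_profile():
    """Draw one policy profile by independently sampling a level per attribute."""
    return {attr: random.choice(levels) for attr, levels in ATTRIBUTES.items()}

def draw_task():
    """One conjoint task: two independently randomized profiles shown side by side."""
    return draw_profile(), draw_profile()

if __name__ == "__main__":
    policy_a, policy_b = draw_task()
    print("Policy A:", policy_a)
    print("Policy B:", policy_b)
```

Because every level is drawn independently, each attribute's marginal effect on respondents' choices can be estimated without confounding across attributes.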
-
Demeniconi, Carlotta; Nitesh V. Chawla (Eds.)
Abstract: The motives and means of explicit state censorship have been well studied, both quantitatively and qualitatively. Self-censorship by media outlets, however, has received far less attention, largely because it is difficult to detect systematically. We develop a novel approach that identifies news media self-censorship by using social media as a sensor. We introduce a hypothesis-testing framework to identify and evaluate censored clusters of keywords, and a near-linear-time algorithm (called GraphDPD) to find the highest-scoring clusters as indicators of censorship. We evaluate the accuracy of our framework against other state-of-the-art algorithms using both semi-synthetic and real-world data from Mexico and Venezuela during 2014. These tests demonstrate the capacity of our framework to identify self-censorship and to provide an indicator of broader media freedom. The results of this study lay the foundation for the detection and study of, and policy responses to, self-censorship.
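As a rough sketch of the cluster-scoring idea (not the authors' GraphDPD implementation, whose scan statistic and graph construction are not specified in the abstract), a keyword cluster can be scored by how far its observed social-media mention count falls below a baseline expectation:

```python
import math

def cluster_score(observed: float, expected: float) -> float:
    """Illustrative deficit score: a Poisson log-likelihood ratio that grows
    as observed mentions fall below the baseline expectation. This is an
    assumed stand-in, not GraphDPD's actual scan statistic."""
    if observed >= expected:   # no deficit, hence no censorship signal
        return 0.0
    if observed == 0:          # limit of the log-likelihood ratio as observed -> 0
        return expected
    return observed * math.log(observed / expected) + (expected - observed)

# Toy usage: each candidate cluster maps to (observed, expected) mention counts.
clusters = {
    "protest,march,plaza": (3, 40.0),     # heavily under-mentioned
    "weather,rain,forecast": (35, 38.0),  # roughly at baseline
}
flagged = max(clusters, key=lambda c: cluster_score(*clusters[c]))
print(flagged)  # the under-mentioned cluster surfaces as a self-censorship signal
```

GraphDPD's contribution, per the abstract, is finding the highest-scoring clusters over a keyword graph in near-linear time; an exhaustive search over candidate clusters, as sketched here, would not scale.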
