Despite little evidence of efficacy, public information campaigns have been a popular strategy for deterring migration. Advertising campaigns to dissuade would-be migrants from leaving home or seeking asylum are increasingly prevalent around the world, and Australia has devoted millions of dollars to these campaigns. Perhaps the most famous is the campaign launched in 2014, with the message: “No Way. You will not make Australia home.” In this article, I develop the concept of enforcement infrastructure to illustrate the relationships, technologies, actors, and policies that together facilitate enforcement of Australia’s borders and produce campaigns such as the “No Way” campaign. Just as infrastructure facilitates the production of value in other contexts, so too does the creation of enforcement infrastructure produce different types of value in the context of enforcement. Mapping the enforcement infrastructure highlights the different types of value produced by this constellation of actors, from profitable market research to reinforcing colonial logics of exclusion.
Environment: Critical Reflections on a Concept
Is the environment worth the effort? The environment often seems far too easy, far too obligatory, and far too footloose a concept to warrant serious attention. It somehow evokes both bookish abstraction and populist rousing, it cobbles together science and advocacy only to blunt their conjoined insights, and it continues to elude fixed definition even while basking in stately recognition. The banalities of this mess can give the impression that the environment has no real history, no critical content, and heralds no true rupture of thought and practice. The environment, in the eyes of some, is mere advertising. If there is a story to the environment, others suggest, it’s largely one of misplaced materialism, middle-class aesthetics, and first-world problems. Such has been the sentiment, such has been the dismissal. In the rush to move past the environment, few have attended to the history of the concept. This is curious, as the constitution of the environment remains a surprisingly recent achievement. In the late 1960s and early 1970s, the environment shifted from an erudite shorthand for the influence of context to the premier diagnostic of a troubling new world of induced precarity (whether called Umwelt, l’environnement, medio ambiente, huanjing, mazingira, or lingkungan). The environment, a term “once so infrequent and now becoming so universal,” as the director of the Nature Conservancy commented in 1970 (Nicholson: 5), soon came to monopolize popular and scientific understandings of damaged life and the state’s obligation to it worldwide. Even as the environment has been immensely productive for research and policy in the decades since, the formation of the environment itself remains understudied. In the United States, this is particularly clear in two aspects of the environment: 1) the role of fossil fuels in making the environment visible, factual, and politically operable; and 2) the precocious if weightless critique authorized by the environment.
- Award ID(s): 1832973
- PAR ID: 10133815
- Date Published:
- Journal Name: IASK working papers
- Issue: 64
- ISSN: 2676-8895
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Statistical learning (SL) is typically assumed to be a core mechanism by which organisms learn covarying structures and recurrent patterns in the environment, with the main purpose of facilitating processing of expected events. Within this theoretical framework, the environment is viewed as relatively stable, and SL ‘captures’ the regularities therein through implicit unsupervised learning by mere exposure. Focusing primarily on language, the domain in which SL theory has been most influential, we review evidence that the environment is far from fixed: it is dynamic, in continual flux, and learners are far from passive absorbers of regularities; they interact with their environments, thereby selecting and even altering the patterns they learn from. We therefore argue for an alternative cognitive architecture, where SL serves as a subcomponent of an information foraging (IF) system. IF aims to detect and assimilate novel recurrent patterns in the input that deviate from randomness, for which SL supplies a baseline. The broad implications of this viewpoint and their relevance to recent debates in cognitive neuroscience are discussed.
-
Abstract: Recently, a class of algorithms combining classical fixed-point iterations with repeated random sparsification of approximate solution vectors has been successfully applied to eigenproblems with matrices as large as 10^108. So far, a complete mathematical explanation for this success has proven elusive, and the family of methods has not yet been extended to the important case of linear system solves. In this paper, we propose a new scheme based on repeated random sparsification that is capable of solving sparse linear systems in arbitrarily high dimensions. We provide a complete mathematical analysis of this new algorithm. Our analysis establishes a faster-than-Monte Carlo convergence rate and justifies use of the scheme even when the solution is too large to store as a dense vector.
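The abstract does not spell out the authors' estimator, but the core idea, a fixed-point iteration whose iterate is randomly sparsified at every step, can be sketched in a few lines. The following NumPy sketch is a minimal, illustrative instantiation only: the helper names (`sparsify`, `sparse_richardson`), the unbiased thinning rule, the tail averaging, and all parameter choices are assumptions, not the paper's algorithm.

```python
import numpy as np

def sparsify(x, m, rng):
    """Unbiased random sparsification (illustrative, not the paper's rule).
    The m largest-magnitude entries are kept exactly; each remaining entry
    survives with probability |x_i|/tau and is reweighted to sign(x_i)*tau,
    so that E[sparsify(x)] == x."""
    y = np.zeros_like(x)
    absx = np.abs(x)
    exact = np.argsort(absx)[-m:]          # keep the largest entries exactly
    y[exact] = x[exact]
    rest = np.setdiff1d(np.arange(len(x)), exact)
    if rest.size > 0:
        tau = absx[rest].max()             # sampling threshold for the tail
        if tau > 0:
            keep = rng.random(rest.size) < absx[rest] / tau
            y[rest[keep]] = np.sign(x[rest[keep]]) * tau
    return y

def sparse_richardson(A, b, m=40, eta=0.9, iters=2000, seed=0):
    """Richardson iteration x <- x + eta*(b - A@x), with the iterate
    randomly sparsified after every step; the second half of the
    trajectory is averaged to damp the sampling noise."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    avg, burn = np.zeros_like(b), iters // 2
    for k in range(iters):
        x = sparsify(x + eta * (b - A @ x), m, rng)
        if k >= burn:
            avg += x
    return avg / (iters - burn)

# demo on a well-conditioned system (spectral radius of I - eta*A < 1)
rng = np.random.default_rng(1)
n = 200
M = rng.random((n, n))
M /= M.sum(axis=1, keepdims=True)          # row-stochastic, eigenvalues in unit disk
A = np.eye(n) - 0.5 * M                    # eigenvalues roughly in [0.5, 1.5]
b = rng.standard_normal(n)
x_hat = sparse_richardson(A, b)
print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))  # residual; shrinks as m, iters grow
```

The sparsification is unbiased by construction, which is what lets the averaged iterates approach the true solution even though each individual iterate is sparse; the paper's contribution is the rigorous analysis of this kind of scheme, which the sketch above does not attempt.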
-
One of the most popular existing models for task allocation in ant colonies is the so-called threshold-based task allocation model. Here, each ant has a fixed, and possibly distinct, threshold. Each task has a fixed demand, which represents the number of ants required to perform the task. The stimulus an ant receives for a task is defined as the demand of the task minus the number of ants currently working at the task. An ant joins a task if the stimulus of the task exceeds the ant’s threshold.

A large body of results has studied this model for over four decades; however, most of the theoretical work focuses on the case of two tasks. Interestingly, no work in this line of research shows that the number of ants working at a task eventually converges towards the demand, nor does any work bound the distance to the demands over time.

In this work, we study precisely this convergence. Our results show that while the threshold-based model works fine in the case of two tasks (for certain distributions of thresholds), it no longer works for the case of more than two tasks. In fact, we show that there is no possible setting of thresholds that yields a satisfactory deficit (demand minus number of ants working on the task) for each task. This is in stark contrast to other theoretical results in the same setting [CDLN14] that rely on state machines, i.e., some form of small memory together with probabilistic decisions. Note that the classical threshold model assumes no states or memory (apart from the bare minimum number of states required to encode which task an ant is working on). The task allocation resulting from such state machines is near-optimal and much better than what is possible using joining thresholds, and this remains true even in a noisy environment [DLM+18]. While the deficit is not the only important metric, it is conceivably one of the most important metrics for guaranteeing the survival of a colony: for example, if the number of workers assigned to foraging stays significantly below the demand, starvation may occur. Moreover, our results do not imply that ants do not use thresholds; we merely argue that relying on thresholds yields considerably worse performance.
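For concreteness, here is a toy simulation of the joining rule described above: stimulus = demand minus current workers, and an idle ant joins a task whose stimulus exceeds its threshold. This is a deliberately simplified sketch (ants never leave a task, they act in a random order, and all names and parameters are made up for illustration); it is not the model analyzed in the paper, which the abstract only summarizes.

```python
import random

def simulate(thresholds, demands, rounds=100, seed=0):
    """Toy threshold-based task allocation.
    stimulus(task) = demand - number of ants currently at the task;
    an idle ant joins the first task whose stimulus exceeds its threshold.
    Returns the final deficit (demand minus workers) for each task."""
    rng = random.Random(seed)
    assignment = [None] * len(thresholds)   # task index per ant; None = idle
    for _ in range(rounds):
        workers = [assignment.count(t) for t in range(len(demands))]
        order = list(range(len(thresholds)))
        rng.shuffle(order)                  # ants act in random order
        for ant in order:
            if assignment[ant] is not None:
                continue                    # already working; never leaves
            for task, demand in enumerate(demands):
                if demand - workers[task] > thresholds[ant]:
                    assignment[ant] = task
                    workers[task] += 1
                    break
    return [d - assignment.count(t) for t, d in enumerate(demands)]

# 60 ants with a uniform threshold of 1, three tasks
print(simulate(thresholds=[1] * 60, demands=[30, 20, 10]))  # -> [1, 1, 1]
```

Even this benign, static variant leaves each task with a persistent deficit equal to the uniform threshold, since recruitment stops once the stimulus no longer exceeds it; the paper's negative result is the much stronger claim that with more than two tasks no assignment of thresholds achieves satisfactory deficits.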
-
Abstract: Training of neural networks (NNs) has emerged as a major consumer of both computational and energy resources. Quantum computers have been proposed as a route to faster training, but no experimental evidence has been presented so far. Here we demonstrate that quantum annealing platforms, such as D-Wave, can enable fast and efficient training of classical NNs, which are then deployable on conventional hardware. From a physics perspective, NN training can be viewed as a dynamical phase transition: the system evolves from an initial spin-glass state to a highly ordered, trained state. This process involves eliminating numerous undesired minima in its energy landscape. The advantage of annealing devices is their ability to rapidly find multiple deep states. We found that this quantum training achieves superior performance scaling compared to classical backpropagation methods, with a clearly higher scaling exponent (1.01 vs. 0.78). It may be further increased up to a factor of 2 with a fully coherent quantum platform using a variant of the Grover algorithm. Furthermore, we argue that even a modestly sized annealer can be beneficial to train a deep NN by being applied sequentially to a few layers at a time.
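The annealing intuition in this abstract, training as a search for deep minima in a rugged spin-glass energy landscape, can be illustrated without quantum hardware. Below is a minimal classical simulated-annealing sketch on a random Ising spin glass; it is only a stand-in for the D-Wave workflow the authors describe (which the abstract does not detail), and every name and parameter is an illustrative assumption.

```python
import numpy as np

def anneal(J, h, sweeps=2000, T0=5.0, T1=0.05, seed=0):
    """Classical simulated annealing on the Ising energy
    E(s) = -0.5 * s^T J s - h^T s, with s_i in {-1, +1}.
    Single-spin Metropolis flips under a geometrically cooling temperature."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for T in np.geomspace(T0, T1, sweeps):
        for i in rng.permutation(n):
            dE = 2 * s[i] * (J[i] @ s + h[i])   # energy change if spin i flips
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
    return s, -0.5 * s @ J @ s - h @ s

# small random spin glass standing in for a binarized training landscape
rng = np.random.default_rng(1)
n = 64
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                               # symmetric couplings
np.fill_diagonal(J, 0)                          # no self-interaction
h = rng.standard_normal(n)
spins, energy = anneal(J, h)
print(energy)                                   # a deep (low) energy state
```

On a fully connected spin glass like this, annealing reliably reaches low-energy configurations; mapping an actual NN training objective onto such an Ising energy, so that annealing hardware can sample its deep states, is the nontrivial step addressed by the paper rather than by this sketch.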