Abstract Artificial intelligence (AI) represents technologies with human‐like cognitive abilities to learn, perform, and make decisions. AI in precision agriculture (PA) enables farmers and farm managers to deploy highly targeted and precise farming practices based on site‐specific agroclimatic field measurements. The foundational and applied development of AI has matured considerably over the last 30 years. The time is now right to engage seriously with the ethics and responsible practice of AI for the well‐being of farmers and farm managers. In this paper, we identify and discuss both challenges and opportunities for improving farmers’ trust in those providing AI solutions for PA. We highlight that farmers’ trust can be moderated by how the benefits and risks of AI are perceived, shared, and distributed. We propose four recommendations for improving farmers’ trust. First, AI developers should improve model transparency and explainability. Second, clear responsibility and accountability should be assigned to AI decisions. Third, concerns about the fairness of AI need to be overcome to improve human‐machine partnerships in agriculture. Finally, regulation and voluntary compliance of data ownership, privacy, and security are needed, if AI systems are to become accepted and used by farmers.
Abstraction and analogy‐making in artificial intelligence
Abstract Conceptual abstraction and analogy‐making are key abilities underlying humans' abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing artificial intelligence (AI) systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress in this area.
- Award ID(s): 2020103
- PAR ID: 10252846
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Annals of the New York Academy of Sciences
- Volume: 1505
- Issue: 1
- ISSN: 0077-8923
- Format(s): Medium: X
- Size(s): p. 79-101
- Sponsoring Org: National Science Foundation
More Like this
-
The ability to form and abstract concepts is key to human intelligence, but such abilities remain lacking in state-of-the-art AI systems. There has been substantial research on conceptual abstraction in AI, particularly using idealized domains such as Raven's Progressive Matrices and Bongard problems, but even when AI systems succeed on such problems, they are rarely evaluated in depth to see if they have actually grasped the concepts they are meant to capture. In this paper we describe an in-depth evaluation benchmark for the Abstraction and Reasoning Corpus (ARC), a collection of few-shot abstraction and analogy problems developed by Chollet [2019]. In particular, we describe ConceptARC, a new, publicly available benchmark in the ARC domain that systematically assesses abstraction and generalization abilities on a number of basic spatial and semantic concepts. ConceptARC differs from the original ARC dataset in that it is specifically organized around "concept groups" -- sets of problems that focus on specific concepts and that vary in complexity and level of abstraction. We report results on testing humans on this benchmark as well as three machine solvers: the top two programs from a 2021 ARC competition and OpenAI's GPT-4. Our results show that humans substantially outperform the machine solvers on this benchmark, demonstrating abilities to abstract and generalize concepts that are not yet captured by AI systems. We believe that this benchmark will spur improvements in the development of AI systems for conceptual abstraction and in the effective evaluation of such systems.
-
Abstract Artificial intelligence (AI) has long held the promise of imitating, replacing, or even surpassing human intelligence. Now that the abilities of AI systems have started to approach this initial aspiration, organization and management scholars face a challenge in how to theorize this technology, which potentially changes the way we view technology: not as a tool, but as something that enters previously human‐only domains. To navigate this theorizing challenge, we adopt the problematizing review method by engaging in a selective and critical reading of the theoretical contributions regarding AI in the most influential organization and management journals. We examine how the literature has grounded itself with AI as the root metaphor and what field assumptions about AI are shared – or contested – in the field. We uncover two core assumptions of rationality and anthropomorphism, around which fruitful debates are already emerging. We discuss these two assumptions and their organizational boundary conditions in the context of theorizing AI. Finally, we invite scholars to build distinctive organization and management theory scaffolding within the broader social science of AI.
-
Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already available to non-disabled people. In this paper, we discuss unique AI fairness challenges that arise in this context, including accessibility issues with data and models, ethical implications in deciding what sensory information to convey to the user, and privacy concerns both for the primary user and for others.
-
This paper presents an experiential learning pedagogy that teaches undergraduate business management information systems students hands-on AI skills through the lens of sustainability. The learning modules aim to empower undergraduate business students to gain interest and confidence in AI knowledge, skills, and careers, to sharpen their higher-order thinking abilities, and to help them gain a deeper understanding of sustainability issues. Students learn AI through developing chatbots that address pressing sustainability issues within their own communities. Results of the pilot study indicate that students have increased self-efficacy in AI, more positive attitudes towards AI learning and AI-related careers, enhanced sustainability awareness, and more confidence in their ability to innovate.
