Generative artificial intelligence (GenAI) is increasingly becoming part of work practices in the technology industry and is being used across a range of other industries. This creates a need to better understand how professionals in the field are using GenAI so that we can better prepare students for the workforce. An improved understanding of the use of GenAI in practice can help guide the design of GenAI literacy efforts, including how to integrate it within courses and curricula, what aspects of GenAI to teach, and even how to teach it. This paper presents a field study that compares the use of GenAI across three different functions - product development, software engineering, and digital content creation - to identify how GenAI is currently being used in industry. The study takes a human augmentation approach with a focus on human cognition and addresses three research questions: how is GenAI augmenting work practices; what knowledge is important and how are workers learning; and what are the implications for training the future workforce. Findings show wide variance in the use of GenAI and in the level of computing knowledge of users. In some industries GenAI is used in a highly technical manner, with fine-tuned models deployed across domains, whereas in others only off-the-shelf applications are used to generate content. This means that what workers need to know about GenAI varies, as does the background knowledge needed to use it. For teaching and learning, our findings indicate that different levels of GenAI understanding need to be integrated into courses. From a faculty perspective, the work has implications for faculty training, so that instructors are aware of these advances and of how students, as early adopters, may already be using GenAI to augment their learning practices.
An Introductory Guide to Developing GenAI Services for Higher Education
This paper reports on the lessons learned from developing and deploying campus-wide large language model (LLM) services at Purdue University for generative AI (GenAI) applications in education and research. We present a framework for identifying an LLM solution suite and highlight key considerations related to developing custom solutions. While the GenAI ecosystem continues to evolve, the framework is intended to provide a tool- and organization-agnostic approach to guide leaders in conversations and strategy for future work and collaboration in this emerging field.
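The paper describes its framework at the strategy level rather than in code. As a rough illustration of what a tool- and organization-agnostic layer might look like in practice, the sketch below defines a minimal provider-neutral chat interface that a campus service could standardize on, with a stub backend so it runs without external services. The names here (ChatBackend, EchoBackend, complete) are hypothetical and are not taken from the Purdue deployment.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ChatMessage:
    """A single conversation turn: role is 'system', 'user', or 'assistant'."""
    role: str
    content: str


class ChatBackend(ABC):
    """Provider-neutral interface a campus LLM service could standardize on.

    Concrete implementations might wrap a commercial API, a self-hosted
    open-weight model, or a retrieval-augmented pipeline, so the rest of
    the service never depends on a single vendor's SDK.
    """

    @abstractmethod
    def complete(self, messages: list[ChatMessage], max_tokens: int = 512) -> str:
        """Return the model's reply to the given conversation."""


class EchoBackend(ChatBackend):
    """Trivial stand-in backend so this sketch runs with no external dependencies."""

    def complete(self, messages: list[ChatMessage], max_tokens: int = 512) -> str:
        last_user = next(m for m in reversed(messages) if m.role == "user")
        return f"(echo) {last_user.content[:max_tokens]}"


if __name__ == "__main__":
    backend: ChatBackend = EchoBackend()
    reply = backend.complete([
        ChatMessage(role="system", content="You are a campus research assistant."),
        ChatMessage(role="user", content="Summarize the GenAI service framework."),
    ])
    print(reply)
```

Under this kind of abstraction, swapping an externally hosted model for a self-hosted one would mean adding another ChatBackend implementation rather than rewriting downstream applications, which is one way to read the paper's emphasis on tool-agnostic planning.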
- Award ID(s):
- 2005632
- PAR ID:
- 10639607
- Publisher / Repository:
- Proceedings of Gateways 2024
- Date Published:
- Subject(s) / Keyword(s):
- Large Language Model Deployment, GenAI, Cloud Computing, Software Design, Requirements Analysis, Information Retrieval, Language Models, Software Management
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
As generative artificial intelligence (GenAI) tools such as ChatGPT become more capable and accessible, their use in educational settings is likely to grow. However, the academic community lacks a comprehensive understanding of the perceptions and attitudes of students and instructors toward these new tools. In the Fall 2023 semester, we surveyed 982 students and 76 faculty at a large public university in the United States, focusing on topics such as perceived ease of use, ethical concerns, the impact of GenAI on learning, and differences in responses by role, gender, and discipline. We found that students and faculty did not differ significantly in their attitudes toward GenAI in higher education, except regarding ease of use, hedonic motivation, habit, and interest in exploring new technologies. Students and instructors also used GenAI for coursework or teaching at similar rates, although regular use of these tools was still low across both groups. Among students, we found significant differences in attitudes between males in STEM majors and females in non-STEM majors. These findings underscore the importance of considering demographic and disciplinary diversity when developing policies and practices for integrating GenAI in educational contexts, as GenAI may influence learning outcomes differently across various groups of students. This study contributes to the broader understanding of how GenAI can be leveraged in higher education while highlighting potential areas of inequality that need to be addressed as these tools become more widely used.
-
The introduction of generative artificial intelligence (GenAI) has been met with a mix of reactions by higher education institutions, ranging from consternation and resistance to wholehearted acceptance. Previous work has looked at the discourse and policies adopted by universities across the U.S. as well as by educators, along with the inclusion of GenAI-related content and topics in higher education. Building on previous research, this study reports findings from a survey of engineering educators on their use of and perspectives toward generative AI. Specifically, we surveyed 98 educators from engineering, computer science, and education who participated in a workshop on GenAI in Engineering Education to learn about their perspectives on using these tools for teaching and research. We asked them about their use of and comfort with GenAI, their overall perspectives on GenAI, and the challenges and potential harms of using it for teaching, learning, and research, and we examined whether their approach to using and integrating GenAI in their classroom influenced their experiences with it and perceptions of it. Consistent with other research in GenAI education, we found that while the majority of participants were somewhat familiar with GenAI, reported use varied considerably. We found that educators harbored mostly hopeful and positive views about the potential of GenAI. We also found that those who engaged more with their students on the topic of GenAI, both as communicators (those who spoke directly with their students) and as incorporators (those who included it in their syllabus), tended to be more positive about its contribution to learning, while also being more attuned to its potential abuses. These findings suggest that integrating and engaging with generative AI is essential to foster productive interactions between instructors and students around this technology. Our work ultimately contributes to the evolving discourse on GenAI use, integration, and avoidance within educational settings. Through exploratory quantitative research, we have identified specific areas for further investigation.
-
Generative artificial intelligence (GenAI) systems introduce new possibilities for enhancing professionals' workflows, enabling novel forms of human–AI co-creation. However, professionals often struggle to learn to work with GenAI systems effectively. While research has begun to explore the design of interfaces that support users in learning to co-create with GenAI, we lack systematic approaches to investigate the effectiveness of these supports. In this paper, we present a systematic approach for studying how to support learning to co-create with GenAI systems, informed by methods and concepts from the learning sciences. Through an experimental case study, we demonstrate how our approach can be used to study and compare the impacts of different types of learning supports in the context of text-to-image GenAI models. Reflecting on these results, we discuss directions for future work aimed at improving interfaces for human–AI co-creation.
-
Novice programming students frequently engage in help-seeking to find information and learn about programming concepts. Among the available resources, generative AI (GenAI) chatbots appear resourceful, widely accessible, and less intimidating than human tutors. Programming instructors are actively integrating these tools into classrooms. However, our understanding of how novice programming students trust GenAI chatbots, and of the factors influencing their usage, remains limited. To address this gap, we investigated the learning resource selection process of 20 novice programming students tasked with studying a programming topic. We split our participants into two groups: one using ChatGPT (n=10) and the other using a human tutor via Discord (n=10). We found that participants held strong positive perceptions of ChatGPT's speed and convenience but were wary of its inconsistent accuracy, making them reluctant to rely on it for learning entirely new topics. Accordingly, they generally preferred more trustworthy resources for learning (e.g., instructors, tutors), reserving ChatGPT for low-stakes situations or more introductory and common topics. We conclude by offering guidance to instructors on integrating LLM-based chatbots into their curricula, emphasizing verification and situational use, and to developers on designing chatbots that better address novices' trust and reliability concerns.