Title: Robots, skill demand and manufacturing in US regional labour markets
Abstract Advances in robotics and artificial intelligence (AI) technology have spurred a re-examination of technology’s impacts on jobs and the economy. This article reviews several key contributions to the current jobs/AI debate, discusses their limitations and offers a modified approach, analysing two quantitative models in tandem. One uses robot stock data from the International Federation of Robotics as the primary indicator of robot use, whereas the other uses online job postings requiring robot-related skills. Together, the models suggest that since the Great Recession ended, robots have contributed positively to manufacturing employment in the USA at the metropolitan level.
Award ID(s): 1637737
PAR ID: 10196295
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Cambridge Journal of Regions, Economy and Society
Volume: 13
Issue: 1
ISSN: 1752-1378
Page Range / eLocation ID: 77 to 97
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In social robotics, a pivotal focus is enabling robots to engage with humans in a more natural and seamless manner. The emergence of advanced large language models (LLMs) has driven significant advances in integrating natural language understanding capabilities into social robots. This paper presents a system for speech-guided sequential planning in pick-and-place tasks, which arise across a range of application areas. The proposed system uses Large Language Model Meta AI (Llama3) to interpret voice commands, extracting essential details through parsing and decoding the commands into sequential actions. These actions are sent to DRL-VO, a learning-based control policy built on the Robot Operating System (ROS) that allows a robot to autonomously navigate through social spaces with static infrastructure and crowds of people. We demonstrate the effectiveness of the system in simulation experiments using a Turtlebot 2 in ROS1 and a Turtlebot 3 in ROS2. We also conduct hardware trials using a Clearpath Robotics Jackal UGV, highlighting the system's potential for real-world deployment in scenarios requiring flexible and interactive robotic behaviors.
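The abstract's decode-then-dispatch idea can be sketched as follows. This is a hypothetical illustration, not the paper's pipeline: the Llama3 call is stubbed out, and the JSON response format, action names, and command are all invented assumptions.

```python
import json

def llm_stub(command: str) -> str:
    # Stand-in for a real Llama3 call; assumes the model is prompted to
    # return a plan as JSON (format invented for this sketch).
    return json.dumps({
        "actions": [
            {"op": "navigate", "target": "table"},
            {"op": "pick", "object": "red cup"},
            {"op": "navigate", "target": "kitchen"},
            {"op": "place", "object": "red cup", "target": "counter"},
        ]
    })

def plan_from_speech(command: str) -> list:
    """Decode the LLM response into an ordered list of primitive actions."""
    reply = json.loads(llm_stub(command))
    plan = []
    for step in reply["actions"]:
        # In the real system each action would be dispatched to the robot
        # stack (e.g. a navigation goal handed to DRL-VO for "navigate").
        plan.append((step["op"], step.get("target") or step.get("object")))
    return plan

plan = plan_from_speech("bring the red cup to the kitchen counter")
```

Structured (JSON-style) output is one common way to make LLM plans machine-parseable; the actual prompt design and action vocabulary would depend on the deployed system.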
  2. This paper considers the cultivation of ethical identities among future engineers and computer scientists, particularly those whose professional practice will extensively intersect with emerging technologies enabled by artificial intelligence (AI). Many current engineering and computer science students will go on to participate in the development and refinement of AI, machine learning, robotics, and related technologies, thereby helping to shape the future directions of these applications. Researchers have demonstrated the actual and potential deleterious effects that these technologies can have on individuals and communities. Together, these trends present a timely opportunity to steer AI and robotic design in directions that confront, or at least do not extend, patterns of discrimination, marginalization, and exclusion. Examining ethics interventions in AI and robotics education may yield insights into challenges and opportunities for cultivating ethical engineers. We present our ongoing research on engineering ethics education, examine how our work is situated with respect to current AI and robotics applications, and discuss a curricular module in “Robot Ethics” that was designed to achieve interdisciplinary learning objectives. Finally, we offer recommendations for more effective engineering ethics education, with a specific focus on emerging technologies. 
  3. Artificial intelligence (AI)-enhanced systems are widely adopted in post-secondary education; however, tools and activities for teaching AI and machine learning (ML) concepts to K-12 students have only recently become accessible. Research on K-12 AI education has largely covered student attitudes toward AI careers, AI ethics, and student use of existing AI agents such as voice assistants, with most work focused on high school and middle school. There is no consensus on which AI and ML concepts are grade-appropriate for elementary-aged students or on how elementary students explore and make sense of AI and ML tools. AI is a rapidly evolving technology, and as future decision-makers, children will need to be AI literate [1]. In this paper, we present elementary students' sense-making of simple machine learning concepts. Through this project, we hope to generate a new model for introducing AI concepts to elementary students in school curricula and to provide tangible, trainable representations of ML for students to explore in the physical world. In our first year, our focus has been on simpler machine learning algorithms; our aim is to empower students not only to use AI tools but also to understand how they operate. We believe that appropriate activities can help late-elementary-aged students develop foundational AI knowledge, namely (1) how a robot senses the world and (2) how a robot represents data for making decisions. Educational robotics programs have repeatedly been shown to produce positive learning impacts and increased interest [2]. In this pilot study, we leveraged the LEGO® Education SPIKE™ Prime to introduce ML concepts to upper elementary students. Through pilot testing in three one-week summer programs, we iteratively developed a limited display interface for supervised learning using the nearest neighbor algorithm. We collected videos to perform a qualitative evaluation.
Based on our analysis of student behavior during the robotics training process, we found that some students show interest in exploring pre-trained ML models and training new models while building personally relevant robotic creations and developing solutions to engineering tasks. Although students were interested in using the ML tools for complex tasks, they seemed to prefer block programming or manual motor controls where they felt these were practical.
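The nearest neighbor algorithm the students trained is simple enough to show directly. A minimal 1-NN sketch follows; the sensor features, labels, and toy readings are invented for illustration and do not reflect the abstract's actual LEGO SPIKE interface or data.

```python
def nearest_neighbor(examples, query):
    """1-NN: return the label of the stored example closest to the query.
    Each example is (feature_vector, label); distance is squared Euclidean."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(examples, key=lambda ex: dist(ex[0], query))
    return best[1]

# Toy "trained" set: (color reading, distance reading) -> robot action.
trained = [
    ((0.9, 0.1), "stop"),   # bright, close object
    ((0.2, 0.8), "go"),     # dark, far away
    ((0.8, 0.7), "turn"),   # bright, far away
]

label = nearest_neighbor(trained, (0.85, 0.15))  # closest to the "stop" example
```

1-NN is a natural first supervised-learning algorithm for this age group because "training" is just storing labeled examples and prediction is "which stored example is most similar?", both of which can be acted out physically.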
  4. We survey applications of pretrained foundation models in robotics. Traditional deep learning models in robotics are trained on small datasets tailored for specific tasks, which limits their adaptability across diverse applications. In contrast, foundation models pretrained on internet-scale data appear to have superior generalization capabilities, and in some instances display an emergent ability to find zero-shot solutions to problems that are not present in the training data. Foundation models may hold the potential to enhance various components of the robot autonomy stack, from perception to decision-making and control. For example, large language models can generate code or provide common sense reasoning, while vision-language models enable open-vocabulary visual recognition. However, significant open research challenges remain, particularly around the scarcity of robot-relevant training data, safety guarantees and uncertainty quantification, and real-time execution. In this survey, we study recent papers that have used or built foundation models to solve robotics problems. We explore how foundation models contribute to improving robot capabilities in the domains of perception, decision-making, and control. We discuss the challenges hindering the adoption of foundation models in robot autonomy and provide opportunities and potential pathways for future advancements. The GitHub project corresponding to this paper can be found here: https://github.com/robotics-survey/Awesome-Robotics-Foundation-Models.
  5. Advancements in robotics and AI have increased the demand for interactive robots in healthcare and assistive applications. However, ensuring safe and effective physical human-robot interactions (pHRIs) remains challenging because of the complexities of human motor communication and intent recognition. Traditional physics-based models struggle to capture the dynamic nature of human force interactions, limiting robotic adaptability. To address these limitations, neural networks (NNs) have been explored for force-movement intention prediction. While multi-layer perceptron (MLP) networks show potential, they struggle with temporal dependencies and generalization. Long Short-Term Memory (LSTM) networks effectively model sequential dependencies, while convolutional neural networks (CNNs) enhance spatial feature extraction from human force data. Building on these strengths, this study introduces a hybrid LSTM-CNN framework to improve force-movement intention prediction, increasing accuracy from 69% to 86% through effective denoising and advanced architectures. The combined CNN-LSTM network proved particularly effective in handling individualized force-velocity relationships and presents a generalizable model, paving the way for more adaptive strategies in robot guidance. These findings highlight the importance of integrating spatial and temporal modeling to enhance robot precision, responsiveness, and human-robot collaboration. Index Terms — Physical Human-Robot Interaction, Intention Detection, Machine Learning, Long Short-Term Memory (LSTM)
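The spatial-then-temporal structure the abstract describes (CNN features feeding an LSTM) can be sketched as a bare forward pass. This is only an architectural illustration with random weights and toy dimensions; the paper's actual layer sizes, training procedure, and data are not given in the abstract, so everything numeric here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """x: (T, C_in) force/velocity channels; kernels: (C_out, K, C_in).
    'Valid' 1-D convolution over time with ReLU, i.e. the CNN stage that
    extracts local spatial features from the raw signal."""
    T, _ = x.shape
    c_out, k, _ = kernels.shape
    out = np.empty((T - k + 1, c_out))
    for t in range(T - k + 1):
        window = x[t:t + k]                       # (K, C_in) slice of time
        out[t] = [np.sum(window * w) for w in kernels]
    return np.maximum(out, 0.0)

def lstm(x, Wx, Wh, b, h0, c0):
    """Plain LSTM cell unrolled over time; returns the final hidden state,
    i.e. the temporal stage that models sequential dependencies."""
    h, c = h0, c0
    H = h.shape[0]
    for xt in x:
        z = Wx @ xt + Wh @ h + b                  # all four gates at once, (4H,)
        i = 1 / (1 + np.exp(-z[:H]))              # input gate
        f = 1 / (1 + np.exp(-z[H:2*H]))           # forget gate
        o = 1 / (1 + np.exp(-z[2*H:3*H]))         # output gate
        g = np.tanh(z[3*H:])                      # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Toy sizes: 50 timesteps, 3 input channels, 8 conv filters of width 5,
# 16 LSTM hidden units, 2 hypothetical movement-intent classes.
T, C_in, C_out, K, H = 50, 3, 8, 5, 16
signal = rng.normal(size=(T, C_in))               # stand-in force/velocity series
feats = conv1d(signal, rng.normal(size=(C_out, K, C_in)) * 0.1)
hidden = lstm(feats,
              rng.normal(size=(4 * H, C_out)) * 0.1,
              rng.normal(size=(4 * H, H)) * 0.1,
              np.zeros(4 * H), np.zeros(H), np.zeros(H))
intent = hidden @ (rng.normal(size=(H, 2)) * 0.1)  # linear read-out over classes
```

In practice this pipeline would be built with a deep learning framework and trained end to end; the point of the sketch is only the data flow, with convolution shrinking the time axis while extracting spatial features and the LSTM compressing the resulting sequence into a single intent representation.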