- 
    Developing whole-body tactile skins for robots remains a challenging task: existing solutions often prioritize modular, one-size-fits-all designs that, while versatile, fail to account for the robot's specific shape and the unique demands of its operational context. In this work, we introduce GenTact Toolbox, a computational pipeline for creating versatile whole-body tactile skins tailored to both robot shape and application domain. Our method includes procedural mesh generation for conforming to a robot's topology, task-driven simulation to refine sensor distribution, and multi-material 3D printing for shape-agnostic fabrication. We validate our approach by creating and deploying six capacitive sensing skins on a Franka Research 3 robot arm in a human-robot interaction scenario. This work represents a shift from "one-size-fits-all" tactile sensors toward context-driven, highly adaptable designs that can be customized for a wide range of robotic systems and applications. The project website is available at https://hiro-group.ronc.one/gentacttoolbox
    Free, publicly-accessible full text available May 19, 2026
- 
    Free, publicly-accessible full text available March 4, 2026
- 
    Estimating the location of contact is a primary function of artificial tactile sensing apparatuses that perceive the environment through touch. Existing contact localization methods assume flat geometry and uniform sensor distributions as a simplifying assumption, limiting their ability to be used on 3D surfaces with variable-density sensing arrays. This paper studies contact localization on an artificial skin embedded with mutual capacitance tactile sensors, arranged non-uniformly in an unknown distribution along a semi-conical 3D geometry. A fully connected neural network is trained to localize the touching points on the embedded tactile sensors. The studied online model achieves a localization error of 5.7 ± 3.0 mm. This research contributes a versatile tool and a robust solution for contact localization on skins with ambiguous shape and internal sensor distribution.
    Free, publicly-accessible full text available December 15, 2025
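The core regression setup described above (raw sensor readings in, 3D contact coordinates out, via a fully connected network) can be sketched as follows. This is a minimal illustration with synthetic stand-in data and a tiny NumPy network; the paper's sensor geometry, dataset, and architecture are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 16 capacitance readings -> a 3D contact point.
# (Illustrative only; not the paper's sensors or dataset.)
n, d_in, d_out = 512, 16, 3
W_true = rng.normal(size=(d_in, d_out))
X = rng.normal(size=(n, d_in))
Y = np.tanh(X @ W_true)  # nonlinear ground-truth mapping

# One-hidden-layer fully connected network, plain gradient descent.
h = 32
W1 = rng.normal(scale=0.1, size=(d_in, h))
W2 = rng.normal(scale=0.1, size=(h, d_out))
lr = 0.01

def forward(X):
    A = np.tanh(X @ W1)   # hidden activations
    return A, A @ W2      # predicted contact coordinates

_, Y0 = forward(X)
err0 = np.mean((Y0 - Y) ** 2)  # mean squared error before training

for _ in range(1000):
    A, Yhat = forward(X)
    G = 2 * (Yhat - Y) / n            # gradient of loss w.r.t. predictions
    dW2 = A.T @ G
    dA = G @ W2.T
    dW1 = X.T @ (dA * (1 - A ** 2))   # backprop through tanh
    W1 -= lr * dW1
    W2 -= lr * dW2

_, Y1 = forward(X)
err1 = np.mean((Y1 - Y) ** 2)  # error after training (should be lower)
```

In practice the output units would be millimeter coordinates on the skin surface, so the reported 5.7 ± 3.0 mm figure corresponds to the Euclidean distance between predicted and true contact points rather than this raw MSE.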
- 
    Robot Imitation Learning (IL) is a crucial technique in robot learning, where agents learn by mimicking human demonstrations. However, IL encounters scalability challenges stemming from both non-user-friendly demonstration collection methods and the extensive time required to amass a sufficient number of demonstrations for effective training. In response, we introduce the Augmented Reality for Collection and generAtion of DEmonstrations (ARCADE) framework, designed to scale up demonstration collection for robot manipulation tasks. Our framework combines two key capabilities: 1) it leverages AR to make demonstration collection as simple as users performing daily tasks using their hands, and 2) it enables the automatic generation of additional synthetic demonstrations from a single human-derived demonstration, significantly reducing user effort and time. We assess ARCADE's performance on a real Fetch robot across three robotics tasks: 3-Waypoints-Reach, Push, and Pick-And-Place. Using our framework, we were able to rapidly train a policy using vanilla Behavioral Cloning (BC), a classic IL algorithm, which excelled across these three tasks. We also deploy ARCADE on a real household task, Pouring-Water, achieving an 80% success rate.
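Vanilla Behavioral Cloning, as used above, reduces imitation learning to supervised regression from states to expert actions. The sketch below illustrates this, including a toy version of demonstration augmentation (jittered copies of collected states reusing the expert action); the expert, dimensions, and data are invented for illustration and are not ARCADE's actual tasks or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical demonstrations: a linear "expert" maps 4-D states to 2-D actions.
S = rng.normal(size=(200, 4))       # demonstrated states
K_expert = rng.normal(size=(4, 2))
A = S @ K_expert                    # expert actions

# Toy synthetic-demonstration generation: jitter each state slightly and
# reuse the expert action, doubling the dataset from the originals.
S_aug = np.vstack([S, S + 0.01 * rng.normal(size=S.shape)])
A_aug = np.vstack([A, A])

# Vanilla BC: least-squares fit of a linear policy to (state, action) pairs.
K_policy, *_ = np.linalg.lstsq(S_aug, A_aug, rcond=None)

# Imitation error of the cloned policy on the original demonstrated states.
mse = np.mean((S @ K_policy - A) ** 2)
```

Real BC policies are usually neural networks trained by minimizing the same kind of supervised loss over (state, action) pairs; the least-squares fit here is the closed-form linear special case.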
- 
    Chemical manufacturing is a growing field that contributes to many industries and employs tens of thousands of researchers in wet labs. Automation tools for synthetic chemistry are of interest not only for their potential impact on efficiency and productivity, but also on human resources and safety, since synthetic chemistry poses a number of occupational risks and is largely inaccessible to researchers with physical disabilities. Currently, most automation tools for synthetic chemistry are either designed to perform highly specialized tasks or they are designed as closed-loop systems with minimal interaction between human and machine during a synthesis procedure. We are pursuing an alternative, human-centered approach to robotic tools for synthetic chemistry, in which general-purpose collaborative robots (cobots) offer diverse forms of support to human researchers in the lab. In order to design frameworks for productive scientist-cobot collaborations, we need a deep understanding of the task space in synthetic chemistry labs and the impact of these various activities on the researchers. Based on observations and surveys from a group of experimental scientists, we have identified and analyzed 10 manual tasks commonly performed by researchers in the wet lab, each of which may be broken down into a sequence of sub-tasks. We conducted an in-depth analysis of the two most frequently performed sub-tasks: liquids dispensing and solids handling. Through subcoding, we identified 40 liquid dispensing typologies and 18 solid handling typologies, and evaluated the burden associated with each of these sub-tasks using the NASA TLX. These data will be of value for the design of human-centered automation tools that support, rather than displace, researchers performing manual tasks in the lab, in order to foster a safer and more accessible lab environment.
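The NASA TLX burden evaluation mentioned above scores a task on six workload dimensions (0-100 each); the raw TLX is their unweighted mean, while the weighted variant uses pairwise-comparison tallies as weights. The numbers below are illustrative, not the paper's survey data.

```python
# Hypothetical NASA TLX ratings for one wet-lab sub-task (0-100 per dimension).
ratings = {
    "mental_demand": 55,
    "physical_demand": 70,
    "temporal_demand": 40,
    "performance": 30,
    "effort": 65,
    "frustration": 35,
}

# Pairwise-comparison weights (tallies out of 15 comparisons), used by the
# weighted TLX variant; illustrative values only.
weights = {
    "mental_demand": 3,
    "physical_demand": 5,
    "temporal_demand": 2,
    "performance": 1,
    "effort": 3,
    "frustration": 1,
}

# Raw ("RTLX") score: unweighted mean of the six ratings.
raw_tlx = sum(ratings.values()) / len(ratings)

# Weighted TLX: ratings weighted by comparison tallies, normalized by 15.
weighted_tlx = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())
```

Here the weighted score (57.0) exceeds the raw mean because the highest-rated dimension, physical demand, also carries the largest weight, which is the kind of distinction that matters when comparing burden across sub-tasks like liquid dispensing and solids handling.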
- 
    Large Language Models (LLMs) trained using massive text datasets have recently shown promise in generating action plans for robotic agents from high-level text queries. However, these models typically do not consider the robot's environment, resulting in generated plans that may not actually be executable, due to ambiguities in the planned actions or environmental constraints. In this paper, we propose an approach to generate environmentally-aware action plans that agents are better able to execute. Our approach involves integrating environmental objects and object relations as additional inputs into LLM action plan generation to provide the system with an awareness of its surroundings, resulting in plans where each generated action is mapped to objects present in the scene. We also design a novel scoring function that, along with generating the action steps and associating them with objects, helps the system disambiguate among object instances and take into account their states. We evaluated our approach using the VirtualHome simulator and the ActivityPrograms knowledge base and found that action plans generated from our system had a 310% improvement in executability and a 147% improvement in correctness over prior work. The complete code and a demo of our method are publicly available at https://github.com/hri-ironlab/scene_aware_language_planner.
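The idea of scoring candidate action steps by how well they ground in the scene can be sketched as below. This is a simplified stand-in, not the paper's actual scoring function: the language-model log-probabilities, the scene, and the linear combination with a grounding bonus are all made up for illustration.

```python
# Scene objects and their states, as might be extracted from a simulator.
scene_objects = {"mug": "clean", "faucet": "off", "sponge": "dry"}

candidates = [
    # (action step, referenced object, stand-in LM log-probability)
    ("grab mug", "mug", -1.2),
    ("grab cup", "cup", -0.9),        # "cup" is not in the scene
    ("turn on faucet", "faucet", -1.5),
]

def score(step, obj, lm_logprob, alpha=2.0):
    """Combine the LM's preference with an object-grounding bonus.

    Steps whose object cannot be matched to the scene get no bonus,
    so fluent-but-ungrounded steps lose to executable ones.
    """
    grounded = 1.0 if obj in scene_objects else 0.0
    return lm_logprob + alpha * grounded

# Pick the highest-scoring candidate step.
best = max(candidates, key=lambda c: score(*c))
```

Note that "grab cup" has the best raw LM score but refers to an object absent from the scene, so the grounding term steers selection to "grab mug" instead, which is the failure mode (fluent but unexecutable plans) the approach targets.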