Search for: All records

Creators/Authors contains: "Hoffman, Guy"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Biological skin has numerous functions, such as protection, sensing, expression, and regulation. In contrast, a robot’s skin is usually regarded as a passive, static separation between the body and the environment. In this article, we explore the design opportunities of a robot’s skin as a socially expressive medium. Inspired by living organisms, we discuss the roles of interactive robotic skin from four perspectives: expression, perception, regulation, and mechanical action. We focus on the expressive function of skin to sketch design concepts and present a flexible technical method for embodiment. The proposed method integrates pneumatically actuated dynamic textures on soft skin, with forms and kinematic patterns generating a variety of visual and haptic expressions. We demonstrate the proposed design space with six texture-changing skin prototypes and discuss their expressive capacities.
    Free, publicly-accessible full text available June 30, 2024
  2. Touchibo is a modular robotic platform for enriching interpersonal communication in human-robot group activities, suitable for children with mixed visual abilities. Touchibo incorporates several modalities, including dynamic textures, scent, audio, and light. Two prototypes are demonstrated for supporting storytelling activities and mediating group conversations between children with and without visual impairment. Our goal is to provide an inclusive platform for children to interact with each other, perceive their emotions, and become more aware of how they impact others. 
  3.
    This article presents the design process of a supernumerary wearable robotic forearm (WRF), along with methods for stabilizing the robot’s end-effector using human motion prediction. The device acts as a lightweight “third arm” for the user, extending their reach during handovers and manipulation in close-range collaborative activities. It was developed iteratively, following a user-centered design process that included an online survey, a contextual inquiry, and an in-person usability study. Simulations show that the WRF significantly enhances a wearer’s reachable workspace volume while remaining within biomechanical ergonomic load limits during typical usage scenarios. While operating the device in such scenarios, the user’s body movements introduce disturbances in its pose. We present two methods for compensating these disturbances: an autoregressive (AR) time-series model and a recurrent neural network (RNN). These models forecast the wearer’s body movements over prediction horizons determined through linear system identification. They were trained offline on a subset of the KIT Human Motion Database and tested in five usage scenarios in which the 3D pose of the WRF’s end-effector was to be kept static. The predictive models reduced end-effector position errors by up to 26% compared to direct feedback control.
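The AR forecasting idea in the abstract above can be sketched with a minimal example. This is a hypothetical illustration, not the paper's implementation: a synthetic one-dimensional signal stands in for the wearer's body motion, an AR(2) model is fit by least squares, and the fitted recurrence is iterated over the prediction horizon.

```python
# Hypothetical sketch: least-squares AR(2) fit and multi-step forecast
# of a 1-D motion signal, as one might use to compensate system delay.

def fit_ar2(x):
    """Solve the normal equations for x[t] = a1*x[t-1] + a2*x[t-2]."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        p1, p2, y = x[t - 1], x[t - 2], x[t]
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        b1 += p1 * y;   b2 += p2 * y
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (b2 * s11 - b1 * s12) / det
    return a1, a2

def forecast(x, a1, a2, h):
    """Iterate the fitted recurrence h steps past the end of x."""
    hist = list(x[-2:])
    for _ in range(h):
        hist.append(a1 * hist[-1] + a2 * hist[-2])
    return hist[-1]

# Synthetic, stable oscillation standing in for wearer sway.
x = [1.0, 0.9]
for t in range(2, 60):
    x.append(1.5 * x[-1] - 0.7 * x[-2])

a1, a2 = fit_ar2(x)
```

Because the synthetic signal exactly satisfies an AR(2) recurrence, the fit recovers its coefficients; on real motion data the fit is approximate and the horizon is set by the identified delay.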
  4.
  5.
    In this paper, we explore what role humans might play in designing tools for reinforcement learning (RL) agents to interact with the world. Recent work has explored RL methods that optimize a robot’s morphology while learning to control it, effectively dividing an RL agent’s environment into the external world and the agent’s interface with the world. Taking a user-centered design (UCD) approach, we explore the potential of a human, instead of an algorithm, redesigning the agent’s tool. Using UCD to design for a machine learning agent raises several research questions, including what it means to understand an RL agent’s experience, beliefs, tendencies, and goals. After discussing these questions, we present a system we developed to study humans designing a 2D racecar for an RL autonomous driver. We conclude with findings and insights from exploratory pilot studies with twelve users of this system.
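The split described above, between the external world and a designable tool interface, can be illustrated with a toy sketch. The "racecar" reward model, the wheel-radius parameter, and all numbers below are invented for illustration; the point is only that the same rollout interface can be driven by either an algorithmic optimizer or a human-supplied design.

```python
# Hypothetical toy: the environment is split into a fixed "world" (rollout)
# and a designable "tool" parameter that a human or an optimizer may set.

def rollout(wheel_radius, policy_speed):
    """Toy racecar world: progress per episode depends on the tool design.
    Traction peaks at an assumed ideal radius of 0.5 (made-up model)."""
    traction = max(0.0, 1.0 - abs(wheel_radius - 0.5))
    return traction * policy_speed * 10  # total track progress

def optimize_tool(candidates, policy_speed):
    """Algorithmic designer: pick the candidate with the best rollout."""
    return max(candidates, key=lambda r: rollout(r, policy_speed))

human_design = 0.45  # a human designer's choice, informed by intuition
algo_design = optimize_tool([0.1, 0.3, 0.5, 0.7, 0.9], policy_speed=1.0)
```

A UCD study would compare how the human's design process and outcome differ from the optimizer's search over the same interface.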
  6.
    This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated by human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the sensor arrays used in traditional social-robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing on surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows using Densely Connected Convolutional Networks, and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5% to 96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications of this method include interactive screens on inflatable robots and privacy-maintaining robots for the home.
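The position-tracking half of the pipeline above can be sketched with a simple centroid estimator over a thresholded brightness grid. This hypothetical snippet stands in for the paper's learned components (it does not involve a DenseNet); the frame, threshold, and blob are invented for illustration.

```python
def touch_centroid(image, threshold=0.5):
    """Estimate a touch location as the centroid of shadow (dark) pixels.
    `image` is a 2-D list of brightness values in [0, 1]."""
    xs = ys = 0.0
    n = 0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v < threshold:      # dark enough to count as shadow
                ys += r; xs += c; n += 1
    if n == 0:
        return None                # no touch or hover detected
    return (ys / n, xs / n)

# A 5x5 frame with a dark 2x2 blob simulating a finger's shadow.
frame = [[1.0] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 0.1
```

Tracking over time would re-run this per frame; gesture classification, as in the paper, would instead feed the raw shadow images to a CNN.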
  7.
    We address the challenge of an intelligent virtual agent inferring the design intentions of the human it collaborates with. First, we propose a dynamic Bayesian network model that relates design intentions, objectives, and solutions during a human's exploration of a problem space. We then train the model on design behaviors generated by a search agent and use the model parameters to infer the design intentions in a test set of real human behaviors. We find that the model infers the exact intentions across three objectives associated with a sequence of design outcomes 31.3% of the time. Inference accuracy is 50.9% for the top two predictions and 67.2% for the top three. For any single intention over an objective, the model's mean F1-score is 0.719. This provides a reasonable foundation for an intelligent virtual agent to infer design intentions purely from design outcomes, toward establishing joint intentions with a human designer. These results also shed light on the potential benefits and pitfalls of using simulated data to train a model of human design intentions.
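Inference in a dynamic Bayesian network over hidden intentions can be illustrated with a two-state forward (filtering) pass. The states, transition matrix, and emission probabilities below are invented for illustration and are far simpler than the paper's model.

```python
# Hypothetical two-state intention filter: P(intention_t | obs_1..t)
# via the standard forward algorithm (predict, weight by emission, normalize).

STATES = ["explore", "refine"]
PRIOR = {"explore": 0.5, "refine": 0.5}
TRANS = {"explore": {"explore": 0.8, "refine": 0.2},
         "refine":  {"explore": 0.3, "refine": 0.7}}
EMIT = {"explore": {"jump": 0.7, "tweak": 0.3},   # big design jumps
        "refine":  {"jump": 0.2, "tweak": 0.8}}   # small local edits

def filter_intentions(observations):
    """Return the filtered belief over intentions after each observation."""
    belief = dict(PRIOR)
    history = []
    for obs in observations:
        pred = {s: sum(belief[p] * TRANS[p][s] for p in STATES)
                for s in STATES}
        unnorm = {s: pred[s] * EMIT[s][obs] for s in STATES}
        z = sum(unnorm.values())
        belief = {s: unnorm[s] / z for s in STATES}
        history.append(belief)
    return history

beliefs = filter_intentions(["jump", "tweak", "tweak"])
```

A design-outcome sequence dominated by small tweaks shifts the belief toward the "refine" intention, mirroring how the paper infers intentions purely from outcomes.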
  8. For a wearable robotic arm to autonomously assist a human, it must be able to stabilize its end-effector in light of the human's independent activities. This paper presents a method for stabilizing the end-effector in planar assembly and pick-and-place tasks. Ideally, given accurate positioning of the end-effector and the wearable robot's attachment point, human disturbances could be compensated using a simple feedback control strategy. Realistically, system delays in both sensing and actuation call for a predictive approach. In this work, we characterize the actuators of a wearable robotic arm and estimate these delays using linear models. We then model the motion of the human arm as an autoregressive process to predict the deviation of the robot's base position at a time horizon equal to the estimated delay. Generating set points for the end-effector using this predictive model, we report position errors reduced by 19.4% (x) and 20.1% (y) compared to a feedback control strategy without prediction.
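Why predicting over the delay horizon helps can be sketched with a toy simulation. The sinusoidal disturbance, the delay of five samples, and the linear-extrapolation predictor below are all assumptions made for illustration; they are not the paper's identified models.

```python
import math

def disturbance(t):
    """Toy wearer body sway in metres (illustrative, not measured data)."""
    return 0.02 * math.sin(0.1 * t)

def predicted(t, delay):
    """Linearly extrapolate the delayed measurement over the delay horizon."""
    slope = disturbance(t - delay) - disturbance(t - delay - 1)
    return disturbance(t - delay) + delay * slope

def mean_abs_error(delay, use_prediction):
    """Mean tracking error when correcting with delayed vs. predicted values."""
    total, n = 0.0, 0
    for t in range(delay + 1, 500):
        if use_prediction:
            correction = predicted(t, delay)
        else:
            correction = disturbance(t - delay)  # plain delayed feedback
        total += abs(disturbance(t) - correction)
        n += 1
    return total / n

fb_err = mean_abs_error(5, use_prediction=False)
pred_err = mean_abs_error(5, use_prediction=True)
```

For a slowly varying disturbance, even this crude extrapolation tracks the true deviation more closely than acting on the stale measurement alone, which is the intuition behind setting the prediction horizon to the estimated delay.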
  9. This paper presents the design of a wearable robotic forearm for close-range human-robot collaboration. The robot's function is to serve as a lightweight supernumerary third arm for shared workspace activities. We present a functional prototype resulting from an iterative design process that included several user studies. An analysis of the robot's kinematics shows an increase in reachable workspace of 246% compared to the natural human reach. The robot's degrees of freedom and range of motion support a variety of usage scenarios with the robot as a collaborative tool, including self-handovers, fetching objects while the human's hands are occupied, assisting human-human collaboration, and stabilizing an object. We analyze the biomechanical loads for these scenarios and find that the design is able to operate within human ergonomic wear limits. We then report on a pilot human-robot interaction study indicating that robot autonomy is more task-time efficient and preferred by users compared to direct voice control. These results suggest that the design presented here is a promising configuration for a lightweight wearable robotic augmentation device and can serve as a basis for further research into human-wearable collaboration.
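The reachable-workspace comparison can be illustrated with a back-of-envelope planar model: a two-link arm with unrestricted revolute joints sweeps an annulus of area pi*((l1+l2)^2 - (l1-l2)^2) = 4*pi*l1*l2. The link lengths and the full-joint-range assumption below are invented for illustration and do not reproduce the paper's 246% figure.

```python
import math

def planar_reach_area(l1, l2):
    """Reachable area of a planar 2-link arm with full joint ranges:
    the annulus between radii |l1 - l2| and l1 + l2 (simplified model)."""
    return math.pi * ((l1 + l2) ** 2 - (l1 - l2) ** 2)

human = planar_reach_area(0.30, 0.35)             # assumed arm link lengths (m)
augmented = planar_reach_area(0.30, 0.35 + 0.25)  # hypothetical WRF extension
gain = augmented / human - 1.0                    # fractional workspace increase
```

A full analysis, as in the paper, would use the actual joint limits and 3D kinematics, typically via Monte Carlo sampling of joint configurations rather than a closed-form annulus.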