Product disassembly is essential for remanufacturing operations and the recovery of end-of-use devices. However, disassembly has often been performed manually, posing significant safety issues for human workers. Recently, human-robot collaboration has become popular as a way to reduce the human workload and handle hazardous materials. However, due to current limitations, robots are not fully capable of performing every disassembly task, so it is critical to determine whether a robot can accomplish a specific disassembly task. This study develops a disassembly score that represents how easily a component can be disassembled by robots, considering the attributes of the component along with robotic capability. Five factors, including component weight, shape, size, accessibility, and positioning, are considered when developing the disassembly score. Further, the relationship between the five factors and robotic capabilities, such as grabbing and placing, is discussed. The MaxViT (Multi-Axis Vision Transformer) model is used to determine component sizes through image processing of the XPS 8700 desktop, demonstrating the potential for automating disassembly score generation. Moreover, the proposed disassembly score is discussed in terms of determining the appropriate work setting for disassembly operations, under three main categories: human-robot collaboration (HRC), semi-HRC, and worker-only settings. A framework for calculating disassembly time, considering human-robot collaboration, is also proposed.
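The abstract above describes aggregating five component attributes into a single disassembly score and mapping that score to one of three work settings. A minimal sketch of that idea follows; the factor names come from the abstract, but the equal weighting, the 0-to-1 rating scale, and the setting thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch: weighted aggregation of the five factors named in the abstract.
# Weights, the rating scale, and thresholds are assumptions for illustration.

FACTORS = ("weight", "shape", "size", "accessibility", "positioning")

# Hypothetical equal weighting; the paper may weight factors differently.
WEIGHTS = {f: 0.2 for f in FACTORS}

def disassembly_score(ratings: dict) -> float:
    """Weighted sum of per-factor ease ratings in [0, 1]
    (1.0 = easiest for a robot to handle)."""
    return sum(WEIGHTS[f] * ratings[f] for f in FACTORS)

def work_setting(score: float) -> str:
    """Map the score to one of the three settings named in the abstract.
    The threshold values here are assumptions."""
    if score >= 0.7:
        return "HRC"          # robot can handle most of the task
    if score >= 0.4:
        return "semi-HRC"     # robot assists, human leads
    return "worker-only"      # beyond current robotic capability

component = {"weight": 0.9, "shape": 0.8, "size": 0.7,
             "accessibility": 0.6, "positioning": 0.8}
print(work_setting(disassembly_score(component)))  # prints "HRC"
```

A linear weighted sum is the simplest aggregation consistent with the abstract's description; the actual scoring function could be nonlinear or capability-conditioned.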
Case-based Robotic Architecture with Multiple Underlying Ethical Frameworks for Human-Robot Interaction
As robots are becoming more intelligent and more commonly used, it is critical for robots to behave ethically in human-robot interactions. However, there is a lack of agreement on a correct moral theory to guide human behavior, let alone robots. This paper introduces a robotic architecture that leverages cases drawn from different ethical frameworks to guide the ethical decision-making process and select the appropriate robotic action based on the specific situation. We also present an architecture implementation design used on a pill sorting task for older adults, where the robot needs to decide if it is appropriate to provide false encouragement so that the adults continue to be engaged in the training task.
- Award ID(s): 1848974
- PAR ID: 10394583
- Date Published:
- Journal Name: 7th International Conference on Robot Ethics and Standards
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Ethical decision-making is difficult, certainly for robots, let alone humans. If a robot's ethical decision-making process is going to be designed based on some approximation of how humans operate, then the assumption is that a good model of how humans make an ethical choice is readily available. Yet no single ethical framework seems sufficient to capture the diversity of human ethical decision making. Our work seeks to develop the computational underpinnings that will allow a robot to use multiple ethical frameworks that guide it towards doing the right thing. As a step towards this goal, we have collected data investigating how regular adults and ethics experts approach ethical decisions in a healthcare scenario and a game-playing scenario. The decisions made by the former group are intended to represent an approximation of a folk morality approach to these dilemmas. The experts, on the other hand, were asked to judge what decision would result if a person were using one of several different types of ethical frameworks. The resulting data may reveal which features of the pill sorting and game playing scenarios contribute to similarities and differences between expert and non-expert responses. This type of approach to programming a robot may one day be able to rely on specific features of an interaction to determine which ethical framework to use in the robot's decision making.
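The abstract above suggests using features of an interaction to select which ethical framework should drive a robot's decision. A minimal case-based sketch of that idea is below; the features, frameworks, and stored cases are hypothetical examples, not data from the study.

```python
# Sketch of case-based framework selection: match the situation's features
# against stored cases, each labeled with a framework and the action it
# recommends. All cases and feature names below are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    features: frozenset   # salient features of the stored situation
    framework: str        # ethical framework the case exemplifies
    action: str           # action that framework recommends

CASE_BASE = [
    Case(frozenset({"high_risk", "older_adult"}), "Kantian", "no_deception"),
    Case(frozenset({"low_risk", "child"}), "Ethics of Care", "encourage"),
]

def select_action(situation: set) -> tuple:
    """Pick the stored case whose features overlap the situation most,
    returning the framework to apply and its recommended action."""
    best = max(CASE_BASE, key=lambda c: len(c.features & situation))
    return best.framework, best.action

print(select_action({"high_risk", "older_adult", "pill_sorting"}))
# prints ('Kantian', 'no_deception')
```

Feature-overlap matching is the simplest retrieval rule; a fuller implementation would need a richer similarity measure and a policy for ties or novel situations.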
-
What Happens When Robots Punish? Evaluating Human Task Performance During Robot-Initiated Punishment
This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as inaccurate sorting or sorting too slowly. Participants were either punished verbally, by being told to stop sorting for a fixed time, or physically, by restraining their ability to sort with an in-house crafted robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered punishments, with participant task performance and subjective perceptions of their interaction with the robot recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lesser rate than human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and whether people accept a robot's authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
-
This work proposes the development of a robot to perform appropriate tasks to assist low-income older adults, merging two previous studies: one focused on task investigation and deployment of mobile robots in elder care facilities, and the other on design investigation for a socially assistive robot using low-cost, modular hardware and software. We identified that hydration, walking, and socialization were tasks appropriate for the robot and most impactful for the older adults. Another outcome was the importance of the HRI component in implementing these tasks; we therefore propose merging both studies to first investigate preferences in service robots for elder care.
-
This paper describes current progress on developing an ethical architecture for robots that are designed to follow human ethical decision-making processes. We surveyed both regular adults (folks) and ethics experts (experts) on what they consider to be ethical behavior in two specific scenarios: pill-sorting with an older adult and game playing with a child. A key goal of the surveys is to better understand human ethical decision-making. In the first survey, folk responses were based on the subject's ethical choices ("folk morality"); in the second survey, expert responses were based on the expert's application of different formal ethical frameworks to each scenario. We observed that most of the formal ethical frameworks we included in the survey (Utilitarianism, Kantian Ethics, Ethics of Care, and Virtue Ethics), as well as "folk morality", were conservative toward deception in the high-risk task with the older adult when both the adult and the child had significant performance deficiencies.