- Award ID(s):
- 1828355
- Publication Date:
- NSF-PAR ID:
- 10310578
- Journal Name:
- 16th International Manufacturing Science and Engineering Conference
- Sponsoring Org:
- National Science Foundation
More Like this
-
In modern industrial manufacturing processes, robotic manipulators are routinely used in assembly, packaging, and material-handling operations. During production, changing end-of-arm tooling is frequently necessary for process flexibility and reuse of robotic resources. In conventional operation, a tool changer is sometimes employed to load and unload end-effectors; however, the robot must be manually taught to locate the tool changers by operators via a teach pendant. During tool-change teaching, the operator spends considerable effort and time aligning the master and tool sides of the coupler, adjusting the motion speed of the robotic arm and observing the alignment from different viewpoints. In this paper, a custom robotic system, the NeXus, was programmed to locate and change tools automatically via an RGB-D camera. The NeXus was configured as a multi-robot system for multiple tasks, including assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobot prototypes. Thus, different tools are employed by an industrial robotic arm to position grippers, printers, and other types of end-effectors in the workspace. To improve the precision and cycle time of the robotic tool change, we mounted an eye-in-hand RGB-D camera and employed visual servoing to automate the tool-change process. We then …
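The eye-in-hand visual-servoing step described in this abstract can be illustrated with the classic image-based control law, v = -λ L⁺(s - s*). The sketch below is a minimal NumPy illustration under assumed values; the feature points, depths, and gain are hypothetical and not taken from the paper's implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point
    at depth Z, mapping the camera twist [vx, vy, vz, wx, wy, wz] to the
    feature velocity (xdot, ydot)."""
    return np.array([
        [-1 / Z,      0, x / Z,      x * y, -(1 + x**2),  y],
        [     0, -1 / Z, y / Z, 1 + y**2,       -x * y,  -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: v = -gain * L^+ (s - s*)."""
    err = (features - desired).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ err

# Toy example: four normalized image points offset from their goal pose
s = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
s_star = s * 0.5                      # desired positions (assumed)
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)   # 6-DOF camera twist
```

In practice the depths Z would come from the RGB-D sensor at each iteration, and the loop would run until ‖s - s*‖ falls below a tolerance.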
-
Research in creative robotics continues to expand across all creative domains, including art, music, and language. Creative robots are primarily designed to be task-specific, with limited research into the implications of their design outside their core task. In the case of a musical robot, this includes when a human sees and interacts with the robot before and after the performance, as well as in between pieces. These non-musical interaction tasks, such as the presence of a robot during musical-equipment setup, play a key role in the human perception of the robot but have received only limited attention. In this paper, we describe a new audio system using emotional musical prosody, designed to match the creative process of a musical robot for use before, between, and after musical performances. Our generation system relies on the creation of a custom dataset for musical prosody. This system is designed foremost to operate in real time and to allow rapid generation and dialogue exchange between human and robot. For this reason, the system combines symbolic deep learning, through a Conditional Convolutional Variational Autoencoder, with an emotion-tagged audio sampler. We then compare this to a state-of-the-art text-to-speech system in our robotic platform, Shimon …
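The conditional generation path of a CVAE, as named in this abstract, can be sketched at a high level: a latent code is sampled via the reparameterization trick and decoded together with an emotion condition. The NumPy sketch below is purely illustrative; the emotion tag set, linear "decoder", and dimensions are assumptions standing in for the paper's convolutional architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

EMOTIONS = ["happy", "sad", "angry", "calm"]   # hypothetical tag set

def one_hot(label):
    v = np.zeros(len(EMOTIONS))
    v[EMOTIONS.index(label)] = 1.0
    return v

def reparameterize(mu, log_var):
    """CVAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z, emotion, W, b):
    """Toy linear 'decoder': concatenates the latent code with the emotion
    condition and maps it to a short contour (placeholder for the paper's
    convolutional decoder)."""
    h = np.concatenate([z, one_hot(emotion)])
    return np.tanh(W @ h + b)

latent_dim, out_dim = 8, 16
W = rng.standard_normal((out_dim, latent_dim + len(EMOTIONS))) * 0.1
b = np.zeros(out_dim)

# Sampling from the prior N(0, I), conditioned on a chosen emotion
mu, log_var = np.zeros(latent_dim), np.zeros(latent_dim)
contour = decode(reparameterize(mu, log_var), "happy", W, b)
```

Conditioning the decoder on the emotion tag is what lets one trained model produce distinct prosodic outputs per emotion at generation time.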
-
Abstract Modern marine biologists seeking to study or interact with deep-sea organisms are confronted with few options beyond industrial robotic arms, claws, and suction samplers. This limits biological interactions to a subset of “rugged” and mostly immotile fauna. As the deep sea is one of the most biologically diverse and least studied ecosystems on the planet, there is much room for innovation in facilitating delicate interactions with a multitude of organisms. The biodiversity and physiology of shallow marine systems, such as coral reefs, are common study targets due to their ease of access; SCUBA diving allows for delicate in situ human interactions. Beyond the range of technical SCUBA (~150 m), the ability to achieve the same level of human dexterity using robotic systems becomes critically important. The deep ocean is navigated primarily by manned submersibles or remotely operated vehicles, which currently offer few options for delicate manipulation. Here we present results in developing a soft robotic manipulator for deep-sea biological sampling. This low-power, glove-controlled soft robot was designed with the future marine biologist in mind, so that science can be conducted as well as or better than by a human diver, and at depths well beyond the limits of SCUBA. The technology …
-
Since the COVID-19 pandemic began, there have been several efforts to create new technology to mitigate its impact around the world. One of those efforts is to design a new task force, robots, to address fundamental goals such as public safety, clinical care, and continuity of work. However, meeting those goals requires new products whose features make them more innovative and creative. Such products can be designed using the S4 concept (sensing, smart, sustainable, and social features), presented as a concept capable of creating a new generation of products. This paper presents a low-cost robot, Robocov, designed as a rapid response to the COVID-19 pandemic at Tecnologico de Monterrey, Mexico, with artificial intelligence and the S4 concept applied to the design. Robocov can achieve numerous tasks using the S4 concept, which provides flexibility in hardware and software. Thus, Robocov can positively impact public safety, clinical care, continuity of work, quality of life, laboratory and supply-chain automation, and non-hospital care. The mechanical structure and software development allow Robocov to complete support tasks effectively, so Robocov can be integrated as a technological tool for achieving the conditions of the new normality required by government regulations.
-
Abstract As artificial intelligence and industrial automation develop, human–robot collaboration (HRC) with advanced interaction capabilities has become an increasingly significant area of research. In this paper, we design and develop a real-time, multi-modal HRC system using speech and gestures. A set of 16 dynamic gestures is designed for communication from a human to an industrial robot. A dataset of dynamic gestures is designed and constructed, and it will be shared with the community. A convolutional neural network is developed to recognize the dynamic gestures in real time using the motion history image and deep learning methods. An improved open-source speech recognizer is used for real-time speech recognition of the human worker. An integration strategy is proposed to integrate the gesture and speech recognition results, and a software interface is designed for system visualization. A multi-threading architecture is constructed to operate multiple tasks simultaneously, including gesture and speech data collection and recognition, data integration, robot control, and software interface operation. The various methods and algorithms are integrated to develop the HRC system, with a platform constructed to demonstrate the system performance. The experimental results validate the feasibility and effectiveness of the proposed algorithms and the HRC system.
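The motion history image (MHI) mentioned in this abstract is a standard temporal template: pixels where motion occurs are stamped with the current time, and older stamps fade out after a fixed duration, so a single grayscale image encodes recent motion direction. A minimal NumPy sketch under assumed toy values (the frame size, duration, and simulated motion masks are illustrative, not from the paper):

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """Motion history image update: pixels with motion get the current
    timestamp; pixels older than `duration` time units are cleared."""
    mhi = np.where(motion_mask, timestamp, mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi

# Toy 4x4 sequence: a blob moving one column to the right per frame
h, w, duration = 4, 4, 3.0
mhi = np.zeros((h, w))
for t in range(5):
    mask = np.zeros((h, w), dtype=bool)
    mask[1:3, t % w] = True            # simulated frame-difference mask
    mhi = update_mhi(mhi, mask, float(t + 1), duration)

# Normalize the template to [0, 1] before feeding it to a CNN
mhi_norm = np.clip((mhi - (5 - duration)) / duration, 0, 1)
```

Because one MHI summarizes a whole gesture clip, the downstream CNN can classify a dynamic gesture from a single 2D input rather than a video tensor, which keeps recognition fast enough for real-time use.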