

Title: Functions of Essential Genes and a Scale-Free Protein Interaction Network Revealed by Structure-Based Function and Interaction Prediction for a Minimal Genome
Award ID(s):
1901191, 2030790, 2025426
NSF-PAR ID:
10230322
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Proteome Research
Volume:
20
Issue:
2
ISSN:
1535-3893
Page Range / eLocation ID:
1178 to 1189
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This work challenges the common assumption in physical human-robot interaction (pHRI) that a human user's movement intention can be modeled with simple dynamic equations relating forces to movements, regardless of the user. Studies in physical human-human interaction (pHHI) suggest that interaction forces carry sophisticated information that reveals motor skills and roles in the partnership and even promotes adaptation and motor learning, so the simple force-displacement equations often used in pHRI studies may not be sufficient. To test this, the interaction forces (F) between two humans were measured and analyzed as a leader guided a blindfolded follower along a randomly chosen path. The follower's actual trajectory was transformed into the velocity commands (V) that would let a hypothetical robot follower track the same trajectory, and candidate analytical relationships between F and V were then learned by neural network training (see the sketch below). Results suggest that F helps predict V but that the relationship is not straightforward: seemingly irrelevant components of F may be important, force-velocity relationships are unique to each human follower, and human neural control of movement may affect the prediction of movement intent. User-specific, stereotype-free controllers may therefore decode human intent in pHRI more accurately.
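A minimal sketch of this kind of force-to-velocity mapping, assuming a 6-axis force/torque input, a planar velocity command, and a small MLP regressor trained per follower; the class and function names, dimensions, and hyperparameters are illustrative, not taken from the paper:

```python
# Sketch: learn a per-user mapping from interaction forces F to velocity commands V.
# All dimensions and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class ForceToVelocityNet(nn.Module):
    """Small MLP mapping an interaction-force sample F to a velocity command V."""
    def __init__(self, force_dim: int = 6, vel_dim: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(force_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, vel_dim),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return self.net(f)

def train(model, forces, velocities, epochs=200, lr=1e-3):
    """Fit the F -> V relationship by regression on one follower's recorded trials."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(forces), velocities)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-in data; in the study, forces come from the leader-follower
# trials and velocities from the follower's transformed trajectory.
forces = torch.randn(1000, 6)      # measured interaction forces/torques
velocities = torch.randn(1000, 2)  # planar velocity commands
model = train(ForceToVelocityNet(), forces, velocities)
```

Training one such model per follower, rather than one shared model, mirrors the finding that force-velocity relationships are unique to each human follower.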
  2. This paper presents a mobile-based solution that integrates 3D vision and voice interaction to help people who are blind or have low vision explore and interact with their surroundings. The key components are two 3D vision modules: a 3D object detection module that combines a deep-learning-based 2D object detector with ARKit-based point cloud generation, and an interest-direction recognition module that combines hand/finger recognition with ARKit-based 3D direction estimation. The integrated system consists of a voice interface, a task scheduler, and an instruction generator. The voice interface contains a customized user-request mapping module that maps the user's spoken input onto one of four primary system operation modes (exploration, search, navigation, and settings adjustment); a sketch of such a mapping step follows below. The task scheduler coordinates with the two web services that host the vision modules to allocate computation resources based on the user request and network connectivity strength. Finally, the instruction generator composes instructions from the user request and the results of the two vision modules. The system runs in real time on mobile devices, and preliminary experimental results are reported for the voice-to-request mapping module and the two vision modules.
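A minimal sketch of a request-mapping step like the one described above: a transcribed voice request is matched against per-mode keyword lists, with open-ended exploration as the fallback. The Mode enum, keyword lists, and map_request function are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch: map a transcribed voice request onto one of the four operation modes.
# Keyword lists and names are illustrative assumptions.
from enum import Enum

class Mode(Enum):
    EXPLORATION = "exploration"
    SEARCH = "search"
    NAVIGATION = "navigation"
    SETTINGS = "settings adjustment"

_KEYWORDS = {
    Mode.SEARCH: ("find", "where is", "look for"),
    Mode.NAVIGATION: ("take me", "navigate", "go to"),
    Mode.SETTINGS: ("volume", "speed", "settings"),
}

def map_request(transcript: str) -> Mode:
    """Pick the mode whose keywords appear in the transcript;
    fall back to exploration when nothing matches."""
    text = transcript.lower()
    for mode, keys in _KEYWORDS.items():
        if any(k in text for k in keys):
            return mode
    return Mode.EXPLORATION

print(map_request("Where is the nearest chair?"))  # Mode.SEARCH
```

A real system would use a trained intent classifier rather than keywords, but the interface is the same: a transcript goes in, one of the four modes comes out for the task scheduler to act on.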
  3. In this demo we present IRIS, an open-source framework that provides a set of simple, modular document operators that can be combined in various ways to create more advanced functionality otherwise unavailable during most information search sessions, including summarization, ranking, filtering, and querying. The goal is to support users as they look for, collect, and synthesize information. The system is also easily extensible, allowing customized functionality for users during search sessions and letting researchers study higher levels of abstraction for information retrieval. The demo shows the front-end interactions through a browser plug-in that offers new ways to work with documents during search sessions, as well as the back-end components driving the system; a sketch of the composable-operator idea follows below.
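A minimal sketch of composable document operators in the spirit of IRIS, where simple operators (filter, rank, truncate) chain into a more advanced session pipeline. The operator names and the pipeline helper are illustrative assumptions, not IRIS's actual API:

```python
# Sketch: simple, modular document operators that compose into richer ones.
# Names and the toy ranking heuristic are illustrative assumptions.
from typing import Callable, List

Doc = str
Operator = Callable[[List[Doc]], List[Doc]]

def filter_by(term: str) -> Operator:
    """Keep only documents mentioning the term."""
    return lambda docs: [d for d in docs if term.lower() in d.lower()]

def rank_by_length() -> Operator:
    """Toy ranking: longer documents first (stand-in for a real scorer)."""
    return lambda docs: sorted(docs, key=len, reverse=True)

def top_k(k: int) -> Operator:
    """Truncate the result list, e.g. before summarization."""
    return lambda docs: docs[:k]

def pipeline(*ops: Operator) -> Operator:
    """Chain simple operators into a more advanced one."""
    def run(docs: List[Doc]) -> List[Doc]:
        for op in ops:
            docs = op(docs)
        return docs
    return run

docs = ["Neural ranking models...", "A survey of summarization...", "Ranking with BM25..."]
search_session = pipeline(filter_by("ranking"), rank_by_length(), top_k(2))
print(search_session(docs))
```

Because every operator shares the same list-in, list-out signature, users and researchers can recombine them freely, which is the extensibility point the demo emphasizes.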
  4. Objective: This study investigated drivers' subjective feelings and decision making in mixed traffic by quantifying driving style and type of interaction. Background: Human-driven vehicles (HVs) will share the road with automated vehicles (AVs) in mixed traffic. Previous studies focused on simulating the impact of AVs on traffic flow and on car-following situations, and relied on simulation analyses rather than experimental tests with human drivers. Method: Thirty-six drivers were classified into three groups (aggressive, moderate, and defensive) and experienced both HV-AV and HV-HV interaction in a supervised web-based experiment. Subjective feelings and decision making were collected via questionnaires. Results: Aggressive and moderate drivers felt significantly more anxious and less comfortable, and were more likely to behave aggressively, in HV-AV interaction than in HV-HV interaction. Aggressive drivers were also more likely to take advantage of AVs on the road. No such differences were found for defensive drivers, indicating that they were not significantly influenced by the type of vehicle they were interacting with. Conclusion: Driving style and type of interaction significantly influenced drivers' subjective feelings and decision making in mixed traffic. The study offers insight into how human drivers perceive and interact with AVs and HVs on the road and how they take advantage of AVs. Application: The findings provide a foundation for guidelines for mixed transportation systems that improve driver safety and user experience.