The coronavirus disease 2019 (COVID-19) epidemic poses a threat to the everyday lives of people worldwide and brings challenges to the global health system. During this outbreak, it is critical to find creative ways to extend the reach of informatics to every member of society. Although there are many websites and mobile applications for this purpose, they are insufficient in reaching vulnerable populations such as older adults, who are often unfamiliar with using new technologies to access information. In this paper, we propose an AI-enabled chatbot assistant that delivers real-time, useful, context-aware, and personalized COVID-19 information to users, especially older adults. To use the assistant, a user simply speaks to it through a mobile phone or a smart speaker. This natural and interactive interface does not require the user to have any technical background. The virtual assistant was evaluated in a lab environment on various types of use cases. Preliminary test results demonstrate reasonable precision and recall.
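The precision and recall figures mentioned above can be made concrete: for a set of test queries, precision is the fraction of returned answers that are correct, and recall is the fraction of correct answers that were returned. A minimal sketch, with function names and example values that are illustrative rather than taken from the paper:

```python
def precision_recall(relevant: set, retrieved: set) -> tuple:
    """Compute precision and recall for the set of answers an
    assistant returned (retrieved) against the ground-truth
    set of correct answers (relevant)."""
    if not retrieved or not relevant:
        return 0.0, 0.0
    hits = len(relevant & retrieved)
    precision = hits / len(retrieved)  # correct fraction of what was returned
    recall = hits / len(relevant)      # found fraction of what is correct
    return precision, recall

# Example: 4 of 5 returned answers are correct; 4 of 8 correct answers found.
p, r = precision_recall(relevant={1, 2, 3, 4, 5, 6, 7, 8},
                        retrieved={1, 2, 3, 4, 99})
```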
                    
                            
                            ODO: Design of Multimodal Chatbot for an Experiential Media System
                        
                    
    
This paper presents the design of a multimodal chatbot for use in an interactive theater performance. The chatbot's architecture combines vision and natural language processing capabilities with embodiment in a non-anthropomorphic movable LED array set on a stage. Designed for interaction with up to five users at a time, the system can perform tasks including face detection and emotion classification, tracking of crowd movement through mobile phones, and real-time conversation that guides users through a nonlinear story and interactive games. The final prototype, named ODO, is a tangible embodiment of a distributed multimedia system that solves several technical challenges to provide users with a unique experience through novel interaction.
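One of the listed tasks, tracking crowd movement through mobile phones, can be pictured as aggregating per-phone positions into a crowd centroid and differencing centroids over time. A hypothetical sketch under assumed (x, y) stage coordinates, not the authors' implementation:

```python
from statistics import mean

def crowd_centroid(positions):
    """positions: list of (x, y) coordinates reported by users' phones.
    Returns the centroid of the crowd on the stage plane."""
    xs, ys = zip(*positions)
    return (mean(xs), mean(ys))

def movement_vector(prev_positions, curr_positions):
    """Difference of centroids between two sampling instants,
    i.e. the net direction the crowd has moved."""
    px, py = crowd_centroid(prev_positions)
    cx, cy = crowd_centroid(curr_positions)
    return (cx - px, cy - py)
```

A system like this could feed the movement vector to the stage controller to steer the LED array toward (or away from) the crowd.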
        
    
                            - Award ID(s):
- 1830730
- PAR ID:
- 10193712
- Date Published:
- Journal Name:
- Multimodal Technologies and Interaction
- ISSN:
- 2414-4088
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- 
            Li, Yang; Hilliges, Otmar (Ed.) We summarize our past five years of work on designing, building, and studying Sugilite, an interactive task learning agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations, leveraging the graphical user interfaces (GUIs) of third-party mobile apps. Through its multi-modal and mixed-initiative approaches to human-AI interaction, Sugilite made important contributions in improving the usability, applicability, generalizability, flexibility, robustness, and shareability of interactive task learning agents. Sugilite also represents a new human-AI interaction paradigm for interactive task learning, in which existing app GUIs serve as a medium for users to communicate their intents to an AI agent, rather than merely as interfaces for users to interact with the underlying computing services. In this chapter, we describe the Sugilite system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.
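The core idea of learning a task from GUI demonstrations can be illustrated as recording a sequence of interface actions and then generalizing a demonstrated literal into a parameter for reuse. A simplified, hypothetical sketch; this is not Sugilite's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class GuiAction:
    kind: str        # e.g. "click" or "type"
    target: str      # identifier of the GUI element acted on
    value: str = ""  # text entered, if any

@dataclass
class LearnedTask:
    name: str
    actions: list = field(default_factory=list)

    def generalize(self, literal: str, param: str):
        """Replace a demonstrated literal value with a named
        parameter so the task can be reused with new inputs."""
        for a in self.actions:
            if a.value == literal:
                a.value = "{" + param + "}"

    def instantiate(self, **params):
        """Fill parameters back in for a new execution."""
        return [(a.kind, a.target, a.value.format(**params))
                for a in self.actions]

# Demonstration: order a cappuccino, then generalize the drink name
# so the same task can order any drink.
task = LearnedTask("order_coffee", [
    GuiAction("click", "search_box"),
    GuiAction("type", "search_box", "cappuccino"),
    GuiAction("click", "order_button"),
])
task.generalize("cappuccino", "drink")
steps = task.instantiate(drink="latte")
```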
- 
            Social chatbots are designed to build emotional bonds with users, so it is particularly important to design these technologies to elicit positive perceptions from users. In the current study, we investigate the impact that transparent explanations of chatbots' mechanisms have on users' perceptions of the chatbots. A total of 914 participants were recruited from Amazon Mechanical Turk. They were randomly assigned to observe a conversation between a hypothetical chatbot and a user in one of four conditions in a two-by-two design: whether the participants received an explanation about how the chatbot was trained, and whether the chatbot was framed as an intelligent entity or a machine. A fifth group, who believed they were observing interactions between two humans, served as a control. Analyses of participants' responses to the post-observation survey indicated that transparency positively affected perceptions of social chatbots by leading users to (1) find the chatbot less creepy, (2) feel greater affinity to the chatbot, and (3) perceive the chatbot as more socially intelligent, though these effects were small. Importantly, transparency appeared to have a larger effect in increasing perceived social intelligence among participants with lower prior AI knowledge. These findings have implications for the design of future social chatbots and support the addition of transparency and explanation for chatbot users.
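The experimental design described above, a two-by-two factorial (explanation × framing) plus a human-human control group, amounts to random assignment over five conditions. A sketch with illustrative condition labels, not the study's materials:

```python
import random

# Factorial conditions: explanation given or not, crossed with
# framing as an intelligent entity or a machine, plus a fifth
# human-human control group.
CONDITIONS = [(expl, frame)
              for expl in ("explanation", "no_explanation")
              for frame in ("intelligent", "machine")] + [("control", "human")]

def assign(participants, seed=0):
    """Randomly assign each participant to one of the five conditions."""
    rng = random.Random(seed)
    return {p: rng.choice(CONDITIONS) for p in participants}

groups = assign(range(914))  # 914 participants, as in the study
```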
- 
            Service robots often perform their main functions in public settings, interacting with more than one person at a time. How these robots should handle the affairs of individual users while also behaving appropriately when others are present is an open question. One option is to design for flexible agent embodiment: letting agents take control of different robots as people move between contexts. Through structured User Enactments, we explored how agents embodied within a single robot might interact with multiple people. Participants interacted with a robot embodied by a singular service agent, agents that re-embody in different robots and devices, and agents that co-embody within the same robot. Findings reveal key insights about the promise of re-embodiment and co-embodiment as design paradigms as well as what people value during interactions with service robots that use personalization.
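The re-embodiment paradigm, in which an agent takes control of different robots as people move between contexts, can be modeled by separating the agent's user-specific state from the robot body it currently drives. A hypothetical sketch, not the study's system:

```python
class Robot:
    def __init__(self, name):
        self.name = name
        self.agent = None  # which agent, if any, currently embodies this robot

class Agent:
    """Carries user-specific state (e.g. preferences) across bodies."""
    def __init__(self, user, preferences):
        self.user = user
        self.preferences = preferences
        self.body = None

    def embody(self, robot):
        """Take control of a robot, releasing the previous body."""
        if self.body is not None:
            self.body.agent = None
        robot.agent = self
        self.body = robot

# The same agent follows its user from a lobby robot to a cafe robot,
# keeping its personalization state intact across bodies.
agent = Agent("alice", {"drink": "espresso"})
lobby, cafe = Robot("lobby_bot"), Robot("cafe_bot")
agent.embody(lobby)
agent.embody(cafe)
```

Separating agent state from embodiment is what allows personalization to persist: the preferences travel with the agent, not the robot.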