
Title: Evaluating the Usability of Pervasive Conversational User Interfaces for Virtual Mentoring. Springer.
To improve the academic and professional achievement of underrepresented minorities in computing, STEM education researchers have taken a renewed interest in innovative mentoring practices. Studies suggest that virtual mentoring conversational agents can be leveraged across multiple platforms to provide supplemental mentorship, offsetting the lack of access to in-person mentorship in disadvantaged communities. A within-subjects mixed-method experiment was carried out to assess the usability of a mentoring conversational agent. Mobile interfaces (Twitter and SMS) were compared to each other and against a web-based embodied conversational agent (ECA). Results suggest that mobile interfaces are more usable than the web-based ECA. The findings from this study help to identify areas for improvement in virtual learning alternatives and other potential applications for pervasive conversational interfaces.
Journal Name: Proceedings of the HCI International 2019 Conference on Human-Computer Interaction
Sponsoring Org: National Science Foundation
More Like this
  1.
    Critical for the early diagnosis of genetic disorders, a Family Health History (FHx) can be collected in several ways, including electronic FHx tools, which ease editing and sharing by linking with other information management portals. The user acceptance of such systems is critical, especially among older adults experiencing motor and cognitive issues. This study investigated two types of FHx interfaces, standard and Virtual Conversational Agent (VCA), with 30 younger participants (aged 18 to 30) and 24 older participants (over 60). Workload, usability, and performance data were collected. Even though participants required less time to complete three of the five tasks on the standard interface, the VCA interface performed better in terms of subjective workload and usability. Additionally, 67% of the older adults preferred the VCA interface since it provided context-based guidance during the data collection process. The results from this study have implications for the use of virtual assistants in FHx and other areas of data collection.
  2. Background

    Providing adaptive scaffolds to help learners develop effective self‐regulated learning (SRL) behaviours has been an important goal for intelligent learning environments. Adaptive scaffolding is especially important in open‐ended learning environments (OELE), where novice learners often face difficulties in completing their learning tasks.


    This paper presents a systematic framework for adaptive scaffolding in Betty's Brain, a learning‐by‐teaching OELE for middle school science, where students construct a causal model to teach a virtual agent, generically named Betty. We evaluate the adaptive scaffolding framework and discuss its implications for the development of more effective scaffolds for SRL in OELEs.


    We detect key cognitive/metacognitive inflection points, that is, moments where students' behaviours and performance change during learning, often suggesting an inability to apply effective learning strategies. At inflection points, Mr. Davis (a mentor agent in Betty's Brain) or Betty (the teachable agent) provides context‐specific conversational feedback, focusing on strategies to help the student become a more productive learner, or encouragement to support positive emotions. We conduct a classroom study with 98 middle schoolers to analyse the impact of adaptive scaffolds on students' learning behaviours and performance. We analyse how students with differential pre‐to‐post learning outcomes receive and use the scaffolds to support their subsequent learning process in Betty's Brain.

    Results and Conclusions

    Adaptive scaffolding produced mixed results, with some scaffolds (viz., strategic hints that supported debugging and assessment of causal models) being generally more useful to students than others (viz., encouragement prompts). Additionally, there were differences in how students with high versus low learning outcomes responded to some hints, as suggested by the differences in their learning behaviours and performance in the intervals after scaffolding. Overall, our findings suggest how adaptive scaffolding in OELEs like Betty's Brain can be further improved to better support SRL behaviours and narrow the learning outcomes gap between high and low performing students.


    This paper contributes to our understanding of adaptive scaffolding in OELEs and its impact. The results of our study indicate that successful scaffolding has to combine context‐sensitive inflection points with conversational feedback that is tailored to the students' current proficiency levels and needs. Also, our conceptual framework can be used to design adaptive scaffolds that help students develop and apply SRL behaviours in other computer‐based learning environments.

  3. A major challenge in designing conversational agents is to handle unknown concepts in user utterances. This is particularly difficult for general-purpose task-oriented agents, as the unknown concepts and the tasks can be outside of the agent’s existing domain of knowledge. In this work, we propose a new multi-modal mixed-initiative approach towards this problem. Our agent Pumice guides the user to recursively explain unknown concepts through conversations, and to ground these concepts by demonstrating on the graphical user interfaces (GUIs) of existing third-party mobile apps. Pumice also supports the generalization of learned concepts to other different contexts and task domains. 
  4.
    Embodied conversational agents (ECAs) provide an interface modality on smartphones that may be particularly effective for tasks with significant social, affective, reflective, and narrative aspects, such as health education and behavior change counseling. However, the conversational medium is significantly slower than conventional graphical user interfaces (GUIs) for brief, time-sensitive tasks. We conducted a randomized experiment to determine user preferences in performing two kinds of health-related tasks—one affective and narrative in nature and one transactional—and gave participants a choice of a conventional GUI or a functionally equivalent ECA on a smartphone to complete the task. We found significant main effects of task type and user preference on user choice of modality, with participants choosing the conventional GUI more often for transactional and time-sensitive tasks. 
  5.
    We summarize our past five years of work on designing, building, and studying Sugilite, an interactive task learning agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations, leveraging the graphical user interfaces (GUIs) of third-party mobile apps. Through its multi-modal and mixed-initiative approaches to human-AI interaction, Sugilite made important contributions in improving the usability, applicability, generalizability, flexibility, robustness, and shareability of interactive task learning agents. Sugilite also represents a new human-AI interaction paradigm for interactive task learning, in which existing app GUIs serve as a medium for users to communicate their intents to an AI agent, rather than merely as interfaces for interacting with the underlying computing services. In this chapter, we describe the Sugilite system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.