Hierarchical Reinforcement Learning for Pedagogical Policy Induction.
Abstract: In interactive e-learning environments such as Intelligent Tutoring Systems, there are pedagogical decisions to make at two main levels of granularity: whole problems and single steps. Recent years have seen growing interest in data-driven techniques for such pedagogical decision making, which can dynamically tailor students' learning experiences. Most existing data-driven approaches, however, treat these pedagogical decisions equally, or independently, disregarding the long-term impact that tutor decisions may have across these two levels of granularity. In this paper, we propose and apply an offline, off-policy Gaussian Processes based Hierarchical Reinforcement Learning (HRL) framework to induce a hierarchical pedagogical policy that makes decisions at both problem and step levels. In an empirical classroom study with 180 students, our results show that the HRL policy is significantly more effective than a Deep Q-Network (DQN) induced policy and a random yet reasonable baseline policy.
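To make the two-level decision structure concrete, here is a minimal sketch, not the authors' implementation: one Gaussian Process Q-value estimator per (level, action) pair, fit offline from logged tutor data, with the induced policy acting greedily at whichever level a decision is pending. The action names, state features, and return estimates below are illustrative assumptions.

```python
# Minimal sketch of a two-level (problem/step) pedagogical policy with
# Gaussian Process Q-value estimators trained offline from logged data.
# Action names, features, and returns are illustrative assumptions, not
# details taken from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

ACTIONS = {
    "problem": ["worked_example", "problem_solving"],  # whole-problem level
    "step": ["elicit", "tell"],                        # single-step level
}

class HierarchicalGPPolicy:
    def __init__(self):
        # One independent GP regressor approximates Q(s, a) for each
        # (level, action) pair.
        self.q = {(lvl, a): GaussianProcessRegressor(kernel=RBF())
                  for lvl, acts in ACTIONS.items() for a in acts}

    def fit(self, logged):
        # logged: {(level, action): (states, returns)}, where returns are
        # offline, off-policy estimates of long-term learning gain for
        # taking `action` in `states` at that level.
        for key, (X, y) in logged.items():
            self.q[key].fit(X, y)

    def act(self, state, level):
        # Greedy over GP posterior means at the requested level. The
        # problem-level choice determines which step-level decisions the
        # tutor faces next, which is what couples the two levels.
        acts = ACTIONS[level]
        scores = [self.q[(level, a)].predict(np.atleast_2d(state))[0]
                  for a in acts]
        return acts[int(np.argmax(scores))]
```

The point of the hierarchy is visible in `act`: because the problem-level action constrains the step-level decisions that follow, the two levels cannot be optimized as independent flat policies.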
- Award ID(s):
- 1651909
- PAR ID:
- 10214150
- Date Published:
- Journal Name:
- Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI-2020)
- Page Range / eLocation ID:
- pp. 4691-4695
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Abstract. In interactive e-learning environments such as Intelligent Tutoring Systems, there are pedagogical decisions to make at two main levels of granularity: whole problems and single steps. Recent years have seen growing interest in data-driven techniques for such pedagogical decision making, which can dynamically tailor students' learning experiences. Most existing data-driven approaches, however, treat these pedagogical decisions equally, or independently, disregarding the long-term impact that tutor decisions may have across these two levels of granularity. In this paper, we propose and apply an offline, off-policy Gaussian Processes based Hierarchical Reinforcement Learning (HRL) framework to induce a hierarchical pedagogical policy that makes decisions at both problem and step levels. In an empirical classroom study with 180 students, our results show that the HRL policy is significantly more effective than a Deep Q-Network (DQN) induced policy and a random yet reasonable baseline policy.
- Abstract: Motivated by recent advances in reinforcement learning and by the traditional, well-grounded Self-Determination Theory (SDT), we explored the impact of hierarchical reinforcement learning (HRL) induced pedagogical policies and data-driven explanations of the HRL-induced policies on student experience in an Intelligent Tutoring System (ITS). We explored their impacts first independently and then jointly. Overall, our results showed that 1) the HRL-induced policies could significantly improve students' learning performance, and 2) explaining the tutor's decisions to students through data-driven explanations could improve the student-system interaction in terms of students' engagement and autonomy.
- For many forms of e-learning environments, the system's behaviors can be viewed as a sequential decision process wherein, at each discrete step, the system is responsible for deciding the next system action when there are multiple ones available. Each of these system decisions affects the user's successive actions and performance, and some of them are more important than others. This raises an open question: how can we identify the critical system interactive decisions that are linked to student learning from a long trajectory of decisions? In this work, we proposed and evaluated Critical-Reinforcement Learning (Critical-RL), an adversarial deep reinforcement learning (ADRL) based framework to identify critical decisions and induce compact yet effective policies. Specifically, it induces a pair of adversarial policies based upon Deep Q-Network (DQN) with opposite goals: one to improve student learning, the other to hinder it; critical decisions are identified by comparing the two adversarial policies and using their corresponding Q-value differences; finally, a Critical policy is induced by taking the optimal action on critical decisions but random yet reasonable decisions on others. We evaluated the effectiveness of the Critical policy against a random yet reasonable (Random) policy. While no significant difference was found between the two conditions, this is probably because of small sample sizes. Much to our surprise, we found that students often experience so-called Critical phases: consecutive sequences of critical decisions with the same action. Students were further divided into High vs. Low based on the number of Critical phases they experienced, and our results showed that while no significant difference was found between the two Low groups, the High Critical group learned significantly more than the High Random group.
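As a rough illustration of the decision rule described in that abstract: given the two adversarial Q-functions, a decision point can be flagged as critical when their value estimates diverge sharply, and the induced Critical policy acts greedily only there. The gap measure, threshold, and function signatures below are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of the Critical-RL decision rule. q_improve and q_hinder
# stand in for two DQNs trained with opposite reward signs; each maps a
# state vector to per-action Q-values. The threshold and gap measure are
# illustrative assumptions.
import numpy as np

def is_critical(state, q_improve, q_hinder, threshold=1.0):
    # Flag a decision as critical when the two adversarial value
    # estimates pull strongly apart for some action.
    gap = np.asarray(q_improve(state)) - np.asarray(q_hinder(state))
    return float(np.max(np.abs(gap))) > threshold

def critical_policy_action(state, q_improve, q_hinder, n_actions, rng):
    if is_critical(state, q_improve, q_hinder):
        # Optimal (w.r.t. the learning-improving DQN) on critical steps.
        return int(np.argmax(q_improve(state)))
    # Random yet reasonable everywhere else.
    return int(rng.integers(n_actions))
```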
- Hierarchical reinforcement learning (HRL) is only effective for long-horizon problems when high-level skills can be reliably sequentially executed. Unfortunately, learning reliably composable skills is difficult, because all the components of every skill are constantly changing during learning. We propose three methods for improving the composability of learned skills: representing skill initiation regions using a combination of pessimistic and optimistic classifiers; learning re-targetable policies that are robust to non-stationary subgoal regions; and learning robust option policies using model-based RL. We test these improvements on four sparse-reward maze navigation tasks involving a simulated quadrupedal robot. Each method successively improves the robustness of a baseline skill discovery method, substantially outperforming state-of-the-art flat and hierarchical methods.
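The first of those three improvements, combining pessimistic and optimistic initiation classifiers, might look roughly like the sketch below; the classifier family and the gating rule here are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative sketch of pessimistic vs. optimistic skill-initiation
# classifiers: the pessimistic one trusts only states where the skill
# actually succeeded, while the optimistic one is biased toward
# predicting success, which helps during exploration. Classifier choice
# and gating rule are assumptions, not the paper's method.
import numpy as np
from sklearn.svm import SVC

def fit_initiation_classifiers(states, succeeded):
    # states: (n, d) array of states where the skill was attempted;
    # succeeded: binary labels marking successful executions.
    pessimistic = SVC().fit(states, succeeded)
    # Weight the positive class more heavily so borderline states are
    # still treated as initiable while the skill is being learned.
    optimistic = SVC(class_weight={0: 1.0, 1: 5.0}).fit(states, succeeded)
    return pessimistic, optimistic

def can_initiate(state, pessimistic, optimistic, exploring):
    # Use the optimistic region while exploring, the pessimistic one
    # when skills must compose reliably.
    clf = optimistic if exploring else pessimistic
    return bool(clf.predict(np.atleast_2d(state))[0])
```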

