Summary
Across three experiments featuring naturalistic concepts (psychology concepts) and naïve learners, we extend previous research showing an effect of study sequence on learning outcomes by demonstrating that the sequence of examples during study changes the representation the learner creates of the study materials. We compared participants' performance on test tasks requiring different representations and evaluated which sequence yields better learning on which type of test. We found that interleaved study, in which examples from different concepts are mixed, leads to the creation of relatively interrelated concepts that are represented by contrast to each other and based on discriminating properties. Conversely, blocked study, in which several examples of the same concept are presented together, leads to the creation of relatively isolated concepts that are represented in terms of their central and characteristic properties. These results argue for investigating the benefits of different study sequences as a function of the characteristics of the study and testing situation.
Learning From Multiple Representations: Roles of Task Interventions and Individual Differences
Learning from multiple representations (MRs) is not an easy task for most people, despite how easy it appears to be for experts. Different combinations of representations (e.g., text + photograph, graph + formula, map + diagram) pose different challenges for learners, but across the literature these are consistently found to be challenging learning tasks. Each representation typically includes some unique information as well as some information shared with the other representation(s). Finding a single piece of information is only somewhat challenging, but linking information across representations, and especially making inferences, are very challenging and important parts of using multiple representations for learning. Skills for coordinating multiple representations are rarely taught in classrooms, even though learners are frequently tested on them. Learning from MRs depends on the specific learning tasks posed, learner characteristics, which representation(s) are used, and the design of each representation. These factors act separately and in combination (which can be compensatory, additive, or interactive). Learning tasks can be differentially effective depending on learner characteristics, especially prior knowledge, self-regulation, and age/grade; tasks should be designed with this differential effectiveness in mind, and researchers should test for such interactions.
- Award ID(s): 1661231
- PAR ID: 10154658
- Journal Name: Handbook of learning from multiple representations and perspectives
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Heterformer: Transformer-based Deep Node Representation Learning on Heterogeneous Text-Rich Networks
Proc. 2023 ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining
Representation learning on networks aims to derive a meaningful vector representation for each node, thereby facilitating downstream tasks such as link prediction, node classification, and node clustering. In heterogeneous text-rich networks, this task is more challenging due to (1) presence or absence of text: some nodes are associated with rich textual information, while others are not; and (2) diversity of types: nodes and edges of multiple types form a heterogeneous network structure. As pretrained language models (PLMs) have demonstrated their effectiveness in obtaining widely generalizable text representations, a substantial amount of effort has been made to incorporate PLMs into representation learning on text-rich networks. However, few of them can jointly consider heterogeneous structure (network) information as well as rich textual semantic information of each node effectively. In this paper, we propose Heterformer, a Heterogeneous Network-Empowered Transformer that performs contextualized text encoding and heterogeneous structure encoding in a unified model. Specifically, we inject heterogeneous structure information into each Transformer layer when encoding node texts. Meanwhile, Heterformer is capable of characterizing node/edge type heterogeneity and encoding nodes with or without texts. We conduct comprehensive experiments on three tasks (i.e., link prediction, node classification, and node clustering) on three large-scale datasets from different domains, where Heterformer outperforms competitive baselines significantly and consistently. (A hedged code sketch of the structure-injection idea appears after this list of related records.)
-
In recent years, large language models (LLMs) have seen rapid advancement and adoption, and are increasingly being used in educational contexts. In this perspective article, we explore the open challenge of leveraging LLMs to create personalized learning environments that support the “whole learner” by modeling and adapting to both cognitive and non-cognitive characteristics. We identify three key challenges toward this vision: (1) improving the interpretability of LLMs' representations of whole learners, (2) implementing adaptive technologies that can leverage such representations to provide tailored pedagogical support, and (3) authoring and evaluating LLM-based educational agents. For interpretability, we discuss approaches for explaining LLM behaviors in terms of their internal representations of learners; for adaptation, we examine how LLMs can be used to provide context-aware feedback and scaffold non-cognitive skills through natural language interactions; and for authoring, we highlight the opportunities and challenges involved in using natural language instructions to specify behaviors of educational agents. Addressing these challenges will enable personalized AI tutors that can enhance learning by accounting for each student's unique background, abilities, motivations, and socioemotional needs.
-
How does the conceptual structure of external representations contribute to learning? This investigation considered the influence of generative concept sorting (Study 1, n=58) and of external structure information (Study 2, n=120), as moderated by perceived difficulty. In Study 1, undergraduate students completed a perceived difficulty survey and a comprehension pretest, then a sorting task, and finally a comprehension posttest. Both perceived difficulty and comprehension pretest scores significantly predicted comprehension posttest performance. Learners who perceived history as difficult attained significantly higher posttest scores and produced more expert-like networks. In Study 2, participants completed the perceived difficulty survey and comprehension pretest, then read a text with one of two forms of external structure support, either an expert network or an equivalent outline of the text, and finally completed a sorting-task posttest and a comprehension posttest. In Study 2, there was no significant difference between forms of external structure support on posttest comprehension (outline = network), but reading with an outline led to a linear, topic-order conceptual structure of the text, while reading with a network led to a more expert-like relational structure. As in Study 1, comprehension pretest and perceived difficulty significantly predicted posttest performance, but in contrast to Study 1, learners who perceived history as easy attained significantly higher posttest scores. For theory building, post-reading mental representations matched the form of the external representation used during reading. Practitioners should consider using generative sorting tasks when relearning history content. (A hedged sketch of one way to quantify an "expert-like" network appears after this list of related records.)
-
Graph few-shot learning is of great importance among various graph learning tasks. Under the few-shot scenario, models are often required to conduct classification given limited labeled samples. Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs) and perform classification across a series of meta-tasks. Nevertheless, these methods generally rely on the original graph (i.e., the graph that the meta-task is sampled from) to learn node representations. Consequently, the learned representations for the same nodes are identical in all meta-tasks. Since the class sets are different across meta-tasks, node representations should be task-specific to promote classification performance. Therefore, to adaptively learn node representations across meta-tasks, we propose a novel framework that learns a task-specific structure for each meta-task. To handle the variety of nodes across meta-tasks, we extract relevant nodes and learn task-specific structures based on node influence and mutual information. In this way, we can learn node representations with the task-specific structure tailored for each meta-task. We further conduct extensive experiments on five node classification datasets under both single- and multiple-graph settings to validate the superiority of our framework over the state-of-the-art baselines. (A hedged sketch of the meta-task setup appears below, after this list of related records.)
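The following is a minimal, illustrative PyTorch sketch of the structure-injection idea described in the Heterformer abstract above: neighbor embeddings are projected into "virtual tokens" and prepended to a node's text tokens before a Transformer encoder layer, so self-attention can mix network-structure information with textual semantics. The class name NetworkAwareLayer, the dimensions, and the single-layer setup are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the Heterformer code): inject structure
# information into a Transformer layer via prepended neighbor "virtual tokens".
import torch
import torch.nn as nn

class NetworkAwareLayer(nn.Module):
    def __init__(self, hidden_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.text_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=n_heads, batch_first=True)
        # Projects aggregated neighbor embeddings into the token space.
        self.neighbor_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, token_states, neighbor_embs):
        # token_states: (batch, seq_len, hidden)   text tokens of each node
        # neighbor_embs: (batch, n_neighbors, hidden)   embeddings of neighbors
        virtual_tokens = self.neighbor_proj(neighbor_embs)
        # Prepend structure information so self-attention can mix it with text.
        mixed = torch.cat([virtual_tokens, token_states], dim=1)
        out = self.text_layer(mixed)
        # Drop the virtual tokens; keep the structure-aware text representations.
        return out[:, neighbor_embs.size(1):, :]

if __name__ == "__main__":
    layer = NetworkAwareLayer()
    text = torch.randn(2, 10, 64)      # 2 nodes, 10 text tokens each
    neighbors = torch.randn(2, 3, 64)  # 3 neighbor embeddings per node
    print(layer(text, neighbors).shape)  # torch.Size([2, 10, 64])
```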
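As an illustration of what "more expert-like networks" could mean operationally in the concept-sorting study above, here is a minimal Python sketch that scores a learner's concept network by the share of an expert network's links it reproduces. The function name, concept labels, and links are hypothetical; the study's actual network measure may differ.

```python
# Illustrative only: score a learner's concept network against an expert's
# by undirected link overlap. Concepts and links below are made up.
def edge_overlap(learner_links, expert_links):
    """Share of the expert's links that also appear in the learner's network."""
    as_sets = lambda links: {frozenset(pair) for pair in links}
    learner, expert = as_sets(learner_links), as_sets(expert_links)
    return len(learner & expert) / len(expert)

expert_net = [("causes", "events"), ("events", "consequences"), ("actors", "events")]
learner_net = [("events", "causes"), ("actors", "causes")]

print(f"expert-likeness: {edge_overlap(learner_net, expert_net):.2f}")  # 0.33
```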
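The graph few-shot abstract above describes learning a task-specific structure for each meta-task. The sketch below illustrates only the surrounding setup under simplifying assumptions: it samples an N-way K-shot node-classification meta-task and restricts the graph to the support nodes plus their 1-hop neighbors as a stand-in "task-specific structure". It does not implement the paper's node-influence or mutual-information criteria; the function name and the networkx example dataset are chosen only for illustration.

```python
# Sketch (assumptions, not the paper's method): sample an N-way K-shot
# node-classification meta-task and build a simple task-specific subgraph.
import random
import networkx as nx

def sample_meta_task(graph, labels, n_way=2, k_shot=2, seed=0):
    rng = random.Random(seed)
    classes = rng.sample(sorted(set(labels.values())), n_way)
    support = [n for c in classes
               for n in rng.sample([v for v, l in labels.items() if l == c], k_shot)]
    # Stand-in task-specific structure: support nodes plus their 1-hop neighbors.
    relevant = set(support)
    for n in support:
        relevant.update(graph.neighbors(n))
    return classes, support, graph.subgraph(relevant).copy()

if __name__ == "__main__":
    g = nx.karate_club_graph()
    labels = {n: g.nodes[n]["club"] for n in g.nodes}
    classes, support, task_graph = sample_meta_task(g, labels)
    print(classes, support, task_graph.number_of_nodes())
```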