-
Many real-world networks evolve over time, and predicting the evolution of such networks remains a challenging task. Graph Neural Networks (GNNs) have shown empirical success for learning on static graphs, but they lack the ability to effectively learn from nodes and edges with different timestamps. Consequently, the prediction of future properties in temporal graphs remains a relatively under-explored area. In this paper, we aim to bridge this gap by introducing a principled framework, named GraphPulse. The framework combines two important techniques for the analysis of temporal graphs within a unified framework. First, we employ the Mapper method, a key tool in topological data analysis, to extract essential clustering information from graph nodes. Next, we harness the sequential modeling capabilities of Recurrent Neural Networks (RNNs) for temporal reasoning about the graph's evolution. Through extensive experimentation, we demonstrate that our model improves the ROC-AUC metric by 10.2% in comparison to the top-performing state-of-the-art method across various temporal networks. We provide the implementation of GraphPulse at https://github.com/kiarashamsi/GraphPulse.
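The Mapper step mentioned in the abstract can be sketched in a few lines: cover the range of a filter function with overlapping intervals, then cluster each preimage by graph connectivity. This is a toy illustration of the Mapper idea only, not the GraphPulse implementation; the example graph, the degree-based filter, and the interval cover are invented for demonstration.

```python
# Toy Mapper sketch: cluster nodes of each filter-interval preimage
# by connected components of the induced subgraph.
from collections import defaultdict

def connected_components(nodes, edges):
    """Connected components of the subgraph induced by `nodes`."""
    nodes = set(nodes)
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def mapper_clusters(filter_values, edges, intervals):
    """For each (lo, hi) interval of the cover, cluster the preimage
    {v : lo <= filter(v) <= hi} by graph connectivity."""
    clusters = []
    for lo, hi in intervals:
        preimage = [v for v, f in filter_values.items() if lo <= f <= hi]
        clusters.extend(connected_components(preimage, edges))
    return clusters

# Tiny example: filter = node degree, two overlapping intervals.
edges = [(0, 1), (1, 2), (3, 4)]
degree = {0: 1, 1: 2, 2: 1, 3: 1, 4: 1}
cover = [(1, 1), (1, 2)]  # overlapping intervals over the filter range
print(mapper_clusters(degree, edges, cover))
```

In a temporal setting, such cluster summaries could be computed per snapshot and fed to an RNN; that pipeline is what the abstract describes, while the code above only sketches the clustering half.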
-
The game is intended for students who do not necessarily have any prior background in computer science. Assuming the role of agents, two players exchange messages over a network to try to agree on a meeting time and location, while an adversary interferes with their plan. Following the Dolev-Yao model, the adversary has full control of the network: they can see all messages and modify, block, or forward them. We designed the game as a web application, where groups of three students play the game, taking turns being the adversary. The adversary is a legitimate communicant on the network, and the agents do not know which of the other two participants is the other agent and which is the adversary. Through gameplay, we expect students to be able to (1) identify the dangers of communicating through a computer network, (2) describe the capabilities of a Dolev-Yao adversary, and (3) apply three cryptographic primitives: symmetric encryption, asymmetric encryption, and digital signatures. We conducted surveys, focus groups, and interviews to evaluate the effectiveness of the game in achieving the learning objectives. The game helped students achieve the first two learning objectives, as well as the use of symmetric encryption. We found that students enjoyed playing MeetingMayhem. We are revising MeetingMayhem to improve its user interface and to better support students in learning about asymmetric encryption and digital signatures.
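The Dolev-Yao adversary's full network control can be made concrete with a minimal sketch: an in-transit message is silently rewritten, and a shared-key message authentication code (a symmetric-key integrity primitive, used here as a stand-in for the game's actual cryptographic defenses) lets the receiver detect the tampering. The message text, key, and relay function are all illustrative assumptions, not the game's protocol.

```python
# Toy Dolev-Yao demo: the adversary sees and can modify every message.
import hmac
import hashlib

def adversary_relay(message: bytes) -> bytes:
    # Full network control: the adversary rewrites the message in transit.
    return message.replace(b"3pm", b"9pm")

# 1) Plaintext message: tampering goes undetected.
sent = b"meet at 3pm in the library"
received = adversary_relay(sent)
print(received)  # the agents have no way to notice the change

# 2) With a shared-key MAC, the receiving agent detects the modification.
shared_key = b"agents-shared-secret"  # unknown to the adversary

def tag(msg: bytes) -> bytes:
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

sent_tag = tag(sent)
received = adversary_relay(sent)
tampered = not hmac.compare_digest(tag(received), sent_tag)
print("tampering detected:", tampered)
```

A MAC provides integrity but not confidentiality; hiding the message content from the adversary would additionally require the encryption primitives the abstract lists.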
-
Abstract The effective mass at the Fermi level is measured in the strongly interacting two-dimensional (2D) electron system in ultra-clean SiGe/Si/SiGe quantum wells in the low-temperature limit in tilted magnetic fields. At low electron densities, the effective mass is found to be strongly enhanced and independent of the degree of spin polarization, which indicates that the mass enhancement is not related to the electrons’ spins. The observed effect turns out to be universal for silicon-based 2D electron systems, regardless of random potential, and cannot be explained by existing theories.
-
Inspired by humans’ exceptional ability to master arithmetic and generalize to new problems, we present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines’ capability of learning generalizable concepts at three levels: perception, syntax, and semantics. In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts w.r.t. the three levels. Further, we design a few-shot learning split to determine whether or not models can rapidly learn new concepts and generalize them to more complex scenarios. To comprehend existing models’ limitations, we undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with chain-of-thought prompting). The results indicate that current models struggle to extrapolate to long-range syntactic dependency and semantics. Models exhibit a considerable gap toward human-level generalization when evaluated with new concepts in a few-shot setting. Moreover, we discover that it is infeasible to solve HINT by merely scaling up the dataset and the model size; this strategy contributes little to the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, chain-of-thought prompting exhibits impressive results and significantly boosts the test accuracy. We believe the HINT dataset and the experimental findings are of great interest to the learning community on systematic generalization.
-
Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at https://inversescaling.com/data to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.
-
Abstract In‐stream wood structures, such as single logs, river steps, and debris dams, are known to drive hyporheic flow, defined as the flow that goes into the subsurface region and then back to the free‐flowing surface water. The hyporheic flow plays an important role in regulating water quality and biogeochemical cycles in rivers. Here, we investigated the impact of a channel‐spanning porous log jam, representing piles of wood logs, on hyporheic flow through a combination of direct visualization and theories. Specifically, we developed a method using refractive index‐matched sediment to directly visualize the hyporheic flow around and below a porous log jam, formed by piles of cylindrical rods, in a laboratory flume. We tracked the velocity of a fluorescent dye moving through the transparent sediment underneath the log jam. In addition, we measured the water surface profile and the spatially varying flow velocity near the log jam. Our results show that the normalized log jam‐induced hyporheic flux remained smaller than 10% at Froude numbers ($Fr$) below 0.06 and increased by a factor of five with increasing $Fr$ at $Fr > 0.06$. We combined the mass and momentum conservation equations of surface flow with Darcy's equation to explain the dependency of the log jam‐induced hyporheic flux on $Fr$. Further, we observed that at $Fr > 0.06$, the water surface dropped noticeably and the turbulent kinetic energy increased immediately on the downstream side of the log jam. These findings will facilitate future quantification of hyporheic flow caused by channel‐spanning porous log jams.
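Darcy's equation, which the abstract couples with the surface-flow conservation equations, takes the standard textbook form below (the general law, not the paper's specific coupled model):

```latex
q = -K \, \frac{\partial h}{\partial x}
```

where $q$ is the specific discharge through the sediment, $K$ is the hydraulic conductivity, and $\partial h / \partial x$ is the hydraulic head gradient; the head drop across the log jam, set by the surface flow, is what drives the subsurface flux.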
-
Abstract The increase in the resistivity with decreasing temperature followed by a drop by more than one order of magnitude is observed on the metallic side near the zero-magnetic-field metal-insulator transition in a strongly interacting two-dimensional electron system in ultra-clean SiGe/Si/SiGe quantum wells. We find that the temperature $T_{\text{max}}$, at which the resistivity exhibits a maximum, is close to the renormalized Fermi temperature. However, rather than increasing along with the Fermi temperature, the value $T_{\text{max}}$ decreases appreciably for spinless electrons in spin-polarizing (parallel) magnetic fields. The observed behaviour of $T_{\text{max}}$ cannot be described by existing theories. The results indicate the spin-related origin of the effect.