
Search for: All records

Creators/Authors contains: "Tian, Tian"


  1. This poster paper describes the authors' single-year National Science Foundation (NSF) project DRL-1825007, titled "DCL: Synthesis and Design Workshop on Digitally-Mediated Team Learning," which has been conducted as one of nine awards within NSF-18-017: Principles for the Design of Digital STEM Learning Environments. Beginning in September 2018, the project conducted the activities described herein to deliver a three-day workshop on Digitally-Mediated Team Learning (DMTL) to convene, invigorate, and task interdisciplinary science and engineering researchers, developers, and educators with coalescing the leading strategies for digital team learning. The deliverable of the workshop is a White Paper that identifies one-year, three-year, and five-year research and practice roadmaps for highly adaptable environments for computer-supported collaborative learning within STEM curricula. Subject to the chronology of events, highlights of the White Paper's outcomes will be showcased within the poster itself.
  2. Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the extensive research on adversarial attack and defense for images and text. In this paper, we focus on adversarial attacks that fool deep learning models by modifying the combinatorial structure of the data. We first propose a reinforcement learning based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent for the scenario where additional prediction confidence or gradients are available. We use both synthetic and real-world data to show that a family of Graph Neural Network models is vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show that such attacks can be used to diagnose the learned classifiers. (A minimal illustrative sketch of the gradient-based variant appears after this list.)
  3. Free, publicly-accessible full text available January 16, 2021
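
To make the gradient-based setting in record 2 concrete, here is a minimal sketch, not the authors' implementation, of a greedy gradient-guided edge-flip attack on a toy two-layer GCN. It assumes white-box access to gradients and a dense adjacency matrix; the model, function names, and hyperparameters are all illustrative.

# A minimal sketch (not the authors' code) of a greedy, gradient-guided
# edge-flip attack on a toy two-layer GCN. Assumes a dense adjacency
# matrix and white-box gradient access; everything below is illustrative.
import torch
import torch.nn.functional as F

def gcn_forward(adj, feats, w1, w2):
    # Two-layer GCN with symmetric normalization of A + I.
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    h = torch.relu(a_norm @ feats @ w1)
    return a_norm @ h @ w2  # node-level logits

def gradient_edge_flip(adj, feats, w1, w2, target, label, budget=1):
    # Greedily flip the one edge whose flip most increases the loss on
    # the target node, repeating up to `budget` times.
    adj = adj.clone()
    for _ in range(budget):
        a = adj.clone().requires_grad_(True)
        logits = gcn_forward(a, feats, w1, w2)
        loss = F.cross_entropy(logits[target:target + 1],
                               torch.tensor([label]))
        loss.backward()
        # For an absent edge (adj == 0), a positive gradient means adding
        # it raises the loss; for a present edge, a negative gradient
        # means removing it raises the loss.
        score = a.grad * (1 - 2 * adj)
        score.fill_diagonal_(float('-inf'))  # never touch self-loops
        i, j = divmod(score.argmax().item(), adj.size(0))
        adj[i, j] = adj[j, i] = 1 - adj[i, j]  # symmetric flip
    return adj

# Toy usage with random weights; a real attack would target a trained model.
torch.manual_seed(0)
n, d, c = 8, 5, 3
adj = (torch.rand(n, n) < 0.3).float().triu(1)
adj = adj + adj.T
feats = torch.randn(n, d)
w1, w2 = torch.randn(d, 16), torch.randn(16, c)
perturbed = gradient_edge_flip(adj, feats, w1, w2, target=0, label=1, budget=2)

This white-box sketch corresponds to the simplest of the three settings the abstract describes; the reinforcement-learning variant in the paper needs only prediction labels from the target classifier, with no access to gradients.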