Title: DeepContext: A Context-aware, Cross-platform, and Cross-framework Tool for Performance Profiling and Analysis of Deep Learning Workloads
Award ID(s): 2125813
PAR ID: 10662475
Author(s) / Creator(s):
Publisher / Repository: ACM
Date Published:
Page Range / eLocation ID: 48 to 63
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The success of supervised learning requires large-scale ground-truth labels, which are expensive, time-consuming, or require special skills to annotate. To address this issue, many self-supervised and unsupervised methods have been developed. Unlike most existing self-supervised methods, which learn only 2D image features or only 3D point cloud features, this paper presents a novel and effective self-supervised learning approach that jointly learns both 2D image features and 3D point cloud features by exploiting cross-modality and cross-view correspondences without using any human-annotated labels. Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network. The two types of features are fed into a two-layer fully connected neural network that estimates the cross-modality correspondence. The three networks are jointly trained (i.e., cross-modality) by verifying whether two sampled data of different modalities belong to the same object; meanwhile, the 2D convolutional neural network is additionally optimized by minimizing intra-object distance while maximizing inter-object distance of rendered images from different views (i.e., cross-view). The effectiveness of the learned 2D and 3D features is evaluated by transferring them to five different tasks: multi-view 2D shape recognition, 3D shape recognition, multi-view 2D shape retrieval, 3D shape retrieval, and 3D part segmentation. Extensive evaluations on all five tasks across different datasets demonstrate the strong generalization and effectiveness of the learned 2D and 3D features. (A minimal, hypothetical sketch of the cross-modality correspondence objective appears after this list.)
  2. Contact with racial outgroups is thought to reduce the cross-race recognition deficit (CRD), the tendency for people to recognize same-race (i.e., ingroup) faces more accurately than cross-race (i.e., outgroup) faces. In 2001, Meissner and Brigham conducted a meta-analysis examining this question and found a meta-analytic effect of r = −.13. We conduct a new meta-analysis based on 20 years of additional data to update the estimate of this relationship and to examine theoretical and methodological moderators of the effect. We find a meta-analytic effect of r = −.15. In line with theoretical predictions, we find some evidence that the magnitude of this relationship is stronger when contact occurs during childhood rather than adulthood. We find no evidence that the relationship differs for measures of holistic/configural processing compared with normal processing. Finally, we find that the magnitude of the relationship depends on the operationalization of contact and is strongest when contact is manipulated. We consider recommendations for further research on this topic. (A generic example of how such pooled correlations are computed appears after this list.)
  3. As artificial intelligence and robotics are increasingly integrated into graduate research and education, graduate students across disciplines need to develop a "technological literacy" in how these technologies work, along with the ethical understanding needed to navigate them responsibly. To meet this need, the corresponding and last author developed a graduate-level course on AI ethics and human-robot interaction (HRI) designed for students from a variety of disciplines and backgrounds. The paper offers an overview of the course, detailing its content, institutional context, and the rationale behind its development. It describes the curriculum structure, including key themes and learning objectives, as well as the pedagogical approaches and assessment methods used in the course. The paper concludes with reflections from the instructor on lessons learned from teaching the course and experiences gained throughout the learning process.
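The following PyTorch sketch illustrates the cross-modality correspondence objective summarized in item 1: a 2D CNN encodes rendered views, a point-cloud encoder encodes 3D shapes, and a two-layer fully connected head predicts whether an image/point-cloud pair comes from the same object. This is a minimal, hypothetical reconstruction, not the authors' code: the module names are invented, the graph convolutional network is replaced by a simple max-pooled per-point MLP for brevity, and the additional cross-view contrastive term on rendered views is omitted.

    # Hypothetical sketch of the cross-modality correspondence objective (item 1).
    # Module names and the simplified point encoder are illustrative only.
    import torch
    import torch.nn as nn

    class ImageEncoder(nn.Module):
        # 2D CNN over a rendered view -> feature vector
        def __init__(self, dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, dim),
            )
        def forward(self, x):
            return self.net(x)

    class PointEncoder(nn.Module):
        # simplified point-cloud encoder (stands in for the paper's graph CNN)
        def __init__(self, dim=128):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))
        def forward(self, pts):                      # pts: (B, N, 3)
            return self.mlp(pts).max(dim=1).values   # max-pool over points

    class CorrespondenceHead(nn.Module):
        # two-layer fully connected network scoring whether a pair matches
        def __init__(self, dim=128):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))
        def forward(self, f2d, f3d):
            return self.fc(torch.cat([f2d, f3d], dim=-1)).squeeze(-1)

    img_enc, pt_enc, head = ImageEncoder(), PointEncoder(), CorrespondenceHead()
    params = list(img_enc.parameters()) + list(pt_enc.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    # One illustrative training step on random tensors: half the pairs match, half do not.
    images = torch.randn(8, 3, 64, 64)        # rendered views
    points = torch.randn(8, 1024, 3)          # sampled point clouds
    match = torch.tensor([1., 0.] * 4)        # 1 = same object, 0 = different object

    logits = head(img_enc(images), pt_enc(points))
    loss = nn.functional.binary_cross_entropy_with_logits(logits, match)
    opt.zero_grad(); loss.backward(); opt.step()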
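Item 2 reports pooled correlations (r = −.13 and r = −.15). The snippet below shows one generic way such estimates can be formed, a fixed-effect average of Fisher z-transformed correlations weighted by sample size; it is not the specific model used in either meta-analysis, and the per-study values are made up for illustration.

    # Generic fixed-effect pooling of correlations via Fisher's z transform.
    # The correlations and sample sizes are made-up illustration values,
    # not data from the meta-analyses described in item 2.
    import math

    studies = [(-0.20, 80), (-0.10, 150), (-0.18, 60)]  # (r, n) per hypothetical study

    num, den = 0.0, 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher z transform of the correlation
        w = n - 3           # inverse-variance weight for Fisher z
        num += w * z
        den += w

    pooled_z = num / den
    pooled_r = math.tanh(pooled_z)   # back-transform to the correlation scale
    se = 1 / math.sqrt(den)
    ci = (math.tanh(pooled_z - 1.96 * se), math.tanh(pooled_z + 1.96 * se))
    print(f"pooled r = {pooled_r:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")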