Title: An Empirical Study of Common Challenges in Developing Deep Learning Applications
Recent advances in deep learning have driven the innovation of many intelligent systems and applications, such as autonomous driving and image recognition. Despite enormous effort and investment in this field, a fundamental question remains under-investigated: what challenges do developers commonly face when building deep learning applications? To seek an answer, this paper presents a large-scale empirical study of deep learning questions on a popular Q&A website, Stack Overflow. We manually inspect a sample of 715 questions and identify seven kinds of frequently asked questions. We further build a classification model to quantify the distribution of these question kinds across the entire set of 39,628 deep learning questions. We find that program crashes, model migration, and implementation questions are the three most frequently asked kinds. After carefully examining the accepted answers to these questions, we summarize five main root causes that deserve attention from the research community: API misuse, incorrect hyperparameter selection, GPU computation, static graph computation, and limited debugging and profiling support. Our results highlight the need for new techniques, such as cross-framework differential testing, to improve software development productivity and software reliability in deep learning.
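One of the root causes the abstract names, incorrect hyperparameter selection, is easy to see in miniature. The sketch below is a hypothetical illustration (not code from the study): a tiny gradient descent on f(x) = x² that converges with a reasonable learning rate and diverges when the rate is set too high — the kind of silent misconfiguration that often surfaces on Stack Overflow as "my loss is NaN".

```python
# Minimal gradient descent on f(x) = x^2, illustrating how an overly
# large learning rate (a common hyperparameter mistake) makes training
# diverge instead of converge. Purely illustrative, not from the paper.

def gradient_descent(lr, steps=50, x0=1.0):
    """Minimize f(x) = x**2 with a fixed learning rate `lr`."""
    x = x0
    for _ in range(steps):
        grad = 2 * x          # f'(x) = 2x
        x = x - lr * grad
    return x

good = gradient_descent(lr=0.1)   # each step multiplies x by 0.8: converges
bad = gradient_descent(lr=1.1)    # each step multiplies x by -1.2: diverges

print(abs(good) < 1e-3)  # True
print(abs(bad) > 1e3)    # True
```

Nothing in the code raises an error in the divergent case, which is why such bugs are hard to localize without better debugging and profiling support.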
Award ID(s):
1764077
PAR ID:
10173704
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE 30th International Symposium on Software Reliability Engineering (ISSRE)
Page Range / eLocation ID:
104 to 115
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deep learning has gained substantial popularity in recent years. Developers mainly rely on libraries and tools to add deep learning capabilities to their software. What kinds of bugs are frequently found in such software? What are the root causes of such bugs? What impacts do such bugs have? Which stages of the deep learning pipeline are more bug-prone? Are there any antipatterns? Understanding such characteristics of bugs in deep learning software has the potential to foster the development of better deep learning platforms, debugging mechanisms, and development practices, and to encourage the development of analysis and verification frameworks. Therefore, we study 2,716 high-quality posts from Stack Overflow and 500 bug-fix commits from GitHub about five popular deep learning libraries (Caffe, Keras, TensorFlow, Theano, and Torch) to understand the types of bugs, their root causes and impacts, the bug-prone stages of the deep learning pipeline, and whether there are common antipatterns in this buggy software. The key findings of our study include: data bugs and logic bugs are the most severe bug types in deep learning software, appearing more than 48% of the time; the major root causes of these bugs are Incorrect Model Parameter or Structure (IPS) and Structural Inefficiency (SI), showing up more than 43% of the time. We have also found that bugs in the usage of deep learning libraries exhibit some common antipatterns.
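The data bugs this study describes frequently surface as a mismatch between a dataset's shape and what the model expects. As a hypothetical pure-Python sketch (the function and names are invented, not from the study), a small validation at the data/model boundary catches this class of bug before training begins rather than deep inside a library call:

```python
# Illustrative sketch of catching a "data bug": a sample's feature count
# does not match the model's expected input width. Names are hypothetical.

def check_input_shape(batch, expected_features):
    """Validate every sample in a batch against the model's input width."""
    for i, sample in enumerate(batch):
        if len(sample) != expected_features:
            raise ValueError(
                f"sample {i} has {len(sample)} features, "
                f"model expects {expected_features}"
            )
    return True

good_batch = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
bad_batch = [[0.1, 0.2, 0.3], [0.4, 0.5]]       # second sample truncated

print(check_input_shape(good_batch, expected_features=3))  # True
try:
    check_input_shape(bad_batch, expected_features=3)
except ValueError as e:
    print("caught:", e)
```

Failing fast at the boundary, with the offending sample index in the message, is precisely the kind of diagnostic that deep learning libraries often lack.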
  2. Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate them with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search of more than one hundred candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the model's emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of the optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability, including some with reduced error, for an especially challenging training dataset.
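The core mechanics of moving a trained Keras model into Fortran amount to serializing each layer's weights in a form the Fortran side can parse. The sketch below illustrates that export step with an invented flat text layout readable via Fortran list-directed I/O; it is NOT FKB's actual file format, just a minimal picture of the idea:

```python
# Hypothetical sketch of exporting a small dense network's weights to a
# flat text format that a Fortran program could read back. The layout
# (layer count, then rows/cols, then values) is an invented illustration.

import io

def export_dense_layers(layers, stream):
    """Write each (weights, biases) pair as: 'rows cols', then the values."""
    stream.write(f"{len(layers)}\n")                 # number of layers
    for weights, biases in layers:
        rows, cols = len(weights), len(weights[0])
        stream.write(f"{rows} {cols}\n")
        for row in weights:
            stream.write(" ".join(f"{w:.8e}" for w in row) + "\n")
        stream.write(" ".join(f"{b:.8e}" for b in biases) + "\n")

# A 2-input, 2-unit dense layer followed by a 2-input, 1-unit layer.
layers = [
    ([[0.5, -0.25], [1.0, 0.75]], [0.1, -0.1]),
    ([[2.0], [-1.5]], [0.0]),
]
buf = io.StringIO()
export_dense_layers(layers, buf)
print(buf.getvalue().splitlines()[0])  # "2" – two layers exported
```

A text format like this trades file size for portability: the Fortran reader needs no HDF5 dependency, which matters on the HPC systems the paper targets.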
  3. High-performance computing is a driving force behind scientific innovation and discovery. However, as the number of users and the complexity of high-performance computing systems grow, so do the volume and variability of technical issues handled by support teams. The evolving nature of these issues presents a need for automated tools that can extract clear, accurate, and relevant frequently asked questions directly from support tickets. This need was addressed by developing a novel pipeline that incorporates semantic clustering, representation learning, and large language models. While prior research laid strong foundations across classification, clustering, and large language model-based question answering, our work augments these efforts by integrating semantic clustering, domain-specific summarization, and multi-stage generation into a scalable pipeline for autonomous technical support. To prioritize high-impact issues, the pipeline began by filtering tickets based on anomaly frequency and recency. It then leveraged an instruction-tuned large language model to clean and summarize each ticket into a structured issue-resolution pair. Next, unsupervised semantic clustering was performed to identify subclusters of semantically similar tickets within broader topic clusters. A large language model-based generation module was then applied to create frequently asked questions representing the most dominant issues. A structured evaluation by subject matter experts indicated that our approach transformed technical support tickets into understandable, factually sound, and pertinent frequently asked questions. The ability to extract fine-grained insights from raw ticket data enhances the scalability, efficiency, and responsiveness of technical support workflows in high-performance computing environments, ultimately enabling faster troubleshooting and more accessible pathways to scientific discovery.
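The pipeline's first mechanical steps, prioritizing frequent tickets and grouping near-duplicates, can be sketched without any language model. The following is a hypothetical pure-Python illustration using word-overlap (Jaccard) similarity as a stand-in for the paper's semantic clustering; the ticket fields and the 0.5 threshold are invented assumptions:

```python
# Hypothetical sketch: filter rare tickets, then greedily group the rest
# by Jaccard word overlap as a crude proxy for semantic similarity.
# Field names and the 0.5 threshold are illustrative, not from the paper.

def jaccard(a, b):
    """Word-overlap similarity between two ticket summaries."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def group_tickets(tickets, min_count=2, threshold=0.5):
    """Keep frequent tickets, then greedily cluster similar summaries."""
    frequent = [t for t in tickets if t["count"] >= min_count]
    clusters = []
    for t in sorted(frequent, key=lambda t: -t["count"]):
        for cluster in clusters:
            if jaccard(t["summary"], cluster[0]["summary"]) >= threshold:
                cluster.append(t)
                break
        else:
            clusters.append([t])   # no close match: start a new cluster
    return clusters

tickets = [
    {"summary": "job stuck in queue on cluster", "count": 5},
    {"summary": "job stuck in queue", "count": 3},
    {"summary": "quota exceeded on home directory", "count": 4},
    {"summary": "password reset", "count": 1},   # too rare: filtered out
]
clusters = group_tickets(tickets)
print(len(clusters))  # 2: the "job stuck" issues merge, quota stands alone
```

Each resulting cluster is the unit that would then be handed to a summarization and FAQ-generation stage in a pipeline like the one the paper describes.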
  4. During the preschool years, children's question-explanation exchanges with teachers serve as a powerful mechanism for their early STEM knowledge acquisition. Utilizing naturalistic longitudinal classroom data, we examined how such conversations in an inquiry-based preschool classroom change during an extended scientific inquiry unit. We were particularly interested in information-seeking questions (causal, e.g., "How will you construct a pathway?"; fact-based, e.g., "Where's the marble?"). Videos (n = 18; 14 hours) were collected during a three-week inquiry unit on forces and motion and transcribed in CLAN-CHILDES software at the utterance level. Utterances were coded for delivery (question vs. statement) and content (e.g., fact-based, causal). Although teachers ask more questions than children, we found a significant increase in information-seeking questions during Weeks 2 and 3. We explored the content of information-seeking questions and found that the majority of these questions were asked by teachers and focused on facts. However, the timing of fact-based and causal questions varied: whereas more causal questions occurred in earlier weeks, more fact-based questions were asked towards the end of the inquiry. These findings provide insight into how children's and teachers' questions develop during an inquiry, informing our understanding of early science learning. Even in an inquiry-learning environment, teachers guide interactions, asking questions to support children's learning. Children's information-seeking questions increase during certain weeks, suggesting that providing opportunities to ask questions may allow children to be more active in constructing knowledge. Such findings are important for considering how science questions are naturally embedded in an inquiry-based learning classroom.
  5. Trusted execution environments (TEEs) have been proposed to protect GPU computation for machine learning applications operating on sensitive data. However, existing GPU TEE solutions either require CPU and/or GPU hardware modification to realize TEEs for GPUs, which prevents current systems from adopting them, or rely on untrusted system software such as GPU device drivers. In this paper, we propose using CPU secure enclaves, e.g., Intel SGX, to build GPU TEEs without modifications to existing hardware. To tackle the fundamental limitations of these enclaves, such as no support for I/O operations, we design and develop GEVisor, a formally verified security reference monitor software to enable a trusted I/O path between enclaves and GPU without trusting the GPU device driver. GEVisor operates in the Virtual Machine Extension (VMX) root mode, monitors the host system software to prevent unauthorized access to the GPU code and data outside the enclave, and isolates the enclave GPU context from other contexts during GPU computation. We implement and evaluate GEVisor on a commodity machine with an Intel SGX CPU and an NVIDIA Pascal GPU. Our experimental results show that our approach maintains an average overhead of 13.1% for deep learning and 18% for GPU benchmarks compared to native GPU computation while providing GPU TEEs for existing CPU and GPU hardware. 
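Conceptually, a reference monitor like the one this abstract describes mediates every access to a protected resource and permits it only from the owning security context. The toy sketch below models that access-check logic in plain Python; all names are invented and it bears no relation to GEVisor's formally verified VMX-root implementation:

```python
# Toy model of reference-monitor mediation: GPU memory regions are owned
# by exactly one security context, and accesses from any other context
# are denied. All names here are hypothetical illustrations.

class ReferenceMonitor:
    def __init__(self):
        self._owner = {}          # region id -> owning context

    def claim(self, region, context):
        """Assign an unowned region to a context (e.g., an enclave)."""
        if region in self._owner:
            raise PermissionError(f"region {region} already owned")
        self._owner[region] = context

    def check_access(self, region, context):
        """Allow access only from the region's owning context."""
        return self._owner.get(region) == context

rm = ReferenceMonitor()
rm.claim("gpu_buf_0", context="enclave_A")
print(rm.check_access("gpu_buf_0", "enclave_A"))  # True
print(rm.check_access("gpu_buf_0", "host_os"))    # False
```

The hard part the paper addresses is enforcing this check below untrusted system software; the sketch only captures the policy, not the enforcement mechanism.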