Title: A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research
An increasingly popular set of techniques adopted by software engineering (SE) researchers to automate development tasks are those rooted in the concept of Deep Learning (DL). The popularity of such techniques largely stems from their automated feature engineering capabilities, which aid in modeling software artifacts. However, due to the rapid pace at which DL techniques have been adopted, it is difficult to distill the successes, failures, and opportunities of the current research landscape. In an effort to bring clarity to this cross-cutting area of work, from its modern inception to the present, this article presents a systematic literature review of research at the intersection of SE and DL. The review canvasses work appearing in the most prominent SE and DL conferences and journals and spans 128 papers across 23 unique SE tasks. We center our analysis around the components of learning, a set of principles that governs the application of machine learning (ML) techniques to a given problem domain, discussing several aspects of the surveyed work at a granular level. The end result of our analysis is a research roadmap that both delineates the foundations of DL techniques applied to SE research and highlights likely areas of fertile exploration for the future.
Award ID(s): 2007246
PAR ID: 10438698
Author(s) / Creator(s):
Date Published:
Journal Name: ACM Transactions on Software Engineering and Methodology
Volume: 31
Issue: 2
ISSN: 1049-331X
Page Range / eLocation ID: 1 to 58
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Deep Learning (DL) techniques are increasingly being incorporated into critical software systems today. DL software is buggy too. Recent work in SE has characterized these bugs, studied fix patterns, and proposed detection and localization strategies. In this work, we introduce a preventative measure. We propose design by contract for DL libraries, DL Contract for short, to document the properties of DL libraries and provide developers with a mechanism to identify bugs during development. While DL Contract builds on traditional design-by-contract techniques, it must address unique challenges. In particular, we need to document properties of the training process that are not visible at the functional interface of the DL libraries. To solve these problems, we have introduced mechanisms that allow developers to specify properties of the model architecture, data, and training process. We have designed and implemented DL Contract for Python-based DL libraries and used it to document the properties of Keras, a well-known DL library. We evaluate DL Contract in terms of effectiveness, runtime overhead, and usability. To evaluate its utility, we developed 15 sample contracts specifically for training problems and structural bugs, and adopted four well-vetted benchmarks from prior works on DL bug detection and repair. In terms of effectiveness, DL Contract correctly detects 259 bugs in 272 real-world buggy programs drawn from these benchmarks. We found that the runtime overhead of DL Contract is fairly minimal on these benchmarks. Lastly, to evaluate usability, we conducted a survey of twenty participants who used DL Contract to find and fix bugs. The results reveal that DL Contract can be very helpful to DL application developers when debugging their code.
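The design-by-contract idea above can be sketched in plain Python. The decorator, class names, and the softmax/cross-entropy property below are illustrative assumptions, not DL Contract's actual API:

```python
# Hypothetical sketch of a "DL contract" in the spirit of design by contract
# for DL libraries; dl_contract and ModelSpec are illustrative names.
from dataclasses import dataclass

class ContractViolation(Exception):
    """Raised when a documented library property is violated."""

@dataclass
class ModelSpec:
    last_activation: str   # activation of the final layer
    loss: str              # training loss function

def dl_contract(precondition, message):
    """Attach a precondition check to a training entry point."""
    def decorate(train_fn):
        def wrapper(spec, *args, **kwargs):
            if not precondition(spec):
                raise ContractViolation(message)
            return train_fn(spec, *args, **kwargs)
        return wrapper
    return decorate

# Example contract: categorical cross-entropy expects a softmax output layer,
# a common structural bug in DL programs.
@dl_contract(
    precondition=lambda s: not (s.loss == "categorical_crossentropy"
                                and s.last_activation != "softmax"),
    message="categorical_crossentropy requires a softmax final layer",
)
def train(spec):
    return "training started"

print(train(ModelSpec(last_activation="softmax", loss="categorical_crossentropy")))
try:
    train(ModelSpec(last_activation="relu", loss="categorical_crossentropy"))
except ContractViolation as e:
    print("contract violation:", e)
```

The key design point is that the property is checked at the training entry point, where architecture, data, and loss choices come together, rather than at any single library function.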
  2. Information Retrieval (IR) plays a pivotal role in diverse Software Engineering (SE) tasks, e.g., bug localization and triaging, bug report routing, code retrieval, requirements analysis, etc. SE tasks operate on diverse types of documents, including code, text, stack traces, and structured, semi-structured, and unstructured metadata that often contain specialized vocabularies. As the performance of any IR-based tool critically depends on the underlying document types, and given the diversity of SE corpora, it is essential to understand which models work best for which types of SE documents and tasks. We empirically investigate the interaction between IR models and document types for two representative SE tasks (bug localization and relevant project search), carefully chosen because they require a diverse set of SE artifacts (mixtures of code and text), and confirm that the models' performance varies significantly with the mix of document types. Leveraging this insight, we propose a generalized framework, SRCH, to automatically select the most favorable IR model(s) for a given SE task. We evaluate SRCH with respect to these two tasks and confirm its effectiveness. Our preliminary user study shows that SRCH's intelligent adaptation of the IR model(s) to the task at hand not only improves precision and recall for SE tasks but may also improve users' satisfaction.
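As a rough illustration of the kind of IR model whose effectiveness varies with document type, here is a minimal TF-IDF retrieval sketch in plain Python; the documents and query are invented:

```python
# Minimal TF-IDF retrieval sketch, stdlib only. Scoring a query against a
# mixed corpus of code and text hints at why model performance depends on
# the document type mix.
import math
from collections import Counter

def tokenize(doc):
    return doc.lower().split()

def tfidf_scores(query, docs):
    """Score each document against the query with TF-IDF weighting."""
    n = len(docs)
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                       # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    q_terms = tokenize(query)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)               # term frequency in this document
        score = sum(
            tf[t] * math.log((n + 1) / (df[t] + 1))  # smoothed IDF
            for t in q_terms if t in tf
        )
        scores.append(score)
    return scores

docs = [
    "public void openFile(String path) throws IOException",  # code
    "crash when opening a file with a long path name",        # bug report text
    "update user interface colors",                            # unrelated text
]
scores = tfidf_scores("file path crash", docs)
best = max(range(len(docs)), key=lambda i: scores[i])
print(best)  # → 1 (the bug report matches the natural-language query best)
```

Note how the code document scores poorly because identifier tokens like `openFile(String` never match whole query words, one concrete reason code retrieval often needs different tokenization or models than text retrieval.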
  3. This article provides a systematic review of research related to Human–Computer Interaction (HCI) techniques supporting training and learning in various domains, including medicine, healthcare, and engineering. The focus is on HCI techniques involving Extended Reality (XR) technology, which encompasses Virtual Reality, Augmented Reality, and Mixed Reality. HCI-based research is assuming more importance with the rapid adoption of XR tools and techniques in various training and learning contexts, including education. There are many challenges in the adoption of HCI approaches, which results in a need for a comprehensive and systematic review of such HCI methods in various domains. This article addresses this need by providing a systematic literature review of a cross-section of XR-based HCI approaches proposed so far. The PRISMA-guided search strategy identified 1156 articles for abstract review. Irrelevant abstracts were discarded. The whole body of each remaining article was reviewed, and those not linked to the scope of our specific issue were also eliminated. Following the application of inclusion/exclusion criteria, 69 publications were chosen for review. This article is divided into the following sections: Introduction; Research methodology; Literature review; Threats to validity; Future research and Conclusion. Detailed classifications (pertaining to HCI criteria and concepts such as affordance, training, and learning techniques) have also been included, based on different parameters derived from the analysis of the research techniques adopted by various investigators. The article concludes with a discussion of the key challenges for this HCI area along with future research directions. A review of the research outcomes from these publications underscores the potential for greater success when such HCI-based approaches are adopted during 3D-based training interactions. This higher degree of success may be due to the emphasis on the design of user-friendly (and user-centric) training environments, interactions, and processes that positively impact the cognitive abilities of users and their respective learning/training experiences. Through answers to three exploratory study questions, we discovered data validating XR-HCI as an ascending method that brings a new paradigm by enhancing skills and safety while reducing costs and learning time. We believe that the findings of this study will aid academics in developing new research avenues that will help XR-HCI applications mature and become more widely adopted.
  4. Abstract: Exploring new techniques to improve the prediction of tropical cyclone (TC) formation is essential for operational practice. Using convolutional neural networks, this study shows that deep learning can provide a promising capability for predicting TC formation from a given set of large-scale environments at certain forecast lead times. Specifically, two common deep-learning architectures, the residual network (ResNet) and UNet, are used to examine TC formation in the Pacific Ocean. With a set of large-scale environments extracted from the NCEP–NCAR reanalysis during 2008–21 as input and the TC labels obtained from the best track data, we show that both ResNet and UNet reach their maximum forecast skill at the 12–18-h forecast lead time. Moreover, both architectures perform best when using a large domain covering most of the Pacific Ocean for input data, as compared to a smaller subdomain in the western Pacific. While UNet can provide additional information about TC formation location, it generally performs worse than ResNet across the accuracy metrics. The deep-learning approach in this study presents an alternative way to predict TC formation beyond the traditional vortex-tracking methods in current numerical weather prediction. Significance Statement: This study presents a new approach for predicting tropical cyclone (TC) formation based on deep learning (DL). Using two common DL architectures from visualization research and a set of large-scale environments in the Pacific Ocean extracted from reanalysis data, we show that DL has an optimal capability of predicting TC formation at the 12–18-h lead time. Examining the DL performance for different domain sizes shows that the use of a large domain for input data can help capture some far-field information needed for predicting TC formation. The DL approach in this study demonstrates an alternative way to predict or detect TC formation beyond the traditional vortex-tracking methods used in current numerical weather prediction.
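The convolutional building block behind both ResNet and UNet can be illustrated with a toy 2D convolution over a gridded field. All values below are made up; real inputs would come from reanalysis fields:

```python
# Toy 2D convolution, the core operation of CNN architectures like ResNet
# and UNet. A single filter slides over a gridded "environmental field"
# (e.g., a vorticity map) and produces a feature map.
def conv2d(grid, kernel):
    kh, kw = len(kernel), len(kernel[0])
    gh, gw = len(grid), len(grid[0])
    out = []
    for i in range(gh - kh + 1):          # valid (no-padding) convolution
        row = []
        for j in range(gw - kw + 1):
            s = sum(grid[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 field with a localized "vortex-like" maximum in the lower-right.
field = [
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 3, 2],
    [0, 1, 2, 1],
]
# A 2x2 averaging filter; a trained CNN would learn such weights instead.
kernel = [[0.25, 0.25], [0.25, 0.25]]
feature_map = conv2d(field, kernel)
print(feature_map)  # strongest response at the lower-right, near the "vortex"
```

Stacking many such learned filters, with nonlinearities and skip connections, is what lets ResNet and UNet pick up spatial precursors of TC formation from large input domains.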
  5. Background: Software Package Registries (SPRs) are an integral part of the software supply chain. These collaborative platforms unite contributors, users, and packages, and they streamline package management. Much engineering work focuses on synthesizing packages from SPRs into a downstream project. Prior work has thoroughly characterized the SPRs associated with traditional software, such as NPM (JavaScript) and PyPI (Python). Pre-Trained Model (PTM) registries are an emerging class of SPR of increasing importance, because they support the deep learning supply chain. Aims: A growing body of empirical research has examined PTM registries from various angles, such as vulnerabilities, reuse processes, and evolution. However, no existing research synthesizes these studies to provide a systematic understanding of the current knowledge. Furthermore, much of the existing research includes unsupported qualitative claims and lacks sufficient quantitative analysis. Our research aims to fill these gaps by providing a thorough knowledge synthesis and using it to inform further quantitative analysis. Methods: To consolidate existing knowledge on PTM reuse, we first conduct a systematic literature review (SLR). We then observe that some of the claims are qualitative and lack quantitative evidence. We identify quantifiable metrics associated with those claims and measure them in order to substantiate these claims. Results: From our SLR, we identify 12 claims about PTM reuse on the HuggingFace platform, 4 of which lack quantitative validation. We successfully test 3 of these claims through a quantitative analysis, and directly compare one with traditional software. Our findings corroborate qualitative claims with quantitative measurements. Our two most notable findings are: (1) PTMs have a significantly higher turnover rate than traditional software, indicating a dynamic and rapidly evolving reuse environment within the PTM ecosystem; and (2) there is a strong correlation between documentation quality and PTM popularity. Conclusions: Our findings validate several qualitative research claims with concrete metrics, confirming prior qualitative and case study research. Our measures reveal further dynamics of PTM reuse, motivating further research infrastructure and new kinds of measurements.
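The documentation-popularity correlation described above can be sketched with a simple Pearson correlation in plain Python. The model-card lengths and download counts below are invented, not HuggingFace data:

```python
# Sketch of the kind of quantitative check described above: correlating a
# crude documentation-quality proxy (model-card length) with popularity
# (downloads). All numbers are invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

doc_lengths = [120, 800, 450, 2000, 60, 1500]      # model-card word counts (made up)
downloads   = [900, 5200, 3100, 14000, 400, 9800]  # monthly downloads (made up)

r = pearson(doc_lengths, downloads)
print(round(r, 3))  # a value near +1 indicates a strong positive correlation
```

In practice one would pull real model-card and download data from the registry's API and likely prefer a rank correlation (e.g., Spearman), which is robust to the heavy-tailed download distributions typical of package registries.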