

Title: Managing data constraints in database-backed web applications
Database-backed web applications manipulate large amounts of persistent data, and such applications often contain constraints that restrict data length, data value, and other data properties. Such constraints are critical in ensuring the reliability and usability of these applications. In this paper, we present a comprehensive study on where data constraints are expressed, what they are about, how often they evolve, and how their violations are handled. The results show that developers struggle with maintaining consistent data constraints and checking them across different components and versions of their web applications, leading to various problems. Guided by our study, we developed checking tools and API enhancements that can automatically detect such problems and improve the quality of such applications.
Award ID(s):
2027516
PAR ID:
10162185
Journal Name:
ICSE
ISSN:
2184-2728
Sponsoring Org:
National Science Foundation
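The kind of cross-component inconsistency the abstract describes can be illustrated with a minimal sketch. This example is not from the paper; the names, limits, and checks are all hypothetical, standing in for an application-layer validation and a database-schema column limit that have drifted apart:

```python
# Illustrative sketch (hypothetical names and limits): the same "name"
# constraint expressed in two components of a web application. When the
# two definitions drift apart, data accepted by the application layer
# can still be rejected by the database layer.

APP_MAX_NAME_LEN = 255   # validation in application (model) code
DB_MAX_NAME_LEN = 191    # column limit in the database schema

def app_accepts(name: str) -> bool:
    """Application-level validation, as a model validator might express it."""
    return 0 < len(name) <= APP_MAX_NAME_LEN

def db_accepts(name: str) -> bool:
    """Database-level check, as a VARCHAR(191) column would enforce it."""
    return len(name) <= DB_MAX_NAME_LEN

def find_inconsistent(names):
    """Inputs the application accepts but the database would reject."""
    return [n for n in names if app_accepts(n) and not db_accepts(n)]

names = ["ok", "x" * 200, "y" * 300]
print(find_inconsistent(names))  # the 200-char name slips past the app check
```

A checking tool of the sort the paper describes would flag the mismatch between the two limits statically, before any runtime error occurs.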
More Like this
  1. Many web applications use databases for persistent data storage, and Object Relational Mapping (ORM) frameworks are a common way to develop such database-backed web applications. Unfortunately, developing efficient ORM applications is challenging, as the ORM framework hides the underlying database query generation and execution. This problem is becoming more severe as these applications need to process an increasingly large amount of persistent data. Recent research has targeted specific aspects of performance problems in ORM applications. However, there has not been any systematic study to identify common performance anti-patterns in such real-world applications, how they affect resulting application performance, and how to remedy them. In this paper, we try to answer these questions through a comprehensive study of 12 representative real-world ORM applications. We generalize 9 ORM performance anti-patterns from more than 200 performance issues that we obtained by studying the applications' bug-tracking systems and profiling their latest versions. To validate our findings, we manually fixed 64 performance issues in those latest versions and obtained a median speedup of 2× (up to 39×), with fewer than 5 lines of code changed in most cases. Many of the issues we found have been confirmed by developers, and we have implemented ways to identify other code fragments with similar issues.
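The abstract does not name its 9 anti-patterns, but one widely known example of this kind is the N+1 query problem, where an ORM silently issues one query per object instead of a single batched query. The sketch below is purely illustrative and uses an in-memory stand-in for a database, with a counter simulating round trips; none of it comes from the paper:

```python
# Illustrative sketch (not from the paper): the classic N+1 query
# anti-pattern, simulated with an in-memory "database" and a counter
# standing in for database round trips.

QUERY_COUNT = 0

POSTS = [{"id": i, "author_id": i % 3} for i in range(9)]
AUTHORS = {0: "ada", 1: "bob", 2: "eve"}

def query_author(author_id):
    global QUERY_COUNT
    QUERY_COUNT += 1           # one round trip per call
    return AUTHORS[author_id]

def query_authors(author_ids):
    global QUERY_COUNT
    QUERY_COUNT += 1           # one batched round trip (e.g., WHERE id IN (...))
    return {i: AUTHORS[i] for i in author_ids}

# Anti-pattern: one query per post -> 9 round trips.
QUERY_COUNT = 0
slow = [query_author(p["author_id"]) for p in POSTS]
print(QUERY_COUNT)  # 9

# Remedy: fetch all needed authors in one batched query -> 1 round trip.
QUERY_COUNT = 0
authors = query_authors({p["author_id"] for p in POSTS})
fast = [authors[p["author_id"]] for p in POSTS]
print(QUERY_COUNT)  # 1
```

In real ORM code the batched form typically corresponds to an eager-loading hint, which matches the abstract's observation that most fixes take only a few lines.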
  2. With phishing attacks, password breaches, and brute-force login attacks presenting constant threats, it is clear that passwords alone are inadequate for protecting the web applications entrusted with our personal data. Instead, web applications should practice defense in depth and give users multiple ways to secure their accounts. In this paper we propose login rituals, which define actions that a user must take to authenticate, and web tripwires, which define actions that a user must not take to remain authenticated. These actions outline the expected behavior of users familiar with their individual setups on applications they use often, and we show how to detect and prevent intrusions from web attackers lacking this familiarity with their victim's behavior. We design a modular, application-agnostic system that incorporates these two mechanisms, adding a layer of deception-based security to existing web applications without modifying the applications themselves. In addition to testing our system and evaluating its performance when applied to five popular open-source web applications, we demonstrate the promise of these mechanisms through a user study. Specifically, we evaluate the detection rate of tripwires against simulated attackers, 88% of whom clicked on at least one tripwire. We also observe web users' creation of personalized login rituals and evaluate the practicality and memorability of these rituals over time. All 39 user-created rituals were unique, and 79% of users were able to reproduce their rituals even a week after creation.
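The two mechanisms can be sketched in a few lines. This is not the authors' system; the action strings, user names, and data structures are all hypothetical, and a real deployment would observe actions from actual HTTP traffic:

```python
# Illustrative sketch (not the authors' system; names are hypothetical).
# A tripwire is an action the legitimate user knows never to take: any
# observed tripwire action marks the session as likely hijacked. A login
# ritual is an ordered sequence of actions the user must perform.

TRIPWIRES = {
    "alice": {"click:old-profile-link", "visit:/export-all"},
}

RITUALS = {
    "alice": ["visit:/login", "click:logo", "type:passphrase"],
}

def session_compromised(user, observed_actions):
    """True if the session performed any of the user's tripwire actions."""
    return any(a in TRIPWIRES.get(user, set()) for a in observed_actions)

def ritual_completed(user, observed_actions):
    """True if the user's ritual appears, in order, within the session's
    actions (other actions may be interleaved)."""
    it = iter(observed_actions)
    return all(step in it for step in RITUALS.get(user, []))
```

The in-order subsequence check means an attacker must reproduce both the ritual steps and their order, while avoiding every tripwire, which is the familiarity asymmetry the abstract relies on.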
  3. This report analyzes issues related to web browser security and privacy. The browser applications examined are Google Chrome, Bing, Mozilla Firefox, Internet Explorer, Microsoft Edge, Safari, and Opera. In recent months, web browsers have seen an increase in daily users, and with more users who may not be well versed in data security and privacy comes an increase in attacks. The study discusses the pros and cons of each web browser, how many of them have been hacked, how often and why they have been hacked, their security flaws, and more. It draws on prior research and a user survey to support its analysis and provide recommendations on the topic.
  4. Several techniques are used by requirements engineering practitioners to address difficult problems such as specifying precise requirements in inherently ambiguous natural language and ensuring the consistency of requirements. Often, these problems are addressed by building processes and tools that combine multiple techniques, where the output of one technique becomes the input to the next. While powerful, such pipelines are not without problems: inherent errors in each technique may leak into the subsequent step of the process. We model and study one such process, for checking the consistency of temporal requirements, and assess error leakage and wasted time. We perform an analysis of the input factors of our model to determine the effect that sources of uncertainty may have on the final accuracy of the consistency checking process. Convinced that error leakage exists and negatively impacts the results of the overall consistency checking process, we perform a second simulation to assess its impact on analysts' efforts to check requirements consistency. We show that analysts' effort varies depending on the precision and recall of the subprocesses, and that the number and capability of analysts affect their effort. We share insights gained and discuss applicability to other processes built from piped techniques.
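The error-leakage effect can be illustrated with a toy simulation. This is not the paper's model; the stage accuracies and the independence assumption are invented for illustration, but they show why errors compound when one imperfect technique feeds the next:

```python
# Illustrative sketch (not the paper's model): error leakage through a
# pipeline of two imperfect analysis steps. Each stage mislabels a
# fraction of its inputs; errors from stage 1 leak into stage 2, so
# end-to-end accuracy is roughly the product-style combination of
# per-stage accuracies (here assumed independent).

import random

random.seed(0)

def stage(labels, accuracy):
    """Pass each label through, flipping it with probability 1 - accuracy."""
    return [l if random.random() < accuracy else (not l) for l in labels]

truth = [True] * 10_000                  # ground-truth consistency verdicts
after1 = stage(truth, accuracy=0.9)      # e.g., NL-to-formal translation
after2 = stage(after1, accuracy=0.9)     # e.g., the consistency checker

end_to_end = sum(a == t for a, t in zip(after2, truth)) / len(truth)
print(round(end_to_end, 2))  # ~0.82 in expectation: 0.9*0.9 + 0.1*0.1
```

Under these toy assumptions, two 90%-accurate stages yield only about 82% end-to-end accuracy, and the leaked errors are what analysts must then spend effort finding.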

     
  5. Biodiversity informatics workbenches and aggregators that make their data externally accessible via application programming interfaces (APIs) facilitate the development of customized applications that fit the needs of a diverse range of communities. In the past, the technical skills required to host web-facing applications placed constraints on many researchers: they either needed to find technical help or expand their own skills. These limits are now significantly reduced when free or low-cost website hosting is combined with small, well-documented applications that require minimal configuration to set up. We illustrate two applications that take advantage of this approach: an interactive key engine (presently named "distinguish") and TaxonPages, a taxon page service application. Both applications make use of TaxonWorks' API. We discuss the limitations of this approach (e.g., the user must be online to access the data behind the application) as well as its advantages (e.g., the application server can run locally on the user's own computer, and the underlying data remain accessible in more technical formats).