Title: Interactive hydrological modelling and simulation on client-side web systems: an educational case study
Abstract

Computational hydrological models and simulations are fundamental components of the workflow of contemporary hydroscience research, education, and professional engineering activities. In support of hydrological modelling efforts, web-enabled tools for data processing, storage, computation, and visualization have proliferated. Most of these efforts rely on server resources for computation and data tasks and client-side resources for visualization. However, continued advancements in in-browser, client-side compute performance present an opportunity to further leverage client-side resources. Towards this end, we present an operational rainfall-runoff model and simulation engine running entirely on the client side using the JavaScript programming language. To demonstrate potential uses, we also present an easy-to-use in-browser interface designed for hydroscience education. Although the use case presented here is self-contained, the core technologies can extend to leverage multi-core processing on single machines and parallelization capabilities of multiple clients or JavaScript-enabled servers. These possibilities suggest that client-side hydrological simulation can play a central role in a dynamic, interconnected ecosystem of web-ready hydrological tools.
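The paper's simulation engine is not reproduced here, but the idea of a rainfall-runoff simulation running entirely in the browser can be pictured with a minimal, hypothetical single-linear-reservoir model in plain JavaScript (the function name and parameters are illustrative, not the authors' API):

```javascript
// Hypothetical single-linear-reservoir rainfall-runoff model (an
// illustrative sketch, not the authors' engine). Storage gains
// rainfall each time step and drains as runoff q = k * storage.
function simulateLinearReservoir(rainfall, k = 0.2, initialStorage = 0) {
  let storage = initialStorage;
  const runoff = [];
  for (const p of rainfall) {
    storage += p;           // rainfall depth added to storage (e.g. mm)
    const q = k * storage;  // outflow proportional to current storage
    storage -= q;
    runoff.push(q);
  }
  return runoff;
}

// Runs identically in a browser tab or in Node: no server round trips.
const hydrograph = simulateLinearReservoir([10, 0, 0, 5, 0], 0.2);
```

Because such a loop touches no server, it can be re-run interactively as a student adjusts parameters, which is the educational use case the abstract describes.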

Award ID(s):
1835338
NSF-PAR ID:
10378538
Author(s) / Creator(s):
Publisher / Repository:
DOI PREFIX: 10.2166
Date Published:
Journal Name:
Journal of Hydroinformatics
Volume:
24
Issue:
6
ISSN:
1464-7141
Page Range / eLocation ID:
p. 1194-1206
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Web pages today commonly include large amounts of JavaScript code in order to offer users a dynamic experience. These scripts often make pages slow to load, partly due to a fundamental inefficiency in how browsers process JavaScript content: browsers make it easy for web developers to reason about page state by serially executing all scripts on any frame in a page, but as a result, fail to leverage the multiple CPU cores that are readily available even on low-end phones. In this paper, we show how to address this inefficiency without requiring pages to be rewritten or browsers to be modified. The key to our solution, Horcrux, is to account for the non-determinism intrinsic to web page loads and the constraints placed by the browser’s API for parallelism. Horcrux-compliant web servers perform offline analysis of all the JavaScript code on any frame they serve to conservatively identify, for every JavaScript function, the union of the page state that the function could access across all loads of that page. Horcrux’s JavaScript scheduler then uses this information to judiciously parallelize JavaScript execution on the client-side so that the end-state is identical to that of a serial execution, while minimizing coordination and offloading overheads. Across a wide range of pages, phones, and mobile networks covering web workloads in both developed and emerging regions, Horcrux reduces median browser computation delays by 31-44% and page load times by 18-37%. 
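The serial-equivalence constraint described above can be pictured with a toy conflict check: two functions may run concurrently only if neither writes state the other reads or writes. The sketch below is hypothetical (the task shape and names are invented, and the ordering constraints a real scheduler must respect among conflicting tasks are omitted); it is not Horcrux's analysis or scheduler:

```javascript
// Each task declares the page state it may read and write, as a
// conservative offline analysis would compute. (Hypothetical names.)
function conflicts(a, b) {
  for (const s of a.writes) {
    if (b.reads.has(s) || b.writes.has(s)) return true;
  }
  for (const s of b.writes) {
    if (a.reads.has(s)) return true;
  }
  return false;
}

// Greedy grouping into batches whose members are pairwise
// conflict-free; such batches are candidates for dispatch to
// Web Workers. A real scheduler must also preserve the serial
// order between conflicting tasks, which this sketch omits.
function scheduleBatches(tasks) {
  const batches = [];
  for (const t of tasks) {
    const batch = batches.find((b) => b.every((u) => !conflicts(t, u)));
    if (batch) batch.push(t);
    else batches.push([t]);
  }
  return batches;
}

const tasks = [
  { name: "render", reads: new Set(["dom"]), writes: new Set(["dom"]) },
  { name: "analytics", reads: new Set(["url"]), writes: new Set(["log"]) },
  { name: "ads", reads: new Set(["dom"]), writes: new Set(["dom"]) },
];
const batches = scheduleBatches(tasks);
```

Here "render" and "analytics" touch disjoint state and land in one batch, while "ads" conflicts with "render" on the DOM and must run separately.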
  2. Modern web applications are distributed across a browser-based client and a cloud-based server. Distribution provides access to remote resources, accessed over the web and shared by clients. Much of the complexity of inspecting and evolving web applications lies in their distributed nature. Also, the majority of mature program analysis and transformation tools work only with centralized software. Inspired by business process re-engineering, in which remote operations can be insourced back in house to restructure and outsource anew, we bring an analogous approach to the re-engineering of web applications. Our target domain is full-stack JavaScript applications that implement both the client and server code in this language. Our approach is enabled by Client Insourcing, a novel automatic refactoring that creates a semantically equivalent centralized version of a distributed application. This centralized version is then inspected, modified, and redistributed to meet new requirements. After describing the design and implementation of Client Insourcing, we demonstrate its utility and value in addressing changes in security, reliability, and performance requirements. By reducing the complexity of the non-trivial program inspection and evolution tasks performed to meet these requirements, our approach can become a helpful aid in the re-engineering of web applications in this domain.
  3. Mobile web browsing remains slow despite many efforts to accelerate page loads. Like others, we find that client-side computation (in particular, JavaScript execution) is a key culprit. Prior solutions to mitigate computation overheads, however, suffer from security, privacy, and deployability issues, hindering their adoption. To sidestep these issues, we propose a browser-based solution in which every client reuses identical computations from its prior page loads. Our analysis across roughly 230 pages reveals that, even on a modern smartphone, such an approach could reduce client-side computation by a median of 49% on pages which are most in need of such optimizations. 
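The reuse idea, skipping JavaScript whose inputs match a prior load, resembles memoization keyed on a function's identity and its inputs. The following in-memory sketch is hypothetical (a browser mechanism would need persistent storage and safe keying, which this omits) and is not the paper's system:

```javascript
// Hypothetical cross-load computation cache: results are keyed by a
// function name plus its serialized arguments, so an identical
// computation on a later load is served from the cache.
const cache = new Map();

function reuse(name, args, compute) {
  const key = name + ":" + JSON.stringify(args);
  if (cache.has(key)) return cache.get(key); // identical prior computation
  const result = compute(...args);
  cache.set(key, result);
  return result;
}

let calls = 0;
const layout = (w, h) => { calls++; return w * h; };

reuse("layout", [1280, 720], layout);              // computed once
const area = reuse("layout", [1280, 720], layout); // served from cache
```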
  4. Secure multi-party computation (MPC) is a cryptographic primitive that enables several parties to compute jointly over their collective private data sets. MPC’s objective is to federate trust over several computing entities such that a large threshold (e.g., a majority) must collude before sensitive or private input data can be breached. Over the past decade, several general and special-purpose software frameworks have been developed that provide data contributors with control over deciding whom to trust to perform the calculation and (separately) to receive the output. However, one crucial component remains centralized within all existing MPC frameworks: the distribution of the MPC software application itself. For desktop applications, trust in the code must be determined once at download time. For web-based JavaScript applications subject to trust on every use, all data contributors across several invocations of MPC must maintain centralized trust in a single code delivery service. In this work, we design and implement a federated code delivery mechanism for web-based MPC such that data contributors only execute code that has been accredited by several trusted auditors (the contributor aborts if consensus is not reached). Our client-side Chrome browser extension is independent of any MPC scheme and has a trusted computing base of fewer than 100 lines of code. 
  5. Summary

    Statistical inference involves drawing scientifically‐based conclusions describing natural processes or observable phenomena from datasets with intrinsic random variation. We designed, implemented, and validated a new portable randomization‐based statistical inference infrastructure (http://socr.umich.edu/HTML5/Resampling_Webapp) that blends research‐driven data analytics and interactive learning, and provides a backend computational library for managing large amounts of simulated or user‐provided data.

    The core of this framework is a modern randomization webapp, which may be invoked on any device supporting a JavaScript‐enabled web browser. We demonstrate the use of these resources to analyse proportion, mean and other statistics using simulated (virtual experiments) and observed (e.g. Acute Myocardial Infarction, Job Rankings) data. Finally, we draw parallels between parametric inference methods and their distribution‐free alternatives.

    The Randomization and Resampling webapp can be used for data analytics, as well as for formal, in‐class and informal, out‐of‐the‐classroom learning and teaching of different scientific concepts. Such concepts include sampling, random variation, computational statistical inference and data‐driven analytics. The entire scientific community may utilize, test, expand, modify or embed these resources (data, source‐code, learning activity, webapp) without any restrictions.
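A randomization-based alternative to parametric inference, of the kind this webapp teaches, can be sketched with a small bootstrap in JavaScript. This is a hypothetical illustration (function names invented, `Math.random` used for brevity), not the SOCR webapp's backend library:

```javascript
// Hypothetical bootstrap sketch: resample the data with replacement
// many times to approximate the sampling distribution of the mean,
// with no parametric distributional assumptions.
function bootstrapMeans(data, nResamples, rng = Math.random) {
  const means = [];
  for (let i = 0; i < nResamples; i++) {
    let sum = 0;
    for (let j = 0; j < data.length; j++) {
      sum += data[Math.floor(rng() * data.length)]; // draw with replacement
    }
    means.push(sum / data.length);
  }
  return means;
}

const data = [2, 4, 4, 4, 5, 5, 7, 9];
const means = bootstrapMeans(data, 1000);
// Percentiles of `means` would give a randomization-based interval
// for the population mean.
```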

     