Title: Workshop Report: Rethinking NSF’s Computational Ecosystem for 21st Century Science and Engineering
This report summarizes the discussions from a workshop convened at NSF on May 30-31, 2018, in Alexandria, VA. The overarching objective of the workshop was to rethink the nature and composition of the NSF-supported computational ecosystem in light of changing application requirements and evolving resource and technology landscapes. The workshop included roughly 50 participants drawn from high-performance computing (HPC) centers, campus computing facilities, cloud service providers (academic and commercial), and distributed resource providers. Participants spanned both large research institutions and smaller universities. Organized by Daniel Reed (University of Utah, chair), David Lifka (Cornell University), David Swanson (University of Nebraska), Rommie Amaro (UCSD), and Nancy Wilkins-Diehr (UCSD/SDSC), the workshop was motivated by the following observations. First, there have been dramatic changes in the number and nature of applications using NSF-funded resources, as well as in their resource needs. As a result, there are new demands on the type (e.g., data-centric) and location (e.g., close to the data or the users) of resources, as well as new usage modes (e.g., on-demand and elastic). Second, there have been dramatic changes in the landscape of technologies, resources, and delivery mechanisms, spanning large scientific instruments, ubiquitous sensors, and cloud services, among others.
Award ID(s):
1836997
PAR ID:
10384269
Editor(s):
Reed, Daniel A.; Lifka, David; Swanson, David; Amaro, Rommie; Wilkins-Diehr, Nancy
Journal Name:
NSF Workshop Reports
Sponsoring Org:
National Science Foundation
More Like this
  1. On August 9-10, 2023, the Thomas J. O’Keefe Institute for Sustainable Supply of Strategic Minerals at Missouri University of Science and Technology (Missouri S&T) hosted the third annual workshop on ‘Resilient Supply of Critical Minerals’. The workshop was funded by the National Science Foundation (NSF) and was attended by 218 participants: 128 attended in person at the Havener Center on the Missouri S&T campus in Rolla, Missouri, USA, and another 90 attended online via Zoom. Fourteen participants (including nine students) received travel support through the NSF grant to attend the conference in Rolla. Additionally, the online participation fee was waived for another six students and early-career researchers to attend the workshop virtually. Of the 218 participants, 190 stated their sector of employment during registration, showing that 87 participants were from academia (including 32 students), 62 from the private sector, and 41 from government agencies. Four topical sessions were covered: A. The Critical Mineral Potential of the USA: Evaluation of existing, and exploration for new, resources. B. Mineral Processing and Recycling: Maximizing critical mineral recovery from existing production streams. C. Critical Mineral Policies: Toward effective and responsible governance. D. Resource Sustainability: Ethical and environmentally sustainable supply of critical minerals. Each topical session was composed of two keynote lectures, complemented by oral and poster presentations from the workshop participants. Additionally, a panel discussion with panelists from academia, the private sector, and government agencies addressed ‘How to grow the American critical minerals workforce’. The 2023 workshop was followed by a post-workshop field trip to the lead-zinc mining operations of the Doe Run Company in southeast Missouri, attended by 18 workshop participants from academia (n=10, including 4 students), the private sector (n=4), and government institutions (n=4). Discussions during the workshop led to the following suggestions to increase the domestic supply of critical minerals: (i) research to better understand the geologic critical mineral potential of the USA, including primary reserves/resources, historic mine wastes, and mineral exploration potential; (ii) development of novel extraction techniques targeted at the recovery of critical minerals as co-products from existing production streams, mine waste materials, and recyclables; (iii) faster and more transparent permitting processes for mining and mineral processing operations; (iv) a more environmentally sustainable and ethical approach to mining and mineral processing; and (v) development of a highly skilled critical minerals workforce. This workshop report provides a detailed summary of the workshop discussions and describes a way forward for this workshop series for 2024 and beyond.
  2. Ever since commercial cloud offerings began appearing in 2006, the landscape of cloud computing has undergone remarkable changes, with the emergence of many different types of service offerings, developer productivity tools, and new application classes, as well as the manifestation of cloud functionality closer to the user at the edge. The notion of utility computing, however, has remained constant throughout this evolution: cloud users always seek to minimize the cost of leasing cloud resources while maximizing their use. Cloud providers, on the other hand, try to maximize their profits while assuring the service-level objectives of cloud-hosted applications and keeping operational costs low. All of these outcomes require systematic and sound cloud engineering principles. The aim of this paper is to highlight the importance of cloud engineering, survey the landscape of best practices in cloud engineering and its evolution, discuss many of the existing cloud engineering advances, and identify both the inherent technical challenges and the research opportunities for the future of cloud computing in general and cloud engineering in particular.
  3. In the Internet of Things (IoT) era, edge computing is a promising paradigm for improving the quality of service of latency-sensitive applications by filling the gap between IoT devices and the cloud infrastructure. Highly geo-distributed edge computing resources managed by independent and competing service providers pose new challenges in terms of resource allocation and effective resource sharing to achieve a globally efficient allocation. In this paper, we propose a novel blockchain-based model for allocating computing resources in an edge computing platform that allows service providers to establish resource-sharing contracts with edge infrastructure providers a priori using smart contracts in Ethereum. The smart contract in the proposed model acts as the auctioneer and replaces the trusted third party that would otherwise handle the auction. The blockchain-based auctioning protocol increases the transparency of auction-based resource allocation for the participating edge service and infrastructure providers. The sealed-bid and bid-revealing methods in the proposed protocol make it possible for participating bidders to place their bids without revealing their true valuation of the goods. The truthful auction design and the utility-aware bidding strategies incorporated in the proposed model enable the edge service providers and edge infrastructure providers to maximize their utilities. We implement a prototype of the model on a real blockchain testbed, and our extensive experiments demonstrate the effectiveness, scalability, and performance efficiency of the proposed approach.
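  Sealed-bid schemes of the kind described above typically rely on a commit-reveal pattern: each bidder first publishes a cryptographic commitment to its bid and only later opens it. The Python sketch below illustrates that general pattern under stated assumptions; the function names and hashing scheme are illustrative and are not taken from the paper's Ethereum contracts.
```python
# Minimal commit-reveal sketch for a sealed-bid auction (illustrative only;
# not the paper's smart-contract code).
import hashlib
import secrets

def commit_bid(bid_amount: int) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). Only the commitment would be published on-chain."""
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + bid_amount.to_bytes(32, "big")).digest()
    return commitment, nonce

def reveal_bid(commitment: bytes, bid_amount: int, nonce: bytes) -> bool:
    """During the reveal phase, check the opened bid against the earlier commitment."""
    return hashlib.sha256(nonce + bid_amount.to_bytes(32, "big")).digest() == commitment

# Usage: a bidder commits without revealing its valuation, then opens later.
c, n = commit_bid(42)
assert reveal_bid(c, 42, n)      # honest reveal verifies
assert not reveal_bid(c, 43, n)  # a changed bid is rejected
```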
  4. Cloud computing has motivated renewed interest in resource allocation problems with new consumption models. A common goal is to share a resource, such as CPU or I/O bandwidth, among distinct users with different demand patterns as well as different quality-of-service requirements. To ensure these service requirements, cloud offerings often come with a service level agreement (SLA) between the provider and the users. An SLA specifies the amount of a resource a user is entitled to utilize. In many cloud settings, providers would like to operate resources at high utilization while simultaneously respecting individual SLAs. There is typically a trade-off between these two objectives; for example, utilization can be increased by shifting resources away from idle users to “scavenger” workloads, but with the risk of the former becoming active again. We study this fundamental trade-off by formulating a resource allocation model that captures basic properties of cloud computing systems, including SLAs, highly limited feedback about the state of the system, and variable and unpredictable input sequences. Our main result is a simple and practical algorithm that achieves near-optimal performance on the above two objectives. First, we guarantee nearly optimal utilization of the resource even when compared with the omniscient offline dynamic optimum. Second, we simultaneously satisfy all individual SLAs up to a small error. The main algorithmic tools are a multiplicative weight update algorithm and a primal-dual argument used to obtain its guarantees. We also provide numerical validation on real data to demonstrate the performance of our algorithm in practical applications.
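  To make the algorithmic idea concrete, the sketch below shows a generic multiplicative-weights allocator: each user's weight is boosted when its allocation falls short of its SLA entitlement, and capacity is split in proportion to the weights. The update rule, parameters, and data are illustrative assumptions, not the paper's algorithm or its guarantees.
```python
# Generic multiplicative-weights allocation sketch (illustrative assumptions only).
import numpy as np

def mw_allocate(demands, slas, capacity=1.0, eta=0.1, steps=200):
    """Proportional-share allocation driven by multiplicative weight updates."""
    weights = np.ones(len(slas))
    for _ in range(steps):
        # Split capacity in proportion to current weights, capped by each user's demand.
        allocation = np.minimum(capacity * weights / weights.sum(), demands)
        # Boost the weight of any user still below its SLA entitlement.
        deficit = np.clip(slas - allocation, 0.0, None)
        weights *= np.exp(eta * deficit)
    return allocation

# Three users with different demands and SLA entitlements sharing one unit of capacity.
print(mw_allocate(np.array([0.5, 0.2, 0.9]), np.array([0.3, 0.1, 0.4])))
```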
  5. The low cost and rapid provisioning capabilities of the cloud have made it a desirable platform for launching complex scientific applications. However, resource utilization optimization remains a significant challenge for cloud service providers, since prior work has focused on optimizing resources for the applications that run on the cloud, with little emphasis on optimizing the resource utilization of the cloud's internal processes. Code refactoring has been associated with improving the maintenance and understanding of software code; however, the impact of refactoring the cloud's source code on cloud resource usage requires further analysis. In this paper, we propose a framework called Unified Regression Modeling (URegM) that predicts the impact of code-smell refactoring on cloud resource usage. We evaluate URegM in a real-life cloud environment using a complex scientific application as a workload. Results show that URegM can accurately predict resource consumption due to code-smell refactoring. This provides cloud service providers with advance knowledge of the impact of refactoring code smells on resource consumption, allowing them to plan resource provisioning and code refactoring more effectively.
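  As a rough illustration of the kind of prediction URegM performs, the sketch below fits a simple regression from hypothetical code-smell refactoring counts to observed resource consumption. The features, data, and model choice are placeholders for exposition; the paper's actual URegM formulation is not reproduced here.
```python
# Illustrative sketch only: a plain regression relating hypothetical code-smell
# refactoring counts to resource usage. It stands in for the URegM model, whose
# actual features and formulation are described in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: columns = counts of two refactored smell types
# (e.g., long methods, duplicated code); target = observed CPU-seconds.
X = np.array([[3, 1], [5, 2], [2, 0], [8, 4], [6, 3]])
y = np.array([120.0, 150.0, 100.0, 210.0, 175.0])

model = LinearRegression().fit(X, y)
# Forecast resource consumption for a planned refactoring effort.
predicted = model.predict(np.array([[4, 2]]))
print(f"Predicted resource consumption: {predicted[0]:.1f} CPU-seconds")
```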