
Title: Cloud and on-premises data center usage, expenditures, and approaches to return on investment: A survey of academic research computing organizations
The landscape of research in science and engineering is heavily reliant on computation and data processing. There is continued and expanded usage by disciplines that have historically used advanced computing resources, new usage by disciplines that have not traditionally used HPC, and new modalities of usage in Data Science, Machine Learning, and other areas of AI. Along with these new patterns have come new methods and approaches for providing advanced computing resources, including the availability of commercial cloud resources. The Coalition for Academic Scientific Computation (CASC) has long been an advocate representing the needs of academic researchers using computational resources, sharing best practices and offering advice to create a national cyberinfrastructure to meet US science, engineering, and other academic computing needs. CASC has completed the first of what we intend to be an annual survey of academic cloud and data center usage and practices in analyzing return on investment in cyberinfrastructure. Critically important findings from this first survey include the following: many of the respondents are engaged in some form of analysis of return on research computing investments, but only a minority currently report the results of such analyses to their upper-level administration. Most respondents are experimenting with use of commercial cloud resources, but no respondent indicated that they have found use of commercial cloud services to create financial benefits compared to their current methods. There is a clear correlation between levels of investment in research cyberinfrastructure and the scale of both CPU core-hours delivered and the financial level of supported research grants.
Also interesting is that almost every respondent indicated that they participate in some sort of national cooperative or nationally provided research computing infrastructure project and most were involved in academic computing-related organizations, indicating a high degree of engagement by institutions of higher education in building and maintaining national research computing ecosystems. Institutions continue to evaluate cloud-based HPC service models, despite having generally concluded that so far cloud HPC is too expensive to use compared to their current methods.
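The return-on-investment analyses the survey describes often reduce to an effective cost per delivered core-hour, which can then be compared against a commercial cloud list price. The sketch below illustrates that comparison; all figures are hypothetical placeholders, not survey data, and a real analysis would also weigh staffing, power, data egress, and utilization assumptions.

```python
# Minimal sketch of a cost-per-core-hour ROI comparison.
# All numbers are hypothetical illustrations, not survey results.

HOURS_PER_YEAR = 8760  # 24 * 365

def cost_per_core_hour(total_annual_cost, cores, utilization):
    """Effective cost per delivered core-hour for an on-premises cluster.

    utilization is the fraction of theoretical core-hours actually
    delivered to researchers (scheduling gaps, maintenance, etc.).
    """
    delivered_core_hours = cores * HOURS_PER_YEAR * utilization
    return total_annual_cost / delivered_core_hours

# Hypothetical on-prem cluster: $2M/year all-in, 10,000 cores, 80% utilized
onprem_rate = cost_per_core_hour(2_000_000, 10_000, 0.80)

# Hypothetical cloud on-demand rate per vCPU-hour (placeholder value)
cloud_rate = 0.05

print(f"on-prem: ${onprem_rate:.4f}/core-hour, cloud: ${cloud_rate:.4f}/core-hour")
```

Even this toy comparison shows why highly utilized on-premises clusters tend to undercut on-demand cloud pricing: fixed costs amortize over a very large number of delivered core-hours, which is consistent with respondents' conclusion that cloud HPC is currently more expensive for their workloads.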
Authors:
Award ID(s):
1362134 1939140
Publication Date:
NSF-PAR ID:
10216559
Journal Name:
PEARC '20: Practice and Experience in Advanced Research Computing
Sponsoring Org:
National Science Foundation
More Like this
  1. Reed, Daniel A. ; Lifka, David ; Swanson, David ; Amaro, Rommie ; Wilkins-Diehr, Nancy (Ed.)
    This report summarizes the discussions from a workshop convened at NSF on May 30-31, 2018 in Alexandria, VA. The overarching objective of the workshop was to rethink the nature and composition of the NSF-supported computational ecosystem given changing application requirements and resources and technology landscapes. The workshop included roughly 50 participants, drawn from high-performance computing (HPC) centers, campus computing facilities, cloud service providers (academic and commercial), and distributed resource providers. Participants spanned both large research institutions and smaller universities. Organized by Daniel Reed (University of Utah, chair), David Lifka (Cornell University), David Swanson (University of Nebraska), Rommie Amaro (UCSD), and Nancy Wilkins-Diehr (UCSD/SDSC), the workshop was motivated by the following observations. First, there have been dramatic changes in the number and nature of applications using NSF-funded resources, as well as their resource needs. As a result, there are new demands on the type (e.g., data centric) and location (e.g., close to the data or the users) of the resources as well as new usage modes (e.g., on-demand and elastic). Second, there have been dramatic changes in the landscape of technologies, resources, and delivery mechanisms, spanning large scientific instruments, ubiquitous sensors, and cloud services, among others.
  2. Supercomputers are used to power discoveries and to reduce the time-to-results in a wide variety of disciplines such as engineering, physical sciences, and healthcare. They are considered vital worldwide for staying competitive in defense, the financial sector, several mainstream businesses, and even agriculture. As with any other computer, an integral requirement for using supercomputers is the availability of software. Scalable and efficient software is typically required for optimally using large-scale supercomputing platforms, and thereby effectively leveraging the investments in the advanced CyberInfrastructure (CI). However, developing and maintaining such software is challenging for several reasons: (1) there are no well-defined processes or guidelines for writing software that can ensure high performance on supercomputers, and (2) there is a shortfall of trained workforce with skills in both software engineering and supercomputing. As computer architecture advances rapidly, the complexity of the processors used in supercomputers is also increasing, which in turn makes developing efficient software for supercomputers even more challenging. To mitigate these challenges, there is a need for a common platform that brings together different stakeholders from the areas of supercomputing and software engineering. To provide such a platform, the second workshop on Software Challenges to Exascale Computing (SCEC) was organized in Delhi, India, during December 13–14, 2018. The SCEC 2018 workshop informed participants about the challenges in large-scale HPC software development and steered them in the direction of building international collaborations for finding solutions to those challenges.
The workshop provided a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of the next-generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop served as a stepping-stone toward future innovations. We are very grateful to the Organizing and Program Committees (listed below), the sponsors (US National Science Foundation, Indian National Supercomputing Mission, Atos, Mellanox, Centre for Development of Advanced Computing, San Diego Supercomputer Center, Texas Advanced Computing Center), and the participants for their contributions to making the SCEC 2018 workshop a success.
  3. In 2017, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC) established a pilot internship program for cyberinfrastructure (CI) professionals. The program, funded by NSF's Office of Advanced Cyberinfrastructure (OAC) (award 1730519), was designed to address the shortage of a workforce with the specialized skills needed to support advanced CI operations. The program was envisioned to provide internship opportunities for individuals who want to gain first-hand experience in CI operations at a supercomputing center, and to develop and refine instructional materials that serve as a template, openly distributed for use by other centers and institutions to train CI professionals. Program interns are selected from a pool of applicants, with the main selection criteria being completed classwork equivalent to an associate degree and a demonstrated interest in a career in CI operations. Interns work directly with a group of NCSA engineers in one of the areas of CI focus to gain hands-on experience in the deployment and operation of high-performance computing (HPC) infrastructure at a leading HPC center. The expectation is that interns will enter a workforce that will develop, deploy, manage, and support advanced CI at other universities, centers, and industry to meet the needs of the national computational science research community across academia and industry.
  4. Developments in large-scale computing environments have led to the design of workflows that rely on containers and analytics platforms that are well supported by the commercial cloud. The National Science Foundation also envisions a future in science and engineering that includes commercial cloud service providers (CSPs) such as Amazon Web Services, Azure, and Google Cloud. These twin forces have made researchers consider the commercial cloud as an alternative to current high-performance computing (HPC) environments. Training and knowledge on how to migrate workflows, cost control, data management, and system administration remain some of the most commonly listed concerns with adoption of cloud computing. In an effort to ameliorate this situation, CSPs have developed online and in-person training platforms to help address this problem. Scalability, ability to impart knowledge, evaluating knowledge gain, and accreditation are the core concepts that have driven this approach. Here, we present a review of our experience using Google's Qwiklabs online platform for remote and in-person training from the perspective of an HPC user. For this study, we completed over 50 online courses, earned five badges, and attended a one-day session. We identify the strengths of the approach, identify avenues to refine them, and consider means to further community engagement. We further evaluate the readiness of these resources for a cloud-curious researcher who is familiar with HPC. Finally, we present recommendations on how the large-scale computing community can leverage these opportunities to work with CSPs to assist researchers nationally and at their home institutions.
  5. Recent scientific computing increasingly relies on multi-scale multi-physics simulations to enhance predictive capabilities, replacing suites of stand-alone simulation codes that independently simulate various physical phenomena. Inevitably, multi-physics simulation demands high-performance computing (HPC) through advanced hardware and software acceleration due to its intensive computing workload and run-time communication needs, and research on it has become a hotspot across disciplines. However, most benchmarks used to evaluate such work are commercial or in-house codes, and the lack of accessible open-source multi-physics benchmark suites has made it difficult to uniformly evaluate simulation performance across related disciplines. This work proposes the first open-source benchmark suite for research in multi-physics simulation, the Clarkson Open-Source Multi-physics Benchmark Suite (COMBS), with 12 selected benchmarks. Multiple metrics have been gathered for these benchmarks, such as instructions per second and memory usage. Also provided are build and benchmark scripts to improve usability. Additionally, the source codes and installation guides are available for download through a GitHub repository built by the authors. The selected benchmarks are drawn from key applications of multi-physics simulation and highly cited publications. It is believed that this benchmark suite will help harness the full potential of HPC research in the field of multi-physics simulation.