-
A scalable storage system is an integral requirement for supporting large-scale cloud computing jobs. The raw space on storage systems is made usable by a software layer typically called a filesystem (e.g., Google's Cloud Filestore). In this paper, we present the design and implementation of a free, open-source, cloud-based filesystem named "Greyfish" that can be installed on Virtual Machines (VMs) hosted on different cloud computing systems, such as Jetstream and Chameleon. Greyfish helps in: (1) storing files and directories for different user accounts in a shared space on the cloud, (2) managing file-access permissions, and (3) purging files when needed. It is currently being used in the implementation of the Gateway-In-A-Box (GIB) project. A simplified version of Greyfish, known as Reef, is already in production in the BOINC@TACC project. Science gateway developers will find Greyfish useful for creating local filesystems that can be mounted in containers. This allows them to quickly install self-contained software solutions in development and test environments, while mounting the filesystems on large-scale storage platforms only in production.
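The paper's actual API is not reproduced in this abstract, so the short Python sketch below only illustrates the three operations the abstract lists ((1) per-user storage in a shared space, (2) permission management, and (3) purging). The class and method names (GreyfishClient, put_file, set_permission, purge_user) and the /greyfish/storage root path are hypothetical, not Greyfish's real interface.

    import os
    import shutil

    class GreyfishClient:
        # Hypothetical sketch of a Greyfish-style per-user store on a cloud VM.

        def __init__(self, root="/greyfish/storage"):
            self.root = root  # shared space backing all user accounts

        def put_file(self, user, src_path):
            # (1) Store a file under the user's account directory in the shared space.
            user_dir = os.path.join(self.root, user)
            os.makedirs(user_dir, exist_ok=True)
            shutil.copy(src_path, user_dir)

        def set_permission(self, user, filename, mode=0o600):
            # (2) Manage file-access permissions (owner-only by default).
            os.chmod(os.path.join(self.root, user, filename), mode)

        def purge_user(self, user):
            # (3) Purge a user's files when needed.
            shutil.rmtree(os.path.join(self.root, user), ignore_errors=True)

In the container scenario the abstract mentions, the same root directory could simply be bind-mounted into a container (e.g., docker run -v /greyfish/storage:/greyfish/storage ...), so a development install and a production install would differ only in what storage backs that path.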
-
This report is based on activities supported by the National Science Foundation under award number 2006409. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
-
Supercomputers are used to power discoveries and to reduce the time-to-results in a wide variety of disciplines such as engineering, physical sciences, and healthcare. They are globally considered vital for staying competitive in defense, the financial sector, several mainstream businesses, and even agriculture. As with any other computer, an integral requirement for using supercomputers is the availability of software. Scalable and efficient software is typically required for optimally using large-scale supercomputing platforms, and thereby effectively leveraging the investments in advanced CyberInfrastructure (CI). However, developing and maintaining such software is challenging due to several factors, such as (1) the lack of well-defined processes or guidelines for writing software that can ensure high performance on supercomputers, and (2) a shortage of workforce trained in both software engineering and supercomputing. With the rapid advancement of computer architecture, the complexity of the processors used in supercomputers is also increasing, which in turn makes developing efficient software for supercomputers even more challenging. To mitigate these challenges, there is a need for a common platform that brings together different stakeholders from the areas of supercomputing and software engineering. To provide such a platform, the second workshop on Software Challenges to Exascale Computing (SCEC) was organized in Delhi, India, during December 13–14, 2018. The SCEC 2018 workshop informed participants about the challenges in large-scale HPC software development and steered them toward building international collaborations for finding solutions to those challenges. The workshop provided a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of next-generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop served as a stepping-stone towards future innovations. We are very grateful to the Organizing and Program Committees (listed below), the sponsors (US National Science Foundation, Indian National Supercomputing Mission, Atos, Mellanox, Centre for Development of Advanced Computing, San Diego Supercomputing Center, Texas Advanced Computing Center), and the participants for their contributions to making the SCEC 2018 workshop a success.
-
The CSSI 2019 workshop was held on October 28-29, 2019, in Austin, Texas. The main objectives of this workshop were to (1) understand the impact of the CSSI program on the community over the last 9 years, (2) engage workshop participants in identifying gaps and opportunities in the current CSSI landscape, (3) gather ideas on the cyberinfrastructure needs and expectations of the community with respect to the CSSI program, and (4) prepare a report summarizing the feedback gathered from the community that can inform future solicitations of the CSSI program. The workshop participants included a diverse mix of researchers and practitioners from academia, industry, and national laboratories. The participants came from diverse domains such as quantum physics, computational biology, High Performance Computing (HPC), and library science. Almost 50% of the participants were from the computer science domain and roughly 50% were from non-computer-science domains. As per the self-reported statistics, roughly 27% of the participants were from underrepresented groups as defined by the National Science Foundation (NSF). The workshop brought together different stakeholders interested in provisioning sustainable cyberinfrastructure that can power discoveries impacting the various fields of science and technology, and in maintaining the nation's competitiveness in areas such as scientific software, HPC, networking, cybersecurity, and data/information science. The workshop served as a venue for gathering community feedback on the current state of the CSSI program and its future directions. Before arriving at the workshop, the participants were encouraged to take an online survey on the challenges they face in using the current cyberinfrastructure and on the importance of the CSSI program in enabling cutting-edge research. The workshop included 16 brainstorming sessions of one hour each. Additionally, the workshop program included 16 lightning talks and an extempore session. The information collected from the survey, brainstorming sessions, lightning talks, and the extempore session is summarized in this report and can potentially be useful to the NSF in formulating future CSSI solicitations. The workshop fostered an environment in which the participants were encouraged to identify gaps and opportunities in the current cyberinfrastructure landscape, and to develop ideas for proposing new projects.
-
The process of adapting code to take advantage of the latest innovations in a supercomputing platform begins with learning the details of the platform's underlying hardware. It can be challenging for many users to invest the time and effort needed to understand the innovative features of a supercomputing platform, such as deep memory hierarchies, and to harness their maximum possible performance by manually modernizing their applications. To mitigate this challenge, we are developing an Interactive Code Adaptation Tool (ICAT). In its current form, ICAT can assist users in modifying, compiling, and optimally running their applications on the latest HPC platforms equipped with Intel Knights Landing (KNL) processors. ICAT detects a given application's characteristics, such as its memory-usage pattern, type of memory allocation, and execution time. Depending upon the application's characteristics, it advises the user on optimal ways to take advantage of the KNL processor and its memory hierarchy.
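The abstract does not describe ICAT's internals, so the following is only a minimal Python sketch of the kind of measure-then-advise workflow it outlines: profile a command's execution time and peak memory footprint, then compare that footprint against the 16 GB of on-package MCDRAM on a KNL node to suggest a memory-mode strategy. The profile/advise function names and the fit-in-MCDRAM heuristic are assumptions for illustration, not ICAT's actual logic.

    import resource
    import subprocess
    import sys
    import time

    MCDRAM_GB = 16  # capacity of the on-package high-bandwidth memory on a KNL node

    def profile(cmd):
        # Run the application once; record wall time and peak resident set size.
        start = time.time()
        subprocess.run(cmd, check=True)
        elapsed = time.time() - start
        # On Linux, ru_maxrss is reported in kilobytes.
        peak_gb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / (1024 ** 2)
        return elapsed, peak_gb

    def advise(peak_gb):
        # Hypothetical heuristic: does the working set fit entirely in MCDRAM?
        if peak_gb < MCDRAM_GB:
            return ("Working set fits in MCDRAM: in flat mode, bind the job to the "
                    "MCDRAM NUMA node (e.g., numactl --membind=1 <app>).")
        return ("Working set exceeds MCDRAM: cache mode, or placing only the "
                "bandwidth-critical allocations in MCDRAM, is likely a better fit.")

    if __name__ == "__main__":
        elapsed, peak_gb = profile(sys.argv[1:])
        print(f"wall time: {elapsed:.1f} s, peak memory: {peak_gb:.2f} GB")
        print(advise(peak_gb))

For example, invoking the script as python icat_sketch.py ./my_app input.dat (a hypothetical application) would run it once and print a suggestion based on its observed footprint.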