The landscape of research in science and engineering relies heavily on computation and data processing. There is continued and expanded usage by disciplines that have historically used advanced computing resources, new usage by disciplines that have not traditionally used HPC, and new modalities of usage in Data Science, Machine Learning, and other areas of AI. Along with these new patterns have come new advanced computing methods and approaches, including the availability of commercial cloud resources. The Coalition for Academic Scientific Computation (CASC) has long been an advocate representing the needs of academic researchers using computational resources, sharing best practices and offering advice to create a national cyberinfrastructure that meets US science, engineering, and other academic computing needs. CASC has completed the first of what we intend to be an annual survey of academic cloud and data center usage and practices in analyzing return on investment in cyberinfrastructure. Critically important findings from this first survey include the following: many respondents are engaged in some form of analysis of return on research computing investments, but only a minority currently report the results of such analyses to their upper-level administration. Most respondents are experimenting with commercial cloud resources, but no respondent indicated that commercial cloud services have created financial benefits compared to their current methods. There is a clear correlation between the level of investment in research cyberinfrastructure and the scale of both CPU core-hours delivered and the financial level of supported research grants.
Also notable is that almost every respondent participates in some form of national cooperative or nationally provided research computing infrastructure project, and most are involved in academic computing-related organizations, indicating a high degree of engagement by institutions of higher education in building and maintaining national research computing ecosystems. Institutions continue to evaluate cloud-based HPC service models, despite having generally concluded that, so far, cloud HPC is too expensive compared to their current methods.
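The survey's finding that investment levels correlate with delivered CPU core-hours suggests a natural unit for the ROI analyses it describes: cost per delivered core-hour. The sketch below is purely illustrative; the dollar figures, core count, utilization rate, and cloud price are invented assumptions, not survey data.

```python
# Illustrative (hypothetical) cost-per-core-hour comparison between an
# on-premise cluster and a commercial cloud, of the kind an ROI analysis
# might produce. Every number below is an assumption for demonstration.

def cost_per_core_hour(total_annual_cost, cores, utilization):
    """Annual cost divided by the core-hours actually delivered."""
    delivered = cores * 8760 * utilization  # 8760 hours in a year
    return total_annual_cost / delivered

# Hypothetical on-premise cluster: $2M/year all-in, 10,000 cores, 85% utilized.
on_prem = cost_per_core_hour(2_000_000, 10_000, 0.85)

# Hypothetical on-demand cloud rate for a comparable core.
cloud = 0.05  # $/core-hour, assumed

print(f"on-prem: ${on_prem:.4f}/core-hour, cloud: ${cloud:.4f}/core-hour")
```

Under these assumed numbers the on-premise figure comes out well below the cloud rate, consistent with the respondents' conclusion; the comparison flips if utilization is low or the annual cost rises, which is exactly why reporting such analyses to administration matters.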
                            Jetstream2: Research Clouds as a Convergence Accelerator
                        
                    
    
Over the past decade, the convergence of Cloud and High-Performance Computing (HPC) has advanced significantly. We explore the evolution, motivations, and practicalities of establishing on-premise research cloud infrastructure and its complementary relationship with HPC and commercial resources, in the belief that research clouds serve a unique role within research and education as a convergence accelerator. This role is highlighted by exploring the design tradeoffs in architecting research clouds versus HPC resources, focusing on the balance between utility, availability, and hardware utilization. The discussion provides insights from experiences with the National Science Foundation-supported Jetstream and Jetstream2 systems, showcasing convergence technologies and challenges. A variety of real-world use cases show the interplay between these computing paradigms, exploring their use in research and education for interactive and iterative development, as an on-ramp to large-scale resources, as a powerful tool for education and workforce development, and for domain-specific science gateways.
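The availability-versus-utilization tradeoff described above can be made concrete with a toy model. All numbers here are hypothetical assumptions for illustration: an on-demand research cloud must hold idle headroom so new instances start immediately, which caps its average utilization, while a batch-scheduled HPC system can queue jobs and run close to full.

```python
# Toy model (all figures hypothetical) of the availability/utilization
# tradeoff: idle headroom kept for on-demand availability raises the
# effective cost of each core-hour that is actually delivered.

def effective_cost(cost_per_hour, headroom_fraction):
    """Cost per *delivered* core-hour when a headroom fraction sits idle."""
    utilization = 1.0 - headroom_fraction
    return cost_per_hour / utilization

raw_cost = 0.02  # assumed $/core-hour of owning the hardware, either model

hpc_cost = effective_cost(raw_cost, 0.05)    # batch system, ~95% utilized
cloud_cost = effective_cost(raw_cost, 0.30)  # on-demand cloud, 30% headroom

print(f"HPC: ${hpc_cost:.4f}, research cloud: ${cloud_cost:.4f} per core-hour")
```

The design question for a research cloud is how much of this utilization penalty is worth paying for interactivity and instant availability — the "utility" side of the balance the abstract names.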
- Award ID(s): 2005506
- PAR ID: 10546263
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: Computing in Science & Engineering
- ISSN: 1521-9615
- Page Range / eLocation ID: 1 to 11
- Subject(s) / Keyword(s): Cloud computing; Convergence; Education; Software; Containers; Metals; Logic gates
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Neuroscientists are increasingly relying on high-performance/high-throughput computing resources for experimentation on voluminous data, analysis, and visualization at multiple neural levels. Though current science gateways provide access to computing resources, datasets, and tools specific to the disciplines, neuroscientists require guided knowledge discovery at various levels to accomplish their research and education tasks. This guidance can help navigate them through relevant publications, tools, topic associations, and cloud platform options as they accomplish important research and education activities. To address this need and to spur research productivity and rapid learning-platform development, we present "OnTimeRecommend", a novel recommender system that comprises several integrated recommender modules exposed through RESTful web services. We detail a neuroscience use case in the CyNeuro science gateway and show how the OnTimeRecommend design can enable novice/expert user interfaces, as well as template-driven control of heterogeneous cloud resources.
- Welcome to the 4th Workshop on Education for High Performance Computing (EduHiPC 2022). The EduHiPC 2022 workshop, held in conjunction with the IEEE International Conference on High Performance Computing, Data & Analytics (HiPC 2022), is devoted to the development and assessment of educational and curricular innovations and resources for undergraduate and graduate education in Parallel and Distributed Computing (PDC) and High Performance Computing (HPC). EduHiPC brings together individuals from academia, industry, and other educational and research institutes to explore new ideas, challenges, and experiences related to PDC pedagogy and curricula. The workshop is designed in coordination with the IEEE TCPP curriculum initiative on parallel and distributed computing (https://tcpp.cs.gsu.edu/curriculum/) for undergraduates majoring in computer science and computer engineering. It is supported by C-DAC, India, and the US National Science Foundation (NSF)-supported Center for Parallel and Distributed Computing Curriculum Development and Educational Resources (CDER). Details for attending the workshop are available on the HiPC webpage. The effect of the pandemic on the academic and research community now seems to be receding globally, as was evident from the enthusiastic in-person participation of conference delegates. Please visit the EduHiPC-22 webpage for the complete online proceedings, including copies of papers and presentation slides.
- High Performance Computing (HPC) is the ability to process data and perform complex calculations at extremely high speeds. Current HPC platforms can perform on the order of quadrillions of calculations per second, with quintillions on the horizon. The past three decades witnessed a vast increase in the use of HPC across different scientific, engineering, and business communities, for example, sequencing the genome, predicting climate change, designing modern aerodynamics, or establishing customer preferences. Although HPC has been well incorporated into science curricula such as bioinformatics, the same cannot be said for most computing programs. This working group will explore how HPC can make inroads into computer science education, from the undergraduate to postgraduate levels. The group will address research questions designed to investigate topics such as identifying and handling barriers that inhibit the adoption of HPC in educational environments, how to incorporate HPC into various curricula, and how HPC can be leveraged to enhance applied critical thinking and problem-solving skills. Four deliverables include: (1) a catalog of core HPC educational concepts, (2) HPC curricula for contemporary computing needs, such as in artificial intelligence, cyberanalytics, data science and engineering, or the internet of things, (3) possible infrastructures for implementing HPC coursework, and (4) HPC-related feedback to the CC2020 project.
- High-Performance Computing (HPC) is increasingly being used in traditional scientific domains as well as emerging areas like Deep Learning (DL). This has led to a diverse set of professionals who interact with state-of-the-art HPC systems. The deployment of science gateways for HPC systems, like Open OnDemand, has a significant positive impact on these users in migrating their workflows to HPC systems. Although computing capabilities are ubiquitously available (as on-premises or cloud HPC infrastructure), significant effort and expertise are required to use them effectively. This is particularly challenging for domain scientists and other users whose primary expertise lies outside of computer science. In this paper, we seek to minimize the steep learning curve and associated complexities of using state-of-the-art high-performance systems by creating SAI: an AI-Enabled Speech Assistant Interface for Science Gateways in High Performance Computing. We use state-of-the-art AI models for speech and text and fine-tune them for the HPC arena by retraining them on a new HPC dataset we create. We use ontologies and knowledge graphs to capture the complex relationships between various components of the HPC ecosystem. We finally show how one can integrate and deploy SAI in Open OnDemand and evaluate its functionality and performance on real HPC systems. To the best of our knowledge, this is the first effort aimed at designing and developing an AI-powered speech-assisted interface for science gateways in HPC.
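One of the abstracts above quantifies HPC speed as quadrillions of calculations per second, with quintillions on the horizon; in standard terms these are petascale (10^15) and exascale (10^18) operations per second. A quick sanity check on the gap between the two scales:

```python
# The scales named in the text, in operations per second:
# a quadrillion (10^15) ops/s is "petascale"; a quintillion (10^18)
# ops/s is "exascale".

PETA = 10**15  # quadrillion
EXA = 10**18   # quintillion

# An exascale machine does in one second what a petascale machine
# needs 1000 seconds (~17 minutes) to do.
speedup = EXA // PETA
print(speedup)  # 1000
```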