%0 Journal Article
%A Acosta-Silva, C.
%A Delgado Peris, A.
%A Flix, J.
%A Frey, J.
%A Hernández, J.M.
%A Yzquierdo, A.
%A Tannenbaum, T.
%E Biscarat, C.
%E Campana, S.
%E Hegner, B.
%E Roiser, S.
%E Rovelli, C.I.
%E Stewart, G.A.
%T Exploitation of network-segregated CPU resources in CMS
%J EPJ Web of Conferences
%V 251
%D 2021
%M OSTI ID: 10296562
%X CMS is tackling the exploitation of CPU resources at HPC centers where compute nodes do not have network connectivity to the Internet. Pilot agents and payload jobs need to interact with external services from the compute nodes: access to the application software (CernVM-FS) and conditions data (Frontier), management of input and output data files (data management services), and job management (HTCondor). Finding an alternative route to these services is challenging. Seamless integration into the CMS production system without causing any operational overhead is a key goal. The case of the Barcelona Supercomputing Center (BSC), in Spain, is particularly challenging, due to its especially restrictive network setup. We describe in this paper the solutions developed within CMS to overcome these restrictions and integrate this resource in production. Singularity containers with application software releases are built and pre-placed in the HPC facility shared file system, together with conditions data files. HTCondor has been extended to relay communications between running pilot jobs and HTCondor daemons through the HPC shared file system. This operation mode also allows piping input and output data files through the HPC file system. Results, issues encountered during the integration process, and remaining concerns are discussed.