
Title: SciTokens SSH: Token-based Authentication for Remote Login to Scientific Computing Environments
SciTokens SSH is a pluggable authentication module (PAM) that uses JSON Web Tokens (JWTs) for authentication to the Secure Shell (SSH) remote login service. SciTokens SSH supports multiple token issuers with local token verification, so scientific computing providers are not forced to rely on a single OAuth server for token issuance and verification. The decentralized design for SciTokens SSH was motivated by the distributed nature of scientific computing environments, where scientists use computational resources from multiple providers, with a variety of security policies, distributed across the globe.
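As an illustration of the local-verification idea described above (a minimal sketch, not the paper's PAM module; the issuer URLs, JWKS endpoints, and claim handling below are assumptions for the example), a login host can keep an allowlist of trusted issuers and verify each presented token entirely on the local machine using the issuer's published public keys:

```python
# Illustrative sketch only -- not the SciTokens SSH source code. It mimics the
# core idea: accept tokens from several trusted issuers and verify them locally,
# without calling back to a single central OAuth server on every login.
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical issuer allowlist: issuer URL -> JWKS key endpoint (assumed values).
TRUSTED_ISSUERS = {
    "https://tokens.example-lab.org": "https://tokens.example-lab.org/jwks",
    "https://token.example-campus.edu": "https://token.example-campus.edu/jwks",
}

def verify_login_token(raw_token: str, expected_audience: str) -> dict:
    """Return the verified claims, or raise if the token is unacceptable."""
    # Read the issuer claim without trusting it yet, only to pick a key source.
    unverified = jwt.decode(raw_token, options={"verify_signature": False})
    issuer = unverified.get("iss")
    if issuer not in TRUSTED_ISSUERS:
        raise PermissionError(f"untrusted issuer: {issuer}")

    # Fetch (and cache) the issuer's public key, then check the signature,
    # expiry, audience, and issuer entirely on the local host.
    signing_key = PyJWKClient(TRUSTED_ISSUERS[issuer]).get_signing_key_from_jwt(raw_token)
    claims = jwt.decode(
        raw_token,
        signing_key.key,
        algorithms=["RS256", "ES256"],
        audience=expected_audience,
        issuer=issuer,
    )
    return claims  # a PAM module would next map claims["sub"] to a local account
```

In a PAM deployment, the verified subject (and any issuer-specific claims) would then be mapped to a local account; the point of the per-host allowlist is that each computing provider can trust several issuers under its own policy rather than depending on one central OAuth server.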
Award ID(s):
1738962
NSF-PAR ID:
10158905
Journal Name:
Practice and Experience in Advanced Research Computing
Sponsoring Org:
National Science Foundation
More Like this
  1. The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems. (A sketch of such a capability-scoped token payload appears after this list.)
  2. The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. SciTokens introduces a capabilities-based authorization infrastructure for distributed scientific computing, to help scientists manage their security credentials more reliably and securely. SciTokens uses IETF-standard OAuth JSON Web Tokens for capability-based secure access to remote scientific data. These access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems. In this extended abstract, we present the results over the past year of our open source implementation of the SciTokens model and its deployment in the Open Science Grid, including new OAuth support added in the HTCondor 8.8 release series.
  3. Hypervisors are widely deployed by cloud computing providers to support virtual machines, but their growing complexity poses a security risk, as large codebases contain many vulnerabilities. We present SeKVM, a layered Linux KVM hypervisor architecture that has been formally verified on multiprocessor hardware. Using layers, we isolate KVM's trusted computing base into a small core such that only the core needs to be verified to ensure KVM's security guarantees. Using layers, we model hardware features at different levels of abstraction tailored to each layer of software. Lower hypervisor layers that configure and control hardware are verified using a novel machine model that includes multiprocessor memory management hardware such as multi-level shared page tables, tagged TLBs, and a coherent cache hierarchy with cache bypass support. Higher hypervisor layers that build on the lower layers are then verified using a more abstract and simplified model, taking advantage of layer encapsulation to reduce proof burden. Furthermore, layers provide modularity to reduce verification effort across multiple implementation versions. We have retrofitted and verified multiple versions of KVM on Arm multiprocessor hardware, proving the correctness of the implementations and that they contain no vulnerabilities that can affect KVM's security guarantees. Our work is the first machine-checked proof for a commodity hypervisor using multiprocessor memory management hardware. SeKVM requires only modest KVM modifications and incurs only modest performance overhead versus unmodified KVM on real application workloads.
  4. USB-based attacks have increased in complexity in recent years. Modern attacks now incorporate a wide range of attack vectors, from social engineering to signal injection. To address these challenges, the security community has responded with a growing set of fragmented defenses. In this work, we survey and categorize USB attacks and defenses, unifying observations from both peer-reviewed research and industry. Our systematization extracts offensive and defensive primitives that operate across layers of communication within the USB ecosystem. Based on our taxonomy, we discover that USB attacks often abuse the trust-by-default nature of the ecosystem and transcend different layers within a software stack; none of the existing defenses provides a complete solution, and solutions spanning multiple layers are most effective. We then develop the first formal verification of the recently released USB Type-C Authentication specification, and uncover fundamental flaws in the specification's design. Based on the findings from our systematization, we observe that while the spec has successfully pinpointed an urgent need to solve the USB security problem, its flaws render these goals unattainable. We conclude by outlining future research directions to ensure a safer computing experience with USB.
  5. Authentication and integrity are fundamental security services that are critical for any viable system. However, some of the emerging systems (e.g., smart grids, aerial drones) are delay-sensitive, and therefore their safe and reliable operation requires delay-aware authentication mechanisms. Unfortunately, the current state-of-the-art authentication mechanisms either incur heavy computations or lack scalability for such large and distributed systems. Hence, there is a crucial need for digital signature schemes that can satisfy the requirements of delay-aware applications. In this paper, we propose a new digital signature scheme that we refer to as Compact Energy and Delay-aware Authentication (CEDA). In CEDA, signature generation and verification only require a small-constant number of multiplications and Pseudo Random Function (PRF) calls. Therefore, it achieves the lowest end-to-end delay among its counterparts. Our implementation results on an ARM processor and commodity hardware show that CEDA has the most efficient signature generation on both platforms, while offering a fast signature verification. Among its delay-aware counterparts, CEDA has a smaller private key with a constant-size signature. All these advantages are achieved with the cost of a larger public key. This is a highly favorable trade-off for applications wherein the verifier is not memory-limited. We open-sourced our implementation of CEDA to enable its broad testing and adaptation.
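To make the capability model in items 1 and 2 concrete, the following is an invented sketch (loosely in the spirit of the SciTokens profile, not copied from the papers) of the claims such an access token might carry; the scope claim limits the bearer to specific operations on specific paths instead of impersonating the full user. All values are illustrative assumptions.

```python
# Invented example claims for a SciTokens-style capability token (values are
# illustrative, not taken from the papers above). The "scope" claim restricts
# the bearer to narrow read/write paths rather than acting as the full user.
example_claims = {
    "iss": "https://tokens.example-lab.org",    # assumed issuer URL
    "sub": "workflow-1234",                     # pipeline identity, not a person
    "aud": "https://storage.example-lab.org",   # the one service meant to accept it
    "exp": 1735689600,                          # short lifetime limits credential abuse
    "scope": "read:/ligo/frames write:/store/user/results",
}
```

A storage service that accepts such a token can enforce the listed paths directly, so a compromised worker node can leak at most these narrow capabilities for the token's remaining lifetime.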