Title: OpenSSF Scorecard: On the Path Toward Ecosystem-Wide Automated Security Metrics
The OpenSSF Scorecard project is an automated tool to monitor the security health of open-source software. This study evaluates the applicability of the Scorecard tool and compares the security practices and gaps in the npm and PyPI ecosystems.
Award ID(s):
2207008
PAR ID:
10516079
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Security & Privacy
Volume:
21
Issue:
6
ISSN:
1540-7993
Page Range / eLocation ID:
76 to 88
Subject(s) / Keyword(s):
software supply chain OpenSSF Scorecard metrics
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Due to the ever-increasing number of security breaches, practitioners are motivated to produce more secure software. In the United States, the White House released a memorandum on Executive Order (EO) 14028 that mandates organizations provide self-attestation of their use of secure software development practices. The OpenSSF Scorecard project allows practitioners to measure the use of software security practices automatically. However, little research has been done to determine whether the use of security practices improves package security, and in particular which practices have the greatest impact on security outcomes. The goal of this study is to help practitioners and researchers make informed decisions about which security practices to adopt by developing models that relate software security practice scores to security vulnerability counts. To that end, we developed five supervised machine learning models for npm and PyPI packages, using the OpenSSF Scorecard security practice scores and aggregate security scores as predictors and the number of externally reported vulnerabilities as the target variable. Our models found that four security practices (Maintained, Code Review, Branch Protection, and Security Policy) were the most important practices influencing vulnerability count. However, the models had low R2 (ranging from 9% to 12%) when we tested them to predict vulnerability counts. Additionally, we observed that the number of reported vulnerabilities increased rather than decreased as the aggregate security score of a package increased. Both findings indicate that additional factors may influence package vulnerability counts. Other factors, such as the scarcity of vulnerability data, the lag between implementing security practices and detecting vulnerabilities, and the need for more accurate scripts to detect security practices, may prevent data-driven studies from showing that a practice helps reduce externally reported vulnerabilities. We suggest that vulnerability count and security score data be refined so that these measures can be used to provide actionable guidance on security practices.
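The modeling setup the abstract describes (practice scores as predictors, externally reported vulnerability counts as the target, fit quality judged by R2) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset or pipeline; the four-feature design only mirrors the practices the abstract names (Maintained, Code Review, Branch Protection, Security Policy), and ordinary least squares stands in for whichever supervised models the authors used.

```python
import numpy as np

# Synthetic stand-in data: one row per package, columns are hypothetical
# Scorecard check scores (0-10) for the four practices named in the study.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 4))
# Hypothetical target: externally reported vulnerability counts per package.
y = rng.poisson(lam=2, size=200).astype(float)

# Fit ordinary least squares with an intercept: y ~ beta0 + X @ beta[1:]
X_aug = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Coefficient of determination R^2, the statistic the study reports (9-12%).
y_hat = X_aug @ beta
ss_res = ((y - y_hat) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

On data like this, where the features carry no real signal about the target, R2 stays close to zero, which is the same symptom the study reports: low explained variance suggesting that factors beyond practice scores drive vulnerability counts.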
  2. This report provides a complete overview of the Partner Hire Scorecard project. Its goal is to provide clarity about the dual-career approaches of R1 universities in the United States. We assessed publicly available documents pertaining to dual-career issues at these universities and generated a “scorecard” that ranks institutions by their partner-friendly status. Moreover, we archived the relevant documents so that jobseekers, researchers, and other interested parties can access them without needing to conduct their own web searches. Finally, we coded and analyzed these documents to discover patterns in dual-career offerings by institution type, geographic location, and other variables. The “findings” section of this report reviews those overall results.
  3. The College Internship Study wrapped up its third and final wave of data collection in the spring of 2022. This report provides a summary of key findings from the longitudinal analyses across the eight institutions that participated in the third and final wave of data collection. As an excerpt of the extensive dataset, this summary addresses the most pressing issues in college internship research and practice, as suggested in the Internship Scorecard (Hora et al., 2020). Developed for assessing the purpose, quality, and equity of internship programs, the Internship Scorecard provides a framework for this report to address three main issues of college internships: (a) access and barriers to internships, (b) internship program features and quality, and (c) effects of internships on post-graduate outcomes. Each of these issues is examined in this report, with special consideration for how the COVID-19 pandemic affected student experiences in college, life, and work.
  4. Sserwanga, I (Ed.)
    Data management plans (DMPs) are required from researchers seeking funding from federal agencies in the United States. Ideally, DMPs disclose how research outputs will be managed and shared. How well DMPs communicate those plans is less understood. Evaluation tools such as the DART rubric and the Belmont scorecard assess the completeness of DMPs and offer one view into what DMPs communicate. This paper compares the evaluation criteria of the two tools by applying them to the same corpus of 150 DMPs from five different NSF programs. Findings suggest that the DART rubric and the Belmont scorecard overlap significantly, but the Belmont scorecard provides a better method to assess completeness. We find that most DMPs fail to address many of the best practices articulated by librarians and information professionals in the different evaluation tools. However, the evaluation methodology of both tools relies on a rating scale that does not account for the interaction of key areas of data management. This work contributes to the improvement of evaluation tools for data management planning.
  5. We present a methodology for identifying security critical properties for use in the dynamic verification of a processor. Such verification has been shown to be an effective way to prevent exploits of vulnerabilities in the processor, given a meaningful set of security properties. We use known processor errata to establish an initial set of security-critical invariants of the processor. We then use machine learning to infer an additional set of invariants that are not tied to any particular, known vulnerability, yet are critical to security. We build a tool chain implementing the approach and evaluate it for the open-source OR1200 RISC processor. We find that our tool can identify 19 (86.4%) of the 22 manually crafted security-critical properties from prior work and generates 3 new security properties not covered in prior work. 