Recent record rainfall and flood events have prompted increased attention to flood impacts on human systems. Information regarding flood effects on food security is of particular importance for humanitarian organizations and is especially valuable across Africa's rural areas that contribute to regional food supplies. We quantitatively evaluate where and to what extent flooding impacts food security across Africa, using a Granger causality analysis and panel modeling approaches. Within our modeled areas, we find that ∼12% of the people who experienced food insecurity from 2009 to 2020 had their food security status affected by flooding. Furthermore, flooding and its associated meteorological conditions can simultaneously degrade food security locally while enhancing it at regional spatial scales, leading to large variations in overall food security outcomes. Dedicated data collection at the intersection of flood events and associated food security measures, across different spatial and temporal scales, is required to better characterize the extent of flood impact and inform preparedness, response, and recovery needs.
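As a rough illustration of the Granger-causality approach named in the abstract above, the sketch below runs a bivariate Granger test with Python's statsmodels package. It is a minimal example under assumed data: the file name, column names, and six-month maximum lag are hypothetical placeholders, not the study's actual data or specification.

```python
# Minimal sketch of a bivariate Granger-causality check between a flood indicator
# and a food-insecurity indicator. All data, column names, and the lag choice are
# hypothetical; this is not the study's actual specification.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical monthly series for one administrative unit:
#   flood_extent    - fraction of the unit's area flooded (0-1)
#   food_insecurity - share of population classified as food insecure
df = pd.read_csv("district_timeseries.csv", parse_dates=["month"], index_col="month")

# Null hypothesis: past flood_extent does NOT help predict food_insecurity.
# The target series goes in the first column, the candidate cause in the second.
results = grangercausalitytests(df[["food_insecurity", "flood_extent"]], maxlag=6)

for lag, res in results.items():
    p_value = res[0]["ssr_ftest"][1]
    print(f"lag={lag}: SSR F-test p-value={p_value:.3f}")
```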
Looking Backwards (and Forwards): NSF Secure and Trustworthy Computing 20-Year Retrospective Panel Transcription
The U.S. National Science Foundation (NSF) celebrated the 20th anniversary of its research funding programs in cybersecurity, and more generally, secure and trustworthy computing, with a panel session at its conference held in June 2022. The panel members, distinguished researchers in different research areas of trustworthy computing, were asked to comment on what has been learned, what perhaps should be “unlearned,” what still needs to be learned, and the status of education and training in their respective areas of expertise. Laurie Williams covered enterprise security and measuring security, Gene Tsudik commented on cryptographic security, Trent Jaeger addressed computing infrastructure security, Tadayoshi Kohno reviewed security in cyber-physical systems, and Apu Kapadia provided insights on human-centered security. Michael K. Reiter chaired the panel and moderated questions from the audience. This report provides a brief summary of NSF's research programs in the area and an edited transcript of the panel discussion.
- Award ID(s): 2205940
- PAR ID: 10395295
- Date Published:
- Journal Name: IEEE Security & Privacy
- ISSN: 1540-7993
- Page Range / eLocation ID: 2 to 13
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Emerging Distributed AI systems are revolutionizing big data computing and data processing capabilities with growing economic and societal impact. However, recent studies have identified new attack surfaces and risks caused by security, privacy, and fairness issues in AI systems. In this paper, we review representative techniques, algorithms, and theoretical foundations for trustworthy distributed AI through robustness guarantees, privacy protection, and fairness awareness in distributed learning. We first provide a brief overview of alternative architectures for distributed learning, discuss inherent vulnerabilities in the security, privacy, and fairness of AI algorithms in distributed learning, and analyze why these problems are present regardless of the specific architecture. We then provide a unique taxonomy of countermeasures for trustworthy distributed AI, covering (1) robustness to evasion attacks and irregular queries at inference, and robustness to poisoning attacks, Byzantine attacks, and irregular data distributions during training; (2) privacy protection during distributed learning and model inference at deployment; and (3) AI fairness and governance with respect to both data and models. We conclude with a discussion of open challenges and future research directions toward trustworthy distributed AI, such as the need for trustworthy AI policy guidelines, responsibility-utility co-design for AI, and incentives and compliance. (A minimal sketch of one robust-aggregation defense of the kind surveyed here appears after this list.)
- This Innovative Practice paper describes the Local Research Experiences for Undergraduates (LREU) program that was established by the Computing Alliance of Hispanic-Serving Institutions (CAHSI) at Hispanic-serving institutions (HSIs) in 2021 to increase the number of students, particularly students from underrepresented populations, who enter graduate programs in computer science. Since its first offering in Spring 2022, the LREU program has involved 182 faculty and 253 students. The LREU program funds undergraduate research experiences at the students’ home institutions, with an emphasis on first-generation students and those with financial need. The motivation for the program is to address the low number of domestic students, particularly Hispanics and other minoritized populations, who seek and complete graduate degrees. Research shows that participation in research activities predicts college outcomes such as GPA, retention, and persistence. Even though these studies inform us of the importance of REU programs, many programmatic efforts are summer experiences and, while students may receive support, faculty mentors rarely receive coaching or professional development. What distinguishes the LREU program is the focus on the deliberate development of students’ professional and research skills; faculty coaching on the Affinity Research Group model; and the learning community established to share experiences and practices and to learn from each other. Students, who are matched with faculty mentors based on their areas of interest, work with their mentor to co-create a research plan. Students keep a research journal in which they record what they have learned and identify areas for their growth and development as researchers. The LREU program provides an opportunity for participants to cultivate a growth mindset through deliberate practice and reflection from personal, professional, social, and academic perspectives. The paper discusses the multi-institutional perspectives that help CAHSI understand the types of challenges faced in undergraduate research programs, how faculty mentors communicate and make decisions, and how mentors resolve challenges, allowing the research community to better understand students’ and faculty members’ experiences. In addition, the paper reports on research and evaluation results that documented mentors’ growth in their knowledge of effective research mentoring practices and students’ learning gains in research and other skills. The paper also describes the impact of the learning community, e.g., how it supports developing strategies for interacting with and mentoring students from underrepresented populations.
- Computer-aided cryptography is an active area of research that develops and applies formal, machine-checkable approaches to the design, analysis, and implementation of cryptography. We present a cross-cutting systematization of the computer-aided cryptography literature, focusing on three main areas: (i) design-level security (both symbolic security and computational security), (ii) functional correctness and efficiency, and (iii) implementation-level security (with a focus on digital side-channel resistance). In each area, we first clarify the role of computer-aided cryptography---how it can help and what the caveats are---in addressing current challenges. We next present a taxonomy of state-of-the-art tools, comparing their accuracy, scope, trustworthiness, and usability. Then, we highlight their main achievements, trade-offs, and research challenges. After covering the three main areas, we present two case studies. First, we study efforts in combining tools focused on different areas to consolidate the guarantees they can provide. Second, we distill the lessons learned from the computer-aided cryptography community's involvement in the TLS 1.3 standardization effort. Finally, we conclude with recommendations to paper authors, tool developers, and standardization bodies moving forward.
- Security and reliability are primary concerns in any computing paradigm, including quantum computing. Currently, users can access quantum computers through a cloud-based platform where they can run their programs on a suite of quantum computers. As the quantum computing ecosystem grows in popularity and utility, it is reasonable to expect that more companies, including untrusted/less-trusted/unreliable vendors, will begin offering quantum computers as hardware-as-a-service at varied price/performance points. Since computing time on quantum hardware is expensive and the access queue can be long, users will be motivated to use the cheaper and readily available but unreliable/less-trusted hardware. The less-trusted vendors can tamper with the results, providing a sub-optimal solution to the user. For applications such as critical infrastructure optimization, the inferior solution may have significant socio-political implications. Since quantum computers cannot be efficiently simulated on classical computers, users have no way of verifying the computation outcome. In this paper, we address this challenge by modeling adversarial tampering and simulating its impact on both pure quantum and hybrid quantum-classical workloads. To achieve trustworthy computing in a mixed environment of trusted and untrusted hardware, we propose an equitable distribution of total shots (i.e., repeated executions of quantum programs) across hardware options. On average, we note ≈ 30X and ≈ 1.5X improvement across the pure quantum workloads and a maximum improvement of ≈ 5X for the hybrid quantum-classical algorithm in the chosen quality metrics. We also propose an intelligent run-adaptive shot distribution heuristic that leverages temporal variation in hardware quality to the user's advantage, allowing users to identify tampered/untrustworthy hardware at runtime and allocate more shots to the reliable hardware, which results in a maximum improvement of ≈ 190X and ≈ 9X across the pure quantum workloads and an improvement of up to ≈ 2.5X for the hybrid quantum-classical algorithm.
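To make one of the training-time robustness countermeasures surveyed in the distributed-AI item earlier in this list concrete, here is a minimal, self-contained sketch comparing plain averaging of client updates with a coordinate-wise median, a standard Byzantine-robust aggregation rule. The client counts, parameter dimensions, and attack values are illustrative assumptions, not taken from that paper.

```python
# Illustrative comparison of plain averaging vs. coordinate-wise median aggregation
# in a distributed/federated learning round. A single Byzantine client submits an
# extreme update; the median keeps the aggregate close to the honest clients.
import numpy as np

def federated_average(updates: np.ndarray) -> np.ndarray:
    """FedAvg-style mean of client updates; a single outlier can dominate it."""
    return updates.mean(axis=0)

def coordinate_wise_median(updates: np.ndarray) -> np.ndarray:
    """Byzantine-robust aggregation: take the per-parameter median across clients."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 4))   # 9 honest clients, 4 model parameters
byzantine = np.full((1, 4), 100.0)           # 1 malicious client sends huge values
updates = np.vstack([honest, byzantine])

print("mean aggregate:  ", federated_average(updates))       # pulled toward the attacker
print("median aggregate:", coordinate_wise_median(updates))  # stays near the honest updates
```

Coordinate-wise median is only one of the aggregation rules in that taxonomy; trimmed mean and Krum-style selection follow the same pattern of bounding any single client's influence.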
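Similarly, for the quantum-workload item in the list above, the following sketch shows one simple way a fixed shot budget might be split equally across hardware providers and then shifted toward providers with higher estimated reliability. The fidelity scores and the proportional-allocation rule are assumptions for illustration; they are not the authors' actual heuristic.

```python
# Illustrative shot-budget allocation across quantum hardware providers:
# an equal split, and an adaptive split weighted by estimated per-provider fidelity.
# Fidelity scores here are hypothetical; the paper's own heuristic may differ.
def equitable_split(total_shots: int, n_providers: int) -> list[int]:
    """Give each provider an (almost) equal share of the shot budget."""
    base, extra = divmod(total_shots, n_providers)
    return [base + (1 if i < extra else 0) for i in range(n_providers)]

def adaptive_split(total_shots: int, fidelity_scores: list[float]) -> list[int]:
    """Allocate shots proportionally to an estimated fidelity score per provider."""
    total = sum(fidelity_scores)
    raw = [total_shots * f / total for f in fidelity_scores]
    shots = [int(r) for r in raw]
    # Hand out shots lost to rounding to the providers with the largest remainders.
    leftover = total_shots - sum(shots)
    by_remainder = sorted(range(len(raw)), key=lambda i: raw[i] - shots[i], reverse=True)
    for i in by_remainder[:leftover]:
        shots[i] += 1
    return shots

print(equitable_split(8192, 3))               # e.g. [2731, 2731, 2730]
print(adaptive_split(8192, [0.9, 0.6, 0.2]))  # more shots go to the higher-fidelity provider
```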