This content will become publicly available on December 1, 2025

Title: FEW questions, many answers: using machine learning to assess how students connect food–energy–water (FEW) concepts
Award ID(s): 2013359; 2013373
PAR ID: 10589976
Author(s) / Creator(s):
Publisher / Repository: Humanities and Social Sciences Communications
Date Published:
Journal Name: Humanities and Social Sciences Communications
Volume: 11
Issue: 1
ISSN: 2662-9992
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Federated Learning (FL) enables multiple clients to collaboratively learn a machine learning model without exchanging their local data. In this way, the server can exploit the computational power of all clients and train the model on the larger pool of samples they collectively hold (a minimal sketch of this basic FL aggregation loop appears after this list). Although the mechanism has proven effective in various fields, existing works generally assume that each client holds sufficient data for training. In practice, however, certain clients may hold only a limited number of samples (i.e., few-shot samples); for example, a user who has just started using a new mobile device has taken relatively few photos. In this scenario, existing FL methods typically suffer a significant performance drop on these clients, so it is important to develop a few-shot model that generalizes to clients with limited data under the FL setting. In this paper, we refer to this novel problem as federated few-shot learning. The problem is challenging for two major reasons: global data variance among clients (i.e., differences in data distributions across clients) and local data insufficiency in each client (i.e., the lack of adequate local data for training). To overcome these two challenges, we propose a novel federated few-shot learning framework with two separately updated models and dedicated training strategies that reduce the adverse impact of global data variance and local data insufficiency. Extensive experiments on four prevalent datasets covering news articles and images validate the effectiveness of our framework against state-of-the-art baselines.
  2. Recently Cutler and Radcliffe proved that the graph on $n$ vertices with maximum degree at most $r$ having the most cliques is a disjoint union of $\lfloor n/(r+1)\rfloor$ cliques of size $r+1$ together with a clique on the remaining vertices. It is very natural also to consider this question when the limiting resource is edges rather than vertices. In this paper we prove that among graphs with $m$ edges and maximum degree at most $r$, the graph that has the most cliques of size at least two is the disjoint union of $\bigl\lfloor m \bigm/ \binom{r+1}{2} \bigr\rfloor$ cliques of size $r+1$ together with the colex graph using the remainder of the edges (a worked count for this extremal construction is given after this list).
  3. For real-world graph data, the node class distribution is inherently imbalanced and long-tailed, which naturally leads to a few-shot learning scenario with only a few labeled nodes for newly emerging classes. Existing efforts are carefully designed to solve this few-shot learning problem via data augmentation or learning transferable initializations, to name a few approaches. However, most, if not all, of them rest on the strong assumption that all test nodes come exclusively from novel classes, which is impractical in real-world applications. In this paper, we study a broader and more realistic problem named generalized few-shot node classification, where the test samples can come from both novel classes and base classes. Compared with standard few-shot node classification, this new problem imposes several unique challenges, including asymmetric classification and inconsistent preference. To counter those challenges, we propose a shot-aware graph neural network (STAGER) equipped with an uncertainty-based weight assigner module for adaptive propagation. To formulate this problem from the meta-learning perspective, we propose a new training paradigm named imbalanced episodic training, which ensures that the label distribution is consistent between the training and test scenarios (an illustrative episode sampler is sketched after this list). Experimental results on four real-world datasets demonstrate the efficacy of our model, with up to 14% accuracy improvement over baselines.
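The first related item above describes federated learning, where a server aggregates models trained locally on clients' private data. As a point of reference only, here is a minimal Python sketch of a generic federated-averaging round on a toy least-squares task; the client setup, learning rate, and toy objective are illustrative assumptions, and this is not the paper's federated few-shot framework.

import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """A few gradient steps on one client's private data (toy least-squares objective)."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One communication round: every client trains locally; the server averages
    the returned weights, weighted by each client's sample count (FedAvg-style)."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, (X, y)))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# Toy run: three clients, one of them "few-shot" (only 4 samples), mirroring the
# scenario the abstract describes.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.1 * rng.normal(size=n)

clients = [make_client(100), make_client(80), make_client(4)]
w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("learned weights:", w)  # should approach [2.0, -1.0]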
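To make the second result above concrete, the following back-of-envelope count (an illustration, not taken from the paper) tallies the cliques of size at least two in the extremal construction. Each copy of $K_{r+1}$ contributes $\sum_{k=2}^{r+1}\binom{r+1}{k} = 2^{r+1} - r - 2$ cliques, so the construction has

\[
  \Bigl\lfloor m \Bigm/ \tbinom{r+1}{2} \Bigr\rfloor \bigl( 2^{r+1} - r - 2 \bigr)
\]

cliques plus those in the colex graph on the leftover edges. For example, with $r = 2$ and $m = 10$: since $\binom{3}{2} = 3$, the construction uses $\lfloor 10/3 \rfloor = 3$ triangles (9 edges, each triangle contributing $2^3 - 4 = 4$ cliques) plus one leftover edge, for $3 \cdot 4 + 1 = 13$ cliques of size at least two.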
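The last item above hinges on "imbalanced episodic training": sampling training episodes whose label distribution mirrors the generalized few-shot test setting by mixing many-shot base classes with few-shot novel classes. The sketch below illustrates only that sampling idea; the class names, shot counts, and random sampler are hypothetical, and this is not the STAGER implementation.

import random

def sample_episode(nodes_by_class, base_classes, novel_classes,
                   base_shots=20, novel_shots=3, query_per_class=5):
    """Sample one (support, query) episode in which base classes receive many labeled
    nodes and novel classes only a few, so the training label distribution matches
    a generalized few-shot test split."""
    support, query = [], []
    for c in base_classes + novel_classes:
        shots = base_shots if c in base_classes else novel_shots
        picked = random.sample(nodes_by_class[c], shots + query_per_class)
        support += [(node, c) for node in picked[:shots]]
        query += [(node, c) for node in picked[shots:]]
    return support, query

# Toy usage: two base classes with plentiful labels, two novel classes with few.
nodes_by_class = {
    "base_a": list(range(0, 100)),
    "base_b": list(range(100, 200)),
    "novel_x": list(range(200, 210)),
    "novel_y": list(range(210, 220)),
}
support, query = sample_episode(nodes_by_class, ["base_a", "base_b"],
                                ["novel_x", "novel_y"])
print(len(support), "support pairs,", len(query), "query pairs")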