

Search for: All records; Creators/Authors contains: "Kim, John"


  1. Abstract

    A growing number of teaching materials invite students to discuss the complex mathematical, contextual and social aspects of data visualizations. Orchestrating such discussions can be difficult, as this requires teachers to balance a variety of learning goals and student perspectives. This paper examines how teachers interact with data visualization discussion tasks—specifically, those that engage visualizations’ social complexities—as they consider using them in their own classrooms. Drawing from semi-structured clinical interviews with six U.S.-based teachers as they reviewed discussion tasks called Data Story Bytes, we explore: How did these teachers envision using these data visualization discussions in their classrooms? And what mathematical, contextual, and/or social aspects of visualizations did teachers emphasize when engaging with the discussion task materials? We found that all teachers envisioned using data visualization discussions as lesson openers or routine activities, but they differed in their overall emphasis on the visualizations’ mathematical, contextual, or social aspects. Despite these differences, certain types of discussion prompts were associated with particular response patterns across all teachers, suggesting these task structures can help guide teachers to address a shared set of intended baseline goals for all three of these dimensions. Our findings represent a first step in understanding whether and how socially oriented data discussion materials may be enacted in classrooms, and what additional design features and supports may be needed to help teachers do so productively.

     
  2. Free, publicly-accessible full text available August 1, 2025
  3. Over the past few decades, chemical vapor deposition (CVD) of [2.2]paracyclophanes has captured significant attention as an emergent technology, producing conformal, chemically pure, and pinhole‐free coatings for biomedical and industrial applications. Compelling examples range from functional CVD polymers to tailored nanostructures. In this work, the unique functional properties of polymers derived from [2.2]paracyclophanes are connected with emergent applications. Special attention is given to the function‐property relationships in the areas of electronic materials, biomaterials, and separation materials. A particular focus is to highlight the versatility of CVD polymerization to process these polymers.

     
    Free, publicly-accessible full text available May 6, 2025
  4. Processing-in-memory (PIM), where compute is moved closer to the memory or the data, has been widely explored to accelerate emerging workloads. Recently, different PIM-based systems have been announced by memory vendors to minimize data movement and improve performance as well as energy efficiency. One critical component of PIM is the large amount of compute parallelism provided across many "PIM nodes," or the compute units near the memory. In this work, we provide an extensive evaluation and analysis of real PIM systems based on UPMEM PIM. We show that while PIM has benefits, there are also scalability challenges and limitations as the number of PIM nodes increases. In particular, we show how collective communications, which are commonly found in many kernels/workloads, can be problematic for PIM systems. To evaluate the impact of collective communication in PIM architectures, we provide an in-depth analysis of two workloads on the UPMEM PIM system that use representative collective communication patterns: AllReduce and All-to-All communication. Specifically, we evaluate 1) embedding tables, which are commonly used in recommendation systems and require AllReduce, and 2) the Number Theoretic Transform (NTT) kernel, a critical component of Fully Homomorphic Encryption (FHE) that requires All-to-All communication. We analyze the performance benefits of these workloads and show how they can be efficiently mapped to the PIM architecture through alternative data partitioning. However, since each PIM compute unit can only access its local memory, any communication between PIM nodes (or access to remote data) must be routed through the host CPU, severely hampering application performance. To increase the scalability (and applicability) of PIM for future workloads, we make the case that future PIM architectures need efficient communication or interconnection networks between PIM nodes, which will require both hardware and software support.
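
To make the host-mediated collective communication concrete, below is a minimal Python sketch of the AllReduce pattern used for embedding-table pooling. This is an illustration only, not UPMEM SDK code; the names ToyPIMNode and host_allreduce are invented for the example. Each node reduces only the rows held in its local shard and, because PIM nodes cannot talk to each other directly, the partial sums are gathered to the host CPU, reduced there, and broadcast back, which is the round trip identified above as the scalability bottleneck.

```python
# Illustrative model of host-mediated AllReduce on a PIM-style system.
# Assumptions: row-wise sharding of an embedding table, no direct node-to-node links.
import numpy as np

class ToyPIMNode:
    """Hypothetical PIM compute unit: it can only read its own local shard."""
    def __init__(self, shard: np.ndarray, offset: int):
        self.shard = shard      # locally stored rows of the embedding table
        self.offset = offset    # global index of the first local row

    def partial_lookup_sum(self, global_indices) -> np.ndarray:
        # Near-memory compute: sum only the requested rows that live locally.
        acc = np.zeros(self.shard.shape[1])
        for i in global_indices:
            j = i - self.offset
            if 0 <= j < len(self.shard):
                acc += self.shard[j]
        return acc

def host_allreduce(nodes, global_indices):
    """AllReduce routed through the host: gather partials, reduce, broadcast."""
    partials = [n.partial_lookup_sum(global_indices) for n in nodes]  # PIM -> host
    reduced = sum(partials)                                           # reduce on the host CPU
    return [reduced.copy() for _ in nodes]                            # host -> PIM

# Toy usage: a 1024 x 16 embedding table sharded row-wise across 4 PIM nodes.
rng = np.random.default_rng(0)
table = rng.normal(size=(1024, 16))
shards = np.array_split(table, 4)
offsets = np.cumsum([0] + [len(s) for s in shards[:-1]])
nodes = [ToyPIMNode(s, int(o)) for s, o in zip(shards, offsets)]
pooled = host_allreduce(nodes, global_indices=[3, 300, 700])
assert np.allclose(pooled[0], table[[3, 300, 700]].sum(axis=0))
```

In this toy model, the gather and broadcast lists grow with the number of nodes, which mirrors why the host link becomes the limiting factor as PIM systems scale.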
    Free, publicly-accessible full text available February 16, 2025
  5. Epoxy-based polymer networks from step-growth polymerizations are ubiquitous in coatings, adhesives, and as matrices in composite materials. Dynamic covalent bonds in the network allow its degradation into small molecules and thus enable chemical recycling; however, such degradation often requires elevated temperatures and costly chemicals and yields a mixture of small molecules. Here, we design crosslinked polyesters from structurally similar epoxy and anhydride monomers derived from phthalic acid. We achieve selective degradation of the polyesters through transesterification reactions under near-ambient conditions using an alkali carbonate catalyst, yielding a single phthalic ester. We also demonstrate upcycling of the network polyesters into photopolymers by one-step depolymerization using a functional alcohol.
  6. Abstract

    Climate change projections provided by global climate models (GCMs) are generally too coarse for local and regional applications. Local and regional climate change impact studies therefore use downscaled datasets. While there are studies that evaluate downscaling methodologies, there is no study comparing the downscaled datasets that are actually distributed and used in climate change impact studies, and there is no guidance for selecting a published downscaled dataset. We compare five widely used statistically downscaled climate change projection datasets that cover the conterminous USA (CONUS): ClimateNA, LOCA, MACAv2-LIVNEH, MACAv2-METDATA, and NEX-DCP30. All of the datasets are derived from CMIP5 GCMs and are publicly distributed. The five datasets generally have good agreement across CONUS for Representative Concentration Pathways (RCP) 4.5 and 8.5, although the agreement among the datasets varies greatly depending on the GCM, and there are many localized areas of sharp disagreement. Areas of higher dataset disagreement emerge over time, and their importance relative to differences among GCMs is comparable between RCP4.5 and RCP8.5. Dataset disagreement displays distinct regional patterns, with greater disagreement in ΔTmax and ΔTmin in the interior West and in the North, and greater disagreement in ΔP in California and the Southeast. LOCA and ClimateNA are often the outlier datasets, and the seasonal timing of ClimateNA is somewhat shifted from the others. To easily identify regional study areas with high disagreement, we generated maps of dataset disagreement aggregated to states, ecoregions, watersheds, and forests. Climate change assessment studies can use these maps to evaluate and select one or more downscaled datasets for their study area.
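
As a minimal sketch of the kind of disagreement metric such maps could be built from (the range-based metric, function names, and synthetic data below are assumptions for illustration, not necessarily the paper's exact method), the Python below computes a per-grid-cell spread of ΔTmax across several downscaled datasets and aggregates it over a region mask such as a state or watershed.

```python
# Illustrative sketch: per-grid-cell disagreement among downscaled ΔTmax fields,
# measured as the range (max - min) across datasets, then averaged over a region.
import numpy as np

def disagreement_range(fields: np.ndarray) -> np.ndarray:
    """fields has shape (n_datasets, ny, nx); returns a (ny, nx) spread map."""
    return fields.max(axis=0) - fields.min(axis=0)

def regional_mean(spread: np.ndarray, region_mask: np.ndarray) -> float:
    """Aggregate the spread map over a region (e.g., a state, ecoregion, or watershed)."""
    return float(spread[region_mask].mean())

# Toy usage with synthetic data standing in for five downscaled datasets.
rng = np.random.default_rng(42)
ny, nx = 180, 360
delta_tmax = rng.normal(loc=3.0, scale=0.5, size=(5, ny, nx))  # end-of-century ΔTmax, °C
spread_map = disagreement_range(delta_tmax)
mask = np.zeros((ny, nx), dtype=bool)
mask[60:90, 40:80] = True                                      # a hypothetical region
print(f"regional mean disagreement: {regional_mean(spread_map, mask):.2f} °C")
```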

     
  7. Abstract

    Forests play a critical role in mitigating climate change and, at the same time, are predicted to experience large-scale climate change impacts that will affect their efficiency in mitigation efforts. Projections of future carbon sequestration potential typically do not account for the changing economic costs of timber and agricultural production and land use change. We integrated a dynamic, forward-looking economic optimization model of global land use with results from a dynamic global vegetation model and a meta-analysis of climate impacts on crop yields to project future carbon sequestration in forests. We find that the direct impacts of climate change on forests, represented by changes in dieback and forest growth, and the indirect effects due to lost crop productivity together result in a net gain of 17 Gt C in aboveground forest carbon storage from 2000 to 2100. Increases in climate-driven forest growth rates will result in an 81%–99% reduction in the costs of reaching a range of global forest carbon stock targets in 2100, while the increases in dieback rates are projected to raise the costs by 57%–132%. When combined, these two direct impacts are expected to reduce the global costs of climate change mitigation in forests by more than 70%. Inclusion of the third, indirect impact of climate change on forests, through reduction in crop yields and the resulting expansion of cropland, raises the costs by 11%–38% and widens the uncertainty range. While we cannot rule out the possibility of climate change increasing mitigation costs, the central outcomes of the simultaneous impacts of climate change on forests and agriculture are 64%–86% reductions in mitigation costs. Overall, the results suggest that concerns about climate-driven dieback in forests should not inhibit the ambitions of policy makers in expanding forest-based climate solutions.

     