Abstract Consider averages along the prime integers $$\mathbb{P}$$ given by $$\mathcal{A}_N f(x) = N^{-1} \sum_{p \in \mathbb{P},\, p \le N} (\log p)\, f(x - p).$$ These averages satisfy a uniform scale-free $$\ell^p$$-improving estimate: for all $$1 < p < 2$$ there is a constant $$C_p$$ so that, for all integers $$N$$ and all functions $$f$$ supported on $$[0, N]$$, there holds $$N^{-1/p'} \left\| \mathcal{A}_N f \right\|_{\ell^{p'}} \le C_p \, N^{-1/p} \left\| f \right\|_{\ell^p}.$$ The maximal function $$\mathcal{A}^* f = \sup_N |\mathcal{A}_N f|$$ satisfies $$(p, p)$$ sparse bounds for all $$1 < p < 2$$; these are the natural variants of the scale-free bounds. As a corollary, $$\mathcal{A}^*$$ is bounded on $$\ell^p(w)$$ for all weights $$w$$ in the Muckenhoupt $$A_p$$ class. No prior weighted inequalities for $$\mathcal{A}^*$$ were known.
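For orientation, a $$(p, p)$$ sparse bound is usually stated in the following standard form (a sketch of the general definition, not the paper's precise statement): for nonnegative, finitely supported $$f$$ and $$g$$ there is a sparse collection $$\mathcal{S}$$ of intervals, depending on $$f$$ and $$g$$, with $$\langle \mathcal{A}^* f, g \rangle \le C_p \sum_{I \in \mathcal{S}} |I| \, \langle f \rangle_{I, p} \, \langle g \rangle_{I, p}, \qquad \langle f \rangle_{I, p} = \Bigl( |I|^{-1} \sum_{x \in I} |f(x)|^p \Bigr)^{1/p},$$ where sparse means that each $$I \in \mathcal{S}$$ contains a subset $$E_I$$ with $$|E_I| \ge |I|/4$$ and the sets $$E_I$$ are pairwise disjoint. Weighted estimates such as the $$\ell^p(w)$$ bound above follow from sparse bounds by now-standard arguments.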
Discovery of penicillic acid as a chemical probe against tau aggregation in Alzheimer's disease
A genetically modified fungal strain generated a natural product library used to conduct various activity screens for Alzheimer's disease tau aggregation. The hit compound, penicillic acid, was optimized for the development of analogs.
- Award ID(s):
- 2003261
- PAR ID:
- 10632315
- Publisher / Repository:
- Royal Society of Chemistry
- Date Published:
- Journal Name:
- Chemical Science
- Volume:
- 15
- Issue:
- 48
- ISSN:
- 2041-6520
- Page Range / eLocation ID:
- 20467 to 20477
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Given a set of points $$P = (P^+ \sqcup P^-) \subset \mathbb{R}^d$$ for some constant $$d$$ and a supply function $$\mu:P\to \mathbb{R}$$ such that $$\mu(p) > 0~\forall p \in P^+$$, $$\mu(p) < 0~\forall p \in P^-$$, and $$\sum_{p\in P}{\mu(p)} = 0$$, the geometric transportation problem asks one to find a transportation map $$\tau: P^+\times P^-\to \mathbb{R}_{\ge 0}$$ such that $$\sum_{q\in P^-}{\tau(p, q)} = \mu(p)~\forall p \in P^+$$, $$\sum_{p\in P^+}{\tau(p, q)} = -\mu(q)~\forall q \in P^-$$, and the weighted sum of Euclidean distances for the pairs $$\sum_{(p,q)\in P^+\times P^-}\tau(p, q)\cdot ||q-p||_2$$ is minimized. We present the first deterministic algorithm that computes, in near-linear time, a transportation map whose cost is within a $$(1 + \varepsilon)$$ factor of optimal. More precisely, our algorithm runs in $$O(n\varepsilon^{-(d+2)}\log^5{n}\log{\log{n}})$$ time for any constant $$\varepsilon > 0$$. While a randomized $$n\varepsilon^{-O(d)}\log^{O(d)}{n}$$ time algorithm for this problem was discovered in the last few years, all previously known deterministic $$(1 + \varepsilon)$$-approximation algorithms run in $$\Omega(n^{3/2})$$ time. A similar situation existed for geometric bipartite matching, the special case of geometric transportation where all supplies are unit, until a deterministic $$n\varepsilon^{-O(d)}\log^{O(d)}{n}$$ time $$(1 + \varepsilon)$$-approximation algorithm was presented at STOC 2022. Surprisingly, our result is not only a generalization of the bipartite matching one to arbitrary instances of geometric transportation, but it also reduces the running time for all previously known $$(1 + \varepsilon)$$-approximation algorithms, randomized or deterministic, even for geometric bipartite matching. In particular, we give the first $$(1 + \varepsilon)$$-approximate deterministic algorithm for geometric bipartite matching and the first $$(1 + \varepsilon)$$-approximate deterministic or randomized algorithm for geometric transportation with no dependence on $$d$$ in the exponent of the running time's polylog. As an additional application of our main ideas, we also give the first randomized near-linear $$O(\varepsilon^{-2} m \log^{O(1)} n)$$ time $$(1 + \varepsilon)$$-approximation algorithm for the uncapacitated minimum cost flow (transshipment) problem in undirected graphs with arbitrary real edge costs.
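To make the objective and constraints concrete, the following short Python sketch evaluates the cost of a candidate transportation map and checks its feasibility. It only illustrates the problem statement (the data layout and function names are invented here); it is not the near-linear-time algorithm described in the abstract.

```python
import math

def transport_cost(points, mu, tau):
    """Cost of a transportation map tau: sum of tau(p, q) * ||q - p||_2
    over all pairs (p, q) with p in P+ and q in P-.

    points : dict mapping a label to its coordinates (tuple of floats)
    mu     : dict mapping a label to its supply (positive on P+, negative on P-)
    tau    : dict mapping (p_label, q_label) to a nonnegative flow amount
    """
    cost = 0.0
    for (p, q), flow in tau.items():
        cost += flow * math.dist(points[p], points[q])  # Euclidean distance
    return cost

def is_feasible(mu, tau, tol=1e-9):
    """Check the transportation constraints: the outflow of each p in P+ equals
    mu(p), the inflow of each q in P- equals -mu(q), and all flows are nonnegative."""
    total = {label: 0.0 for label in mu}
    for (p, q), flow in tau.items():
        if flow < -tol:
            return False
        total[p] += flow
        total[q] += flow
    return all(abs(total[label] - abs(mu[label])) <= tol for label in mu)

# Example: two unit supplies matched to two unit demands in the plane.
points = {"a": (0.0, 0.0), "b": (1.0, 0.0), "x": (0.0, 1.0), "y": (1.0, 1.0)}
mu = {"a": 1.0, "b": 1.0, "x": -1.0, "y": -1.0}
tau = {("a", "x"): 1.0, ("b", "y"): 1.0}
assert is_feasible(mu, tau)
print(transport_cost(points, mu, tau))  # 2.0
```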
-
Caratheodory’s theorem says that any point in the convex hull of a set $$P$$ in $$\mathbb{R}^d$$ is in the convex hull of a subset $$P'$$ of $$P$$ such that $$|P'| \le d + 1$$. For some sets $$P$$, the upper bound $$d + 1$$ can be improved. The best upper bound for $$P$$ is known as the Caratheodory number [2, 15, 17]. In this paper, we study the computational problem of finding the smallest set $$P'$$ for a given set $$P$$ and a point $$p$$. We call the size of this set $$P'$$ the Caratheodory number of the point $$p$$, or CNP. We show that the problem of deciding the Caratheodory number of a point is NP-hard. Furthermore, we show that the problem is k-LDT-hard. We present two algorithms for computing a smallest set $$P'$$ when CNP $$= 2, 3$$. Bárány [1] generalized Caratheodory’s theorem by using $$d+1$$ sets (colored sets) such that their convex hulls intersect. We introduce the Colorful Caratheodory number of a point, or CCNP, which can be smaller than $$d+1$$. Then we extend our results for CNP to CCNP.
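To illustrate the definition of CNP (and not the algorithms of the paper), here is a brute-force Python sketch: it tests convex-hull membership of $$p$$ for subsets of $$P$$ of increasing size via a linear-programming feasibility check, which is exponential in $$|P|$$ and consistent with the NP-hardness result above. The helper names and the use of scipy are assumptions of this sketch.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(subset, point):
    """Return True if `point` is a convex combination of the rows of `subset`.
    Solves the LP feasibility problem: lambda >= 0, sum(lambda) = 1,
    subset^T lambda = point."""
    subset = np.asarray(subset, dtype=float)
    point = np.asarray(point, dtype=float)
    k, d = subset.shape
    A_eq = np.vstack([subset.T, np.ones((1, k))])   # d coordinate rows + 1 sum row
    b_eq = np.concatenate([point, [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success

def caratheodory_number_of_point(P, p):
    """Smallest |P'| with P' a subset of P and p in conv(P'), by brute force.
    Exponential in |P|; only meant to illustrate the definition of CNP."""
    P = [tuple(q) for q in P]
    d = len(p)
    for size in range(1, min(len(P), d + 1) + 1):   # Caratheodory: d + 1 always suffices
        for subset in combinations(P, size):
            if in_convex_hull(subset, p):
                return size
    return None  # p is not in conv(P)

# Example in the plane: the origin needs 3 points of a surrounding triangle...
P = [(1.0, 0.0), (-1.0, 1.0), (-1.0, -1.0), (2.0, 0.0)]
print(caratheodory_number_of_point(P, (0.0, 0.0)))  # 3
# ...but only 1 point if p coincides with a point of P.
print(caratheodory_number_of_point(P, (1.0, 0.0)))  # 1
```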
-
Abstract The relative effectiveness of reflection either through student generation of contrasting cases or through provided contrasting cases is not well-established for adult learners. This paper presents a classroom study to investigate this comparison in a college-level Computer Science (CS) course where groups of students worked collaboratively to design database access strategies. Forty-four teams were randomly assigned to three reflection conditions ([GEN] directive to generate a contrasting case to the student solution and evaluate their trade-offs in light of the principle, [CONT] directive to compare the student solution with a provided contrasting case and evaluate their trade-offs in light of a principle, and [NSI] a control condition with a non-specific directive for reflection evaluating the student solution in light of a principle). In the CONT condition, as an illustration of the use of LLMs to exemplify knowledge transformation beyond knowledge construction in the generation of an automated contribution to a collaborative learning discussion, an LLM generated a contrasting case to a group's solution to exemplify application of an alternative problem-solving strategy in a way that highlighted the contrast by keeping many concrete details the same as those the group had most recently collaboratively constructed. While there was no main effect of condition on learning based on a content test, low-pretest students learned more from CONT than GEN, with NSI not distinguishable from the other two, while high-pretest students learned marginally more from the GEN condition than the CONT condition, with NSI not distinguishable from the other two.
Practitioner notes
What is already known about this topic
- Reflection during or even in place of computer programming is beneficial for learning of principles for advanced computer science when the principles are new to students.
- Generation of contrasting cases and comparing contrasting cases have both been demonstrated to be effective as opportunities to learn from reflection in some contexts, though questions remain about ideal applicability conditions for adult learners.
- Intelligent conversational agents can be used effectively to deliver stimuli for reflection during collaborative learning, though room for improvement remains, which provides an opportunity to demonstrate the potential positive contribution of large language models (LLMs).
What this paper adds
- The study contributes new knowledge related to the differences in applicability conditions between generation of contrasting cases and comparison across provided contrasting cases for adult learning.
- The paper presents an application of LLMs as a tool to provide contrasting cases tailored to the details of actual student solutions.
- The study provides evidence from a classroom intervention study for positive impact on student learning of an LLM-enabled intervention.
Implications for practice and/or policy
- Advanced computer science curricula should make substantial room for reflection alongside problem solving.
- Instructors should provide reflection opportunities for students tailored to their level of prior knowledge.
- Instructors would benefit from training to use LLMs as tools for providing effective contrasting cases, especially for low-prior-knowledge students.
-
The data provided here accompany the publication "Drought Characterization with GPS: Insights into Groundwater and Reservoir Storage in California" [Young et al., (2024)], which is currently under review with Water Resources Research (as of 28 May 2024). Please refer to the manuscript and its supplemental materials for full details. (A link will be appended following publication.)
File formatting information is listed below, followed by a sub-section of the text describing the Geodetic Drought Index calculation.
The longitude, latitude, and label for grid points are provided in the file "loading_grid_lon_lat".
Time series for each Geodetic Drought Index (GDI) time scale are provided within "GDI_time_series.zip". The included time scales are the 00- (daily), 1-, 3-, 6-, 12-, 18-, 24-, 36-, and 48-month GDI solutions. Files are formatted as follows...
Title: "grid point label L****"_"time scale"_month
File Format: ["decimal date" "GDI value"]
Gridded, epoch-by-epoch solutions for each time scale are provided within "GDI_grids.zip". Files are formatted as follows...
Title: GDI_"decimal date"_"time scale"_month
File Format: ["longitude" "latitude" "GDI value" "grid point label L****"]
2.2 GEODETIC DROUGHT INDEX CALCULATION
We develop the GDI following Vicente-Serrano et al. (2010) and Tang et al. (2023), such that the GDI mimics the derivation of the SPEI, and utilize the log-logistic distribution (further details below). While we apply hydrologic load estimates derived from GPS displacements as the input for this GDI (Figure 1a-d), we note that alternate geodetic drought indices could be derived using other types of geodetic observations, such as InSAR, gravity, strain, or a combination thereof. Therefore, the GDI is a generalizable drought index framework.
A key benefit of the SPEI is that it is a multi-scale index, allowing the identification of droughts which occur across different time scales. For example, flash droughts (Otkin et al., 2018), which may develop over the period of a few weeks, and persistent droughts (>18 months) may not be observed or fully quantified in a uni-scale drought index framework. However, by adopting a multi-scale approach these signals can be better identified (Vicente-Serrano et al., 2010). Similarly, in the case of this GPS-based GDI, hydrologic drought signals are expected to develop at time scales that are characteristic both of the drought and of the source of the load variation (i.e., groundwater versus surface water and their respective drainage basin/aquifer characteristics). Thus, to test a range of time scales, the TWS time series are summarized with retrospective rolling average windows of width D (daily, with no averaging), 1, 3, 6, 12, 18, 24, 36, and 48 months (where one month equals 30.44 days).
From these time-scale-averaged time series, representative compilation window load distributions are identified for each epoch. The compilation window distributions include all dates that range ±15 days from the epoch in question per year. This allows a characterization of the estimated loads for each day relative to all past/future loads near that day, in order to bolster the sample size and provide more robust parametric estimates [similar to Ford et al., (2016)]; this is a key difference between our GDI derivation and that presented by Tang et al. (2023). Figure 1d illustrates the representative distribution for 01 December of each year at the grid cell co-located with GPS station P349 for the daily TWS solution. Here all epochs between 16 November and 16 December of each year (red dots) are compiled to form the distribution presented in Figure 1e. This approach allows inter-annual variability in the phase and amplitude of the signal to be retained (which is largely driven by variation in the hydrologic cycle), while removing the primary annual and semi-annual signals. Solutions converge for compilation windows >±5 days, and show a minor increase in scatter of the GDI time series for windows of ±3-4 days (below which instability becomes more prevalent). To ensure robust characterization of drought characteristics, we opt for an extended ±15-day compilation window.
While Tang et al. (2023) found the log-logistic distribution to be unstable and opted for a normal distribution, we find that, by using the extended compiled distribution, the solutions are stable, with negligible differences compared to the use of a normal distribution. Thus, to remain aligned with the SPEI solution, we retain the three-parameter log-logistic distribution to characterize the anomalies. Probability-weighted moments for the log-logistic distribution are calculated following Singh et al. (1993) and Vicente-Serrano et al. (2010). The individual moments are calculated following Equation 3. These are then used to calculate the L-moments for the shape, scale, and location parameters of the three-parameter log-logistic distribution (Equations 4-6). The probability density function (PDF) and the cumulative distribution function (CDF) are then calculated following Equations 7 and 8, respectively. The inverse Gaussian function is used to transform the CDF from estimates of the parametric sample quantiles to standard normal index values that represent the magnitude of the standardized anomaly. Here, positive/negative values represent greater/lower than normal hydrologic storage. Thus, an index value of -1 indicates that the estimated load is approximately one standard deviation drier than the expected average load on that epoch. *Equations can be found in the main text.
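As a hedged illustration of the per-epoch standardization described above (not the authors' code), the following Python sketch pools a ±15-day compilation window, fits a three-parameter log-logistic distribution, and maps the CDF value through the inverse Gaussian (standard normal quantile) function. It substitutes scipy's maximum-likelihood fit for the probability-weighted-moment (L-moment) estimation used in the publication, omits the multi-scale rolling averaging of the TWS series, and all names are illustrative.

```python
import numpy as np
from scipy.stats import fisk, norm  # fisk = log-logistic distribution

def compilation_window(day_of_year, loads, epoch_doy, half_width=15):
    """Collect loads within +/- half_width days of the epoch's day of year,
    pooled across all years (wrapping around the year boundary)."""
    doy = np.asarray(day_of_year, dtype=float)
    loads = np.asarray(loads, dtype=float)
    delta = np.abs(doy - epoch_doy)
    delta = np.minimum(delta, 365.25 - delta)   # wrap-around distance in days
    return loads[delta <= half_width]

def gdi_for_epoch(sample, value):
    """Standardized index for one epoch: fit a three-parameter log-logistic
    distribution to the compilation-window sample of (time-scale-averaged)
    loads, evaluate its CDF at the epoch's value, and transform that
    probability to a standard-normal quantile. Positive = wetter than normal,
    negative = drier than normal (e.g., -1 is about one standard deviation dry).
    """
    c, loc, scale = fisk.fit(np.asarray(sample, dtype=float))  # shape, location, scale
    prob = fisk.cdf(value, c, loc=loc, scale=scale)
    prob = np.clip(prob, 1e-6, 1 - 1e-6)        # avoid +/- infinity at 0 or 1
    return norm.ppf(prob)

# Usage sketch: for an epoch on day 335 (01 December) of some year,
# sample = compilation_window(doy_array, tws_array, epoch_doy=335)
# index  = gdi_for_epoch(sample, tws_value_at_that_epoch)
```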

