Title: Foundation models for geospatial reasoning: assessing the capabilities of large language models in understanding geometries and topological spatial relations
Award ID(s):
2112606
PAR ID:
10640779
Publisher / Repository:
International Journal of Geographical Information Science
Journal Name:
International Journal of Geographical Information Science
Volume:
39
Issue:
9
ISSN:
1365-8816
Page Range / eLocation ID:
1866 to 1903
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
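The record above does not include the paper's abstract, but the topological spatial relations named in the title are the qualitative predicates ("disjoint", "touches", "overlaps", "contains", ...) studied in GIScience, for example in the DE-9IM and Egenhofer models. As background illustration only (not the paper's method), here is a deliberately simplified, pure-Python classifier for axis-aligned rectangles; the function name and the coarse handling of boundary cases are this sketch's own choices:

```python
def relate(a, b):
    """Classify the topological relation between two axis-aligned
    rectangles a and b, each given as (xmin, ymin, xmax, ymax).
    Relation names loosely follow the Egenhofer / DE-9IM predicates;
    boundary cases are handled coarsely for brevity."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # No contact at all: separated along some axis.
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "disjoint"
    if a == b:
        return "equals"
    # Boundaries meet but interiors do not overlap.
    if ax1 == bx0 or bx1 == ax0 or ay1 == by0 or by1 == ay0:
        return "touches"
    # One rectangle encloses the other.
    if ax0 <= bx0 and ay0 <= by0 and ax1 >= bx1 and ay1 >= by1:
        return "contains"
    if bx0 <= ax0 and by0 <= ay0 and bx1 >= ax1 and by1 >= ay1:
        return "within"
    # Interiors intersect but neither contains the other.
    return "overlaps"
```

Probing whether a language model answers such queries consistently (e.g., that "contains" and "within" are converses, or that "disjoint" is symmetric) is the kind of task the title describes.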
More Like This
  1. Lee, EA; Mousavi, MR; Talcott, C (Ed.)
    Models have driven progress in science and engineering for centuries; they are powerful tools for understanding systems and building abstractions. However, the goal of models in science differs from that in engineering, and we observe the misuse of models undermining research goals. Specifically, in the field of formal methods, we advocate that verification should be performed on engineering models rather than scientific models, to the extent possible. We observe that models under verification are very often scientific models rather than engineering models, and we show why verifying scientific models is ineffective in engineering efforts. To guarantee safety in an engineered system, it is the engineering model one should verify; this model can then be used to derive a correct-by-construction implementation. To demonstrate the proposed principle, we review lessons learned from verifying programs in a language called Lingua Franca using Timed Rebeca.
  2. Recent work on supervised learning [GKR+22] defined the notion of omnipredictors, i.e., predictor functions p over features that are simultaneously competitive for minimizing a family of loss functions against a comparator class. Omniprediction requires approximating the Bayes-optimal predictor beyond the loss minimization paradigm, and has generated significant interest in the learning theory community. However, even for basic settings such as agnostically learning single-index models (SIMs), existing omnipredictor constructions require impractically large sample complexities and runtimes, and output complex, highly improper hypotheses. Our main contribution is a new, simple construction of omnipredictors for SIMs. We give a learner outputting an omnipredictor that is ε-competitive on any matching loss induced by a monotone, Lipschitz link function, when the comparator class is bounded linear predictors. Our algorithm requires ≈ ε^(-4) samples and runs in nearly linear time, and its sample complexity improves to ≈ ε^(-2) if link functions are bi-Lipschitz. This significantly improves upon the only prior known construction, due to [HJKRR18, GHK+23], which used ≳ ε^(-10) samples. We achieve our construction via a new, sharp analysis of the classical Isotron algorithm [KS09, KKKS11] in the challenging agnostic learning setting, of potential independent interest. Previously, Isotron was known to properly learn SIMs in the realizable setting, as well as to yield constant-factor competitive hypotheses under the squared loss [ZWDD24]. As they are based on Isotron, our omnipredictors are multi-index models with ≈ ε^(-2) prediction heads, bringing us closer to the tantalizing goal of proper omniprediction for general loss families and comparators.
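The abstract above builds on the Isotron algorithm, which alternates an isotonic-regression fit of the unknown monotone link with a perceptron-style update of the weight vector. A minimal NumPy sketch of that loop in the noiseless, realizable setting (the `pav` helper and all names here are illustrative, not the paper's code):

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    levels, weights, counts = [], [], []
    for v in y:
        levels.append(float(v)); weights.append(1.0); counts.append(1)
        # Merge adjacent blocks while monotonicity is violated.
        while len(levels) > 1 and levels[-2] > levels[-1]:
            w = weights[-1] + weights[-2]
            lv = (levels[-1] * weights[-1] + levels[-2] * weights[-2]) / w
            levels[-2:], weights[-2:] = [lv], [w]
            counts[-2:] = [counts[-2] + counts[-1]]
    return np.repeat(levels, counts)

def isotron(X, y, iters=300):
    """Isotron [KS09]: alternate a monotone fit of the link u(w·x)
    with a perceptron-style update of the weight vector w."""
    n, d = X.shape
    w = np.zeros(d)
    u = np.zeros(n)
    for _ in range(iters):
        z = X @ w
        order = np.argsort(z)
        u = np.empty(n)
        u[order] = pav(y[order])        # isotonic fit of y against w·x
        w = w + X.T @ (y - u) / n       # move w to shrink the residuals
    return w, u
```

On data generated as y = sigmoid(w*·x) this loop drives the squared prediction error toward zero; the paper's contribution is a sharp analysis of the same iteration in the much harder agnostic setting.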
  3. Abstract Different agents need to make a prediction. They observe identical data, but have different models: they predict using different explanatory variables. We study which agent believes they have the best predictive ability—as measured by the smallest subjective posterior mean squared prediction error—and show how it depends on the sample size. With small samples, we present results suggesting it is an agent using a low-dimensional model. With large samples, it is generally an agent with a high-dimensional model, possibly including irrelevant variables, but never excluding relevant ones. We apply our results to characterize the winning model in an auction of productive assets, to argue that entrepreneurs and investors with simple models will be overrepresented in new sectors, and to understand the proliferation of “factors” that explain the cross-sectional variation of expected stock returns in the asset-pricing literature. 
  4. This talk addresses the essential role of data models in analytics, especially for lean teams. It caters to a broad audience, from beginners creating reports to analysts integrating diverse datasets for advanced analytics and KPI development. A robust data model is crucial for rapidly scaling analytics efforts, allowing the inclusion of varied data sources such as research awards and proposals, HR data (gender, rank, title, ethnicity, etc.), teaching loads, and external datasets like the HERD Survey. We will cover data modeling basics, then explore advanced analytics with the Microsoft analytics stack, focusing on Power BI Desktop and emphasizing its accessibility and capability for comprehensive insights, including an introduction to Microsoft Data Analysis Expressions (DAX) and its time intelligence functions. Discover how effective data models enhance analytics capabilities, enabling teams to achieve significant research outcomes.
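The star-schema idea at the core of the talk (a fact table joined to dimension tables, then aggregated) can be sketched outside Power BI as well. A toy pandas analogue with hypothetical tables and column names (the talk itself uses Power BI and DAX, not Python; the year-to-date cumulative sum at the end is only a rough stand-in for a DAX time-intelligence measure):

```python
import pandas as pd

# Hypothetical fact table: one row per research award.
awards = pd.DataFrame({
    "pi_id": [1, 1, 2, 3],
    "award_date": pd.to_datetime(
        ["2022-03-01", "2023-07-15", "2022-11-30", "2023-01-10"]),
    "amount": [100_000, 250_000, 80_000, 150_000],
})

# Hypothetical HR dimension table: one row per principal investigator.
hr = pd.DataFrame({
    "pi_id": [1, 2, 3],
    "rank": ["Professor", "Associate", "Assistant"],
    "gender": ["F", "M", "F"],
})

# Star-schema style: join the fact table to its dimension, then aggregate.
model = awards.merge(hr, on="pi_id", how="left")
by_rank = model.groupby("rank")["amount"].sum()

# Rough analogue of a DAX year-to-date measure: running total within year.
ordered = model.sort_values("award_date")
ytd = ordered.groupby(ordered["award_date"].dt.year)["amount"].cumsum()
```

The point the talk makes carries over: once the join keys and dimension tables are modeled cleanly, adding a new data source (teaching loads, HERD data) is another dimension table rather than a rewrite of every report.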