
Title: Montague Grammar Induction
We propose a computational modeling framework for inducing combinatory categorial grammars from arbitrary behavioral data. This framework provides the analyst fine-grained control over the assumptions that the induced grammar should conform to: (i) what the primitive types are; (ii) how complex types are constructed; (iii) what set of combinators can be used to combine types; and (iv) whether (and to what) the types of some lexical items should be fixed. In a proof-of-concept experiment, we deploy our framework for use in distributional analysis. We focus on the relationship between s(emantic)-selection and c(ategory)-selection, using as input a lexicon-scale acceptability judgment dataset focused on English verbs’ syntactic distribution (the MegaAcceptability dataset) and enforcing standard assumptions from the semantics literature on the induced grammar.
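
To make the four control points concrete, here is a minimal sketch of how such analyst-specified assumptions might be encoded. This is an illustration under our own naming assumptions (Prim, Slash, forward_apply, and FIXED_LEXICON are all hypothetical), not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass(frozen=True)
class Prim:
    """(i) A primitive type, e.g. S or NP."""
    name: str

@dataclass(frozen=True)
class Slash:
    """(ii) Complex types are built from a result, an argument, and a slash."""
    result: "Type"
    arg: "Type"
    dir: str  # '/' (forward) or '\\' (backward)

Type = Union[Prim, Slash]

S, NP = Prim("S"), Prim("NP")

def forward_apply(fn: Type, arg: Type) -> Optional[Type]:
    """(iii) One combinator, forward application: X/Y combined with Y gives X."""
    if isinstance(fn, Slash) and fn.dir == "/" and fn.arg == arg:
        return fn.result
    return None

# (iv) Fixing the types of selected lexical items constrains the induction:
FIXED_LEXICON = {"that": Slash(Prim("Sbar"), S, "/")}  # complementizer (illustrative)

tv = Slash(Slash(S, NP, "\\"), NP, "/")  # transitive verb: (S\NP)/NP
print(forward_apply(tv, NP))             # the S\NP type
```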
Authors:
Award ID(s):
1940981
Publication Date:
NSF-PAR ID:
10299985
Journal Name:
Proceedings from Semantics and Linguistic Theory
Volume:
30
Page Range or eLocation-ID:
227-251
ISSN:
2163-5951
Sponsoring Org:
National Science Foundation
More Like this
  1. The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine which forms best foster human trust in the robot. We found that comprehensive, real-time visualizations of the robot's internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, the forms of explanation best suited to fostering trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines.
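
    As a rough illustration of the joint grammar-plus-haptics scoring described above, the sketch below rates one candidate action sequence under a toy stochastic grammar and a stand-in haptic likelihood. All action names, probabilities, and force values are invented; the paper's Earley-based parser searches over parses rather than scoring a single fixed sequence.

    ```python
    import math

    # Toy induced stochastic grammar: transition probabilities between actions.
    GRAMMAR = {
        ("grasp", "twist"): 0.6,
        ("grasp", "push"): 0.4,
        ("twist", "pull"): 0.9,
        ("push", "twist"): 0.8,
    }

    def grammar_logprob(actions):
        """Log-probability of an action sequence under the toy grammar."""
        return sum(math.log(GRAMMAR.get(pair, 1e-6))
                   for pair in zip(actions, actions[1:]))

    def haptic_logprob(actions, forces):
        """Stand-in for the embodied haptic model: penalize mismatch between
        sensed force and a per-action expected force (values invented)."""
        expected = {"grasp": 2.0, "twist": 5.0, "push": 3.0, "pull": 4.0}
        return sum(-0.5 * (f - expected[a]) ** 2 for a, f in zip(actions, forces))

    def joint_score(actions, forces, w=0.5):
        """Weighted combination of the grammar and haptic terms."""
        return w * grammar_logprob(actions) + (1 - w) * haptic_logprob(actions, forces)

    print(joint_score(["grasp", "twist", "pull"], [2.1, 4.8, 4.2]))
    ```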
  2. We propose a computational model for inducing full-fledged combinatory categorial grammars from behavioral data. This model contrasts with prior computational models of selection in representing syntactic and semantic types as structured (rather than atomic) objects, enabling direct interpretation of the modeling results relative to standard formal frameworks. We investigate the grammar our model induces when fit to a lexicon-scale acceptability judgment dataset – MegaAcceptability – focusing in particular on the types our model assigns to clausal complements and the predicates that select them.
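
    For a sense of what representing types as structured rather than atomic objects buys, here is a purely illustrative lexicon fragment of the kind such a model might assign to clause-embedding verbs; the entries and notation are hypothetical, not the model's actual output.

    ```python
    # Illustrative only: structured syntactic and semantic types for two
    # clause-embedding predicates (hypothetical entries, not induced output).
    LEXICON = {
        #            syntactic type   semantic type
        "believe": ("(S\\NP)/Sbar",  "<<s,t>,<e,t>>"),  # selects a finite that-clause
        "want":    ("(S\\NP)/Sinf",  "<<s,t>,<e,t>>"),  # selects an infinitival clause
    }

    # Because the types are structured, the c-selection (clause category) and
    # s-selection (propositional argument) facts can be read off directly.
    for verb, (syn, sem) in LEXICON.items():
        print(f"{verb}: {syn} :: {sem}")
    ```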
  3. Lierler, Yuliya; Morales, Jose F.; Dodaro, Carmine; Dahl, Veronica; Gebser, Martin; Tekle, Tuncay (Eds.)
    Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules. Like databases, KRR systems contain information about domains of human activity such as industrial enterprises, science, and business. KRR systems can represent complex concepts and relations, and they can query and manipulate information in sophisticated ways. Unfortunately, KRR technology has been hindered by the fact that specifying the requisite knowledge requires skills that most domain experts do not have, and professional knowledge engineers are hard to find. One solution could be to extract knowledge from English text, and a number of works have attempted to do so (OpenSesame, Google's Sling, etc.). Unfortunately, at present, extraction of logical facts from unrestricted natural language is still too inaccurate to be used for reasoning, while restricting the grammar of the language (so-called controlled natural language, or CNL) is hard for users to learn and use. Nevertheless, some recent CNL-based approaches, such as the Knowledge Authoring Logic Machine (KALM), have been shown to have very high accuracy compared to others, and a natural question is to what extent the CNL restrictions can be lifted. In this paper, we address this issue by transplanting the KALM framework to a neural natural language parser, mStanza. Here we limit our attention to authoring facts and queries, so our focus is on what we call factual English statements. Authoring other types of knowledge, such as rules, will be considered in follow-up work. As it turns out, neural network based parsers have problems of their own, and the mistakes they make range from part-of-speech tagging to lemmatization to dependency errors. We present a number of techniques for combating these problems and test the new system, KALMFL (i.e., KALM for factual language), on a number of benchmarks, which show that KALMFL achieves correctness in excess of 95%.
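
    To illustrate why parser mistakes matter here, the following self-contained sketch (not KALM's or KALMFL's actual pipeline) turns a toy dependency parse of a factual English sentence into a logical fact; the parse tuples stand in for mStanza output, and any POS, lemma, or dependency error would corrupt the extracted fact.

    ```python
    TOY_PARSE = [
        # (token, lemma, POS, head index (1-based, 0 = root), dependency relation)
        ("Mary", "mary", "PROPN", 2, "nsubj"),
        ("owns", "own",  "VERB",  0, "root"),
        ("a",    "a",    "DET",   4, "det"),
        ("car",  "car",  "NOUN",  2, "obj"),
    ]

    def parse_to_fact(parse):
        """Extract predicate(subject, object) from a simple transitive clause."""
        root = next(t for t in parse if t[4] == "root")
        root_idx = parse.index(root) + 1
        subj = next((t for t in parse if t[3] == root_idx and t[4] == "nsubj"), None)
        obj = next((t for t in parse if t[3] == root_idx and t[4] == "obj"), None)
        if subj and obj:
            return f"{root[1]}({subj[1]}, {obj[1]})"
        return None

    print(parse_to_fact(TOY_PARSE))  # own(mary, car)
    ```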
  4. The world outside our laboratories seldom conforms to the assumptions of our models. This is especially true for dynamics models used in control and motion planning for complex, high-degree-of-freedom systems like deformable objects. We must develop better models, but we must also consider that, no matter how powerful our simulators or how big our datasets, our models will sometimes be wrong. What is more, estimating how wrong they are can be difficult, because methods that predict uncertainty distributions based on training data do not account for unseen scenarios. To deploy robots in unstructured environments, we must address two key questions: when should we trust a model, and what do we do when the robot reaches a state where the model is unreliable? We tackle these questions in the context of planning for the manipulation of rope-like objects in clutter. Here, we report an approach that learns a dynamics model in an unconstrained setting and then, from a limited dataset of rope-constraint interactions, learns a classifier to predict where that model is valid. We also propose a way to recover from states where the model's predictions are unreliable. Our method statistically significantly outperforms learning a dynamics function and trusting it everywhere. We further demonstrate its practicality on real-world mock-ups of several domestic and automotive tasks.
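
    The following sketch illustrates the "trust the model only where it is valid" idea under invented features and labels: a simple classifier is fit to examples of where a learned dynamics model erred, and the planner gates on it. It is a stand-in, not the paper's method or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    states = rng.uniform(-1, 1, size=(500, 3))           # toy state features
    model_error = np.abs(states[:, 0]) * 0.5 + rng.normal(0, 0.05, 500)
    valid = (model_error < 0.2).astype(float)            # label: model reliable here?

    # Logistic regression by gradient descent as a stand-in validity classifier.
    w, b = np.zeros(3), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(states @ w + b)))
        w -= 0.1 * (states.T @ (p - valid)) / len(valid)
        b -= 0.1 * float(np.mean(p - valid))

    def model_is_trusted(state, threshold=0.5):
        """The planner queries this before relying on the learned dynamics;
        below the threshold it triggers the recovery behavior instead."""
        return 1.0 / (1.0 + np.exp(-(state @ w + b))) > threshold

    print(model_is_trusted(np.array([0.05, 0.2, -0.1])))
    ```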
  5. The traditional approach to obtaining valid confidence intervals for non-parametric quantities is to select a smoothing parameter such that the bias of the estimator is negligible relative to its standard deviation. While this approach is apparently simple, it has two drawbacks. First, the question of optimal bandwidth selection is no longer well-defined, as it is not clear what ratio of bias to standard deviation should be considered negligible. Second, since the bandwidth choice necessarily deviates from the optimal (mean-squared-error-minimizing) bandwidth, such a confidence interval is very inefficient. To address these issues, we construct valid confidence intervals that account for the presence of a non-negligible bias and thus make it possible to perform inference with optimal mean-squared-error-minimizing bandwidths. The key difficulty lies in finding a strict, yet feasible, bound on the bias of a non-parametric estimator. It is well known that the pointwise bias of an optimal non-parametric estimator cannot be consistently estimated (otherwise, one could subtract it and obtain a faster convergence rate, violating Stone's bounds on the optimal convergence rates). Nevertheless, we find that, under minimal primitive assumptions, it is possible to consistently estimate an upper bound on the magnitude of the bias, which is sufficient to deliver a valid confidence interval whose length decreases at the optimal rate and which does not contradict Stone's results.
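
    In generic notation (ours, not necessarily the paper's), a bias-aware interval of the kind described can be written as follows, where the estimated bound replaces the usual requirement that the bias be negligible:

    ```latex
    % \hat\theta_h: estimate at the MSE-optimal bandwidth h; \hat\sigma_h: its
    % standard error; \hat B_h: a consistently estimated upper bound on |bias|.
    \[
      \mathrm{CI}_{1-\alpha}
        = \Bigl[\, \hat\theta_h - z_{1-\alpha/2}\,\hat\sigma_h - \hat B_h,\;
                   \hat\theta_h + z_{1-\alpha/2}\,\hat\sigma_h + \hat B_h \Bigr]
    \]
    % Validity follows since |\mathrm{bias}(\hat\theta_h)| \le \hat B_h, and the
    % length shrinks at the optimal rate when \hat B_h = O(\hat\sigma_h).
    ```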