Search for: All records where Creators/Authors contains: "Yang, Yue"


  1. Abstract

    In this paper, we study a predator–prey mite model of Leslie type with a generalized Holling IV functional response. The model is shown to have very rich bifurcation dynamics, including subcritical and supercritical Hopf bifurcations, degenerate Hopf bifurcation, and focus-type and cusp-type degenerate Bogdanov–Takens bifurcations of codimension 3, originating from a nilpotent focus or cusp of codimension 3 that acts as the organizing center for the bifurcation set. Coexistence of multiple steady states, multiple limit cycles, and homoclinic cycles is also found. Notably, the coexistence of two limit cycles is established by investigating generalized Hopf bifurcation and degenerate homoclinic bifurcation, and we also find that two generalized Hopf bifurcation points are connected by a saddle-node bifurcation curve of limit cycles, which indicates the existence of a global regime with two limit cycles. Our work extends some results in the literature.

     
    Free, publicly-accessible full text available May 1, 2025
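For context, a representative form of this model class is sketched below; the exact equations and parameterization analyzed in the paper may differ, so treat this only as an orientation to the terms named in the abstract (Leslie-type predator growth, generalized Holling IV prey capture).

```latex
% Representative Leslie-type predator-prey system with a generalized
% Holling type IV (Monod-Haldane) functional response; x = prey (mite)
% density, y = predator density. Parameter names are illustrative.
\begin{aligned}
  \frac{dx}{dt} &= r x\left(1 - \frac{x}{K}\right) - \frac{m x y}{a x^{2} + b x + 1},\\
  \frac{dy}{dt} &= s y\left(1 - \frac{y}{h x}\right).
\end{aligned}
```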
  2. Learning from Demonstration (LfD) is a powerful method for non-roboticist end-users to teach robots new tasks, enabling them to customize robot behavior. However, modern LfD techniques do not explicitly synthesize safe robot behavior, which limits the deployability of these approaches in the real world. To enforce safety in LfD without relying on experts, we propose a new framework, ShiElding with Control barrier fUnctions in inverse REinforcement learning (SECURE), which learns a customized Control Barrier Function (CBF) from end-users that prevents robots from taking unsafe actions while imposing little interference with task completion. We evaluate SECURE in three sets of experiments. First, we empirically validate that SECURE learns a high-quality CBF from demonstrations and outperforms conventional LfD methods on simulated robotic and autonomous driving tasks, improving safety by up to 100%. Second, we demonstrate that roboticists can leverage SECURE to outperform conventional LfD approaches on a real-world knife-cutting, meal-preparation task by 12.5% in task completion while driving the number of safety violations to zero. Finally, we demonstrate in a user study that non-roboticists can use SECURE to effectively teach the robot safe policies that avoid collisions with the person and prevent coffee from spilling.
    Free, publicly-accessible full text available March 11, 2025
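As an illustration of the shielding idea described above (not the authors' implementation), the sketch below filters a nominal action through a learned barrier function using the standard discrete-time CBF condition h(s') - h(s) >= -alpha * h(s); the function names and the action-search strategy are hypothetical placeholders.

```python
import numpy as np

def cbf_shield(state, nominal_action, h, dynamics, candidate_actions, alpha=0.5):
    """Return the candidate action closest to the nominal one that satisfies
    the discrete-time CBF condition h(s') - h(s) >= -alpha * h(s).

    h(state)          -> float, learned barrier value (>= 0 on the safe set)
    dynamics(s, a)    -> predicted next state (model or simulator)
    candidate_actions -> iterable of candidate actions to search over
    """
    h_s = h(state)
    safe = [a for a in candidate_actions
            if h(dynamics(state, a)) - h_s >= -alpha * h_s]
    if not safe:                       # no certified action found: keep the nominal one
        return nominal_action
    # Minimal interference: pick the safe action closest to the nominal action.
    return min(safe, key=lambda a: np.linalg.norm(np.asarray(a) - np.asarray(nominal_action)))
```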
  3. Free, publicly-accessible full text available January 1, 2025
  4. Concept Bottleneck Models (CBMs) are inherently interpretable models that factor model decisions into human-readable concepts. They allow people to easily understand why a model is failing, a critical feature for high-stakes applications. CBMs require manually specified concepts and often under-perform their black-box counterparts, preventing their broad adoption. We address these shortcomings and are the first to show how to construct high-performance CBMs, without manual concept specification, that reach accuracy similar to black-box models. Our approach, Language Guided Bottlenecks (LaBo), leverages a language model, GPT-3, to define a large space of possible bottlenecks. Given a problem domain, LaBo uses GPT-3 to produce factual sentences about categories to form candidate concepts. LaBo efficiently searches possible bottlenecks through a novel submodular utility that promotes the selection of discriminative and diverse information. Ultimately, GPT-3's sentential concepts can be aligned to images using CLIP to form a bottleneck layer. Experiments demonstrate that LaBo is a highly effective prior for concepts important to visual recognition. In an evaluation on 11 diverse datasets, LaBo bottlenecks excel at few-shot classification: they are 11.7% more accurate than black-box linear probes at 1 shot and comparable with more data. Overall, LaBo demonstrates that inherently interpretable models can be widely applied at similar, or better, performance than black-box approaches.
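The concept-selection step can be illustrated with a plain greedy maximizer of a submodular-style utility that rewards discriminative concepts and penalizes redundancy; this generic objective and all variable names are stand-ins, not LaBo's exact formulation.

```python
import numpy as np

def greedy_select(scores, sim, k, lam=0.5):
    """Greedily pick k concepts under a submodular-style utility.

    scores : (n,) array, how discriminative each candidate concept is
             (e.g., alignment of its text embedding with class images)
    sim    : (n, n) array, pairwise concept similarity in [0, 1]
    lam    : weight of the redundancy penalty (diversity pressure)
    """
    selected, remaining = [], list(range(len(scores)))
    while remaining and len(selected) < k:
        def gain(i):
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            return scores[i] - lam * redundancy   # marginal gain shrinks as the set grows
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected
```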
  5. Entities and events are crucial to natural language reasoning and common in procedural texts. Existing work has focused either exclusively on entity state tracking (e.g., whether a pan is hot) or on event reasoning (e.g., whether one would burn themselves by touching the pan), even though these two tasks are often causally related. We propose CREPE, the first benchmark on causal reasoning of event plausibility and entity states. We show that most language models, including GPT-3, perform close to chance at .35 F1, lagging far behind humans at .87 F1. We boost model performance to .59 F1 by creatively representing events as programming-language code while prompting language models pretrained on code. By injecting the causal relations between entities and events as intermediate reasoning steps in our representation, we further boost the performance to .67 F1. Our findings indicate not only the challenge that CREPE brings for language models, but also the efficacy of code-like prompting combined with chain-of-thought prompting for multi-hop event reasoning.
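A rough illustration of the code-like representation is sketched below; the class and attribute names are invented for illustration and do not reproduce CREPE's actual prompt format.

```python
# A rough sketch of encoding a procedure as code so that a code-pretrained
# language model can reason jointly about entity states and event plausibility.
# The class/attribute names below are invented for illustration only.
procedure_as_code = '''
class Pan:
    temperature = "cold"

# Step 1: Put the pan on the stove and turn on the burner.
Pan.temperature = "hot"      # entity-state change made explicit as an intermediate step

# Query: "One would burn themselves by touching the pan."
# Plausibility given Pan.temperature == "hot":
'''

# This string would be sent as a prompt to a code-pretrained language model,
# which is asked to complete the plausibility judgment (e.g., "likely" / "unlikely").
print(procedure_as_code)
```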
  6. Neural language models encode rich knowledge about entities and their relationships, which can be extracted from their representations using probing. Common properties of nouns (e.g., red strawberries, small ant) are, however, more challenging to extract than other types of knowledge because they are rarely explicitly stated in texts. We hypothesize that this is mainly the case for perceptual properties, which are obvious to the participants in the communication. We propose to extract these properties from images and use them in an ensemble model, in order to complement the information that is extracted from language models. We consider perceptual properties to be more concrete than abstract properties (e.g., interesting, flawless), and we propose to use the adjectives' concreteness score as a lever to calibrate the contribution of each source (text vs. images). We evaluate our ensemble model in a ranking task where the actual properties of a noun need to be ranked higher than non-relevant properties. Our results show that the proposed combination of text and images greatly improves noun property prediction compared to powerful text-based language models.
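One simple reading of the concreteness-based calibration is a weighted blend of the two rankers, sketched below; the weighting scheme, rating scale, and names are assumptions, not the paper's exact formula.

```python
def ensemble_score(text_score, image_score, concreteness, c_min=1.0, c_max=5.0):
    """Blend text- and image-based scores for a (noun, property) pair.

    concreteness : human concreteness rating of the property adjective
                   (assumed here to lie on a 1-5 scale); the more concrete
                   the adjective, the more the image-based score is trusted.
    """
    w = (concreteness - c_min) / (c_max - c_min)   # map rating to [0, 1]
    w = min(max(w, 0.0), 1.0)
    return w * image_score + (1.0 - w) * text_score

# Candidate properties of a noun are then ranked by this blended score.
```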
  7. Jennions, Michael D (Ed.)
    Abstract Communication signals of both human and non-human animals are often interrupted in nature. One advantage of multimodal cues is to maintain the salience of interrupted signals. We studied a frog that naturally can have silent gaps within its call. Using video/audio playbacks, we presented females with interrupted mating calls with or without a simultaneous dynamic (i.e., inflating and deflating) vocal sac and tested whether multisensory cues (noise and/or a dynamic vocal sac) inserted into the gap can compensate for an interrupted call. We found that neither inserting white noise into the silent gap of an interrupted call nor displaying the dynamic vocal sac in that same gap restored the attraction of the call to the level of a complete call. Simultaneously presenting a dynamic vocal sac along with noise in the gap, however, compensated for the interrupted call, making it as attractive as a complete call. Our results demonstrate that the dynamic vocal sac compensates for noise interference. Such novel multisensory integration suggests that multimodal cues can provide insurance against imperfect sender coding in a noisy environment, and the communication benefits to the receiver from multisensory integration may be an important selective force favoring multimodal signal evolution.
  8. Procedures are inherently hierarchical. To “make videos”, one may need to “purchase a camera”, which in turn may require one to “set a budget”. While such hierarchical knowledge is critical for reasoning about complex procedures, most existing work has treated procedures as shallow structures without modeling the parent-child relation. In this work, we attempt to construct an open-domain hierarchical knowledge-base (KB) of procedures based on wikiHow, a website containing more than 110k instructional articles, each documenting the steps to carry out a complex procedure. To this end, we develop a simple and efficient method that links steps (e.g., “purchase a camera”) in an article to other articles with similar goals (e.g., “how to choose a camera”), recursively constructing the KB. Our method significantly outperforms several strong baselines according to automatic evaluation, human judgment, and application to downstream tasks such as instructional video retrieval. 
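A minimal version of the step-to-article linking can be sketched with TF-IDF retrieval; the paper's actual linker is presumably stronger (e.g., neural retrieval), and the threshold and names here are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def link_steps_to_articles(steps, article_titles, threshold=0.3):
    """Link each step (e.g., "purchase a camera") to the article whose goal
    is most similar (e.g., "how to choose a camera"). Returns a list of
    (step, linked_article_or_None) pairs forming one level of the hierarchy."""
    vec = TfidfVectorizer().fit(steps + article_titles)
    step_mat, art_mat = vec.transform(steps), vec.transform(article_titles)
    sims = cosine_similarity(step_mat, art_mat)        # (num_steps, num_articles)
    links = []
    for i, step in enumerate(steps):
        j = sims[i].argmax()
        links.append((step, article_titles[j] if sims[i, j] >= threshold else None))
    return links

# Applying this recursively to the steps of each linked article grows the KB.
```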