

Title: Reactivity and statefulness: Action-based sensors, plans, and necessary state

Typically to a roboticist, a plan is the outcome of other work, a synthesized object that realizes ends defined by some problem; plans qua plans are seldom treated as first-class objects of study. Plans designate functionality: a plan can be viewed as defining a robot’s behavior throughout its execution. This informs and reveals many other aspects of the robot’s design, including: necessary sensors and action choices, history, state, task structure, and how to define progress. Interrogating sets of plans helps in comprehending the ways in which differing executions influence the interrelationships between these various aspects. Revisiting Erdmann’s theory of action-based sensors, a classical approach for characterizing fundamental information requirements, we show how plans (in their role of designating behavior) influence sensing requirements. Using an algorithm for enumerating plans, we examine how some plans for which no action-based sensor exists can be transformed into sets of sensors through the identification and handling of features that preclude the existence of action-based sensors. We are not aware of those obstructing features having been previously identified. Action-based sensors may be treated as standalone reactive plans; we relate them to the set of all possible plans through a lattice structure. This lattice reveals a boundary between plans with action-based sensors and those without. Some plans, specifically those that are not reactive plans and require some notion of internal state, can never have associated action-based sensors. Even so, action-based sensors can serve as a framework to explore and interpret how such plans make use of state.
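The distinction the abstract draws between plans that admit action-based sensors and plans that require internal state can be made concrete with a small sketch. The Python fragment below is not the paper's algorithm; it assumes a simplified plan representation (an action attached to each plan vertex, transitions labeled by observations) and merely checks whether a single observation-to-action map can be read off the plan. The check fails exactly when some observation demands different actions depending on the history that preceded it, which is the kind of feature that precludes a purely reactive, stateless sensor.

```python
# Minimal sketch (not the paper's algorithm): derive an observation -> action
# map from a plan and detect when no single action-based sensor can realize it.
# The plan representation (actions on vertices, observations on edges) is an
# illustrative assumption.
from collections import defaultdict

def action_based_sensor(plan_vertices, plan_edges):
    """plan_vertices: dict vertex -> action prescribed at that vertex.
    plan_edges: iterable of (src, observation, dst) transitions.

    Returns (obs -> action map, {}) if the plan is reactive, i.e. the action
    to execute depends only on the most recent observation; otherwise
    returns (None, conflicting observations)."""
    prescribed = defaultdict(set)
    for src, obs, dst in plan_edges:
        # After receiving `obs` the plan sits at vertex `dst`, so the action
        # taken in response to `obs` is whatever `dst` prescribes.
        prescribed[obs].add(plan_vertices[dst])

    conflicts = {o: a for o, a in prescribed.items() if len(a) > 1}
    if conflicts:
        return None, conflicts          # some internal state is necessary
    return {o: next(iter(a)) for o, a in prescribed.items()}, {}

# Toy plan: observation 'b' demands different actions depending on history,
# so no action-based sensor exists for this plan.
vertices = {0: 'forward', 1: 'left', 2: 'right'}
edges = [(0, 'a', 1), (1, 'b', 2), (0, 'b', 1)]
sensor, conflicts = action_based_sensor(vertices, edges)
print(sensor, conflicts)
```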

 
Award ID(s):
1849249
NSF-PAR ID:
10367462
Author(s) / Creator(s):
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
The International Journal of Robotics Research
ISSN:
0278-3649
Page Range / eLocation ID:
Article No. 027836492210788
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. One important class of applications entails a robot scrutinizing, monitoring, or recording the evolution of an uncertain time-extended process. This sort of situation leads to an interesting family of active perception problems that can be cast as planning problems in which the robot is limited in what it sees and must, thus, choose what to pay attention to. The distinguishing characteristic of this setting is that the robot has influence over what it captures via its sensors, but exercises no causal authority over the process evolving in the world. As such, the robot’s objective is to observe the underlying process and to produce a “chronicle” of occurrent events, subject to a goal specification of the sorts of event sequences that may be of interest. This paper examines variants of such problems in which the robot aims to collect sets of observations to meet a rich specification of their sequential structure. We study this class of problems by modeling a stochastic process via a variant of a hidden Markov model and specify the event sequences of interest as a regular language, developing a vocabulary of “mutators” that enable sophisticated requirements to be expressed. Under different suppositions on the information gleaned about the event model, we formulate and solve different planning problems. The core underlying idea is the construction of a product between the event model and a specification automaton. Using this product, we compute a policy that minimizes the expected number of steps to reach a goal state. We introduce a general algorithm for this problem as well as several more efficient algorithms for important special cases. The paper reports and compares performance metrics by drawing on some small case studies analyzed in depth via simulation. Specifically, we study the effect of the robot’s observation model on the average time required for the robot to record a desired story. We also compare our algorithm with a baseline greedy algorithm, showing that our algorithm outperforms the greedy algorithm in terms of the average time to record a desired story. In addition, experiments show that the algorithms tailored to specialized variants of the problem are rather more efficient than the general algorithm.
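As a rough illustration of the product-and-policy idea described above (not the paper's formulation), the sketch below assumes a memoryless event model in which one event is emitted per step with a fixed probability, so the product with a DFA specification collapses onto the DFA states; value iteration then minimizes the expected number of steps until the recorded chronicle is accepted. The event probabilities, the DFA, and the "attend to one event type per step" observation model are all illustrative assumptions.

```python
# Hedged sketch: the world emits one event per step, the robot chooses which
# single event type to attend to, and a DFA over events encodes the desired
# "chronicle".  Value iteration minimizes expected steps until acceptance.
def min_expected_steps(event_probs, dfa_delta, accept, start):
    """event_probs: dict event -> probability of occurring each step.
    dfa_delta:   dict (dfa_state, event) -> next dfa_state.
    accept:      set of accepting DFA states.
    Returns (value, policy): value[q] is the minimum expected number of steps
    to acceptance from DFA state q; policy[q] is the event type to attend to."""
    states = {q for (q, _) in dfa_delta} | set(dfa_delta.values()) | {start}
    V = {q: 0.0 if q in accept else 1e9 for q in states}
    policy = {}
    for _ in range(10_000):                      # value iteration to convergence
        change = 0.0
        for q in states:
            if q in accept:
                continue
            best, best_a = None, None
            for a, p in event_probs.items():     # attend to event type `a`
                q_next = dfa_delta.get((q, a), q)
                # With prob. p the attended event occurs and the DFA advances;
                # otherwise we stay at q.  The expected cost v satisfies
                #   v = 1 + p*V[q_next] + (1-p)*v   =>   v = 1/p + V[q_next]
                if p > 0:
                    cand = 1.0 / p + V[q_next]
                    if best is None or cand < best:
                        best, best_a = cand, a
            if best is not None:
                change = max(change, abs(V[q] - best))
                V[q], policy[q] = best, best_a
        if change < 1e-9:
            break
    return V, policy

# Toy specification: record event 'a' and then event 'b'.
dfa = {(0, 'a'): 1, (1, 'b'): 2}
V, pi = min_expected_steps({'a': 0.5, 'b': 0.25}, dfa, accept={2}, start=0)
print(V[0], pi)   # expected steps from the start state and the attention policy
```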

     
  2. Robust motion planning entails computing a global motion plan that is safe under all possible uncertainty realizations, be it in the system dynamics, the robot’s initial position, or with respect to external disturbances. Current approaches for robust motion planning either lack theoretical guarantees or make restrictive assumptions on the system dynamics and uncertainty distributions. In this paper, we address these limitations by proposing the robust rapidly-exploring random-tree (Robust-RRT) algorithm, which integrates forward reachability analysis directly into sampling-based control trajectory synthesis. We prove that Robust-RRT is probabilistically complete (PC) for nonlinear Lipschitz continuous dynamical systems with bounded uncertainty. In other words, Robust-RRT eventually finds a robust motion plan that is feasible under all possible uncertainty realizations, assuming such a plan exists. Because the analysis explicitly considers the time evolution of reachable sets along control trajectories, probabilistic completeness applies even to unstabilizable systems that admit only short-horizon feasible plans. To the best of our knowledge, this is the most general PC proof for robust sampling-based motion planning, in terms of the types of uncertainties and dynamical systems it can handle. Considering that an exact computation of reachable sets can be computationally expensive for some dynamical systems, we incorporate sampling-based reachability analysis into Robust-RRT and demonstrate our robust planner on nonlinear, underactuated, and hybrid systems.
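A minimal sketch of the underlying idea, not the authors' Robust-RRT implementation: each tree node carries a conservative radius bounding the states reachable under bounded disturbance, and an extension is kept only if the whole inflated ball stays collision-free. The 2-D single-integrator dynamics, disturbance bound, step size, and disc obstacle below are illustrative assumptions.

```python
# Toy "robust RRT" over a 2-D point robot: nodes store (state, uncertainty
# radius); the radius grows with each extension and the obstacle check is
# performed against the inflated ball rather than the nominal state alone.
import math, random

STEP, W, GOAL = 0.5, 0.05, (9.0, 9.0)
OBSTACLE = ((3.0, 3.0), 2.0)            # (center, radius) of a disc obstacle

def collision_free(p, margin):
    (cx, cy), r = OBSTACLE
    return math.dist(p, (cx, cy)) > r + margin

def steer(p, q):
    d = math.dist(p, q)
    if d <= STEP:
        return q
    return (p[0] + STEP * (q[0] - p[0]) / d, p[1] + STEP * (q[1] - p[1]) / d)

def robust_rrt(start, iters=5000):
    nodes = [(start, 0.0)]              # (state, reachable-set radius)
    parent = {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        i, (near, rad) = min(enumerate(nodes),
                             key=lambda n: math.dist(n[1][0], sample))
        new = steer(near, sample)
        new_rad = rad + W * STEP        # disturbance inflates the reachable set
        # Keep the extension only if the whole inflated ball avoids the obstacle.
        if collision_free(new, new_rad):
            nodes.append((new, new_rad))
            parent[len(nodes) - 1] = i
            if math.dist(new, GOAL) < 0.5:
                return nodes, parent, len(nodes) - 1
    return nodes, parent, None

random.seed(0)
nodes, parent, goal_idx = robust_rrt((0.5, 0.5))
print("tree size:", len(nodes), "goal node index:", goal_idx)
```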
  3. Intelligent interactive narrative systems coordinate a cast of non-player characters to make the overall story experience meaningful for the player. Narrative generation involves a tradeoff between plot-structure requirements and quality of character behavior, as well as computational efficiency. We study this tradeoff using the example of benchmark problems for narrative planning algorithms. A typical narrative planning problem calls for a sequence of actions that leads to an overall plot goal being met, while also requiring each action to respect constraints that create the appearance of character autonomy. We consider simplified solution definitions that enforce only plot requirements or only character requirements, and we measure how often each of these definitions leads to a solution that happens to meet both types of requirements—i.e., the density with which narrative plans occur among plot- or character-requirement-satisfying sequences. We then investigate whether solution densities can guide the selection of narrative planning algorithms. We compare the performance of two search strategies: one that satisfies plot requirements first and checks character requirements afterward, and one that continuously verifies character requirements. Our results show that comparing solution densities does not by itself predict which of these search strategies will be more efficient in terms of search nodes visited, suggesting that other important factors exist. We discuss what some of these factors could be. Our work opens further investigation into characterizing narrative planning algorithms and how they interact with specific domains. The results also highlight the diversity and difficulty of solving narrative planning problems. 
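The two search strategies can be illustrated on a toy domain (not one of the paper's benchmarks): plans are short action sequences, the plot requirement asks for a climactic action to occur, and a character requirement must hold at every step. Strategy A searches against the plot requirement and verifies character requirements only on candidate solutions; strategy B prunes character-inconsistent prefixes during search. The actions, requirements, and node counts below are made-up illustrations.

```python
# Toy comparison of "plot first, verify after" vs. "verify continuously".
from collections import deque

ACTIONS = ['a', 'b', 'g']

def plot_satisfied(seq):          # plot goal: the climactic action 'g' occurs
    return 'g' in seq

def character_consistent(seq):    # characters only accept 'g' after doing 'a'
    return all(c != 'g' or 'a' in seq[:i] for i, c in enumerate(seq))

def search(check_during, max_len=4):
    visited, queue = 0, deque([()])
    while queue:
        seq = queue.popleft()
        visited += 1
        if plot_satisfied(seq) and character_consistent(seq):
            return ''.join(seq), visited
        if len(seq) == max_len:
            continue
        for act in ACTIONS:
            child = seq + (act,)
            if check_during and not character_consistent(child):
                continue          # strategy B prunes inconsistent prefixes
            queue.append(child)
    return None, visited

for name, during in [('A: plot first, verify after', False),
                     ('B: verify continuously', True)]:
    plan, nodes = search(during)
    print(f'{name}: plan={plan!r}, nodes visited={nodes}')
```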
  4. In an era of ubiquitous digital interfaces and systems, technology and design practitioners must address a range of ethical dilemmas surrounding the use of persuasive design techniques and how to balance shareholder and end-user needs [2], [5]. Similarly, increasing user concern about unethical products and services [1] parallels a rise in regulatory interest in enforcing ethical design and engineering practices among technology practitioners, surfacing a need for further support. Although various scholars have developed frameworks and methods to support practitioners in navigating these challenging contexts [3], [4], there is often a lack of resonance between these generic methods and the situated ethical complexities facing practitioners in their everyday work. In this project, we designed and implemented a three-hour co-creation workshop with designers, engineers, and technologists to support them in developing bespoke ethics-focused action plans that are resonant with the ethical challenges they face in their everyday practice. In developing the co-creation session, we sought to answer the following questions to empower practitioners: • How can we support practitioners in developing action plans to address ethical dilemmas in their everyday work? and • How can we empower designers to design more responsibly? Building on these questions as a guide, we employed Miro, a digital whiteboard platform, to develop the co-creation experience. The final co-creation experience was designed with the visual metaphor of a “house” with four floors and multiple rooms, allowing participants to complete different tasks in each room, all aimed towards the overall goal of developing participants' own personalized action plans in an interactive and collaborative way. We invited participants to share their stories and ethical dilemmas to support their creation and iteration of a personal action plan that they could later use in their everyday work context. Across the six co-creation sessions we conducted, participants (n=26) gained a better understanding of the drivers for ethical action in the context of their everyday work and developed, through the co-creation workshop, an action plan that enabled them to constructively engage with ethical challenges in their professional context. At the end of the session, participants were provided the action plans they created so that they could use them in their practice. Furthermore, the co-design workshops were designed such that practitioners could take them away (the house and session guide) and run them independently at their organization or in another context to support their objectives. We describe the building and the activities conducted on each floor below and will provide a pictorial representation of the house with the different floors, rooms, and activities on the poster presentation. a) First floor-Welcome, Introduction, Reflection: The first floor of the virtual house was designed to allow participants to introduce themselves and to reflect on and discuss the ethical concerns they wished to resolve during the session. b) Second floor-Shopping for ethics-focused methods: The second floor of the virtual house was designed as a “shopping” space where participants selected from a range of ethics-focused building blocks that they wished to potentially adapt or incorporate into their own action plan. They were also allowed to introduce their own methods or tools.
c) Third floor-DIY Workspace: The third floor was designed as a DIY workspace where participants worked in small groups to develop their own bespoke action plans from the building blocks they had gathered on their shopping trip, along with any other components they wished to use. The goal here was to support participants in developing methods and action plans resonant with their situated ethical complexities. d) Fourth floor-Gallery Space: The fourth floor was designed as a gallery where participants shared and discussed their action plans with other participants and identified how their action plans could impact their future practice or educational experiences. Participants were also given an opportunity at this stage to reflect on their experience of participating in the session and to provide feedback on opportunities for future improvement.
  5. The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from datasets of the size that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded the development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein’s function based on its sequence, we still do not understand how to accurately predict an organism’s behavior based on neural activity. Here, we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small datasets, and accounts for dependencies in the occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for the analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
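For orientation only, and emphatically not the uBIA method itself (which is Bayesian and accounts for dependencies among codewords), the toy sketch below compares observed spike-word counts against an independent-firing null model and flags strongly over- or under-represented words. The synthetic data, firing rates, and threshold are made up.

```python
# Naive codeword-detection illustration (NOT uBIA): count binary spike words
# and compare against the prediction of independent firing across bins.
import itertools, math, random

random.seed(1)
T, B = 500, 3                                 # trials, bins per word
rate = [0.2, 0.5, 0.3]                        # per-bin firing probabilities
# Synthetic data with an injected correlation: bin 2 tends to follow bin 0.
trials = []
for _ in range(T):
    w = [1 if random.random() < rate[b] else 0 for b in range(B)]
    if w[0] and random.random() < 0.7:
        w[2] = 1
    trials.append(tuple(w))

counts = {w: 0 for w in itertools.product((0, 1), repeat=B)}
for w in trials:
    counts[w] += 1

print("word  observed  expected(indep)  log-ratio")
for w, n in sorted(counts.items()):
    p = math.prod(rate[b] if w[b] else 1 - rate[b] for b in range(B))
    expected = p * T
    ratio = math.log((n + 1) / (expected + 1))   # +1 smoothing for empty words
    flag = "  <-- candidate codeword" if abs(ratio) > 0.5 else ""
    print(f"{w}  {n:8d}  {expected:15.1f}  {ratio:9.2f}{flag}")
```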