
Title: Rubik Tables and object rearrangement

A great number of robotics applications demand the rearrangement of many mobile objects, for example, organizing products on store shelves, shuffling containers at shipping ports, and reconfiguring fleets of mobile robots. To boost the efficiency and throughput of systems designed to solve these rearrangement problems, it is essential to minimize the number of atomic operations involved, for example, the pick-n-places of individual objects. This optimization task is challenging, however, due to the complex interdependencies between the objects, especially when they are tightly packed together. In this work, to tackle these challenges, we have developed a novel algorithmic tool, called Rubik Tables, that provides a clean abstraction of object rearrangement problems as the proxy problem of shuffling items stored in a table or lattice. In its basic form, a Rubik Table is an n × n table containing n^2 items. We show that the reconfiguration of items in such a Rubik Table can be achieved using at most n column and n row shuffles in the partially labeled setting, where each column (resp., row) shuffle may arbitrarily permute the items stored in a column (resp., row) of the table. When items are fully distinguishable, an additional n shuffles are needed. Rubik Tables admit many generalizations, for example, adding a depth dimension or extending to higher dimensions. Using Rubik Table results, we have designed the first constant-factor optimal algorithm for stack rearrangement problems, in which items are stored in stacks, accessible only from the top. We show that, for nd items stored in n stacks of depth d each, using one empty stack as the swap space, O(nd) stack pop-push operations are sufficient for an arbitrary reconfiguration of the stacks when d ≤ n^m for arbitrary fixed m > 0. Rubik Table results also allow the development of constant-factor optimal solutions for multi-robot motion planning problems under extreme robot density. These algorithms based on Rubik Table results run in low-polynomial time.
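
As a minimal illustration of the abstraction (not the paper's solution algorithm), the sketch below models an n × n Rubik Table as a nested list and implements the two shuffle primitives, where a single row (or column) shuffle may arbitrarily permute the items currently in that row (or column). The function names and the tiny demo are invented for this example.

# Minimal sketch of the Rubik Table abstraction: an n x n table plus the two
# shuffle primitives. This is illustrative only; it does not implement the
# paper's n-row/n-column rearrangement algorithm.

def row_shuffle(table, r, permutation):
    """Reorder the items of row r; permutation[i] gives the source index of the new i-th item."""
    table[r] = [table[r][j] for j in permutation]

def column_shuffle(table, c, permutation):
    """Reorder the items of column c; permutation[i] gives the source index of the new i-th item."""
    column = [table[i][c] for i in range(len(table))]
    for i, src in enumerate(permutation):
        table[i][c] = column[src]

if __name__ == "__main__":
    n = 3
    # Items 0 .. n^2 - 1 laid out row-major.
    table = [[r * n + c for c in range(n)] for r in range(n)]
    column_shuffle(table, 0, [2, 0, 1])   # cyclically shift column 0
    row_shuffle(table, 1, [1, 0, 2])      # swap the first two items of row 1
    for row in table:
        print(row)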

Authors:
Publication Date:
NSF-PAR ID:
10363407
Journal Name:
The International Journal of Robotics Research
Page Range or eLocation-ID:
Article No. 027836492110598
ISSN:
0278-3649
Publisher:
SAGE Publications
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract
    Binder is a publicly accessible online service for executing interactive notebooks based on Git repositories. Binder dynamically builds and deploys containers following a recipe stored in the repository, then gives the user a browser-based notebook interface. The Binder group periodically releases a log of container launches from the public Binder service. Archives of launch records are available here. These records do not include identifiable information like IP addresses, but do give the source repo being launched along with some other metadata. The main content of this dataset is in the binder.sqlite file. This SQLite database includes launch records from 2018-11-03 to 2021-06-06 in the events table, which has the following schema.

    CREATE TABLE events(
      version INTEGER,
      timestamp TEXT,
      provider TEXT,
      spec TEXT,
      origin TEXT,
      ref TEXT,
      guessed_ref TEXT
    );
    CREATE INDEX idx_timestamp ON events(timestamp);

    - version indicates the version of the record as assigned by Binder. The origin field became available with version 3, and the ref field with version 4. Older records where this information was not recorded will have the corresponding fields set to null.
    - timestamp is the ISO timestamp of the launch.
    - provider gives the type of source repo being launched ("GitHub" is by far the most common). The rest of the …
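    As a hedged illustration of how the released data might be consumed, the sketch below runs a small aggregate query against the events table using the schema quoted above; the file path and the particular aggregation are assumptions, not part of the dataset documentation.

    # Illustrative query against the released binder.sqlite, using the `events`
    # schema shown above. The path and the example aggregation are assumptions.
    import sqlite3

    conn = sqlite3.connect("binder.sqlite")
    cur = conn.execute(
        """
        SELECT provider, COUNT(*) AS launches
        FROM events
        WHERE timestamp BETWEEN '2020-01-01' AND '2021-01-01'
        GROUP BY provider
        ORDER BY launches DESC
        """
    )
    for provider, launches in cur:
        print(provider, launches)
    conn.close()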
  2. The Skyhook Data Management project (SkyhookDM.com) at the Center for Research in Open Source Software (cross.ucsc.edu) at UC Santa Cruz implements customized extensions through Ceph's object class interface that enable offloading database operations to the storage system. In our previous Vault '19 talk, we showed how SkyhookDM can transparently scale out databases. The SkyhookDM Ceph extensions are an example of our 'programmable storage' research efforts at UCSC, and can be accessed through commonly available external/foreign table database interfaces. Utilizing fast in-memory serialization libraries such as Google Flatbuffers and Apache Arrow, SkyhookDM currently implements common database functions such as SELECT, PROJECT, AGGREGATE, and indexing inside Ceph, along with lower-level data manipulations such as transforming data from row to column formats on RADOS servers. In this talk, we will present three of our latest developments on the SkyhookDM project since Vault '19. First, SkyhookDM can be used to also offload operations of access libraries that support plugins for backends, such as HDF5 and its Virtual Object Layer. Second, in addition to the row-oriented data format using Google's Flatbuffers, we have added support for column-oriented data formats using the Apache Arrow library within our Ceph extensions. Third, we added dynamic switching between row and column data formats within Ceph objects, a first step towards physical design management in storage systems, similar to physical design tuning in database systems.
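    To make the row-to-column transformation mentioned above concrete, here is a small client-side sketch using the Apache Arrow Python bindings (pyarrow); it is not SkyhookDM's storage-side API, and the column names and values are invented.

    # Client-side sketch of a row-to-column pivot with Apache Arrow (pyarrow).
    # Not SkyhookDM's API; records and field names are made up for illustration.
    import pyarrow as pa

    rows = [
        {"id": 1, "name": "alpha", "qty": 10},
        {"id": 2, "name": "beta",  "qty": 7},
    ]
    # Pivot the row-oriented records into Arrow's column-oriented layout.
    columns = {key: [row[key] for row in rows] for key in rows[0]}
    table = pa.table(columns)
    print(table.schema)
    print(table.column("qty"))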
  3. Hash tables are a ubiquitous class of dictionary data structures. However, standard hash table implementations do not translate well into the external memory model, because they do not incorporate locality for insertions. Iacono and Pătraşcu established an update/query tradeoff curve for external-memory hash tables: a hash table that performs insertions in O(λ/B) amortized IOs requires Ω(log_λ N) expected IOs for queries, where N is the number of items that can be stored in the data structure, B is the size of a memory transfer, M is the size of memory, and λ is a tuning parameter. They provide a complicated hashing data structure, which we call the IP hash table, that meets this curve for λ that is Ω(log log M + log_M N). In this paper, we present a simpler external-memory hash table, the Bundle of Arrays Hash Table (BOA), that is optimal for a narrower range of λ. The simplicity of BOAs allows them to be readily modified to achieve the following results: A new external-memory data structure, the Bundle of Trees Hash Table (BOT), that matches the performance of the IP hash table, while retaining some of the simplicity of the BOAs. The Cache-Oblivious Bundle of Trees Hash Table (COBOT), the first cache-oblivious hash table. This data structure matches the optimality of BOTs and IP hash tables over the same range of λ.
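    A back-of-envelope sketch of the quoted tradeoff, with all hidden constants assumed to be 1: inserts cost roughly λ/B amortized IOs while queries need roughly log_λ N IOs, so increasing λ trades insert cost for query cost. The parameter values below are arbitrary.

    # Numeric illustration of the update/query tradeoff curve (constants assumed 1).
    import math

    N = 10**9      # items stored
    B = 2**10      # items per memory transfer
    for lam in (2, 16, 256, 4096):
        insert_ios = lam / B            # ~ amortized IOs per insertion
        query_ios = math.log(N, lam)    # ~ expected IOs per query
        print(f"lambda={lam:5d}  insert~{insert_ios:.4f} IOs  query~{query_ios:.2f} IOs")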
  4. TaxonWorks is an integrated web-based application for practicing taxonomists and biodiversity specialists. It is focused on promoting collaboration between researchers and developers. TaxonWorks has a modular structure that enables various components of the application to target specific needs and requirements of different groups of users. Specific areas of interest may include nomenclature-related tasks (Yoder and Dmitriev 2021), designed to help assemble and validate scientific name checklists of a target group of organisms, and collection management tasks, including interfaces to create, filter, and edit collecting events, collection objects, and loans. This presentation focuses on matrix-related tools integrated into TaxonWorks. A matrix, which could either be used for phylogenetic analysis or to build an identification key, is structured as a table where columns represent numerous characters that could be used to describe a set of entities, taxa or specimens (presented as rows of the table). Each cell of the table may contain observations for specific character/entity combinations. TaxonWorks does not generate a table for each particular matrix; all observations are stored as graphs. This structure allows building a matrix of unlimited size as well as reusing individual observations in multiple matrices. For matrix columns, TaxonWorks supports a variety of different kinds of characters or descriptors: qualitative, presence/absence, quantitative, sample, gene, free text, and media. Each character may have specific properties; for example, a qualitative descriptor may have numerous character states, and a quantitative descriptor may have a measurement unit defined. For an entity in a matrix row, TaxonWorks supports either collection objects (specimens) or taxa as Operational Taxonomic Units (OTU). OTUs could either be linked to nomenclature or be stand-alone entities (e.g., representing undescribed species). The matrix, once built, could serve several purposes. A matrix based on qualitative and quantitative characters could be used to build an interactive key (Fig. 1), construct standardized natural language descriptions for each entity, and determine a diagnosis (a minimal set of characters that separates one entity from all others). It could also be exported and used for phylogenetic analysis or to build an interactive key in an external application. TaxonWorks supports export files in several formats, including Nexus, TNT, and NeXML. Application Programming Interfaces (API) are also available. A matrix based on media descriptors could be used as a pictorial identification tool (Fig. 2).
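    A conceptual sketch of the observation-store idea described above (not TaxonWorks' actual data model): observations are kept as individual descriptor/OTU records and a matrix view is materialized on demand, so the same observation can be reused in multiple matrices. All labels and values are invented.

    # Conceptual sketch only: a shared store of (descriptor, OTU) -> value observations,
    # from which any matrix view can be materialized without duplicating the data.
    observations = {
        ("body length (mm)", "Species A"): 4.2,
        ("body length (mm)", "Species B"): 5.1,
        ("wing spots",       "Species A"): "present",
    }

    def materialize_matrix(descriptors, otus):
        """Build a row-per-OTU view from the shared store; missing cells stay None."""
        return {
            otu: {d: observations.get((d, otu)) for d in descriptors}
            for otu in otus
        }

    print(materialize_matrix(["body length (mm)", "wing spots"], ["Species A", "Species B"]))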
  5. Spreadsheet tables are often labeled, and these labels effectively constitute types for the data in the table. In such cases tables can be considered to be built from typed data where the placement of values within the table is controlled by the types used for rows and columns. We present a new approach to the transformations of spreadsheet tables that is based on transformations of row and column types. We illustrate the basic idea of type-based table construction and transformation and lay out a series of research questions that should be addressed in future work.
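    A minimal sketch of the typed-table idea, under the assumption that cells are addressed by (row type, column type) rather than by position, so a table transformation can be written as a map over the types; the labels and the rename map are invented.

    # Minimal sketch: cells keyed by (row label, column label), and a transformation
    # expressed over column types rather than cell positions. Labels are made up.
    data = {
        ("2023", "Revenue"): 120, ("2023", "Costs"): 80,
        ("2024", "Revenue"): 150, ("2024", "Costs"): 90,
    }

    def rename_columns(table, mapping):
        """Apply a column-type transformation; labels not in the mapping pass through."""
        return {(r, mapping.get(c, c)): v for (r, c), v in table.items()}

    print(rename_columns(data, {"Costs": "Expenses"}))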