

Title: Gravity Model of Passenger and Mobility Fleet Origin–Destination Patterns with Partially Observed Service Data

Mobility-as-a-service systems are becoming increasingly important in the context of smart cities, but public agencies face challenges in obtaining data from private operators. Typically, only limited mobility data are provided to city agencies, which is not enough to support their decision-making. This study proposes an entropy-maximizing gravity model to predict origin–destination patterns of both passengers and mobility fleets using only partial operator data. An iterative balancing algorithm is proposed to efficiently reach the entropy-maximization state. Depending on which trip length distribution data are available, two calibration applications are discussed and validated with a small-scale numerical example. Tests were also conducted to verify the applicability of the proposed model and algorithm to large-scale real data from Chicago transportation network companies. Both shared-ride and single-ride trips were forecast with the calibrated model, with single-ride predictions achieving higher accuracy. The proposed solution and calibration algorithms are also efficient enough to handle large scenarios. Additional analyses of the north and south sub-areas of Chicago revealed different travel patterns in the two sub-areas.
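In its doubly constrained form, the iterative balancing step for an entropy-maximizing gravity model is the classical Furness/IPF procedure: alternately rescale row and column factors until origin and destination totals are matched. A minimal sketch (illustrative only, not the paper's exact formulation; the exponential deterrence function and all parameter names are assumptions):

```python
import numpy as np

def furness_balance(O, D, cost, beta, tol=1e-10, max_iter=1000):
    """Doubly constrained gravity model solved by iterative balancing.

    O: origin trip totals, D: destination trip totals (sums must match),
    cost: OD travel cost matrix, beta: deterrence parameter.
    Returns T with T[i, j] = a[i] * O[i] * b[j] * D[j] * f[i, j].
    """
    f = np.exp(-beta * cost)            # exponential deterrence function
    b = np.ones(len(D))
    for _ in range(max_iter):
        a = 1.0 / (f @ (b * D))         # enforce row (origin) totals
        b_new = 1.0 / (f.T @ (a * O))   # enforce column (destination) totals
        if np.max(np.abs(b_new - b)) < tol:
            b = b_new
            break
        b = b_new
    return np.outer(a * O, b * D) * f   # predicted OD trip matrix
```

At convergence the row sums of T reproduce the origin totals and the column sums reproduce the destination totals, which is the entropy-maximizing distribution under those constraints.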

Award ID(s):
1652735
NSF-PAR ID:
10281680
Author(s) / Creator(s):
 ;  
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Transportation Research Record: Journal of the Transportation Research Board
Volume:
2675
Issue:
6
ISSN:
0361-1981
Page Range / eLocation ID:
p. 235-253
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ride-sourcing services play an increasingly important role in meeting mobility needs in many metropolitan areas. Yet, aside from delivering passengers from their origins to destinations, ride-sourcing vehicles generate a significant number of vacant trips from the end of one customer delivery trip to the start of the next. These vacant trips create additional traffic demand and may worsen traffic conditions in urban networks. Capturing the congestion effect of these vacant trips poses a great challenge to the modeling practice of transportation planning agencies. With ride-sourcing services, vehicular trips are the outcome of the interactions between service providers and passengers, a missing ingredient in the current traffic assignment methodology. In this paper, we enhance the methodology by explicitly modeling those vacant trips, which include cruising for customers and deadheading to pick them up. Because of the similarity between taxi and ride-sourcing services, we first extend previous taxi network models to construct a base model, which assumes intranode matching between customers and idle ride-sourcing vehicles and thus only considers cruising vacant trips. Considering the spatial matching among multiple zones commonly practiced by ride-sourcing platforms, we further enhance the base model by encapsulating internode matching and considering both cruising and deadheading vacant trips. A large set of empirical data from Didi Chuxing is applied to validate the proposed enhancement for internode matching. The extended model describes the equilibrium state that results from the interactions among background regular traffic and occupied, idle, and deadheading ride-sourcing vehicles. A solution algorithm is further proposed to solve the enhanced model effectively. Numerical examples are presented to demonstrate the model and solution algorithm. 
Although this study focuses on ride-sourcing services, the proposed modeling framework can be adapted to model other types of shared use mobility services. 
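To make the internode-matching idea concrete (idle vehicles in one zone serving customers in another, which is what generates deadheading trips), here is a deliberately simplified greedy zone-matching sketch. The paper's equilibrium model is far richer; the greedy nearest-zone-pair rule and all names here are assumptions for illustration only:

```python
def match_vehicles(idle, waiting, dist, max_radius):
    """Greedy internode matching: pair idle vehicles with waiting
    customers across zones, cheapest zone-pair first.

    idle, waiting: dict zone -> count; dist: dict (veh_zone, cust_zone)
    -> distance. Matches with veh_zone != cust_zone imply a deadheading
    trip; same-zone matches involve only cruising.
    """
    idle = dict(idle)
    waiting = dict(waiting)
    matches = []  # list of (vehicle_zone, customer_zone, n_matched)
    for i, j in sorted(dist, key=dist.get):      # nearest pairs first
        if dist[(i, j)] > max_radius:            # outside matching radius
            break
        n = min(idle.get(i, 0), waiting.get(j, 0))
        if n > 0:
            matches.append((i, j, n))
            idle[i] -= n
            waiting[j] -= n
    return matches
```

With two zones A and B, a surplus idle vehicle in A ends up matched to a waiting customer in B, producing one deadheading trip from A to B on top of the two intranode (cruising-only) matches.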
  2. In this paper, we consider a setting inspired by spatial crowdsourcing platforms, where both workers and tasks arrive at different times, and each worker-task assignment yields a given reward. The key challenge is to address the uncertainty in the stochastic arrivals of both workers and tasks. In this work, we consider a ubiquitous scenario where the arrival patterns of worker “types” and task “types” are not erratic but can be predicted from historical data. Specifically, we consider a finite time horizon T and assume that in each time-step the arrival of a worker and a task can be seen as an independent sample from two (different) distributions. Our model, called "Online Task Assignment with Two-Sided Arrival" (OTA-TSA), is a significant generalization of the classical online task-assignment problem, in which all tasks are statically available. For the general case of OTA-TSA, we present an optimal non-adaptive algorithm (NADAP), which achieves a competitive ratio (CR) of at least 0.295. For a special case of OTA-TSA in which the reward depends only on the worker type, we present two adaptive algorithms, which achieve CRs of at least 0.343 and 0.355, respectively. On the hardness side, we show that (1) no non-adaptive algorithm can achieve a CR larger than that of NADAP, establishing the optimality of NADAP among all non-adaptive algorithms; and (2) no (adaptive) algorithm can achieve a CR better than 0.581 (unconditionally) or 0.423 (conditionally on the benchmark linear program). All aforementioned negative results apply even to unweighted OTA-TSA, where every assignment yields a uniform reward. At the heart of our analysis is a new technical tool, called the "two-stage birth-death process", a refined notion of the classical birth-death process, which may be of independent interest. 
Finally, we perform extensive numerical experiments on a real-world ride-share dataset collected in Chicago and a synthetic dataset, and results demonstrate the effectiveness of our proposed algorithms in practice. 
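As a toy illustration of the two-sided arrival setting (not NADAP itself, whose assignment probabilities come from a benchmark linear program), the following sketch simulates i.i.d. per-step worker and task arrivals and a non-adaptive policy that assigns using fixed, type-only probabilities. The distributions, the FIFO carry-over of unmatched workers, and all names are assumptions:

```python
import random

def simulate_nonadaptive(T, worker_dist, task_dist, reward, accept_prob, seed=0):
    """Simulate T time-steps of two-sided arrivals.

    worker_dist, task_dist: dict type -> per-step arrival probability.
    reward[(w, t)]: reward of assigning worker type w to task type t.
    accept_prob[(w, t)]: fixed assignment probability -- a non-adaptive
    policy depends only on types, never on the realized history.
    """
    rng = random.Random(seed)
    total = 0.0
    waiting = []                                  # unmatched workers, FIFO
    for _ in range(T):
        w = rng.choices(list(worker_dist), weights=list(worker_dist.values()))[0]
        t = rng.choices(list(task_dist), weights=list(task_dist.values()))[0]
        waiting.append(w)
        for k, w_k in enumerate(waiting):         # try waiting workers in order
            if rng.random() < accept_prob.get((w_k, t), 0.0):
                total += reward[(w_k, t)]
                waiting.pop(k)                    # worker leaves once assigned
                break                             # each task assigned at most once
    return total
```

With a single worker type, a single task type, and acceptance probability 1, every step yields one assignment, so the total reward is T times the per-pair reward.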
  3. Abstract Background

    Spectral CT material decomposition provides quantitative information but is challenged by the instability of the inversion into basis materials. We have previously proposed the constrained One‐Step Spectral CT Image Reconstruction (cOSSCIR) algorithm to stabilize the material decomposition inversion by directly estimating basis material images from spectral CT data. cOSSCIR was previously investigated on phantom data.

    Purpose

    This study investigates the performance of cOSSCIR using head CT datasets acquired on a clinical photon‐counting CT (PCCT) prototype. This is the first investigation of cOSSCIR for large‐scale, anatomically complex, clinical PCCT data. The cOSSCIR decomposition is preceded by a spectrum estimation and nonlinear counts correction calibration step to address nonideal detector effects.

    Methods

    Head CT data were acquired on an early prototype clinical PCCT system using an edge‐on silicon detector with eight energy bins. Calibration data of a step wedge phantom were also acquired and used to train a spectral model to account for the source spectrum and detector spectral response, and also to train a nonlinear counts correction model to account for pulse pileup effects. The cOSSCIR algorithm optimized the bone and adipose basis images directly from the photon counts data, while placing a grouped total variation (TV) constraint on the basis images. For comparison, basis images were also reconstructed by a two‐step projection‐domain approach of Maximum Likelihood Estimation (MLE) for decomposing basis sinograms, followed by filtered backprojection (MLE + FBP) or a TV minimization algorithm (MLE + TVmin) to reconstruct basis images. We hypothesize that the cOSSCIR approach will provide a more stable inversion into basis images compared to two‐step approaches. To investigate this hypothesis, the noise standard deviation in bone and soft‐tissue regions of interest (ROIs) in the reconstructed images was compared between cOSSCIR and the two‐step methods for a range of regularization constraint settings.
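For intuition, the projection-domain step of a two-step method reduces, in an idealized linear model, to inverting a small system per ray: the negative log of the normalized counts in each energy bin equals a weighted sum of the basis-material path lengths. A toy sketch with made-up effective attenuation coefficients (this is not cOSSCIR, and it ignores spectral nonlinearity, pileup, and noise):

```python
import numpy as np

# Effective attenuation of each basis material in each energy bin
# (illustrative made-up values, units 1/cm): rows = bins, cols = materials.
mu = np.array([[0.50, 0.20],    # bin 1: [bone, adipose]
               [0.30, 0.18]])   # bin 2: [bone, adipose]

def decompose(counts, I0):
    """Invert -log(counts / I0) = mu @ A for basis path lengths A (cm)."""
    log_data = -np.log(np.asarray(counts) / np.asarray(I0))
    return np.linalg.solve(mu, log_data)

# Forward-simulate one ray through 2 cm of bone and 5 cm of adipose:
A_true = np.array([2.0, 5.0])
I0 = np.array([1e5, 1e5])
counts = I0 * np.exp(-mu @ A_true)
print(decompose(counts, I0))   # recovers approximately [2.0, 5.0]
```

The instability the abstract refers to arises because this inversion is ill-conditioned for realistic spectra and noisy counts, which is what motivates regularized one-step approaches like cOSSCIR.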

    Results

    cOSSCIR reduced the noise standard deviation in the basis images by a factor of two to six compared to that of MLE + TVmin, when both algorithms were constrained to produce images with the same TV. The cOSSCIR images demonstrated qualitatively improved spatial resolution and depiction of fine anatomical detail. The MLE + TVmin algorithm resulted in lower noise standard deviation than cOSSCIR for the virtual monoenergetic images (VMIs) at higher energy levels and constraint settings, while the cOSSCIR VMIs resulted in lower noise standard deviation at lower energy levels and overall higher qualitative spatial resolution. There were no statistically significant differences in the mean values within the bone region of images reconstructed by the studied algorithms. There were statistically significant differences in the mean values within the soft‐tissue region of the reconstructed images, with cOSSCIR producing mean values closer to the expected values.

    Conclusions

    The cOSSCIR algorithm, combined with our previously proposed spectral model estimation and nonlinear counts correction method, successfully estimated bone and adipose basis images from high resolution, large‐scale patient data from a clinical PCCT prototype. The cOSSCIR basis images were able to depict fine anatomical details with a factor of two to six reduction in noise standard deviation compared to that of the MLE + TVmin two‐step approach.

     
  4. Urban dispersal events occur when an unexpectedly large number of people leave an area in a relatively short period of time. It is beneficial for city authorities, such as law enforcement and city management, to have advance knowledge of such events, as it can help them mitigate safety risks and handle important challenges such as managing traffic. Predicting dispersal events is also beneficial to taxi drivers and/or ride-sharing services, as it helps them respond to unexpected demand and gain a competitive advantage. Large urban datasets, such as detailed trip records and point of interest (POI) data, make such predictions achievable. The related literature has mainly focused on taxi demand prediction: the pattern of demand was assumed to be repetitive, and proposed methods aimed at capturing those patterns. However, dispersal events are, by definition, violations of those patterns and are, understandably, missed by the methods in the literature. We proposed a different approach in our prior work [32]. We showed that dispersal events can be predicted by learning the complex patterns of arrival and other features that precede them in time. We proposed a survival analysis formulation of this problem and a two-stage framework (DILSA), in which a deep learning model predicts the survival function at each point in time in the future. We used that prediction to determine the time of the dispersal event in the future, or its non-occurrence. However, DILSA is subject to a few limitations. First, based on evidence from the data, mobility patterns can vary through time at a given location, and DILSA does not distinguish between different mobility patterns through time. Second, mobility patterns also differ across locations, and DILSA cannot directly distinguish between locations based on their mobility patterns. 
In this article, we address these limitations by proposing a method to capture the interaction between POIs and mobility patterns and we create vector representations of locations based on their mobility patterns. We call our new method DILSA+. We conduct extensive case studies and experiments on the NYC Yellow taxi dataset from 2014 to 2016. Results show that DILSA+ can predict events in the next 5 hours with an F1-score of 0.66. It is significantly better than DILSA and the state-of-the-art deep learning approaches for taxi demand prediction. 
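The survival-analysis formulation can be read as: predict S(t), the probability that no dispersal event has occurred by future time t, and report the first time S(t) drops below a threshold (or non-occurrence if it never does within the horizon). A minimal sketch of that decision rule; the threshold value and discretization are assumptions, not DILSA's exact procedure:

```python
def predict_event_time(survival, times, threshold=0.5):
    """Given a predicted survival curve S(t) -- the probability that no
    dispersal event has occurred by time t -- return the first time at
    which S falls below the threshold, or None if no event is predicted
    within the horizon."""
    for t, s in zip(times, survival):
        if s < threshold:
            return t
    return None
```

For a curve that decays through 0.5 between the second and third time points, the predicted event time is the third point; a curve that stays above the threshold yields a prediction of non-occurrence.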
  5. Abstract

    The dimensionless critical shear stress (τ*c) needed for the onset of sediment motion is important for a range of studies, from river restoration projects to landscape evolution calculations. Many studies simply assume a τ*c value within the large range of scatter observed in gravel‐bedded rivers because direct field estimates are difficult to obtain. Informed choices of reach‐scale τ*c values could instead be obtained from force-balance calculations that include particle‐scale bed structure and flow conditions. Particle‐scale bed structure is also difficult to measure, precluding wide adoption of such force-balance τ*c values. Recent studies have demonstrated that bed grain size distributions (GSD) can be determined from detailed point clouds (e.g., using the G3Point open‐source software). We build on these point cloud methods to introduce Pro+, software that estimates particle‐scale protrusion distributions and τ*c for each grain size and for the entire bed using a force-balance model. We validated G3Point and Pro+ using two laboratory flume experiments with different grain size distributions and bed topographies. Commonly used definitions of protrusion may not produce representative τ*c distributions, and Pro+ includes new protrusion definitions to better capture flow and bed structure influences on particle mobility. The combined G3Point/Pro+ provided accurate grain size, protrusion, and τ*c distributions with simple GSD calibration. The largest source of error in the protrusion and τ*c distributions was incorrect grain boundaries and grain locations in G3Point, and calibration of grain software beyond comparing GSDs is likely needed. Pro+ can be coupled with grain-identifying software and relatively easily obtainable data to provide informed estimates of τ*c, which could replace arbitrary choices of τ*c and potentially improve channel stability and sediment transport estimates.
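For context, the dimensionless (Shields) stress underlying τ*c is τ* = τ / ((ρs − ρ) g D), and a grain is predicted to move when τ* exceeds τ*c. A minimal sketch of that threshold check; the default of 0.045 is one commonly assumed value within the scatter the abstract mentions, not a Pro+ output, and the densities are typical quartz/water values:

```python
# Dimensionless (Shields) shear stress for a grain of diameter D under
# a bed shear stress tau; the tau*_c values in the text are of this form.
RHO_S = 2650.0   # sediment (quartz) density, kg/m^3
RHO_W = 1000.0   # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def shields_stress(tau, D):
    """Dimensionless shear stress: tau in Pa, grain diameter D in m."""
    return tau / ((RHO_S - RHO_W) * G * D)

def is_mobile(tau, D, tau_star_c=0.045):
    """Onset of motion when tau* exceeds the critical value tau*_c.
    0.045 is a commonly assumed value within the observed scatter."""
    return shields_stress(tau, D) > tau_star_c

# Example: a 10 Pa bed shear stress on a 1 cm gravel grain gives
# tau* = 10 / (1650 * 9.81 * 0.01), about 0.062, so the grain is mobile.
```

The force-balance approach described in the abstract replaces the single assumed τ*c with per-grain values that depend on protrusion and local bed structure.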

     