Title: Estimating road traffic impacts of commute mode shifts
This work considers the sensitivity of commute travel times in US metro areas to potential changes in commute patterns, for example those caused by events such as pandemics. Permanent shifts away from transit and carpooling can add vehicles to congested road networks, increasing travel times. Growth in the number of workers who avoid commuting and instead work from home can offset travel time increases. To estimate these potential impacts, 6-9 years of American Community Survey commute data for 118 metropolitan statistical areas are investigated. For 74 of the metro areas, the average commute travel time is shown to be explainable using only the number of passenger vehicles used for commuting. A universal Bureau of Public Roads model characterizes the sensitivity of each metro area with respect to additional vehicles. The resulting models are then used to determine the change in average travel time for each metro area in scenarios in which 25% or 50% of transit and carpool users switch to single-occupancy vehicles. Under a 25% mode shift, areas such as San Francisco and New York that are already congested and have high transit ridership may experience round-trip travel time increases of 12 minutes (New York) to 20 minutes (San Francisco), costing individual commuters $1065 and $1601 annually in lost time. These travel time increases and the corresponding costs can be avoided with an increase in working from home. The main contribution of this work is a model that quantifies the potential increase in commute travel times under various behavior changes, which can aid policy making for more efficient commuting.
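The Bureau of Public Roads (BPR) model mentioned above relates travel time to traffic volume through a volume-delay function. Below is a minimal sketch using the standard BPR form with its textbook default parameters (alpha = 0.15, beta = 4); the paper fits its own parameters per metro area, and every number in the example is an illustrative placeholder rather than a value from the study.

    # Minimal sketch of the standard BPR volume-delay function.
    # The free-flow time, capacity, alpha, and beta are placeholders here;
    # the study fits metro-specific values that are not reproduced in this sketch.
    def bpr_travel_time(volume, capacity, free_flow_time, alpha=0.15, beta=4.0):
        """Average travel time (minutes) as a function of commuting vehicle volume."""
        return free_flow_time * (1.0 + alpha * (volume / capacity) ** beta)

    # Illustrative 25% mode-shift scenario: a quarter of transit/carpool commuters
    # switch to single-occupancy vehicles, adding trips to the road network.
    baseline_vehicles = 1_000_000        # hypothetical commuting vehicles
    transit_carpool_commuters = 400_000  # hypothetical transit + carpool users
    added_vehicles = 0.25 * transit_carpool_commuters

    t_before = bpr_travel_time(baseline_vehicles, 1_200_000, 25.0)
    t_after = bpr_travel_time(baseline_vehicles + added_vehicles, 1_200_000, 25.0)
    print(f"One-way travel time increase: {t_after - t_before:.1f} minutes")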
Award ID(s):
2033580
NSF-PAR ID:
10437407
Author(s) / Creator(s):
Editor(s):
Jin, Sheng
Date Published:
Journal Name:
PLOS ONE
Volume:
18
Issue:
1
ISSN:
1932-6203
Page Range / eLocation ID:
e0279738
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. West, Brooke (Ed.)
    Objectives An Opioid Treatment Desert is an area with limited accessibility to medication-assisted treatment and recovery facilities for Opioid Use Disorder. We explored the concept of Opioid Treatment Deserts including racial differences in potential spatial accessibility and applied it to one Midwestern urban county using high resolution spatiotemporal data. Methods We obtained individual-level data from one Emergency Medical Services (EMS) agency (Columbus Fire Department) in Franklin County, Ohio. Opioid overdose events were based on EMS runs where naloxone was administered from 1/1/2013 to 12/31/2017. Potential spatial accessibility was measured as the time (in minutes) it would take an individual, who may decide to seek treatment after an opioid overdose, to travel from where they had the overdose event, which was a proxy measure of their residential location, to the nearest opioid use disorder (OUD) treatment provider that provided medication-assisted treatment (MAT). We estimated accessibility measures overall, by race and by four types of treatment providers (any type of MAT for OUD, Buprenorphine, Methadone, or Naltrexone). Areas were classified as an Opioid Treatment Desert if the estimated travel time to a treatment provider (any type of MAT for OUD) was greater than a given threshold. We performed a sensitivity analysis using a range of threshold values based on multiple modes of transportation (car and public transit) and using only EMS runs to home/residential location types. Results A total of 6,929 geocoded opioid overdose events based on data from EMS agencies were used in the final analysis. Most events occurred among adults aged 26–35 years (34%), who identified as White (56%) and male (62%). Median travel times and interquartile ranges (IQR) to the closest treatment provider by car and public transit were 2 minutes (IQR: 3 minutes) and 17 minutes (IQR: 17 minutes), respectively. Several neighborhoods in the study area had limited accessibility to OUD treatment facilities and were classified as Opioid Treatment Deserts. Travel time by public transit for most treatment provider types, and by car for Methadone-based treatment, differed significantly between individuals identified as Black adults and those identified as White adults. Conclusions Disparities in access to opioid treatment exist at the sub-county level in specific neighborhoods and across racial groups in Columbus, Ohio, and can be quantified and visualized using local public safety data (e.g., EMS runs). Identification of Opioid Treatment Deserts can help multiple stakeholders better plan and allocate resources for more equitable access to MAT for OUD and, therefore, reduce the burden of the opioid epidemic while making better use of real-time public safety data to address a public health epidemic that has turned into a public safety crisis.
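    A minimal sketch of the desert-classification step described above: given precomputed travel times from each overdose event location to the nearest MAT provider (the study computes these by car and public-transit routing), flag locations whose travel time exceeds a threshold. The event identifiers, times, and threshold value here are illustrative assumptions, not values from the study.

        # Hypothetical travel times (minutes) from overdose event locations to the
        # nearest MAT provider; in the study these come from routing by car/transit.
        travel_time_to_nearest_provider = {
            "event_001": 2.0,
            "event_002": 24.5,
            "event_003": 41.0,
        }

        DESERT_THRESHOLD_MIN = 30.0  # illustrative threshold; the study varies this

        def is_opioid_treatment_desert(minutes_to_provider, threshold=DESERT_THRESHOLD_MIN):
            """Flag a location as a treatment desert if the nearest provider is too far."""
            return minutes_to_provider > threshold

        deserts = {event: is_opioid_treatment_desert(minutes)
                   for event, minutes in travel_time_to_nearest_provider.items()}
        print(deserts)  # {'event_001': False, 'event_002': False, 'event_003': True}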
  2. We consider the problem of routing a large fleet of drones to deliver packages simultaneously across broad urban areas. Besides flying directly, drones can use public transit vehicles such as buses and trams as temporary modes of transportation to conserve energy. Adding this capability to our formulation augments effective drone travel range and the space of possible deliveries but also increases problem input size due to the large transit networks. We present a comprehensive algorithmic framework that strives to minimize the maximum time to complete any delivery and addresses the multifaceted computational challenges of our problem through a two-layer approach. First, the upper layer assigns drones to package delivery sequences with an approximately optimal polynomial time allocation algorithm. Then, the lower layer executes the allocation by periodically routing the fleet over the transit network, using efficient, bounded suboptimal multi-agent pathfinding techniques tailored to our setting. We demonstrate the efficiency of our approach on simulations with up to 200 drones, 5000 packages, and transit networks with up to 8000 stops in San Francisco and the Washington DC Metropolitan Area. Our framework computes solutions for most settings within a few seconds on commodity hardware and enables drones to extend their effective range by a factor of nearly four using transit. 
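    As a rough illustration of the upper layer (spreading deliveries across drones so that the longest delivery sequence is as short as possible), here is a generic longest-processing-time greedy sketch. It is not the approximately optimal allocation algorithm from the paper, and the per-package delivery-time estimates are made-up placeholders.

        import heapq

        def greedy_minmax_assignment(delivery_times, num_drones):
            """Assign delivery times to drones, greedily balancing the busiest drone."""
            # Process packages longest-first; give each to the least-loaded drone.
            loads = [(0.0, d) for d in range(num_drones)]  # (total time, drone id)
            heapq.heapify(loads)
            assignment = {d: [] for d in range(num_drones)}
            for i, t in sorted(enumerate(delivery_times), key=lambda x: -x[1]):
                load, drone = heapq.heappop(loads)
                assignment[drone].append(i)
                heapq.heappush(loads, (load + t, drone))
            makespan = max(load for load, _ in loads)
            return assignment, makespan

        # Hypothetical per-package delivery time estimates (minutes, including transit legs).
        times = [12.0, 7.5, 30.0, 18.0, 5.0, 22.5]
        assignment, makespan = greedy_minmax_assignment(times, num_drones=2)
        print(assignment, makespan)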
  3. Abstract

    In the United States, the absence of federal funding and coordination for immigration legal services often means that local resources determine immigrants' access to justice. Many of these resources go toward supporting immigrants caught in the detention and deportation system. Yet local support is also critical for implementing federal benefits programs such as the 2012 Deferred Action for Childhood Arrivals (DACA) program. In this article, we draw on 146 interviews with representatives of legal services providers and their nonprofit collaborators in three immigrant‐dense metropolitan areas—the Greater Houston Area, the New York City Metro Area, and the San Francisco Bay Area—to analyze the distinct, place‐specific service and collaboration models that have emerged over the last decade to meet demand for DACA implementation support. Specifically, we examine how local context shapes the types of actors that immigrants can turn to for immigration legal services, and how they have coordinated on the ground in distinct ways during a time of increasing uncertainty.

     
  4.
    Hurricane Sandy hit New York City on October 29, 2012 and greatly disrupted transportation systems, power systems, work, and schools. This research used survey data from 397 respondents in the NYC Metropolitan Area to develop an agent-based model that captures commuter behavior and adaptation after the disruption. Six different recovery scenarios were tested to find which systems are most critical to recover first to promote a faster return to productivity. Which factors matter most in the restoration timeline depends on the normal commuting patterns of people in the area. In the NYC Metropolitan Area, transit is one of the most common modes of transportation; accordingly, recovery of the subway/rail system was found to be the top factor in returning to productivity. When the subway/rail system recovers earlier (with the associated power), more people are able to travel to work and be productive. The second most important factor is school and daycare closure (with the associated power and water systems). Parents cannot travel unless they can find a caregiver for their children, even if the transportation system is functional. Therefore, policy makers should consider daycare and school conditions among the important factors in recovery planning. The next most effective scenario is power restoration. Telework is a good substitute for the physical movement of people to work: teleworking people remain productive while avoiding the disrupted transportation system, but they need power and communication systems. Therefore, accelerating power restoration and encouraging companies to let their employees telework can promote a faster return to productivity. Finally, the restoration of major crossings such as bridges and tunnels is effective in the recovery process.
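    As a rough illustration of the kind of agent-level decision rule such an agent-based model can encode, here is a minimal sketch in which a commuter is productive only if schools/daycare are open when they have children, and they can either telework with power available or complete their usual commute. The classes, flags, and rule are illustrative assumptions, not the paper's calibrated model.

        from dataclasses import dataclass

        @dataclass
        class Infrastructure:
            subway_running: bool
            power_on: bool
            schools_open: bool
            roads_open: bool

        @dataclass
        class Commuter:
            has_children: bool
            can_telework: bool
            uses_transit: bool

        def is_productive(agent: Commuter, infra: Infrastructure) -> bool:
            """Illustrative rule: productive if the agent can telework or commute,
            and (if they have children) schools/daycare are open."""
            if agent.has_children and not infra.schools_open:
                return False
            if agent.can_telework and infra.power_on:
                return True  # telework substitutes for the physical commute
            if agent.uses_transit:
                return infra.subway_running and infra.power_on
            return infra.roads_open

        # Example: school closure keeps a parent home even if transit has recovered.
        infra = Infrastructure(subway_running=True, power_on=True, schools_open=False, roads_open=True)
        parent = Commuter(has_children=True, can_telework=True, uses_transit=True)
        print(is_productive(parent, infra))  # False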
  5. Obeid, Iyad ; Selesnick, Ivan ; Picone, Joseph (Ed.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to complete annotation using the pipeline described below, so it is important from a financial standpoint that we accurately triage data. A summary of the labels being used to annotate the data is shown in Table 2. Certain standards are put into place to optimize the annotation process while not sacrificing consistency. Due to the nature of EEG recordings, some records start off with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard "3 second rule" is used to determine whether two events should be combined into a single larger event, as sketched at the end of this summary. This greatly reduces the time that it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, part of our standard dictates that no seizure less than 3 seconds be annotated. Although there is no universally accepted definition for how long a seizure must be, we find that it is difficult to distinguish a seizure with confidence from burst suppression or other morphologically similar impressions when the event is only a couple of seconds long. This is due to several reasons, the most notable being the lack of evolution, which is oftentimes crucial for the determination of a seizure. After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on the specific characteristics of each file; however, on average a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time that it takes to read the patient report as well as traverse through the entire file. Once an event has been identified, the start and stop times for the seizure are stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete.
Even though there is no annotation being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to solely scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low-pass filters are used, as well as increasing the amplitude of certain channels. These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments on any changes that they recommend. This takes about 3 minutes per seizure-containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate. Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can be trained using data that resembles what they would see under normal circumstances. An additional benefit of using released data to train is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to a point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work. To ensure that the annotators are consistent in their annotations, the team periodically conducts an interrater agreement evaluation to confirm that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5]. The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes made to the annotations was the imposition of a stricter standard for determining the start and stop time of a seizure. Although evolution is still included in the annotations, the start times were altered to begin when the spike-wave pattern becomes distinct, as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as a seizure. For seizure end times, all post-ictal slowing that had been included was removed. The recent release of v1.5.2 did not include any additional data files; two EEG files were added only because they had been corrupted in v1.5.1 and could be recovered for the latest release. The progression from v1.5.0 to v1.5.1 and later to v1.5.2 included the re-annotation of all of the EEG files in order to produce a dataset with confident seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions.
The annotation team is currently working on the next release for TUSZ, v1.6.0, which is expected to occur in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files that were obtained with our standard triaging process, a part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports. Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020. The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/. 
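A minimal sketch of the "3 second rule" described earlier in this summary: candidate seizure events separated by less than 3 seconds are merged into one event, and events shorter than 3 seconds are not annotated. The event times below are illustrative, not taken from the corpus.

    MIN_GAP_SEC = 3.0
    MIN_DURATION_SEC = 3.0

    def apply_three_second_rule(events):
        """events: sorted list of (start_sec, stop_sec) tuples for one channel."""
        merged = []
        for start, stop in events:
            if merged and start - merged[-1][1] < MIN_GAP_SEC:
                # Gap is under 3 s: combine with the previous event.
                merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
            else:
                merged.append((start, stop))
        # Drop events shorter than the minimum duration.
        return [(s, e) for s, e in merged if e - s >= MIN_DURATION_SEC]

    print(apply_three_second_rule([(10.0, 14.0), (15.5, 20.0), (40.0, 41.5)]))
    # [(10.0, 20.0)] -- the first two events merge; the short third event is dropped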