While cities around the world are increasingly promoting streets and public spaces that prioritize pedestrians over vehicles, significant data gaps have made pedestrian mapping, analysis, and modeling challenging to carry out. Most cities, even in industrialized economies, still lack information about the location and connectivity of their sidewalks, making it difficult to conduct research on pedestrian infrastructure and holding the technology industry back from developing accurate, location-based apps for pedestrians, wheelchair users, street vendors, and other sidewalk users. To address this gap, we have designed and implemented an end-to-end open-source tool, Tile2Net, for extracting sidewalk, crosswalk, and footpath polygons from orthorectified aerial imagery using semantic segmentation. The segmentation model, trained on aerial imagery from Cambridge, MA, Washington, DC, and New York City, offers the first open-source scene classification model for pedestrian infrastructure from sub-meter resolution aerial tiles, and can be used to generate planimetric sidewalk data in North American cities. Tile2Net also generates pedestrian networks from the resulting polygons, which can be used to prepare datasets for pedestrian routing applications. The work offers a low-cost and scalable data collection methodology for systematically generating sidewalk network datasets wherever orthorectified aerial imagery is available, contributing to overdue efforts to equalize data opportunities for pedestrians, particularly in cities that lack the resources to collect such data using more conventional methods.
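To make the polygon-extraction step concrete, the sketch below shows one common way to vectorize a georeferenced segmentation mask into class polygons using rasterio and shapely. It is a minimal illustration, not Tile2Net's actual implementation; the class codes, file name, and area threshold are assumptions.

```python
# Minimal sketch: vectorize a semantic segmentation mask into class polygons.
# Assumes a single-band GeoTIFF in a projected CRS (meters) where pixel value
# 1 = sidewalk, 2 = crosswalk, 3 = footpath (codes are hypothetical, not
# Tile2Net's actual encoding).
import rasterio
from rasterio.features import shapes
from shapely.geometry import shape

CLASS_NAMES = {1: "sidewalk", 2: "crosswalk", 3: "footpath"}

def mask_to_polygons(mask_path, min_area_m2=1.0):
    """Extract class polygons from a georeferenced segmentation mask."""
    with rasterio.open(mask_path) as src:
        mask = src.read(1).astype("uint8")
        transform = src.transform  # maps pixel coords to CRS coords
    polygons = []
    for geom, value in shapes(mask, mask=mask > 0, transform=transform):
        poly = shape(geom)
        if poly.area >= min_area_m2:  # drop segmentation speckle
            polygons.append((CLASS_NAMES.get(int(value), "other"), poly))
    return polygons

if __name__ == "__main__":
    for cls, poly in mask_to_polygons("tile_mask.tif"):
        print(cls, round(poly.area, 1), "m^2")
```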
Data Hub Architecture for Smart Cities
Cities today generate large amounts of data. Many of these datasets are openly available, contributed by different sectors, government bodies, and institutions. These new data can reshape our understanding of the issues cities face and can support evidence-based policies. However, the use of such data is limited by the difficulty of assimilating data from different sources: open datasets often lack a uniform structure, which limits their analysis with traditional database systems. In this paper we present Citadel, a data hub for cities. Citadel's goal is to support an end-to-end knowledge discovery cyber-infrastructure for effective analysis and policy support. Citadel is designed to ingest large amounts of heterogeneous data and supports multiple use cases by encouraging data sharing in cities. Our poster presents the proposed features, architecture, implementation details, and initial results.
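As a hedged illustration of the schema-alignment problem such a hub faces (not Citadel's actual code), the sketch below maps records from two differently structured open datasets into one uniform schema before ingestion; the dataset names and field mappings are invented for the example.

```python
# Hypothetical example of normalizing heterogeneous open-data records into a
# uniform schema before ingestion; sources and field names are invented.
FIELD_MAPS = {
    "311_requests": {"created_date": "timestamp", "complaint_type": "category"},
    "permits":      {"issue_date": "timestamp", "permit_type": "category"},
}

def normalize(source: str, record: dict) -> dict:
    """Map one raw record into the hub's uniform schema."""
    out = {"source": source}
    for raw_field, unified_field in FIELD_MAPS[source].items():
        out[unified_field] = record.get(raw_field)
    return out

print(normalize("311_requests",
                {"created_date": "2017-06-01", "complaint_type": "noise"}))
print(normalize("permits",
                {"issue_date": "2017-06-02", "permit_type": "construction"}))
```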
- PAR ID: 10074643
- Date Published:
- Journal Name: Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems
- Page Range / eLocation ID: 1 to 2
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Supporting the interactive exploration of large datasets is a popular and challenging use case for data management systems. Traditionally, the interface and the back-end system are built and optimized separately, and interface design and system optimization require different skill sets that are difficult for one person to master. To enable analysts to focus on visualization design, we contribute VegaPlus, a system that automatically optimizes interactive dashboards to support large datasets. To achieve this, VegaPlus leverages two core ideas. First, we introduce an optimizer that can reason about execution plans in Vega, a back-end DBMS, or a mix of both environments. The optimizer also considers how user interactions may alter execution plan performance, and can partially or fully rewrite the plans when needed. Through a series of benchmark experiments on seven different dashboard designs, our results show that VegaPlus provides superior performance and versatility compared to standard dashboard optimization techniques.
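To illustrate the kind of rewrite such an optimizer can perform, the sketch below translates a Vega-style aggregate transform into an equivalent SQL query so that the scan and aggregation run in the DBMS rather than the client. The spec fragment and translator are schematic assumptions, not VegaPlus's actual code.

```python
# Schematic illustration of transform pushdown: a Vega-style aggregate
# transform is rewritten into SQL so the DBMS, not the client, does the work.
OP_MAP = {"mean": "AVG", "sum": "SUM", "count": "COUNT"}  # Vega op -> SQL

vega_data_spec = {                     # simplified Vega data definition
    "name": "table",
    "url": "flights",                  # treated here as a DBMS table name
    "transform": [{
        "type": "aggregate",
        "groupby": ["carrier"],
        "ops": ["mean"],
        "fields": ["delay"],
        "as": ["avg_delay"],
    }],
}

def aggregate_to_sql(spec: dict) -> str:
    """Rewrite a Vega aggregate transform into an equivalent SQL query."""
    t = spec["transform"][0]
    assert t["type"] == "aggregate"
    selects = [f"{OP_MAP[op]}({field}) AS {alias}"
               for op, field, alias in zip(t["ops"], t["fields"], t["as"])]
    groupby = ", ".join(t["groupby"])
    return (f"SELECT {groupby}, {', '.join(selects)} "
            f"FROM {spec['url']} GROUP BY {groupby}")

print(aggregate_to_sql(vega_data_spec))
# -> SELECT carrier, AVG(delay) AS avg_delay FROM flights GROUP BY carrier
```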
Cities such as Detroit, MI, in the post-industrial Rust Belt region of the United States, have been experiencing declines in both population and economy since the 1970s. These "shrinking cities" are characterized by aging infrastructure and increasing vacant areas, potentially resulting in more green space. While research in growing cities has demonstrated an "urban heat island" effect, in which temperatures increase with urbanization, little is known about how this pattern may differ when a city shrinks due to urban decline. We hypothesize that the changes associated with shrinking cities have a measurable impact on their local climatology that differs from that of areas experiencing increased urbanization. Here we present our analysis of historical temperature and precipitation records (1900–2020) from weather stations positioned in multiple shrinking cities within the Rust Belt region of the United States and in growing cities within and outside of this region. Our results suggest that while temperatures are increasing overall, these increases are lower in shrinking cities than in cities that continue to experience urban growth. Our analysis also suggests differences in precipitation trends between shrinking and growing cities. We also highlight recent climate data in Detroit, MI, in the context of these longer-term changes in climatology to support urban planning and management decisions that may influence or be influenced by these trends.
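A minimal sketch of the kind of trend comparison described above: fit a linear warming trend per city and compare group means for shrinking versus growing cities. The file name, column names, and grouping labels are hypothetical, not the study's actual data layout.

```python
# Sketch: estimate a linear warming trend (deg C per decade) per city from
# annual mean temperatures, then compare shrinking vs. growing groups.
import numpy as np
import pandas as pd

def trend_per_decade(years, temps):
    """Least-squares slope of annual mean temperature, in deg C / decade."""
    slope, _ = np.polyfit(years, temps, 1)
    return slope * 10.0

# Assumed columns: city, group ("shrinking" or "growing"), year, annual_temp_c
df = pd.read_csv("station_annual_means.csv")
trends = (df.groupby(["group", "city"])
            .apply(lambda g: trend_per_decade(g["year"].values,
                                              g["annual_temp_c"].values))
            .rename("trend_c_per_decade")
            .reset_index())
# Mean warming rate per group; the hypothesis predicts a lower value
# for the "shrinking" group than for the "growing" group.
print(trends.groupby("group")["trend_c_per_decade"].mean())
```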
We present the pipeline for the cosmic shear analysis of the Dark Energy Camera All Data Everywhere (DECADE) weak lensing dataset: a catalog of 107 million galaxies observed by the Dark Energy Camera (DECam) in the northern Galactic cap. The catalog derives from a large number of disparate observing programs and is therefore more inhomogeneous across the sky than existing lensing surveys. First, we use simulated data vectors to show the sensitivity of our constraints to different analysis choices in our inference pipeline, including sensitivity to residual systematics. Next, we use simulations to validate our covariance modeling for inhomogeneous datasets. Finally, we show that our choices in the end-to-end cosmic shear pipeline are robust against inhomogeneities in the survey by extracting relative shifts in the cosmology constraints across different subsets of the footprint/catalog and showing that they are all statistically consistent. This is done for forty-six subsets of the data and is carried out in a fully consistent manner: for each subset of the data, we re-derive the photometric redshift estimates, shear calibrations, survey transfer functions, the data vector, the measurement covariance, and finally the cosmological constraints. Our results show that existing analysis methods for weak lensing cosmology can be fairly resilient to inhomogeneous datasets. This also motivates exploring a wider range of image data for pursuing such cosmological constraints.
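As a schematic version of the subset-consistency test described above (not the paper's actual pipeline), the sketch below quantifies how far a subset's best-fit parameter sits from the full-footprint value in units of the expected scatter; the parameter name, numbers, and error treatment are illustrative assumptions.

```python
# Schematic subset-consistency check: shift of each subset's best-fit S8
# from the full-footprint fit, normalized by the expected scatter.
# All numbers are made up for illustration.
import numpy as np

full = {"S8": 0.790, "err": 0.012}           # hypothetical full-footprint fit
subsets = {                                   # hypothetical per-subset fits
    "north": {"S8": 0.796, "err": 0.018},
    "south": {"S8": 0.781, "err": 0.020},
}

def shift_sigma(sub, full):
    """Shift in units of sigma for a subset nested in the full sample."""
    # For nested samples the variance of the difference is approximately
    # err_sub^2 - err_full^2; a real analysis would use the full covariance.
    denom = np.sqrt(max(sub["err"] ** 2 - full["err"] ** 2, 1e-12))
    return (sub["S8"] - full["S8"]) / denom

for name, fit in subsets.items():
    print(f"{name}: {shift_sigma(fit, full):+.2f} sigma")
```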
Abstract: In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, and yet incomplete representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem (learning a function-space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that demonstrate strong generalization, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across several measures: the reconstructed vector field itself, the flow map, and features derived from the flow map.
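A minimal sketch of the training objective described above, under stated assumptions: an MLP models the time-varying velocity field v(x, t), it is integrated with a fixed-step RK4 scheme, and its weights are fit so the integrated endpoints match given flow map samples. The network size, step count, and synthetic samples are illustrative, not the paper's configuration.

```python
# Sketch: fit an implicit neural representation of v(x, t) so that RK4
# integration of the field reproduces flow map samples (seed -> endpoint).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.SiLU(),   # input: (x, y, t)
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2),              # output: 2D velocity
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def rk4_flow(v, x0, t0, duration, steps=8):
    """Integrate x' = v(x, t) from t0 for `duration` with fixed-step RK4."""
    x, t, h = x0, t0, duration / steps
    for _ in range(steps):
        k1 = v(x, t)
        k2 = v(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = v(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = v(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return x

model = VelocityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Flow map samples: seed positions/times (x0, t0) and endpoints x1.
x0, t0 = torch.rand(1024, 2), torch.rand(1024, 1)
x1 = x0 + 0.1 * torch.sin(6.28 * x0)           # synthetic stand-in data
for step in range(200):
    opt.zero_grad()
    loss = ((rk4_flow(model, x0, t0, duration=0.5) - x1) ** 2).mean()
    loss.backward()
    opt.step()
```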