- PAR ID: 10385361
- Date Published:
- Journal Name: 2022 2nd Workshop on Data-Driven and Intelligent Cyber-Physical Systems for Smart Cities Workshop (DI-CPS)
- Page Range / eLocation ID: 40 to 46
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The COVID-19 viral disease surfaced at the end of 2019 and quickly spread across the globe. To respond rapidly to this pandemic and provide data support for various communities (e.g., decision-makers in health departments and governments, academic researchers, and the public), the National Science Foundation (NSF) spatiotemporal innovation center constructed a spatiotemporal platform with various task forces, including international researchers, and implementation strategies. Unlike similar platforms that offer only viral and health data, this platform treats virus-related environmental data collection (EDC) as an important component of geospatial analysis of the pandemic. The EDC contains environmental factors that are either proven to influence, or have the potential to influence, the spread and virulence of COVID-19 or the pandemic's impact on human health (e.g., temperature, humidity, precipitation, air quality index and pollutants, and nighttime light (NTL)). In this platform/framework, environmental data are processed and organized across multiple spatiotemporal scales for a variety of applications (e.g., global mapping of daily temperature, humidity, and precipitation, and correlation of the pandemic to the mean values of climate and weather factors by city). This paper introduces the raw input data; the construction and metadata of the reprocessed data; data storage; and the sharing and quality-control methodologies of the COVID-19-related environmental data collection.
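The city-level correlation of the pandemic to mean climate and weather factors described above can be sketched as follows. This is a minimal illustration with entirely hypothetical daily records (temperature, humidity, case counts) and a hand-rolled Pearson correlation; it is not the platform's actual pipeline.

```python
from statistics import mean

# Hypothetical daily records for one city: (temperature_C, humidity_pct, new_cases)
records = [
    (5.0, 70.0, 120),
    (7.5, 65.0, 110),
    (10.0, 60.0, 95),
    (12.5, 55.0, 80),
    (15.0, 50.0, 70),
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temps = [r[0] for r in records]
cases = [r[2] for r in records]
# Negative in this toy data: cases fall as temperature rises
r_temp_cases = pearson(temps, cases)
```

In practice the platform aggregates such factors per city over a chosen spatiotemporal scale before correlating; the toy series above stands in for that aggregated output.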
-
The massive use of vehicles as a primary means of transportation, as well as the increasing adoption of on-board vehicle sensors, represents a unique opportunity for sensing and data collection. However, vehicles tend to cluster in specific regions such as highways and a few popular roads, which makes them difficult to use for data collection in isolated regions with low-density traffic. We address this problem by proposing an incentive mechanism that encourages vehicles to deviate from their pre-planned trajectories to visit these isolated places. At the core of our proposal is the idea of compensation based on participants' location diversity, which rewards vehicles in low-density traffic areas more than those in high-density ones. We model this problem as a non-cooperative game in which the participants are the vehicles and their new trajectories are their strategies. The output of this game is a new set of stable trajectories that maximize spatial coverage. Simulations show that, in terms of spatial coverage and road utilization, our approach outperforms an approach that does not take participants' location diversity into account.
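The core compensation idea can be sketched as a reward inversely proportional to local vehicle density. This is a hypothetical simplification (grid cells and a uniform base reward are assumptions, not the paper's actual payment rule):

```python
from collections import Counter

def diversity_rewards(vehicle_cells, base_reward=1.0):
    """Pay each vehicle inversely to how many vehicles share its grid cell,
    so sensing in isolated, low-density cells earns more."""
    density = Counter(vehicle_cells.values())
    return {v: base_reward / density[cell] for v, cell in vehicle_cells.items()}

# Two vehicles crowd the highway cell; one covers a rural cell alone
cells = {"v1": "highway", "v2": "highway", "v3": "rural"}
rewards = diversity_rewards(cells)
```

Under this rule `v3` earns the full base reward while `v1` and `v2` split credit for the same cell, which is the incentive to deviate toward uncovered regions.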
-
Intersections are essential road infrastructures for traffic in modern metropolises. However, they can also be the bottleneck of traffic flows as a result of traffic incidents or the absence of traffic coordination mechanisms such as traffic lights. Recently, various control and coordination mechanisms that go beyond traditional control methods have been proposed to improve the efficiency of intersection traffic by leveraging the abilities of autonomous vehicles. Among these methods, the control of foreseeable mixed traffic that consists of human-driven vehicles (HVs) and robot vehicles (RVs) has emerged. We propose a decentralized multi-agent reinforcement learning approach for the control and coordination of mixed traffic by RVs at real-world, complex intersections—an open challenge to date. We design comprehensive experiments to evaluate the effectiveness, robustness, generalizability, and adaptability of our approach. In particular, our method can prevent congestion formation with merely 5% RVs under a real-world traffic demand of 700 vehicles per hour. In contrast, without RVs, congestion forms when the traffic demand reaches as low as 200 vehicles per hour. Moreover, when the RV penetration rate exceeds 60%, our method starts to outperform traffic signal control in terms of the average waiting time of all vehicles. Our method is not only robust against blackout events, sudden RV percentage drops, and V2V communication errors, but also enjoys excellent generalizability, evidenced by its successful deployment in five unseen intersections. Lastly, our method performs well under various traffic rules, demonstrating its adaptability to diverse scenarios. Videos and code of our work are available at https://sites.google.com/view/mixedtrafficcontrol.
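The "decentralized" aspect can be sketched as each RV independently mapping its own local observation to an action, with no central coordinator. The rule below is a toy stand-in for the learned policy, and all names are hypothetical:

```python
# Hypothetical sketch: every RV applies the same local policy to its own
# observation; no message passing or central controller is involved.
def local_policy(observation):
    """Toy stand-in for a learned policy: yield while a conflicting vehicle
    occupies the intersection, otherwise proceed."""
    return "yield" if observation["conflict_inside"] else "go"

observations = {
    "rv_1": {"conflict_inside": True},
    "rv_2": {"conflict_inside": False},
}
actions = {rv: local_policy(obs) for rv, obs in observations.items()}
```

In the paper the policy is learned with multi-agent reinforcement learning rather than hand-written; the sketch only shows the per-vehicle decision structure.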
-
Telemetry systems are widely used to collect data from distributed endpoints, analyze that data jointly to gain valuable insights, and store it for historical analytics. These systems consist of four stages (Figure 1): collection, transmission, analysis, and storage. Collectors at the endpoints collect various types of data, which is then transmitted to a central server for analysis. This data is used for multiple downstream tasks, such as dashboard monitoring and anomaly detection. Finally, the data is stored in long-term storage to aid retrospective analytics and debugging.
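The four stages above can be sketched end to end. All function names and the metric fields are hypothetical; this is only a structural illustration of collect → transmit → analyze → store, not any particular telemetry system's API:

```python
import json

def collect(endpoint_id):
    """Stage 1: a collector samples metrics at the endpoint."""
    return {"endpoint": endpoint_id, "cpu_pct": 42.0, "mem_mb": 512}

def transmit(record):
    """Stage 2: serialize the record for transmission to the central server."""
    return json.dumps(record)

def analyze(payloads):
    """Stage 3: joint analysis across endpoints, e.g. a fleet-wide average."""
    records = [json.loads(p) for p in payloads]
    return sum(r["cpu_pct"] for r in records) / len(records)

storage = []  # Stage 4: long-term store for retrospective analytics/debugging

payloads = [transmit(collect(i)) for i in range(3)]
avg_cpu = analyze(payloads)   # downstream task, e.g. dashboard monitoring
storage.extend(payloads)      # retained for later debugging
```

A real deployment would replace the in-memory list with a time-series database and the JSON strings with a wire protocol, but the stage boundaries stay the same.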
-
Swimming microrobots are increasingly built from complex materials with dynamic shapes, and they are expected to operate in complex environments in which the system dynamics are difficult to model and positional control of the microrobot is not straightforward to achieve. Deep reinforcement learning is a promising method for autonomously developing robust controllers for smart microrobots, which can adapt their behavior to operate in uncharacterized environments without the need to model the system dynamics. This article reports the development of a smart helical magnetic hydrogel microrobot that uses the soft actor-critic reinforcement learning algorithm to autonomously derive a control policy, allowing the microrobot to swim through an uncharacterized biomimetic fluidic environment under the control of a time-varying magnetic field generated by a three-axis array of electromagnets. The reinforcement learning agent learns successful control policies from both state-vector input and raw images, and the policies learned by the agent recapitulate the behavior of rationally designed controllers based on physical models of helical swimming microrobots. Deep reinforcement learning applied to microrobot control is likely to significantly expand the capabilities of the next generation of microrobots.
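The agent-environment interaction described above can be sketched as a loop in which a policy maps the microrobot's state to magnetic-field parameters and toy dynamics advance the state. The proportional rule and the drift model below are hypothetical stand-ins, not the learned soft actor-critic policy or the real fluid dynamics:

```python
def policy(state):
    """Toy stand-in for the learned policy: rotate the field faster the
    further the robot is from its target, clamped to actuator limits."""
    error = state["target_x"] - state["x"]
    freq_hz = max(0.0, min(5.0, 2.0 * error))
    return {"freq_hz": freq_hz, "axis": "x"}

def step(state, action, dt=0.1, gain=0.05):
    """Toy dynamics: forward speed proportional to field rotation frequency,
    mimicking a helical swimmer in the low-Reynolds-number regime."""
    new_x = state["x"] + gain * action["freq_hz"] * dt
    return {**state, "x": new_x}

state = {"x": 0.0, "target_x": 1.0}
for _ in range(100):
    state = step(state, policy(state))
# The robot drifts monotonically toward (but never past) the target
```

In the article the policy is trained rather than hand-written, and it consumes either a state vector or raw camera images; the loop structure, however, is the same.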