Accurate air pollution monitoring is critical to understanding and mitigating the impacts of air pollution on human health and ecosystems. Because advanced, highly accurate sensors for monitoring air pollutants are limited in number and geographical coverage, many low-cost, low-accuracy sensors have been deployed. Calibrating low-cost sensors is essential to fill the geographical gap in sensor coverage. We systematically examined how different machine learning (ML) models and open-source packages could help improve the accuracy of fine particulate matter (PM2.5) data collected by PurpleAir sensors. Eleven ML models and five packages were examined. This systematic study found that both models and packages impacted accuracy, while the random training/testing split ratio (e.g., 80/20 vs. 70/30) had minimal impact (0.745% difference in R2). Long Short-Term Memory (LSTM) models trained in RStudio and TensorFlow excelled, with high R2 scores of 0.856 and 0.857 and low Root Mean Squared Errors (RMSEs) of 4.25 µg/m3 and 4.26 µg/m3, respectively. However, LSTM models may be too slow to train (1.5 h) or too computation-intensive for applications with fast response requirements. Tree-based ensemble models, including XGBoost (R2 = 0.7612, RMSE = 5.377 µg/m3) in RStudio and Random Forest (RF) (R2 = 0.7632, RMSE = 5.366 µg/m3) in TensorFlow, offered good performance with much shorter training times (<1 min) and may be suitable for such applications. These findings suggest that AI/ML models, particularly LSTM models, can effectively calibrate low-cost sensors to produce precise, localized air quality data. This research is among the most comprehensive studies on AI/ML for air pollutant calibration. We also discuss limitations, applicability to other sensors, and explanations for the models' performance. This research can be adapted to enhance air quality monitoring for public health risk assessments, support broader environmental health initiatives, and inform policy decisions.
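As a rough illustration of the calibration workflow described above, the sketch below fits a simple linear correction on a synthetic colocation data set with a random 80/20 training/testing split and reports RMSE and R2. The data, the linear model, and all parameter values are illustrative stand-ins, not the study's LSTM/XGBoost pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic colocation data: a low-cost sensor that over-reads the
# reference PM2.5 monitor and adds noise (all values in ug/m3).
reference = rng.uniform(2.0, 60.0, 500)
sensor = 1.4 * reference + 3.0 + rng.normal(0.0, 2.0, 500)

# Random 80/20 training/testing split.
idx = rng.permutation(len(reference))
train, test = idx[:400], idx[400:]

# Fit a linear calibration (sensor -> reference) by least squares.
A = np.column_stack([sensor[train], np.ones(len(train))])
slope, intercept = np.linalg.lstsq(A, reference[train], rcond=None)[0]

# Evaluate on the held-out 20% with RMSE and R2.
pred = slope * sensor[test] + intercept
resid = reference[test] - pred
rmse = float(np.sqrt(np.mean(resid ** 2)))
r2 = 1.0 - float(np.sum(resid ** 2) /
                 np.sum((reference[test] - reference[test].mean()) ** 2))
print(f"RMSE = {rmse:.2f} ug/m3, R2 = {r2:.3f}")
```

Swapping the least-squares fit for an LSTM or gradient-boosted model changes only the middle step; the split and the RMSE/R2 evaluation are the same.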
Evaluating and improving the reliability of gas-phase sensor system calibrations across new locations for ambient measurements and personal exposure monitoring
Abstract. Advances in ambient environmental monitoring technologies are enabling concerned communities and citizens to collect data to better understand their local environment and potential exposures. These mobile, low-cost tools make it possible to collect data with increased temporal and spatial resolution, providing data on a large scale with unprecedented levels of detail. This type of data has the potential to empower people to make personal decisions about their exposure and support the development of local strategies for reducing pollution and improving health outcomes. However, calibration of these low-cost instruments has been a challenge. Often, a sensor package is calibrated via field calibration: the sensor package is colocated with a high-quality reference instrument for an extended period, and machine learning or another model-fitting technique, such as multiple linear regression, is then applied to develop a calibration model for converting raw sensor signals to pollutant concentrations. Although this method helps to correct for the effects of ambient conditions (e.g., temperature) and cross-sensitivities with nontarget pollutants, there is a growing body of evidence that calibration models can overfit to a given location or set of environmental conditions on account of the incidental correlation between pollutant levels and environmental conditions, including diurnal cycles. As a result, a sensor package trained at a field site may provide less reliable data when moved, or transferred, to a different location. This is a potential concern for applications seeking to monitor away from regulatory monitoring sites, such as personal mobile monitoring or high-resolution monitoring of a neighborhood. We performed experiments confirming that transferability is indeed a problem and show that it can be improved by collecting data from multiple regulatory sites and building a calibration model that leverages this more diverse data set.
We deployed three sensor packages to each of three sites with reference monitors (nine packages total) and then rotated the sensor packages through the sites over time. Two sites were in San Diego, CA, with a third outside of Bakersfield, CA, offering varying environmental conditions, general air quality composition, and pollutant concentrations. Compared to prior single-site calibration, the multisite approach exhibits better model transferability for a range of modeling approaches. Our experiments also reveal that random forest is especially prone to overfitting and confirm prior results that transfer is a significant source of both bias and standard error. Linear regression, on the other hand, although it exhibits relatively high error, does not degrade much in transfer. Bias dominated in our experiments, suggesting that transferability might be easily increased by detecting and correcting for bias. Also, given that many monitoring applications involve deploying many sensor packages based on the same sensing technology, there is an opportunity to leverage the availability of multiple sensors at multiple sites during calibration to lower the cost of training and better tolerate transfer. We contribute a new neural network architecture, termed split-NN, that splits the model into two stages: the first stage corrects for sensor-to-sensor variation, and the second stage uses the combined data of all the sensors to build a model for a single sensor package. The split-NN modeling approach outperforms multiple linear regression, traditional two- and four-layer neural networks, and random forest models. Depending on the training configuration, the split-NN method reduced error relative to random forest by 0%–11% for NO2 and 6%–13% for O3.
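The central idea, that pooling colocation data from multiple sites breaks the incidental correlation between pollutant levels and environmental conditions, can be sketched with synthetic data. Everything below (the three sites, the cross-sensitivity coefficient, the noise levels) is a hypothetical construction, not the paper's sensor model or split-NN architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

def make_site(temp_lo, temp_hi, conc_fn):
    """Synthetic colocation data: the raw sensor signal responds to both
    the target gas and temperature (a known cross-sensitivity)."""
    temp = rng.uniform(temp_lo, temp_hi, n)
    conc = conc_fn(temp) + rng.normal(0.0, 0.1, n)
    signal = conc + 0.5 * temp + rng.normal(0.0, 0.5, n)
    return np.column_stack([signal, temp, np.ones(n)]), conc

# Site A: concentration incidentally tracks temperature (e.g., diurnal
# cycles), so signal and temperature are nearly collinear.
Xa, ya = make_site(10, 30, lambda t: 2.0 * t)
# Site B: a different incidental concentration-temperature relationship.
Xb, yb = make_site(0, 20, lambda t: 40.0 - t)
# Site C: the transfer target, concentration independent of temperature.
Xc, yc = make_site(10, 30, lambda t: rng.uniform(5, 50, n))

single = np.linalg.lstsq(Xa, ya, rcond=None)[0]   # one-site calibration
pooled = np.linalg.lstsq(np.vstack([Xa, Xb]),
                         np.concatenate([ya, yb]), rcond=None)[0]

def rmse(beta, X, y):
    return float(np.sqrt(np.mean((X @ beta - y) ** 2)))

single_rmse = rmse(single, Xc, yc)
multi_rmse = rmse(pooled, Xc, yc)
print(f"single-site model at new site: RMSE = {single_rmse:.2f}")
print(f"multisite model at new site:   RMSE = {multi_rmse:.2f}")
```

The single-site fit cannot separate the gas response from the temperature response (the predictors are nearly collinear at site A), so it transfers poorly; pooling two sites with different incidental relationships recovers coefficients close to the generative ones.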
- Award ID(s): 1826967
- PAR ID: 10169546
- Date Published:
- Journal Name: Atmospheric Measurement Techniques
- Volume: 12
- Issue: 8
- ISSN: 1867-8548
- Page Range / eLocation ID: 4211 to 4239
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract. The paucity of fine particulate matter (PM2.5) measurements limits estimates of air pollution mortality in Sub-Saharan Africa. Well-calibrated low-cost sensors can provide reliable data, especially where reference monitors are unavailable. We evaluate the performance of Clarity Node-S PM monitors against a tapered element oscillating microbalance (TEOM) 1400a and develop a calibration model in Mombasa, Kenya's second-largest city. As-reported Clarity Node-S data from January 2023 through April 2023 were moderately correlated with the TEOM-1400a measurements (R2 = 0.61) and exhibited a mean absolute error (MAE) of 7.03 μg m−3. Employing three calibration models, namely multiple linear regression (MLR), Gaussian mixture regression, and random forest (RF), decreased the MAE to 4.28, 3.93, and 4.40 μg m−3, respectively. The R2 value improved to 0.63 for the MLR model, but the other models registered a decrease (R2 = 0.44 and 0.60, respectively). Applying the correction factor to a five-sensor network in Mombasa that operated between July 2021 and July 2022 gave insights into the air quality in the city. The average daily concentrations of PM2.5 within the city ranged from 12 to 18 μg m−3. The concentrations exceeded the WHO daily PM2.5 limits more than 50% of the time, in particular at sites near frequent industrial activity. Higher averages were observed during the dry and cold seasons and during early-morning and evening periods of high activity. These results represent some of the first air quality monitoring measurements in Mombasa and highlight the need for further study.
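As a minimal illustration of the exceedance analysis mentioned above, the snippet below compares calibrated daily means against the 2021 WHO 24-hour PM2.5 guideline of 15 μg m−3; the daily values are invented for the example, not the Mombasa data.

```python
# The 2021 WHO guideline for 24-hour mean PM2.5 (ug/m3).
WHO_DAILY_PM25 = 15.0

# Hypothetical calibrated daily PM2.5 means for one monitor (ug/m3).
daily_means = [12.4, 16.8, 18.1, 14.9, 21.3, 17.5, 13.2, 19.0, 16.1, 11.7]

# Count days above the guideline and the exceedance rate.
exceedances = sum(1 for x in daily_means if x > WHO_DAILY_PM25)
exceedance_rate = exceedances / len(daily_means)
print(f"{exceedances}/{len(daily_means)} days "
      f"({exceedance_rate:.0%}) above the WHO daily guideline")
```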
Background: As software development becomes more interdependent, unique relationships among software packages arise and form complex software ecosystems. Aim: We aim to better understand the behavior of these ecosystems through the lens of software supply chains and to model how the software dependency network affects changes in downloads of JavaScript packages. Method: We analyzed 12,999 popular packages in NPM, between 01-December-2017 and 15-March-2018, using Linear Regression and Random Forest models, and examined the effects of predictors representing different aspects of the software dependency supply chain on changes in the number of downloads for a package. Result: Preliminary results suggest that the count and downloads of upstream and downstream runtime dependencies have a strong effect on the change in downloads, with packages having fewer, more popular packages as dependencies (upstream or downstream) likely to see an increase in downloads. This suggests that, to interpret the number of downloads for a package properly, it is necessary to take into account the peculiarities of the supply chain (both upstream and downstream) of that package. Conclusion: Future work is needed to identify the effects of added, deleted, and unchanged dependencies for different types of packages, e.g., build tools and test tools.
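The regression setup described above can be sketched on synthetic data; the predictors, coefficients, and noise level below are illustrative assumptions chosen to mirror the stated finding (fewer, more popular dependencies associated with a larger download increase), not the NPM data set:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Synthetic stand-ins for supply-chain predictors (not the NPM data):
n_upstream = rng.poisson(5, n)            # count of runtime dependencies
upstream_dl = rng.lognormal(8, 2, n)      # downloads of those dependencies

# Generative assumption mirroring the stated finding: fewer, more
# popular dependencies produce a larger change in downloads.
delta_dl = (-0.3 * np.log1p(n_upstream) + 0.2 * np.log(upstream_dl)
            + rng.normal(0.0, 0.5, n))

# Ordinary least squares on log-scaled supply-chain predictors.
X = np.column_stack([np.log1p(n_upstream), np.log(upstream_dl), np.ones(n)])
coef = np.linalg.lstsq(X, delta_dl, rcond=None)[0]
print(f"dependency count effect:      {coef[0]:+.2f}")
print(f"dependency popularity effect: {coef[1]:+.2f}")
```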
Abstract. Carbon fluxes in terrestrial ecosystems and their response to environmental change are a major source of uncertainty in the modern carbon cycle. The National Ecological Observatory Network (NEON) presents the opportunity to merge eddy covariance (EC)-derived fluxes with CO2 isotope ratio measurements to gain insights into carbon cycle processes. Collected continuously and consistently across >40 sites, NEON EC and isotope data facilitate novel integrative analyses. However, currently provisioned atmospheric isotope data are uncalibrated, greatly limiting the ability to perform cross-site analyses. Here, we present two approaches to calibrating NEON CO2 isotope ratios, along with an R package to calibrate NEON data. We find that calibrating CO2 isotopologues independently yields a lower δ13C bias (<0.05‰) and higher precision (<0.40‰) than directly correcting δ13C with linear regression (bias: <0.11‰, precision: 0.42‰), but with slightly higher error and lower precision in calibrated CO2 mole fraction. The magnitude of the corrections to δ13C and CO2 mole fractions varies substantially by site, underscoring the need for users to apply a consistent calibration framework to data in the NEON archive. Post-calibration data sets show that site mean annual δ13C correlates negatively with precipitation, temperature, and aridity, but positively with elevation. Forested and agricultural ecosystems exhibit larger gradients in CO2 and δ13C than other sites, particularly during the summer and at night. The overview and analysis tools developed here will facilitate cross-site analysis using NEON data, provide a model for other continental-scale observational networks, and enable new advances leveraging the isotope ratios of specific carbon fluxes.
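The first approach, calibrating each isotopologue independently and then forming δ13C, can be sketched with a hypothetical two-point (two reference tank) linear calibration. The gains, offsets, and mole fractions below are invented for illustration, and this sketch is not the paper's R package:

```python
import numpy as np

R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta13C(c12, c13):
    """delta13C (permil) from 12CO2 and 13CO2 mole fractions."""
    return ((c13 / c12) / R_VPDB - 1.0) * 1000.0

# Two reference tanks with known isotopologue mole fractions (ppm);
# values are hypothetical.
true12 = np.array([395.0, 445.0])
true13 = np.array([4.30, 4.88])

# The analyzer reads each isotopologue with its own gain and offset.
meas12 = 1.02 * true12 - 1.5
meas13 = 0.97 * true13 + 0.05

# Calibrate each isotopologue independently with a two-point linear fit,
# then form delta13C from the calibrated mole fractions.
g12, o12 = np.polyfit(meas12, true12, 1)
g13, o13 = np.polyfit(meas13, true13, 1)

sample12 = 1.02 * 410.0 - 1.5    # a sample seen through the same analyzer
sample13 = 0.97 * 4.50 + 0.05
cal12 = g12 * sample12 + o12
cal13 = g13 * sample13 + o13
print(f"calibrated delta13C: {delta13C(cal12, cal13):.2f} permil")
```

The alternative approach regresses measured δ13C directly against reference δ13C; the abstract reports that the isotopologue-wise route above gives lower δ13C bias at a small cost in CO2 mole fraction accuracy.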
Streamflow prediction is crucial for planning future developments and safety measures along river basins, especially in the face of changing climate patterns. In this study, we utilized monthly streamflow data from the United States Bureau of Reclamation and meteorological data (snow water equivalent, temperature, and precipitation) from various weather monitoring stations of the Snow Telemetry Network within the Upper Colorado River Basin to forecast monthly streamflow at Lees Ferry, a specific location along the Colorado River in the basin. Four models—Random Forest Regression, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Seasonal Autoregressive Integrated Moving Average (SARIMA)—were trained using 30 years of monthly data (1991–2020), split into 80% for training (1991–2014) and 20% for testing (2015–2020). Initially, only historical streamflow data were used for predictions, followed by including meteorological factors to assess their impact on streamflow. Subsequently, sequence analysis was conducted to explore various input–output sequence window combinations. We then evaluated the influence of each factor on streamflow by testing all possible combinations to identify the optimal feature combination for prediction. Our results indicate that the Random Forest Regression model consistently outperformed the others, especially after integrating all meteorological factors with historical streamflow data. The best performance was achieved with a 24-month look-back period to predict 12 months of streamflow, yielding a Root Mean Square Error of 2.25 and an R-squared (R2) of 0.80. Finally, to assess model generalizability, we tested the best model at other locations in the basin: Greenwood Springs (Colorado River), Maybell (Yampa River), and Archuleta (San Juan River).
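The look-back/horizon windowing described above can be sketched as follows. The toy seasonal series and the helper function are invented for illustration; only the 24-month input / 12-month output configuration comes from the abstract:

```python
def make_windows(series, look_back=24, horizon=12):
    """Build (input, target) pairs for sequence models: each input is
    `look_back` past monthly values, each target the next `horizon`."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back:i + look_back + horizon])
    return X, y

# 360 months (30 years) of a toy series with a spring-melt bump,
# standing in for monthly streamflow.
months = [100 + 40 * ((m % 12) in (4, 5, 6)) for m in range(360)]
X, y = make_windows(months, look_back=24, horizon=12)
print(len(X), "windows of", len(X[0]), "inputs and", len(y[0]), "targets")
```

Each (X[i], y[i]) pair can then be fed to any of the four models; only the model consuming the windows changes.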