Optimal exploration of engineering systems can be guided by the principle of Value of Information (VoI), which accounts for the topological importance of components, their reliability, and management costs. For series systems, in most cases higher inspection priority should be given to unreliable components. For redundant systems such as parallel systems, analysis of one-shot decision problems shows that higher inspection priority should be given to more reliable components. This paper investigates the optimal exploration of redundant systems in long-term decision making with sequential inspection and repair. When the expected cumulative discounted cost is considered, it may become more efficient to give higher inspection priority to less reliable components, in order to preserve system redundancy. To investigate this problem, we develop a Partially Observable Markov Decision Process (POMDP) framework for sequential inspection and maintenance of redundant systems, where the VoI analysis is embedded in the optimal selection of exploratory actions. We investigate the use of alternative approximate POMDP solvers for parallel and more general systems, compare their computational complexity and performance, and show how the inspection priorities depend on the economic discount factor, the degradation rate, the inspection precision, and the repair cost.
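To make the one-shot result concrete, the following is a minimal illustrative sketch (not the paper's POMDP framework) of the Value of Information of a perfect inspection in a two-component parallel system; all probabilities and costs are hypothetical:

```python
# Illustrative one-shot VoI for a two-component parallel system; all numbers
# are made up and the model is far simpler than the paper's POMDP.

def expected_cost(p_fail_a, p_fail_b, c_failure, c_repair, repair_a):
    """Expected cost of one period, optionally repairing component A first."""
    if repair_a:
        p_fail_a, base = 0.0, c_repair      # a repair restores the component
    else:
        base = 0.0
    return base + p_fail_a * p_fail_b * c_failure  # parallel: fails only if both fail

def voi_inspect_a(p_fail_a, p_fail_b, c_failure, c_repair):
    """Value of a perfect inspection of component A before deciding on repair."""
    # Without inspection: choose the better action under the prior belief.
    prior = min(expected_cost(p_fail_a, p_fail_b, c_failure, c_repair, r)
                for r in (False, True))
    # With inspection: act optimally in each revealed state, weighted by the prior.
    posterior = (
        p_fail_a * min(expected_cost(1.0, p_fail_b, c_failure, c_repair, r)
                       for r in (False, True))
        + (1 - p_fail_a) * min(expected_cost(0.0, p_fail_b, c_failure, c_repair, r)
                               for r in (False, True)))
    return prior - posterior

# Inspecting the MORE reliable component has the higher VoI in this one-shot setting:
print(voi_inspect_a(0.1, 0.4, c_failure=100.0, c_repair=5.0))  # 3.5
print(voi_inspect_a(0.4, 0.1, c_failure=100.0, c_repair=5.0))  # 2.0
```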
The increasing amount of data, and its growing use, in the information era have raised questions about data quality and its impact on decision-making. Currently, the importance of high-quality data is widely recognized by researchers and decision-makers. Sewer inspection data have been collected for over three decades, but their reliability has been questionable: it is estimated that between 25% and 50% of sewer inspection data are not usable due to data quality problems. To address these reliability problems, a data quality evaluation framework is developed. Data quality evaluation is a multi-dimensional concept that includes both subjective perceptions and objective measurements. Five data quality metrics were defined to assess different quality dimensions of the sewer inspection data: Accuracy, Consistency, Completeness, Uniqueness, and Validity. These metrics were calculated for the collected sewer inspection data, and it was found that consistency and uniqueness are the major problems in current sewer pipeline inspection practice. This paper contributes to the overall body of knowledge by providing, for the first time, a robust data quality evaluation framework for sewer system data, which will result in quality data for sewer asset management.
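As an illustration of how such metrics can be scored, here is a minimal sketch over a toy table of inspection records; the field names, rules, and reference code list are assumptions, not the paper's exact definitions:

```python
# Minimal sketch of the five data quality dimensions named above, applied to a
# toy table of sewer inspection records. All fields and rules are illustrative.
import pandas as pd

records = pd.DataFrame({
    "pipe_id":     ["P1", "P2", "P2", "P3", "P4"],
    "material":    ["PVC", "VCP", "VCP", "pvc", None],  # inconsistent casing, missing
    "diameter_mm": [200, 250, 250, -50, 300],           # a negative value is invalid
    "defect_code": ["CC", "FL", "FL", "CC", "ZZ"],      # "ZZ" not in the code list
})

valid_codes = {"CC", "FL", "RB"}                         # assumed reference list

completeness = 1 - records.isna().mean().mean()          # share of non-missing cells
uniqueness   = 1 - records.duplicated().mean()           # share of non-duplicate rows
validity     = (records["diameter_mm"] > 0).mean()       # domain rule: positive diameter
consistency  = records["material"].dropna().str.isupper().mean()  # one casing convention
accuracy     = records["defect_code"].isin(valid_codes).mean()    # match to reference data

for name, score in [("Completeness", completeness), ("Uniqueness", uniqueness),
                    ("Validity", validity), ("Consistency", consistency),
                    ("Accuracy", accuracy)]:
    print(f"{name}: {score:.2f}")
```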
- Award ID(s): 2141184
- PAR ID: 10484419
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Water
- Volume: 15
- Issue: 11
- ISSN: 2073-4441
- Page Range / eLocation ID: 2043
- Subject(s) / Keyword(s): data quality; sewer infrastructure; pipeline assessment certification program; sewer asset management
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Diffusion models have established a new state of the art in a multitude of computer vision tasks, including image restoration. Diffusion-based inverse problem solvers generate reconstructions of exceptional visual quality from heavily corrupted measurements. However, in what is widely known as the perception-distortion trade-off, the price of perceptually appealing reconstructions is often paid in declined distortion metrics, such as PSNR. Distortion metrics measure faithfulness to the observation, a crucial requirement in inverse problems. In this work, we propose a novel framework for inverse problem solving: we assume that the observation comes from a stochastic degradation process that gradually degrades and noises the original clean image. We learn to reverse the degradation process in order to recover the clean image. Our technique maintains consistency with the original measurement throughout the reverse process and allows for great flexibility in trading off perceptual quality for improved distortion metrics and sampling speedup via early stopping. We demonstrate the efficiency of our method on different high-resolution datasets and inverse problems, achieving great improvements over other state-of-the-art diffusion-based methods with respect to both perceptual and distortion metrics. Source code and pre-trained models will be released soon.
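As a rough illustration of the loop structure such a framework implies (denoise, enforce measurement consistency, perturb), here is a conceptual sketch; the smoothing "denoiser" is a stand-in for the trained network of the actual method, and every constant is an assumption:

```python
# Conceptual sketch only: a reverse-degradation loop with measurement
# consistency and early stopping. Not the paper's method or network.
import numpy as np

def data_consistency(x, y, mask):
    """Keep the reconstruction faithful: overwrite observed entries with the measurement."""
    return np.where(mask, y, x)

def placeholder_denoiser(x, strength=0.5):
    """Stand-in for a trained network: simple neighbor smoothing."""
    smoothed = (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0
    return (1 - strength) * x + strength * smoothed

def reverse_degradation(y, mask, n_steps=100, early_stop=None, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(y.shape)              # start from pure noise
    stop = n_steps if early_stop is None else early_stop
    for t in range(stop):                          # early stop favors distortion metrics and speed
        sigma = 1.0 - (t + 1) / n_steps            # shrinking noise scale
        x = placeholder_denoiser(x)                # one reverse "degradation" step
        x = data_consistency(x, y, mask)           # consistency throughout the reverse process
        x += 0.05 * sigma * rng.standard_normal(x.shape)  # stochastic sampling step
    return data_consistency(x, y, mask)

# Toy usage: restore a 1-D "image" where only the first half is observed.
y = np.concatenate([np.ones(8), np.zeros(8)])
mask = np.arange(16) < 8
x_hat = reverse_degradation(y, mask, early_stop=60)
```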
-
A controlled vocabulary list that was originally developed for the automotive assembly environment was modified for home appliance assembly in this study. After surveying over 700 assembly tasks with the original vocabulary, additions were made to the vocabulary list as necessary. The vocabulary allowed for the transformation of work instructions in approximately 90% of cases, with the most discrepancies occurring during the inspection phase of the transfer line. The modified vocabulary list was then tested for coder reliability to ensure broad usability and was found to have Cohen’s kappa values of 0.671 < κ < 0.848 between coders and kappa values of 0.731 < κ < 0.875 within coders over time. Using this analysis, it was demonstrated that this original automotive vocabulary could be applied to the non-automotive context with a high degree of reliability and consistency.
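For reference, Cohen's kappa compares observed coder agreement to the agreement expected by chance. A short sketch, with made-up task labels standing in for the controlled vocabulary:

```python
# Cohen's kappa for two coders labeling the same items; the label sequences
# below are invented examples, not the study's data.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: product of each category's marginal frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

coder1 = ["grasp", "place", "inspect", "grasp", "fasten", "place"]
coder2 = ["grasp", "place", "inspect", "place", "fasten", "place"]
print(f"kappa = {cohens_kappa(coder1, coder2):.3f}")  # ~0.769
```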
-
Wearable recordings of neurophysiological signals captured from the wrist offer enormous potential for seizure monitoring. Yet, data quality remains one of the most challenging factors that impact data reliability. We suggest a combined data quality assessment tool for the evaluation of multimodal wearable data. We analyzed data from patients with epilepsy from four epilepsy centers. Patients wore wristbands recording accelerometry, electrodermal activity, blood volume pulse, and skin temperature. We calculated data completeness and assessed the time the device was worn (on-body), and modality-specific signal quality scores. We included 37,166 h from 632 patients in the inpatient and 90,776 h from 39 patients in the outpatient setting. All modalities were affected by artifacts. Data loss was higher when using data streaming (up to 49% among inpatient cohorts, averaged across respective recordings) as compared to onboard device recording and storage (up to 9%). On-body scores, estimating the percentage of time a device was worn on the body, were consistently high across cohorts (more than 80%). Signal quality of some modalities, based on established indices, was higher at night than during the day. A uniformly reported data quality and multimodal signal quality index is feasible, makes study results more comparable, and contributes to the development of devices and evaluation routines necessary for seizure monitoring.
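Two of the reported scores are straightforward to sketch. The sampling rate, temperature window, and data below are illustrative assumptions, not the study's exact definitions:

```python
# Illustrative data completeness and on-body scores for wrist-worn recordings.
import numpy as np

def completeness(received_samples, duration_s, fs_hz):
    """Fraction of expected samples actually recorded."""
    return min(1.0, received_samples / (duration_s * fs_hz))

def on_body_fraction(skin_temp_c, low=25.0, high=40.0):
    """Fraction of samples with physiologically plausible skin temperature."""
    temp = np.asarray(skin_temp_c)
    return np.mean((temp > low) & (temp < high))

# Toy usage: 1 hour of 4 Hz skin temperature, with a stretch where the device
# was off-body (ambient ~22 C) but still recording.
temp = np.concatenate([np.full(10000, 33.0), np.full(4400, 22.0)])
print(completeness(len(temp), duration_s=3600, fs_hz=4))  # 1.0 (no samples lost)
print(round(on_body_fraction(temp), 2))                   # ~0.69 worn on body
```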
-
The operation of today’s data centers increasingly relies on environmental data collection and analysis to operate the cooling infrastructure as efficiently as possible and to maintain the reliability of IT equipment. This in turn emphasizes the importance of the quality of the data collected and their relevance to the overall operation of the data center. This study presents an experimentally based analysis and comparison between two different approaches for environmental data collection; one using a discrete sensor network, and another using available data from installed IT equipment through their Intelligent Platform Management Interface (IPMI). The comparison considers the quality and relevance of the data collected and investigates their effect on key performance and operational metrics. The results have shown the large variation of server inlet temperatures provided by the IPMI interface. On the other hand, the discrete sensor measurements showed much more reliable results where the server inlet temperatures had minimal variation inside the cold aisle. These results highlight the potential difficulty in using IPMI inlet temperature data to evaluate the thermal environment inside the contained cold aisle. The study also focuses on how industry common methods for cooling efficiency management and control can be affected by the data collection approach. Results have shown that using preheated IPMI inlet temperature data can lead to unnecessarily lower cooling set points, which in turn minimizes the potential cooling energy savings. It was shown in one case that using discrete sensor data for control provides 20% more energy savings than using IPMI inlet temperature data.
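The control implication can be sketched in a few lines: the supply set point is typically limited by the hottest reported inlet temperature, so preheated IPMI readings shrink the available headroom. The limit and readings below are hypothetical, not the study's measurements:

```python
# Sketch of how the data source drives the cooling set point; all values assumed.
import numpy as np

ALLOWABLE_MAX_C = 27.0                                   # assumed inlet temperature limit

ipmi_inlets_c     = np.array([24.5, 26.0, 29.5, 28.0])   # preheated onboard readings
discrete_inlets_c = np.array([23.8, 24.1, 24.3, 24.0])   # cold-aisle sensor readings

def setpoint_headroom(inlets, limit=ALLOWABLE_MAX_C):
    """Headroom (deg C) by which the supply set point could be raised."""
    return limit - inlets.max()

print(setpoint_headroom(ipmi_inlets_c))       # negative: forces colder, costlier supply air
print(setpoint_headroom(discrete_inlets_c))   # positive: allows warmer, cheaper cooling
```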