This content will become publicly available on June 17, 2026
Effect of Acquisition Noise Outliers on Steganalysis
Understanding the mechanisms that lead to false alarms (erroneously detecting cover images as containing secrets) in steganalysis is a topic of utmost importance for practical applications. In this paper, we present evidence that a relatively small number of pixel outliers introduced by the image acquisition process can skew the soft output of a data-driven detector to produce a strong false alarm. To verify this hypothesis, for a cover image we estimate a statistical model of the acquisition noise in the developed domain and identify the pixels that contribute the most to the associated likelihood ratio test (LRT) for steganography. We call such cover elements LIEs (Locally Influential Elements). The effect of LIEs on the output of a data-driven detector is demonstrated by turning a strong false alarm into a correctly classified cover by introducing a relatively small number of "de-embedding" changes at LIEs. Similarly, we show that it is possible to introduce a small number of LIEs into a strong cover to make a data-driven detector classify it as stego. Our findings are supported by experiments on two datasets with three steganographic algorithms and four types of data-driven detectors.
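As a rough illustration of the idea behind LIEs, the sketch below scores each pixel by its contribution to a Gaussian log-likelihood ratio and returns the most influential ones. This is a minimal sketch under an assumed i.i.d. per-pixel Gaussian noise model; the function `lie_scores` and its signature are illustrative, not the paper's actual implementation.

```python
import numpy as np

def lie_scores(image, mu, sigma, top_k=100):
    """Score pixels by their contribution to a Gaussian log-likelihood
    ratio statistic and return the indices of the top_k most influential
    elements. `mu` and `sigma` are per-pixel estimates of the
    acquisition-noise model (same shape as `image`)."""
    # Standardized residual of each pixel w.r.t. the fitted noise model.
    z = (image - mu) / sigma
    # Per-pixel log-likelihood contribution (up to constants); pixels
    # with large |z| dominate the aggregated test statistic.
    contrib = 0.5 * z**2
    flat = np.argsort(contrib, axis=None)[::-1][:top_k]
    return np.unravel_index(flat, image.shape)

# Toy usage: a smooth synthetic "cover" with one injected outlier.
rng = np.random.default_rng(0)
img = rng.normal(128.0, 2.0, size=(64, 64))
img[5, 7] += 25.0  # simulated acquisition-noise outlier
rows, cols = lie_scores(img, mu=128.0 * np.ones_like(img),
                        sigma=2.0 * np.ones_like(img), top_k=1)
```

On this toy input, the injected outlier at (5, 7) dominates the likelihood-ratio contribution and is the single pixel returned.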
- Award ID(s):
- 2324991
- PAR ID:
- 10621653
- Publisher / Repository:
- ACM
- Date Published:
- ISSN:
- 979-8-4007-1887-8/25/06
- ISBN:
- 9798400718878
- Page Range / eLocation ID:
- 164 to 173
- Subject(s) / Keyword(s):
- Steganalysis, false alarm, LIE, acquisition noise, CNNs
- Format(s):
- Medium: X
- Location:
- San Jose USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
Incorporating learning-based components in current state-of-the-art cyber-physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the potential to revolutionize autonomous systems, medicine, and other safety-critical domains, because it would allow system designers to use high-dimensional outputs from sensors like cameras and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world. Recent reports of self-driving cars running into difficult-to-handle scenarios can be traced to the software components that process such sensor inputs. The ability to handle such high-dimensional signals is due to the explosion of algorithms which use deep neural networks. Sadly, the safety issues also stem from deep neural networks themselves. The pitfalls occur due to possible over-fitting and lack of awareness of the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible; however, achieving meaningful coverage is impossible. This naturally leads to the following question: is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector should also be computationally efficient, because OOD detectors are often executed as frequently as the sensors are sampled. Our aim in this article is to build an effective anomaly detector. To this end, we propose the idea of a memory bank to cache data samples that are representative enough to cover most of the in-distribution data. The similarity with respect to such samples can be a measure of the familiarity of the test input. This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt the conformal anomaly detection framework to capture distribution shifts with a guaranteed false-alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the simulator CARLA with image inputs, and an autonomous racing car navigation setting with LiDAR inputs. From the experiments, it is clear that a deviation from the in-distribution setting can potentially lead to unsafe behavior. It should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble with predictable behavior. An added benefit of our memory-based approach is that the OOD detector produces interpretable feedback for a human designer, which is of utmost importance since it suggests a potential fix for the situation as well. In other competing approaches, such feedback is difficult to obtain due to reliance on techniques such as variational autoencoders.
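The memory-bank-plus-conformal scheme this abstract describes can be sketched in a few lines: the nonconformity score is the distance to the nearest cached sample, and a conformal p-value converts that score into a detection with a guaranteed false-alarm rate under exchangeability. This is a hedged toy sketch on synthetic vectors, not the article's implementation; all names and parameters here are illustrative.

```python
import numpy as np

def nonconformity(x, memory):
    """Distance to the nearest cached in-distribution sample."""
    return np.min(np.linalg.norm(memory - x, axis=1))

def conformal_p_value(calib_scores, test_score):
    """Conformal p-value: fraction of calibration nonconformity scores
    at least as large as the test score (counting the test point)."""
    n = len(calib_scores)
    return (np.sum(calib_scores >= test_score) + 1) / (n + 1)

rng = np.random.default_rng(1)
memory = rng.normal(0.0, 1.0, size=(200, 8))   # cached representatives
calib = rng.normal(0.0, 1.0, size=(100, 8))    # held-out in-dist data
calib_scores = np.array([nonconformity(x, memory) for x in calib])

in_dist = rng.normal(0.0, 1.0, size=8)
ood = rng.normal(6.0, 1.0, size=8)             # shifted distribution

alpha = 0.05  # target false-alarm rate, guaranteed under exchangeability
p_in = conformal_p_value(calib_scores, nonconformity(in_dist, memory))
p_ood = conformal_p_value(calib_scores, nonconformity(ood, memory))
# Flag as OOD when the p-value drops below alpha.
```

The guarantee comes from the conformal construction: for exchangeable in-distribution data, p-values are (super-)uniform, so thresholding at alpha bounds the false-alarm rate by alpha regardless of the distance function chosen.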
-
Brankov, Jovan G; Anastasio, Mark A (Eds.) Artificial intelligence (AI) tools are designed to improve the efficacy and efficiency of data analysis and interpretation by the human decision maker. However, we know little about the optimal ways to present AI output to providers. This study used radiology image interpretation with AI-based decision support to explore the impact of different forms of AI output on reader performance. Readers included 5 experienced radiologists and 3 radiology residents reporting on a series of COVID chest x-ray images. Four forms of AI output (a one-word summary of the diagnosis (normal, mild, moderate, severe), a probability graph, a heatmap, and a heatmap plus probability graph), along with a no-AI-feedback condition, were evaluated. Results reveal that most decisions regarding the presence/absence of COVID without AI were correct and remained largely unchanged across all types of AI output. Fewer than 1% of the decisions changed as a function of seeing the AI output were negative (true positive to false negative or true negative to false positive) regarding the presence/absence of COVID, and about 1% were positive (false negative to true positive, false positive to true negative). More complex output formats (e.g., a heatmap plus a probability graph) tend to increase reading time and the number of scans between the clinical image and the AI output, as revealed through eye tracking. The key to the success of AI tools in medical imaging will be to incorporate the human into the overall process to optimize and synergize the human-computer dyad, since, at least for the foreseeable future, the human is and will be the ultimate decision maker. Our results demonstrate that the form of the AI output is important, as it can impact clinical decision making and efficiency.
-
While deep learning has revolutionized image steganalysis in terms of performance, little is known about how much modern data-driven detectors can still be improved. In this paper, we approach this difficult and currently wide-open question by working with artificial but realistic-looking images with a known statistical model, which allows us to compute the detectability of modern content-adaptive algorithms with respect to the most powerful detectors. Multiple artificial image datasets are crafted with different levels of content complexity and noise power to assess their influence on the gap between both types of detectors. Experiments with SRNet as the heuristic detector indicate that independent noise contributes less to the performance gap than content of the same MSE. While this loss is rather small for smooth images, it can be quite large for textured images. A network trained on many realizations of a fixed textured scene will, however, recuperate most of the loss, suggesting that networks have the capacity to approximately learn the parameters of a cover source narrowed to a fixed scene.
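To illustrate what "computing detectability with respect to the most powerful detector" can mean when the cover model is fully known, the sketch below Monte-Carlo-estimates the minimum total error probability of the likelihood-ratio test for a simple Gaussian-vs-Gaussian case. This is a hedged toy example with illustrative parameters, not the paper's setup; with both hypotheses zero-mean Gaussian, the LRT reduces to thresholding the sample energy.

```python
import numpy as np

def optimal_pe(n_pixels=4096, sigma=2.0, stego_std=0.5,
               trials=2000, seed=0):
    """Monte-Carlo estimate of the minimum total error probability P_E
    of the likelihood-ratio test distinguishing i.i.d. Gaussian cover
    noise N(0, sigma^2) from stego noise N(0, sigma^2 + stego_std^2).
    For this pair of hypotheses the LRT is equivalent to thresholding
    the sample energy sum(x_i^2)."""
    rng = np.random.default_rng(seed)
    cover = rng.normal(0.0, sigma, (trials, n_pixels))
    stego = rng.normal(0.0, np.sqrt(sigma**2 + stego_std**2),
                       (trials, n_pixels))
    t_cover = (cover**2).sum(axis=1)
    t_stego = (stego**2).sum(axis=1)
    # Sweep thresholds over the pooled statistics;
    # P_E = min over thresholds of (P_FA + P_MD) / 2.
    thr = np.sort(np.concatenate([t_cover, t_stego]))
    pfa = (t_cover[None, :] > thr[:, None]).mean(axis=1)
    pmd = (t_stego[None, :] <= thr[:, None]).mean(axis=1)
    return float(np.min(0.5 * (pfa + pmd)))

pe = optimal_pe()
```

Comparing this closed-model bound against the empirical error of a trained network is one way to quantify the gap the abstract discusses; a larger embedding power (`stego_std`) makes the optimal test's error smaller.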
