Title: Enabling Large-scale Fine-grained Simulation of IED Vapor Concentration in Open-air Environments
Ammonium nitrate and nitromethane are two of the most prevalent ingredients in improvised explosive devices (IEDs). Developing a detection system for IEDs at open public events, where no dedicated checkpoints are available, requires many large-scale, fine-grained simulations to estimate explosive vapor concentrations. However, such large-scale molecular simulations at the required granularity are very time-consuming and in most cases infeasible. In this paper, we propose region-specific meshing to alleviate the computational cost. The proposed simulation methodology yields results comparable in accuracy to a baseline simulation with a fine-grained mesh (over a small area) while significantly reducing simulation time, so large-scale simulations become achievable at a feasible computational cost.
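To make the idea of region-specific meshing concrete, here is a minimal sketch, not the paper's solver: a 1-D diffusion of a vapor plume on a grid that is fine only near the source and coarse elsewhere. The grid dimensions, diffusion coefficient, and function names are illustrative assumptions.

import numpy as np

# Illustrative only: 1-D diffusion of a vapor plume on a region-specific grid
# that is fine near the source (around x = 0) and coarse elsewhere. Grid sizes,
# the diffusion coefficient, and all names are assumptions, not the paper's setup.

def make_region_specific_grid(fine_half_width=1.0, fine_dx=0.02,
                              domain=(-10.0, 10.0), coarse_dx=0.5):
    """Fine spacing inside |x| < fine_half_width, coarse spacing outside."""
    left = np.arange(domain[0], -fine_half_width, coarse_dx)
    fine = np.arange(-fine_half_width, fine_half_width, fine_dx)
    right = np.arange(fine_half_width, domain[1] + coarse_dx, coarse_dx)
    return np.unique(np.concatenate([left, fine, right]))

def diffuse(x, c0, D=0.1, t_end=5.0):
    """Explicit finite-difference diffusion of concentration c0 on a non-uniform grid x."""
    c = c0.copy()
    h = np.diff(x)
    dt = 0.4 * h.min() ** 2 / D              # stability limit set by the finest cell
    for _ in range(int(t_end / dt)):
        hl, hr = h[:-1], h[1:]
        lap = 2.0 * (c[:-2] / (hl * (hl + hr))
                     - c[1:-1] / (hl * hr)
                     + c[2:] / (hr * (hl + hr)))
        c[1:-1] += dt * D * lap              # interior update; boundary values held fixed
    return c

x = make_region_specific_grid()
c = diffuse(x, np.exp(-(x / 0.2) ** 2))      # Gaussian plume released at the source
print(f"{x.size} cells on the mixed grid vs about {round(20 / 0.02)} on a uniformly fine grid")

Even in this toy setting the mixed grid uses roughly an order of magnitude fewer cells than a uniformly fine mesh, which is the kind of saving the paper pursues at much larger scale.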
Award ID(s):
1739451
PAR ID:
10196059
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS)
Page Range / eLocation ID:
73 to 76
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Crowdsourcing is popular for large-scale data collection and labeling, but a major challenge is detecting low-quality submissions. Recent studies have demonstrated that behavioral features of workers are highly correlated with data quality and can be useful in quality control. However, these studies primarily leveraged coarsely extracted behavioral features, and did not further explore quality control at the fine-grained level, i.e., the annotation unit level. In this paper, we investigate the feasibility and benefits of using fine-grained behavioral features, which are behavioral features finely extracted from a worker's individual interactions with each single unit in a subtask, for quality control in crowdsourcing. We design and implement a framework named Fine-grained Behavior-based Quality Control (FBQC) that specifically extracts fine-grained behavioral features to provide three quality control mechanisms: (1) quality prediction for objective tasks, (2) suspicious behavior detection for subjective tasks, and (3) unsupervised worker categorization. Using the FBQC framework, we conduct two real-world crowdsourcing experiments and demonstrate that using fine-grained behavioral features is feasible and beneficial in all three quality control mechanisms. Our work provides clues and implications for helping job requesters or crowdsourcing platforms further achieve better quality control.
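As a rough illustration of unit-level behavioral features, the sketch below extracts per-unit dwell time, mouse activity, and revisit counts from a hypothetical interaction log and feeds them to a generic classifier; the event schema, feature set, and model are assumptions and do not reproduce the FBQC framework.

from collections import defaultdict

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch only: a few unit-level behavioral features from a toy
# interaction log, fed to a generic quality classifier.

def unit_features(events):
    """events: (unit_id, timestamp_in_seconds, event_type) tuples for one worker.
    Returns unit_id -> [dwell_time, n_mouse_moves, n_revisits]."""
    per_unit = defaultdict(lambda: {"times": [], "moves": 0, "visits": 0})
    for unit_id, t, etype in events:
        rec = per_unit[unit_id]
        rec["times"].append(t)
        rec["moves"] += etype == "mousemove"
        rec["visits"] += etype == "focus"
    return {u: [max(r["times"]) - min(r["times"]), r["moves"], max(r["visits"] - 1, 0)]
            for u, r in per_unit.items()}

# Stand-in training data: unit-level feature vectors with known correctness labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
quality_model = LogisticRegression().fit(X, y)

log = [("u1", 0.0, "focus"), ("u1", 0.4, "mousemove"), ("u1", 2.1, "click"),
       ("u1", 9.0, "focus"), ("u2", 3.0, "focus"), ("u2", 3.2, "click")]
feats = unit_features(log)
print(feats)
print(quality_model.predict(np.array(list(feats.values()))))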
  2. Baeza-Yates, Ricardo; Bonchi, Francesco (Ed.)
    Fine-grained entity typing (FET) is the task of identifying specific entity types at a fine-grained level for entity mentions based on their contextual information. Conventional methods for FET require extensive human annotation, which is time-consuming and costly given the massive scale of data. Recent studies have been developing weakly supervised or zero-shot approaches. We study the setting of zero-shot FET where only an ontology is provided. However, most existing ontology structures lack rich supporting information and even contain ambiguous relations, making them ineffective in guiding FET. Recently developed language models, though promising in various few-shot and zero-shot NLP tasks, may face challenges in zero-shot FET due to their lack of interaction with task-specific ontology. In this study, we propose OnEFET, where we (1) enrich each node in the ontology structure with two categories of extra information: instance information for training sample augmentation and topic information to relate types with contexts, and (2) develop a coarse-to-fine typing algorithm that exploits the enriched information by training an entailment model with contrasting topics and instance-based augmented training samples. Our experiments show that OnEFET achieves high-quality fine-grained entity typing without human annotation, outperforming existing zero-shot methods by a large margin and rivaling supervised methods. OnEFET also enjoys strong transferability to unseen and finer-grained types. Code is available at https://github.com/ozyyshr/OnEFET.
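A rough sketch of the coarse-to-fine, entailment-based typing step follows, using an off-the-shelf NLI pipeline (which downloads a pretrained model on first run); the toy ontology, hypothesis template, and model choice are assumptions, and OnEFET's ontology enrichment and training procedure are not reproduced here (see the linked repository for the actual code).

from transformers import pipeline

# Rough sketch only: coarse-to-fine entity typing with an off-the-shelf NLI model.
# The two-level ontology and the hypothesis template are illustrative assumptions.

ontology = {
    "person": ["athlete", "politician", "musician"],
    "location": ["city", "country", "body of water"],
}

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def type_mention(sentence, mention):
    template = f"In this context, {mention} is a {{}}."
    coarse = nli(sentence, list(ontology), hypothesis_template=template)
    best_coarse = coarse["labels"][0]                  # coarse pass over top-level types
    fine = nli(sentence, ontology[best_coarse], hypothesis_template=template)
    return best_coarse, fine["labels"][0]              # fine pass over that type's children

print(type_mention("Messi scored twice for Argentina last night.", "Messi"))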
  3.
    Intelligent thought is the product of efficient neural information processing, which is embedded in fine-grained, topographically organized population responses and supported by fine-grained patterns of connectivity among cortical fields. Previous work on the neural basis of intelligence, however, has focused on coarse-grained features of brain anatomy and function because cortical topographies are highly idiosyncratic at a finer scale, obscuring individual differences in fine-grained connectivity patterns. We used a computational algorithm, hyperalignment, to resolve these topographic idiosyncrasies and found that predictions of general intelligence based on fine-grained (vertex-by-vertex) connectivity patterns were markedly stronger than predictions based on coarse-grained (region-by-region) patterns. Intelligence was best predicted by fine-grained connectivity in the default and frontoparietal cortical systems, both of which are associated with self-generated thought. Previous work overlooked fine-grained architecture because existing methods could not resolve idiosyncratic topographies, preventing investigation at the scale where the keys to the neural basis of intelligence are more likely to be found.
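The contrast between coarse- and fine-grained connectivity features can be illustrated with a small synthetic experiment like the one below; the data are simulated, and the hyperalignment step that makes fine-grained features comparable across individuals is not implemented here.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Synthetic-data illustration only: compare coarse (region-averaged) against
# fine-grained (vertex-level) connectivity features for predicting a score.

rng = np.random.default_rng(1)
n_subjects, n_regions, n_vertices = 200, 50, 2000

fine = rng.normal(size=(n_subjects, n_vertices))          # vertex-wise connectivity profiles
coarse = fine.reshape(n_subjects, n_regions, -1).mean(2)  # region averages of the same data
weights = rng.normal(size=n_vertices)
score = fine @ weights + rng.normal(scale=5.0, size=n_subjects)  # score depends on fine detail

for name, X in [("coarse (region-by-region)", coarse), ("fine (vertex-by-vertex)", fine)]:
    r2 = cross_val_score(RidgeCV(alphas=np.logspace(-2, 4, 20)), X, score, cv=5)
    print(name, round(float(r2.mean()), 2))

Because the simulated score depends on vertex-level detail that region averaging washes out, the fine-grained features predict it far better, mirroring the qualitative finding described above.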
  4.
    Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999, and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
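The way a fine-grained conjecture of the form "at least 2^{cn} time steps" turns into a concrete wall-clock estimate can be sketched with simple arithmetic; the machine speed and the values of n below are placeholders, not the paper's careful accounting of gates and constants.

# Back-of-the-envelope sketch: how a conjectured 2^(c*n) lower bound becomes a
# wall-clock estimate. Constants and instance sizes are placeholders.

SUPERCOMPUTER_OPS_PER_SECOND = 1e18      # roughly an exascale machine
SECONDS_PER_YEAR = 3.15e7

def years_to_simulate(n, c):
    """Wall-clock years if classical simulation needs at least 2^(c*n) steps."""
    return 2 ** (c * n) / SUPERCOMPUTER_OPS_PER_SECOND / SECONDS_PER_YEAR

for n in (100, 200, 300):
    print(n, f"{years_to_simulate(n, c=0.5):.2g} years")

With c = 1/2, an instance around n ≈ 200 already corresponds to roughly 2^100 ≈ 10^30 steps, i.e. tens of thousands of years on an exascale machine, which is the flavor of the estimates quoted above.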
  5. Modelling of fluid–particle interactions is a major area of research in many fields of science and engineering. There are several techniques that allow modelling of such interactions, among which the coupling of computational fluid dynamics (CFD) and the discrete element method (DEM) is one of the most convenient solutions due to the balance between accuracy and computational cost. However, the accuracy of this method depends strongly on mesh size: obtaining realistic results requires a small mesh, which in turn increases computational intensity. To compensate for the inaccuracies of using a large mesh in such modelling, while still taking advantage of rapid computation, we extended the classical modelling approach by combining it with a machine learning model. We conducted seven simulations: the first is a numerical model with a fine mesh (i.e. the ground truth), with very high accuracy and computational time; the next three are constructed on coarse meshes, with considerably less accuracy and computational burden; and the last three are assisted by machine learning, which yields large improvements in capturing fine-scale features while still being based on a coarse mesh. The results of this study show that there is a great opportunity for machine learning to improve classical fluid–particle modelling approaches by producing highly accurate models for large-scale systems in a reasonable time.
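A toy version of this machine-learning-assisted correction can be sketched as follows: a model learns to map coarse-mesh values back to the fine-scale field they were averaged from. The synthetic 1-D field and the small network stand in for the CFD-DEM solver and the authors' actual model, which are not reproduced here.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy illustration only: learn a map from a coarse-mesh stencil back to the
# fine-scale values it was averaged from.

rng = np.random.default_rng(2)
FACTOR = 8                                           # one coarse cell = 8 fine cells

def random_field(n_fine=1024):
    """Smooth random 1-D 'ground truth' field on the fine mesh."""
    k = np.fft.rfftfreq(n_fine)
    spectrum = rng.normal(size=k.size) * np.exp(-(40 * k) ** 2)
    return np.fft.irfft(spectrum, n_fine)

def coarse_patches(fine):
    """Inputs: 3-cell coarse stencils. Targets: the fine values inside each cell."""
    coarse = fine.reshape(-1, FACTOR).mean(1)        # block-average to the coarse mesh
    padded = np.pad(coarse, 1, mode="edge")
    X = np.stack([padded[i:i + 3] for i in range(coarse.size)])
    y = fine.reshape(-1, FACTOR)
    return X, y

X_train, y_train = coarse_patches(random_field())
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

X_test, y_test = coarse_patches(random_field())
error = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print("fine-scale reconstruction RMSE:", float(error))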