Recent years have seen the rapid growth of new approaches to optical imaging, with an emphasis on extracting three-dimensional (3D) information from what is normally a two-dimensional (2D) image capture. Perhaps most importantly, the rise of computational imaging enables both new physical layouts of optical components and new algorithms to be implemented. This paper concerns the convergence of two advances: the development of a transparent focal stack imaging system using graphene photodetector arrays, and the rapid expansion of the capabilities of machine learning, including the development of powerful neural networks. We demonstrate 3D tracking of point-like objects with multilayer feedforward neural networks and extend the approach to tracking the positions of multi-point objects. Computer simulations further demonstrate how this optical system can track extended objects in 3D, highlighting the promise of combining nanophotonic devices, new optical system designs, and machine learning for new frontiers in 3D imaging.
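A minimal sketch (illustrative, not the authors' implementation) of the kind of multilayer feedforward network described above: it regresses a 3D position from a flattened focal stack. The stack geometry, layer widths, and the synthetic training loop are assumptions chosen only to make the example self-contained.

```python
# Hedged sketch: a multilayer feedforward network mapping a flattened focal
# stack to a 3D position (x, y, z). All dimensions and data are illustrative.
import torch
import torch.nn as nn

N_PLANES, H, W = 4, 16, 16          # assumed focal-stack geometry
IN_FEATURES = N_PLANES * H * W

model = nn.Sequential(
    nn.Linear(IN_FEATURES, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 3),               # output: (x, y, z)
)

# Synthetic stand-in data: random "focal stacks" and random 3D labels.
stacks = torch.rand(128, IN_FEATURES)
positions = torch.rand(128, 3)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(stacks), positions)
    loss.backward()
    optimizer.step()
```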
- Award ID(s): 1838179
- Publication Date:
- NSF-PAR ID: 10224342
- Journal Name: Nature Communications
- Volume: 12
- Issue: 1
- ISSN: 2041-1723
- Publisher: Nature Publishing Group
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract We propose a new probabilistic programming language for the design and analysis of cyber-physical systems, especially those based on machine learning. We consider several problems arising in the design process, including training a system to be robust to rare events, testing its performance under different conditions, and debugging failures. We show how a probabilistic programming language can help address these problems by specifying distributions encoding interesting types of inputs, then sampling these to generate specialized training and test data. More generally, such languages can be used to write environment models, an essential prerequisite to any formal analysis. In this paper, we focus on systems such as autonomous cars and robots, whose environment at any point in time is a scene, a configuration of physical objects and agents. We design a domain-specific language, Scenic, for describing scenarios that are distributions over scenes and the behaviors of their agents over time. Scenic combines concise, readable syntax for spatiotemporal relationships with the ability to declaratively impose hard and soft constraints over the scenario. We develop specialized techniques for sampling from the resulting distribution, taking advantage of the structure provided by Scenic’s domain-specific syntax. Finally, we apply Scenic in multiple case studies for training, testing, and debugging neural networks for perception both …
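As a rough illustration of the sampling idea this abstract describes (hard constraints enforced by rejection, soft constraints by probabilistic acceptance), here is a plain-Python sketch. It does not use Scenic's actual syntax, and all scene fields and constraint functions (ego, other, hard_constraint, soft_weight) are hypothetical.

```python
# Illustrative sketch only (plain Python, not Scenic): sampling scenes from a
# scenario distribution with one hard and one soft constraint.
import random

def sample_scene():
    ego = {"x": random.uniform(0, 100), "y": random.uniform(-2, 2)}
    other = {"x": random.uniform(0, 100), "y": random.uniform(-2, 2)}
    return {"ego": ego, "other": other}

def hard_constraint(scene):
    # e.g. require the other agent to be ahead of the ego vehicle
    return scene["other"]["x"] > scene["ego"]["x"]

def soft_weight(scene):
    # e.g. prefer scenes where the two agents are close together
    gap = scene["other"]["x"] - scene["ego"]["x"]
    return 1.0 / (1.0 + gap)

def sample_scenario(n):
    scenes = []
    while len(scenes) < n:
        scene = sample_scene()
        if not hard_constraint(scene):
            continue                      # reject: hard constraint violated
        if random.random() < soft_weight(scene):
            scenes.append(scene)          # accept with probability ∝ soft weight
    return scenes

print(sample_scenario(3))
```

Scenic itself expresses the distribution and constraints declaratively and, per the abstract, uses specialized sampling techniques rather than the naive rejection shown here.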
-
Abstract Machine learning (ML) tools are able to learn relationships between the inputs and outputs of large complex systems directly from data. However, for time-varying systems, the predictive capabilities of ML tools degrade if the systems are no longer accurately represented by the data with which the ML models were trained. For complex systems, re-training is only possible if the changes are slow relative to the rate at which large numbers of new input-output training data can be non-invasively recorded. In this work, we present an approach to deep learning for time-varying systems that does not require re-training, but instead uses adaptive feedback in the architecture of deep convolutional neural networks (CNNs). The feedback is based only on available system output measurements and is applied in the encoded low-dimensional dense layers of the encoder-decoder CNNs. First, we develop an inverse model of a complex accelerator system to map output beam measurements to input beam distributions, while both the accelerator components and the unknown input beam distribution vary rapidly with time. We then demonstrate our method on experimental measurements of the input and output beam distributions of the HiRES ultra-fast electron diffraction (UED) beam line at Lawrence Berkeley National Laboratory, …
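A minimal sketch of the latent-space feedback mechanism described above, under assumed shapes and layer sizes: an encoder-decoder CNN whose low-dimensional latent vector receives an adaptive offset updated from output measurements at inference time, while the trained weights stay frozen. This is not the paper's architecture, only an illustration of the general idea.

```python
# Hedged sketch: adapt only a latent-space offset (not the network weights)
# to reduce mismatch against a reference measurement. Shapes are illustrative.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (8, 16, 16)),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x, latent_offset):
        z = self.encoder(x) + latent_offset   # adaptive feedback enters here
        return self.decoder(z)

model = EncoderDecoder()
x = torch.rand(1, 1, 32, 32)                  # measured input image (stand-in)
target = torch.rand(1, 1, 32, 32)             # reference measurement (stand-in)

offset = torch.zeros(1, 8, requires_grad=True)
opt = torch.optim.SGD([offset], lr=0.1)       # update only the latent offset
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x, offset), target)
    loss.backward()
    opt.step()
```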
-
Abstract Computer-aided Design for Manufacturing (DFM) systems play an essential role in reducing the time taken for product development by providing manufacturability feedback to the designer before the manufacturing phase. Traditionally, DFM rules are hand-crafted and used to accelerate the engineering product design process by integrating manufacturability analysis during design. Recently, the feasibility of using a machine learning-based DFM tool to intelligently apply the DFM rules has been studied. These tools use a voxelized representation of the design and then use a 3D Convolutional Neural Network (3D-CNN) to provide manufacturability feedback. Although these frameworks work effectively, there are some limitations to the voxelized representation of the design. In this paper, we introduce a new representation of the computer-aided design (CAD) model using orthogonal distance fields (ODF). We provide a GPU-accelerated algorithm to convert standard boundary representation (B-rep) CAD models into the ODF representation. Using the ODF representation, we build a machine learning framework, similar to earlier approaches, to create a machine learning-based DFM system that provides manufacturability feedback. As proof of concept, we apply this framework to assess the manufacturability of drilled holes. The framework achieves an accuracy of more than 84% in correctly classifying the manufacturable and non-manufacturable models using the new …
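The following is a hedged sketch of the pipeline the abstract outlines: build an orthogonal-distance-field-style representation and feed it to a small 3D CNN classifier. The ODF here is simplified to per-axis distances on a toy binary occupancy grid (the paper's GPU-accelerated B-rep conversion and exact field definition may differ), and the network layout is an assumption.

```python
# Hedged sketch: simplified axis-wise distance field from a voxel grid, then a
# small 3D CNN producing a manufacturable / non-manufacturable score.
import numpy as np
import torch
import torch.nn as nn

def axis_distance(occ, axis):
    """Per-voxel number of steps along +axis to the first change in occupancy."""
    n = occ.shape[axis]
    dist = np.full(occ.shape, n, dtype=np.float32)
    for shift in range(1, n):
        moved = np.roll(occ, -shift, axis=axis)
        changed = (moved != occ) & (dist == n)
        dist[changed] = shift
    return dist

occ = np.zeros((16, 16, 16), dtype=bool)
occ[4:12, 4:12, 4:12] = True                                # toy part: a solid cube
odf = np.stack([axis_distance(occ, a) for a in range(3)])   # 3-channel field

cnn = nn.Sequential(
    nn.Conv3d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(16, 2),                         # two classes
)
logits = cnn(torch.from_numpy(odf).unsqueeze(0))            # shape (1, 2)
print(logits)
```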
-
Drilling and milling operations are material removal processes involved in everyday conventional production, especially in the high-speed metal cutting industry. The monitoring of tool information (wear, dynamic behavior, deformation, etc.) is essential to guarantee the success of product fabrication. Many methods have been applied to monitor cutting tools using information from cutting force, spindle motor current, vibration, as well as sound acoustic emission. However, those methods are indirect and sensitive to environmental noise. Here, we studied an in-process imaging technique that can capture cutting tool information while the tool is cutting metal. Just as machinists judge whether a tool is worn out by the naked eye, a vision system can directly present the performance of the machine tool. We proposed a phase-shifted strobo-stereoscopic method (Figure 1) for three-dimensional (3D) imaging. The stroboscopic instrument is usually applied for the measurement of fast-moving objects. The operating principle is as follows: when the frequency of the light source illumination is synchronized with the motion of the object, the object appears to be stationary. The motion frequency of the target is derived from the count information of the encoder signals of the working rotary spindle. If a small difference is added to the frequency, the object …
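A small worked example (assumed numbers, not measurements from this study) of the stroboscopic principle described above: when the strobe frequency is slightly detuned from the spindle frequency, the tool appears to rotate at only the difference frequency.

```python
# Illustration of the strobe principle with assumed frequencies.
spindle_hz = 200.0          # assumed spindle rotation frequency
strobe_hz = 199.5           # strobe slightly detuned from the spindle

apparent_hz = spindle_hz - strobe_hz                      # apparent rotation rate
degrees_per_flash = 360.0 * spindle_hz / strobe_hz % 360.0

print(f"Apparent rotation: {apparent_hz:.2f} Hz "
      f"({360.0 * apparent_hz:.1f} deg/s of slow apparent motion)")
print(f"Tool advances {degrees_per_flash:.2f} deg between consecutive flashes")
```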
-
Abstract The reaction-diffusion system is naturally used in chemistry to represent substances reacting and diffusing over a spatial domain. Its solution illustrates the underlying process of a chemical reaction and displays diverse spatial patterns of the substances. Numerical methods like the finite element method (FEM) are widely used to derive the approximate solution of the reaction-diffusion system. However, these methods require long computation times and large computational resources when the system becomes complex. In this paper, we study the physics of a two-dimensional one-component reaction-diffusion system by using machine learning. An encoder-decoder based convolutional neural network (CNN) is designed and trained to directly predict the concentration distribution, bypassing the expensive FEM calculation process. Different simulation parameters, boundary conditions, geometry configurations and time are considered as the input features of the proposed learning model. In particular, the trained CNN model manages to learn the time-dependent behaviour of the reaction-diffusion system through the input time feature. Thus, the model is capable of providing concentration predictions at a given time directly, with high test accuracy (mean relative error <3.04%) and 300 times faster than the traditional FEM. Our CNN-based learning model provides a rapid and accurate tool for predicting the concentration distribution of the reaction-diffusion …
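A minimal sketch of the surrogate idea described above, with assumed grid size, channels, and layer sizes: an encoder-decoder CNN that maps simulation parameters, boundary conditions, and the query time, all encoded as input channels, directly to a predicted 2D concentration field. The channel encoding and architecture are illustrative, not the paper's.

```python
# Hedged sketch: encoder-decoder CNN surrogate for a 2D reaction-diffusion field.
import torch
import torch.nn as nn

H = W = 32
diffusivity = torch.full((1, 1, H, W), 0.1)        # parameter map (assumed)
bc_mask = torch.zeros(1, 1, H, W)                  # boundary-condition mask
bc_mask[..., 0, :] = 1.0                           # e.g. fixed concentration on one edge
t = torch.full((1, 1, H, W), 0.5)                  # query time broadcast as a channel

x = torch.cat([diffusivity, bc_mask, t], dim=1)    # (1, 3, H, W) input

surrogate = nn.Sequential(                         # encoder-decoder CNN
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
)
concentration = surrogate(x)                       # predicted field, (1, 1, H, W)
print(concentration.shape)
```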