Modern design problems present both opportunities and challenges, including multifunctionality, high dimensionality, highly nonlinear multimodal responses, and multiple levels or scales. These factors are particularly pronounced in materials design problems, where they make it difficult for traditional optimization algorithms to search the space effectively, and designer intuition alone is often insufficient for problems of this complexity. Efficient machine learning algorithms can map complex design spaces and help designers quickly identify promising regions. In particular, Bayesian network classifiers (BNCs) have been demonstrated to be effective tools for top-down design of complex multilevel problems. The most common instantiations of BNCs assume that all design variables are independent. This assumption reduces computational cost but can limit accuracy, especially in engineering problems with interacting factors. The ability to learn representative network structures from data could therefore provide accurate maps of the design space with limited computational expense. Population-based stochastic optimization techniques such as genetic algorithms (GAs) are well suited to optimizing network structures because they accommodate discrete, combinatorial, and multimodal problems. Our approach uses GAs to identify optimal network structures from limited training sets so that future test points can be classified as accurately and efficiently as possible. The method is first tested on a common machine learning data set and then demonstrated on a sample design problem: a composite material subjected to a planar sound wave.
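As a rough illustration of the idea sketched above, and not the authors' implementation, the following snippet runs a genetic algorithm over a binary chromosome that selects which pairwise feature dependencies a classifier should include. Here a dependency is emulated simply by appending the product of the two features before fitting a Gaussian naive Bayes model, and fitness is cross-validated accuracy; the synthetic data set and the names `fitness` and `pairs` are placeholders.

```python
# Minimal GA sketch: evolve which pairwise feature "dependencies" to include.
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=6, n_informative=4, random_state=0)
pairs = list(itertools.combinations(range(X.shape[1]), 2))  # candidate dependencies

def fitness(chromosome):
    """Cross-validated accuracy of the classifier implied by a chromosome."""
    extra = [X[:, i] * X[:, j] for (i, j), bit in zip(pairs, chromosome) if bit]
    X_aug = np.column_stack([X] + extra) if extra else X
    return cross_val_score(GaussianNB(), X_aug, y, cv=5).mean()

population = rng.integers(0, 2, size=(20, len(pairs)))        # random initial structures
for generation in range(30):
    scores = np.array([fitness(c) for c in population])
    parents = population[np.argsort(scores)[-10:]]            # keep the fittest half
    children = []
    for _ in range(len(population)):
        a, b = parents[rng.integers(0, len(parents), size=2)]
        cut = rng.integers(1, len(pairs))                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(len(pairs)) < 0.05                  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    population = np.array(children)

best = max(population, key=fitness)
print("selected dependencies:", [p for p, bit in zip(pairs, best) if bit])
print("cross-validated accuracy: %.3f" % fitness(best))
```

The same loop structure carries over to a true BNC formulation; only the chromosome encoding and the fitness evaluation (structure score or classification accuracy of the learned network) would change.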
A Comparative Evaluation of Supervised Machine Learning Classification Techniques for Engineering Design Applications
                        
                    
    
Abstract: Supervised machine learning techniques have proven to be effective tools for engineering design exploration and optimization applications, in which they are especially useful for mapping promising or feasible regions of the design space. The design space mappings can be used to inform early-stage design exploration, provide reliability assessments, and aid convergence in multiobjective or multilevel problems that require collaborative design teams. However, the accuracy of the mappings can vary based on problem factors such as the number of design variables, presence of discrete variables, multimodality of the underlying response function, and amount of training data available. Additionally, there are several useful machine learning algorithms available, and each has its own set of algorithmic hyperparameters that significantly affect accuracy and computational expense. This work elucidates the use of machine learning for engineering design exploration and optimization problems by investigating the performance of popular classification algorithms on a variety of example engineering optimization problems. The results are synthesized into a set of observations to provide engineers with intuition for applying these techniques to their own problems in the future, as well as recommendations based on problem type to aid engineers in algorithm selection and utilization.
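As a concrete, if simplified, illustration of the kind of comparison the paper performs, the sketch below trains a few popular scikit-learn classifiers to map the feasible region of a toy two-variable design problem and reports their test accuracy. The feasibility rule stands in for an expensive simulation and is not taken from the paper; the classifier settings are arbitrary defaults, not recommended hyperparameters.

```python
# Toy comparison: how well do popular classifiers map a feasible design region?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(500, 2))                            # sampled design points (x1, x2)
y = ((np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2) < 1.0).astype(int)   # made-up feasibility rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=1),
    "SVM (RBF kernel)": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1),
}
for name, clf in classifiers.items():
    accuracy = clf.fit(X_train, y_train).score(X_test, y_test)  # fraction of designs classified correctly
    print(f"{name:18s} test accuracy = {accuracy:.3f}")
```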
- Award ID(s): 1641078
- PAR ID: 10176475
- Date Published:
- Journal Name: Journal of Mechanical Design
- Volume: 141
- Issue: 12
- ISSN: 1050-0472
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Learned optimization algorithms are promising approaches to inverse problems, leveraging advanced numerical optimization schemes and deep neural network techniques from machine learning. In this paper, we propose a novel deep neural network architecture imitating an extra proximal gradient algorithm to solve a general class of inverse problems, with a focus on applications in image reconstruction. The proposed network features learned regularization that incorporates adaptive sparsification mappings, robust shrinkage selections, and nonlocal operators to improve solution quality. Numerical results demonstrate the improved efficiency and accuracy of the proposed network over several state-of-the-art methods on a variety of test problems. (A generic sketch of an unrolled proximal gradient network follows this list.)
- Binary classification is a fundamental machine learning task defined as correctly assigning new objects to one of two groups based on a set of training objects. Driven by the practical importance of binary classification, numerous machine learning techniques have been developed and refined over the last three decades. Among the most popular techniques are artificial neural networks, decision trees, ensemble methods, logistic regression, and support vector machines. We present here machine learning and pattern recognition algorithms that, unlike the commonly used techniques, are based on combinatorial optimization and make use of information on pairwise relations between the objects of the data set, whether training objects or not. These algorithms solve the respective problems optimally and efficiently, in contrast to the primarily heuristic approaches currently used for intractable problem models in pattern recognition and machine learning. The algorithms described solve the classification problem efficiently as a network flow problem on a graph. The technical tools used in the algorithm are the parametric cut procedure and a process called sparse computation that computes only the pairwise similarities that are “relevant.” Sparse computation enables the scalability of any algorithm that uses pairwise similarities. We present evidence on the effectiveness of the approaches, measured in terms of accuracy and running time, in pattern recognition, image segmentation, and general data mining. (A simplified minimum-cut classification sketch follows this list.)
- Over the past decade, machine learning model complexity has grown at an extraordinary rate, as has the scale of the systems used to train such large models. However, hardware utilization in large-scale AI systems is alarmingly low (5-20%). This low utilization is the cumulative effect of minor losses across different layers of the stack, exacerbated by the disconnect between the engineers, spanning different industries, who design those layers. To address this challenge, we designed a cross-stack performance modeling and design space exploration framework. First, we introduce CrossFlow, a novel framework that enables cross-layer analysis all the way from the technology layer to the algorithmic layer. Next, we introduce DeepFlow (built on top of CrossFlow using machine learning techniques) to automate design space exploration and co-optimization across the layers of the stack. We validate CrossFlow's accuracy with distributed training on real commercial hardware and present several DeepFlow case studies demonstrating the pitfalls of not optimizing across the technology-hardware-software stack for what is likely the most important workload driving large development investments in all aspects of the computing stack.
- The incorporation of high-performance optoelectronic devices into photonic neuromorphic processors can substantially accelerate the computationally intensive matrix multiplication operations in machine learning (ML) algorithms. However, the conventional designs of individual devices and systems are largely disconnected, and system optimization is typically limited to manual exploration of a small design space. Here, a device-system end-to-end design methodology is reported that optimizes a free-space optical general matrix multiplication (GEMM) hardware accelerator by engineering a spatially reconfigurable array made from chalcogenide phase change materials. Using a highly parallelized hardware emulator incorporating experimental information, unit devices are designed to directly optimize GEMM calculation accuracy by exploring a large parameter space with learning-based algorithms, including a deep Q-learning neural network, Bayesian optimization, and a cascaded combination of the two. The algorithm-generated physical quantities show a clear correlation between system performance metrics and device specifications. Furthermore, physics-aware training approaches are employed to deploy the optimized hardware to image classification, materials discovery, and a closed-loop design of optical ML accelerators. The demonstrated framework offers insights into the end-to-end co-design of optoelectronic devices and systems with reduced human supervision and domain knowledge barriers. (A toy Bayesian-optimization sketch follows this list.)
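For the learned-optimization entry in the list above, here is a generic sketch of an unrolled proximal gradient network for a linear inverse problem, written in PyTorch. It is not the architecture proposed in that paper: the `ProxNet` CNN, the number of phases, and the identity forward operator used in the toy denoising example are all placeholder choices.

```python
# Generic unrolled proximal gradient network for y = A(x) + noise (toy denoising example).
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """Small CNN acting as a learned proximal / regularization operator."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)          # residual update

class UnrolledPGD(nn.Module):
    """K unrolled phases of x <- prox_k(x - alpha_k * A^T (A x - y))."""
    def __init__(self, forward_op, adjoint_op, phases=5):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.alpha = nn.Parameter(torch.full((phases,), 0.1))   # learned step sizes
        self.prox = nn.ModuleList([ProxNet() for _ in range(phases)])

    def forward(self, y, x0):
        x = x0
        for k, prox in enumerate(self.prox):
            grad = self.At(self.A(x) - y)        # gradient of 0.5 * ||A x - y||^2
            x = prox(x - self.alpha[k] * grad)   # learned proximal step
        return x

# Toy usage: identity forward operator, i.e., image denoising on 32x32 patches.
A = At = lambda z: z
model = UnrolledPGD(A, At, phases=5)
clean = torch.rand(4, 1, 32, 32)
noisy = clean + 0.1 * torch.randn_like(clean)
recon = model(noisy, noisy)
loss = nn.functional.mse_loss(recon, clean)      # would be minimized with e.g. Adam during training
loss.backward()
print("reconstruction MSE:", float(loss))
```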
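For the combinatorial classification entry in the list above, the following is a deliberately simplified illustration of casting binary classification as an s-t minimum cut on a similarity graph. It uses networkx's general-purpose `minimum_cut` rather than the parametric cut procedure or sparse computation described in that abstract, and the Gaussian similarity, pinning capacities, and synthetic data are placeholders.

```python
# Simplified graph-cut classification: labeled points are pinned to a source/sink,
# and unlabeled points take the label of the side of the minimum cut they land on.
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
pos = rng.normal(loc=+1.5, size=(15, 2))             # labeled class +1
neg = rng.normal(loc=-1.5, size=(15, 2))             # labeled class -1
unl = rng.normal(loc=0.0, scale=2.0, size=(10, 2))   # points to classify
pts = np.vstack([pos, neg, unl])

def similarity(a, b, sigma=1.0):
    """Gaussian similarity between two points."""
    return float(np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2)))

G = nx.DiGraph()
for i in range(len(pts)):                            # pairwise similarity edges
    for j in range(len(pts)):
        if i != j:
            G.add_edge(i, j, capacity=similarity(pts[i], pts[j]))
for i in range(len(pos)):
    G.add_edge("s", i, capacity=1e6)                 # pin labeled positives to the source
for i in range(len(pos), len(pos) + len(neg)):
    G.add_edge(i, "t", capacity=1e6)                 # pin labeled negatives to the sink

cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
labels = [+1 if i in source_side else -1 for i in range(len(pos) + len(neg), len(pts))]
print("predicted labels for the unlabeled points:", labels)
```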
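For the optical GEMM accelerator entry in the list above, here is a toy sketch of the Bayesian-optimization stage only: a Gaussian-process surrogate with an expected-improvement acquisition tunes a few hypothetical device parameters to minimize a made-up stand-in for GEMM calculation error. The objective, bounds, and parameter count are placeholders, not the paper's hardware emulator.

```python
# Toy Bayesian optimization of hypothetical device knobs to minimize a stand-in GEMM error.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(3)
bounds = np.array([[0.0, 1.0], [0.0, 1.0], [-1.0, 1.0]])     # 3 hypothetical device parameters

def gemm_error(theta):
    """Placeholder for an emulator's GEMM calculation-error metric."""
    return float((theta[0] - 0.7) ** 2 + (theta[1] - 0.3) ** 2 + 0.5 * np.sin(3 * theta[2]) ** 2)

def sample(n):
    return rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, len(bounds)))

X = sample(5)                                        # initial random device settings
y = np.array([gemm_error(t) for t in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(25):                                  # sequential design loop
    gp.fit(X, y)
    candidates = sample(2000)
    mu, sd = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement (minimization)
    nxt = candidates[np.argmax(ei)]
    X = np.vstack([X, nxt])
    y = np.append(y, gemm_error(nxt))

print("best device setting:", X[np.argmin(y)], "with error %.4f" % y.min())
```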