Direct digital manufacturing (DDM) is the creation of a physical part directly from a computer-aided design (CAD) model with minimal process planning and is typically applied to additive manufacturing (AM) processes to fabricate complex geometry. AM is preferred for DDM because of its minimal user input requirements; as a result, users can focus on exploiting other advantages of AM, such as the creation of intricate mechanisms that require no assembly after fabrication. Such assembly-free mechanisms can be created using DDM during a single build process. In contrast, subtractive manufacturing (SM) enables the creation of higher-strength parts that do not suffer from the material anisotropy inherent in AM. However, process planning for SM is more difficult than it is for AM due to geometric constraints imposed by the machining process; thus, applying SM to the fabrication of assembly-free mechanisms is challenging. This research describes a voxel-based computer-aided manufacturing (CAM) system that enables direct digital subtractive manufacturing (DDSM) of an assembly-free mechanism. Process planning for SM involves voxel-by-voxel removal of material in the same way that an AM process consists of layer-by-layer addition of material. The voxelized CAM system minimizes user input by automatically generating toolpaths based on an analysis of accessible material to remove for a specified clearance in the mechanism's assembled state. The DDSM process is validated and compared to AM using case studies of the manufacture of two assembly-free ball-in-socket mechanisms.
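The voxel-by-voxel removal described above hinges on an accessibility test: a voxel can only be machined if the tool can reach it without cutting through the part. The sketch below is a minimal illustration of that idea for a single tool approach direction (straight down along -z); the function name and the single-axis simplification are assumptions for illustration, not the paper's actual toolpath planner.

```python
import numpy as np

def accessible_removal_mask(stock, part):
    """Voxels present in the stock but not in the part that a tool
    approaching along -z can reach without cutting into the part.

    stock, part: 3D boolean arrays indexed (x, y, z), z index 0 = top.
    Returns a boolean mask of voxels removable in this approach.
    """
    to_remove = stock & ~part
    accessible = np.zeros_like(stock)
    nx, ny, nz = stock.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):          # march down from the top surface
                if part[i, j, k]:        # tool would collide with the part
                    break
                if to_remove[i, j, k]:
                    accessible[i, j, k] = True
    return accessible
```

A full planner would repeat this test over several tool orientations and remove material iteratively until only inaccessible voxels (e.g., inside the assembled joint clearance) remain.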
This content will become publicly available on March 1, 2026
Manufacturing Feature Recognition With a Sparse Voxel-Based Convolutional Neural Network
Abstract Automated manufacturing feature recognition is a crucial link between computer-aided design and manufacturing, facilitating process selection and other downstream tasks in computer-aided process planning. While various methods such as graph-based, rule-based, and neural networks have been proposed for automatic feature recognition, they suffer from poor scalability or computational inefficiency. Recently, voxel-based convolutional neural networks have shown promise in solving these challenges but incur a tradeoff between computational cost and feature resolution. This paper investigates a computationally efficient sparse voxel-based convolutional neural network for manufacturing feature recognition, specifically, an octree-based sparse voxel convolutional neural network. This model is trained on a large-scale manufacturing feature dataset, and its performance is compared to a voxel-based feature recognition model (FeatureNet). The results indicate that the octree-based model yields higher feature recognition accuracy (99.5% on the test dataset) with 44% lower graphics processing unit (GPU) memory consumption than a voxel-based model of comparable resolution. In addition, increasing the resolution of the octree-based model enables recognition of finer manufacturing features. These results indicate that a sparse voxel-based convolutional neural network is a computationally efficient deep learning model for manufacturing feature recognition to enable process planning automation. Moreover, the sparse voxel-based neural network demonstrated comparable performance to a boundary representation-based feature recognition neural network, achieving similar accuracy in single-feature recognition without having access to the exact 3D shape descriptors.
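The memory advantage of the octree representation comes from subdividing only heterogeneous regions: large empty or solid blocks collapse into single leaves, so storage concentrates near part surfaces where features live. This toy node counter (a hypothetical helper, not the paper's O-CNN implementation) makes the dense-vs-sparse tradeoff concrete.

```python
import numpy as np

def octree_nodes(vox):
    """Count the octree nodes needed to represent a cubic boolean voxel
    grid (side length a power of two). Homogeneous blocks (all empty or
    all full) become leaves; mixed blocks subdivide into 8 children.
    """
    def count(x0, y0, z0, size):
        block = vox[x0:x0 + size, y0:y0 + size, z0:z0 + size]
        if size == 1 or block.all() or not block.any():
            return 1                      # leaf node
        h = size // 2
        return 1 + sum(                   # interior node + 8 children
            count(x0 + dx * h, y0 + dy * h, z0 + dz * h, h)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1))
    return count(0, 0, 0, vox.shape[0])
```

For an 8^3 grid with one occupied voxel, the octree needs 25 nodes versus 512 dense cells; a sparse convolution likewise visits only the occupied leaves, which is the source of the GPU-memory savings reported above.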
- Award ID(s): 2229260
- PAR ID: 10595628
- Publisher / Repository: ASME
- Date Published:
- Journal Name: Journal of Computing and Information Science in Engineering
- Volume: 25
- Issue: 3
- ISSN: 1530-9827
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Machine learning can be used to automate common or time-consuming engineering tasks for which sufficient data already exist. For instance, design repositories can be used to train deep learning algorithms to assess component manufacturability; however, methods to determine the suitability of a design repository for use with machine learning do not exist. We provide an initial investigation toward identifying such a method using “artificial” design repositories to experimentally test the extent to which altering properties of the dataset impacts the assessment precision and generalizability of neural networks trained on the data. For this experiment, we use a 3D convolutional neural network to estimate quantitative manufacturing metrics directly from voxel-based component geometries. Additive manufacturing (AM) is used as a case study because of the recent growth of AM-focused design repositories such as GrabCAD and Thingiverse that are readily accessible online. In this study, we focus only on material extrusion, the dominant consumer AM process, and investigate three AM build metrics: (1) part mass, (2) support material mass, and (3) build time. Additionally, we compare the convolutional neural network accuracy to that of a baseline multiple linear regression model. Our results suggest that training on design repositories with less standardized orientation and position resulted in more accurate trained neural networks and that orientation-dependent metrics were harder to estimate than orientation-independent metrics. Furthermore, the convolutional neural network was more accurate than the baseline linear regression model for all build metrics.
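The three build metrics above can be roughed out geometrically from a voxel grid, which clarifies why support mass and build time are orientation-dependent while part mass is not. This is a hypothetical baseline estimator, assuming a single +z build direction and a strict "anything overhanging needs support" rule (real slicers use an overhang angle threshold); the function name and default constants are illustrative, not from the paper.

```python
import numpy as np

def build_metrics(vox, voxel_volume=1.0, density=1.25, rate=50.0):
    """Estimate (part_mass, support_mass, build_time) from a boolean
    voxel grid indexed (x, y, z), z index 0 on the build platform.
    Units follow the inputs (e.g., mm^3, g/mm^3, mm^3 per unit time).
    """
    part_volume = vox.sum() * voxel_volume
    # An empty cell with material anywhere above it in its column needs
    # support material (crude overhang test, build direction +z).
    above = np.zeros(vox.shape, dtype=bool)
    if vox.shape[2] > 1:
        tail = vox[:, :, 1:]             # cells above each position
        above[:, :, :-1] = np.flip(
            np.cumsum(np.flip(tail, axis=2), axis=2), axis=2) > 0
    support_volume = (above & ~vox).sum() * voxel_volume
    part_mass = part_volume * density
    support_mass = support_volume * density
    build_time = (part_volume + support_volume) / rate  # deposition rate
    return part_mass, support_mass, build_time
```

Rotating the grid changes the support and build-time estimates but not part mass, mirroring the paper's observation that orientation-dependent metrics are harder for a learned model to estimate.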
-
Human action recognition is an important topic in artificial intelligence with a wide range of applications including surveillance systems, search-and-rescue operations, human-computer interaction, etc. However, most of the current action recognition systems utilize videos captured by stationary cameras. Another emerging technology is the use of unmanned ground and aerial vehicles (UAV/UGV) for different tasks such as transportation, traffic control, border patrolling, wild-life monitoring, etc. This technology has become more popular in recent years due to its affordability, high maneuverability, and limited human interventions. However, there does not exist an efficient action recognition algorithm for UAV-based monitoring platforms. This paper considers UAV-based video action recognition by addressing the key issues of aerial imaging systems such as camera motion and vibration, low resolution, and tiny human size. In particular, we propose an automated deep learning-based action recognition system which includes the three stages of video stabilization using the SURF feature selection and Lucas-Kanade method, human action area detection using faster region-based convolutional neural networks (R-CNN), and action recognition. We propose a novel structure that extends and modifies the InceptionResNet-v2 architecture by combining a 3D CNN architecture and a residual network for action recognition. We achieve an average accuracy of 85.83% for the entire-video-level recognition when applying our algorithm to the popular UCF-ARG aerial imaging dataset. This accuracy significantly improves upon the state-of-the-art accuracy by a margin of 17%.
-
Modern digital manufacturing processes, such as additive manufacturing, are cyber-physical in nature and utilize complex, process-specific simulations for both design and manufacturing. Although computational simulations can be used to optimize these complex processes, they can take hours or days, an unreasonable cost for engineering teams leveraging iterative design processes. Hence, more rapid computational methods are necessary in areas where computation time presents a limiting factor. When existing data from historical examples is plentiful and reliable, supervised machine learning can be used to create surrogate models that can be evaluated orders of magnitude more rapidly than comparable finite element approaches. However, for applications that necessitate computationally intensive simulations, even generating the training data necessary to train a supervised machine learning model can pose a significant barrier. Unsupervised methods, such as physics-informed neural networks, offer a shortcut in cases where training data is scarce or prohibitive. These novel neural networks are trained without the use of potentially expensive labels. Instead, physical principles are encoded directly into the loss function. This method substantially reduces the time required to develop a training dataset, while still achieving the evaluation speed that is typical of supervised machine learning surrogate models. We propose a new method for stochastically training and testing a convolutional physics-informed neural network using the transient 3D heat equation to model temperature throughout a solid object over time. We demonstrate this approach by applying it to a transient thermal analysis model of the powder bed fusion manufacturing process.
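"Encoding physical principles directly into the loss function" means penalizing how badly a predicted temperature field violates the governing equation, instead of comparing it to labels. The sketch below evaluates the finite-difference residual of the transient 3D heat equation on a predicted (t, x, y, z) field; the function names are hypothetical and this is only the loss term, not the paper's convolutional network or its stochastic training scheme.

```python
import numpy as np

def heat_residual(T, alpha, dt, dx):
    """Pointwise residual of dT/dt = alpha * (Txx + Tyy + Tzz) evaluated
    with finite differences on a (t, x, y, z) temperature field. A
    physics-informed model is trained to drive this toward zero.
    """
    interior = T[:-1, 1:-1, 1:-1, 1:-1]
    dTdt = (T[1:, 1:-1, 1:-1, 1:-1] - interior) / dt    # forward in time
    lap = (                                             # 7-point Laplacian
        T[:-1, 2:, 1:-1, 1:-1] + T[:-1, :-2, 1:-1, 1:-1]
        + T[:-1, 1:-1, 2:, 1:-1] + T[:-1, 1:-1, :-2, 1:-1]
        + T[:-1, 1:-1, 1:-1, 2:] + T[:-1, 1:-1, 1:-1, :-2]
        - 6.0 * interior
    ) / dx**2
    return dTdt - alpha * lap

def physics_loss(T, alpha, dt, dx):
    """Mean squared PDE residual: the label-free training objective."""
    return float(np.mean(heat_residual(T, alpha, dt, dx) ** 2))
```

Exact solutions of the heat equation (e.g., any steady linear temperature profile) give zero loss, which is what makes the residual a usable training signal without labeled simulation data.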
-
Accurate detection of skin lesions through computer-aided diagnosis has emerged as a critical advancement in dermatology, addressing the inefficiencies and errors inherent in manual visual analysis. Despite the promise of automated diagnostic approaches, challenges such as image size variability, hair artifacts, color inconsistencies, ruler markers, low contrast, lesion dimension differences, and gel bubbles must be overcome. Researchers have made significant strides in binary classification problems, particularly in distinguishing melanocytic lesions from normal skin conditions. Leveraging the “MNIST HAM10000” dataset from the International Skin Image Collaboration, this study integrates Scale-Invariant Feature Transform (SIFT) features with a custom convolutional neural network model called LesionNet. The experimental results reveal the model's robustness, achieving an impressive accuracy of 99.28%. This high accuracy underscores the effectiveness of combining feature extraction techniques with advanced neural network models in enhancing the precision of skin lesion detection.
