Efficient production planning of additively manufactured (AM) parts is key to industry-scale adoption of AM. This study develops an AM-based production plan for manufacturing a large number of parts of different shapes and sizes on multiple machines, with the ultimate goal of reducing cycle time. The proposed AM-based production planning comprises three main steps: (1) determining the build orientation; (2) 2D packing of parts within the limited workspace of AM machines; and (3) scheduling parts on multiple AM machines. For the build-orientation decision, two main policies are considered: (1) a laying policy, which focuses on reducing the height of parts; and (2) a standing policy, which aims at minimizing the projected area on the tray in order to reduce the number of jobs. A heuristic algorithm is proposed to solve the 2D packing and scheduling problems. A numerical example is conducted to identify which policy is preferable in terms of cycle time. The results show that the standing policy becomes preferable to the laying policy as the number of parts increases: in a test with 3,000 parts, the cycle time under the standing policy is about 6% shorter than under the laying policy.
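As a rough illustration of the two orientation policies, here is a minimal sketch that approximates each part by its axis-aligned bounding box and considers only the six axis-aligned orientations of that box; the helper names and the example part are illustrative, not from the paper.

```python
from itertools import permutations

def candidate_orientations(dims):
    """All axis-aligned orientations of a bounding box (dx, dy, dz)."""
    return set(permutations(dims))

def laying_orientation(dims):
    """Laying policy: pick the orientation with the smallest height (z)."""
    return min(candidate_orientations(dims), key=lambda o: o[2])

def standing_orientation(dims):
    """Standing policy: pick the orientation with the smallest projected
    footprint (x * y) on the tray."""
    return min(candidate_orientations(dims), key=lambda o: o[0] * o[1])

part = (120.0, 40.0, 8.0)            # a long, flat part (mm)
print(laying_orientation(part))      # height 8: short job, large footprint
print(standing_orientation(part))    # footprint 320: tall job, small footprint
```

Laying trades a larger tray footprint for fewer layers per job; standing does the opposite, which is exactly the tension the packing and scheduling steps have to resolve.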
The impact of build orientation policies on the completion time in two-dimensional irregular packing for additive manufacturing
The study investigates the impact of build orientation policies on production time in additive manufacturing (AM) for mass-customisation business models. Two main orientation policies are considered: (1) a Laying Policy (LP), which focuses on reducing the height of parts; and (2) a Standing Policy (SP), which aims to minimise the projected area of parts on the base plane so as to reduce the number of jobs. While LP minimises the build time per job because parts are low in height, it can increase the total completion time as the number of parts grows. SP, on the other hand, incurs a longer build time per job because parts stand tall, but it can lead to fewer jobs. Several numerical experiments were conducted based on Stereolithography (SLA). The results show that when the number of parts reaches roughly 40, SP can become preferable to LP for minimising the completion time, and the shape of the parts affects the extent of this preference. For 40 parts with a long, flat shape, SP reduces the completion time by 15.7% over the default policy (the initial orientation of a part), whereas LP reduces it by only 6.6%.
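The LP/SP trade-off can be made concrete with a back-of-the-envelope completion-time model: the number of jobs is driven by the total footprint to be packed, and the build time of each job by its tallest part, since layer-based processes such as SLA build the whole tray one layer at a time. All parameters below (tray size, packing efficiency, setup and per-millimetre build times) are illustrative assumptions, not values from the paper.

```python
import math

def completion_time(parts, tray_area, pack_eff=0.7,
                    setup=60.0, time_per_mm=1.5):
    """parts: oriented (x, y, z) bounding boxes. Jobs are counted by the
    footprint they must pack; each job's build time follows its tallest part."""
    footprint = sum(x * y for x, y, _ in parts)
    n_jobs = max(1, math.ceil(footprint / (tray_area * pack_eff)))
    tallest = max(z for _, _, z in parts)
    return n_jobs * (setup + time_per_mm * tallest)

tray = 200.0 * 200.0                        # a 200 x 200 mm tray
lp = [(120.0, 40.0, 8.0)] * 40              # laying: low height, big footprint
sp = [(40.0, 8.0, 120.0)] * 40              # standing: tall, small footprint
print("LP:", completion_time(lp, tray))     # several short jobs
print("SP:", completion_time(sp, tray))     # one tall job
```

Which policy wins in this toy model hinges on the ratio of per-job setup time to per-layer build time and on part shape, mirroring the shape-dependence reported in the abstract.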
- PAR ID: 10185543
- Date Published:
- Journal Name: International Journal of Production Research
- ISSN: 0020-7543
- Page Range / eLocation ID: 1-15
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Multiserver-job systems, where jobs require concurrent service at many servers, occur widely in practice. Essentially all of the theoretical work on multiserver-job systems focuses on maximizing utilization, with almost nothing known about mean response time. In simpler settings, such as various known-size single-server-job settings, minimizing mean response time is merely a matter of prioritizing small jobs. However, for the multiserver-job system, prioritizing small jobs is not enough, because we must also ensure servers are not unnecessarily left idle. Thus, minimizing mean response time requires prioritizing small jobs while simultaneously maximizing throughput. Our question is how to achieve these joint objectives. We devise the ServerFilling-SRPT scheduling policy, which is the first policy to minimize mean response time in the multiserver-job model in the heavy traffic limit. In addition to proving this heavy-traffic result, we present empirical evidence that ServerFilling-SRPT outperforms all existing scheduling policies for all loads, with improvements by orders of magnitude at higher loads. Because ServerFilling-SRPT requires knowing job sizes, we also define the ServerFilling-Gittins policy, which is optimal when sizes are unknown or partially known.
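The core selection rule can be sketched briefly. Following the description of ServerFilling in the literature, assume every job's server need divides the number of servers k (e.g. powers of two); the code below is a simplified illustration, not the authors' implementation.

```python
def server_filling_srpt(jobs, k):
    """jobs: (remaining_size, server_need) pairs. Returns the set of jobs
    to run now, occupying at most (and, under the divisibility assumption,
    exactly) k servers whenever enough work is available."""
    queue = sorted(jobs, key=lambda j: j[0])   # SRPT: smallest remaining first
    candidates, demand = [], 0
    for job in queue:                          # minimal SRPT prefix whose
        candidates.append(job)                 # total demand can fill k
        demand += job[1]
        if demand >= k:
            break
    running, used = [], 0
    for job in sorted(candidates, key=lambda j: -j[1]):
        if used + job[1] <= k:                 # pack big needs first so the
            running.append(job)                # servers are filled exactly
            used += job[1]
    return running

jobs = [(3.0, 2), (9.0, 4), (1.0, 1), (5.0, 1), (2.0, 8)]
print(server_filling_srpt(jobs, k=8))          # [(2.0, 8)] fills all 8 servers
```

The prefix step keeps priority with the smallest remaining jobs, while the packing step guarantees no server is left needlessly idle: the two objectives the paragraph above identifies.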
-
Data center workloads are composed of multiresource jobs requiring a variety of computational resources including CPU cores, memory, disk space, and hardware accelerators. Modern servers can run multiple jobs in parallel, but a set of jobs can only run in parallel if the server has sufficient resources to satisfy the demands of each job. It is generally hard to find sets of jobs that perfectly utilize all server resources, and choosing the wrong set of jobs can lead to low resource utilization. This raises the question of how to allocate resources across a stream of arriving multiresource jobs to minimize the mean response time across jobs, i.e., the mean time from when a job arrives to the system until it is complete. Current policies for scheduling multiresource jobs are complex to analyze and hard to implement. We propose a class of simple policies, called Markovian Service Rate (MSR) policies. We show that the class of MSR policies is throughput-optimal, in that if a policy exists that can stabilize the system, then an MSR policy exists that stabilizes the system. We derive bounds on the mean response time under an MSR policy, and show how our bounds can be used to choose an MSR policy that minimizes mean response time.
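At the heart of any such policy is a multiresource feasibility check: a set of jobs can share a server only if, for every resource, their summed demands fit within capacity. The sketch below shows that check together with illustrative demand vectors; per the abstract, an MSR policy then selects among feasible configurations via a Markov chain rather than by inspecting the queue in detail.

```python
def feasible(config, capacity):
    """config: per-job demand vectors; capacity: the server's resource vector.
    True iff all jobs in config can run in parallel on this server."""
    return all(sum(d[r] for d in config) <= capacity[r]
               for r in range(len(capacity)))

capacity = (16, 64, 2)                 # cores, GB of memory, GPUs
a = (8, 16, 1)                         # three illustrative job types
b = (4, 48, 0)
c = (8, 8, 2)
print(feasible([a, b], capacity))      # True:  (12, 64, 1) fits
print(feasible([a, c], capacity))      # False: 3 GPUs demanded, only 2 exist
```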
-
Due to a growing interest in deep learning applications [5], compute-intensive and long-running (hours to days) training jobs have become a significant component of datacenter workloads. A large fraction of these jobs is often exploratory, with the goal of determining the best model structure (e.g., the number of layers and channels in a convolutional neural network), hyperparameters (e.g., the learning rate), and data augmentation strategies for the target application. Notably, training jobs are often terminated early if their learning metrics (e.g., training and validation accuracy) are not converging, with only a few completing successfully. For this motivating application, we consider the problem of scheduling a set of jobs that can be terminated at predetermined checkpoints with known probabilities estimated from historical data. We prove that, in order to minimize the time to complete the first K successful jobs on a single server, optimal scheduling does not require preemption (even when preemption overhead is negligible) and provide an optimal policy; advantages of this policy are quantified through simulation. Related Work. While job scheduling has been investigated extensively in many scenarios (see [6] and [2] for surveys of recent results), most policies require that the cost of waiting times of each job be known at scheduling time; in contrast, in our setting the scheduler does not know which job will be the K-th successful job, and sojourn times of subsequent jobs do not contribute to the target metric. For example, [4, 3] minimize makespan (i.e., the time to complete all jobs) for known execution times and waiting time costs; similarly, the Gittins index [1] and SR rank [7] minimize the expected sojourn time of all jobs, i.e., both successfully completed jobs and jobs terminated early. Unfortunately, scheduling policies that do not distinguish between these two types of jobs may favor jobs whose next stage is short and leads to early termination with high probability, which is an undesirable outcome in our applications of interest.
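A small Monte Carlo sketch makes the objective concrete: jobs run one at a time without preemption, each checkpoint terminates the job with a known probability, and we estimate the expected time until the first K successes under a fixed order. The job data below are made up, and the paper's optimal-policy construction is not reproduced here.

```python
import random

def time_to_k_successes(order, k, runs=10_000, seed=0):
    """order: jobs as lists of (stage_duration, p_survive) checkpoints.
    Estimates E[time until k jobs run through all their checkpoints]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, successes = 0.0, 0
        for stages in order:
            for duration, p in stages:
                t += duration
                if rng.random() > p:      # terminated at this checkpoint
                    break
            else:
                successes += 1            # survived every checkpoint
            if successes == k:
                break
        total += t
    return total / runs

short_risky = [(1.0, 0.3), (1.0, 0.5)]    # cheap stages, low survival
long_safe   = [(4.0, 0.9), (4.0, 0.9)]    # expensive stages, high survival
jobs = [short_risky] * 4 + [long_safe] * 2
print(time_to_k_successes(jobs, k=1))
print(time_to_k_successes(list(reversed(jobs)), k=1))
```

Comparing the two orderings shows how the trade-off between cheap-but-unlikely and expensive-but-likely jobs moves the expected time to the first success.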
-
Wootters, Mary (Ed.). We consider online scheduling to minimize weighted completion time on related machines, where each job consists of several tasks that can be concurrently executed. A job gets completed when all its component tasks finish. We obtain an O(K³ log² K)-competitive algorithm in the non-clairvoyant setting, where K denotes the number of distinct machine speeds. The analysis is based on dual-fitting on a precedence-constrained LP relaxation that may be of independent interest.