Cloud computing is a concept introduced in the information technology era, building on grid, distributed, and utility computing. The cloud continues to evolve and naturally brings many challenges, one of which is scheduling. Scheduling is a mechanism for optimizing the time required to complete a task or set of tasks, and a scheduler is responsible for selecting the best resources for executing each task. The main goal of a scheduling algorithm is to improve the efficiency and quality of service while ensuring that the targets remain acceptable and effective. Task scheduling is one of the most important NP-hard problems in the cloud domain, and many techniques have been proposed to solve it, including genetic algorithms (GAs), particle swarm optimization (PSO), and ant colony optimization (ACO). To address this problem, this paper extends, improves, and applies one of the collective intelligence algorithms, the Salp Swarm Algorithm (SSA). The performance of the proposed algorithm is compared with that of GAs, PSO, continuous ACO, and the basic SSA. The results show that the proposed algorithm generally outperforms the other algorithms; for example, compared with the basic SSA, it reduces makespan by approximately 21% on average.
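As a rough illustration of how the basic SSA (not the paper's improved variant) can be applied to task scheduling, the sketch below encodes each salp as a continuous vector whose dimensions are truncated to VM indices, with makespan as the fitness. All function names, parameters, and the discretization scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def makespan(assignment, task_lengths, vm_speeds):
    """Finish time of the busiest VM under a given task-to-VM assignment."""
    load = np.zeros(len(vm_speeds))
    for task, vm in enumerate(assignment):
        load[vm] += task_lengths[task] / vm_speeds[vm]
    return load.max()

def ssa_schedule(task_lengths, vm_speeds, n_salps=30, iters=200, seed=0):
    """Basic SSA over continuous positions that are truncated to VM indices."""
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    lb, ub = 0.0, n_vms - 1e-9                        # one continuous dimension per task

    def to_assign(x):
        return np.clip(x, lb, ub).astype(int)         # truncate to a VM index

    def fit(x):
        return makespan(to_assign(x), task_lengths, vm_speeds)

    pos = rng.uniform(lb, ub, size=(n_salps, n_tasks))
    fitness = np.array([fit(p) for p in pos])
    food, food_fit = pos[fitness.argmin()].copy(), fitness.min()

    for l in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * l / iters) ** 2)        # shrinks exploration over time
        for i in range(n_salps):
            if i < n_salps // 2:                      # leaders move around the food source
                c2, c3 = rng.random(n_tasks), rng.random(n_tasks)
                step = c1 * ((ub - lb) * c2 + lb)
                pos[i] = np.where(c3 < 0.5, food + step, food - step)
            else:                                     # followers average with the salp ahead
                pos[i] = (pos[i] + pos[i - 1]) / 2
            pos[i] = np.clip(pos[i], lb, ub)
        fitness = np.array([fit(p) for p in pos])
        if fitness.min() < food_fit:
            food, food_fit = pos[fitness.argmin()].copy(), fitness.min()

    return to_assign(food), food_fit

# Example: 20 tasks of random length on 4 heterogeneous VMs.
tasks = np.random.default_rng(1).uniform(100, 1000, 20)
vms = np.array([1.0, 1.5, 2.0, 2.5])
assignment, span = ssa_schedule(tasks, vms)
print(assignment, span)
```

The leader/follower split and the coefficient c1 = 2·exp(−(4l/L)²) follow the standard SSA formulation; the paper's improved variant and a more careful discretization would replace the simple truncation used here.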
Optimization of MLP neural network for modeling effects of electric fields on bubble growth in pool boiling
In this paper, a multilayer perceptron (MLP) artificial neural network with a back-propagation training algorithm is used to model bubble growth and bubble dynamics parameters in nucleate boiling under a non-uniform electric field. The influence of the electric field on the parameters that describe bubble behavior, including bubble waiting time, bubble departure frequency, bubble growth time, and bubble departure diameter, is considered. The study models the single-bubble dynamic behavior of R113 created on a heater in a non-uniform electric field using an MLP neural network optimized by four swarm-based optimization algorithms: the Salp Swarm Algorithm (SSA), Grey Wolf Optimizer (GWO), Artificial Bee Colony (ABC) algorithm, and Particle Swarm Optimization (PSO). To evaluate model effectiveness, the mean-square error (MSE) of the neural network model under each optimization algorithm is measured and compared. The results show that the best two-hidden-layer and three-hidden-layer networks for bubble departure diameter improve MSE by 33.85% and 35.27%, respectively, compared with the best one-hidden-layer model. For bubble growth time, the two- and three-hidden-layer networks reduce the error by 44.51% and 45.85%, respectively, relative to the one-hidden-layer network. For departure frequency, the error reduction in the two- and three-layer networks is 46.85% and 62.32%, respectively, and for bubble waiting time the best two- and three-hidden-layer networks improve MSE by 52.44% and 62.27%, respectively, over the best one-hidden-layer response. In addition, SSA and GWO achieve MSE values comparable to those of PSO and ABC.
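As a hedged sketch of the swarm-over-network-weights idea (the paper combines back-propagation training with the swarm optimizers; the exact quantities being optimized are not detailed in this abstract), the example below flattens the weights of a small two-hidden-layer MLP into a vector and minimizes the MSE with a basic PSO. The layer sizes, toy data, and all names are illustrative assumptions.

```python
import numpy as np

def mlp_forward(x, w, sizes):
    """Evaluate a small MLP with tanh hidden layers from a flat weight vector."""
    out, offset = x, 0
    for i in range(len(sizes) - 1):
        n_in, n_out = sizes[i], sizes[i + 1]
        W = w[offset:offset + n_in * n_out].reshape(n_in, n_out); offset += n_in * n_out
        b = w[offset:offset + n_out]; offset += n_out
        out = out @ W + b
        if i < len(sizes) - 2:
            out = np.tanh(out)
    return out

def mse(w, X, y, sizes):
    return np.mean((mlp_forward(X, w, sizes).ravel() - y) ** 2)

def pso_train(X, y, sizes, n_particles=40, iters=300, seed=0):
    """Particle Swarm Optimization over the flattened MLP weights."""
    rng = np.random.default_rng(seed)
    dim = sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p, X, y, sizes) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    w_in, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p, X, y, sizes) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy data standing in for the bubble-dynamics measurements (inputs -> departure diameter).
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X).sum(axis=1)
weights, err = pso_train(X, y, sizes=(3, 8, 8, 1))    # two hidden layers of 8 neurons
print("final MSE:", err)
```

Swapping the PSO loop for SSA, GWO, or ABC updates over the same flattened weight vector would reproduce the comparison structure described in the abstract.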
- Award ID(s): 1917272
- PAR ID: 10552531
- Publisher / Repository: Springer Nature
- Date Published:
- Journal Name: Heat and Mass Transfer
- Volume: 60
- Issue: 2
- ISSN: 0947-7411
- Page Range / eLocation ID: 329 to 361
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The challenge of optimizing personalized learning pathways to maximize student engagement and minimize task completion time while adhering to prerequisite constraints remains a significant issue in educational technology. This paper applies the Salp Swarm Algorithm (SSA) as a new solution to this problem. Our approach compares SSA against traditional optimization techniques such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results demonstrate that SSA significantly outperforms these methods, achieving a lower average fitness value of 307.0 compared to 320.0 for GA and 315.0 for PSO. Furthermore, SSA exhibits greater consistency, with a lower standard deviation and superior computational efficiency, as evidenced by faster execution times. The success of SSA is attributed to its balanced approach to exploration and exploitation within the search space. These findings highlight the potential of SSA as an effective tool for optimizing personalized learning experiences.
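One plausible way to phrase the fitness that such an optimizer would minimize, assuming a penalty formulation for the prerequisite constraints, is sketched below; the weighting of engagement against completion time and all names are hypothetical, not the paper's.

```python
import numpy as np

def pathway_fitness(order, duration, engagement, prereqs, penalty=100.0):
    """Lower is better: total time minus engagement, plus a penalty per prerequisite violation.

    order   : sequence of module indices (a candidate learning pathway)
    prereqs : dict mapping a module to the set of modules that must appear earlier
    """
    position = {m: i for i, m in enumerate(order)}
    violations = sum(
        1 for m, reqs in prereqs.items() for r in reqs if position[r] > position[m]
    )
    return duration[list(order)].sum() - engagement[list(order)].sum() + penalty * violations

# Toy example with 5 modules; module 3 requires module 1, module 4 requires module 2.
duration = np.array([30, 45, 20, 60, 25], dtype=float)
engagement = np.array([8, 6, 9, 5, 7], dtype=float)
prereqs = {3: {1}, 4: {2}}
print(pathway_fitness([0, 1, 2, 3, 4], duration, engagement, prereqs))   # feasible order
print(pathway_fitness([3, 4, 0, 1, 2], duration, engagement, prereqs))   # violates both
```

SSA, GA, or PSO would then search over candidate orderings (or a continuous relaxation of them) to minimize this value.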
Matrix multiplication is one of the bottleneck computations in training the weights of deep neural networks. To speed up the training phase, we propose to use faster algorithms for matrix multiplication known as Arbitrary Precision Approximating (APA) algorithms. APA algorithms perform asymptotically fewer arithmetic operations than the classical algorithm, but they compute an approximate result with an error that can be made arbitrarily small in exact arithmetic. Practical APA algorithms provide a significant reduction in computation time while still providing enough accuracy for many applications, such as neural network training. We demonstrate that APA algorithms can be efficiently implemented and parallelized for multicore CPUs to obtain speedups of up to 28% and 21% over the fastest implementation of the classical algorithm using one core and 12 cores, respectively. Furthermore, using these algorithms to train a Multi-Layer Perceptron (MLP) network yields no significant change in the training or testing error. Our performance results on a large MLP network show overall sequential and multithreaded performance improvements of up to 25% and 13%, respectively. We also demonstrate up to 15% improvement when training the fully connected layers of the VGG-19 image classification network.
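For reference, the sketch below implements Strassen's exact recursion, the classical example of the fast bilinear matrix-multiplication schemes that APA algorithms generalize; it uses 7 multiplications per 2×2 block split instead of 8 and assumes square matrices with power-of-two dimensions. It is not the paper's APA algorithm, which additionally trades a small, controllable approximation error for even fewer multiplications.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursion: 7 block multiplications per split instead of 8.

    Assumes square matrices whose size is a power of two; below `cutoff`
    it falls back to the classical algorithm via np.dot.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))   # True up to floating-point round-off
```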
We discuss the applicability of Principal Component Analysis (PCA) and Particle Swarm Optimization (PSO) in protein tertiary structure prediction. The proposed algorithm is based on establishing a low-dimensional space in which sampling (and optimization) is carried out by a PSO. The reduced space is found via PCA performed on a set of previously found low-energy protein models. A high-frequency term is added to this expansion by projecting the best decoy onto the PCA basis set and calculating the residual model. Our results show that PSO improves the energy of the best decoy used in the PCA when an adequate number of PCA terms is considered.
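A minimal sketch of the reduced-space idea, omitting the high-frequency residual term mentioned above: build a PCA basis from the decoy set, then let PSO search over the PCA coefficients of a candidate model. The energy function, dimensions, and names below are placeholders, not the paper's scoring function.

```python
import numpy as np

def pca_basis(decoys, k):
    """Mean structure and top-k principal directions of decoy coordinate vectors (rows)."""
    mean = decoys.mean(axis=0)
    _, _, Vt = np.linalg.svd(decoys - mean, full_matrices=False)
    return mean, Vt[:k]

def optimize_in_subspace(energy, mean, components, n_particles=30, iters=200, seed=0):
    """PSO over the k PCA coefficients; a candidate structure is mean + coeffs @ components."""
    rng = np.random.default_rng(seed)
    k = components.shape[0]
    pos = rng.uniform(-1, 1, (n_particles, k))
    vel = np.zeros_like(pos)

    def f(c):
        return energy(mean + c @ components)

    pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([f(p) for p in pos])
        better = fit < pbest_f
        pbest[better], pbest_f[better] = pos[better], fit[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return mean + gbest @ components, pbest_f.min()

# Stand-in for low-energy decoys (rows = flattened coordinates) and an energy score.
decoys = np.random.default_rng(3).normal(size=(50, 300))
energy = lambda x: np.sum(x ** 2)             # placeholder for a real scoring function
model, score = optimize_in_subspace(energy, *pca_basis(decoys, k=5))
print(score)
```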
Employing Reconfigurable Intelligent Surfaces (RISs) is an advanced strategy for enhancing the efficiency of wireless communication systems. However, determining the number and positions of RIS elements remains challenging and requires a smart optimization framework. This paper aims to minimize the number of RISs subject to a technical limit on the average achievable data rate, taking into account the practical overlapping between the associated multi-RISs in wireless communication systems. To this end, a Differential Evolution Optimizer (DEO) is developed to minimize the number of RIS devices to be installed; the number, positions, and phase-shift matrix coefficients of the RISs are then jointly optimized using the proposed DEO. It is also compared with several recent algorithms, including Particle Swarm Optimization (PSO), the Gradient-Based Optimizer (GBO), the Growth Optimizer (GO), and Seahorse Optimization (SHO). The simulation results demonstrate the high efficiency of the proposed DEO and GO, which obtain a 100% feasibility rate in finding the minimum number of RISs under different achievable-rate thresholds. PSO scores a comparable 99.09%, while SHO and GBO attain poor rates of 66.36% and 53.94%, respectively. Moreover, the superiority of the proposed DEO is evident in its achieving the lowest average number of RISs among the compared algorithms; numerically, DEO yields improvements of 5.13%, 15.68%, 30.58%, and 51.01% compared with GO, PSO, SHO, and GBO, respectively.
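A minimal sketch of the DE/rand/1/bin scheme that such a DEO builds on, applied to a toy penalized objective in which the first variable plays the role of a (relaxed) RIS count; the rate model, bounds, and names are placeholders, not the paper's system model.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=40, iters=300, F=0.8, CR=0.9, seed=0):
    """Classic DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lb, ub = bounds[:, 0], bounds[:, 1]
    dim = len(lb)
    pop = rng.uniform(lb, ub, (pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lb, ub)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutated component
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fit[i]:                    # greedy replacement
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

# Placeholder objective: the first gene is the relaxed number of active RISs, the rest stand
# in for positions; rates below the threshold are discouraged by a penalty term.
def ris_cost(x, rate_threshold=2.0):
    n_ris = x[0]
    achievable_rate = np.log2(1.0 + 0.5 * n_ris * (1.0 + np.tanh(x[1:]).mean()))
    penalty = 1e3 * max(0.0, rate_threshold - achievable_rate)
    return n_ris + penalty

bounds = np.array([[1, 20]] + [[-50, 50]] * 4, dtype=float)
best, cost = differential_evolution(ris_cost, bounds)
print(best, cost)
```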