    Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems
Citations: 113 | References: 58 | Related Papers: 10
    Abstract:
This paper presents a new evolutionary-based approach called the Subtraction-Average-Based Optimizer (SABO) for solving optimization problems. The fundamental inspiration of the proposed SABO is to use the subtraction average of searcher agents to update the positions of population members in the search space. The steps of the SABO's implementation are described and then mathematically modeled for optimization tasks. The performance of the proposed SABO is tested on fifty-two standard benchmark functions, consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, and on the CEC 2017 test suite. The optimization results show that the proposed SABO effectively solves optimization problems by balancing exploration and exploitation in the search of the problem-solving space. The results of the SABO are compared with the performance of twelve well-known metaheuristic algorithms. The analysis of the simulation results shows that the proposed SABO provides superior results for most of the benchmark functions and delivers markedly more competitive performance than the compared algorithms. Additionally, the proposed approach is applied to four engineering design problems to evaluate the SABO in handling optimization tasks in real-world applications. The optimization results show that the proposed SABO can solve real-world problems and provides more optimal designs than its competitor algorithms.
    Keywords:
    Benchmark (surveying)
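The abstract describes updating each agent with the subtraction average of the population. The paper defines its own special subtraction operator, so the sketch below is only a simplified illustration of the general idea, using plain vector differences with greedy acceptance; the function names and parameter values are mine, not the paper's:

```python
import random

def sphere(x):
    """Unimodal benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def sabo_sketch(f, dim=5, pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Simplified subtraction-average-based update (illustrative only).

    Each agent moves along the average difference to the rest of the
    population, scaled by a random factor, and keeps the move only if it
    improves f. Plain differences stand in for the paper's operator.
    """
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        for i in range(pop):
            # average subtraction of agent i from all population members
            avg = [sum(X[j][d] - X[i][d] for j in range(pop)) / pop
                   for d in range(dim)]
            r = rng.random()
            cand = [min(hi, max(lo, X[i][d] + r * avg[d]))
                    for d in range(dim)]
            if f(cand) < f(X[i]):  # greedy acceptance
                X[i] = cand
    return min(X, key=f)

best = sabo_sketch(sphere)
```

Greedy acceptance keeps the population monotonically improving, which is one simple way to balance exploitation (moving toward the population average) against the randomness of the scaling factor.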
Many real-world problems, such as determining the type and number of wind turbines, facility placement, and job scheduling, are combinatorial optimization problems in terms of the type of their decision variables. However, since many evolutionary optimization algorithms are developed for continuous optimization problems, they cannot be applied directly to problems with discrete decision variables. The continuous decision-variable values generated by these metaheuristics therefore need to be converted to binary values using some technique. In other words, to apply such algorithms to discrete optimization problems, the candidate solution vectors must be adapted to discrete values and the algorithms' working structures modified accordingly. In this study, adaptation methods frequently used in previous studies to transform metaheuristic optimization algorithms designed for continuous optimization into discrete optimization algorithms are first explained. Then, popular position-update strategies used in solving discrete optimization problems are described. The presented work summarizes, step by step, the process of adapting continuous optimization algorithms to solve combinatorial problems.
    Discrete optimization
    Continuous variable
    Parallel metaheuristic
    L-reduction
    Citations (0)
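One common family of adaptation techniques of the kind this abstract surveys is the transfer-function approach: map each continuous component through an S-shaped function and treat the result as the probability that the corresponding bit is 1. A minimal sketch (the specific function and sampling rule vary between studies):

```python
import math
import random

def sigmoid_transfer(x):
    """S-shaped transfer function mapping a continuous value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """Map a continuous solution vector to a binary one.

    Each component's transfer value is used as the probability that the
    corresponding bit is 1 (the common S-shaped binarization scheme).
    """
    return [1 if rng.random() < sigmoid_transfer(v) else 0 for v in position]

rng = random.Random(42)
continuous = [2.5, -3.1, 0.0, 7.0, -6.0]
bits = binarize(continuous, rng)
```

Large positive components are almost always mapped to 1 and large negative ones to 0, so the binary vector tracks the continuous search while keeping some stochastic exploration near zero.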
    Benchmark (surveying)
    Derivative-Free Optimization
    Maxima and minima
    Engineering optimization
    Citations (7)
Wireless communication systems build on many algorithms and theories, such as information theory, coding theory, decision/estimation theory, mathematical modeling, and optimization theory. Mathematical optimization techniques play an important role in many practical systems and research areas, including science, engineering, economics, statistics, and medicine. The purpose of optimization is to find the best possible value of the objective function. Optimization problems are composed of three elements: the objective function, the constraints, and the optimization variables. Linear programming is a highly useful tool for analysis and optimization. Optimization problems are categorized as convex or non-convex. Convex optimization covers convexity analysis, modeling and problem formulation, optimization, and numerical analysis. The gradient descent method is one of the most widely used optimization methods because it is simple, suits large-scale problems, and works well under few assumptions.
    Engineering optimization
    Derivative-Free Optimization
    Discrete optimization
    Random optimization
    Convexity
    Citations (1)
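As a concrete instance of the gradient descent method mentioned above, here is a minimal sketch minimizing a small convex quadratic (the function and step size are my own illustrative choices):

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Minimize the convex quadratic f(x) = (x0 - 3)^2 + (x1 + 1)^2,
# whose gradient is (2(x0 - 3), 2(x1 + 1)) and whose minimum is (3, -1).
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
xmin = gradient_descent(grad, [0.0, 0.0])
```

With step size 0.1 the error contracts by a factor of 0.8 per iteration on this problem, so 100 iterations land within floating-point noise of the minimizer.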
Optimization problems are crucial in artificial intelligence: optimization algorithms are generally used to tune artificial intelligence models so as to minimize the error of mapping inputs to outputs. Current evaluations of optimization algorithms generally consider performance only in terms of solution quality. However, not all algorithms on all test cases are equal in this respect, and computation time should also be considered. In this paper, we investigate both the solution quality and the computation time of optimization algorithms, instead of a one-for-all evaluation of quality. We select well-known optimization algorithms (Bayesian optimization and evolutionary algorithms) and evaluate them on benchmark test functions in terms of quality and computation time. The results show that BO is suitable for optimization tasks that must reach a desired quality within a limited number of function evaluations, whereas EAs are suitable for tasks that allow enough function evaluations to search for the optimal solution. This paper provides recommendations for selecting suitable optimization algorithms for problems with different function-evaluation budgets, which helps obtain the desired quality with less computation time.
    Bayesian Optimization
    Benchmark (surveying)
    Derivative-Free Optimization
    Engineering optimization
    Citations (0)
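The budget-limited evaluation this abstract describes can be reproduced in miniature by wrapping the objective in an evaluation counter, so any optimizer is compared fairly under the same number of function evaluations. A sketch with random search standing in for the optimizer (the paper's actual BO/EA implementations are not specified here):

```python
import random

class BudgetedObjective:
    """Wrap an objective so every call counts against a fixed budget."""
    def __init__(self, f, budget):
        self.f, self.budget, self.calls = f, budget, 0
    def __call__(self, x):
        if self.calls >= self.budget:
            raise RuntimeError("evaluation budget exhausted")
        self.calls += 1
        return self.f(x)

def random_search(obj, dim, budget, lo=-5.0, hi=5.0, seed=1):
    """Baseline optimizer: best of `budget` uniform random samples."""
    rng = random.Random(seed)
    wrapped = BudgetedObjective(obj, budget)
    best_x, best_f = None, float("inf")
    try:
        while True:  # loop until the budget cuts us off
            x = [rng.uniform(lo, hi) for _ in range(dim)]
            fx = wrapped(x)
            if fx < best_f:
                best_x, best_f = x, fx
    except RuntimeError:
        pass
    return best_x, best_f, wrapped.calls

sphere = lambda x: sum(v * v for v in x)
x, fx, used = random_search(sphere, dim=3, budget=50)
```

Recording `wrapped.calls` alongside wall-clock time gives exactly the two axes (quality and cost) that the abstract argues should be reported together.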
The objective of this research is to efficiently solve discontinuous optimization problems as well as optimization problems with large infeasible regions in the design-variable space. Recently, major optimization targets have shifted to more complicated ones, such as topology optimization, discontinuous optimization, robust optimization, and high-dimensional optimization problems. The aim of this research is to solve such complicated problems efficiently using machine learning technologies. In aerodynamic optimization at supersonic flow conditions, aerodynamic objective functions are discontinuous due to shock waves, and both the discontinuities and the large infeasible regions caused by strong shock waves must be handled. In this research, therefore, we develop an efficient global optimization method for discontinuous optimization problems with infeasible regions using a classification method (EGODISC). The developed method is compared with a Bayesian optimization method using Matern 5/2 kernel Gaussian process regression and with a genetic algorithm to verify its usefulness. The Bayesian optimization falls into an infinite loop by repeatedly selecting additional sample points in the infeasible regions, whereas the developed method works well with infeasible regions in the design-variable space. It is confirmed that EGODISC can be used effectively both with discontinuous aerodynamic objective functions and for a shape optimization problem whose large infeasible regions arise from negative airfoil thickness.
    Bayesian Optimization
    Random optimization
    Derivative-Free Optimization
    Topology optimization
    Engineering optimization
    Multidisciplinary design optimization
    Robust Optimization
    Global Optimization
    Shape Optimization
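The classification idea behind EGODISC, as described, is to learn which regions are infeasible and steer sampling away from them. A toy sketch, with a k-nearest-neighbour feasibility classifier standing in for whatever classifier the paper actually uses, and a made-up feasibility rule for the training data:

```python
import random

def knn_feasible(x, labeled, k=3):
    """Predict feasibility of x by majority vote of the k nearest labeled points."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(labeled, key=lambda pt: dist(pt[0], x))[:k]
    return 2 * sum(lab for _, lab in nearest) > k

def propose_feasible(labeled, dim, rng, tries=1000, lo=-1.0, hi=1.0):
    """Sample candidates, keeping the first one the classifier calls feasible.

    This plays the role the classifier has in EGODISC: the optimizer never
    wastes an expensive evaluation on a point predicted infeasible.
    """
    for _ in range(tries):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        if knn_feasible(x, labeled):
            return x
    return None

# Toy labeled data: a point is feasible iff its first coordinate is >= 0.
rng = random.Random(3)
labeled = []
for _ in range(200):
    p = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    labeled.append((p, 1 if p[0] >= 0.0 else 0))

cand = propose_feasible(labeled, dim=2, rng=rng)
```

Filtering candidates this way is what prevents the infinite-loop failure mode the abstract attributes to plain Bayesian optimization, which keeps proposing points inside the infeasible region.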
We focused on a decomposition-type optimization method that makes individual optimization problems easier to solve by dividing the original problem into smaller problems in multiple design spaces. Based on the parametric relationships of the product system model, we then studied a hierarchical optimization method that divides the problem into optimizable subproblems whose objective functions can be definitely defined. Focusing on the dependency relationships between the design variables and the objective function, we constructed an algorithm that divides and hierarchizes the optimization problem into such optimizable subproblems. In addition, we constructed an automatic division and hierarchization method that realizes the division/hierarchization algorithm without user intervention, yielding a practical automatic hierarchical optimization method.
    Engineering optimization
    Derivative-Free Optimization
    Global Optimization
    Discrete optimization
    L-reduction
    Optimization algorithm
    Random optimization
    Citations (12)
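The division step described above, grouping design variables by their dependency on the objective, can be sketched as finding connected components of the variable-dependency graph; this is a simplified stand-in for the paper's division/hierarchization algorithm, with a toy separable objective as the example:

```python
def variable_groups(terms):
    """Group variables into independent subproblems.

    `terms` is a list of (variable-index set, function) pairs; two variables
    belong to the same subproblem when some term couples them, i.e. the
    groups are the connected components of the variable-dependency graph.
    Implemented with a small union-find (path-halving) structure.
    """
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for vars_, _ in terms:
        vs = list(vars_)
        for v in vs[1:]:
            union(vs[0], v)
        if len(vs) == 1:
            find(vs[0])  # register isolated variables too

    groups = {}
    for v in parent:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())

# f(x) = (x0 - x1)^2 + x2^2 + (x3 + x4)^2 splits into three
# independent subproblems over {x0, x1}, {x2}, and {x3, x4}.
terms = [({0, 1}, None), ({2}, None), ({3, 4}, None)]
groups = variable_groups(terms)
```

Each group can then be optimized separately, which is the sense in which decomposition makes the individual problems easier to solve.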
This research focused on a decomposition-type optimization method that makes individual optimization problems easier to solve by dividing the original problem into smaller problems in multiple design spaces. Based on the parametric relationships of the product system model, a hierarchical optimization method that divides the problem into optimizable subproblems whose objective functions can be definitely defined was studied. For this purpose, this research focused on the dependency relationships between the design variables and the objective function and constructed an algorithm that divides and hierarchizes the optimization problem into such optimizable subproblems. Using this algorithm, the method was adapted to 3DLSI rough design. As a result, a proper definition of the optimization problem was obtained.
    Engineering optimization
    Multidisciplinary design optimization
    Random optimization
    Citations (1)