Wafer defect inspection is a crucial semiconductor processing technology because it helps identify surface defects during fabrication and ultimately improves yield. Manual inspection by the human eye is subjective, and long-term fatigue can lead to erroneous classification. Deep learning techniques such as convolutional neural networks (CNNs) are a promising way to achieve automated wafer defect classification. However, training a CNN is time-consuming, and it is nontrivial to fine-tune its hyperparameters to achieve good classification performance. In this study, the Arithmetic Optimization Algorithm (AOA) is proposed to optimize CNN hyperparameters, such as momentum, initial learning rate, maximum number of epochs, and L2 regularization, to reduce the burden of trial-and-error tuning. The hyperparameters of a well-known pretrained model, GoogLeNet, are optimized using AOA to perform the wafer defect classification task. Simulation studies report that the AOA-optimized GoogLeNet achieves a promising accuracy of 91.32% in classifying wafer defects.
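The AOA search described above can be sketched in a few lines. The snippet below is a minimal, illustrative implementation in which a toy objective stands in for the real fitness (which would train GoogLeNet on the wafer dataset and return its validation error per candidate hyperparameter set); the bound values and the "good" target setting are assumptions, not the paper's configuration.

```python
import random

def aoa_minimize(objective, lb, ub, n_agents=20, n_iters=200, seed=0):
    """Minimal Arithmetic Optimization Algorithm (AOA) sketch.

    `objective` maps a list of floats to a scalar to minimize; `lb`/`ub`
    are per-dimension bounds. Constants (alpha=5, mu=0.499) follow the
    original AOA formulation; everything else here is a simplification.
    """
    rng = random.Random(seed)
    dim = len(lb)
    alpha, mu, eps = 5.0, 0.499, 1e-12
    pop = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(n_agents)]
    best = min(pop, key=objective)[:]
    best_f = objective(best)
    for t in range(1, n_iters + 1):
        moa = 0.2 + t * (0.9 - 0.2) / n_iters            # Math Optimizer Accelerated
        mop = 1.0 - (t ** (1.0 / alpha)) / (n_iters ** (1.0 / alpha))
        for i in range(n_agents):
            new = []
            for d in range(dim):
                scale = (ub[d] - lb[d]) * mu + lb[d]
                if rng.random() > moa:                   # exploration: division / multiplication
                    x = best[d] / (mop + eps) * scale if rng.random() < 0.5 else best[d] * mop * scale
                else:                                    # exploitation: subtraction / addition
                    x = best[d] - mop * scale if rng.random() < 0.5 else best[d] + mop * scale
                new.append(min(max(x, lb[d]), ub[d]))    # clip to bounds
            pop[i] = new
            f = objective(new)
            if f < best_f:
                best, best_f = new[:], f
    return best, best_f

# Toy stand-in for hyperparameter fitness: squared distance to an assumed
# "good" setting (momentum, initial learning rate, max epochs, L2 factor).
target = [0.9, 0.01, 30.0, 1e-4]
obj = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
best, best_f = aoa_minimize(obj, lb=[0.0, 1e-4, 5.0, 0.0], ub=[0.99, 0.1, 50.0, 1e-2])
```

In a real run, each `objective` call would be one full (or early-stopped) CNN training, which is why reducing the number of evaluations matters.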
Convolutional neural networks (CNNs) have excelled in artificial intelligence, particularly in image-related tasks such as classification and object recognition. However, manually designing CNN architectures demands significant domain expertise and involves time-consuming trial-and-error processes, along with substantial computational resources. To overcome this challenge, an automated network design method known as Modified Teaching-Learning-Based Optimization with Refined Knowledge Sharing (MTLBORKS-CNN) is introduced. It autonomously searches for optimal CNN architectures, achieving high classification performance on specific datasets without human intervention. MTLBORKS-CNN incorporates four key features. It employs an effective encoding scheme for various network hyperparameters, facilitating the search for innovative and valid network architectures. During the modified teacher phase, it leverages a social learning concept to calculate unique exemplars that effectively guide learners while preserving diversity. In the modified learner phase, self-learning and adaptive peer learning are incorporated to enhance knowledge acquisition of learners during CNN architecture optimization. Finally, MTLBORKS-CNN employs a dual-criterion selection scheme, considering both fitness and diversity, to determine the survival of learners in subsequent generations. MTLBORKS-CNN is rigorously evaluated across nine image datasets and compared with state-of-the-art methods. The results consistently demonstrate MTLBORKS-CNN’s superiority in terms of classification accuracy and network complexity, suggesting its potential for infrastructural development of smart devices.
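As an illustration of the kind of encoding scheme the first feature refers to, the sketch below decodes a flat real-valued genotype into a list of convolutional-layer settings. The gene layout, value ranges, and decoding rules here are hypothetical; the actual MTLBORKS-CNN encoding is richer than this.

```python
# Hypothetical encoding sketch: a fixed-length real vector in [0, 1]
# decoded into a CNN architecture description. This only illustrates the
# idea of searching architectures through a flat numeric genotype.
FILTERS = [16, 32, 64, 128]
KERNELS = [3, 5, 7]
MAX_CONV = 4

def decode(genes):
    """genes: list of floats in [0, 1], length 1 + 2 * MAX_CONV."""
    n_conv = 1 + int(genes[0] * (MAX_CONV - 1) + 0.5)   # 1..MAX_CONV conv layers
    layers = []
    for i in range(n_conv):
        f = FILTERS[min(int(genes[1 + 2 * i] * len(FILTERS)), len(FILTERS) - 1)]
        k = KERNELS[min(int(genes[2 + 2 * i] * len(KERNELS)), len(KERNELS) - 1)]
        layers.append({"filters": f, "kernel": k})
    return layers

arch = decode([0.7, 0.1, 0.9, 0.5, 0.2, 0.99, 0.0, 0.4, 0.6])
```

Any genotype in the unit hypercube decodes to a valid architecture, which is what lets population-based optimizers manipulate architectures as ordinary numeric vectors.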
Artificial neural networks (ANNs) have achieved great success in machine learning tasks, including classification, regression, prediction, image processing, and image recognition, due to their outstanding capabilities in training, learning, and organizing data. Conventionally, a gradient-based algorithm known as backpropagation (BP) is used to train the parameter values of an ANN. However, this method has inherent drawbacks: slow convergence, sensitivity to initial solutions, and a high tendency to become trapped in local optima. This paper proposes a modified particle swarm optimization (PSO) variant with two-level learning phases to train ANNs for image classification. A multi-swarm approach and a social learning scheme are designed into the primary learning phase to enhance population diversity and solution quality, respectively. Two modified search operators with different search characteristics are incorporated into the secondary learning phase to improve the algorithm's robustness in handling various optimization problems. Finally, the proposed algorithm is formulated as a training algorithm for ANNs, optimizing their neuron weights, biases, and choice of activation function for a given classification dataset. The ANN model trained by the proposed algorithm is reported to outperform those trained by existing PSO variants in terms of classification accuracy on the majority of the selected datasets, suggesting its potential in challenging real-world applications, such as intelligent condition monitoring of complex industrial systems.
Feature selection is a popular pre-processing technique applied to enhance the learning performance of machine learning models by removing irrelevant features without compromising accuracy. The rapid growth of input features in the big data era has increased the complexity of feature selection problems tremendously. Given their excellent global search ability, differential evolution (DE) and particle swarm optimization (PSO) are considered promising techniques for solving feature selection problems. In this paper, a new hybrid algorithm is proposed to solve feature selection problems more effectively by leveraging the strengths of both DE and PSO. The proposed feature selection algorithm is reported to achieve an average accuracy of 89.03% when solving seven datasets obtained from the UCI Machine Learning Repository.
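A minimal sketch of the hybrid idea, assuming a DE-style mutation pulled toward a PSO-style global best, with continuous vectors thresholded into binary feature masks and a synthetic fitness standing in for cross-validated classifier accuracy (the paper's actual operators and evaluation are not reproduced here):

```python
import math
import random

# Toy fitness standing in for classifier accuracy: features 0-2 are
# "relevant", the rest are noise; reward selecting relevant features
# and lightly penalize subset size (a common wrapper-style objective).
N_FEATURES, RELEVANT = 10, {0, 1, 2}

def fitness(mask):
    hit = sum(1 for i in RELEVANT if mask[i])
    return hit / len(RELEVANT) - 0.02 * sum(mask)

def to_mask(vec):
    """Sigmoid-threshold a continuous vector into a binary feature mask."""
    return [1 if 1.0 / (1.0 + math.exp(-v)) > 0.5 else 0 for v in vec]

def hybrid_select(n_pop=20, n_iters=100, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(N_FEATURES)] for _ in range(n_pop)]
    fit = [fitness(to_mask(p)) for p in pop]
    best = pop[max(range(n_pop), key=lambda i: fit[i])][:]
    for _ in range(n_iters):
        for i in range(n_pop):
            a, b = rng.sample(range(n_pop), 2)
            trial = []
            for d in range(N_FEATURES):
                # DE difference vector plus PSO-like attraction to global best
                v = pop[i][d] + 0.8 * (best[d] - pop[i][d]) + 0.5 * (pop[a][d] - pop[b][d])
                trial.append(v if rng.random() < 0.9 else pop[i][d])  # crossover
            f = fitness(to_mask(trial))
            if f >= fit[i]:                 # greedy DE-style selection
                pop[i], fit[i] = trial, f
        best = pop[max(range(n_pop), key=lambda i: fit[i])][:]
    return to_mask(best), max(fit)

mask, score = hybrid_select()
```

In a real wrapper, `fitness` would train and validate a classifier on the selected feature subset, so the size penalty trades a small amount of accuracy for smaller, cheaper feature sets.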
Different variants of particle swarm optimization (PSO) algorithms have been introduced in recent years with various improvements to tackle different types of optimization problems more robustly. However, the conventional initialization scheme tends to generate an initial population with relatively inferior solutions due to its random-guess mechanism. In this paper, a PSO variant known as modified PSO with a chaotic initialization scheme is introduced to solve unconstrained global optimization problems more effectively by generating a more promising initial population. Experimental studies are conducted to assess and compare the optimization performance of the proposed algorithm against four existing well-established PSO variants using seven test functions. The proposed algorithm is observed to outperform its competitors in solving the selected test problems.
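Chaotic initialization schemes of this kind are commonly built on the logistic map; the sketch below generates an initial population that way. Whether the paper uses this exact map, parameter, and seed value is an assumption.

```python
def chaotic_population(n_agents, dim, lb, ub, x0=0.7):
    """Generate an initial population from the logistic map
    x_{k+1} = 4 * x_k * (1 - x_k), a common chaotic-sequence choice
    (assumed here, not taken from the paper). The map's values in (0, 1)
    are scaled onto [lb, ub] per dimension; x0 must avoid the map's
    fixed points (e.g. 0, 0.25, 0.5, 0.75).
    """
    x = x0
    pop = []
    for _ in range(n_agents):
        agent = []
        for d in range(dim):
            x = 4.0 * x * (1.0 - x)                 # logistic map at r = 4 (chaotic regime)
            agent.append(lb[d] + x * (ub[d] - lb[d]))
        pop.append(agent)
    return pop

pop = chaotic_population(n_agents=5, dim=3, lb=[-5.0] * 3, ub=[5.0] * 3)
```

Compared with uniform random initialization, the chaotic sequence is deterministic and tends to spread points over the search range with low repetition, which is the motivation for a "more promising initial population".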