Divisive hierarchical clustering is a powerful tool for extracting knowledge from data at multiple, appropriate levels of information granularity. Recent hierarchical clustering algorithms apply Growing Neural Gas (GNG) as the data-division mechanism. However, GNG-based algorithms tend to generate nodes excessively and are sensitive to the input order of data points. Furthermore, the plasticity-stability dilemma is another unavoidable problem. In this paper, we propose a divisive hierarchical clustering algorithm based on Adaptive Resonance Theory (ART)-based clustering. Simulation experiments show that the proposed algorithm generates an appropriate tree structure for the given data while improving the performance of hierarchical clustering.
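A minimal sketch of the top-down splitting idea described above, assuming a generic two-way clusterer stands in for the ART-based clustering used in the paper (scikit-learn's KMeans is used here purely for illustration; the `divisive_tree` name and the stopping rule are ours, not the paper's):

```python
# Minimal sketch of divisive hierarchical clustering (top-down splitting).
# The ART-based clustering used in the paper is replaced here by
# scikit-learn's KMeans purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

def divisive_tree(X, min_size=10, depth=0, max_depth=4):
    """Recursively split X into two children until a stopping rule fires."""
    node = {"size": len(X), "depth": depth, "children": []}
    if len(X) < min_size or depth >= max_depth:
        return node  # leaf: too small or too deep to split further
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for k in (0, 1):
        child = X[labels == k]
        if len(child) > 0:
            node["children"].append(divisive_tree(child, min_size, depth + 1, max_depth))
    return node

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in (0.0, 1.0, 2.0)])
tree = divisive_tree(X)
print(tree["size"], [c["size"] for c in tree["children"]])
```

Each recursive split adds one level to the tree, so the stopping rule (minimum node size or maximum depth) controls the granularity of the resulting hierarchy.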
This paper proposes a supervised classification algorithm capable of continual learning by utilizing an Adaptive Resonance Theory (ART)-based growing self-organizing clustering algorithm. The ART-based clustering algorithm is theoretically capable of continual learning, and the proposed algorithm applies it independently to the training data of each class to generate a classifier. Whenever an additional training data set from a new class is given, a new ART-based clustering network is defined in a separate learning space. Thanks to these features, the proposed algorithm realizes continual learning capability. Simulation experiments show that the proposed algorithm has superior classification performance compared with state-of-the-art clustering-based classification algorithms capable of continual learning.
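The per-class design can be sketched as follows, assuming a simple vigilance-style prototype learner as a stand-in for the ART-based clustering algorithm (the `PerClassClassifier` wrapper, the threshold rule, and the class labels are illustrative, not the paper's):

```python
# Minimal sketch of the per-class design: each class trains its own
# prototype-based clusterer, and a new class simply adds a new, independent
# clusterer. The vigilance-style prototype learner below only stands in for
# the ART-based clustering algorithm; it is not the paper's method.
import numpy as np

def learn_prototypes(X, vigilance=0.5, lr=0.1):
    """Create a new prototype whenever no existing one is within `vigilance`."""
    prototypes = []
    for x in X:
        if prototypes:
            d = np.linalg.norm(np.asarray(prototypes) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= vigilance:
                prototypes[j] = prototypes[j] + lr * (x - prototypes[j])
                continue
        prototypes.append(x.copy())
    return np.asarray(prototypes)

class PerClassClassifier:
    def __init__(self):
        self.models = {}                      # one prototype set per class

    def partial_fit(self, X, label):          # a new class adds a new model
        self.models[label] = learn_prototypes(X)

    def predict(self, x):
        # assign x to the class whose nearest prototype is closest
        return min(self.models,
                   key=lambda c: np.linalg.norm(self.models[c] - x, axis=1).min())

rng = np.random.default_rng(1)
clf = PerClassClassifier()
clf.partial_fit(rng.normal(0.0, 0.2, (100, 2)), "A")
clf.partial_fit(rng.normal(2.0, 0.2, (100, 2)), "B")   # added later
print(clf.predict(np.array([1.9, 2.1])))
```

Because each class owns its own model, adding class "B" later does not modify the prototypes learned for "A"; this independence is the property that supports continual learning.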
Metabolic Syndrome (MetS) comprises metabolic abnormalities that lead to non-communicable diseases such as type II diabetes, cardiovascular diseases, and cancer. Early and accurate diagnosis of this abnormality is required to prevent its further progression to these diseases. This paper aims to diagnose the risk of MetS using a new non-clinical approach called "genetically optimized Bayesian adaptive resonance theory mapping" (GOBAM). We evolve Bayesian adaptive resonance theory mapping (BAM) by using a genetic algorithm to optimize the parameters of BAM and its training input sequence. The GOBAM algorithm classifies individuals as either at risk of MetS or not at risk, together with a posterior probability ranging between 0 and 1. A data set of 11,237 Malaysians from the CLUSTer study, stratified by age and gender into four subcategories, was used to evaluate the proposed GOBAM algorithm. The comparative evaluation of our results suggests that GOBAM performs significantly better than classical adaptive resonance theory mapping models on the area under the receiver operating characteristic curve (AUC) and other criteria. Our algorithm gives an AUC of 86.42%, 87.04%, 91.08%, and 89.24% for the young female, middle-aged female, young male, and middle-aged male subcategories, respectively. The proposed model can support medical practitioners in the early and accurate diagnosis of MetS.
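A minimal sketch of the optimization loop implied above: a genetic algorithm searches jointly over a model parameter and a permutation of the training-input order. The `evaluate` function below is a toy placeholder for training a BAM classifier and scoring it (e.g., by AUC); the encoding and operators are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of the GOBAM idea: a genetic algorithm searches over both a
# model parameter (here a single `vigilance` value) and a permutation of the
# training-sample order. `evaluate` is a placeholder fitness, not the paper's.
import random

def evaluate(vigilance, order):
    # toy fitness: prefers vigilance near 0.7 and a nearly sorted input order
    sortedness = sum(a < b for a, b in zip(order, order[1:])) / (len(order) - 1)
    return 1.0 - abs(vigilance - 0.7) + 0.5 * sortedness

def mutate(ind):
    vig, order = ind
    vig = min(1.0, max(0.0, vig + random.gauss(0, 0.05)))
    order = order[:]
    i, j = random.sample(range(len(order)), 2)   # swap two training positions
    order[i], order[j] = order[j], order[i]
    return (vig, order)

random.seed(0)
n_samples, pop_size, generations = 20, 30, 40
population = [(random.random(), random.sample(range(n_samples), n_samples))
              for _ in range(pop_size)]
for _ in range(generations):
    population.sort(key=lambda ind: evaluate(*ind), reverse=True)
    parents = population[: pop_size // 2]                 # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(population, key=lambda ind: evaluate(*ind))
print(round(best[0], 3), round(evaluate(*best), 3))
```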
Generating various strategies for a given task is challenging. However, doing so has been shown to benefit the main learning process, for example through improved behavior exploration. With the growing interest in solution heterogeneity in evolutionary computation and reinforcement learning, many promising approaches have emerged. To better understand how to guide multiple policies toward distinct strategies and benefit from diversity, we need to further analyze the influence of reward-signal modulation and other evolutionary mechanisms on the obtained behaviors. To that end, this paper considers an existing evolutionary reinforcement learning framework that exploits multi-objective optimization to obtain policies that succeed at behavior-related tasks while also completing the main goal. Experiments on Atari games show that optimization formulations that do not weight the objectives equally fail to generate diversity and can even produce agents that are worse at solving the problem at hand, regardless of the behaviors obtained.
A promising idea for evolutionary constrained optimization is to efficiently utilize not only feasible solutions (feasible individuals) but also infeasible ones. In this paper, we propose a simple implementation of this idea in MOEA/D. In the proposed method, MOEA/D has two grids of weight vectors. One is used for maintaining the main population, as in the standard MOEA/D. In the main population, feasible solutions always have higher fitness than infeasible ones, and among infeasible solutions, those with smaller constraint violations have higher fitness. The other grid is used for maintaining a secondary population in which non-dominated solutions with respect to scalarizing function values and constraint violations are stored. More specifically, a single non-dominated solution with respect to the scalarizing function and the total constraint violation is stored for each weight vector. A new solution is generated from a pair of neighboring solutions in the two grids. That is, there are three possible parent combinations: both parents from the main population, both from the secondary population, or one from each population. The proposed MOEA/D variant is compared with the standard MOEA/D and other evolutionary algorithms for constrained multiobjective optimization through computational experiments.
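A minimal sketch of the two selection rules described above, assuming a Tchebycheff scalarizing function; the dictionaries, function names, and the single-slot archive update are our illustration rather than the paper's exact formulation:

```python
# `better_main` sketches the main-population rule (feasible beats infeasible;
# among infeasible, smaller total constraint violation wins; among feasible,
# the scalarizing value decides). `update_secondary` keeps, per weight vector,
# a solution that is non-dominated in (scalarizing value, total violation).
import numpy as np

def tchebycheff(f, weight, ideal):
    return np.max(weight * np.abs(np.asarray(f) - ideal))

def better_main(a, b, weight, ideal):
    """Return True if solution a is preferred over b for this weight vector."""
    ga, gb = tchebycheff(a["f"], weight, ideal), tchebycheff(b["f"], weight, ideal)
    if a["cv"] == 0 and b["cv"] == 0:      # both feasible: compare scalarizing values
        return ga < gb
    if a["cv"] == 0 or b["cv"] == 0:       # exactly one feasible: it wins
        return a["cv"] == 0
    return a["cv"] < b["cv"]               # both infeasible: smaller violation wins

def update_secondary(entry, cand, weight, ideal):
    """Replace the stored entry only if the candidate dominates it."""
    if entry is None:
        return cand
    g_old = tchebycheff(entry["f"], weight, ideal)
    g_new = tchebycheff(cand["f"], weight, ideal)
    return cand if (g_new <= g_old and cand["cv"] <= entry["cv"]) else entry

ideal, weight = np.zeros(2), np.array([0.5, 0.5])
a = {"f": [0.2, 0.9], "cv": 0.0}            # feasible
b = {"f": [0.1, 0.3], "cv": 0.4}            # infeasible but with better objectives
print(better_main(a, b, weight, ideal))            # True: feasibility comes first
print(update_secondary(b, a, weight, ideal) is b)  # True: a does not dominate b
```

The last line illustrates why infeasible solutions remain useful: an infeasible solution with a good scalarizing value can survive in the secondary population and later serve as a parent.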
A multimodal multiobjective optimization problem (MMOP) may have more than one Pareto optimal solution with the same objective vector, and the difficulty of finding each of these solutions can differ. Although a number of evolutionary multimodal multiobjective algorithms (EMMAs) have been proposed, they are unable to solve such an MMOP due to their convergence-first selection criteria: they quickly converge to the Pareto optimal solutions that are easy to find and therefore lose diversity in the decision space. That is, such an MMOP features an imbalance between achieving convergence and preserving diversity in the decision space. In this article, we first present a set of imbalanced distance-minimization benchmark problems. We then propose an evolutionary algorithm using a convergence-penalized density method (CPDEA). In CPDEA, the distances among solutions in the decision space are transformed based on their local convergence quality, and their density values, estimated from the transformed distances, are used as the selection criterion. We compare CPDEA with five state-of-the-art EMMAs on the proposed benchmarks. Our experimental results show that CPDEA is clearly superior in solving these problems.
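A rough sketch of the selection idea, with a hypothetical distance transformation standing in for the one used by CPDEA (the shrink factor `eta`, the kernel density estimate, and all names are assumptions for illustration only):

```python
# Pairwise distances in the decision space are first transformed using each
# solution's local convergence quality, a density value is estimated from the
# transformed distances, and the least dense solutions are kept. The specific
# transformation below is an illustrative stand-in, not CPDEA's exact formula.
import numpy as np

def transformed_density(X, convergence, eta=2.0):
    """X: decision vectors; convergence: in [0, 1], larger = better converged."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    quality = 0.5 * (convergence[:, None] + convergence[None, :])
    d_t = d / (1.0 + eta * quality)          # good convergence -> shorter distances
    np.fill_diagonal(d_t, np.inf)
    return np.sum(np.exp(-d_t), axis=1)      # kernel density estimate per solution

rng = np.random.default_rng(2)
X = rng.random((8, 2))
conv = rng.random(8)
density = transformed_density(X, conv)
survivors = np.argsort(density)[:4]          # keep the least crowded half
print(survivors)
```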
This paper addresses the typical problems of self-organizing growing network models, namely (a) the influence of the input data order on self-organizing ability, (b) instability on high-dimensional data and excessive sensitivity to noise, and (c) high computational cost, by integrating the Kernel Bayes Rule (KBR) and the Correntropy-Induced Metric (CIM) into the Adaptive Resonance Theory (ART) framework. KBR performs covariance-free Bayesian computation, which keeps the computation fast and stable. CIM is a generalized similarity measure that maintains a high noise-reduction ability even in high-dimensional spaces. In addition, a Growing Neural Gas (GNG)-based topology construction process is integrated into the ART framework to enhance its self-organizing ability. Simulation experiments with synthetic and real-world datasets show that the proposed model has an outstanding, stable self-organizing ability across various test environments.
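For reference, a minimal sketch of CIM in its common Gaussian-kernel form, CIM(x, y) = sqrt(kappa(0) - (1/n) * sum_i kappa(x_i - y_i)), where kappa is a Gaussian kernel of bandwidth sigma; the bandwidth value below is arbitrary and chosen only for illustration:

```python
# CIM with a Gaussian kernel: per-dimension kernel values are bounded, so a
# single large outlier dimension contributes far less than it would under a
# Euclidean distance, which is the noise-robustness property noted above.
import numpy as np

def cim(x, y, sigma=0.5):
    kernel = np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))   # per-dimension Gaussian kernel
    return np.sqrt(1.0 - kernel.mean())                     # kappa(0) = 1 for this kernel

x = np.zeros(10)
clean = x + 0.05                       # small uniform perturbation
noisy = x.copy()
noisy[0] = 5.0                         # one large outlier dimension
print(round(cim(x, clean), 3), round(cim(x, noisy), 3))              # both stay small
print(round(np.linalg.norm(x - clean), 3), round(np.linalg.norm(x - noisy), 3))  # Euclidean blows up
```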