Reinforcement Learning Driven Dual Neighborhood Structure Artificial Bee Colony Algorithm with Adaptive Neighborhood Search

35 Pages. Posted: 24 Feb 2024

Tingyu Ye, Ping Zhang, Hongliang Zeng, Jiahua Wang (South China University of Technology)

Abstract: The artificial bee colony (ABC) algorithm is currently one of the most popular swarm intelligence optimization algorithms. Although ABC has strong exploration capability, its exploitation ability is weak, making it difficult to find optimal solutions for complex optimization problems. Neighborhood topology-based search methods have been proposed for this problem and have achieved excellent results. However, the neighborhood topology size strongly affects search efficiency, and most current works have not considered it carefully; obtaining an appropriate neighborhood size is a challenging task. To further explore the potential of neighborhood topology, this paper proposes a reinforcement learning driven dual neighborhood structure ABC with adaptive neighborhood search (RL_DNSABC). In RL_DNSABC, a dual neighborhood structure is built that combines a reinforcement learning driven random neighborhood structure (RL_RNS) and the classical neighborhood structure based on Euclidean distance (EDNS). In RL_RNS, an adaptive neighborhood search method is designed based on reinforcement learning. A novel probability selection technique based on RL_RNS is then applied in the onlooker bee phase. Moreover, three search strategies with different preferences are devised to balance exploration and exploitation based on RL_RNS and EDNS. To verify the effectiveness of RL_DNSABC, it is compared with nineteen ABC variants on the classical benchmark set and the CEC 2013 benchmark set.
Experimental results show that RL_DNSABC achieves competitive performance compared with these algorithms.

Keywords: Artificial bee colony, Adaptive neighborhood search, Dual neighborhood structure, Reinforcement learning, Search strategy

Suggested Citation: Ye, Tingyu; Zhang, Ping; Zeng, Hongliang; and Wang, Jiahua, "Reinforcement Learning Driven Dual Neighborhood Structure Artificial Bee Colony Algorithm with Adaptive Neighborhood Search." Available at SSRN: https://ssrn.com/abstract=4737763

Ping Zhang (Contact Author), South China University of Technology, Wushan, Guangzhou 510640, China
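To make the abstract concrete, the classical ABC solution-update step that neighborhood topologies modify can be sketched as follows. This is a minimal illustration only: the partner solution is drawn from a supplied neighborhood index list, while the paper's actual contributions (RL-driven adaptive sizing of that neighborhood, the dual RL_RNS/EDNS structure, and the three preference-based strategies) are learned components not reproduced here.

```python
import random

def abc_neighborhood_search(population, i, neighborhood):
    """Classical ABC update: perturb one random dimension of solution i
    toward or away from a partner drawn from its neighborhood.

    `neighborhood` is a list of candidate partner indices; in the paper
    its size is tuned adaptively via reinforcement learning, which is
    not modeled in this sketch.
    """
    x = population[i]
    k = random.choice([idx for idx in neighborhood if idx != i])
    j = random.randrange(len(x))         # random dimension to perturb
    phi = random.uniform(-1.0, 1.0)      # scaling factor in [-1, 1]
    v = list(x)
    v[j] = x[j] + phi * (x[j] - population[k][j])
    return v
```

In the standard algorithm the candidate `v` replaces `x` only if it has better fitness (greedy selection); restricting `neighborhood` to a subset of the swarm is what distinguishes topology-based variants from the original global ABC.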
Precise perception of articulated objects is vital for empowering service robots. Recent studies mainly focus on point clouds, a single-modal approach, often neglecting vital texture and lighting details and assuming ideal conditions, such as optimal viewpoints, that are unrepresentative of real-world scenarios. To address these limitations, we introduce MARS, a novel framework for articulated object characterization. It features a multi-modal fusion module utilizing multi-scale RGB features to enhance point cloud features, coupled with reinforcement learning-based active sensing for autonomous optimization of observation viewpoints. In experiments conducted with various articulated object instances from the PartNet-Mobility dataset, our method outperformed current state-of-the-art methods in joint parameter estimation accuracy. Additionally, through active sensing, MARS further reduces errors, demonstrating enhanced efficiency in handling suboptimal viewpoints. Furthermore, our method effectively generalizes to real-world articulated objects, enhancing robot interactions. Code is available at https://github.com/robhlzeng/MARS.
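The multi-modal fusion idea (using RGB features at several scales to enrich per-point features) can be illustrated with a simple geometric gather. All names below are hypothetical and the operation is a naive concatenation; MARS's actual fusion module is learned and the details are in the paper and repository.

```python
import numpy as np

def fuse_rgb_point_features(point_feats, rgb_feats_per_scale, pixel_uv):
    """Illustrative multi-scale fusion: for each 3D point, gather the RGB
    feature at its projected pixel from every feature-map scale and
    concatenate it onto the point feature.

    point_feats:         (N, D) per-point features
    rgb_feats_per_scale: list of (H, W, C) feature maps at different scales
    pixel_uv:            (N, 2) projected pixel coordinates, normalized to [0, 1]
    """
    gathered = []
    for fmap in rgb_feats_per_scale:
        h, w, _ = fmap.shape
        # Scale normalized coordinates to this map's resolution and clamp.
        uv = np.clip((pixel_uv * [w - 1, h - 1]).astype(int), 0, [w - 1, h - 1])
        gathered.append(fmap[uv[:, 1], uv[:, 0]])   # (N, C) per scale
    return np.concatenate([point_feats] + gathered, axis=1)
```

The point is simply that camera projection aligns the two modalities, so texture/lighting cues absent from the point cloud become available per point; a learned module would replace the plain concatenation with attention or an MLP.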
In the field of 2D image generation modeling and representation learning, the Masked Generative Encoder (MAGE) has demonstrated the synergistic potential between generative modeling and representation learning. Inspired by this, we propose Point-MAGE to extend this concept to point cloud data. Specifically, this framework first utilizes a Vector Quantized Variational Autoencoder (VQVAE) to reconstruct a neural field representation of 3D shapes, thereby learning discrete semantic features of point patches. Subsequently, by combining the masking model with variable masking ratios, we achieve synchronous training for both generation and representation learning. Furthermore, our framework seamlessly integrates with existing point cloud self-supervised learning (SSL) models, thereby enhancing their performance. We extensively evaluate the representation learning and generation capabilities of Point-MAGE. In shape classification tasks, Point-MAGE achieves an accuracy of 94.2% on the ModelNet40 dataset and 92.9% (+1.3%) on the ScanObjectNN dataset. Additionally, it achieves new state-of-the-art performance in few-shot learning and part segmentation tasks. Experimental results also confirm that Point-MAGE can generate detailed and high-quality 3D shapes in both unconditional and conditional settings.
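The variable-masking-ratio mechanism that unifies generation and representation learning can be sketched as follows. This is a hedged illustration, not Point-MAGE's exact sampler: the function name and the ratio range are assumptions; the key idea (drawing a fresh ratio per training sample so low ratios favor representation learning and ratios near 1.0 approximate generative training) follows MAGE.

```python
import numpy as np

def sample_patch_mask(num_patches, rng, lo=0.5, hi=1.0):
    """Variable-ratio masking over point patches (illustrative).

    A masking ratio is drawn uniformly per sample from [lo, hi]; the
    bounds here are assumed, not taken from the paper. Masked patches
    are the ones the model must reconstruct from the VQVAE token codes.
    """
    ratio = rng.uniform(lo, hi)
    n_mask = int(round(ratio * num_patches))
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.choice(num_patches, size=n_mask, replace=False)] = True
    return mask
```

With a fixed low ratio this reduces to a standard masked-autoencoder setup; letting the ratio vary up to full masking is what allows a single model to serve both objectives.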