Smart community microgrids can efficiently address the energy and environmental challenges faced by cities. However, the inherent instability of renewable energy sources and the diversity of user demands pose challenges to the safe operation of community power systems. In this article, we first introduce a comprehensive system architecture and an operational framework based on the Energy Internet of Things (EIoT), which accounts for system-level safety, reliability, and cost-effectiveness, thereby enhancing the system's coordination and performance. Next, we propose a bi-level coordinated optimization method based on users' electricity consumption behaviors. At the planning level, we employ a multiobjective optimization approach to determine the microgrid configurations best suited to the requirements of different user groups, and the results derived from an adaptive-weight particle swarm optimization (PSO) algorithm are fed back to the operational level. At the operational level, a 24-h time scale is selected, and the economic-efficiency problem is solved with a linear programming method. The operational decision results are then fed back to the planning level to guide major maintenance of the microgrid system. Meanwhile, we employ trend prediction methods to categorize maintenance tasks into short-term and long-term operations based on an analysis of daily operational data. The short-term prediction results can guide daily operation and maintenance tasks, while the long-term prediction results can inform renovation and reconstruction of the community microgrid. Finally, we take a community as a case study; the results indicate that our work provides new methods for the design and operation of microgrids in smart communities, thereby improving the scalability of the community's power system.
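The planning level described above relies on an adaptive-weight PSO to search candidate microgrid configurations. The minimal sketch below illustrates one common adaptive-weight variant, in which each particle's inertia weight is adjusted according to its fitness relative to the swarm average; the two decision variables (PV capacity and battery capacity) and the weighted-sum cost function are hypothetical stand-ins for the paper's multiobjective planning model, not the actual formulation.

```python
import numpy as np

def planning_cost(x):
    """Hypothetical single-objective stand-in for the planning model:
    annualized investment plus a penalty proxy for unserved energy."""
    pv_kw, batt_kwh = x
    invest = 800.0 * pv_kw + 300.0 * batt_kwh
    shortage = max(0.0, 2000.0 - 4.0 * pv_kw - 0.5 * batt_kwh)
    return invest + 50.0 * shortage

def adaptive_weight_pso(f, lb, ub, n_particles=30, iters=200,
                        w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, dim))   # candidate configurations
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g_idx = pbest_f.argmin()
    gbest, gbest_f = pbest[g_idx].copy(), pbest_f[g_idx]
    for _ in range(iters):
        fit = np.array([f(p) for p in x])
        better = fit < pbest_f                          # update personal bests
        pbest[better], pbest_f[better] = x[better], fit[better]
        if fit.min() < gbest_f:                         # update global best
            gbest_f = fit.min()
            gbest = x[fit.argmin()].copy()
        f_avg, f_min = fit.mean(), fit.min()
        for i in range(n_particles):
            # Adaptive inertia: particles better than the swarm average get a
            # smaller weight (exploitation), worse particles keep the larger
            # weight (exploration).
            if fit[i] <= f_avg:
                w = w_min + (w_max - w_min) * (fit[i] - f_min) / max(f_avg - f_min, 1e-12)
            else:
                w = w_max
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = np.clip(x[i] + v[i], lb, ub)
    return gbest, gbest_f

best_x, best_cost = adaptive_weight_pso(planning_cost, lb=[0.0, 0.0], ub=[500.0, 2000.0])
```

In the full bi-level scheme, the illustrative `planning_cost` would instead call the 24-h linear-programming dispatch at the operational level, so that each candidate configuration is scored by the operating cost it induces.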
Advanced motion planning is crucial for safe and efficient robotic operation in smart manufacturing scenarios such as assembly, packaging, and palletizing. Compared with traditional motion planning methods, Reinforcement Learning (RL) adapts better to complex and dynamic working environments. However, training RL models is often time-consuming, and determining well-behaved reward-function parameters is challenging. To tackle these issues, we propose an adaptive robot motion planning approach based on digital twins and reinforcement learning. The core idea is to adaptively select either a geometry-based or an RL-based method for robot motion planning through a real-time distance-detection mechanism, which reduces the complexity of RL model training and accelerates the training process. In addition, we integrate Bayesian Optimization into RL training to refine the reward-function parameters. We validate the approach on a digital-twin-enabled robot system across five kinds of tasks (Pick and Place, Drawer Open, Light Switch, Button Press, Cube Push) in dynamic environments. Experimental results show that our approach outperforms a traditional RL-based method, with faster training and guaranteed task performance. This work contributes to the practical deployment of adaptive robot motion planning in smart manufacturing.
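The distance-detection-based planner selection described above can be sketched as follows. This is only an illustrative outline: the safety margin, the clearance computation over path waypoints, and the `geometric_planner`/`rl_policy` callables are assumptions for demonstration, not the paper's exact implementation.

```python
import numpy as np

SAFETY_MARGIN = 0.15  # metres; illustrative threshold, not a value from the paper

def min_clearance(waypoints, obstacle_points):
    """Smallest Euclidean distance between path waypoints (N, 3) and obstacle
    points (M, 3), both assumed to be expressed in the robot base frame."""
    diffs = waypoints[:, None, :] - obstacle_points[None, :, :]
    return np.linalg.norm(diffs, axis=-1).min()

def plan_motion(start, goal, obstacle_points, geometric_planner, rl_policy):
    """Per-query planner selection: use the cheap geometry-based planner when the
    nominal path keeps sufficient clearance from obstacles, otherwise fall back
    to the learned RL policy. Both planners are placeholders supplied by the
    caller (e.g. a Cartesian interpolator and a trained policy network)."""
    candidate = geometric_planner(start, goal)
    if obstacle_points.size == 0 or min_clearance(candidate, obstacle_points) > SAFETY_MARGIN:
        return candidate, "geometric"
    return rl_policy(start, goal, obstacle_points), "rl"

# Minimal usage with a straight-line interpolator standing in for both planners
straight = lambda s, g: np.linspace(s, g, num=50)
path, mode = plan_motion(np.zeros(3), np.array([0.4, 0.2, 0.3]),
                         obstacle_points=np.empty((0, 3)),
                         geometric_planner=straight,
                         rl_policy=lambda s, g, obs: straight(s, g))
```

Because the geometric planner handles the uncluttered queries, the RL policy only needs to be trained on the harder, obstacle-near cases, which is the mechanism the abstract credits for the reduced training complexity.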