Traditional seawater desalination consumes large amounts of energy, with correspondingly high costs and limited benefits, hindering wider application of the process. To improve the overall economic benefits of seawater desalination, the desalination load can be combined with renewable energy sources such as solar, wind, and ocean energy, or with the power grid, to enable effective regulation. This article reviews, from the perspective of energy internet (EI) technology, the energy-balance demands of regional power grids and the coordinated control between coastal multi-source multi-load systems and regional distribution networks that include desalination loads. Several key technologies are discussed in detail, including coordinated control of coastal multi-source multi-load systems with seawater desalination loads, flexible interaction between seawater desalination and regional distribution networks, and combined control of coastal multi-source multi-load-storage systems with seawater desalination loads. Adopting flexible interaction between seawater desalination and regional distribution networks helps solve water-resource problems, improves the capacity to accommodate distributed renewable energy, balances and increases grid loads, improves the safety and economy of coastal power grids, and enables coordinated, comprehensive application of power grids, renewable energy sources, and coastal loads.
Background Policy makers face increasingly complicated challenges in balancing saving lives against economic development in the post-vaccination era of a pandemic. Epidemic simulation models and pandemic control methods have been designed to tackle this problem. However, most existing approaches cannot be applied to real-world cases because they lack adaptability to new scenarios and micro-level representational ability (especially system dynamics models), demand huge computation, and use historical information inefficiently. Methods We propose a novel Pandemic Control decision-making framework via large-scale Agent-based modeling and deep Reinforcement learning (PaCAR) to search for optimal control policies that simultaneously minimize the spread of infection and the stringency of government restrictions. In this framework, we develop a new large-scale agent-based simulator with vaccination settings, which can be calibrated to serve as a realistic environment for a city or a state. We also design a novel reinforcement learning architecture suited to the pandemic control problem, with a reward carefully designed under the net monetary benefit framework and a sequence learning network that extracts information from sequential epidemiological observations, such as case counts and vaccination numbers. Results Our approach outperforms baselines designed by experts or adopted by real-world governments and is flexible in dealing with different variants, such as the Alpha and Delta variants of COVID-19. PaCAR succeeds in controlling the pandemic with the lowest economic costs, a relatively short epidemic duration, and few cases. We further conduct extensive experiments to analyze the reasoning behind the resulting policy sequences and distill them into an informative reference for policy makers in the post-vaccination era of COVID-19 and beyond. Limitations The modeling of economic costs, which are estimated directly from the level of government restrictions, is rather simple.
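As a rough illustration of how a net-monetary-benefit style reward can trade off monetized health gains against restriction costs, consider the sketch below. All parameter names and values are hypothetical and are not taken from the paper:

```python
# Hypothetical sketch of a net-monetary-benefit (NMB) style reward for a
# pandemic-control RL agent. Parameters are illustrative assumptions only.

def nmb_reward(new_cases, restriction_level,
               wtp_per_case_averted=50_000.0,    # assumed willingness-to-pay
               cost_per_restriction_level=1e6,   # assumed per-level economic cost
               baseline_cases=1_000.0):          # assumed no-intervention caseload
    """Reward = monetized health benefit - economic cost of restrictions."""
    cases_averted = max(baseline_cases - new_cases, 0.0)
    health_benefit = cases_averted * wtp_per_case_averted
    economic_cost = restriction_level * cost_per_restriction_level
    return health_benefit - economic_cost
```

A reward of this shape lets the agent weigh tighter restrictions (higher economic cost) against the cases they avert, which matches the tradeoff the framework optimizes.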
This article mainly focuses on several specific control methods and single-wave pandemic control. Conclusions The proposed framework PaCAR can offer adaptive pandemic control recommendations for different variants and population sizes. Intelligent pandemic control empowered by artificial intelligence may help us get through the current COVID-19 pandemic, and possible future pandemics, at a lower cost in both lives and economic losses. Highlights We introduce a new, efficient, large-scale agent-based epidemic simulator in our framework PaCAR, which can be used to train reinforcement learning networks in a real-world scenario with a population of more than 10,000,000. We develop a novel learning mechanism in PaCAR, which augments reinforcement learning with sequence learning, to learn policies that trade off saving lives against economic development in the post-vaccination era. We demonstrate that the policy learned by PaCAR outperforms different benchmark policies under various realistic conditions during COVID-19. We analyze the resulting policy given by PaCAR; the lessons learned may shed light on better pandemic preparedness plans in the future.
As operating frequency and power increase, thermal management becomes increasingly important in the design and application of RF and microwave circuits. In this paper, the microwave heating problem is studied. The discontinuous Galerkin time-domain (DGTD) method is adopted for the electromagnetic (EM) simulation, while the finite-element time-domain (FETD) method is used for the thermal simulation. Large-scale MPI parallel programming is used to accelerate both the DGTD and FETD methods. Numerical examples demonstrate the accuracy and efficiency of the proposed method.
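As a toy illustration of the thermal half of such a coupled simulation, the sketch below takes one explicit time step of a 1D heat equation with the EM dissipation entering as a volumetric source term. This uses a simplified finite-difference update, not the paper's FETD formulation, and all names and values are illustrative:

```python
# Toy 1D explicit thermal update: dT/dt = alpha * d2T/dx2 + q,
# where q stands in for locally dissipated microwave power.
# A simplified finite-difference sketch, not the paper's FETD method.

def step_heat_1d(T, q, dx, dt, alpha=1e-4):
    """Advance temperature profile T one explicit time step.

    T  : list of node temperatures (boundary values held fixed)
    q  : list of heat-source terms per node (K/s)
    """
    n = len(T)
    T_new = T[:]  # boundaries kept as-is (fixed-temperature condition)
    for i in range(1, n - 1):
        lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2
        T_new[i] = T[i] + dt * (alpha * lap + q[i])
    return T_new
```

In a coupled EM-thermal loop, the EM solver would refresh `q` each step while the thermal solver advances `T`; explicit schemes like this require `dt` small enough for stability.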
This paper quantitatively analyzes different types of image changes according to the characteristics of each algorithm and puts forward optimal algorithms for different types of pictures. First, four classical matching algorithms are selected and compared for robustness to scale, photometric, and rotational changes. To overcome the limited robustness of any single algorithm, three improved algorithms are proposed, which combine the SURF and ORB algorithms with one or more stages of feature-point screening to improve matching accuracy. Second, the improved algorithms are tested on images exhibiting multiple types of changes simultaneously; the results show that they are strongly robust and can effectively improve image matching accuracy. Finally, the simulation results show that selecting the optimal algorithm according to the features of the picture maximizes the advantages of the different algorithms, meeting requirements for both the number of matching points and the matching accuracy.
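One widely used feature-point screening step is Lowe's ratio test, which discards ambiguous matches. The sketch below assumes each candidate match is given as a pair of (best, second-best) descriptor distances; the thresholds are illustrative, not the paper's:

```python
# Sketch of feature-point screening via Lowe's ratio test, plus an optional
# absolute-distance cutoff. Matches are (best, second_best) distance pairs;
# threshold values are illustrative assumptions.

def screen_matches(matches, ratio=0.75, max_dist=None):
    """Keep matches whose best distance is clearly smaller than the
    second-best (unambiguous) and, optionally, below an absolute cutoff."""
    kept = []
    for best, second in matches:
        if best >= ratio * second:
            continue  # ambiguous: two candidates are nearly as good
        if max_dist is not None and best > max_dist:
            continue  # weak match even if unambiguous
        kept.append((best, second))
    return kept
```

In a combined SURF + ORB pipeline, a screen like this would be applied to each detector's raw matches before merging, so that only distinctive correspondences contribute to the final match set.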
Digital compute-in-memory (DCIM) performs robust and efficient high-precision (e.g., floating-point) multiply-and-accumulate operations (MACs) compared to analog CIM solutions. Prior DCIMs [1–4] normally adopt a dataflow with broadcast inputs and stationary weights, which attains peak energy efficiency (EE) and area efficiency (AE) only at full utilization. However, NN model sizes often mismatch the fixed CIM macro size in practical applications, leading to unavoidable degradation of utilization and efficiency. As shown in Fig. 1, the low-utilization issue, e.g., 49.8% for YOLO-v7, becomes more pronounced for lightweight edge AI models, e.g., 17.6% for EfficientNet-lite4. Low utilization also incurs energy wastage in the unused macro circuits. Furthermore, the large number of weight updates during DCIM weight preloading or reloading can degrade the system-level EE and AE, which is often overlooked. Although [3–4] hide weight updates behind simultaneous MAC operations, growing model sizes require more frequent weight reloading, leading to non-static-weight compute in DCIM.
In AI-edge devices, input features normally change gradually or only occasionally, e.g., in anomaly surveillance, so reprocessing unchanged data consumes a tremendous amount of redundant energy. Computing-in-memory (CIM) executes matrix-vector multiplications (MVMs) directly in memory, eliminating the costly data-movement energy of deep neural networks (DNNs) [2–6]. Prior CIM work only exploited the sparsity of DNNs to improve energy efficiency, but the trend toward non-sparse activation functions, e.g., leaky ReLU, degrades the benefits of leveraging sparsity [1]. Even when sparsity can be exploited, redundant unchanged input features in analog CIM still consume a massive amount of dynamic power (Fig. 7.8.1). From a circuit point of view, the energy consumption of analog CIMs is dominated by full-precision ADCs. Across different DNN applications, the mean of the analog CIM outputs is unpredictable and fluctuating, which requires the ADC to have a high dynamic range to guarantee coverage, introducing a high power overhead.
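The idea of skipping unchanged input features can be sketched as an incremental MVM update: since y = W·x is linear, only the columns whose inputs changed need to contribute to the new output. This is a conceptual plain-Python sketch, not the chip's circuit-level scheme:

```python
# Conceptual sketch of delta-based MVM: reuse the previous output and add
# contributions only from input features that actually changed.
# Not the circuit-level implementation described in the paper.

def delta_mvm(W, x_prev, x_new, y_prev, threshold=0):
    """Incrementally update y = W @ x.

    Exploits linearity: y_new = y_prev + W @ (x_new - x_prev),
    skipping columns whose input change is within `threshold`.
    """
    y = y_prev[:]
    for j, (xp, xn) in enumerate(zip(x_prev, x_new)):
        dx = xn - xp
        if abs(dx) > threshold:          # unchanged features cost nothing
            for i in range(len(y)):
                y[i] += W[i][j] * dx
    return y
```

When most features are unchanged between frames, almost all column updates are skipped, which mirrors how reprocessing only the changed inputs avoids redundant dynamic power.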
In this paper, a 400 kHz high-frequency dual-buck inverter is fabricated for small-scale renewable energy generation applications. A systematic calculation method for the converter's loss distribution is proposed to evaluate the efficiency. This method accounts for the impact of the high switching frequency on the loss distribution, which traditional methods typically neglect. Finally, a 1 kW prototype is built to validate the proposed efficiency estimation method. The efficiency at 1 kW is 96.1%, and the THD is 1.8%. A loss comparison between SiC and Si devices is also given to support the theoretical analysis.
With the rapid development of artificial intelligence (AI) technology and its successful application in various fields, modeling and simulation of complex systems, especially multi-agent modeling and simulation (MAMS), has advanced rapidly. In this study, we first describe the concept, technical advantages, research steps, and research status of MAMS. We then review the development status of hybrid modeling and simulation combining multi-agent and system dynamics approaches, multi-agent reinforcement learning, and large-scale multi-agent modeling and simulation. Lastly, we introduce existing MAMS platforms and comparative studies of them. This work summarizes the current state of MAMS research, helping scholars understand the systematic technology development of MAMS in the AI era, and paves the way for further research on MAMS technology.
The shell condenser is one of the key components of underwater vehicles. To study its thermal performance and to design a more efficient structure, a computational model is developed to simulate condensation inside straight and helical channels. The model combines empirical correlations with a MATLAB-based iterative algorithm, using the vapor quality as an indicator of the degree of condensation. Three calculation models are compared, and the best model is verified by comparing simulated results against available experimental data. Several cases are designed to reveal the effects of various inlet conditions and of the ratio of hydraulic diameter to helix radius (Dh/R). The results show that the inlet temperature and mass flow rate significantly affect the flow and heat transfer during condensation, that the heat transfer capabilities of the helical channels are much better than those of the straight channel, and that both the heat transfer coefficient and the total pressure drop increase as Dh/R decreases. This study may provide a useful reference for performance prediction and structural design of shell condensers in underwater vehicles, as well as a relatively universal prediction model for condensation in channels.
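A minimal sketch of the iterative marching idea: the channel is split into segments, and the vapor quality is reduced segment by segment via a simple energy balance. A toy uniform heat flux stands in for the paper's empirical correlations, and all parameter names and values are illustrative:

```python
# Toy marching calculation of vapor quality along a condensing channel.
# A uniform heat flux per unit length replaces the paper's empirical
# correlations; parameters are illustrative assumptions.

def march_condensation(x_in, m_dot, h_fg, q_per_len, length, n_seg=100):
    """Return outlet vapor quality after marching through the channel.

    x_in      : inlet vapor quality (1.0 = saturated vapor)
    m_dot     : mass flow rate [kg/s]
    h_fg      : latent heat of vaporization [J/kg]
    q_per_len : heat removed per unit length [W/m] (toy constant)
    length    : channel length [m]
    """
    dz = length / n_seg
    x = x_in
    for _ in range(n_seg):
        # Energy balance per segment: m_dot * h_fg * dx = q_per_len * dz
        dx = q_per_len * dz / (m_dot * h_fg)
        x = max(x - dx, 0.0)  # quality cannot drop below 0 (full liquid)
    return x
```

In the full model, `q_per_len` would instead come from a flow-regime-dependent heat-transfer correlation evaluated at each segment's local quality, which is why an iterative march is needed rather than a closed-form result.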