The ever-increasing complexity of modern power grids makes them vulnerable to cyber and/or physical attacks. To protect them, accurate attack detection is essential. A challenging scenario is that a localized attack has occurred on a specific transmission line but only a small number of transmission lines elsewhere can be monitored. That is, full state observation of the whole power grid is not feasible, so attack detection and state estimation need to be done with only limited, partial state observations. We articulate a machine-learning framework to address this problem, where the need to handle sequential time-series data with dynamical memory and to avoid vanishing gradients led us to choose the long short-term memory (LSTM) architecture. Leveraging the inherent capability of LSTM to handle sequential data and capture temporal dependencies, we demonstrate, using three benchmark power-grid networks, that the complete dynamical state of the whole power grid can be faithfully reconstructed and the attack can be accurately localized from limited, partial state observations, even in the presence of noise. The performance improves as more observations become available. Further justification for using the LSTM is provided by comparing its performance with that of alternative machine-learning architectures such as feedforward neural networks and random forests. Despite the extensive literature on applications of LSTM to power grids, to our knowledge, the problem of locating an attack and estimating the state from limited observations had not been addressed before our work. The method can potentially be generalized to a broad range of complex cyber-physical systems. Published by the American Physical Society 2025
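As an illustrative sketch (not the paper's implementation), the partial observations must first be sliced into the (samples, timesteps, features) layout that LSTM layers in common deep-learning libraries expect; the window length and the four monitored signals below are hypothetical:

```python
import numpy as np

def make_sequences(obs, window):
    """Slice a multivariate observation series of shape (T, n_obs) into
    overlapping windows of shape (T - window + 1, window, n_obs), i.e. the
    (samples, timesteps, features) layout expected by LSTM layers."""
    obs = np.asarray(obs)
    T = obs.shape[0]
    # index matrix: row i selects time steps i, i+1, ..., i+window-1
    idx = np.arange(window)[None, :] + np.arange(T - window + 1)[:, None]
    return obs[idx]

# e.g., 500 time steps of 4 hypothetical monitored line signals, windows of 20
obs = np.random.default_rng(0).normal(size=(500, 4))
X = make_sequences(obs, 20)
```

Each row of `X` is then one training sequence; the corresponding target would be the full grid state (or attack label) at the window's final time step.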
According to the official report, the first case of COVID-19 and the first death in the United States occurred on January 20 and February 29, 2020, respectively. On April 21, California reported that the first death in the state occurred on February 6, implying that community spreading of COVID-19 might have started earlier than previously thought. Exactly what is time zero, i.e., when did COVID-19 emerge and begin to spread in the U.S. and other countries? We develop a comprehensive predictive modeling framework to address this question. Using available data of confirmed infections to obtain the optimal values of the key parameters, we validate the model and demonstrate its predictive power. We then carry out an inverse inference analysis to determine time zero for 10 representative states in the U.S., plus New York City, the United Kingdom, Italy, and Spain. The main finding is that, in both the U.S. and Europe, COVID-19 started around New Year's Day. Received 28 August 2020; accepted 28 January 2021. DOI: https://doi.org/10.1103/PhysRevResearch.3.013155. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license.
Nonlinear tracking control enabling a dynamical system to track a desired trajectory is fundamental to robotics, serving a wide range of civil and defense applications. In control engineering, designing tracking control requires complete knowledge of the system model and equations. We develop a model-free, machine-learning framework to control a two-arm robotic manipulator using only partially observed states, where the controller is realized by reservoir computing. Stochastic input is exploited for training; it consists of the observed partial state vector as the first component and its immediate future as the second, so that the neural machine regards the latter as the future state of the former. In the testing (deployment) phase, the immediate-future component is replaced by the desired observational vector from the reference trajectory. We demonstrate the effectiveness of the control framework using a variety of periodic and chaotic signals, and establish its robustness against measurement noise, disturbances, and uncertainties.
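The training-and-deployment scheme described above can be sketched with a toy scalar plant standing in for the robot arm; the plant equation, reservoir size, and reference trajectory below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(x, u):
    # toy scalar stand-in for the robot-arm dynamics (hypothetical)
    return 0.9 * x + 0.5 * np.tanh(u) + 0.1 * np.sin(x)

# training phase: drive the plant with random (stochastic) control, record states
T = 3000
u_train = rng.uniform(-2.0, 2.0, T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = plant(x[t], u_train[t])

# echo-state reservoir; the input pairs the current observation with the next one
N = 200
W_in = rng.uniform(-0.5, 0.5, (N, 2))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.5 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.5

R = np.zeros((T, N))
r = np.zeros(N)
for t in range(T):
    r = np.tanh(W @ r + W_in @ np.array([x[t], x[t + 1]]))
    R[t] = r

# ridge readout: reservoir state -> the control that caused the observed transition
wash = 100
W_out = np.linalg.solve(R[wash:].T @ R[wash:] + 1e-4 * np.eye(N),
                        R[wash:].T @ u_train[wash:])

# deployment: the "next observation" slot is replaced by the reference trajectory
steps = 500
x_ref = np.sin(0.05 * np.arange(steps + 1))
xc, r, err = 0.0, np.zeros(N), []
for t in range(steps):
    r = np.tanh(W @ r + W_in @ np.array([xc, x_ref[t + 1]]))
    u = r @ W_out                  # reservoir emits the control signal
    xc = plant(xc, u)
    err.append(abs(xc - x_ref[t + 1]))
```

The key point mirrored from the abstract is that nothing about the plant model enters the controller; the readout only ever sees observation pairs and the controls that produced them.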
Recent research on the Atlantic Meridional Overturning Circulation (AMOC) has raised concern about its potential collapse through a tipping point, due to the climate-change-induced increase in freshwater input into the North Atlantic. The predicted time window of collapse is centered about the middle of the century, and the earliest possible start is approximately two years from now. More generally, anticipating a tipping point at which a system transitions from one stable steady state to another is relevant to a broad range of fields. We develop a machine-learning approach to predicting tipping in noisy dynamical systems with a time-varying parameter and test it on a number of systems, including the AMOC, ecological networks, an electrical power system, and a climate model. For the AMOC, our prediction based on simulated fingerprint data and real sea-surface-temperature data places the time window of a potential collapse between the years 2040 and 2065.
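For intuition, proximity to a tipping point can also be probed with a classical early-warning indicator (rising lag-1 autocorrelation from critical slowing down), which is not the paper's machine-learning method; the saddle-node normal form and noise level below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# saddle-node normal form dx = (p + x - x^3) dt + noise; the parameter p drifts
# slowly toward the tipping point at p* = 2 / (3*sqrt(3)) ~ 0.385 but stops short
dt, T = 0.01, 60000
p = np.linspace(-1.0, 0.3, T)
x = np.empty(T)
x[0] = -1.0
for t in range(T - 1):
    x[t + 1] = (x[t] + (p[t] + x[t] - x[t] ** 3) * dt
                + 0.05 * np.sqrt(dt) * rng.normal())

def lag1_ac(w):
    """Lag-1 autocorrelation after removing the slow linear trend."""
    n = np.arange(len(w))
    w = w - np.polyval(np.polyfit(n, w, 1), n)
    return float(w[:-1] @ w[1:] / (w @ w))

early = lag1_ac(x[5000:10000])    # far from the tipping point: fast recovery
late = lag1_ac(x[50000:55000])    # close to it: slower recovery, higher AC
```

As the stable state loses resilience, recovery from noise-driven perturbations slows and the autocorrelation rises, which is the statistical fingerprint that prediction schemes (classical or learned) exploit.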
We uncover a phenomenon in coupled nonlinear networks with a symmetry: as a bifurcation parameter passes through a critical value, synchronization among one subset of nodes can deteriorate abruptly while, simultaneously, perfect synchronization emerges suddenly among a different subset of nodes that are not directly connected. This synchronization metamorphosis leads to an explosive transition to remote synchronization. The finding demonstrates that an explosive onset of synchrony and remote synchronization, two phenomena that have been studied separately, can arise in the same system due to symmetry, providing further evidence that the interplay between nonlinear dynamics and symmetry can lead to surprising phenomena in physical systems.
Dataset of time series from the chaotic Chua, Lorenz, Lorenz96, Mackey-Glass (tau = 17), Mackey-Glass (tau = 30), Rössler, and Sprott systems. If it helps, please cite us: Zhai, Z.-M., Moradi, M., Kong, L.-W., et al. Model-free tracking control of complex dynamical trajectories with machine learning. Nat. Commun. 14, 5698 (2023). https://doi.org/10.1038/s41467-023-41379-3. Thank you!
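As a sketch of how comparable time series can be regenerated (the dataset's actual integration parameters are not stated here, so the step size and length below are assumptions), the Lorenz system can be integrated with a basic fourth-order Runge-Kutta scheme:

```python
import numpy as np

def lorenz_deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations with the standard parameters."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate_rk4(f, s0, dt, n_steps):
    """Fixed-step fourth-order Runge-Kutta integration; returns (n_steps, 3)."""
    traj = np.empty((n_steps, 3))
    s = np.asarray(s0, dtype=float)
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = s
    return traj

dt = 0.01
transient = integrate_rk4(lorenz_deriv, [1.0, 1.0, 1.0], dt, 2000)  # discarded
traj = integrate_rk4(lorenz_deriv, transient[-1], dt, 10000)        # on-attractor
```

Discarding the transient first ensures the recorded trajectory lies on the attractor, which matters for training and benchmarking prediction or control schemes.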
Detecting a weak physical signal immersed in overwhelming noise entails separating the two, a task for which machine learning is naturally suited. In principle, such a signal is generated by a nonlinear dynamical system of intrinsically high dimension for which a mathematical model is not available, rendering traditional linear or nonlinear state-estimation methods that require an accurate system model (e.g., extended Kalman filters) unsuitable. We exploit the architectures of reservoir computing and feed-forward neural networks (FNNs) with time-delayed inputs to solve the weak-signal-detection problem. As a prototypical example, we apply the machine-learning schemes to navigation based on Earth's magnetic anomaly field. In particular, the time series are collected from the interior of the cockpit of a flying aircraft during different maneuvering phases, where the overwhelmingly strong noise background results from other components of Earth's magnetic field and from the fields generated by the electronic devices in the cockpit. We demonstrate that, when combined with the traditional Tolles-Lawson model of aircraft magnetic interference, the articulated machine-learning schemes are effective for accurately detecting the weak anomaly field from the noisy time series. The schemes can be applied to detecting weak signals in other domains of science and engineering.
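A minimal sketch of the time-delayed input construction for the FNN, assuming a scalar measured series and a hypothetical helper name:

```python
import numpy as np

def delayed_inputs(series, n_delays, step=1):
    """Build the time-delayed input matrix for a feed-forward network:
    row t of the output is [s(t), s(t - step), ..., s(t - (n_delays-1)*step)]."""
    s = np.asarray(series)
    T = len(s) - (n_delays - 1) * step
    cols = [s[(n_delays - 1 - k) * step:(n_delays - 1 - k) * step + T]
            for k in range(n_delays)]
    return np.column_stack(cols)

# toy example: a ramp 0..5 with three delays per input vector
X = delayed_inputs(np.arange(6.0), n_delays=3)
```

Feeding such delay vectors lets a memoryless feed-forward network see a short history of the measurement, which is how temporal structure in the noisy magnetometer signal becomes accessible to it.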
The benefits of noise to applications of nonlinear dynamical systems, through mechanisms such as stochastic and coherence resonance, have been well documented. Recent years have witnessed a growth of research exploiting machine learning to predict nonlinear dynamical systems, and it is known that noise can act as a regularizer to improve the training performance of machine learning. Using reservoir computing as a paradigm, we find that injecting noise into the training data can induce a resonance phenomenon with significant benefits to both short-term prediction of the state variables and long-term prediction of the attractor. The optimal noise level, leading to the best performance in terms of prediction accuracy, stability, and horizon, can be identified by treating the noise amplitude as one of the hyperparameters for optimization. The resonance phenomenon is demonstrated using two prototypical high-dimensional chaotic systems.
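The idea of treating the training-noise amplitude as a tunable hyperparameter can be sketched with a small echo-state network and a logistic-map signal; all sizes and noise levels below are illustrative assumptions, and no resonance curve from the paper is reproduced:

```python
import numpy as np

# chaotic signal: the logistic map (a simple stand-in for the high-dimensional
# systems used in the paper)
T = 2000
s = np.empty(T)
s[0] = 0.3
for t in range(T - 1):
    s[t + 1] = 4.0 * s[t] * (1.0 - s[t])

def one_step_error(noise_amp, train_len=1500, N=100, ridge=1e-6):
    """Train an echo-state network for one-step prediction with Gaussian noise
    of amplitude noise_amp injected into the training input; return the mean
    squared one-step error on the clean held-out segment."""
    rng = np.random.default_rng(7)        # same reservoir for every noise level
    W_in = rng.uniform(-1.0, 1.0, N)
    W = rng.normal(0.0, 1.0, (N, N))
    W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.8
    u = s[:train_len] + noise_amp * rng.normal(size=train_len)  # noisy input
    R = np.zeros((train_len - 1, N))
    r = np.zeros(N)
    for t in range(train_len - 1):
        r = np.tanh(W @ r + W_in * u[t])
        R[t] = r
    W_out = np.linalg.solve(R.T @ R + ridge * np.eye(N), R.T @ s[1:train_len])
    errs = []
    for t in range(train_len - 1, T - 1):             # clean validation segment
        r = np.tanh(W @ r + W_in * s[t])
        errs.append((r @ W_out - s[t + 1]) ** 2)
    return float(np.mean(errs))

# treat the noise amplitude as a hyperparameter and scan it
noise_levels = [0.0, 1e-4, 1e-3, 1e-2, 1e-1]
errors = [one_step_error(a) for a in noise_levels]
best_level = noise_levels[int(np.argmin(errors))]
```

Fixing the random seed inside the function keeps the reservoir identical across noise levels, so the scan isolates the effect of the injected noise alone.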