We explore applications of a variety of machine learning techniques in relativistic laser-plasma experiments beyond optimization. Using trained supervised learning models, the beam charge of electrons produced in a laser wakefield accelerator is predicted from the laser wavefront change caused by a deformable mirror. Feature importance analysis with the trained models shows that specific aberrations in the laser wavefront are favored in generating higher beam charges, revealing more information than genetic algorithms or statistical correlation analysis do. The predictive models enable operations beyond merely searching for an optimal beam charge. The quality of the measured data is characterized, and anomaly detection is demonstrated. Model robustness against measurement errors is examined by applying a range of virtual measurement error bars to the experimental data. This work demonstrates a route to machine learning applications in the highly nonlinear problem of relativistic laser-plasma interaction, enabling in-depth data analysis that assists physics interpretation.
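As a hedged illustration of the feature-importance step described above, the sketch below trains a random forest on synthetic shot data and ranks hypothetical wavefront-aberration coefficients by how strongly they drive a toy beam-charge target. The feature names and the data-generating rule are assumptions for illustration, not the experiment's actual measurements or findings.

```python
# Illustrative only: synthetic "shots" with assumed aberration features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_shots = 500
# Hypothetical wavefront-aberration coefficients per shot.
features = ["defocus", "astigmatism", "coma", "spherical"]
X = rng.normal(size=(n_shots, len(features)))
# Toy ground truth: charge depends mostly on two aberrations plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=n_shots)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
# Rank features by impurity-based importance (sums to 1 across features).
ranking = sorted(zip(features, model.feature_importances_),
                 key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:12s} {score:.3f}")
```

With this generative rule the two influential coefficients dominate the ranking; on real data the same ranking call would surface which measured aberrations matter.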
We formulate multifractal models for velocity differences and gradients that describe the full range of length scales in turbulent flow, namely the laminar, dissipation, inertial, and stirring ranges. The models subsume existing models of inertial range turbulence. In the localized ranges of length scales in which the turbulence is only partially developed, we propose multifractal scaling laws with scaling exponents modified from their inertial range values. In local regions, even within a fully developed turbulent flow, the turbulence is neither isotropic nor scale invariant, due to the influence of larger turbulent structures (or their absence). For this reason, turbulence that is not fully developed is an important issue that inertial range studies cannot address. In the ranges of partially developed turbulence, the flow can be far from universal, so that standard inertial range turbulence scaling models become inapplicable; the model proposed here serves as a replacement. Details of the fitting of the parameters for the $\tau_p$ and $\zeta_p$ models in the dissipation range are discussed. Some of the behavior of $\zeta_p$ for larger $p$ remains unexplained. The theories are verified by comparison to high resolution simulation data.
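For reference, the inertial-range scaling that the modified exponents depart from is conventionally stated in terms of velocity structure functions; the standard Kolmogorov notation (not the paper's modified exponents) is:

```latex
% Velocity increment across a separation r
\delta u(r) = u(x + r) - u(x)
% p-th order structure function and its inertial-range scaling exponent
S_p(r) \equiv \langle |\delta u(r)|^p \rangle \sim r^{\zeta_p}
% Kolmogorov 1941 monofractal prediction, which multifractal models correct
\zeta_p^{\mathrm{K41}} = \frac{p}{3}
```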
We present a novel risk measurement model capable of capturing overnight risk, i.e., the risk encountered between the closing time of one day and the opening time of the next. The model captures both overnight and intraday risk. Statistical models of intraday asset returns must separate the market opening period from the remainder of the day, as these periods follow statistical laws with different properties. Here we present results for our two models over these two distinct periods.
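The close-to-close return decomposes exactly into an overnight leg (previous close to open) and an intraday leg (open to close), which is the split the two models above act on. A minimal sketch on synthetic prices, where the price-generating rule is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 6
# Synthetic opens and closes: each open gaps from the prior close,
# each close drifts from that day's open.
open_ = np.empty(n_days)
close = np.empty(n_days)
prev_close = 100.0
for t in range(n_days):
    open_[t] = prev_close * np.exp(0.004 * rng.normal())  # overnight gap
    close[t] = open_[t] * np.exp(0.010 * rng.normal())    # intraday move
    prev_close = close[t]

overnight = np.log(open_[1:] / close[:-1])  # prev close -> open
intraday = np.log(close / open_)            # open -> close
total = np.log(close[1:] / close[:-1])      # close -> close
# The close-to-close log return is exactly the sum of the two legs.
assert np.allclose(total, overnight + intraday[1:])
```

Because the two legs add exactly in log space, the overnight and intraday series can be modeled separately and recombined without approximation.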
Over 120 DT ice layer thermonuclear (TN) ignition experiments in inertial confinement fusion (ICF) were conducted on the National Ignition Facility (NIF) in the last eight years. None of the experiments achieved ignition. In fact, the measured neutron outputs from the experiments were well below what was expected. Although experiments to fine-tune the target designs are the focus of the national ICF program, insightful analysis of the existing data is a pressing need. In highly integrated ignition experiments, it is impossible to vary only one design parameter without perturbing all the other implosion variables. Thus, to determine the nonlinear relationships between the design parameters and performance from the data, a multivariate analysis based on physics models is necessary. To this end, we apply machine learning and deep learning methods to the existing NIF experimental data to uncover the patterns and physics scaling laws in TN ignition. In this study, we focus on the scaling laws between the implosion parameters and neutron yield using different supervised learning methods. Descriptions, comparisons, and contrasts between the methods are presented. Our results show that these models are able to infer a relationship between the observed stagnation conditions and neutron yields. This exploratory study will help build new capabilities to evaluate capsule designs and provide suggestions for new designs.
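As a hedged sketch of how a power-law scaling between implosion observables and neutron yield can be inferred from data, the example below fits exponents by ordinary least squares in log space. The variable names, true exponents, and noise level are synthetic illustrations, not NIF measurements or published scalings.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Hypothetical stagnation observables in arbitrary units.
pressure = rng.uniform(100, 400, n)
temp = rng.uniform(2, 6, n)
true_a, true_b = 2.0, 4.0
# Toy yield obeying Y ~ P^a * T^b with multiplicative noise.
yield_ = pressure**true_a * temp**true_b * np.exp(0.05 * rng.normal(size=n))

# log Y = a log P + b log T + c  ->  linear least squares in log space.
A = np.column_stack([np.log(pressure), np.log(temp), np.ones(n)])
coef, *_ = np.linalg.lstsq(A, np.log(yield_), rcond=None)
a_hat, b_hat, _ = coef
print(f"fitted exponents: a={a_hat:.2f}, b={b_hat:.2f}")
```

The same log-linear reduction is the simplest baseline against which nonlinear supervised models (random forests, deep networks) can be compared.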
We analyze the experimental data from high-intensity laser-plasma interactions using supervised learning techniques. We predict the beam charge of electrons produced in a laser wakefield accelerator given the change in the laser wavefront. Ranking the feature importance reveals that specific wavefronts are favored in generating higher beam charges. These machine learning methods can help assess the quality of the measured data as well as recognize irreproducible data and outliers. To study how measurement uncertainty affects the models, we also include virtual measurement errors in the dataset and examine model robustness. This work demonstrates how machine learning methods can benefit data analysis and physics interpretation in the highly nonlinear problem of laser-plasma interaction.
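The virtual-measurement-error idea above can be sketched as follows: train a regressor on clean synthetic data, then inject Gaussian "measurement" noise of increasing size into the test inputs and watch the score degrade. The data, model choice, and noise scales are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Synthetic nonlinear regression problem standing in for real shot data.
X = rng.normal(size=(600, 4))
y = X[:, 0] ** 2 + X[:, 1] + 0.05 * rng.normal(size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

scores = []
for sigma in (0.0, 0.1, 0.5):
    # Virtual measurement error bars: perturb inputs, keep targets fixed.
    X_noisy = X_te + sigma * rng.normal(size=X_te.shape)
    scores.append(model.score(X_noisy, y_te))
print([round(s, 3) for s in scores])
```

The rate at which the score falls with the injected error bar gives a simple, model-agnostic robustness curve.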