From Forecast to Decisions in Graphical Models: A Natural Gradient Optimization Approach

2021 
Graphical models, and in particular Hidden Markov Models or their continuous-state counterpart, the Kalman filter model, are powerful tools for inference in decision-making contexts. Their parameters are usually estimated with the Expectation Maximization (EM) algorithm, the natural statistical way to train them. When the model is used for decision making, however, it may be more relevant to learn parameters that serve the final decision rather than merely fit the data in a statistical sense. We therefore reformulate the learning of the graphical model as an inference problem whose true concern is the quality of the decisions derived from the model's forecasts. We show that the resulting optimization problem can be cast as an information geometric optimization problem and introduce a natural gradient descent strategy that incorporates additional meta-parameters. We show that our approach is a strong alternative to the celebrated EM algorithm for learning in graphical models: the natural gradient strategy learns parameters that are optimal for the final objective function (the decision) without artificially fitting a distribution that may not correspond to the true one. We support our theoretical findings on the problem of decision making in financial markets and show that the learned model performs better than traditional practitioner methods and is less prone to overfitting.
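To make the idea concrete, below is a minimal sketch (not the authors' implementation) of natural-gradient ascent on the parameters of a one-step Gaussian forecast, where the objective is the quality of the decision taken from the forecast (here a toy trading P&L) rather than the likelihood maximized by EM. The decision rule, utility, data, and learning rate are illustrative assumptions; the Fisher preconditioning uses the exact Fisher information of a Gaussian in (mu, log sigma) coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: returns with a small persistent drift the decision should exploit.
returns = 0.02 + 0.1 * rng.standard_normal(500)

def decision_utility(mu, log_sigma, r):
    """Average P&L of a smooth position sized from the forecast N(mu, sigma^2)."""
    sigma = np.exp(log_sigma)
    position = np.tanh(mu / sigma)          # confidence-scaled position in [-1, 1]
    return np.mean(position * r)            # realized utility of the decision

def utility_gradient(mu, log_sigma, r, eps=1e-5):
    """Finite-difference gradient of the decision utility w.r.t. (mu, log_sigma)."""
    g_mu = (decision_utility(mu + eps, log_sigma, r)
            - decision_utility(mu - eps, log_sigma, r)) / (2 * eps)
    g_ls = (decision_utility(mu, log_sigma + eps, r)
            - decision_utility(mu, log_sigma - eps, r)) / (2 * eps)
    return np.array([g_mu, g_ls])

# Natural-gradient ascent: precondition the Euclidean gradient with the inverse
# Fisher information of N(mu, sigma^2) in (mu, log sigma) coordinates,
# F = diag(1 / sigma^2, 2).
mu, log_sigma = 0.0, np.log(0.2)
learning_rate = 0.05                        # meta-parameter of the descent
for step in range(200):
    grad = utility_gradient(mu, log_sigma, returns)
    sigma2 = np.exp(2 * log_sigma)
    nat_grad = np.array([sigma2 * grad[0], grad[1] / 2.0])   # F^{-1} grad
    mu += learning_rate * nat_grad[0]
    log_sigma += learning_rate * nat_grad[1]

print(f"learned forecast: mu={mu:.4f}, sigma={np.exp(log_sigma):.4f}, "
      f"utility={decision_utility(mu, log_sigma, returns):.4f}")
```

In this sketch the parameters drift toward whatever forecast makes the downstream decision most profitable, not toward the maximum-likelihood fit of the return distribution, which is the distinction the paper draws with EM.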