Generalized Mixability via Entropic Duality
2015
Abstract: Mixability is a property of a loss that characterizes when
constant regret is possible in the game of prediction with expert
advice. We show that a key property of mixability generalizes, and
the $\exp$ and $\log$ operations present in the usual theory are not as
special as one might have thought.
In doing so we introduce a
more general notion of $\Phi$-mixability where $\Phi$ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural
algorithm (the minimizer of a regret bound) which, analogously to the
classical Aggregating Algorithm, is guaranteed constant regret
when used with $\Phi$-mixable losses.
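Concretely, recall that a loss $\ell$ is (classically) $\eta$-mixable if for every distribution $p \in \Delta_n$ over $n$ experts and all predictions $a_1, \dots, a_n$ there is a single prediction $a$ such that, for every outcome $y$,
\[
\ell(a, y) \;\le\; -\frac{1}{\eta} \log \sum_{i=1}^{n} p_i \, e^{-\eta \, \ell(a_i, y)} .
\]
The Gibbs variational identity removes the $\exp$ and $\log$, isolating the (scaled, negative) Shannon entropy through its Bregman divergence $\tfrac{1}{\eta}\mathrm{KL}$:
\[
-\frac{1}{\eta} \log \sum_{i} p_i \, e^{-\eta x_i} \;=\; \min_{q \in \Delta_n} \Big( \langle q, x \rangle + \tfrac{1}{\eta} \, \mathrm{KL}(q \,\|\, p) \Big) ,
\]
and substituting a general convex entropy $\Phi$ with Bregman divergence $D_\Phi$ gives the natural form of the $\Phi$-mixability condition (a sketch suggested by the classical identity, with notation ours rather than quoted from the body of the paper):
\[
\ell(a, y) \;\le\; \min_{q \in \Delta_n} \Big( \langle q, \ell(a_\cdot, y) \rangle + D_\Phi(q, p) \Big) .
\]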
We characterize which $\Phi$ have non-trivial $\Phi$-mixable losses and
relate $\Phi$-mixability and its associated Aggregating
Algorithm to potential-based methods, a Blackwell-like
condition, mirror descent, and risk measures from finance.
We also define a notion of ``dominance'' between different
entropies in terms of the bounds they guarantee, and
conjecture that classical mixability gives optimal bounds, for which we
provide some supporting empirical evidence.
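As a minimal illustration of the algorithmic side, the following Python sketch implements the classical Aggregating Algorithm specialized to log loss, which is $1$-mixable and for which the mixability bound is attained exactly by the weighted mixture of the experts' predicted distributions; the function name and the log-loss specialization are illustrative choices here, and the generalized algorithm of the paper replaces this exponential-weights update with one driven by a general entropy $\Phi$.

\begin{verbatim}
import numpy as np

def aggregating_algorithm_log_loss(expert_preds, outcomes):
    """Classical Aggregating Algorithm for log loss (eta = 1).

    expert_preds: array of shape (T, n, k); expert_preds[t, i] is
                  expert i's probability vector over k outcomes.
    outcomes:     length-T array of observed outcome indices.
    Returns the learner's cumulative log loss.
    """
    T, n, k = expert_preds.shape
    log_w = np.zeros(n)              # log w_i = -(cumulative loss of i)
    total = 0.0
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        p = w / w.sum()              # normalized weights over experts
        # For log loss the mixability inequality holds with equality
        # at the mixture prediction sum_i p_i q_i, so predict it.
        pred = p @ expert_preds[t]
        total += -np.log(pred[outcomes[t]])
        # Exponential-weights update: log w_i -= eta * loss_i (eta = 1),
        # where the log loss of expert i is -log q_i(y_t).
        log_w += np.log(expert_preds[t][:, outcomes[t]])
    return total

# Example: 3 experts predicting over 2 outcomes for 100 rounds.
rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(2), size=(100, 3))
ys = rng.integers(0, 2, size=100)
print(aggregating_algorithm_log_loss(preds, ys))
\end{verbatim}

Because log loss is $1$-mixable, this sketch inherits the classical constant-regret guarantee: its cumulative loss never exceeds that of the best expert by more than $\log n$, independently of the number of rounds.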