A Generalization of Self-Improving Algorithms

2020 
Ailon et al.~[SICOMP'11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances $x_1,\ldots,x_n$ follow some unknown \emph{product distribution}. That is, $x_i$ comes from a fixed unknown distribution $\mathcal{D}_i$, and the $x_i$'s are drawn independently. After spending $O(n^{1+\varepsilon})$ time in a learning phase, the subsequent expected running time is $O((n+ H)/\varepsilon)$, where $H \in \{H_\mathrm{S},H_\mathrm{DT}\}$, and $H_\mathrm{S}$ and $H_\mathrm{DT}$ are the entropies of the distributions of the sorting and DT outputs, respectively. In this paper, we allow dependence among the $x_i$'s under the \emph{group product distribution}. There is a hidden partition of $[1,n]$ into groups; the $x_i$'s in the $k$-th group are fixed unknown functions of the same hidden variable $u_k$; and the $u_k$'s are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map $u_k$ to the $x_i$'s are well-behaved. After an $O(\mathrm{poly}(n))$-time training phase, we achieve $O(n + H_\mathrm{S})$ and $O(n\alpha(n) + H_\mathrm{DT})$ expected running times for sorting and DT, respectively, where $\alpha(\cdot)$ is the inverse Ackermann function.
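To make the two-phase structure concrete, here is a minimal, hedged sketch of the self-improving sorting idea for the product-distribution setting: a training phase learns a fixed pivot list from sample instances, and the operation phase buckets each $x_i$ by pivot interval before sorting within buckets. All names (`train_sorter`, `V`) are illustrative; the actual algorithm of Ailon et al. additionally builds a near-optimal search tree per coordinate $i$, which is what yields the entropy term $H_\mathrm{S}$.

```python
import bisect

def train_sorter(sample_instances):
    """Training phase (simplified): build a pivot list V from samples.

    The paper takes V as roughly every (log n)-th element of a merged
    sample, so each bucket receives O(1) elements in expectation; here
    we just sort one pooled sample and take ~n evenly spaced pivots.
    """
    pooled = sorted(x for inst in sample_instances for x in inst)
    n = len(sample_instances[0])
    step = max(1, len(pooled) // n)
    return pooled[::step]  # sorted pivot list V

def self_improving_sort(x, V):
    """Operation phase: bucket each x_i by its pivot interval, then
    sort each (small) bucket and concatenate.

    With a learned per-coordinate search structure in place of the
    generic binary search, the bucketing cost drops to ~H_S.
    """
    buckets = [[] for _ in range(len(V) + 1)]
    for xi in x:
        buckets[bisect.bisect_left(V, xi)].append(xi)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # each bucket is O(1) size in expectation
    return out
```

Because the buckets partition the real line into increasing intervals, concatenating the sorted buckets yields a globally sorted output; the self-improvement comes entirely from the learned structure making the bucketing step cheap on typical inputs.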