Scalable Kernelization for Maximum Independent Sets
2018
The most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance, called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms, or by repeatedly kernelizing recursively in the branch-and-reduce paradigm. It is of critical importance for these algorithms that kernelization is fast and returns a small kernel, yet current algorithms are either slow but produce a small kernel, or fast but give a large kernel. We attempt to accomplish both of these goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two techniques that accelerate kernelization further: dependency checking, which prunes reductions that cannot be applied, and reduction tracking, which allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than those of the fastest kernelization methods, while having a similar execution time. Furthermore, our algorithm computes kernels comparable in size to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can be used to accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances.
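
For intuition, the loop below is a minimal, single-threaded sketch of this style of kernelization, not the authors' KaMIS implementation: two elementary reductions (isolated and degree-1 vertices) stand in for the paper's full reduction set, and the `Kernelizer` class, `stop_fraction` parameter, and its default value are illustrative assumptions. It shows the two accelerations named in the abstract: dependency checking re-queues only vertices whose neighborhoods changed, and reduction tracking stops the loop once a pass removes too few of the remaining vertices.

```cpp
#include <cstdio>
#include <queue>
#include <unordered_set>
#include <utility>
#include <vector>

// Minimal sketch under the assumptions stated above (not the paper's code).
struct Kernelizer {
    std::vector<std::unordered_set<int>> adj; // adjacency sets of the remaining graph
    std::vector<bool> alive;                  // vertex still part of the kernel?
    std::vector<bool> in_queue;               // dependency check: pending re-examination?
    std::vector<int> independent_set;         // vertices fixed into the solution

    Kernelizer(int n, const std::vector<std::pair<int, int>>& edges)
        : adj(n), alive(n, true), in_queue(n, false) {
        for (const auto& e : edges) { adj[e.first].insert(e.second); adj[e.second].insert(e.first); }
    }

    int alive_count() const {
        int c = 0;
        for (bool a : alive) c += a;
        return c;
    }

    // Remove v and re-queue its neighbors: their degree changed, so a reduction
    // might now apply to them (this is the dependency check).
    void remove_vertex(int v, std::queue<int>& q) {
        alive[v] = false;
        for (int w : adj[v]) {
            adj[w].erase(v);
            if (alive[w] && !in_queue[w]) { q.push(w); in_queue[w] = true; }
        }
        adj[v].clear();
    }

    // Try the two reductions on v; return true if the graph changed.
    bool try_reduce(int v, std::queue<int>& q) {
        if (!alive[v]) return false;
        if (adj[v].empty()) {            // isolated vertex: always in some maximum IS
            independent_set.push_back(v);
            remove_vertex(v, q);
            return true;
        }
        if (adj[v].size() == 1) {        // pendant vertex: take it, discard its neighbor
            int u = *adj[v].begin();
            independent_set.push_back(v);
            remove_vertex(v, q);
            remove_vertex(u, q);
            return true;
        }
        return false;
    }

    // Apply reductions until a pass removes less than stop_fraction of the
    // remaining vertices (reduction tracking) or no candidate vertices are left.
    void kernelize(double stop_fraction = 0.01) {
        std::queue<int> q;
        for (int v = 0; v < (int)adj.size(); ++v) { q.push(v); in_queue[v] = true; }
        while (!q.empty()) {
            const int before = alive_count();
            const int pass = (int)q.size();
            for (int i = 0; i < pass && !q.empty(); ++i) {
                int v = q.front(); q.pop(); in_queue[v] = false;
                try_reduce(v, q);
            }
            const int removed = before - alive_count();
            if (removed < stop_fraction * before) break; // reductions no longer fruitful
        }
    }
};

int main() {
    // Path graph 0-1-2-3-4: degree-1 reductions alone shrink it to an empty kernel.
    Kernelizer k(5, {{0, 1}, {1, 2}, {2, 3}, {3, 4}});
    k.kernelize();
    std::printf("vertices fixed by reductions: %zu, kernel size: %d\n",
                k.independent_set.size(), k.alive_count());
}
```

On the small path graph in `main`, the two reductions already empty the kernel and fix three vertices into the solution; on harder instances, whatever survives the loop is the kernel that an exact or heuristic solver would then process.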