Integrating Profile Caching into the HotSpot Multi-Tier Compilation System

2017 
The Java® HotSpot Virtual Machine includes a multi-tier compilation system that may invoke a compiler at any time. Lower tiers instrument the program to gather information for the highly optimizing compiler at the top tier, and this compiler bases its optimizations on these profiles. But if the assumptions made by the top-tier compiler prove wrong (e.g., because the profile does not cover all execution paths), the method is deoptimized: the code generated for the method is discarded and the method is executed at Tier 0 again. Eventually, after profile information has been gathered, the method is recompiled at the top tier (this time with less-optimistic assumptions). Users of the system experience such deoptimization cycles (discard, profile, compile) as performance fluctuations and potentially as variations in the system's responsiveness. Unpredictable performance, however, is problematic in many time-critical environments, even if the system is not a hard real-time system. A profile cache captures the profile of earlier executions. When the application is executed again, with a fresh VM, the top-tier (highly optimizing) compiler can base its decisions on a profile that reflects prior executions and not just the recent history observed during the current run. In this paper we report on the design and effectiveness of a profile cache for Java applications, implemented and evaluated as part of the multi-tier compilation system of the HotSpot Java Virtual Machine in OpenJDK version 9. For a set of benchmarks, profile caching reduces the number of (re)compilations by up to 23% and the number of deoptimizations by up to 90%, and thus improves performance predictability.
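To make the idea concrete, the sketch below shows one way a profile cache could be modeled: per-method profile records of the kind gathered by the lower tiers are keyed by method signature, persisted at shutdown, and reloaded at startup, so that a fresh VM can hand prior-run profiles to the top-tier compiler. All class and method names here (ProfileCache, MethodProfile, and their members) are hypothetical illustrations; the cache described in the paper is implemented inside HotSpot's compilation system, not in Java application code.

    // Illustrative sketch only: these names are hypothetical and do not appear
    // in the HotSpot sources; the real cache lives inside the VM.

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    /** Per-method profile data of the kind gathered by the lower compilation tiers. */
    final class MethodProfile implements Serializable {
        final String methodSignature;   // e.g. "java/util/ArrayList.get(I)Ljava/lang/Object;"
        final long invocationCount;     // how often the method was invoked
        final long backEdgeCount;       // loop back-edge count (loop hotness)
        final Map<Integer, String> receiverTypesAtCallSites; // bci -> observed receiver type

        MethodProfile(String sig, long invocations, long backEdges,
                      Map<Integer, String> receiverTypes) {
            this.methodSignature = sig;
            this.invocationCount = invocations;
            this.backEdgeCount = backEdges;
            this.receiverTypesAtCallSites = new HashMap<>(receiverTypes);
        }
    }

    /** Keeps profiles across VM runs so the top-tier compiler can reuse them. */
    final class ProfileCache {
        private final Map<String, MethodProfile> profiles = new HashMap<>();

        /** Record (or overwrite) the profile gathered during the current run. */
        void update(MethodProfile p) {
            profiles.put(p.methodSignature, p);
        }

        /** A hit lets the top-tier compiler start from prior-run data instead of a cold profile. */
        Optional<MethodProfile> lookup(String methodSignature) {
            return Optional.ofNullable(profiles.get(methodSignature));
        }

        /** Persist the cache, e.g. at VM shutdown. */
        void save(Path file) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(file))) {
                out.writeObject(new HashMap<>(profiles));
            }
        }

        /** Load a cache written by an earlier run, if one exists. */
        @SuppressWarnings("unchecked")
        static ProfileCache load(Path file) throws IOException, ClassNotFoundException {
            ProfileCache cache = new ProfileCache();
            if (Files.exists(file)) {
                try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(file))) {
                    cache.profiles.putAll((Map<String, MethodProfile>) in.readObject());
                }
            }
            return cache;
        }
    }

In this reading, a cache hit on a later run lets the optimizing compiler start with less-optimistic assumptions up front, which is what reduces the discard/profile/recompile cycles reported in the evaluation.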