Design of a Virtual Machine Cache Side-Channel Attack Detection Method Based on the MINE Algorithm
Citations: 0, References: 12, Related Papers: 10
Keywords: Algorithm design, Cache-oblivious algorithm
Caching strategies can improve overall system performance by allowing a fast processor and a slow memory to work at the same pace. One important factor in caching is the replacement policy. Advances in technology have produced a large number of techniques and algorithms for improving cache performance. In this paper, different cache optimization techniques and replacement algorithms are analyzed, and a comprehensive statistical comparison of cache optimization techniques is presented. To the best of our knowledge, there is no numerical measure that rates a specific cache optimization technique; we attempt to provide such a figure. Through statistical comparison we determine which technique is the most consistent among all of them. For this purpose we calculate the mean and the coefficient of variation (CV); the CV indicates how consistent a technique is. Comparative analysis of the different techniques shows that the victim cache is the most consistent technique of all.
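The consistency measure described above can be sketched in a few lines. The hit-rate samples below are purely illustrative placeholders, not figures from the paper; the point is only how the mean and CV are computed and compared:

```python
import statistics

# Hypothetical hit-rate samples (percent) for three cache optimization
# techniques across several benchmark runs; the numbers are illustrative,
# not taken from the paper.
samples = {
    "victim cache":  [88.0, 89.5, 88.7, 89.1],
    "prefetching":   [82.0, 91.0, 78.5, 94.0],
    "larger blocks": [85.0, 87.5, 80.0, 90.5],
}

for name, xs in samples.items():
    mean = statistics.mean(xs)
    # coefficient of variation: stdev relative to the mean; lower = more consistent
    cv = statistics.stdev(xs) / mean
    print(f"{name:13s} mean={mean:5.2f}  CV={cv:.4f}")
```

A lower CV means the technique's benefit varies less across runs, which is the sense in which the paper calls a technique "consistent".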
    Citations (9)
    Recent studies have shown that in highly associative caches, the performance gap between the least recently used (LRU) and the theoretical optimal replacement algorithms is large, suggesting that alternative replacement algorithms can improve the performance of the cache. One main reason for this performance gap is that in the LRU replacement algorithm, a line is only evicted after it becomes the LRU line, long after its last access/touch, while unnecessarily occupying the cache space for a long time. This paper proposes a new approach to deal with the problem: counter-based L2 cache replacement. In this approach, each line in the L2 cache is augmented with an event counter that is incremented when an event of interest, such as a cache access to the same set, occurs. When the counter exceeds a threshold, the line "expires", and becomes evictable. When expired lines are evicted early from the cache, they make extra space for lines that may be more useful, reducing the number of capacity and conflict misses. Each line's threshold is unique and is dynamically learned and stored in a small 40-Kbyte counter prediction table. We propose two new replacement algorithms: access interval predictor (AIP) and live-time predictor (LvP). AIP and LvP speed up 10 (out of 21) SPEC2000 benchmarks by up to 40% and 11% on average.
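The counter-based expiration idea can be sketched for a single cache set. This is a simplified illustration, not the paper's implementation: AIP and LvP learn a per-line threshold dynamically via a counter prediction table, whereas the sketch below assumes one fixed threshold, and the class and method names are invented for the example:

```python
from collections import OrderedDict

class CounterBasedSet:
    """One set of a set-associative cache with counter-based expiration.
    Simplified sketch: every access to the set ages the other lines, a line
    whose counter exceeds the threshold "expires" and becomes the preferred
    eviction victim; otherwise we fall back to plain LRU."""

    def __init__(self, ways, threshold):
        self.ways = ways
        self.threshold = threshold
        self.lines = OrderedDict()  # tag -> event counter, in LRU order

    def access(self, tag):
        if tag in self.lines:                 # hit: reset this line's counter
            self.lines[tag] = 0
            self.lines.move_to_end(tag)
            hit = True
        else:                                 # miss: evict an expired line first
            if len(self.lines) >= self.ways:
                expired = [t for t, c in self.lines.items() if c >= self.threshold]
                victim = expired[0] if expired else next(iter(self.lines))  # else LRU
                del self.lines[victim]
            self.lines[tag] = 0
            hit = False
        # an event of interest occurred in this set: age every other line
        for t in self.lines:
            if t != tag:
                self.lines[t] += 1
        return hit
```

Evicting expired lines early frees space that plain LRU would have kept occupied until the line drifted to the LRU position.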
    Citations (51)
    We study matrix-matrix multiplication of two matrices, $A$ and $B$, each of size $n \times n$. This operation results in a matrix $C$ of size $n\times n$. Our goal is to produce $C$ as efficiently as possible given a cache: a 1-D limited set of data values that we can work with to perform elementary operations (additions, multiplications, etc.). That is, we attempt to reuse the maximum amount of data from $A$, $B$ and $C$ during our computation (or equivalently, utilize data in the fast-access cache as often as possible). Firstly, we introduce the matrix-matrix multiplication algorithm. Secondly, we present a standard two-memory model to simulate the architecture of a computer, and we explain the LRU (Least Recently Used) Cache policy (which is standard in most computers). Thirdly, we introduce a basic model Cache Simulator, which possesses an $\mathcal{O}(M)$ time complexity (meaning we are limited to small $M$ values). Then we discuss and model the LFU (Least Frequently Used) Cache policy and the explicit control cache policy. Finally, we introduce the main result of this paper, the $\mathcal{O}(1)$ Cache Simulator, and use it to compare, experimentally, the savings of time, energy, and communication incurred from the ideal cache-efficient algorithm for matrix-matrix multiplication. The Cache Simulator simulates the amount of data movement that occurs between the main memory and the cache of the computer. One of the findings of this project is that, in some cases, there is a significant discrepancy in communication values between an LRU cache algorithm and explicit cache control. We propose to alleviate this problem by ``tricking'' the LRU cache algorithm by updating the timestamp of the data we want to keep in cache (namely entries of matrix $C$). This enables us to have the benefits of an explicit cache policy while being constrained by the LRU paradigm (realistic policy on a CPU).
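The "tricking" idea can be illustrated with a minimal LRU simulator. This is a sketch under simplifying assumptions (one entry per cache slot, no blocking); the function and class names are invented for the example. Periodically re-accessing the $C$ entry being accumulated refreshes its LRU timestamp so the policy keeps it resident, mimicking explicit cache control:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache simulator that counts misses (data movements)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.misses = 0

    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)        # refresh timestamp: most recent
        else:
            self.misses += 1
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict least recently used
            self.data[key] = True

def dot_row_col(cache, i, j, n):
    """Accumulate C[i][j] = sum_k A[i][k] * B[k][j], streaming over A and B
    while repeatedly touching the C entry so LRU never evicts it."""
    cache.access(("C", i, j))
    for k in range(n):
        cache.access(("A", i, k))
        cache.access(("B", k, j))
        cache.access(("C", i, j))             # the timestamp "trick"
```

With a capacity of only three entries, the `C` entry survives the whole dot product even though the streamed `A` and `B` entries continually displace each other.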
    Citations (0)
    The cache oblivious model is a simple and elegant model to design algorithms that perform well in hierarchical memory models ubiquitous on current systems. This model was first formulated in [321] and has since been a topic of intense research. Analyzing and designing algorithms and data structures in this model involves not only an asymptotic analysis of the number of steps executed in terms of the input size, but also the movement of data optimally among the different levels of the memory hierarchy. This chapter is aimed as an introduction to the “ideal-cache” model of [321] and techniques used to design cache oblivious algorithms. The chapter also presents some experimental insights and results.
    Citations (31)
Processor speed is much faster than memory speed; cache memory is used to bridge this gap. This paper proposes a preeminent pair of replacement algorithms for the Level 1 cache (L1) and Level 2 cache (L2), respectively, for the matrix multiplication (MM) application. The access patterns of L1 and L2 differ: when the CPU does not find the desired data in L1, it goes to L2. Thus a replacement algorithm that works efficiently for L1 may not be efficient for L2. Using the reference string of MM, the paper analyzes the behavior of various existing replacement algorithms at L1 and L2, respectively. The replacement algorithms considered are least recently used (LRU), least frequently used (LFU), and first in first out (FIFO). The paper also proposes new replacement algorithms for L1 (NEW ALGO1) and L2 (NEW ALGO2), respectively, for the same application. Analysis shows that applying these algorithms at L1 and L2 respectively reduces miss rates considerably.
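The kind of per-policy analysis described above can be reproduced with a small simulator. This sketch covers only the three classical policies named in the abstract (not the paper's NEW ALGO1/NEW ALGO2, which it does not specify); the reference string in the usage note is an arbitrary illustration, not the MM reference string:

```python
from collections import Counter

def misses(policy, refs, capacity):
    """Miss count for a reference string under "FIFO", "LRU", or "LFU".
    `cache` is kept in eviction order: the front is the next victim for
    FIFO and LRU; LFU picks the least frequently used entry instead."""
    cache, freq, m = [], Counter(), 0
    for r in refs:
        freq[r] += 1
        if r in cache:
            if policy == "LRU":
                cache.remove(r)
                cache.append(r)               # move to the most-recent end
            continue
        m += 1
        if len(cache) >= capacity:
            if policy == "LFU":
                victim = min(cache, key=lambda x: freq[x])
            else:                             # FIFO and LRU evict the front
                victim = cache[0]
            cache.remove(victim)
        cache.append(r)
    return m
```

On the string `[1, 2, 3, 1, 4, 1]` with three slots, FIFO incurs 5 misses while LRU incurs 4, showing how the same reference string can rank policies differently.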
Cache-oblivious algorithms are designed to benefit from any underlying cache hierarchy without knowing the exact structure of the cache. These algorithms are cache oblivious in the sense that no variables depend on hardware parameters such as cache size or cache-line length. Cache memory must be utilized optimally to reach the full performance potential of the hardware. We present a miss-rate comparison of cache-oblivious matrix multiplication using the sequential-access recursive technique against a naive multiplication program. Varying the cache size, the respective miss rates in the L1 cache are measured and compared. The miss rate in the L1 cache for the cache-oblivious matrix multiplication program using the sequential-access recursive technique is found to be lower than for the naive matrix multiplication program.
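The recursive technique can be sketched as a divide-and-conquer multiply that never mentions a cache parameter; at some recursion depth the subproblems fit whatever cache exists, which is the cache-oblivious property. A minimal sketch, assuming square matrices whose side is a power of two (the function name is invented for the example):

```python
def rec_mm(A, B, C, ai, aj, bi, bj, ci, cj, n):
    """Cache-oblivious matrix multiply:
    C[ci:ci+n][cj:cj+n] += A[ai:ai+n][aj:aj+n] @ B[bi:bi+n][bj:bj+n].
    No cache size or line length appears anywhere: recursion alone
    produces subproblems small enough to fit any level of the hierarchy."""
    if n == 1:
        C[ci][cj] += A[ai][aj] * B[bi][bj]
        return
    h = n // 2
    # C block (di, dj) accumulates A block (di, dk) times B block (dk, dj)
    for di in (0, h):
        for dj in (0, h):
            for dk in (0, h):
                rec_mm(A, B, C,
                       ai + di, aj + dk,
                       bi + dk, bj + dj,
                       ci + di, cj + dj, h)
```

For example, multiplying `[[1, 2], [3, 4]]` by `[[5, 6], [7, 8]]` into a zeroed `C` yields `[[19, 22], [43, 50]]`.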
In this paper we study the performance of a family of cache replacement algorithms. The cache is decomposed into lists. An item enters the cache via the first list and jumps to the next list whenever a hit on it occurs. The classical policies FIFO, RANDOM, CLIMB and their hybrids are obtained as special cases. We present explicit expressions for the cache content distribution and miss probability under the IRM model. We develop an algorithm, with a time complexity that is polynomial in the cache size and linear in the number of items, to compute the exact miss probability. We also introduce lower and upper bounds on the miss probability that can be computed in a time that is linear in the cache size times the number of items.
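The list-based family can be sketched as follows. This is a simplified illustration of the structure described above, not the paper's analytical model: the class name and the choice of random slots on insertion and promotion are assumptions made for the example. With single-slot lists the behaviour resembles CLIMB; with a single list it degenerates to a RANDOM-replacement cache:

```python
import random

class ListCache:
    """Sketch of a list-structured replacement policy: the cache is a
    sequence of lists, a new item enters via the first list (evicting a
    random occupant there), and a hit moves the item into the next list
    by swapping it with a random occupant of that list."""

    def __init__(self, list_sizes, rng=None):
        self.lists = [[None] * m for m in list_sizes]
        self.rng = rng or random.Random(0)

    def access(self, item):
        for i, lst in enumerate(self.lists):
            if item in lst:                       # hit: promote to next list
                if i + 1 < len(self.lists):
                    j = lst.index(item)
                    k = self.rng.randrange(len(self.lists[i + 1]))
                    lst[j], self.lists[i + 1][k] = self.lists[i + 1][k], item
                return True
        # miss: item enters the cache via the first list
        k = self.rng.randrange(len(self.lists[0]))
        self.lists[0][k] = item
        return False
```

Frequently hit items climb toward the later lists, where they are shielded from the insertions caused by misses, which only ever land in the first list.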
    Citations (77)