Application of arithmetic coding to compression of VLSI test data

2005 
This paper proposes arithmetic coding for the compression of VLSI test data. Arithmetic codes produce codewords whose length is close to the optimum predicted by entropy in information theory, thus achieving higher compression. Previous techniques (such as those based on Huffman or Golomb coding) yield optimal codes only for data sets whose symbol probability model satisfies specific requirements. This paper shows empirically and analytically that Huffman and Golomb codes can leave a large gap between the entropy bound and the attained compression, and the worst-case gap is characterized using information theory. Compression results for arithmetic coding are presented for the ISCAS benchmark circuits; a practical integer implementation of arithmetic coding/decoding is described and its deviation from the entropy bound is analyzed. A software implementation using embedded DSP cores is proposed. The experimental evaluation uses fully specified test vectors as well as test cubes from two different ATPG programs. The implications of arithmetic coding for manufacturing test with an ATE are also investigated.
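
The abstract refers to a practical integer implementation of arithmetic coding/decoding; the paper's own scheme is not reproduced here. As an illustration only, the sketch below is a textbook-style (Witten-Neal-Cleary) integer binary arithmetic coder with a static probability p0 for zero-valued bits; the static single-parameter model and all identifiers are assumptions made for this example, not the authors' design. It hints at why the entropy bound matters: for a skewed test-data stream with, say, p0 = 0.9, the entropy is about 0.47 bits per input bit, which an arithmetic coder can approach, whereas a Huffman code applied bit by bit cannot spend less than 1 bit per symbol without grouping bits into blocks.

```python
# Minimal integer binary arithmetic coder (a sketch in the style of
# Witten-Neal-Cleary; NOT the paper's exact implementation).  It uses a
# static probability p0 for '0' bits, with 0 < p0 < 1, as a stand-in for
# the skewed bit statistics typical of VLSI test data.

PRECISION = 32
TOP = (1 << PRECISION) - 1
HALF = 1 << (PRECISION - 1)
QUARTER = 1 << (PRECISION - 2)


def encode(bits, p0):
    """Encode a sequence of 0/1 symbols; returns a list of output bits."""
    low, high = 0, TOP
    pending = 0                      # bits deferred by the E3 (underflow) condition
    out = []
    split = int(p0 * (1 << 16))      # p0 as a 16-bit fixed-point fraction

    def emit(bit):
        nonlocal pending
        out.append(bit)
        out.extend([1 - bit] * pending)
        pending = 0

    for b in bits:
        span = high - low + 1
        mid = low + ((span * split) >> 16) - 1   # boundary between '0' and '1' subintervals
        if b == 0:
            high = mid
        else:
            low = mid + 1
        while True:                  # renormalize and output resolved bits
            if high < HALF:          # E1: interval entirely in lower half
                emit(0)
            elif low >= HALF:        # E2: interval entirely in upper half
                emit(1)
                low -= HALF
                high -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:   # E3: straddles the midpoint
                pending += 1
                low -= QUARTER
                high -= QUARTER
            else:
                break
            low = 2 * low
            high = 2 * high + 1
    pending += 1                     # flush: two final bits pin the last interval
    emit(0 if low < QUARTER else 1)
    return out


def decode(code, n, p0):
    """Recover n symbols from the encoded bit list."""
    low, high = 0, TOP
    split = int(p0 * (1 << 16))
    pos = 0

    def next_bit():
        nonlocal pos
        b = code[pos] if pos < len(code) else 0   # pad with zeros past the end
        pos += 1
        return b

    value = 0
    for _ in range(PRECISION):
        value = (value << 1) | next_bit()

    out = []
    for _ in range(n):
        span = high - low + 1
        mid = low + ((span * split) >> 16) - 1
        if value <= mid:
            out.append(0)
            high = mid
        else:
            out.append(1)
            low = mid + 1
        while True:                  # mirror the encoder's renormalization
            if high < HALF:
                pass
            elif low >= HALF:
                low -= HALF; high -= HALF; value -= HALF
            elif low >= QUARTER and high < HALF + QUARTER:
                low -= QUARTER; high -= QUARTER; value -= QUARTER
            else:
                break
            low = 2 * low
            high = 2 * high + 1
            value = (value << 1) | next_bit()
    return out
```

As a rough usage check, decode(encode(bits, 0.9), len(bits), 0.9) returns the original bits, and encoding a 1000-bit stream containing about 90% zeros at p0 = 0.9 typically produces on the order of 470-500 output bits, consistent with the entropy bound of roughly 0.47 bits per bit.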