Recently, a new test point insertion method for pseudo-random built-in self-test (BIST) was proposed in [Yang 09] which uses existing functional flip-flops to drive control test points instead of adding extra dedicated flip-flops for that purpose. This paper investigates methods to further reduce the area overhead by replacing the dedicated flip-flops that could not be replaced in [Yang 09]. A new algorithm (the alternative selection algorithm) is proposed to find candidate flip-flops outside the fan-in cone of a test point. Experimental results indicate that most of the flip-flops that could not be replaced in [Yang 09] can now be replaced, and hence an even more significant area reduction can be achieved while minimizing the loss of testability.
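For illustration only (this is not the selection procedure from the paper): one building block such a method needs is identifying functional flip-flops that lie outside the fan-in cone of a candidate control point, since those can drive the point without creating a structural feedback loop. A minimal sketch in Python, assuming a hypothetical netlist represented as a fan-in dictionary:

```python
# Minimal sketch (not the paper's algorithm): given a gate-level netlist as a
# dict mapping each node to its fan-in nodes, collect the flip-flops that lie
# outside the fan-in cone of a control-point location.

def fanin_cone(netlist, node):
    """Return all nodes reachable backward from `node` (its fan-in cone)."""
    cone, stack = set(), [node]
    while stack:
        n = stack.pop()
        for src in netlist.get(n, []):
            if src not in cone:
                cone.add(src)
                stack.append(src)
    return cone

def candidate_flops(netlist, flip_flops, control_point):
    """Functional flip-flops outside the control point's fan-in cone."""
    cone = fanin_cone(netlist, control_point)
    return [ff for ff in flip_flops if ff not in cone]

# Hypothetical toy netlist: node -> list of driving nodes
netlist = {"g1": ["ff1", "a"], "g2": ["g1", "ff2"], "cp": ["g2"]}
print(candidate_flops(netlist, ["ff1", "ff2", "ff3"], "cp"))  # ['ff3']
```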
The approach that the author takes for this panel is to start with a vanilla sequential linear decompressor and examine the results it obtains. The author then adds different enhancements along the lines of the factors discussed here and evaluates the impact these have on the results.
This paper presents a logic synthesis tool called BETSY (BIST Environment Testable SYnthesis) for synthesizing circuits that achieve complete (100%) fault coverage in a user specified BIST environment. Instead of optimizing the circuit for a generic pseudo-random test pattern generator (by maximizing its random pattern testability), the circuit is optimized for a specific test pattern generator, e.g., an LFSR with a specific characteristic polynomial and initial seed. This solves the problem of having to estimate fault detection probabilities during synthesis and guarantees that the resulting circuit achieves 100% fault coverage. BETSY considers the exact set of patterns that will be applied to the circuit during BIST and applies various transformations to generate an implementation that is fully tested by those patterns. When needed, BETSY inserts test points early in the synthesis process in an optimal way and accounts for them in satisfying timing constraints and other synthesis criteria. Experimental results are shown which demonstrate the benefits of optimizing a circuit for a particular test pattern generator.
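As a point of reference, the exact pattern set such a flow optimizes against is fully determined by the LFSR's feedback taps and seed. The sketch below is not part of BETSY; the tap positions and seed are illustrative, and it simply enumerates the patterns a specific LFSR would apply:

```python
# Minimal sketch: enumerate the exact pseudo-random patterns produced by a
# specific LFSR (feedback taps + seed), i.e. the deterministic test sequence a
# circuit would be optimized against. Taps and seed are illustrative only.

def lfsr_patterns(seed, taps, width, count):
    """Yield `count` states of an LFSR with XOR feedback from the tapped bits."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:                      # XOR of the tapped bit positions
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

# Example: 4-bit LFSR with illustrative taps (3, 2) and seed 0b1001
for p in lfsr_patterns(0b1001, taps=(3, 2), width=4, count=5):
    print(f"{p:04b}")
```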
The entropy of a set of data is related to the amount of information that it contains and provides a theoretical bound on the amount of compression that can be achieved. While calculating entropy is well understood for fully specified data, this paper explores the use of entropy for incompletely specified test data and shows how theoretical bounds on the maximum amount of test data compression can be calculated. An algorithm for specifying don't cares to minimize entropy for fixed length symbols is presented, and it is proven to provide the lowest entropy among all ways of specifying the don't cares. The impact of different ways of partitioning the test data into symbols on entropy is studied. Different test data compression techniques are analyzed with respect to their entropy bounds. Entropy theory is used to show the limitations and advantages of certain types of test data encoding strategies.
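To make the symbol-based view concrete, the following sketch fills don't cares with a simple greedy heuristic (not the paper's proven-optimal algorithm) and then evaluates the resulting entropy in bits per symbol; the test data shown is hypothetical:

```python
# Illustrative sketch: fill the don't cares ('x') in fixed-length symbols
# greedily, mapping each partially specified symbol to the most frequent
# compatible fully specified symbol, then compute the entropy bound.
from collections import Counter
from math import log2

def compatible(sym, full):
    return all(a in ('x', b) for a, b in zip(sym, full))

def fill_dont_cares(symbols):
    counts = Counter(s for s in symbols if 'x' not in s)
    filled = []
    for s in symbols:
        if 'x' not in s:
            filled.append(s)
            continue
        # pick the most frequent specified symbol compatible with s,
        # falling back to replacing every 'x' with '0'
        cands = [f for f in counts if compatible(s, f)]
        best = max(cands, key=counts.get) if cands else s.replace('x', '0')
        counts[best] += 1
        filled.append(best)
    return filled

def entropy(symbols):
    """Entropy in bits per symbol of the fully specified symbol stream."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

test_data = ["10x1", "1001", "x0x1", "1111", "10xx"]   # hypothetical cube
filled = fill_dont_cares(test_data)
print(filled, f"{entropy(filled):.3f} bits/symbol")
```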
A new lossless test vector compression scheme is presented which combines linear feedback shift register (LFSR) reseeding and statistical coding in a powerful way. Test vectors can be encoded as LFSR seeds by solving a system of linear equations. The solution space of the linear equations can be quite large. The proposed method takes advantage of this large solution space to find seeds that can be efficiently encoded using a statistical code. Two architectures for implementing LFSR reseeding with seed compression are described. One configures the scan cells themselves to perform the LFSR functionality while the other uses a new idea of "scan windows" to allow the use of a small separate LFSR whose size is independent of the number of scan cells. The proposed scheme can be used either for applying a fully deterministic test set or for mixed-mode built-in self-test (BIST), and it can be used in conjunction with other variations of LFSR reseeding that have been previously proposed to further improve the encoding efficiency.
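The classical reseeding step underlying this scheme can be sketched as follows: each LFSR output bit is a GF(2) linear combination of the seed bits, so the specified bits of a test cube form a linear system that Gaussian elimination can solve for a seed. The code below is a minimal illustration with made-up taps and test cube, not the paper's seed-compression architecture:

```python
# Minimal sketch of LFSR reseeding: build symbolic output equations over the
# seed bits, keep only the equations for specified cube bits, and solve the
# resulting GF(2) system by Gaussian elimination.

def lfsr_output_equations(taps, width, length):
    """Symbolic LFSR: each state bit is a bitmask over the seed bits."""
    state = [1 << i for i in range(width)]      # bit i of the seed
    eqs = []
    for _ in range(length):
        eqs.append(state[-1])                   # output = last stage
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]               # shift, feedback into stage 0
    return eqs

def solve_gf2(rows, rhs, width):
    """Gaussian elimination over GF(2); returns one solution or None."""
    rows, rhs = rows[:], rhs[:]
    pivots = []
    for col in range(width):
        piv = next((i for i in range(len(pivots), len(rows))
                    if rows[i] >> col & 1), None)
        if piv is None:
            continue
        i = len(pivots)
        rows[i], rows[piv] = rows[piv], rows[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for j in range(len(rows)):
            if j != i and rows[j] >> col & 1:
                rows[j] ^= rows[i]
                rhs[j] ^= rhs[i]
        pivots.append(col)
    if any(r == 0 and b for r, b in zip(rows, rhs)):
        return None                              # inconsistent system
    seed = 0
    for i, col in enumerate(pivots):             # free variables set to 0
        if rhs[i]:
            seed |= 1 << col
    return seed

# Hypothetical test cube: only the specified positions contribute equations.
cube = "1x0x1xx0"
eqs = lfsr_output_equations(taps=(0, 3), width=6, length=len(cube))
rows = [eqs[i] for i, c in enumerate(cube) if c != 'x']
rhs  = [int(c) for c in cube if c != 'x']
print(bin(solve_gf2(rows, rhs, width=6)))
```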
The proliferation of both partially depleted silicon-on-insulator (PD-SOI) technology and domino circuit styles has allowed circuit performance to increase beyond what is achievable by scaling traditional bulk CMOS static circuits. However, interactions between dynamic circuit styles and PD-SOI complicate testing. This paper describes the issues involved in testing domino circuits fabricated in SOI technology and proposes new tests to address these interactions. A fault modeling analysis is described which demonstrates that the overall fault coverage can be improved beyond that of traditional testing of domino circuits in bulk technology.
When assembling a three-dimensional integrated circuit (3D-IC), there are several degrees of freedom including which die are stacked together, in what order, and with what rotational symmetry. This paper describes strategies for exploiting these degrees of freedom to reduce the cost and complexity of implementing defect tolerance. Conventional defect tolerance schemes involve bypassing defects by reconfiguring the circuitry so that system operation is performed using defect-free circuitry. Explicit reconfiguration circuitry is required to perform the reconfiguration, and the power distribution network must be designed to support all redundant elements. The schemes proposed in this paper use the degrees of freedom that exist when a 3D-IC is assembled at manufacture time to implicitly bypass manufacturing defects without the need for explicit reconfiguration circuitry. Defects are identified during manufacture test, and the 3D-ICs are assembled in a way that avoids the use of the defective circuitry. It is shown that leakage power and performance overhead for defect tolerance can be significantly reduced.
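As a toy illustration of the idea (not the paper's assembly flow), the sketch below greedily pairs die from two layers, trying the four rotations of the second die, so that no defective unit position overlaps between layers; the defect maps and grid size are hypothetical:

```python
# Illustrative sketch: given per-die defect maps from manufacture test, pair
# die into two-layer stacks so that defective unit positions never align,
# letting good circuitry on one layer cover defects on the other without
# explicit reconfiguration logic.

def rotate(defects, rot, n):
    """Rotate defect coordinates on an n x n grid by rot*90 degrees."""
    pts = defects
    for _ in range(rot):
        pts = {(c, n - 1 - r) for r, c in pts}
    return pts

def pair_dies(layer1, layer2, n):
    """layer1/layer2: lists of defect-coordinate sets, one per die."""
    stacks, used = [], set()
    for i, d1 in enumerate(layer1):
        for j, d2 in enumerate(layer2):
            if j in used:
                continue
            for rot in range(4):
                if not d1 & rotate(d2, rot, n):   # defects do not align
                    stacks.append((i, j, rot))
                    used.add(j)
                    break
            else:
                continue
            break
    return stacks

layer1 = [{(0, 0)}, {(1, 1)}]           # hypothetical defect maps
layer2 = [{(0, 0)}, {(1, 1)}]
print(pair_dies(layer1, layer2, n=2))
```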
At-speed functional testing of deep sub-micron devices equipped with high-speed I/O ports poses significant challenges because of the asynchronous nature of such I/O transactions. In this paper, the problem of non-determinism in the output response of the device-under-test (DUT) is described. Non-determinism can arise due to the limited edge placement accuracy (EPA) of the automated test equipment (ATE) in the source-synchronous clock of the stimulus stream driven from the tester to the high-speed I/O port. A simple yet effective solution is presented that uses a trigger signal to initiate a deterministic transfer of test inputs from the high-speed I/O port into the core clock domain of the DUT. The solution allows at-speed functional patterns to be applied to the DUT while incurring very small hardware overhead and a trivial increase in test application time. An analysis of the probability of non-determinism as a function of clock speed and EPA is presented. It shows that as the operating frequency of high-speed I/Os continues to rise, non-determinism will become a significant problem that can result in unacceptable yield loss.
This paper presents an innovative method for inserting test points in the circuit-under-test to obtain complete fault coverage for a specified set of test patterns. Rather than using probabilistic techniques for test point placement, a path tracing procedure is used to place both control and observation points. Rather than adding extra scan elements to drive the control points, a few of the existing primary inputs to the circuit are ANDed together to form signals that drive the control points. By selecting which patterns the control point is activated for, the effectiveness of each control point is maximized. A comparison is made with the best previously published results for other test point insertion methods, and it is shown that the proposed method requires fewer test points and less overhead to achieve the same or better fault coverage.
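To illustrate the flavor of the idea (this is not the paper's path-tracing procedure, and allowing complemented inputs is an added assumption for the example), the sketch below searches for a small set of primary-input literals whose AND activates a control point only on selected patterns:

```python
# Illustrative sketch: choose a few primary inputs whose AND is 1 on the
# patterns where a control point should be activated and 0 on patterns where
# activation must be avoided. Inputs may be used in true or complemented form.
from itertools import combinations

def find_and_gate(patterns, activate, avoid, max_inputs=3):
    """patterns: list of input vectors (tuples of 0/1); returns literals whose
    AND is 1 on all `activate` patterns and 0 on all `avoid` patterns."""
    n = len(patterns[0])
    literals = [(i, v) for i in range(n) for v in (0, 1)]  # (input, polarity)
    for k in range(1, max_inputs + 1):
        for combo in combinations(literals, k):
            sat = lambda p: all(p[i] == v for i, v in combo)
            if all(sat(patterns[a]) for a in activate) and \
               not any(sat(patterns[a]) for a in avoid):
                return combo
    return None

patterns = [(0, 1, 1), (1, 1, 0), (1, 0, 1), (1, 1, 1)]   # hypothetical
print(find_and_gate(patterns, activate=[1], avoid=[0, 2, 3]))
```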
Linear decompressors are the dominant methodology used in commercial test data compression tools. However, they are generally not able to exploit correlations in the test data, and thus the amount of compression that can be achieved with a linear decompressor is directly limited by the number of specified bits in the test data. The paper describes a scheme in which a nonlinear decoder is placed between the linear decompressor and the scan chains. The nonlinear decoder uses statistical transformations that exploit correlations in the test data to reduce the number of specified bits that need to be produced by the linear decompressor. Given a test set, a procedure is presented for selecting a statistical code that effectively "compresses" the number of specified bits (note that this is a novel and different application of statistical codes from what has been studied before and requires new algorithms). Results indicate that the overall compression can be increased significantly using a small nonlinear decoder produced with the procedure described in this paper.
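A rough way to see why a statistical code can "compress" specified bits: if blocks of the test cube are mapped to a few frequent patterns and those patterns receive short codewords, the linear decompressor only has to produce the codeword bits. The sketch below illustrates this with a Huffman code over a hypothetical set of blocks; it is not the paper's encoding procedure:

```python
# Illustrative sketch: split a test cube into fixed-length blocks, map each
# block to a compatible frequent pattern, assign Huffman codewords, and compare
# the specified-bit count before and after encoding.
import heapq
from collections import Counter

def huffman(freqs):
    """Return a prefix code {symbol: bitstring} for the given frequencies."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

blocks = ["00x0", "0000", "x0x0", "1111", "00xx", "0000"]   # hypothetical cube
# Map each block to the most frequent fully specified block compatible with it.
full = Counter(b for b in blocks if "x" not in b)
def resolve(b):
    cands = [f for f in full if all(a in ("x", c) for a, c in zip(b, f))]
    return max(cands, key=full.get) if cands else b.replace("x", "0")
symbols = [resolve(b) for b in blocks]
code = huffman(Counter(symbols))
before = sum(c != "x" for b in blocks for c in b)            # specified bits
after = sum(len(code[s]) for s in symbols)                   # codeword bits
print(code, before, after)
```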