1. Technical Field
This disclosure generally relates to testing and evaluation of electronic integrated circuits such as Application Specific Integrated Circuit (ASIC) and System-on-a-Chip (SoC) designs, and more specifically relates to a method and apparatus for self-evaluation of an integrated circuit, such as an SoC or ASIC with multiple cores, for Partial Good (PG) testing of the integrated circuit.
2. Background Art
Digital integrated circuits such as a system-on-a-chip (SoC) with ASIC or custom integrated circuit designs are becoming increasingly complex. SoC designs include increasing numbers of microprocessor cores, some of which may be redundant for functional or manufacturing yield reasons. These multiple (microprocessor) cores are difficult to test and characterize because they are embedded within the design. Multiple cores per die also increase manufacturing test time, complexity, and cost. As used herein, a “core” is a microcontroller, processor, digital signal processor (DSP) or other large block of circuitry that is replicated in a number of instances on an integrated circuit.
The testing of these devices is therefore becoming increasingly important. Testing of a device may be important at various stages, including in the design of the device, in the manufacturing of the device, and in the operation of the device. Testing during the manufacturing stage may be performed to ensure that the timing, proper operation and performance of the device are as expected. Ideally, it would be helpful to test the device for every possible defect. Because of the complexity of most devices, however, it is becoming prohibitively expensive to take the deterministic approach of testing every possible combination of inputs and states for each logic gate of the device. A more practical approach applies pseudorandom input test patterns to the inputs of the different logic gates. The outputs of the logic gates are then compared to the outputs generated by a “good” device (one that is known to operate properly) in response to the same pseudorandom input test patterns. The more input patterns that are tested, the higher the probability that the logic circuit being tested operates properly (assuming there are no differences between the results generated by the two devices).
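As a purely illustrative sketch (no such code appears in this disclosure), the following Python models this approach with an invented 4-bit LFSR as the pseudorandom pattern source and stand-in functions for the known-good device and the device under test:

```python
# Minimal sketch of pseudorandom compare testing. The LFSR parameters and
# the device models are invented for illustration only.

def lfsr_patterns(seed=0b1001, taps=(3, 2), count=15):
    """Yield pseudorandom 4-bit test patterns from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & 0xF

def golden_device(pattern):
    """Stand-in for the response of the known-good device."""
    return (pattern ^ (pattern >> 1)) & 0xF

def device_under_test(pattern):
    """Stand-in DUT; a defect could be modeled by forcing a bit."""
    return (pattern ^ (pattern >> 1)) & 0xF  # fault-free in this sketch

mismatches = sum(
    golden_device(p) != device_under_test(p) for p in lfsr_patterns()
)
print("device passes" if mismatches == 0 else f"{mismatches} mismatches")
```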
This non-deterministic approach can be implemented using built-in test such as logic built-in self-test (LBIST) techniques. For example, one LBIST technique involves incorporating latches between portions of the logic being tested (the target logic), loading these latches with pseudorandom bit patterns and then capturing the bit patterns that result from the propagation of the pseudorandom data through the target logic. Conventionally, the captured bit patterns are scanned out of the scan chains into a multiple-input signature register (MISR), in which the bit patterns are combined with an existing signature value to produce a new signature value. This signature value can be examined (e.g., compared with the signature generated in a device that is known to operate properly) to determine whether or not the device under test functioned properly during the test.
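In the same illustrative spirit, a MISR can be modeled as a shift register with feedback whose parallel inputs XOR in each captured scan-out pattern. The register width and feedback polynomial below are invented for the sketch and are not taken from any particular device:

```python
# Minimal software model of a multiple-input signature register (MISR).
# Width and feedback taps are hypothetical; real LBIST hardware fixes
# both by design.

MISR_WIDTH = 8
FEEDBACK_TAPS = (7, 5, 4, 3)  # invented polynomial for illustration

def misr_update(signature, captured_bits):
    """Combine one captured scan-out pattern with the running signature."""
    feedback = 0
    for t in FEEDBACK_TAPS:
        feedback ^= (signature >> t) & 1
    shifted = ((signature << 1) | feedback) & ((1 << MISR_WIDTH) - 1)
    return shifted ^ captured_bits  # parallel inputs XOR into the register

signature = 0
for pattern in (0x3A, 0x5C, 0x81, 0x0F):  # hypothetical captured patterns
    signature = misr_update(signature, pattern)
print(f"final signature: {signature:#04x}")  # compare with golden signature
```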
In some devices, such as multiprocessor integrated circuits and SoCs, the device may be considered “good” even if some portions of the device include defects. For instance, in an SoC having multiple cores, the SoC may still be functional if one or more of the cores is defective. The device is then said to be Partial Good (PG), and testing for this status is called PG testing.
The disclosure and claims herein are directed to a method and structure to test an SoC or other integrated circuit having multiple cores for chip characterization or partial good status. A Self Evaluation Engine (SEE) on each core creates a quality metric or partial good value for the core. The SEE executes one or more tests to create a characterization signature for the core. The SEE then compares the characterization signature of a core with the characterization signatures of neighboring cores to determine the partial good value for the core. The SEE may output a result to create a full characterization map for detailed diagnostics, or a partial good map with values for all cores to produce a partial good status for the entire SoC.
The foregoing and other features and advantages will be apparent from the following more particular description, as illustrated in the accompanying drawings.
The disclosure will be described in conjunction with the appended drawings, where like designations denote like elements.
The multiple core chip 110 illustrated in the accompanying drawings includes multiple cores, each having a Self Evaluation Engine (SEE).
The external tester 112, in conjunction with the SEE in each core, tests the cores in a manner known in the prior art to produce a signature that represents the state of each core. The external tester may load a small program into a control core (not shown) and then allow the self test 128 to produce the local signature 210. The cores can run the self tests concurrently, and each core then compares its local signature with the signatures of neighboring cores to efficiently determine a partial good value for the local core. Various methods of comparing the signatures are described in the examples below.
As introduced above, in a core-centric compare, a core's characterization signature is compared to the characterization signatures of a number of neighboring cores. The number of comparisons is the compare range. The range may be predefined in the SEE or may be programmable from the external tester 112.
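A single pass of this compare might be sketched as follows. The disclosure does not fix a neighbor topology, so the sketch assumes the cores form a ring and the compare range is split evenly across the nearest neighbors on each side:

```python
# Minimal sketch of one core-centric compare pass, under the assumed
# ring topology described above.

def match_count(signatures, index, compare_range):
    """Count matching characterization signatures among the nearest
    `compare_range` neighbors of core `index`, wrapping around."""
    n = len(signatures)
    half = compare_range // 2
    offsets = [o for k in range(1, half + 1) for o in (-k, k)]
    return sum(signatures[index] == signatures[(index + o) % n]
               for o in offsets)

def single_pass(signatures, compare_range, threshold):
    """A core is provisionally good if its match count exceeds the
    failing threshold."""
    return [match_count(signatures, i, compare_range) > threshold
            for i in range(len(signatures))]
```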
The SEE can evaluate the core in one or more passes of comparing the local characterization signature with the characterization signatures of neighboring cores. For multiple passes, the SEE may include logic to bypass comparisons with neighboring cores that did not meet the threshold in the previous pass. In many cases a single pass is preferred. The number of passes may be preset or programmable from the external tester in the same manner as the compare range and the threshold.
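Building on the same ring assumption, a multi-pass evaluation with bypass might look like the following sketch of the described flow (not the disclosed hardware), with the compare range, failing threshold, and pass count as parameters:

```python
# Multi-pass core-centric compare with bypass of failed cores; a sketch
# of the described flow under the assumed ring topology.

def evaluate_cores(signatures, compare_range=4, threshold=0, passes=1):
    """Return a partial good value per core (True = good). Cores that
    fail a pass are bypassed: later passes neither compare against them
    nor reconsider them."""
    n = len(signatures)
    half = compare_range // 2
    offsets = [o for k in range(1, half + 1) for o in (-k, k)]
    good = [True] * n  # every core participates in the first pass
    for _ in range(passes):
        counts = [
            sum(good[(i + o) % n] and signatures[i] == signatures[(i + o) % n]
                for o in offsets)
            for i in range(n)
        ]
        # A core passes when its match count exceeds the failing threshold.
        good = [good[i] and counts[i] > threshold for i in range(n)]
    return good
```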
We will consider a specific example with eight cores, C1 through C8, whose comparison results are summarized in the tables below.
After the first pass, the results of the comparisons are as shown in Table 1A. In subsequent passes, all cores that have mismatches per the threshold will be bypassed. In this example, if the threshold is set at 0, then cores C4 and C6 would be marked as bad and an additional pass would not be required. However, if the failing threshold were 2, then C7 and C8 would also have been marked as bad. Here we assume the number of passes is set at 2 and the threshold is 0, so another pass will be performed. In Table 1A, it can be seen that cores C4 and C6 had all mismatched signatures and will be bypassed in the next pass, shown in Table 1B.
Table 1B shows the results of the compares after the second pass. From the data in Table 1B, the comparisons have identified six passing cores and two failing cores (C4 and C6). The SEE of each core will identify itself as passing (cores C1, C2, C3, C5, C7 and C8) or failing (C4 and C6) and generate a goodness value 516.
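To exercise the evaluate_cores sketch above, here is a hypothetical run for eight cores. The actual signature values of this example are not given in the text, so the letters below are invented, chosen only so that C4 and C6 mismatch all of their neighbors; the run mirrors the final Table 1B outcome but not every intermediate count:

```python
# Hypothetical signatures for cores C1..C8 (invented data): 'A' for the
# good cores, unique values for the presumed-bad cores C4 and C6.
signatures = ["A", "A", "A", "X", "A", "Y", "A", "A"]
#              C1   C2   C3   C4   C5   C6   C7   C8

good = evaluate_cores(signatures, compare_range=4, threshold=0, passes=2)
for name, ok in zip([f"C{i}" for i in range(1, 9)], good):
    print(name, "pass" if ok else "fail")
# With this assumed data, C4 and C6 fail and the remaining six cores
# pass, mirroring the Table 1B outcome described above.
```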
We will consider a second example of a core-centric compare. In this example, there are 12 cores, C1 through C12, each with its own internally generated signature like the local signature 210 described above.
After making the comparisons for the entire compare range, cores C4, C6 and C11 would each be identified as a bad core, having matched 0 of 4 and thus failed the threshold set at 0. Note that if the threshold were set to 1, then cores C8, C9 and C12 would also have been marked as bad. These would be marked as bad even though other cores with the same characterization signature (D) are marked as good, because multiple cores share that signature.
In the previous example, rather than moving the threshold to 1, a better approach would be to increase the compare range to 6. Cores C4, C6 and C11 would still be identified as bad, but cores C8, C9 and C12 would have 2 matching flags and so would pass even the stricter threshold of 1. Conversely, if the compare range were dropped to 2, then not only are C4, C6 and C11 marked as bad with a threshold of 0, but so are C5 and C12. Typically a threshold of 0 is sufficient to sort the good from the bad, as it is very unlikely that two bad cores will have the same signature. However, the wider the compare range, the more chances a core gets to find another that it matches, saving it from being marked as bad. Eliminated cores (cores marked as bad) could also be recovered in a later step by comparing every proven passing signature across the full range. That, however, would be better accomplished by setting the compare range to the full width of the cores for a single first pass, at additional cost of time and testing resources.
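The range-versus-threshold trade-off can be explored with the same sketch. The 12-core assignment below is again invented, with unique signatures at C4, C6 and C11 and a shared signature D at C8, C9 and C12 to loosely echo this example; because the real signature values and neighbor topology are not disclosed, the sweep reproduces the qualitative effect rather than the exact tables:

```python
# Hypothetical 12-core assignment (invented data): unique signatures at
# C4, C6 and C11; shared signature 'D' at C8, C9 and C12; 'A' elsewhere.
signatures = ["A", "A", "A", "U", "A", "V", "A", "D", "D", "A", "W", "D"]
#              C1   C2   C3   C4   C5   C6   C7   C8   C9   C10  C11  C12

for compare_range in (2, 4, 6):
    good = evaluate_cores(signatures, compare_range, threshold=0, passes=1)
    bad = [f"C{i + 1}" for i, ok in enumerate(good) if not ok]
    print(f"compare range {compare_range}: bad cores = {bad}")
# Narrowing the range condemns good cores that happen to sit next to bad
# ones, while widening it gives each core more chances to find a match;
# with this data, range 6 leaves exactly C4, C6 and C11 marked as bad.
```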
The described method and structure provide a cost-effective and efficient way to test an SoC or other integrated circuit having multiple cores for partial good status. A Self Evaluation Engine (SEE) evaluates each of the multiple cores with metrics of goodness to create a partial good status for the SoC, by comparing the characterization signature of each core with those of neighboring cores to determine a partial good value for the core.
One skilled in the art will appreciate that many variations are possible within the scope of the claims. Thus, while the disclosure is particularly shown and described above, it will be understood by those skilled in the art that these and other changes in form and details may be made therein without departing from the spirit and scope of the claims.