The present invention is related to the following commonly-owned, United States Patent Applications filed on even date herewith, the entire contents and disclosure of each of which is expressly incorporated by reference herein as if fully set forth herein. U.S. patent application Ser. No. 11/768,645, now U.S. Pat. No. 7,886,084, for “OPTIMIZED COLLECTIVES USING A DMA ON A PARALLEL COMPUTER”; U.S. patent application Ser. No. 11/768,781, now U.S. Pat. No. 7,694,035 for “DMA SHARED BYTE COUNTERS IN A PARALLEL COMPUTER”; U.S. patent application Ser. No. 11/768,784, now U.S. Pat. No. 7,788,334, for “MULTIPLE NODE REMOTE MESSAGING”; U.S. patent application Ser. No. 11/768,697, now U.S. Pat. No. 8,103,832, for “A METHOD AND APPARATUS OF PREFETCHING STREAMS OF VARYING PREFETCH DEPTH”; U.S. patent application Ser. No. 11/768,532, now U.S. Pat. No. 7,877,551, for “PROGRAMMABLE PARTITIONING FOR HIGH-PERFORMANCE COHERENCE DOMAINS IN A MULTIPROCESSOR SYSTEM”; U.S. patent application Ser. No. 11/768,857, now U.S. Pat. No. 7,827,391, for “METHOD AND APPARATUS FOR SINGLE-STEPPING COHERENCE EVENTS IN A MULTIPROCESSOR SYSTEM UNDER SOFTWARE CONTROL”; U.S. patent application Ser. No. 11/768,547, now U.S. Pat. No. 7,669,012, for “INSERTION OF COHERENCE EVENTS INTO A MULTIPROCESSOR COHERENCE PROTOCOL”; U.S. patent application Ser. No. 11/768,791, now U.S. Pat. No. 8,140,925, for “METHOD AND APPARATUS TO DEBUG AN INTEGRATED CIRCUIT CHIP VIA SYNCHRONOUS CLOCK STOP AND SCAN”; U.S. patent application Ser. No. 11/768,795, now U.S. Pat. No. 7,802,025, for “DMA ENGINE FOR REPEATING COMMUNICATION PATTERNS”; U.S. patent application Ser. No. 11/768,799, now U.S. Pat. No. 7,680,971, for “METHOD AND APPARATUS FOR A CHOOSE-TWO MULTI-QUEUE ARBITER”; U.S. patent application Ser. No. 11/768,800 for “METHOD AND APPARATUS FOR EFFICIENTLY TRACKING QUEUE ENTRIES RELATIVE TO A TIMESTAMP”; U.S. patent application Ser. No. 11/768,572, now U.S. Pat. No. 7,701,846, for “BAD DATA PACKET CAPTURE DEVICE”; U.S. patent application Ser. No. 11/768,593 for “EXTENDED WRITE COMBINING USING A WRITE CONTINUATION HINT FLAG”; U.S. patent application Ser. No. 11/768,805, now U.S. Pat. No. 7,793,038, for “A SYSTEM AND METHOD FOR PROGRAMMABLE BANK SELECTION FOR BANKED MEMORY SUBSYSTEMS”; U.S. patent application Ser. No. 11/768,905, now U.S. Pat. No. 7,761,687, for “AN ULTRASCALABLE PETAFLOP PARALLEL SUPERCOMPUTER”; U.S. patent application Ser. No. 11/768,810, now U.S. Pat. No. 8,108,738, for “SDRAM DDR DATA EYE MONITOR METHOD AND APPARATUS”; U.S. patent application Ser. No. 11/768,812, now U.S. Pat. No. 7,797,503, for “A CONFIGURABLE MEMORY SYSTEM AND METHOD FOR PROVIDING ATOMIC COUNTING OPERATIONS IN A MEMORY DEVICE”; U.S. patent application Ser. No. 11/768,559, now U.S. Pat. No. 8,010,875, for “ERROR CORRECTING CODE WITH CHIP KILL CAPABILITY AND POWER SAVING ENHANCEMENT”; U.S. patent application Ser. No. 11/768,552, now U.S. Pat. No. 7,873,843, for “STATIC POWER REDUCTION FOR MIDPOINT-TERMINATED BUSSES”; U.S. patent application Ser. No. 11/768,527 for “COMBINED GROUP ECC PROTECTION AND SUBGROUP PARITY PROTECTION”; U.S. patent application Ser. No. 11/768,669, now U.S. Pat. No. 7,984,448, for “A MECHANISM TO SUPPORT GENERIC COLLECTIVE COMMUNICATION ACROSS A VARIETY OF PROGRAMMING MODELS”; U.S. patent application Ser. No. 11/768,813, now U.S. Pat. No. 8,032,892, for “MESSAGE PASSING WITH A LIMITED NUMBER OF DMA BYTE COUNTERS”; U.S. patent application Ser. No. 11/768,619, now U.S. Pat. No. 
7,738,443, for “ASYNCHRONOUS BROADCAST FOR ORDERED DELIVERY BETWEEN COMPUTE NODES IN A PARALLEL COMPUTING SYSTEM WHERE PACKET HEADER SPACE IS LIMITED”; U.S. patent application Ser. No. 11/768,682 for “HARDWARE PACKET PACING USING A DMA IN A PARALLEL COMPUTER”; and U.S. patent application Ser. No. 11/768,752, now U.S. Pat. No. 8,001,401, for “POWER THROTTLING OF COLLECTIONS OF COMPUTING ELEMENTS”.
1. Field of the Invention
The present invention generally relates to computer systems using multiprocessor architectures and, more particularly, to a novel implementation of performance counters for recording occurrence of certain events.
2. Description of the Prior Art
Many processor architectures include on a chip a set of counters that allow counting of processor events and system events on the chip, such as cache misses, pipeline stalls and floating-point operations. This counter block is referred to as the “performance counters”.
Performance counters are used for monitoring system components such as processors, memory, and network I/O. Statistics of processor events can be collected in hardware with little or no overhead on the operating system and the applications running on it, making these counters a powerful means to monitor an application and analyze its performance. Such counters do not require recompilation of applications.
Performance counters are important for evaluating the performance of a computer system. This is particularly important for high-performance computing systems, such as Blue Gene/P, where performance tuning to achieve high efficiency on a highly parallel system is critical. Performance counters provide a highly important feedback mechanism to application tuning specialists.
Many available processors, such as the UltraSPARC and the Pentium, provide performance counters. However, most traditional processors support a very limited number of counters. For example, Intel's X86 and IBM PowerPC implementations typically support 4 to 8 event counters. While each counter can typically be programmed to count a specific event from the set of possible counter events, it is not possible to count more than N events simultaneously, where N is the number of counters physically implemented on the chip.
With the advent of chip multiprocessor systems, performance counter design faces new challenges. Some multiprocessor systems start from existing uni-processor designs and replicate them on a single chip. These designs typically inherit the design point of the processor's performance monitor unit. Thus, each processor has a small number of performance counters associated with it. Each performance unit has to be accessed independently, and the number of counter events which can be counted simultaneously per processor cannot exceed N, where N is the number of counters associated with the processor. Thus, even when the total number of performance counters on a chip, M, where M=k×N, k being the number of processors and N the number of counters per processor, is quite large, the number of events being counted simultaneously per processor cannot exceed N, the number of counters associated with each core. For example, with k=4 processors and N=8 counters per processor, the chip holds M=32 counters, yet no single processor can count more than 8 events at a time.
An example of such a design is Intel's dual-core Itanium 2 chip, which implements two processor cores. The performance counters in Intel's dual-core Itanium 2 processor are implemented as two independent units, each assigned to a single processor. Each processor core has 12 performance counters associated with it, and each processor can use only its own 12 counters for counting its events.
While having distributed performance counters assigned to each processor is a simple solution, it makes programming the performance monitor units more complex. For example, getting a snapshot of an application's performance at a certain point in time is complicated. To get accurate performance information for an application phase, all processors have to be stopped to read out the values of the performance counters. To get performance information for all processors on the chip, multiple performance monitor units have to be accessed, counter values have to be read out, and this information has to be consolidated into a single result. In addition, each counter unit has a plurality of processor events, from which a selected number of events is tracked at any time. In a multiple-counter-unit design, a certain subset has to be selected from each set of counter events. It is not possible to count additional events from that group simultaneously by mapping them to other performance counter units. Such a design is less flexible in selecting a needed set of counter events, and to count a number of events from a single processor larger than the number of counters implemented per processor, multiple application runs have to be performed.
It would be highly desirable to have a design for a performance monitor unit in a multiprocessor environment which is easy to program and access, and which allows free allocation of counters among the processors. It would be highly desirable that such a performance monitor unit allow all of the performance counters available on a chip to be assigned to a single processor so that a large number of processor events can be counted simultaneously, or that such a design allow flexible allocation of counters to processors as needed for individual performance tuning tasks. This would allow more efficient usage of available resources and simplify performance tuning by reducing cost.
In the prior art, the following patents address subject matter related to the present invention:
U.S. Pat. No. 5,615,135 describes an implementation of a reconfigurable counter array. The counter array can be configured into counters of different sizes, and can be configured into groups of counters. This patent does not teach or suggest a system and method for using counters for performance monitoring in a multiprocessor environment.
U.S. Patent Application Publication No. US 2005/0262333 A1 describes an implementation of a branch prediction unit which uses an array to store how many iterations each loop will execute, in order to improve the branch prediction rate. It does not teach how to implement performance counters in a multiprocessor environment.
Having set forth the limitations of the prior art, it is clear that what is required is a system that allows flexible allocation of performance counters to processors on an as-needed basis, thus increasing the overall system resource utilization without limiting the system design options. While the herein disclosed invention teaches usage of a performance monitor unit which allows flexible allocation of performance counters between multiple processors on a single chip, or in a system, for counting a large number of individual events generated by components of a computer system, such as processors, the memory system, and network I/O, and is described as such in the preferred embodiment, the invention is not limited to that particular usage.
It is therefore an object of the present invention to provide a novel design of a performance counter unit that is shared between multiple processors or within a group of processors in a multiprocessor system. The invention teaches a unified counter unit for counting a number of events from multiple processors simultaneously.
In one embodiment, multiple processors provide performance monitoring events to the performance monitoring unit, and from this set of events, a subset of events is selected. The selection of events to count is flexible, and it can be from the set of events represented by event signals generated from a single processor, from several processors, or from all processors in a processor group or on a chip simultaneously. The selection of event signals to count is programmable, thus providing a flexible solution.
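By way of illustration only, the following is a minimal sketch of such programmable event selection, written in C. The names and sizes used here (NUM_CORES, EVENTS_PER_CORE, NUM_COUNTERS, and the event_select fields) are assumptions made for the example and are not part of the specification; the sketch merely shows how any counter could be steered to any (core, event) pair, so that all counters may be devoted to one core or spread across several cores.

```c
#include <stdint.h>

#define NUM_CORES       4     /* assumed number of processor cores       */
#define EVENTS_PER_CORE 64    /* assumed number of event lines per core  */
#define NUM_COUNTERS    256   /* assumed number of shared counters       */

/* Each counter is steered to one (core, event) pair. */
struct event_select {
    uint8_t core;   /* which processor's event signals to observe */
    uint8_t event;  /* which event line from that processor       */
};

static struct event_select select_cfg[NUM_COUNTERS];
static uint64_t counters[NUM_COUNTERS];

/* Sample the raw event lines from every core for one cycle; each counter
 * increments only if the line selected by its configuration is asserted. */
void pmu_sample(const uint8_t events[NUM_CORES][EVENTS_PER_CORE])
{
    for (int i = 0; i < NUM_COUNTERS; i++) {
        const struct event_select *s = &select_cfg[i];
        if (events[s->core][s->event])
            counters[i]++;
    }
}
```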
It is a further object of the present invention to provide a method and apparatus for flexible allocation of performance counters to processors in a multiprocessor system on an as-needed basis, thus increasing the overall system resource utilization without limiting the system design options. This flexible method allows tracking of a much larger number of events for a single processor in a multiprocessor system, or a smaller number of events for all processors, simultaneously.
In accordance with one aspect of the invention, there is provided a performance monitoring unit (PMU) for monitoring performance of events occurring in a multiprocessor system, said multiprocessor system comprising a plurality of processor devices, each processor device for generating signals representing occurrences of events at said processor device, said PMU comprising:
a plurality of performance counters each for counting signals representing occurrences of events from one or more of said plurality of processor devices in said multiprocessor system;
a plurality of input devices for receiving said event signals from one or more of said plurality of processor devices, said plurality of input devices being programmable to select event signals for receipt by one or more of said plurality of performance counters for counting,
wherein said PMU is shared between multiple processor devices, or within a group of processors, in said multiprocessor system.
In one embodiment of the invention, the PMU further comprises means for programmably selecting one or more of said plurality of input devices to allocate performance counters for simultaneously monitoring said event signals from said single, multiple or all processor devices.
In one additional embodiment of the invention, the means for programmably selecting one or more of said plurality of input devices comprises one or more programmable counter configuration registers adapted for configuring select input devices to receive certain event signals from certain processor devices in said multiprocessor system.
In an additional embodiment, the performance monitoring unit further comprises means accessible by one or more of said processor devices for reading a count value from one or more of said plurality of performance counters, and for writing a value to one or more of said plurality of performance counters.
In accordance with another aspect of the invention, there is provided a multiprocessor system having two or more functional groups of processor units, each functional group including a plurality of processor devices, said system comprising:
an individual performance monitor unit (PMU) associated with a respective group of the two or more groups of processor units, each PMU having:
In accordance with this another aspect of the invention, the PMU further comprises:
a means for programmably selecting one or more of said plurality of input devices to allocate performance counters for simultaneously monitoring said event signals from said single, multiple or all processor devices of a functional group,
wherein a respective PMU is shared for tracking event signals only from its dedicated functional group.
Further, in accordance with this another aspect of the invention, the multiprocessor system further includes:
In further accordance with this another aspect of the invention, an individual performance monitor unit (PMU) associated with a respective functional group is further adapted for monitoring event signals from processor devices or non-processor devices sourced from another functional group.
In a further embodiment of the invention, there is provided a central performance monitor unit for providing configuration information for programmably configuring a respective performance monitor unit in one or more functional groups to simultaneously monitor said event signals from processor or non-processor devices in said multiprocessor system in a same or different functional group.
In accordance with yet another aspect of the invention, there is provided a method for monitoring event signals from one or more processor or non-processor devices in a multiprocessor system, each processor and non-processor device for generating signals representing occurrences of events at said processor or non-processor device, said method comprising:
providing an individual performance monitor unit (PMU) for monitoring performance of events occurring in a multiprocessor system;
providing, in said PMU, a plurality of performance counters each for counting signals representing occurrences of events from one or more of said plurality of processor or non-processor devices in said multiprocessor system; and,
providing, in said PMU, a plurality of input devices for receiving said event signals from one or more of said processor or non-processor devices; and,
programming one or more of said plurality of input devices to select event signals for receipt by one or more of said plurality of performance counters for counting,
wherein said PMU is shared between multiple processor or non-processor devices, or within a respective group of processor or non-processor devices, in said multiprocessor system.
Further to this yet another aspect of the invention, said programming one or more of said plurality of input devices comprises implementing logic at said PMU for:
identifying a type of current event signal received from a processor or non-processor device;
determining if a performance counter is configured for receiving said current event signal; and, if a performance counter is configured for receiving said current event signal:
identifying a processor core generating said current event signal; and,
determining if a performance counter is configured for receiving said current event signal from said identified processor core.
Still further to this yet another aspect of the invention, programming one or more of said plurality of input devices comprises implementing logic for:
determining if a performance counter is associated with a current event signal received;
identifying one or more counters associated with the current event; and,
determining if the identified one or more counters are associated with the current event type and a current processing core; and,
identifying the one or more counters that are associated with the current processor core and with the current event type. An illustrative sketch of this selection logic is provided below.
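The following is a minimal, purely illustrative C sketch of the selection logic just described: a counter counts an incoming event only when it is configured both for that event type and for the core that raised it. The structure names and fields (counter_cfg, event, the enabled flag) are assumptions made for the example, not details of the specification.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed per-counter configuration: which event type and which core the
 * counter is currently allocated to. */
struct counter_cfg {
    bool    enabled;
    uint8_t core;
    uint8_t event_type;
};

/* Assumed description of an incoming event signal. */
struct event {
    uint8_t core;        /* processor core that generated the signal */
    uint8_t event_type;  /* type of the current event signal         */
};

/* Returns true if the counter described by cfg should count this event. */
bool counter_matches(const struct counter_cfg *cfg, const struct event *ev)
{
    if (!cfg->enabled)
        return false;
    if (cfg->event_type != ev->event_type)  /* counter not configured for this event type */
        return false;
    return cfg->core == ev->core;           /* counter must also be bound to this core    */
}
```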
In one advantageous use of the present invention, the performance counters of a PMU provide a highly important feedback mechanism to application tuning specialists. That is, event statistics are used to tune applications to increase application performance and, ultimately, system performance. This is particularly important for high-performance computing systems, where applications are carefully tuned to achieve high efficiency on a highly parallel multiprocessor system.
The objects, features and advantages of the present invention will become apparent to one skilled in the art, in view of the following detailed description taken in combination with the attached drawings, in which:
Referring now to drawings, and more particularly to
It would be understood by one skilled in the art that other embodiments are also possible: connecting only the multitude of processors 100a, . . . , 100k to the PMU 130 without any connection from other blocks in the system, or connecting counter events from the multitude of processors and counter events from one or more non-processor blocks in the system to the performance monitor unit 130, without departing from the scope of this invention. In addition to the aforementioned network, memory and floating-point blocks, the non-processor blocks providing counter events can be blocks for vector computation or some other specialized computation, blocks for system initialization and testing, blocks for temperature, voltage or other environmental monitoring, or some other control system, as is obvious to anybody skilled in the art.
Referring now to
In one embodiment, the selection of events to monitor is performed at the PMU itself, which is programmed to configure the input devices, e.g., multiplexers or like logic gated inputs, and/or to configure the parallel performance counters for receiving certain event signal types from certain processing cores. In one embodiment, the performance monitor unit may comprise a hybrid performance monitoring unit such as that described in U.S. patent application Ser. No. 11/507,307 entitled METHOD AND APPARATUS FOR EFFICIENT PERFORMANCE MONITORING OF A LARGE NUMBER OF SIMULTANEOUS EVENTS, now U.S. Pat. No. 7,461,383, the whole contents and disclosure of which is incorporated by reference as if fully set forth herein.
In the preferred embodiment, from the set of M counters, all M counters can be used for counting processor events, or any other counter events from the system.
In yet another embodiment, only a subset of Mp counters can be used to count events from the processors and floating-point units.
In yet another embodiment, the Mp counters for counting processor events are implemented in a different way than the remaining M−Mp performance counters. One possible implementation for processor performance counters is to allow counting of events from processors and floating-point units that operate at higher frequencies than the rest of the elements in a multiprocessor system. For example, only the Mp performance counters can count events changing at a higher operating frequency, while the remaining M−Mp counters can count only events changing at a lower operating frequency, thus reducing power consumption and allowing for a simpler design.
In yet another embodiment, Mp counters for counting processor events are implemented in the same way as the remaining M−Mp performance counters. For example, all performance counters count events changing at the same operating frequency.
Referring back to
In the preferred embodiment, the performance monitor unit 130 can be accessed from all processors. The multitude of processors 100a, . . . , 100k has access to the performance monitor unit 130 to read out the values of the M performance counters 170. In the preferred embodiment, the multitude of processors 100a, . . . , 100k has access to the performance monitor unit 130 to write and/or clear the performance counters 170. In the preferred embodiment, the set of multiplexers 160 used to select inputs to the M performance counters 170 from the set of all counter events 165 is configured depending on the value written in one or more counter configuration registers 180, which are located in the performance monitor unit (PMU) 130. The multitude of processors 100a, . . . , 100k has access to the performance monitor unit 130 to write to the configuration registers 180 to specify the configuration of the multiplexers 160 for counter event selection.
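As a hedged illustration of such processor access, the sketch below models the counters 170 and the configuration registers 180 as a memory-mapped region. The base address, register offsets, and configuration bit layout are invented for the example only and are not taken from the specification.

```c
#include <stdint.h>

#define PMU_BASE        0xFFFE0000u                                /* assumed MMIO base address     */
#define PMU_COUNTER(i)  (PMU_BASE + 0x000u + (uint32_t)(i) * 8u)   /* counters 170 (assumed layout) */
#define PMU_CONFIG(i)   (PMU_BASE + 0x800u + (uint32_t)(i) * 4u)   /* registers 180 (assumed layout)*/

/* Read the current value of performance counter i. */
static inline uint64_t pmu_read_counter(int i)
{
    return *(volatile uint64_t *)(uintptr_t)PMU_COUNTER(i);
}

/* Clear performance counter i by writing zero to it. */
static inline void pmu_clear_counter(int i)
{
    *(volatile uint64_t *)(uintptr_t)PMU_COUNTER(i) = 0;
}

/* Steer counter i to a given (core, event) pair by writing its counter
 * configuration register; the field widths here are assumptions. */
static inline void pmu_configure_counter(int i, unsigned core, unsigned event)
{
    uint32_t cfg = ((uint32_t)(core & 0xFu) << 8) | (uint32_t)(event & 0xFFu);
    *(volatile uint32_t *)(uintptr_t)PMU_CONFIG(i) = cfg;
}
```

Under these assumptions, a processor wishing to track a given event from a given core on counter 7 would call pmu_configure_counter(7, core, event) and later retrieve the result with pmu_read_counter(7); the event and field encodings are hypothetical.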
In yet another embodiment, only one processor from the multitude of processors has access to the performance monitor unit 130 to read and/or write the performance counters 170.
In yet another embodiment, only a subset of processors from the multitude of processors has access to the performance monitor unit 130 to read and/or write the performance counters 170.
In yet another embodiment, only one processor from the multitude of processors has access to the performance monitor unit 130 to write the counter configuration registers 180.
In yet another embodiment, only a subset of processors from the multitude of processors has access to the performance monitor unit 130 to write the counter configuration registers 180.
To write to or retrieve a value from any of the performance counters 170, the processor accesses circuitry provided in the PMU for performing the write or read transaction. For example,
Depending upon the implementation of the PMU and, particularly, the width (in bits) of the counter, this write access may be performed in one or two write bus transactions. In the example implementation of a PMU as described in the above-referenced, commonly-owned U.S. patent application Ser. No. 11/507,307, now U.S. Pat. No. 7,461,383, incorporated by reference herein, the performance monitor unit is a hybrid performance monitoring unit requiring assembly of a least significant part of the counter stored in discrete registers and a more significant part of the counter stored in a counter memory array. Only after both parts of the counter have been retrieved can a counter value be returned to the requesting processor. Similarly, on a counter write, the written data are split into two parts: the least significant part to be stored in the discrete registers of the counter, and the most significant part of the counter value to be stored in the memory array.
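The read/write splitting just described may be sketched as follows. The 12-bit split point, the counter count, and the array-backed stand-ins for the discrete registers and the counter memory array are assumptions made only for illustration; they are not details of the referenced hybrid counter design.

```c
#include <stdint.h>

#define NUM_COUNTERS 256   /* assumed number of hybrid counters            */
#define LOW_BITS     12    /* assumed width of the discrete-register part  */

/* Stand-ins for the two physical storage structures of a hybrid counter. */
static uint32_t low_regs[NUM_COUNTERS];   /* discrete registers: least significant LOW_BITS bits */
static uint64_t high_array[NUM_COUNTERS]; /* counter memory array: remaining most significant bits */

/* A read is complete only after both parts have been retrieved and joined. */
uint64_t hybrid_counter_read(int i)
{
    uint64_t hi = high_array[i];
    uint64_t lo = low_regs[i];
    return (hi << LOW_BITS) | lo;
}

/* A write splits the value: the low bits go to the discrete registers and
 * the remaining high bits go to the memory array. */
void hybrid_counter_write(int i, uint64_t value)
{
    low_regs[i]   = (uint32_t)(value & ((1u << LOW_BITS) - 1u));
    high_array[i] = value >> LOW_BITS;
}
```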
In yet another embodiment of the invention, counter configuration registers 180 are not located within the performance monitor unit 130, but are located within one or more other units.
In another embodiment, all performance counters in the system are contained in the performance monitor unit.
In yet another embodiment, the processors in a multiprocessor system include one or more local performance counters within each processor, in addition to the performance counters located in the performance monitor unit. The local counters in this embodiment are used only by the local processor. The unified performance monitor unit is shared amongst the processors as described in this invention.
Referring now to
Referring now to
In accordance with the present invention, each said group of processor or non-processor units has a performance monitor unit 275a, . . . , 275f, shared only between the units in that functional group and counting only performance events generated within that unit group.
It is to be understood that the number and type of units in a functional group can vary. For example, a group can contain both processor and non-processor elements.
It is further understood that other configurations are possible, e.g., different functional unit groups can contain the same or different number of all processor or all non-processor elements, or different functional unit groups can contain the same or different number of some combination of processor and non-processor functional units.
Referring now to
In accordance with the present invention, each said group of processor or non-processor units has a performance monitor unit 350a, . . . , 350f, shared only between the units in that functional group. In addition to the group performance monitor units 350a, . . . , 350f, there is a central performance monitor control unit 360. The central PM control unit 360 contains control or configuration information for programming each performance monitor unit 350a, . . . , 350f of each group. In another embodiment, the central PMU control unit 360 is capable of accessing counter value information from all group PMUs. In yet another embodiment, the central PMU control unit 360 can be accessed by only one, or some set, of the processors located in the computer system. In yet another embodiment, the central PMU control unit 360 can be accessed by all processors located in the computer system.
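One possible, purely illustrative sketch of the central PM control unit 360 distributing its configuration image to the group performance monitor units is shown below. The structure names, the number of groups and counters, and the write_group_pmu_config stand-in are assumptions for the example, not details of the specification.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_GROUPS         6    /* assumed number of functional groups */
#define COUNTERS_PER_GROUP 64   /* assumed counters per group PMU      */

/* Assumed per-counter selection held by the central PM control unit. */
struct counter_select {
    uint8_t unit;   /* unit within the group whose events are observed */
    uint8_t event;  /* event line selected for this counter            */
};

/* Configuration image kept in the central control unit, one row per group PMU. */
static struct counter_select central_cfg[NUM_GROUPS][COUNTERS_PER_GROUP];

/* Stand-in for the transaction that writes one counter's configuration
 * register in a given group PMU (e.g., over the on-chip interconnect). */
static void write_group_pmu_config(int group, int counter, struct counter_select sel)
{
    printf("group %d, counter %d -> unit %u, event %u\n",
           group, counter, sel.unit, sel.event);
}

/* Push the stored configuration out to every group performance monitor unit. */
void central_pmu_distribute(void)
{
    for (int g = 0; g < NUM_GROUPS; g++)
        for (int c = 0; c < COUNTERS_PER_GROUP; c++)
            write_group_pmu_config(g, c, central_cfg[g][c]);
}
```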
Via this design methodology, a single shared counter resource may be shared by all cores. In another embodiment, multiple shared counter resources are available, and each core is connected to one resource. In yet another embodiment, multiple shared counter resources are available, and each core is connected to multiple counter resources.
While there has been shown and described what is considered to be preferred embodiments of the invention, it will, of course, be understood that various modifications and changes in form or detail could readily be made without departing from the spirit of the invention. It is therefore intended that the invention not be limited to the exact forms described and illustrated, but should be construed to cover all modifications that may fall within the scope of the appended claims.
The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. B548850 awarded by the Department of Energy.
Number | Name | Date | Kind |
---|---|---|---|
4777595 | Strecker et al. | Oct 1988 | A |
5063562 | Barzilai et al. | Nov 1991 | A |
5142422 | Zook et al. | Aug 1992 | A |
5349587 | Nadeau-Dostie et al. | Sep 1994 | A |
5353412 | Douglas et al. | Oct 1994 | A |
5452432 | Macachor | Sep 1995 | A |
5524220 | Verma et al. | Jun 1996 | A |
5615135 | Waclawsky et al. | Mar 1997 | A |
5634007 | Calta et al. | May 1997 | A |
5659710 | Sherman et al. | Aug 1997 | A |
5708779 | Graziano et al. | Jan 1998 | A |
5761464 | Hopkins | Jun 1998 | A |
5796735 | Miller et al. | Aug 1998 | A |
5809278 | Watanabe et al. | Sep 1998 | A |
5825748 | Barkey et al. | Oct 1998 | A |
5890211 | Sokolov et al. | Mar 1999 | A |
5917828 | Thompson | Jun 1999 | A |
6023732 | Moh et al. | Feb 2000 | A |
6061511 | Marantz et al. | May 2000 | A |
6072781 | Feeney et al. | Jun 2000 | A |
6112318 | Jouppi et al. | Aug 2000 | A |
6122715 | Palanca et al. | Sep 2000 | A |
6185214 | Schwartz et al. | Feb 2001 | B1 |
6219300 | Tamaki | Apr 2001 | B1 |
6263397 | Wu et al. | Jul 2001 | B1 |
6295571 | Scardamalia et al. | Sep 2001 | B1 |
6311249 | Min et al. | Oct 2001 | B1 |
6324495 | Steinman | Nov 2001 | B1 |
6356106 | Greeff et al. | Mar 2002 | B1 |
6366984 | Carmean et al. | Apr 2002 | B1 |
6442162 | O'Neill et al. | Aug 2002 | B1 |
6466227 | Pfister et al. | Oct 2002 | B1 |
6564331 | Joshi | May 2003 | B1 |
6594234 | Chard et al. | Jul 2003 | B1 |
6598123 | Anderson et al. | Jul 2003 | B1 |
6601144 | Arimilli et al. | Jul 2003 | B1 |
6631447 | Morioka et al. | Oct 2003 | B1 |
6647428 | Bannai et al. | Nov 2003 | B1 |
6662305 | Salmon et al. | Dec 2003 | B1 |
6718403 | Davidson et al. | Apr 2004 | B2 |
6735174 | Hefty et al. | May 2004 | B1 |
6775693 | Adams | Aug 2004 | B1 |
6799232 | Wang | Sep 2004 | B1 |
6880028 | Kurth | Apr 2005 | B2 |
6889266 | Stadler | May 2005 | B1 |
6894978 | Hashimoto | May 2005 | B1 |
6954887 | Wang et al. | Oct 2005 | B2 |
6986026 | Roth et al. | Jan 2006 | B2 |
7007123 | Golla et al. | Feb 2006 | B2 |
7058826 | Fung | Jun 2006 | B2 |
7065594 | Ripy et al. | Jun 2006 | B2 |
7143219 | Chaudhari et al. | Nov 2006 | B1 |
7191373 | Wang et al. | Mar 2007 | B2 |
7239565 | Liu | Jul 2007 | B2 |
7280477 | Jeffries et al. | Oct 2007 | B2 |
7298746 | De La Iglesia et al. | Nov 2007 | B1 |
7363629 | Springer et al. | Apr 2008 | B2 |
7373420 | Lyon | May 2008 | B1 |
7401245 | Fischer et al. | Jul 2008 | B2 |
7454640 | Wong | Nov 2008 | B1 |
7454641 | Connor et al. | Nov 2008 | B2 |
7461236 | Wentzlaff | Dec 2008 | B1 |
7461383 | Gara et al. | Dec 2008 | B2 |
7463529 | Matsubara | Dec 2008 | B2 |
7539845 | Wentzlaff et al. | May 2009 | B1 |
7613971 | Asaka | Nov 2009 | B2 |
7620791 | Wentzlaff et al. | Nov 2009 | B1 |
7698581 | Oh | Apr 2010 | B2 |
7996839 | Farkas et al. | Aug 2011 | B2 |
20010055323 | Rowett et al. | Dec 2001 | A1 |
20020078420 | Roth et al. | Jun 2002 | A1 |
20020087801 | Bogin et al. | Jul 2002 | A1 |
20020100020 | Hunter et al. | Jul 2002 | A1 |
20020129086 | Garcia-Luna-Aceves et al. | Sep 2002 | A1 |
20020138801 | Wang et al. | Sep 2002 | A1 |
20020156979 | Rodriguez | Oct 2002 | A1 |
20020184159 | Tadayon et al. | Dec 2002 | A1 |
20030007457 | Farrell et al. | Jan 2003 | A1 |
20030028749 | Ishikawa et al. | Feb 2003 | A1 |
20030050714 | Tymchenko | Mar 2003 | A1 |
20030050954 | Tayyar et al. | Mar 2003 | A1 |
20030074616 | Dorsey | Apr 2003 | A1 |
20030105799 | Khan et al. | Jun 2003 | A1 |
20030163649 | Kapur et al. | Aug 2003 | A1 |
20030177335 | Luick | Sep 2003 | A1 |
20030188053 | Tsai | Oct 2003 | A1 |
20030235202 | Van Der Zee et al. | Dec 2003 | A1 |
20040003184 | Safranek et al. | Jan 2004 | A1 |
20040019730 | Walker et al. | Jan 2004 | A1 |
20040024925 | Cypher et al. | Feb 2004 | A1 |
20040073780 | Roth et al. | Apr 2004 | A1 |
20040103218 | Blumrich et al. | May 2004 | A1 |
20040210694 | Shenderovich | Oct 2004 | A1 |
20040243739 | Spencer | Dec 2004 | A1 |
20050007986 | Malladi et al. | Jan 2005 | A1 |
20050053057 | Deneroff et al. | Mar 2005 | A1 |
20050076163 | Malalur | Apr 2005 | A1 |
20050160238 | Steely et al. | Jul 2005 | A1 |
20050216613 | Ganapathy et al. | Sep 2005 | A1 |
20050251613 | Kissell | Nov 2005 | A1 |
20050262333 | Gat | Nov 2005 | A1 |
20050270886 | Takashima | Dec 2005 | A1 |
20050273564 | Lakshmanamurthy et al. | Dec 2005 | A1 |
20060050737 | Hsu | Mar 2006 | A1 |
20060080513 | Beukema et al. | Apr 2006 | A1 |
20060168170 | Korzeniowski | Jul 2006 | A1 |
20060206635 | Alexander et al. | Sep 2006 | A1 |
20060248367 | Fischer et al. | Nov 2006 | A1 |
20070055832 | Beat | Mar 2007 | A1 |
20070094455 | Butt et al. | Apr 2007 | A1 |
20070133536 | Kim et al. | Jun 2007 | A1 |
20070168803 | Wang et al. | Jul 2007 | A1 |
20070174529 | Rodriguez et al. | Jul 2007 | A1 |
20070195774 | Sherman et al. | Aug 2007 | A1 |
20080040634 | Matsuzaki et al. | Feb 2008 | A1 |
20080114873 | Chakravarty et al. | May 2008 | A1 |
20080147987 | Cantin et al. | Jun 2008 | A1 |
Number | Date | Country | |
---|---|---|
20090007134 A1 | Jan 2009 | US |