The Compact Disc Appendix, which is a part of the present disclosure, includes a recordable Compact Disc (CD-R) containing information that is part of the disclosure of the present patent document. A portion of the disclosure of this patent document contains material that is subject to copyright protection. All the material on the Compact Disc is hereby expressly incorporated by reference into the present application. The copyright owner of that material has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:
In the illustrated particular embodiment, NID 3 includes an application specific integrated circuit (ASIC) 9, an amount of dynamic random access memory (DRAM) 10, and Physical Layer Interface (PHY) circuitry 11. NID 3 includes specialized protocol accelerating hardware for implementing “fast-path” processing whereby certain types of network communications are accelerated in comparison to “slow-path” processing whereby the remaining types of network communications are handled at least in part by a software protocol processing stack. In one embodiment, the certain types of network communications accelerated are TCP/IP communications. The embodiment of NID 3 illustrated in
For additional information on examples of a network interface device (sometimes called an Intelligent Network Interface Card or “INIC”), see: U.S. Pat. No. 6,247,060; U.S. Pat. No. 6,226,680; Published U.S. Patent Application No. 20010021949; Published U.S. Patent Application No. 20010027496; and Published U.S. Patent Application No. 20010047433 (the contents of each of the above-identified patents and published patent applications is incorporated herein by reference). System 1 of
NID 3 includes Media Access Control circuitry 12, three processors 13-15, a pair of Content Addressable Memories (CAMs) 16 and 17, an amount of Static Random Access Memory (SRAM) 18, queue manager circuitry 19, a receive processor 20, and a transmit sequencer 21. Receive processor 20 executes code stored in its own control store 22.
In some embodiments where NID 3 fully offloads or substantially fully offloads CPU 4 of the task of performing TCP/IP protocol processing, NID 3 includes a processor 23. Processor 23 may, for example, be a general purpose microprocessor. Processor 23 performs slow-path processing such as TCP error condition handling and exception condition handling. In some embodiments, processor 23 also performs higher layer protocol processing such as, for example, iSCSI layer protocol processing such that NID 3 offloads CPU 4 of all iSCSI protocol processing tasks. In the example of
Overview of One Embodiment of a Fast-Path Receive Path:
Operation of NID 3 is now described in connection with the receipt onto NID 3 of a TCP/IP packet from network 2. DRAM 10 is initially partitioned to include a plurality of buffers. Receive processor 20 uses the buffers in DRAM 10 to store incoming network packet data as well as status information for the packet. For each buffer, a 32-bit buffer descriptor is created. Each 32-bit buffer descriptor indicates the size of the associated buffer and the location in DRAM of the associated buffer. The location is indicated by a 19-bit pointer.
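The following C sketch models such a 32-bit buffer descriptor. Only the field widths above (a 19-bit DRAM pointer plus a size field) are given in the text; the exact bit positions chosen here are assumptions for illustration.

    /* Hypothetical layout of a 32-bit DRAM buffer descriptor: a 19-bit
     * pointer to the buffer in the low bits, with the buffer size encoded
     * in the remaining bits.  Bit positions are assumed. */
    #include <stdint.h>

    #define BUF_PTR_BITS 19
    #define BUF_PTR_MASK ((1u << BUF_PTR_BITS) - 1u)

    typedef uint32_t buf_desc_t;

    static inline buf_desc_t make_buf_desc(uint32_t dram_ptr, uint32_t size_code)
    {
        return (size_code << BUF_PTR_BITS) | (dram_ptr & BUF_PTR_MASK);
    }

    static inline uint32_t buf_desc_ptr(buf_desc_t d)  { return d & BUF_PTR_MASK; }
    static inline uint32_t buf_desc_size(buf_desc_t d) { return d >> BUF_PTR_BITS; }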
At start time, the buffer descriptors for the free buffers are pushed onto a “free-buffer queue” 24. This is accomplished by writing the buffer descriptors to queue manager 19. Queue manager 19 maintains multiple queues including the “free-buffer queue” 24. In this implementation, the heads and tails of the various queues are located in SRAM 18, whereas the middle portions of the queues are located in DRAM 10.
The TCP/IP packet is received from the network 2 via Physical Layer Interface (PHY) circuitry 11 and MAC circuitry 12. As the MAC circuitry 12 processes the packet, the MAC circuitry 12 verifies checksums in the packet and generates “status” information. After all the packet data has been received, the MAC circuitry 12 generates “final packet status” (MAC packet status). The status information (also called “protocol analyzer status”) and the MAC packet status information are then transferred to a free one of the DRAM buffers obtained from the free-buffer queue 24. The status information and MAC packet status information are stored prepended to the associated data in the buffer.
After all packet data has been transferred to the free DRAM buffer, receive processor 20 pushes a “receive packet descriptor” (also called a “summary”) onto a “receive packet descriptor” queue 25. The “receive packet descriptor” includes a 14-bit hash value, the buffer descriptor, a buffer load-count, the MAC ID, and a status bit (also called an “attention bit”). The 14-bit hash value was previously generated by the receive processor 20 (from the TCP and IP source and destination addresses) as the packet was received. If the “attention bit” of the receive packet descriptor is a one, then the packet is not a “fast-path candidate”; whereas if the attention bit is a zero, then the packet is a “fast-path candidate”. In the present example of a TCP/IP offload engine, the attention bit being a zero indicates that the packet employs both the TCP protocol and the IP protocol.
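A C sketch of the fields carried in the “receive packet descriptor” is shown below. The 14-bit hash and the attention bit follow the description above; the widths assumed for the buffer load-count and MAC ID fields are illustrative only.

    /* Sketch of a "receive packet descriptor" (summary). */
    #include <stdint.h>

    struct rcv_pkt_desc {
        uint32_t     buf_desc;        /* 32-bit buffer descriptor of the DRAM buffer         */
        unsigned int hash      : 14;  /* hash of the TCP and IP source/destination addresses */
        unsigned int attention : 1;   /* 1 = not a fast-path candidate                       */
        uint8_t      load_count;      /* buffer load-count (width assumed)                   */
        uint8_t      mac_id;          /* MAC ID (width assumed)                              */
    };

    /* A packet is a fast-path candidate only when the attention bit is zero. */
    static inline int is_fast_path_candidate(const struct rcv_pkt_desc *d)
    {
        return d->attention == 0;
    }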
Once the “receive packet descriptor” (including the buffer descriptor that points to the DRAM buffer where the data is stored) has been placed in the “receive packet descriptor” queue 25 and the packet data has been placed in the associated DRAM buffer, one of the processors 13 and 14 can retrieve the “receive packet descriptor” from the “receive packet descriptor” queue 25 and examine the “attention bit”.
If the attention bit is a digital one, then the processor determines that the packet is not a “fast-path candidate” and the packet is handled in “slow-path”. In one embodiment where the packet is a TCP/IP packet, wherein the attention bit indicates the packet is not a “fast-path candidate”, and where NID 3 performs full offload TCP/IP functions, general purpose processor 23 performs further protocol processing on the packet (headers and data). In another embodiment where there is no general purpose processor 23 and where NID 3 performs partial TCP/IP functions, the entire packet (headers and data) is transferred from the DRAM buffer and across host bus 6 such that CPU 4 performs further protocol processing on the packet.
If, on the other hand, the attention bit is a zero, then the processor determines that the packet is a “fast-path candidate”. If the processor determines that the packet is a “fast-path candidate”, then the processor uses the buffer descriptor from the “receive packet descriptor” to initiate a DMA transfer of the first approximately 96 bytes of information from the pointed to buffer in DRAM 10 into a portion of SRAM 18 so that the processor can examine it. This first approximately 96 bytes contains the IP source address of the IP header, the IP destination address of the IP header, the TCP source address (“TCP source port”) of the TCP header, and the TCP destination address (“TCP destination port”) of the TCP header. The IP source address of the IP header, the IP destination address of the IP header, the TCP source address of the TCP header, and the TCP destination address of the TCP header together uniquely define a single “connection context” with which the packet is associated.
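The four fields that define the connection context can be modeled as the tuple below. The document does not give the exact hash function used by the receive processor, so the 14-bit fold shown here is an illustrative assumption only.

    #include <stdint.h>

    struct conn_tuple {
        uint32_t ip_src;         /* IP source address                        */
        uint32_t ip_dst;         /* IP destination address                   */
        uint16_t tcp_src_port;   /* TCP source address ("TCP source port")   */
        uint16_t tcp_dst_port;   /* TCP destination address ("TCP dest port") */
    };

    /* Illustrative 14-bit hash of the connection tuple (actual function not specified). */
    static inline uint16_t hash14(const struct conn_tuple *t)
    {
        uint32_t h = t->ip_src ^ t->ip_dst ^
                     ((uint32_t)t->tcp_src_port << 16) ^ t->tcp_dst_port;
        h ^= h >> 16;                    /* fold the 32-bit mix ...          */
        return (uint16_t)(h & 0x3FFF);   /* ... down to a 14-bit hash value  */
    }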
While this DMA transfer from DRAM to SRAM is occurring, the processor uses the 14-bit hash from the “receive packet descriptor” to identify the connection context of the packet and to determine whether the connection context is one of a plurality of connection contexts that are under the control of NID 3. The hash points to one hash bucket in a hash table 104 in SRAM 18. In the diagram of
If the connection context is determined not to be one of the contexts under the control of NID 3, then the “fast-path candidate” packet is determined not to be an actual “fast-path packet.” In one embodiment where NID 3 includes general purpose processor 23 and where NID 3 performs full TCP/IP offload functions, processor 23 performs further TCP/IP protocol processing on the packet. In another embodiment where NID 3 performs partial TCP/IP offload functions, the entire packet (headers and data) is transferred across host bus 6 for further TCP/IP protocol processing by the sequential protocol processing stack of CPU 4.
If, on the other hand, the connection context is one of the connection contexts under control of NID 3, then software executed by the processor (13 or 14) checks for one of numerous exception conditions and determines whether the packet is a “fast-path packet” or is not a “fast-path packet”. These exception conditions include: 1) IP fragmentation is detected; 2) an IP option is detected; 3) an unexpected TCP flag (urgent bit set, reset bit set, SYN bit set or FIN bit set) is detected; 4) the ACK field in the TCP header shrinks the TCP window; 5) the ACK field in the TCP header is a duplicate ACK and the ACK field exceeds the duplicate ACK count (the duplicate ACK count is a user settable value); and 6) the sequence number of the TCP header is out of order (packet is received out of sequence).
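A hedged C sketch of these exception tests follows. The header-derived flags are placeholders, and condition 5 is interpreted here as the duplicate-ACK count exceeding the user-settable limit mentioned above.

    #include <stdbool.h>
    #include <stdint.h>

    struct ip_hdr_info  { bool fragmented; bool has_options; };
    struct tcp_hdr_info {
        bool     urg, rst, syn, fin;   /* unexpected TCP flags                  */
        bool     ack_shrinks_window;   /* ACK field shrinks the TCP window      */
        bool     duplicate_ack;        /* ACK field is a duplicate ACK          */
        uint32_t dup_ack_count;        /* running duplicate ACK count           */
        bool     out_of_order_seq;     /* sequence number received out of order */
    };

    static bool is_fast_path_packet(const struct ip_hdr_info *ip,
                                    const struct tcp_hdr_info *tcp,
                                    uint32_t dup_ack_limit)   /* user settable */
    {
        if (ip->fragmented)                                  return false;  /* 1 */
        if (ip->has_options)                                 return false;  /* 2 */
        if (tcp->urg || tcp->rst || tcp->syn || tcp->fin)    return false;  /* 3 */
        if (tcp->ack_shrinks_window)                         return false;  /* 4 */
        if (tcp->duplicate_ack &&
            tcp->dup_ack_count > dup_ack_limit)              return false;  /* 5 */
        if (tcp->out_of_order_seq)                           return false;  /* 6 */
        return true;   /* no exception condition: an actual "fast-path packet" */
    }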
If the software executed by the processor (13 or 14) detects an exception condition, then the processor determines that the “fast-path candidate” is not a “fast-path packet.” In such a case, the connection context for the packet is “flushed” (control of the connection context is passed back to the stack) so that the connection context is no longer present in the list of connection contexts under control of NID 3. If NID 3 is a full TCP/IP offload device including general purpose processor 23, then general purpose processor 23 performs further TCP/IP processing on the packet. In other embodiments where NID 3 performs partial TCP/IP offload functions and NID 3 includes no general purpose processor 23, the entire packet (headers and data) is transferred across host bus 6 to CPU 4 for further “slow-path” protocol processing.
If, on the other hand, the processor (13 or 14) finds no such exception condition, then the “fast-path candidate” packet is determined to be an actual “fast-path packet”. The processor executes a software state machine such that the packet is processed in accordance with the IP and TCP protocols. The data portion of the packet is then DMA transferred to a destination identified by another device or processor. In the present example, the destination is located in storage 5 and the destination is identified by a file system controlled by CPU 4. CPU 4 does no or very little analysis of the TCP and IP headers on this “fast-path packet”. All or substantially all analysis of the TCP and IP headers of the “fast-path packet” is done on NID 3.
Description of a TCB Lookup Method:
As set forth above, information for each connection context under the control of NID 3 is stored in a block called a “Transmit Control Block” (TCB). An incoming packet is analyzed to determine whether it is associated with a connection context that is under the control of NID 3. If the packet is associated with a connection context under the control of NID 3, then a TCB lookup method is employed to find the TCB for the connection context. This lookup method is described in further detail in connection with
NID 3 is a multi-receive processor network interface device. In NID 3, up to sixteen different incoming packets can be in process at the same time by two processors 13 and 14. (Processor 15 is a utility processor, but each of processors 13 and 14 can perform receive processing or transmit processing.) A processor executes a software state machine to process the packet. As the packet is processed, the state machine transitions from state to state. One of the processors, for example processor 13, can work on one of the packets being received until it reaches a stopping point. Processor 13 then stops work and stores the state of the software state machine. This stored state is called a “processor context”. Then, at some later time, either the same processor 13 or the other processor 14 may resume processing on the packet. In the case where the other processor 14 resumes processing, processor 14 retrieves the prior state of the state machine from the previous “processor context”, loads this state information into its software state machine, and then continues processing the packet through the state machine from that point. In this way, up to sixteen different flows can be processed by the two processors 13 and 14 working in concert.
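The “processor context” mechanism can be sketched in C as a small table of saved state-machine states, one per context, that either processor may later reload. The state fields shown are placeholders.

    #include <stdint.h>

    #define NUM_PROC_CONTEXTS 16

    struct proc_context {
        uint8_t  in_use;
        uint32_t sm_state;      /* saved state of the software state machine */
        uint32_t pkt_handle;    /* which in-process packet this context holds */
    };

    static struct proc_context ctx_table[NUM_PROC_CONTEXTS];

    /* A processor suspends work on a packet: store the state machine state. */
    static void context_suspend(uint8_t ctx_id, uint32_t sm_state, uint32_t pkt)
    {
        ctx_table[ctx_id].sm_state   = sm_state;
        ctx_table[ctx_id].pkt_handle = pkt;
        ctx_table[ctx_id].in_use     = 1;
    }

    /* The same or the other processor resumes: reload the state and continue. */
    static uint32_t context_resume(uint8_t ctx_id, uint32_t *pkt)
    {
        *pkt = ctx_table[ctx_id].pkt_handle;
        return ctx_table[ctx_id].sm_state;
    }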
In this example, the TCB lookup method starts after the TCP packet has been received, after the 14-bit hash and the attention bit have been generated, and after the hash and attention bit have been pushed in the form of a “receive packet descriptor” onto the “receive packet descriptor queue”.
In a first step (step 200), one of processors 13 or 14 obtains an available “processor context”. The processor pops (step 201) the “receive packet descriptor” queue 25 to obtain the “receive packet descriptor”. The “receive packet descriptor” contains the previously-described 14-bit hash value 101 (see
If the attention bit is set (step 202), then processing proceeds to slow-path processing. As set forth above, if NID 3 is a TCP/IP full-offload device and if the packet is a TCP/IP packet, then further TCP/IP processing is performed by general purpose processor 23. As set forth above, if NID 3 is a TCP/IP partial offload device, then the packet is sent across host bus 6 for further protocol processing by CPU 4.
If, on the other hand, the attention bit is not set (step 203), then the processor initiates a DMA transfer of the beginning part of the packet (including the header) from the identified buffer in DRAM 10 to SRAM 18. 14-bit hash value 101 (see
If the hash bucket is in the SRAM hash table 104 (step 205), then processing is suspended until the DMA transfer of the header from DRAM to SRAM is complete.
If, on the other hand, the hash bucket is not in the SRAM hash table 104 (step 206), then a queue (Q_FREEHASHSLOTS) identifying free rows in hash table 104 is accessed (the queue is maintained by queue manager 19) and a free hash bucket row (sometimes called a “slot”) is obtained. The processor then causes the hash bucket to be copied or moved from DRAM and into the free hash bucket row. Once the hash bucket is present in SRAM hash table 104, the processor updates the pointer field in the associated hash byte to indicate that the hash bucket is now in SRAM and is located at the row now containing the hash bucket.
Once the pointed to hash bucket is in SRAM hash table 104, the up to four possible hash bucket entries in the hash bucket are searched one by one (step 207) to identify if the TCP and IP fields of an entry match the TCP and IP fields of the packet header 106 (the TCP and IP fields from the packet header were obtained from the receive descriptor).
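The bucket search can be sketched in C as below: each bucket holds up to four entries pairing the connection's TCP and IP fields with a TCB number. The structure layout is illustrative.

    #include <stdint.h>

    #define ENTRIES_PER_BUCKET 4

    struct conn_key { uint32_t ip_src, ip_dst; uint16_t tcp_src_port, tcp_dst_port; };

    struct hash_entry {
        uint8_t         valid;
        struct conn_key key;       /* TCP and IP fields of the connection */
        uint16_t        tcb_num;   /* TCB# of the connection context      */
    };

    struct hash_bucket { struct hash_entry e[ENTRIES_PER_BUCKET]; };

    /* Returns the TCB number on a match (step 209), or -1 when no entry
     * matches and the packet must take the slow path (step 208). */
    static int bucket_lookup(const struct hash_bucket *b, const struct conn_key *pkt)
    {
        for (int i = 0; i < ENTRIES_PER_BUCKET; i++) {
            const struct hash_entry *e = &b->e[i];
            if (e->valid &&
                e->key.ip_src == pkt->ip_src &&
                e->key.ip_dst == pkt->ip_dst &&
                e->key.tcp_src_port == pkt->tcp_src_port &&
                e->key.tcp_dst_port == pkt->tcp_dst_port)
                return e->tcb_num;
        }
        return -1;
    }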
In the example of
If all of the entries in the hash bucket are searched and a match is not found (step 208), then processing proceeds by the slow-path. If, on the other hand, a match is found (step 209), then the TCB# portion 108 of the matching entry identifies the TCB of the connection context.
NID 3 supports both fast-path receive processing and fast-path transmit processing. A TCP/IP connection can involve bidirectional communications in that packets might be transmitted out of NID 3 on the same TCP/IP connection that other packets flow into NID 3. A mechanism is provided so that the context for a connection can be “locked” by one processor (for example, a processor receiving a packet on the TCP/IP connection) so that another processor (for example, a processor transmitting a packet on the same TCP/IP connection) will not interfere with the connection context. This mechanism includes two bits for each of the up to 8192 connections that can be controlled by NID 3: 1) a “TCB lock bit” (SRAM_tcblock), and 2) a “TCB in-use bit” (SRAM_tcbinuse). The “TCB lock bits” 109 and the “TCB in-use bits” 110 are maintained in SRAM 18.
The processor attempts to lock the designated TCB (step 210) by attempting to set the TCB's lock bit. If the lock bit indicates that the TCB is already locked, then the processor context number (a 4-bit number) is pushed onto a linked list of waiting processor contexts for that TCB. Because there are sixteen possible processor contexts, a lock table 112 is maintained in SRAM 18. There is one row in lock table 112 for each of the sixteen possible processor contexts. Each row has sixteen four-bit fields. Each field can contain the 4-bit processor context number for a waiting processor context. Each row of the lock table 112 is sixteen entries wide because all sixteen processor contexts may be working on or waiting for the same TCB.
If the lock bit indicates that the TCB is already locked (step 211), then the processor context number (a four-bit number because there can be up to sixteen processor contexts) is pushed onto the row of the lock table 112 associated with the TCB. A lock table content addressable memory (CAM) 111 is used to translate the TCB number (from TCB field 108) into the row number in lock table 112 where the linked list for that TCB number is found. Accordingly, lock table CAM 111 receives a sixteen-bit TCB number and outputs a four-bit row number. When the processor context that has the TCB locked is ready to suspend itself, it consults the lock table CAM 111 and the associated lock table 112 to determine if there is another processor context waiting for the TCB. If there is another processor context waiting (there is an entry in the associated row of lock table 112), then it restarts the first (oldest) of the waiting processor contexts in the linked list. The restarted processor context is then free to lock the TCB and continue processing.
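The locking and waiter-queuing behavior described above is sketched in C below: one lock bit per TCB, a lock table of up to sixteen 4-bit waiting context numbers per row, and the lock table CAM modeled as a simple row search. The interface is illustrative, not the microcode itself.

    #include <stdint.h>

    #define MAX_TCBS        8192
    #define LOCK_TABLE_ROWS 16
    #define ROW_ENTRIES     16

    static uint8_t  tcb_lock_bit[MAX_TCBS];                    /* SRAM_tcblock            */
    static uint16_t lock_cam[LOCK_TABLE_ROWS];                 /* TCB# mapped to a row    */
    static uint8_t  lock_cam_valid[LOCK_TABLE_ROWS];
    static uint8_t  lock_table[LOCK_TABLE_ROWS][ROW_ENTRIES];  /* waiting context numbers */
    static uint8_t  lock_waiters[LOCK_TABLE_ROWS];

    /* Attempt to lock a TCB (step 210).  Returns 1 if the lock was taken; returns 0
     * after queuing the processor context number when already locked (step 211). */
    static int tcb_try_lock(uint16_t tcb, uint8_t ctx)
    {
        if (!tcb_lock_bit[tcb]) {
            tcb_lock_bit[tcb] = 1;
            return 1;
        }
        int free_row = -1;
        for (int row = 0; row < LOCK_TABLE_ROWS; row++) {
            if (lock_cam_valid[row] && lock_cam[row] == tcb) {
                lock_table[row][lock_waiters[row]++] = ctx;    /* join the waiting list */
                return 0;
            }
            if (!lock_cam_valid[row] && free_row < 0)
                free_row = row;
        }
        if (free_row >= 0) {            /* first waiter for this TCB: claim a row */
            lock_cam[free_row] = tcb;
            lock_cam_valid[free_row] = 1;
            lock_table[free_row][lock_waiters[free_row]++] = ctx;
        }
        return 0;
    }

    /* Release a TCB: clear the lock bit and restart the oldest waiting context,
     * which is then free to lock the TCB itself.  Returns the restarted context
     * number, or -1 if no context was waiting. */
    static int tcb_unlock(uint16_t tcb)
    {
        tcb_lock_bit[tcb] = 0;
        for (int row = 0; row < LOCK_TABLE_ROWS; row++) {
            if (lock_cam_valid[row] && lock_cam[row] == tcb && lock_waiters[row]) {
                uint8_t next = lock_table[row][0];             /* oldest waiter first */
                for (int i = 1; i < lock_waiters[row]; i++)
                    lock_table[row][i - 1] = lock_table[row][i];
                if (--lock_waiters[row] == 0)
                    lock_cam_valid[row] = 0;
                return next;
            }
        }
        return -1;
    }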
If, on the other hand, the TCB is not already locked, then the processor context locks the TCB by setting the associated TCB lock bit 109. The processor context then supplies the TCB number (sixteen bits) to an IN SRAM CAM 113 (step 212) to determine if the TCB is in one of thirty-two TCB slots 114 in SRAM 18. (Up to thirty-two TCBs are cached in SRAM, whereas a copy of all “in-use” TCBs is kept in DRAM). The IN SRAM CAM 113 outputs a sixteen-bit value, five bits of which point to one of the thirty-two possible TCB slots 114 in SRAM 18. One of the bits is a “found” bit.
If the “found” bit indicates that the TCB is “found”, then the five bits are a number from one to thirty-two that points to a TCB slot in SRAM 18 where the TCB is cached. The TCB has therefore been identified in SRAM 18, and fast-path receive processing continues (step 213).
If, on the other hand, the “found” bit indicates that the TCB is not found, then the TCB is not cached in SRAM 18. All TCBs 115 under control of NID 3 are, however, maintained in DRAM 10. The information in the appropriate TCB slot in DRAM 10 is then written over one of the thirty-two TCB slots 114 in SRAM 18. In the event that one of the SRAM TCB slots is empty, then the TCB information from DRAM 10 is DMA transferred into that free SRAM slot. If there is no free SRAM TCB slot, then the least-recently-used TCB slot in SRAM 18 is overwritten.
Once the TCB is located in SRAM cache 114, the IN SRAM CAM 113 is updated to indicate that the TCB is now located in SRAM at a particular slot. The slot number is therefore written into the IN SRAM CAM 113. Fast-path receive processing then continues (step 216).
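The 32-slot SRAM TCB cache fronted by the IN SRAM CAM can be modeled as below. The TCB size, the software CAM search, and the memcpy standing in for the DMA transfer are assumptions; write-back of a modified TCB is addressed in the following paragraph.

    #include <stdint.h>
    #include <string.h>

    #define TCB_SLOTS 32
    #define TCB_SIZE  256               /* bytes per TCB: assumed value */

    struct tcb_slot {
        uint8_t  valid;
        uint16_t tcb_num;               /* which TCB occupies this slot */
        uint32_t last_use;              /* for least-recently-used replacement */
        uint8_t  data[TCB_SIZE];
    };

    static struct tcb_slot sram_tcb[TCB_SLOTS];
    static uint8_t  dram_tcb[8192][TCB_SIZE];   /* all in-use TCBs are kept in DRAM */
    static uint32_t use_clock;

    /* Returns the SRAM slot holding the TCB, loading it from DRAM on a miss. */
    static int tcb_cache_lookup(uint16_t tcb_num)
    {
        int victim = 0;
        for (int s = 0; s < TCB_SLOTS; s++) {          /* IN SRAM CAM search        */
            if (sram_tcb[s].valid && sram_tcb[s].tcb_num == tcb_num) {
                sram_tcb[s].last_use = ++use_clock;    /* "found": step 213 follows */
                return s;
            }
            if (!sram_tcb[s].valid)
                victim = s;                            /* prefer an empty slot      */
            else if (sram_tcb[victim].valid &&
                     sram_tcb[s].last_use < sram_tcb[victim].last_use)
                victim = s;                            /* otherwise track the LRU   */
        }
        /* Miss: copy the TCB from DRAM over the chosen slot and update the CAM. */
        memcpy(sram_tcb[victim].data, dram_tcb[tcb_num], TCB_SIZE);
        sram_tcb[victim].valid    = 1;
        sram_tcb[victim].tcb_num  = tcb_num;
        sram_tcb[victim].last_use = ++use_clock;
        return victim;                                 /* step 216 continues        */
    }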
When a processor context releases control of a TCB, it is not always necessary for the TCB information in SRAM 18 to be written to DRAM to update the version of the TCB in DRAM. If, for example, the TCB is a commonly used TCB and the TCB will be used again in the near future by the next processor context, then the next processor context can use the updated TCB in SRAM without the updated TCB having to have been written to DRAM and then having to be transferred back from DRAM to SRAM. Avoiding this unnecessary transferring of the TCB is advantageous. In accordance with one embodiment of the present invention, the processor context releasing control of a TCB does not update the DRAM version of the TCB, but rather the processor context assuming control of the TCB has that potential responsibility. A “dirty bit” 116 is provided in each TCB. If the releasing processor context changed the contents of the TCB (i.e., the TCB is dirty), then the releasing processor context sets this “dirty bit” 116. If the next processor context needs to put another TCB into the SRAM TCB slot held by the dirty TCB, then the next processor first writes the dirty TCB information (i.e., updated TCB information) to overwrite the corresponding TCB information in DRAM (i.e., to update the DRAM version of the TCB). If, on the other hand, the next processor does not need to move a TCB into an SRAM slot held by a dirty TCB, then the next processor does not need to write the dirty TCB information to DRAM. If need be, the next processor can either just update a TCB whose dirty bit is not set, or the next processor can simply overwrite the TCB whose dirty bit is not set (for example, to move another TCB into the slot occupied by the TCB whose dirty bit is not set).
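A minimal sketch of this deferred write-back policy is given below. The slot structure is reduced to what the policy needs, and the DRAM update is modeled as a plain copy.

    #include <stdint.h>
    #include <string.h>

    struct cached_tcb {
        uint16_t tcb_num;
        uint8_t  dirty;                 /* "dirty bit" 116                  */
        uint8_t  data[256];             /* cached TCB image (size assumed)  */
    };

    static uint8_t dram_tcbs[8192][256];                /* DRAM copy of every TCB */

    static void dma_tcb_to_dram(uint16_t n, const uint8_t *d)
    {
        memcpy(dram_tcbs[n], d, 256);                   /* model of the write-back */
    }

    /* Releasing context: just record whether the TCB was modified. */
    static void release_tcb(struct cached_tcb *slot, int modified)
    {
        if (modified)
            slot->dirty = 1;            /* DRAM copy is now stale */
    }

    /* Next context that needs this slot for a different TCB: flush the dirty
     * image to DRAM first, then load the new TCB into the slot. */
    static void reuse_slot(struct cached_tcb *slot, uint16_t new_tcb,
                           const uint8_t *new_image)
    {
        if (slot->dirty)
            dma_tcb_to_dram(slot->tcb_num, slot->data);
        memcpy(slot->data, new_image, sizeof slot->data);
        slot->tcb_num = new_tcb;
        slot->dirty   = 0;
    }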
In one specific embodiment, the instruction set of processors 13-15 includes the instructions in Table 1 below.
One embodiment of the code executed by processors 13-15 is written using functions. These functions are in turn made up of instructions including those instructions set forth in Table 1 above. The functions are set forth in the file SUBR.MAL of the CD Appendix (the files on the CD Appendix are incorporated by reference into the present patent document). These functions include:
1) The INSRAM_CAM_INSERT function: Executing this function causes the TCB number present in a register (register cr11) to be written into the IN SRAM CAM (CAM A of the processor). The particular CAM slot written to is identified by the lower sixteen bits of the value present in another register (register TbuffL 18).
2) The INSRAM_CAM_REMOVE function: Executing this function causes the CAM entry in the IN SRAM CAM slot identified by a register (register cr11) to be invalidated (i.e., removed). The entry is invalidated by setting bit 16 of a register (register CAM_CONTENTS_A).
3) The INSRAM_CAM_SEARCH function: Executing this function causes a search of the IN SRAM CAM for the TCB number identified by the TCB number present in a register (register cr11). The result of the search is a five-bit slot number that is returned in five bits of another register (register TbuffL 18). The value returned in a sixth bit of the register TbuffL 18 indicates whether or not the TCB number was found in the INSRAM_CAM.
4) The LOCKBL_CAM_INSERT function: Executing this function causes the sixteen-bit TCB number present in a register (register cr11) to be written into the LOCK TABLE CAM (CAM C of the processor). The particular CAM slot written to is identified by the value present in a register (register cr10).
5) The LOCKBL_CAM_REMOVE function: Executing this function causes the CAM entry in the LOCK TABLE CAM slot identified by a register (register cr10) to be invalidated (i.e., removed). The entry is invalidated by setting a bit of another register (register CAM_CONTENTS_C).
6) The LOCK_TABLE_SEARCH function: Executing this function causes a search of the LOCK TABLE CAM for the TCB number identified by the TCB number present in a register (register cr11). The result of the search is a four-bit number of a row in the lock table. The four-bit number is four bits of another register (register cr10). The value returned in a fifth bit of the register cr10 indicates whether or not the TCB number was found in the LOCK TABLE CAM.
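The six functions above manipulate the IN SRAM CAM and the LOCK TABLE CAM through processor registers. The C model below sketches the equivalent insert, remove and search operations; the register-level details (cr10, cr11, TbuffL 18, the CAM_CONTENTS invalidation bit) are replaced by plain function arguments, so the interface is illustrative only.

    #include <stdint.h>

    #define CAM_SLOTS 32

    struct cam {
        uint16_t key[CAM_SLOTS];        /* e.g. a 16-bit TCB number */
        uint8_t  valid[CAM_SLOTS];
    };

    /* INSRAM_CAM_INSERT / LOCKBL_CAM_INSERT: write a key into a given slot. */
    static void cam_insert(struct cam *c, uint8_t slot, uint16_t key)
    {
        c->key[slot]   = key;
        c->valid[slot] = 1;
    }

    /* INSRAM_CAM_REMOVE / LOCKBL_CAM_REMOVE: invalidate the entry in a slot. */
    static void cam_remove(struct cam *c, uint8_t slot)
    {
        c->valid[slot] = 0;             /* the microcode sets an invalidation bit */
    }

    /* INSRAM_CAM_SEARCH / LOCK_TABLE_SEARCH: return the slot holding the key,
     * or -1 when the key is not found (the "found" flag bit). */
    static int cam_search(const struct cam *c, uint16_t key)
    {
        for (int s = 0; s < CAM_SLOTS; s++)
            if (c->valid[s] && c->key[s] == key)
                return s;
        return -1;
    }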
Compact Disc Appendix:
The Compact Disc Appendix includes a folder “CD Appendix A”, a folder “CD Appendix B”, a folder “CD Appendix C”, and a file “title page.txt”. CD Appendix A includes a description of an integrated circuit (the same as ASIC 9 of
The CD Appendix A includes the following: 1) a folder “Mojave verilog code” that contains a hardware description of an embodiment of the integrated circuit, and 2) a folder “Mojave microcode” that contains code that executes on the processors (for example, processors 13 and 14 of
A description of the instruction set executed by processors 13-15 of
The CD Appendix B includes the following: 1) a folder entitled “simba (device driver software for Mojave)” that contains device driver software executable on the host computer; 2) a folder entitled “atcp (free BSD stack and code added to it)” that contains a TCP/IP stack [the folder “atcp” contains: a) a TCP/IP stack derived from the “free BSD” TCP/IP stack (available from the University of California, Berkeley) so as to make it run on a Windows operating system, and b) code added to the free BSD stack between the session layer above and the device driver below that enables the BSD stack to carry out “fast-path” processing in conjunction with the NID]; and 3) a folder entitled “include (set of files shared by ATCP and device driver)” that contains a set of files that are used by the ATCP stack and are used by the device driver.
The CD Appendix C includes the following: 1) a file called “mojave_rcv_seq (instruction set description).mdl” that contains a description of the instruction set of the receive processor, and 2) a file called “mojave_rcv_seq (program executed by receive processor).mal” that contains a program executed by the receive processor.
System Configurations:
Rather than being considered coupled to a host, network interface device (NID) 301 can be considered part of a host as shown in
In one version, NID 501 is a full TCP/IP offload device. In another version, NID 501 is a partial TCP/IP offload device. The term “partial TCP/IP” is used here to indicate that all or substantially all TCP and IP protocol processing on certain types of packets is performed by the offload device, whereas substantial TCP and IP protocol processing for other types of packets is performed by the stack.
In the realization of different embodiments, the techniques, methods, and structures set forth in the documents listed below are applied to the system, and/or to the network interface device (NID), and/or to the application specific integrated circuit (ASIC) set forth in present patent document: U.S. Pat. No. 6,389,479; U.S. Pat. No. 6,470,415; U.S. Pat. No. 6,434,620; U.S. Pat. No. 6,247,060; U.S. Pat. No. 6,226,680; Published U.S. Patent Application 20020095519; Published U.S. Patent Application No. 20020091844; Published U.S. Patent Application No. 20010021949; Published U.S. Patent Application No. 20010047433; and U.S. patent application Ser. No. 09/801,488, entitled “Port Aggregation For Network Connections That Are Offloaded To Network Interface Devices”, filed Mar. 7, 2001. The content of each of the above-identified patents, published patent applications, and patent application is incorporated herein by reference.
Although certain specific exemplary embodiments are described above in order to illustrate the invention, the invention is not limited to the specific embodiments. NID 3 can be part of a memory controller integrated circuit or an input/output (I/O) integrated circuit or a bridge integrated circuit of a microprocessor chip-set. In some embodiments, NID 3 is part of an I/O integrated circuit chip such as, for example, the Intel 82801 integrated circuit of the Intel 820 chip set. NID 3 may be integrated into the Broadcom ServerWorks Grand Champion HE chipset, the Intel 82815 Graphics and Memory Controller Hub, the Intel 440BX chipset, or the Apollo VT8501 MVP4 North Bridge chip. The instructions executed by receive processor 20 and/or processors 13-15 are, in some embodiments, downloaded upon power-up of NID 3 into a memory on NID 3, thereby facilitating the periodic updating of NID functionality. High and low priority transmit queues may be implemented using queue manager 19. Hardcoded transmit sequencer 21, in some embodiments, is replaced with a transmit processor that executes instructions. Processors 13, 14 and 15 can be identical processors, each of which can perform receive processing and/or transmit processing and/or utility functions. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims that follow the “Mojave Hardware Specification” section below.
Features
1) Peripheral Component Interconnect (PCI) Interface.
2) Network Interface.
3) Memory Interface.
4) Protocol Processor.
5) Power.
6) Packaging.
Mojave (See
When combined with the 802.3/GMII compliant Phy and Synchronous Dram (SDRAM), Mojave comprises one complete ethernet node. It contains one 802.3/ethernet compliant Mac, a PCI Bus Interface Unit (BIU), a memory controller, transmit fifo, receive fifo and a custom TCP/IP protocol processor. Mojave supports 10 Base-T, 100 Base-TX and 1000 Base-TX via the GMII interface attachment of appropriate Phys. Mojave also supports 100 Base-FX, and 1000 Base-FX via the TBI interface attachment of external Serdes.
The Mojave Mac provides statistical information that may be used for SNMP. The Mac can operate in promiscuous mode allowing Mojave to function as a network monitor, receive broadcast and multicast packets and implement multiple Mac addresses for each node.
Any 802.3/GMII/TBI compliant PHY/SERDES can be utilized, allowing Mojave to support 10 BASE-T, 10 BASE-T2, 100 BASE-TX, 100 Base-FX, 100 BASE-T4, 1000 BASE-TX or 1000 BASE-FX as well as future interface standards. PHY identification and initialization is accomplished through host driver initialization routines. PHY status registers can be polled continuously by Mojave to detect PHY status changes which are then reported to the host driver. The Mac can be configured to support a maximum frame size of 1518 bytes or 9018 bytes.
The 64-bit, multiplexed BIU provides a direct interface to the PCI bus for both slave and master functions. Mojave is capable of operating in either a 64-bit or 32-bit PCI environment, while supporting 64-bit addressing in either configuration. PCI bus frequencies up to 33 MHz are supported yielding instantaneous bus transfer rates of 266 MB/s. Both 5.0V and 3.3V signaling environments can be utilized by Mojave. Configurable cache-line size up to 256B will accommodate future architectures, and Expansion ROM/Flash support will allow for diskless system booting. Non-PC applications are supported via programmable big and little endian modes. Host based communication has been utilized to provide the best system performance possible.
Mojave supports Plug-N-Play auto-configuration through the PCI configuration space. Support of an external eeprom allows for local storage of configuration information such as Mac addresses.
External SDRAM provides frame buffering, which is configurable as 1 MB, 2 MB, 4 MB or 8 MB using the appropriate technology and width selections. Use of −10 speed grades yields an external buffer bandwidth of 88 MB/s. The buffer provides temporary storage of both incoming and outgoing frames. The protocol processor accesses the frames within the buffer in order to implement TCP/IP and NETBIOS. Incoming frames are processed, assembled then transferred to host memory under the control of the protocol processor. For transmit, data is moved from host memory to buffers where various headers are created before being transmitted out via the Mac.
1) Datapath Bandwidth (See
2) Cpu Bandwidth (See
3) Performance Features.
4) Pin Assignments (See
Processor.
The processor (See
The first instruction phase writes the instruction results of the last instruction to the destination operand, modifies the program counter (Pc), selects the address source for the instruction to fetch, then fetches the instruction from the control store. The fetched instruction is then stored in the instruction register at the end of the clock cycle.
The processor instructions reside in the on-chip control-store, which is implemented as a mixture of ROM and Sram. The ROM contains 4K instructions starting at address 0x0000 and aliases every 0x1000 locations throughout the first 0x8000 locations of instruction space. The Sram (WCS) will hold up to 0x1000 instructions starting at address 0x8000 and aliasing each 0x1000 locations throughout the last 0x8000 of instruction space. The ROM and Sram are both 49-bits wide accounting for bits [48:0] of the instruction microword. A separate mapping ram provides bits [55:49] of the microword (MapAddr) to allow replacement of faulty ROM based instructions. The mapping ram has a configuration of 512×7 which is insufficient to allow a separate map address for each of the 4K ROM locations. To allow re-mapping of the entire 4K ROM space, the map ram address lines are connected to the address bits Fetch [9:3]. The result is that the ROM is re-mapped in blocks of 8 contiguous locations.
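The block-of-eight remapping can be sketched in C as follows: the map RAM holds one 7-bit entry per block of eight contiguous ROM locations, and a non-zero entry forces a jump to a replacement routine (see the Map instruction below). The WCS target computation shown is an assumption for illustration.

    #include <stdint.h>

    #define WCS_BASE 0x8000u            /* Sram (WCS) instruction space */

    static uint8_t map_ram[512];        /* 7-bit MapAddr entries, one per 8-location block */

    /* Returns the address to fetch from: the ROM address itself, or a forced
     * jump target when the corresponding map entry is non-zero. */
    static uint16_t next_fetch_addr(uint16_t fetch_addr)
    {
        uint16_t block    = (uint16_t)((fetch_addr >> 3) % 512);  /* block of 8 locations */
        uint8_t  map_addr = map_ram[block] & 0x7F;                /* microword bits [55:49] */
        if (map_addr != 0)
            return (uint16_t)(WCS_BASE + ((uint16_t)map_addr << 3)); /* assumed WCS mapping */
        return fetch_addr;
    }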
The second instruction phase decodes the instruction which was stored in the instruction register. It is at this point that the map address is checked for a non-zero value which will cause the decoder to force a Jmp instruction to the map address. If a non-zero value is not detected, then the decoder selects the source operands for the Alu operation based on the values of the OpdASel, OpdBSel and AluOp fields. These operands are then stored in the decode register at the end of the clock cycle. Operands may originate from File, Sram, or flip-flop based registers. The second instruction phase is also where the results of the previous instruction are written to the Sram.
The third instruction phase is when the actual Alu operation is performed, the test condition is selected and the Stack push and pop are implemented. Results of the Alu operation are stored in the results register at the end of the clock cycle.
Instruction Set.
The micro-instructions are divided into nine types according to the program control directive. The micro-instruction is further divided into sub-fields for which the definitions are dependent upon the instruction type. The instruction types are listed in
All instructions (See
The conditional jump (Jct/Jcf) instruction causes the program counter to be altered if the condition selected by the “test select” (TstSel) field is true/false. The new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
The “jump” (Jmp) instruction causes the program counter to be altered unconditionally. The new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section. The format allows instruction bits 22:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
The “jump subroutine” (Jsr) instruction causes the program counter to be altered unconditionally. The new program counter (Pc) value is loaded from either the Literal field or the AluOut as described in the following section. The old program counter value is stored on the top location of the Pc-Stack which is implemented as a LIFO memory. The format allows instruction bits 22:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address if the new Pc value is sourced by the Alu.
The “Cont” (Cont) instruction causes the program counter to increment. The format allows instruction bits 22:16 to be used to perform a flag operation and the Literal field may be used as a source for the Alu or the ram address.
The “return from subroutine” (Rts) instruction, or the conditional Rts (Rtt/Rtf) if the selected condition is true/false, causes the current Pc value to be replaced with the last value stored in the stack. The Literal field may be used as a source for the Alu or the ram address. The unconditional return (Rts) allows instruction bits 22:16 to be used to perform a flag operation.
The Map instruction is provided to allow replacement of instructions which have been stored in ROM and is implemented any time the “map enable” (MapEn) bit has been set and the content of the “map address” (MapAddr) field is non-zero. The instruction decoder forces a jump instruction with the Alu operation and destination fields set to pass the MapAddr field to the program control block.
Program Errors:
Hardware will detect certain program errors. Any sequencer generating a program error will be forced to continue executing from location 0004. The program errors detected are:
1. Stack Overflow: A JSR is attempted and the stack registers are full.
2. Stack Underflow: An RTS is attempted and the stack registers are empty.
3. Incompatible Sram Size & Sram Alignment: An Sram Operation is attempted where the size and the Sram address would cause the operation to extend beyond the size of the word, e.g. Size=4 Address=401 or Size=2 Address=563.
4. An Sram read is attempted immediately following an Sram write. Because an Sram write is actually done in the clock cycle of the following instruction, the sram interface will be busy during that phase, and an Sram read is illegal at this time.
5. An attempt was made to access a non-existent register.
Sram Control Sequencer (SramCtrl).
Sram is the nexus for data movement within Mojave. A hierarchy of sequencers, working in concert, accomplishes the movement of data between dram, Sram, Cpu, ethernet and the Pci bus. Slave sequencers, provided with stimulus from master sequencers, request data movement operations by way of the Sram, Pci bus and Dram. The slave sequencers prioritize, service and acknowledge the requests.
The data flow block diagram of
The block diagram of
The Sram control sequencer (See
The block diagram of
External Memory Control (memctrl).
Memctrl (See
memregs: The memregs module provides the configuration and control registers for all the functions of memctrl. memregs also implements the GPIO interface registers for reading, writing and directional control, the FLASH control registers for configuring and accessing FLASH, and registers associated with configuring the SDRAM controller. memregs is accessed through the CPU data path with all of its registers mapped to a CPU register address.
dramcfg_seq: The dramcfg_seq module contains the refresh logic, timers, and sequencer for the various configuration accesses that are performed. This also includes operations which take place during initialization.
flash_seq: The flash_seq module performs the various FLASH memory access sequences. This module also implements the programmable nature of the access time delays between the control signals and data accesses.
dramif: The dramif module arbitrates between the memctrl modules requesting access to the memory interface. This includes the dramcfg_seq, flash_seq, memregs, dramwrt and dramrd modules. The dramif module also muxes the row and column address for the SDRAM accesses, muxes the read and write control signals between dramrd, dramwrt, etc., and also controls the direction of the data bus interface. dramif attempts to ping-pong between reads and writes to maximize the overlap between read and write buffers and for fairness. This fairness can be overridden if a requester asserts its urgent request signal for high priority conditions like impending buffer overflow or underflow. When the flash_seq has access to the interface the checkbits become address and control signals and the FSH_CS_L signal is asserted.
dramwrt: The dramwrt module implements the data and control path for all masters requesting write access to SDRAM. The dramwrt submodule dramwrt_mux arbitrates across all six dma requesters giving the following priorities from highest to lowest: RcvA, Q2d, Psi, S2d, P2d and D2d. dramwrt_mux will then mux the selected requester's data and address. The dramwrt_ldctrl will buffer the granted requester's data and ack the appropriate requester while the dramwrt_seq will proceed to initiate an SDRAM write operation. After dramwrt_seq gains control of the SDRAM interface via dramif, the buffered data will be selected from dramwrt_data data buffers and written to memory. If ECC is enabled, the dramwrt_data block will also compute the checkbits as the data passes through. This block can also force ECC errors at any bit in any location. Also, as the data is being written, the dramwrt_cksum block will checksum the data and indicate to the DMA requester when the checksum is complete. P2d and D2d are the only two requesters which have checksums calculated for their transactions.
dramrd: The dramrd module implements the data and control path for all masters requesting read access from SDRAM. The dramrd submodule dramrd_mux arbitrates across all six dma requesters, giving the following priorities from highest to lowest: XmtA, Pso, D2s, D2q, D2p and D2d. dramrd_mux also implements a state machine to overlap multiple read operations. So when a requester's read operation is being satisfied from SDRAM, another operation can be in progress with respect to bank activation and addressing. Once the dramrd_mux starts a transaction the dramrd_seq initiates the request for the interface via dramif and starts the actual read sequence. Once data starts to come back from the SDRAM the dramrd_data block will check it for ECC errors, if ECC correction and detection is enabled. The data is then stored in a 64 byte read buffer. Once there is enough data to write to the sram, the dramrd_unld sequencer will select data from the read buffer and request access to sram. The acks coming back from these sram writes are directed by the dramrd_mux to the original DMA requestor. Once all the requested data is delivered to the requestor, this operation is then complete.
External Memory Read Operations (dramrd).
The dramrd controller (See
The Memory Controller Block Diagram (See
Contiguous dram burst cycles are not guaranteed to the dramrd controller as an algorithm is implemented in the dramif which ensures highest priority to refresh cycles followed by ping-pong access between dram writes and dram reads and then configuration and flash cycles.
External Memory Write Operations (dramwrt).
The dramwrt controller (See
The memctrl block diagram (See
Since the ECC is an 8 bit ECC for a 64 bit word, writes not aligned to a 64 bit boundary will necessitate a read/modify/write cycle. When the dramwrt_ldctrl sequencer detects that a non-aligned write is required, it will generate a request for the read to the dramrd controller. The dramrd controller then returns the read data which is loaded into the write buffers. The dramwrt_ldctrl sequencer can then request the new data from the Sram, proceeding from this point in the same way as for an aligned operation.
Contiguous dram burst cycles are not guaranteed to the dramwrt controller as an algorithm is implemented in the dramif which ensures highest priority to refresh cycles followed by ping-pong access between dram writes and dram reads and then configuration and flash cycles.
Pci Master-Out Sequencer (Pmo).
The Pmo sequencer (See
Pmo receives requests from two separate sources; the dram to Pci (D2p) module and the Sram to Pci (S2p) module. An operation (See
Pci Master-In Sequencer (Pmi).
The Pmi sequencer (See
Pmi receives requests from two separate sources; the Pci to dram (P2d) module and the Pci to Sram (P2s) module. An operation (See
Dram to Pci Sequencer (D2p).
The D2p sequencer (See
D2p can receive requests from any of the processor's thirty-two dma channels. Once a command request has been detected, D2p fetches a dma descriptor from an Sram location dedicated to the requesting channel which includes the dram address, Pci address, Pci endian and request size. D2p then issues a request to the D2s sequencer causing the Sram based fifo to fill with dram data. Once the fifo contains sufficient data for a Pci transaction, D2s issues a request to Pmo which in turn moves data from the fifo to a Pci target. The process repeats until the entire request has been satisfied at which time D2p writes ending status in to the Sram dma descriptor area and sets the channel done bit associated with that channel. D2p then monitors the dma channels for additional requests.
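The per-channel pattern described here (and repeated for the other master sequencers below) is sketched generically in C: fetch the channel's descriptor from Sram, move data until the request is satisfied, write ending status back, and set the channel done bit. The descriptor field sizes and the burst routine are placeholders.

    #include <stdint.h>

    #define DMA_CHANNELS 32

    struct dma_desc {                 /* per-channel dma descriptor held in Sram */
        uint32_t dram_addr;
        uint64_t pci_addr;
        uint8_t  pci_endian;
        uint32_t size;                /* bytes remaining in the request          */
        uint32_t ending_status;
    };

    static struct dma_desc sram_desc[DMA_CHANNELS];
    static uint32_t channel_done_bits;

    /* Placeholder for one burst: fill the Sram based fifo from dram, after
     * which Pmo moves the fifo contents to the Pci target. */
    static void move_chunk(struct dma_desc *d)
    {
        uint32_t n = d->size < 64 ? d->size : 64;   /* burst size is arbitrary here */
        d->dram_addr += n;
        d->pci_addr  += n;
        d->size      -= n;
    }

    static void service_channel(uint8_t ch)
    {
        struct dma_desc *d = &sram_desc[ch];
        while (d->size > 0)               /* repeat until the entire request is satisfied */
            move_chunk(d);
        d->ending_status = 0;             /* write ending status back to the Sram descriptor */
        channel_done_bits |= 1u << ch;    /* set the channel done bit for this channel */
    }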
Pci to Dram Sequencer (P2d).
The P2d sequencer (See
P2d can receive requests from any of the processor's thirty-two dma channels. Once a command request has been detected, P2d, operating as a slave sequencer, fetches a dma descriptor from an Sram location dedicated to the requesting channel which includes the dram address, Pci address, Pci endian and request size. P2d then issues a request to Pmi which in turn moves data from the Pci target to the Sram fifo. Next, P2d issues a request to the Dwr sequencer causing the Sram based fifo contents to be written to the dram. The process repeats until the entire request has been satisfied at which time P2d writes ending status in to the Sram dma descriptor area and sets the channel done bit associated with that channel. P2d then monitors the dma channels for additional requests.
Sram to Pci Sequencer (S2p).
The S2p sequencer (See
S2p can receive requests from any of the processor's thirty-two dma channels. Once a command request has been detected, S2p, operating as a slave sequencer, fetches a dma descriptor from an Sram location dedicated to the requesting channel which includes the Sram address, Pci address, Pci endian and request size. S2p then issues a request to Pmo which in turn moves data from the Sram to a Pci target. The process repeats until the entire request has been satisfied at which time S2p writes ending status in to the Sram dma descriptor area and sets the channel done bit associated with that channel. S2p then monitors the dma channels for additional requests.
Pci To Sram Sequencer (P2s).
The P2s sequencer (See
P2s can receive requests from any of the processor's thirty-two dma channels. Once a command request has been detected, P2s, operating as a slave sequencer, fetches a dma descriptor from an Sram location dedicated to the requesting channel which includes the Sram address, Pci address, Pci endian and request size. P2s then issues a request to Pmi which in turn moves data from the Pci target to the Sram. The process repeats until the entire request has been satisfied at which time P2s writes ending status in to the dma descriptor area of Sram and sets the channel done bit associated with that channel. P2s then monitors the dma channels for additional requests.
Dram to Sram Sequencer (D2s).
The D2s sequencer (See
D2s can receive requests from any of the processor's thirty-two dma channels. Once a command request has been detected, D2s, operating as a slave sequencer, fetches a dma descriptor from an Sram location dedicated to the requesting channel which includes the dram address, Sram address and request size. D2s then issues a request to the Drd sequencer causing the transfer of data to the Sram. The process repeats until the entire request has been satisfied at which time D2s writes ending status in to the Sram dma descriptor area and sets the channel done bit associated with that channel. D2s then monitors the dma channels for additional requests.
Sram to Dram Sequencer (S2d).
The S2d sequencer (See
S2d can receive requests from any of the processor's thirty-two dma channels. Once a command request has been detected, S2d, operating as a slave sequencer, fetches a dma descriptor from an Sram location dedicated to the requesting channel which includes the dram address, Sram address, checksum reset and request size. S2d then issues a request to the Dwr sequencer causing the transfer of data to the dram. The process repeats until the entire request has been satisfied at which time S2d writes ending status in to the Sram dma descriptor area and sets the channel done bit associated with that channel. S2d then monitors the dma channels for additional requests.
Pci Slave Input Sequencer (Psi).
The Psi sequencer (See
Psi manages write requests to configuration space, expansion rom, dram, Sram and memory mapped registers. Psi separates these Pci bus operations in to two categories with different action taken for each. Dram accesses result in Psi generating write request to an Sram buffer followed with a write request to the Dwr sequencer. Subsequent write or read dram operations are retry terminated until the buffer has been emptied. An event notification is set for the processor allowing message passing to occur through dram space.
All other Pci write transactions result in Psi posting the write information including Pci address, Pci byte marks and Pci data to a reserved location in Sram, then setting an event flag which the event processor monitors. Subsequent writes or reads of configuration, expansion rom, Sram or registers are terminated with retry until the processor clears the event flag. This allows Mojave to keep pipelining levels to a minimum for the posted write and give the processor ample time to modify data for subsequent Pci read operations.
Pci Slave Output Sequencer (Pso).
The Pso sequencer (See
Pso manages read requests to configuration space, expansion rom, dram, Sram and memory mapped registers. Pso separates these Pci bus operations in to two categories with different action taken for each. Dram accesses result in Pso generating read request to the Drd sequencer followed with a read request to Sram buffer. Subsequent write or read dram operations are retry terminated until the buffer has been emptied.
All other Pci read transactions result in Pso posting the read request information including Pci address and Pci byte marks to a reserved location in Sram, then setting an event flag which the event processor monitors. Subsequent writes or reads of configuration, expansion rom, Sram or registers are terminated with retry until the processor clears the event flag. This allows Mojave to use a microcoded response mechanism to return data for the request. The processor decodes the request information, formulates or fetches the requested data and stores it in Sram then clears the event flag allowing Pso to fetch the data and return it on the Pci bus.
Frame Receive Sequencer (RcvX).
The receive sequencer (RcvSeq)(See
Receive Priorities.
The receive sequencer (See
Frame Transmit Sequencer (XmtX).
The transmit sequencer (XmtSeq)(See
Queue Manager (Qmg).
Mojave includes special hardware assist for the implementation of message and pointer queues. The hardware assist is called the queue manager (Qmg) (See
Qmg (See
There are a total of 32 queues. The first 8 are dedicated to a specific function as shown in
Dma Operations.
DMA operations are accomplished by seven dma sequencers (DmaSeq). Commands are sent to these sequencers via hardware queues. The queue Ids are fixed in hardware and are as shown in
Microcode will initiate a DMA by writing a command to the appropriate queue. The DMA sequencer will read a command from the queue, and fetch the descriptor block from Sram. It will then do the DMA. At the end of the DMA, if the DMA chain bit is not set, the DMA sequencer will terminate the DMA.
For DMAs that complete without error, the DMA Context byte (bits 31:24 of the command) will be written to the termination queue indicated by bits 20:16 of the command. Each entry in the termination queue is 32 bits, but only the least significant 8 bits (7:0) are used and written with the DMA Context.
For DMAs that complete with error, the termination queue will not be written. Instead a bit in the DMA Error Register will be set. This is a 32 bit register and the least significant 5 bits of the DMA context will be used to decide which bit should be written in the following manner:
DMA Error Register [1<<DMA command [28:24]]=1;
If the Dummy DMA bit is set, no DMA is performed but the DMA context is written directly to the DMA termination queue.
If the DMA chain bit is set and the DMA completes without error, the DMA descriptor block is updated, but no other termination information is written. If the DMA chain bit is set and the DMA completes with an error, the DMA descriptor block is updated, and the error is propagated to subsequent DMA commands until the sequencer finds one that does not have the chain bit, when the DMA Error Register will be written as above, without writing to the termination queue.
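The completion rules above can be summarized in the C sketch below. The context byte (command bits 31:24) and the termination queue id (bits 20:16) follow the text; the bit positions assumed for the chain and dummy flags, and the queue model, are illustrative only.

    #include <stdbool.h>
    #include <stdint.h>

    #define DMA_CONTEXT(cmd)   (((cmd) >> 24) & 0xFFu)   /* bits 31:24            */
    #define DMA_TERM_Q(cmd)    (((cmd) >> 16) & 0x1Fu)   /* bits 20:16            */
    #define DMA_CHAIN_BIT(cmd) (((cmd) >> 23) & 0x1u)    /* bit position assumed  */
    #define DMA_DUMMY_BIT(cmd) (((cmd) >> 22) & 0x1u)    /* bit position assumed  */

    static uint32_t dma_error_register;                  /* 32-bit DMA Error Register       */
    static uint8_t  term_queue[32][64];                  /* model of the termination queues */
    static uint8_t  term_queue_tail[32];

    static void term_queue_write(uint8_t qid, uint8_t context)
    {
        uint8_t t = term_queue_tail[qid];
        term_queue[qid][t % 64] = context;   /* only bits 7:0 of each 32-bit entry are used */
        term_queue_tail[qid] = (uint8_t)(t + 1);
    }

    static void dma_complete(uint32_t cmd, bool error)
    {
        uint8_t context = (uint8_t)DMA_CONTEXT(cmd);

        if (DMA_DUMMY_BIT(cmd)) {                        /* dummy DMA: no transfer performed */
            term_queue_write((uint8_t)DMA_TERM_Q(cmd), context);
            return;
        }
        if (DMA_CHAIN_BIT(cmd))                          /* chained: descriptor updated only; */
            return;                                      /* errors propagate to later commands */
        if (!error)
            term_queue_write((uint8_t)DMA_TERM_Q(cmd), context);
        else
            dma_error_register |= 1u << (context & 0x1F);   /* bit chosen by context bits 4:0 */
    }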
The format of the P2d or P2s descriptor is shown in
The format of the S2p or D2p descriptor is shown in
The format of the S2d, D2d or D2s descriptor is shown in
The format of the ending status of any dma is as shown in
Slave Dram Interface: This block controls the interface to Dram when Dram is being accessed directly by the host or by another PCI master.
Slave Sram Interface: This block controls the access to Sram for PCI slave accesses to read Sram, or to read or write Dram.
Pci Configuration Registers: This block contains the configuration registers that control the PCI space.
DMA Master In: This block does PCI master transfers on behalf of the P2D and P2S DMA sequencers. There is synchronization logic to synchronize between the PCI bus and the SRAM which are being clocked by different clocks. It has 256 bytes of buffering to minimize latencies caused by this synchronization.
DMA Master Out: This block does PCI master transfers on behalf of the D2P and S2P DMA sequencers. There is synchronization logic to synchronize between the PCI bus and the SRAM which are being clocked by different clocks. It has 256 bytes of buffering to minimize latencies caused by this synchronization.
PCI Slave Interface: This block has the state machine for PCI slave accesses to Mojave, from the host or from another PCI master.
PCI Parity: This block generates and checks parity on the PCI bus.
PCI Master Interface: This block has the state machine for PCI master accesses to host memory or to another PCI slave, done on behalf of the DMA sequencers.
This application claims the benefit under 35 U.S.C. § 119(e) of Provisional Application Ser. No. 60/374,788, filed Apr. 22, 2002. The complete disclosure of Provisional Application Ser. No. 60/374,788 is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4366538 | Johnson et al. | Dec 1982 | A |
4589063 | Shah et al. | May 1986 | A |
4700185 | Balph et al. | Oct 1987 | A |
4991133 | Davis et al. | Feb 1991 | A |
5056058 | Hirata et al. | Oct 1991 | A |
5058110 | Beach et al. | Oct 1991 | A |
5097442 | Ward et al. | Mar 1992 | A |
5163131 | Row et al. | Nov 1992 | A |
5212778 | Dally et al. | May 1993 | A |
5280477 | Trapp | Jan 1994 | A |
5289580 | Latif et al. | Feb 1994 | A |
5303344 | Yokoyama et al. | Apr 1994 | A |
5412782 | Hausman et al. | May 1995 | A |
5418912 | Christenson | May 1995 | A |
5448566 | Richter et al. | Sep 1995 | A |
5485455 | Dobbins et al. | Jan 1996 | A |
5485460 | Schrier et al. | Jan 1996 | A |
5485579 | Hitz et al. | Jan 1996 | A |
5506966 | Ban | Apr 1996 | A |
5511169 | Suda | Apr 1996 | A |
5517668 | Szwerinski et al. | May 1996 | A |
5524250 | Chesson et al. | Jun 1996 | A |
5535375 | Eshel | Jul 1996 | A |
5548730 | Young et al. | Aug 1996 | A |
5566170 | Bakke et al. | Oct 1996 | A |
5574919 | Netravali et al. | Nov 1996 | A |
5588121 | Reddin et al. | Dec 1996 | A |
5590328 | Seno et al. | Dec 1996 | A |
5592622 | Isfeld et al. | Jan 1997 | A |
5598410 | Stone | Jan 1997 | A |
5619650 | Bach et al. | Apr 1997 | A |
5629933 | Delp et al. | May 1997 | A |
5633780 | Cronin | May 1997 | A |
5634099 | Andrews et al. | May 1997 | A |
5634127 | Cloud et al. | May 1997 | A |
5642482 | Pardillos | Jun 1997 | A |
5664114 | Krech, Jr. et al. | Sep 1997 | A |
5671355 | Collins | Sep 1997 | A |
5678060 | Yokoyama et al. | Oct 1997 | A |
5682534 | Kapoor et al. | Oct 1997 | A |
5692130 | Shobu et al. | Nov 1997 | A |
5699317 | Sartore et al. | Dec 1997 | A |
5699350 | Kraslavsky | Dec 1997 | A |
5701434 | Nakagawa | Dec 1997 | A |
5701516 | Cheng et al. | Dec 1997 | A |
5727142 | Chen | Mar 1998 | A |
5742765 | Wong et al. | Apr 1998 | A |
5749095 | Hagersten | May 1998 | A |
5751715 | Chan et al. | May 1998 | A |
5752078 | Delp et al. | May 1998 | A |
5758084 | Silverstein et al. | May 1998 | A |
5758089 | Gentry et al. | May 1998 | A |
5758186 | Hamilton et al. | May 1998 | A |
5758194 | Kuzma | May 1998 | A |
5768618 | Erickson et al. | Jun 1998 | A |
5771349 | Picazo, Jr. et al. | Jun 1998 | A |
5774660 | Brendel et al. | Jun 1998 | A |
5778013 | Jedwab | Jul 1998 | A |
5778419 | Hansen et al. | Jul 1998 | A |
5790804 | Osborne | Aug 1998 | A |
5794061 | Hansen et al. | Aug 1998 | A |
5802258 | Chen | Sep 1998 | A |
5802580 | McAlpice | Sep 1998 | A |
5809328 | Nogales et al. | Sep 1998 | A |
5809527 | Cooper et al. | Sep 1998 | A |
5812775 | Van Seeters et al. | Sep 1998 | A |
5815646 | Purcell et al. | Sep 1998 | A |
5828835 | Isfeld et al. | Oct 1998 | A |
5848293 | Gentry | Dec 1998 | A |
5872919 | Wakeland | Feb 1999 | A |
5878225 | Bilansky et al. | Mar 1999 | A |
5892903 | Klaus | Apr 1999 | A |
5898713 | Melzer et al. | Apr 1999 | A |
5913028 | Wang et al. | Jun 1999 | A |
5920566 | Hendel et al. | Jul 1999 | A |
5930830 | Mendelson et al. | Jul 1999 | A |
5931918 | Row et al. | Aug 1999 | A |
5935205 | Murayama et al. | Aug 1999 | A |
5937169 | Connery et al. | Aug 1999 | A |
5941969 | Ram et al. | Aug 1999 | A |
5941972 | Hoese et al. | Aug 1999 | A |
5950203 | Stakuis et al. | Sep 1999 | A |
5987022 | Geiger et al. | Nov 1999 | A |
5991299 | Radogna et al. | Nov 1999 | A |
5996013 | Delp et al. | Nov 1999 | A |
5996024 | Blumenau | Nov 1999 | A |
6005849 | Roach et al. | Dec 1999 | A |
6009478 | Panner et al. | Dec 1999 | A |
6009504 | Krick | Dec 1999 | A |
6016513 | Lowe | Jan 2000 | A |
6021446 | Gentry et al. | Feb 2000 | A |
6021507 | Chen | Feb 2000 | A |
6026452 | Pitts | Feb 2000 | A |
6034963 | Minami et al. | Mar 2000 | A |
6038562 | Anjur et al. | Mar 2000 | A |
6041058 | Flanders et al. | Mar 2000 | A |
6041381 | Hoese | Mar 2000 | A |
6044438 | Olnowich | Mar 2000 | A |
6047323 | Krause | Apr 2000 | A |
6047356 | Anderson et al. | Apr 2000 | A |
6049528 | Hendel et al. | Apr 2000 | A |
6057863 | Olarig | May 2000 | A |
6061368 | Hitzelberger | May 2000 | A |
6065096 | Day et al. | May 2000 | A |
6067569 | Khaki et al. | May 2000 | A |
6070200 | Gates et al. | May 2000 | A |
6078564 | Lakshman et al. | Jun 2000 | A |
6078733 | Osborne | Jun 2000 | A |
6097734 | Gotesman et al. | Aug 2000 | A |
6101555 | Goshey et al. | Aug 2000 | A |
6111673 | Chang et al. | Aug 2000 | A |
6115615 | Ota et al. | Sep 2000 | A |
6122670 | Bennett et al. | Sep 2000 | A |
6141701 | Whitney | Oct 2000 | A |
6141705 | Anand et al. | Oct 2000 | A |
6145017 | Ghaffari | Nov 2000 | A |
6157955 | Narad et al. | Dec 2000 | A |
6172980 | Flanders et al. | Jan 2001 | B1 |
6173333 | Jolitz et al. | Jan 2001 | B1 |
6181705 | Branstad et al. | Jan 2001 | B1 |
6202105 | Gates et al. | Mar 2001 | B1 |
6226680 | Boucher et al. | May 2001 | B1 |
6233242 | Mayer et al. | May 2001 | B1 |
6246683 | Connery et al. | Jun 2001 | B1 |
6247060 | Boucher et al. | Jun 2001 | B1 |
6279051 | Gates et al. | Aug 2001 | B1 |
6289023 | Dowling et al. | Sep 2001 | B1 |
6298403 | Suri et al. | Oct 2001 | B1 |
6324649 | Eyres et al. | Nov 2001 | B1 |
6334153 | Boucher et al. | Dec 2001 | B2 |
6343360 | Feinleib | Jan 2002 | B1 |
6345301 | Burns et al. | Feb 2002 | B1 |
6345302 | Bennett et al. | Feb 2002 | B1 |
6356951 | Gentry et al. | Mar 2002 | B1 |
6370599 | Anand et al. | Apr 2002 | B1 |
6385647 | Wills et al. | May 2002 | B1 |
6389468 | Muller et al. | May 2002 | B1 |
6389479 | Boucher | May 2002 | B1 |
6393487 | Boucher et al. | May 2002 | B2 |
6421742 | Tillier | Jul 2002 | B1 |
6421753 | Hoese et al. | Jul 2002 | B1 |
6427169 | Elzur | Jul 2002 | B1 |
6427171 | Craft et al. | Jul 2002 | B1 |
6427173 | Boucher et al. | Jul 2002 | B1 |
6434620 | Boucher et al. | Aug 2002 | B1 |
6434651 | Gentry, Jr. | Aug 2002 | B1 |
6449656 | Elzur et al. | Sep 2002 | B1 |
6453360 | Muller et al. | Sep 2002 | B1 |
6470415 | Starr et al. | Oct 2002 | B1 |
6473425 | Bellaton et al. | Oct 2002 | B1 |
6480489 | Muller et al. | Nov 2002 | B1 |
6487202 | Klausmeier et al. | Nov 2002 | B1 |
6487654 | Dowling | Nov 2002 | B2 |
6490631 | Teich et al. | Dec 2002 | B1 |
6502144 | Accarie | Dec 2002 | B1 |
6523119 | Pavlin et al. | Feb 2003 | B2 |
6526446 | Yang et al. | Feb 2003 | B1 |
6570884 | Connery et al. | May 2003 | B1 |
6591302 | Boucher et al. | Jul 2003 | B2 |
6591310 | Johnson | Jul 2003 | B1 |
6648611 | Morse et al. | Nov 2003 | B2 |
6650640 | Muller et al. | Nov 2003 | B1 |
6657757 | Chang et al. | Dec 2003 | B1 |
6658480 | Boucher et al. | Dec 2003 | B2 |
6678283 | Teplitsky | Jan 2004 | B1 |
6681364 | Calvignac et al. | Jan 2004 | B1 |
6687758 | Craft et al. | Feb 2004 | B2 |
6697868 | Craft et al. | Feb 2004 | B2 |
6751665 | Philbrick et al. | Jun 2004 | B2 |
6757746 | Boucher et al. | Jun 2004 | B2 |
6765901 | Johnson et al. | Jul 2004 | B1 |
6807581 | Starr et al. | Oct 2004 | B1 |
6842896 | Redding et al. | Jan 2005 | B1 |
6912522 | Edgar | Jun 2005 | B2 |
6938092 | Burns | Aug 2005 | B2 |
6941386 | Craft et al. | Sep 2005 | B2 |
6965941 | Boucher et al. | Nov 2005 | B2 |
6996070 | Starr et al. | Feb 2006 | B2 |
7042898 | Blightman et al. | May 2006 | B2 |
7076568 | Philbrick et al. | Jul 2006 | B2 |
7089326 | Boucher et al. | Aug 2006 | B2 |
7093099 | Bodas et al. | Aug 2006 | B2 |
7124205 | Craft et al. | Oct 2006 | B2 |
7133940 | Blightman et al. | Nov 2006 | B2 |
7167926 | Boucher et al. | Jan 2007 | B1 |
7167927 | Philbrick et al. | Jan 2007 | B2 |
7174393 | Boucher et al. | Feb 2007 | B2 |
7185266 | Blightman et al. | Feb 2007 | B2 |
7191241 | Boucher et al. | Mar 2007 | B2 |
7191318 | Tripathy et al. | Mar 2007 | B2 |
7237036 | Boucher et al. | Jun 2007 | B2 |
7254696 | Mittal et al. | Aug 2007 | B2 |
7284070 | Boucher et al. | Oct 2007 | B2 |
20010004354 | Jolitz | Jun 2001 | A1 |
20010025315 | Jolitz | Jun 2001 | A1 |
20010013059 | Dawson et al. | Aug 2001 | A1 |
20010014892 | Gaither et al. | Aug 2001 | A1 |
20010014954 | Purcell et al. | Aug 2001 | A1 |
20010048681 | Bilic et al. | Dec 2001 | A1 |
20010053148 | Bilic et al. | Dec 2001 | A1 |
20020073223 | Damell et al. | Jun 2002 | A1 |
20020112175 | Makofka et al. | Aug 2002 | A1 |
20030066011 | Oren | Apr 2003 | A1 |
20030110344 | Szczepanek et al. | Jun 2003 | A1 |
20030165160 | Minami et al. | Sep 2003 | A1 |
20040054814 | McDaniel | Mar 2004 | A1 |
20040059926 | Angelo et al. | Mar 2004 | A1 |
20040153578 | Elzur | Aug 2004 | A1 |
20040213290 | Johnson et al. | Oct 2004 | A1 |
20040246974 | Gyugi et al. | Dec 2004 | A1 |
Number | Date | Country |
---|---|---|
WO 9819412 | May 1998 | WO |
WO 9850852 | Nov 1998 | WO |
WO 9904343 | Jan 1999 | WO |
WO 9965219 | Dec 1999 | WO |
WO 0013091 | Mar 2000 | WO |
WO 0104770 | Jan 2001 | WO |
WO 0105107 | Jan 2001 | WO |
WO 0105116 | Jan 2001 | WO |
WO 0105123 | Jan 2001 | WO |
WO 0140960 | Jun 2001 | WO |
WO 0159966 | Aug 2001 | WO |
WO 0186430 | Nov 2001 | WO |
Number | Date | Country | |
---|---|---|---|
20040062245 A1 | Apr 2004 | US |
Number | Date | Country | |
---|---|---|---|
60374788 | Apr 2002 | US |