1. Field of the Invention
2. Description of the Related Art
In complex computer systems, particularly those in large transaction processing environments as shown in
This is shown additionally in
Legacy operating systems such as Linux 2.4 or Microsoft NT4 were architected assuming that each “I/O Service” is provided by an independent adapter. An “I/O Service” is defined as the portion of adapter functionality that connects a server onto one of the network fabrics. Referring to
Modern operating systems such as Microsoft Windows Server 2003 provide a mechanism called a consolidated driver model, which could be used to export all ECA I/O Services using only a single PCI function. However, the software associated with the consolidated driver model has implicit inefficiencies due to the layers involved in virtualizing each I/O Service using host software. In some deployment environments, it may be desirable to support the consolidated driver model, but in environments that are sensitive to latency and CPU utilization it is desirable to deploy an ECA using multiple PCI functions.
Microsoft has made some progress in integrating networking and clustering using the Winsock Direct (WSD) model. One issue with WSD is that it does not export the various RDMA (Remote Direct Memory Access) APIs (Application Programming Interfaces), such as DAPL (Direct Access Programming Library) or MPI (Message Passing Interface), that have been widely accepted by the clustering community. One approach to exporting DAPL and MPI when they are not natively supported on an operating system is to use an independent PCI function for clustering. Another issue with WSD is that it is not deployed on all Microsoft operating systems, so hardware vendors cannot rely on it to export their adapter I/O Services in all Microsoft operating system environments.
Future operating system architectures will certainly start to take into account the unique characteristics of ECAs, e.g., multiple network ports and multiple I/O Services implemented in one adapter. Network ports, accelerated connections, and memory registration resources are all examples of resources that the operating system has an interest in managing in a way that is intuitive and that takes best advantage of the functionality provided by an ECA. It is therefore very likely that still more deployment models will emerge which it would be desirable to support.
To address these various deployment models, and yet provide the broadest use of a single ECA at its full capabilities, it would be desirable to have an ECA that is able to adapt to each deployment model.
In a design according to the present invention, a flexible arrangement allows a single set of ECA hardware functions to appear as needed to conform to various operating system deployment models. A PCI interface presents a logical model appropriate to the relevant operating system. Mapping parameters and values are associated with the packet streams to allow the packet streams to be properly processed according to the presented logical model and the needed operations. The mapping arrangement allows different logical models to be presented while requiring only a single hardware implementation. Mapping occurs at both the host side and the network side to allow the multiple operations of the ECA to be performed while still allowing proper delivery at each interface.
In the preferred embodiment as shown in
Referring to
Referring then to
Referring then to
In basic operation, a series of tasks are performed by the various modules or submodules in the protocol engine 616 to handle the various iWARP, iSCSI and regular Ethernet traffic. A context manager 704 is provided with a dedicated datapath to the local memory interface 610. Because each connection utilized by the ECA 400 must have a context, various subcomponents or submodules are connected to the context manager 704 as indicated by the arrows captioned by cm. Thus all of the relevant submodules can determine the context of the various packets as needed. The context manager 704 contains a context cache 706, which caches the context values from the local memory, and a work available memory region cache 708, which contains memory used to store transmit scheduling information to determine which operations should be performed next in the protocol engine 616. The schedules are effectively developed in a work queue manager (WQM) 710. The WQM 710 handles scheduling for all transmissions of all protocols in the protocol engine 616. One of the main activities of the WQM 710 is to determine when data needs to be retrieved from the external memory 506 or 512 or from host memory 504 for operation by one of the various modules. The WQM 710 handles this operation by requesting a time slice from the protocol engine arbiter 702 to allow the WQM 710 to retrieve the desired information and place it in on-chip storage. A completion queue manager (CQM) 712 acts to provide task completion indications to the CPUs 500. The CQM 712 handles this task for various submodules, with connections to those submodules indicated by arrows captioned by cqm. A doorbell submodule 713 receives commands from the host, such as "a new work item has been posted to SQ x," and converts these commands into the appropriate context updates.
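The doorbell-to-context-update path lends itself to a short illustration. The following C sketch is purely hypothetical: the ECA 400 context layout is not disclosed at this level of detail, and struct qp_context, its fields, and doorbell_ring are invented names.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-connection context fields; the actual ECA 400
 * context layout is not specified at this level of detail. */
struct qp_context {
    uint32_t sq_head;        /* next send queue entry to schedule    */
    uint32_t sq_tail;        /* last send queue entry posted by host */
    bool     work_available; /* flag consulted by the WQM scheduler  */
};

/* A doorbell command such as "a new work item has been posted to
 * SQ x" becomes a context update that makes the new work visible
 * to the work queue manager. */
void doorbell_ring(struct qp_context *ctx, uint32_t new_tail)
{
    ctx->sq_tail = new_tail;  /* host advanced its producer index */
    ctx->work_available = (ctx->sq_tail != ctx->sq_head);
}
```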
A TCP offload engine (TOE) 714 includes submodules of transmit logic 716 and receive logic 718 to handle processing for accelerated TCP/IP connections. The receive logic 718 parses the TCP/IP headers, checks for errors, validates the segment, processes received data, processes acknowledgments, updates RTT estimates and updates congestion windows. The transmit logic 716 builds the TCP/IP headers for outgoing packets, performs ARP table look-ups, and submits the packet to the transaction switch 608. An iWARP module 719 includes a transmit logic portion 720 and a receive logic portion 722. The iWARP module 719 implements various layers of the iWARP specification, including the MPA, DDP and RDMAP layers. The receive logic 722 accepts inbound RDMA messages from the TOE 714 for processing. The transmit logic 720 creates outbound RDMA segments from PCI data received from the host CPUs 500.
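The receive-path checks enumerated above can be compressed into a sketch. All types and tests below are hypothetical simplifications of textbook TCP processing, not the actual logic of the TOE 714; RTT and congestion window updates are elided to a comment.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, heavily simplified TOE receive-side processing. */
struct tcp_conn {
    uint32_t rcv_nxt;   /* next expected receive sequence number */
    uint32_t rcv_wnd;   /* advertised receive window             */
    uint32_t snd_una;   /* oldest unacknowledged sequence number */
};

struct tcp_seg {
    uint32_t seq, ack, len;
    bool     csum_ok;
};

bool toe_rx(struct tcp_conn *c, const struct tcp_seg *s)
{
    if (!s->csum_ok)                        /* check for errors        */
        return false;
    if (s->seq - c->rcv_nxt >= c->rcv_wnd)  /* validate the segment    */
        return false;
    if (s->ack - c->snd_una < (1u << 30))   /* process acknowledgments */
        c->snd_una = s->ack;
    /* RTT estimate and congestion window updates would occur here. */
    if (s->seq == c->rcv_nxt)               /* process received data   */
        c->rcv_nxt += s->len;
    return true;
}
```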
A NIC module 724 is present and connected to the appropriate items, such as the work queue manager 710 and the protocol engine arbiter 702. An iSCSI module 726 is present to provide hardware acceleration to the iSCSI protocol as necessary.
Typically the host operating system provides the ECA 400 with a set of restrictions defining which user-level software processes are allowed to use which host memory address ranges in work requests posted to the ECA 400. Enforcement of these restrictions is handled by an accelerated memory protection (AMP) module 728. The AMP module 728 validates the iWARP STag using the memory region table (MRT) and returns the associated physical buffer list (PBL) information. An HDMA block 730 is provided to carry out the DMA transfer of information between host memory 504, via one of the bus interfaces 602 or 604, and the transaction switch 608 on behalf of the WQM 710 or the iWARP module 719. An ARP module 732 is provided to retrieve MAC destination addresses from an on-chip memory. A free list manager (FLM) 734 is provided to work with various other modules to determine the various memory blocks which are available. Because the data, be it data packets or control structures, is all contained in packets, a list of the available data blocks is required and the FLM 734 handles this function.
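The STag check performed by the AMP module 728 can be sketched as follows. The MRT entry layout, the index/key split of the STag, and all function names are assumptions made for illustration; only the described behavior, validating the STag against the MRT and returning the PBL on success, comes from the text above.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory region table (MRT) entry. */
struct mrt_entry {
    uint8_t  stag_key;       /* key byte the STag must match        */
    uint32_t pd_id;          /* protection domain of the region     */
    uint64_t base, length;   /* registered address range            */
    const uint64_t *pbl;     /* physical buffer list for the region */
};

/* Validate an iWARP STag and return the associated PBL, or NULL if
 * a restriction set by the host operating system is violated. */
const uint64_t *amp_validate(const struct mrt_entry *mrt, size_t n,
                             uint32_t stag, uint32_t pd,
                             uint64_t addr, uint64_t len)
{
    uint32_t index = stag >> 8;            /* assumed index/key split */
    if (index >= n)
        return NULL;
    const struct mrt_entry *e = &mrt[index];
    if (e->stag_key != (uint8_t)(stag & 0xff)) return NULL; /* stale STag */
    if (e->pd_id != pd)                        return NULL; /* wrong PD   */
    if (addr < e->base || addr + len > e->base + e->length)
        return NULL;                           /* outside the region */
    return e->pbl;
}
```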
The protocol engine 616 of the preferred embodiment also contains a series of processors to perform required operations, each processor including the appropriate firmware for its function. The first processor is a control queue processor (CQP) 738. The control queue processor 738 performs commands submitted by the various host drivers via control queue pairs. This is relevant because queue pairs are utilized to perform RDMA operations. The processor 738 has the capability to initialize and destroy queue pairs and memory regions or windows. A second processor is the out-of-order processor (OOP) 740. The out-of-order processor 740 is used to handle the problem of TCP/IP packets being received out of order and is responsible for determining and tracking the holes and properly placing new segments as they are obtained. A transmit error processor (TEP) 742 is provided for exception handling and error handling for the TCP/IP and iWARP protocols. The final processor is an MPA reassembly processor 744. This processor 744 is responsible for managing the receive window buffer for iWARP and processing packets that have MPA FPDU alignment or ordering issues.
The components and programming of the ECA 400 are arranged and configured to allow the ECA 400 to work with the known deployment models described above, including independent adapter, consolidated driver and Winsock Direct, and potential future deployment models. The ECA 400 can present itself on the PCI bus as one or many PCI functions as appropriate for the deployment model. The various I/O services, such as networking, clustering and block storage, can then be arranged in various manners to map to the presented PCI function or functions as appropriate for the particular deployment model. All of the services are then performed using the protocol engine 616 effectively independent of the deployment model as the various services are mapped to the protocol engine 616.
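For purposes of illustration, the presentation of one to many PCI functions, each exporting a group of I/O Services, could be pictured as the data structure below. This is an editorial sketch, not the ECA 400 configuration space; every field name is hypothetical.

```c
#include <stdint.h>

/* Hypothetical descriptor for one of the up to eight PCI Functions. */
struct eca_pci_function {
    uint8_t  enabled;      /* one to eight functions active        */
    uint8_t  io_services;  /* bitmask of exported I/O Services,    */
                           /* all programmed by one device driver  */
    uint8_t  mac[6];       /* always at least one unique MAC       */
    uint32_t ip;           /* usually at least one unique IP       */
};

struct eca_config {
    struct eca_pci_function fn[8];  /* deployment-model dependent */
};
```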
Prior to proceeding with the description, following are definitions of various terms.
Virtual Device: Generic term for the "I/O adapters" inside the ECA 400. The ECA 400 of the preferred embodiments implements the following virtual devices: four host NICs, which are connected to the operating system; twelve internal NICs, which are private NICs not exposed to the operating system directly; four management NICs; one TCP Offload Engine (TOE); one iSCSI acceleration engine; and one iWARP acceleration engine.
I/O Service: One or more virtual devices are used in concert to provide the I/O Services implemented by ECA 400. The four major ECA 400 I/O Services are: Network, Accelerated Sockets, Accelerated RDMA, and Block Storage. A given I/O Service may be provided by different underlying virtual devices, depending on the software environment that ECA 400 is operating in. For example, the Accelerated Sockets I/O Service is provided using TOE and Host NIC(s) in one scenario, but is provided using TOE and Internal NIC(s) in another scenario. Virtual devices are often not exclusively owned by the I/O Services they help provide. For example, both the Accelerated Sockets and Accelerated RDMA I/O Services are partly provided using the TOE virtual device. The only virtual device exclusively owned is iSCSI, which is owned by Block Storage.
PCI Function: ECA 400 is a PCI multi-function device as defined in the PCI Local Bus Specification, rev 2.3. ECA 400 implements from one to eight PCI Functions, depending on configuration. Each PCI Function exports a group of I/O Services that is programmed by the same device driver. A PCI Function usually has at least one unique IP address and always has at least one unique MAC address.
Endnode: A virtual device or set of virtual devices with a unique Ethernet MAC address.
ECA Logical Model: The ECA Logical Model describes how ECA 400 functionality (e.g., Ethernet ports, virtual devices, I/O Services) will be presented to end users. It is to be understood that certain aspects of the ECA Logical Model do not map directly and simply to the physical ECA 400 implementation. For example, there are no microswitches in the ECA 400 implementation; microswitches are virtual, and the transaction switch 608 implements their functionality. Further, the ECA Logical Model is dynamic. For example, different software environments and different ECA 400 Ethernet port configurations will lead to different ECA Logical Models. Among the things that can change from one ECA Logical Model to another: the number of microswitches can vary from 1 to 4, the number of active PCI Functions can vary from 1 to 8, the number of I/O Services can vary from 1 to 7, and the number of virtual devices can vary widely. Management and configuration software will save information in NVRAM that defines the Logical Model currently in use. Following are several examples of ECA Logical Models.
The following comments apply to any of the ECA Logical Models:
Each microswitch basically has the functionality of a layer 2 Ethernet switch. Each arrow connecting to a microswitch represents a unique endnode. The ECA 400 preferably comprises at least 20 unique Ethernet unicast MAC addresses as shown.
A microswitch is only allowed to connect between one active Ethernet port, or link aggregated port group, and a set of ECA 400 endnodes. This keeps the microswitch from requiring a large forwarding table, so that a microswitch behaves like a leaf switch with a single default uplink port. Inbound packets always terminate at one or more ECA 400 endnodes, so there is no possibility of switching from one external port to another. Outbound packets sent from one ECA 400 endnode may be internally switched to another ECA 400 endnode connected to the same microswitch. If internal switching is not required, the packet is always forwarded out the Ethernet or uplink port.
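These forwarding rules amount to a very small decision procedure, sketched below. The struct microswitch type, usw_forward, and the endnode bound are illustrative assumptions; in the real device the transaction switch 608 implements this behavior.

```c
#include <stdint.h>
#include <string.h>

#define MAX_ENDNODES 20  /* illustrative bound on endnodes per microswitch */

/* Hypothetical microswitch: one uplink port plus a set of local
 * endnode MAC addresses, so no large forwarding table is needed. */
struct microswitch {
    uint8_t endnode_mac[MAX_ENDNODES][6];
    int     num_endnodes;
};

/* Returns the endnode index for internal switching, or -1 to
 * forward out the default Ethernet/uplink port (leaf-switch style).
 * Inbound traffic always terminates at an endnode, so an external
 * port can never be switched to another external port. */
int usw_forward(const struct microswitch *sw, const uint8_t dst[6])
{
    for (int i = 0; i < sw->num_endnodes; i++)
        if (memcmp(sw->endnode_mac[i], dst, 6) == 0)
            return i;   /* internal switch to a local endnode */
    return -1;          /* default uplink */
}
```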
Each Ethernet port has its own unique unicast MAC address, termed an ECA 400 “management MAC address”. Packets using one of these management MAC addresses are always associated with a management NIC virtual device. Packets sent to these addresses will often be of the fabric management variety.
A box labeled "mgmt filter" within the microswitch represents special filtering rules that apply only to packets to/from the management NIC virtual devices. An example rule: prevent multicast packets transmitted from a management NIC from being internally switched.
If there is a “mux” or multiplexer in an ECA Logical Model, this signifies packet classification. In
Each I/O Service is associated with an “affiliated NIC group”. An “affiliated NIC group” always contains four NIC virtual devices. The number of active NIC virtual devices within an “affiliated NIC group” is always equal to the number of ECA 400 Ethernet ports in use. Organizing ECA 400 NIC virtual devices into “affiliated NIC groups” is useful because it helps determine which NIC should receive an inbound packet when link aggregation is active and because it helps prevent outbound packets from being internally switched in some cases.
Each accelerated I/O Service (Accelerated Sockets, Accelerated RDMA, and Block Storage) is associated with an "affiliated NIC group" because it provides a portion of its services using an "affiliated" TCP/IP stack running on the host or server. The "affiliated" TCP/IP stack transmits and receives packets on ECA 400 Ethernet ports via these affiliated NICs. There may be multiple TCP/IP stacks simultaneously running on the host to provide all of the ECA 400 I/O Services. The portions of the service provided by an "affiliated" TCP/IP stack are:
Initiates TCP/IP connections: An affiliated TCP/IP stack is responsible for initiating each TCP/IP connection and then notifying the ECA 400. Once notified, the ECA 400 will perform the steps required to transfer the connection from the host to the corresponding Accelerated I/O Service, and will then inform the host of the success or failure of the transfer in an asynchronous status message.
Performs IP fragment reassembly: the ECA 400 does not process inbound IP fragmented packets. Fragmented packets are received by their affiliated TCP/IP stack for reassembly, and are then returned to the ECA 400 for higher layer processing.
Processes fabric management messages, e.g., ARP or ICMP.
This portion of the service is algorithmically complex, is subject to numerous interoperability concerns, is a favorite target of Denial of Service (DoS) attackers, and does not require hardware acceleration to achieve good performance in typical scenarios. For these reasons, in the preferred embodiment these functions are provided using a host software solution rather than on-board logic. It is understood that on-board logic could be utilized if desired.
All I/O Services transfer data between the ECA 400 and the host using the Queue Pair (QP) concept from iWARP verbs. While the specific policy called out in the iWARP verbs specification may not be enforced on every I/O Service, the concepts of submitting work and completion processing are consistent with iWARP verbs. This allows a common method for submitting and completing work across all I/O Services. The WQE and CQE formats used on QPs and CQs differ significantly across I/O Services, but the mechanisms for managing WQs (work queues) and CQs (completion queues) are consistent across all I/O Services.
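A minimal host-side sketch of that common work-submission mechanism, assuming simple power-of-two ring buffers, might look like the following. WQE contents are deliberately opaque because their format varies per I/O Service, and all names here are invented.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical ring holding WQEs or CQEs; the entry format is
 * service-specific, so entries are treated as opaque bytes. */
struct ring {
    void    *entries;
    uint32_t entry_size;
    uint32_t size;      /* number of slots, assumed power of two */
    uint32_t head, tail;
};

struct queue_pair {
    struct ring sq;     /* send work queue                */
    struct ring rq;     /* receive work queue             */
    uint32_t    cq_id;  /* CQ this QP's WQs are mapped to */
};

/* Post one WQE: copy it into the ring; a doorbell write (see the
 * doorbell sketch above) would then publish the new tail. */
int post_wqe(struct ring *wq, const void *wqe)
{
    uint32_t next = (wq->tail + 1) & (wq->size - 1);
    if (next == wq->head)
        return -1;                      /* work queue is full */
    memcpy((char *)wq->entries + wq->tail * wq->entry_size,
           wqe, wq->entry_size);
    wq->tail = next;
    return 0;
}
```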
The ECA 400 preferably uses a flexible interrupt scheme that allows mapping of any interrupt to any PCI Function. The common elements of interrupt processing are the Interrupt Status Register, the Interrupt Mask Register, the CQ, and the Completion Event Queue (CEQ). The ECA 400 has sixteen CEQs that can be distributed across the eight PCI Functions. CEQs may be utilized to support quality of service (QoS) and work distribution across multiple processors. CQs are individually assigned to one of the sixteen CEQs under software control. Each WQ within each QP can be mapped to any CQ under software control. This model allows maximum flexibility for work distribution.
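The two levels of software-controlled assignment (CQ to CEQ, and CEQ to PCI Function) could be represented as in the sketch below; the tables and names are hypothetical.

```c
#include <stdint.h>

#define NUM_CEQS 16  /* sixteen CEQs across up to eight PCI Functions */

/* Hypothetical software-controlled assignment tables. */
struct irq_map {
    uint8_t ceq_to_function[NUM_CEQS]; /* CEQ -> owning PCI Function */
};

struct cq {
    uint16_t id;
    uint8_t  ceq;                      /* CQ -> CEQ, 0..15 */
};

/* A completion on a CQ raises an event on its CEQ, which is in turn
 * steered to the owning PCI Function for interrupt delivery. */
uint8_t target_function(const struct irq_map *m, const struct cq *cq)
{
    return m->ceq_to_function[cq->ceq];
}
```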
The ECA 400 has 16 special QPs that are utilized for resource assignment operations and contentious control functions. These Control QPs (CQPs) are assigned to specific PCI Functions. Access to CQPs is only allowed to privileged entities. This allows overlapped operation between verbs applications and time consuming operations, such as memory registration.
System software controls how the ECA 400 resources are allocated among the active I/O Services. Many ECA 400 resources can be allocated or reallocated during run time, including Memory Regions, PBL resources, and QPs/CQs associated with Accelerated I/O Services. Other ECA 400 resources, such as protection domains, must be allocated once upon reset. By allowing most ECA 400 resources to be allocated or reallocated during run time, the number of reboots and driver restarts required when performing ECA 400 reconfiguration is minimized.
As noted above, the ECA 400 allows I/O Services to be mapped to PCI Functions in many different ways. This mapping is done with strapping options or other types of power on configuration settings, such as NVRAM config bits. This flexibility is provided to support a variety of different operating systems. There are two major operating system types:
Unaware operating systems: In the context of this description, unaware operating systems are those that do not include a TCP/IP stack that can perform connection upload/download to an Accelerated Sockets, Accelerated RDMA, or Block Storage I/O Service. The TCP/IP stack is unaware of these various ECA 400 I/O Services. With such operating systems, the host TCP/IP stack is only used for unaccelerated connections, and one or more additional TCP/IP stacks, referred to throughout this description as internal stacks, exist to perform connection setup and fabric management for connections that will use Accelerated I/O Services. For example, any application that wishes to use an Accelerated RDMA connection will establish and manage the connection through an internal stack, not through the host stack.
Aware operating systems: In the context of this description, aware operating systems are those that include a TCP/IP stack that can perform connection upload/download to one or more of the Accelerated Sockets, Accelerated RDMA, or Block Storage I/O Services; i.e., the TCP/IP stack is aware of these various I/O Services. Currently those operating systems are only from Microsoft. Future Microsoft operating systems will incorporate a TOE chimney or TOE/RDMA chimney, enabling connection transfer between the host TCP/IP stack and the Accelerated Sockets or Accelerated RDMA I/O Services. Typically the host TCP/IP stack is used to establish a connection, and then the ECA 400 performs connection transfer to the Accelerated Sockets or Accelerated RDMA I/O Service. The advantage of this cooperation between the host stack and the ECA 400 is that it eliminates the need for many or all of the internal stacks.
Each of the operating system types described above can be further classified by the driver model it supports. The two driver models are described below:
Independent Driver model: Legacy operating systems such as Windows NT4 typically support only this model. These operating systems require a separate, independent driver to load for each I/O Service. With this model, the I/O Service to PCI Function ratio is always 1:1.
Consolidated Driver model: Also known as a Bus Driver model. Newer operating systems such as Windows 2000 and to a greater extent Windows Server 2003 support this type of driver. Here a single operating system driver can control multiple I/O Services, which means that the I/O Service to PCI Function ratio can be greater than one.
All of the examples below in this section show one Ethernet port per microswitch. It is understood that the ECA 400 can be configured with more than one Ethernet port assigned per microswitch.
The first example is unaware operating systems, independent driver model and is shown in
The Block Storage I/O Service 806 has access to both the iSCSI 812 and iWARP virtual devices 814, which allows it to support both iSCSI and iSER transfers.
If the host supports the simultaneous use of more than one RDMA API, such as VI and DAPL, then these APIs connect to the ECA 400 through a single shared PCI Function.
This model uses this fixed mapping between I/O Services and PCI Functions:
It is understood that administration of a machine with multiple active TCP/IP stacks is more complicated than administration of a machine with a single active TCP/IP stack, and that any attempt to interact between stacks must use unconventional means to provide a robust implementation, since no OS-architected method for such interaction is available.
Thus the Logical Model according to
The second example is the unaware operating systems, consolidated driver model as shown in
All I/O Services plus ECA 400 management can be programmed via a common PCI Function. For some operating systems, the Block Storage I/O Service might continue to require its own PCI Function.
By consolidating the Accelerated Sockets, Accelerated RDMA, and Block Storage I/O Services under a common PCI Function, these I/O Services are able to share a common internal stack. Since only two stacks are used, the number of IP addresses used can be reduced from 16 to 8. Further, eight internal NICs are not used, reducing the required number of MAC addresses from 20 to 12.
This model uses this fixed mapping between I/O Services and PCI Functions: PCI Function 0=Management network, Accelerated Sockets, Accelerated RDMA and Block Storage I/O Service.
The operating system software overhead is higher in this model as discussed above, especially in the interrupt distribution area. The device driver portion of the bus model is also more complicated to implement than legacy device drivers.
The virtual devices presented in the Logical Model according to
The third model is the aware operating system, consolidated driver model and is shown in
With the operating system aware, the host NICs and host TCP/IP stack can be used to set up accelerated TOE and iWARP connections. An internal stack is present to supply the Block Storage I/O Service, and may be used to supply the Accelerated RDMA I/O Service as well for those RDMA APIs that are not native to the operating system. For example, the DAPL API will not be native to the Microsoft chimney-enabled operating system. The number of IP addresses used is eight, and the number of MAC addresses used is twelve.
The Logical Model according to
The Windows Sockets Direct API model has two variations.
As common background, WSD requires a SAN NIC to support both accelerated RDMA-enabled traffic and unaccelerated host TCP/IP traffic. The SAN NIC accomplishes this by providing a normal NDIS driver interface for connection to the host TCP/IP stack and by providing a proprietary interface to the WSD Provider or SAN Provider and the WSD Proxy or SAN Management Driver for SAN services.
WSD allows for each SAN NIC to connect to a fabric that contains some IP subnets that are RDMA-enabled, and some that are not. For example, on an InfiniBand SAN, there might be an IP over IB gateway that connects the SAN to an Ethernet network that is reachable only via the SAN. Also for example, on an iWARP SAN, there might be some subnets that do not have ECA 400 adapters, but rather are connected using ordinary Ethernet NICs.
The Windows Sockets Switch keeps a list of IP subnets that are RDMA-enabled. When the two endnodes in a sockets session are not both RDMA-enabled, or are not on the same IP subnet, or the session is not using TCP transport, the Windows Sockets Switch implements the session using the host TCP/IP stack. Only when both endnodes in a sockets session are RDMA-enabled, and on the same IP subnet, and when the session is using TCP transport, will the Windows Sockets Switch implement the connection using the WSD Provider path. The concern here is that there will be a combination of accelerated and unaccelerated traffic on the RDMA-enabled IP subnets of the SAN.
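The switch's routing rule reduces to a small predicate, sketched here in C with hypothetical types; subnet membership is simplified to an identifier comparison.

```c
#include <stdbool.h>
#include <stdint.h>

enum path { HOST_TCPIP_STACK, WSD_PROVIDER };

/* Only a TCP session between two RDMA-enabled endnodes on the same
 * RDMA-enabled IP subnet takes the WSD Provider path; everything
 * else is implemented over the host TCP/IP stack. */
enum path choose_path(bool local_rdma, bool remote_rdma,
                      uint32_t local_subnet, uint32_t remote_subnet,
                      bool is_tcp)
{
    if (local_rdma && remote_rdma &&
        local_subnet == remote_subnet && is_tcp)
        return WSD_PROVIDER;       /* accelerated RDMA session */
    return HOST_TCPIP_STACK;       /* unaccelerated fallback   */
}
```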
In one implementation, the WSD Proxy Driver includes an internal stack for initiation of accelerated connections and related tasks. The WSD architecture assumes that the SAN fabric does not use IP addressing and that a translation from IP addresses to SAN addresses is required. The translation is expected to take place in the NIC driver for unaccelerated traffic, and in the WSD Proxy Driver for accelerated traffic. Of course, this assumption is not correct for the ECA 400. The ECA 400 NIC driver does not require address translation capability. However, a translation is still required for accelerated traffic, so that accelerated traffic can be distinguished from unaccelerated traffic on the RDMA-enabled IP subnets of the SAN. This translation is carried out in the WSD Proxy Driver.
According to the Logical Model of
ECA 400 configuration software uses silicon capabilities combined with user input to configure which PCI functions to enable and which I/O Services are mapped to which enabled PCI functions. This configuration information, termed “EEPROM Boot-up Register Overrides”, is stored in the ECA 400 EEPROM (not shown). Upon hard reset, the ECA 400 automatically reads this configuration information out of EEPROM, and applies it to the ECA 400 PCI configuration space registers. Typical registers that require EEPROM Boot-up Register Override include Device ID, Class Code, Subsystem Vendor ID, Subsystem ID, Interrupt Pin, and Config Overrides.
During reset initialization, the ECA 400 decides which PCI functions to enable using information stored in the "Config Overrides" PCI configuration register. When a given PCI function is not enabled, attempts to access its configuration space will result in a master abort.
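The reset-time flow might be visualized as follows. The register set listed above comes from the text, but the struct layout, the per-bit interpretation of "Config Overrides", and the function name are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical image of the EEPROM Boot-up Register Overrides that
 * are read automatically at hard reset and applied to the PCI
 * configuration space registers. */
struct eeprom_overrides {
    uint16_t device_id;
    uint32_t class_code;
    uint16_t subsys_vendor_id, subsys_id;
    uint8_t  interrupt_pin;
    uint8_t  config_overrides;  /* assumed: bit n enables Function n */
};

/* Config cycles to a disabled function end in a master abort. */
bool function_enabled(const struct eeprom_overrides *ov, unsigned fn)
{
    return fn < 8 && (ov->config_overrides & (1u << fn)) != 0;
}
```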
Thus the variation between Logical Models of the ECA 400 can be seen. The configuration registers 605 are configured to present the appropriate Functions or I/O Services, and their related register sets, to the PCI bus. For example, eight separate Functions are presented in
As the protocol engine 616 is a single unit, mapping values inside the protocol engine 616 are used to associate I/O Services and related virtual devices with the exposed PCI Functions. Exemplary mapping values include the NIC or NICs associated with a given MAC address, the outcome of the quad hash function, and connection context fields, including the protocol (such as iSCSI or iWARP), a value designating the responsible NIC, and the relevant PCI Function. A given NIC is only a virtual or logical construct inside the protocol engine 616, as only one actual hardware grouping is provided to perform each function.
Each packet received from the Ethernet fabric 310 is identified, using its destination MAC address, quad, and other packet header fields, with a set of mapping values managed by the protocol engine. These mapping values determine the Virtual Device(s) that will perform processing on the packet and the I/O Service and PCI Function the packet is affiliated with. The protocol engine 616 uses the mapping values to transfer relevant portions of the packet across the PCI interface 602 or 604 and into host memory 504 using the proper PCI Function. In the preferred embodiment the ECA 400 supports the programming of any I/O Service and any Virtual Device from any PCI Function. When drivers load, they learn through configuration parameters which I/O Services and Virtual Devices are configured as active on their PCI Function and restrict themselves to programming only those I/O Services and Virtual Devices. When a driver posts a new command to the adapter, mapping values inside the protocol engine 616 are used to associate each command with the appropriate I/O Service, Virtual Device(s) and Ethernet port. This enables the protocol engine 616 to determine the correct sequence of Virtual Devices that must process the command in order to carry it out. When processing a command involves transmission of packets, the packets are transmitted on the Ethernet port defined by said mapping values. The mapping values and their possible settings are sufficiently flexible to allow handling of the various instances described above and of others that will arise in the future.
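A schematic rendering of these mapping values, with a linear search standing in for the hardware MAC match and quad hash, is given below; every identifier is hypothetical.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mapping-value record tying a packet to its Virtual
 * Device chain, I/O Service, and PCI Function. */
struct quad { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; };

struct mapping {
    struct quad key;       /* connection quad; unused for NIC defaults */
    uint8_t mac[6];        /* endnode MAC for unaccelerated traffic    */
    uint8_t pci_function;  /* function used for host DMA               */
    uint8_t io_service;    /* Network, Sockets, RDMA, or Block Storage */
    uint8_t virtual_devs;  /* bitmask: NIC, TOE, iWARP, iSCSI          */
};

/* Classification sketch: an accelerated connection matches on its
 * quad; anything else falls back to the destination-MAC endnode. */
const struct mapping *classify(const struct mapping *maps, size_t n,
                               const uint8_t dst_mac[6],
                               const struct quad *q)
{
    for (size_t i = 0; i < n; i++)
        if (memcmp(&maps[i].key, q, sizeof(*q)) == 0)
            return &maps[i];          /* accelerated connection */
    for (size_t i = 0; i < n; i++)
        if (memcmp(maps[i].mac, dst_mac, 6) == 0)
            return &maps[i];          /* plain NIC endnode      */
    return NULL;  /* unmatched, e.g., filtered or dropped traffic */
}
```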
As an example, consider the logical model of
Each I/O Service has one or more dedicated host memory 504 work queues (not shown in
When a packet is received at Ethernet port 2 802, the ECA 400 uses its header fields to identify it with a set of mapping values. In this case, a first packet's header fields might identify it with mapping values that affiliate the packet with PCI Function 6, the Block Storage I/O Service, and NES NIC 14. A second packet's header fields might identify it with mapping values that affiliate the packet with PCI Function 6, the Block Storage I/O Service, the TOE virtual device, and the iWARP virtual device 814. This knowledge of Virtual Devices enables the protocol engine 616 to determine the correct sequence of submodules to carry out packet processing, which for the second packet would be TRX 718, then IRX 722, then WQM 710, then CQM 712. The mapping values enable the protocol engine 616 to interpret any received packet in the context of the configured Logical Model, to carry out received packet processing using the correct set of Virtual Device(s), and to transfer relevant portions of the packet across the PCI interface 602 or 604 using the proper PCI Function.
Had the same packet stream going to the same storage device been provided in a case according to
By providing this mapping capability, with flexibility in both the mapping and the various internal components, a single ECA 400 can handle numerous operating system deployment models. This flexibility allows maximum usage of the ECA 400 in the maximum number of environments without requiring different ECAs or major user reconfiguration.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.