System and method for network interfacing

Information

  • Patent Grant
  • Patent Number
    7,934,021
  • Date Filed
    Monday, June 8, 2009
  • Date Issued
    Tuesday, April 26, 2011
Abstract
Systems and methods for network interfacing may include a communication data center with a first tier, a second tier and a third tier. The first tier may include a first server with a first single integrated convergent network controller chip. The second tier may include a second server with a second single integrated convergent network controller chip. The third tier may include a third server with a third single integrated convergent network controller chip. The second server may be coupled to the first server via a single fabric with a single connector. The third server may be coupled to the second server via the single fabric with the single connector. The first, second and third servers each process a plurality of different traffic types concurrently via the respective first, second and third single integrated convergent network controller chips over the single fabric that is coupled to the single connector.
Description
FIELD OF THE INVENTION

Certain embodiments of the invention relate to interfaces for networks. More specifically, certain embodiments of the invention relate to a method and system for network interfacing.


BACKGROUND OF THE INVENTION

More information is being processed and stored as network traffic (e.g., Internet traffic) continues to grow at an astonishing rate. The average size of a file or a message continues to increase as larger amounts of data are generated, especially with respect to media rich files and messages. Consequently, more servers and more storage are being employed. To deal with the deluge of information, Data Centers used by Enterprises or Internet Service Providers (ISPs) have gained in popularity. Data Centers are high-density computing configurations generally characterized by high performance, low power and minimal real estate requirements.



FIG. 1 shows a general arrangement for a Data Center in three tiers, although in some cases the tiers may be collapsed. The first tier interfaces with the external network (e.g., a local area network (LAN) or a wide area network (WAN)) and directly serves the clients that typically run transmission control protocol/Internet protocol (TCP/IP) applications (e.g., hypertext transfer protocol (HTTP) 1.0 and HTTP 1.1). The first tier has static information from which it can draw via its direct attached storage (DAS). To satisfy requests for dynamic content or for transactions, the first tier interfaces with the second tier servers. The second tier, also known as the Application Tier, has multiple communication requirements: high performance storage access for content, typically serviced over a Fibre Channel Storage Area Network (SAN); communication with first tier servers over a LAN with TCP/IP over Ethernet; and communication with the third tier for database access over a low latency, low central processing unit (CPU) utilization fabric such as a Virtual Interface Architecture (VIA) for clustering and Interprocess Communication (IPC). Second tier servers often communicate among themselves for load sharing and concurrent execution of the application. Hence, a second tier machine may employ three different fabrics: LAN, SAN and clustering. A similar description is applicable to the third tier. Hence, each server has a collection of adapters to serve its requirements for networking, storage and clustering. Such configurations result in systems with substantial power and space requirements.


The three separate networks are quite different from each other. Clustering, small computer system interface (SCSI) for DAS and Fibre Channel use Host Bus Adapters (HBAs) that operate directly on application data and run complete layer 2 (L2), layer 3 (L3), layer 4 (L4) and layer 5 (L5) protocol processing stacks within on-board computers. These programs are large (typically greater than 100 KB) and computationally quite intensive. Furthermore, these programs expect large amounts of memory to operate. Additional adapters are also required for clustering/disk accesses. Block level storage access requires a dedicated network to run the SCSI command set (e.g., SCSI-architecture-model-2 (SAM-2)) on a dedicated cable (e.g., DAS) or a specialized infrastructure like Fibre Channel to satisfy unique requirements for high bandwidth, low latency and robustness. Clustering usually employs a specialized adapter that supports a very low latency network infrastructure that is usually proprietary. It also uses a special software interface to the operating system (OS) to minimize host overhead by employing OS kernel bypass techniques. As these different networks have evolved separately with their own unique requirements, separate adapter cards were needed.


As density requirements for servers increase, as evidenced by the use of Server Blades in servers, the space required for the three different types of adapters is becoming less available. The problem is further exacerbated since additional adapters may be used to provide fault tolerance or load balancing. Furthermore, financial considerations tend to drive organizations to seek a lower Total-Cost-of-Ownership (TCO) solution. The cost of managing three different fabrics and the toll on the information technology (IT) departments which must provide personnel trained in multiple technologies are substantial burdens to bear.



FIGS. 2 and 3 show conventional HBA arrangements and illustrate data and control flow. The HBA used for Fibre Channel and SCSI implements the complete protocol stack on the HBA and exposes the OS to a SCSI-like interface such as, for example, a command descriptor block (CDB). This places the burden of implementing the complete conversion from application memory down to the wire protocol on the adapter. Consequently, a large CPU with a large attached memory is used. The large attached memory is used to store the CPU program, transmit (TX) data and receive (RX) data, as well as copies of host control structures.


Remote direct memory access (RDMA) adapters for clustering, such as those used in InfiniBand systems, have similar architectures with even greater requirements for local memory to keep a copy of a memory translation table and other control structures.


Until recently, TCP/IP was not considered a feasible solution as it runs in software, which generally involves more CPU overhead and high latencies. Furthermore, TCP/IP does not guarantee that all segments are received from the wire in the order in which they were transmitted. Consequently, the TCP layer has to re-order the received segments to reconstruct the originally transmitted message. Nevertheless, protocols have been developed that run on TCP/IP. For example, Internet SCSI (iSCSI) places the SCSI command set on top of TCP/IP. In another example, iWARP places the IPC technology of RDMA on top of TCP/IP.
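
The re-ordering requirement can be made concrete with a short sketch. The following Python fragment is an illustration only, not part of the claimed invention: it reassembles out-of-order segment payloads by sequence number so that an upper-layer message can be reconstructed, holding back any bytes beyond a gap until the gap is filled.

```python
# Illustrative sketch: deliver TCP payload bytes in sequence-number order.
def reassemble(segments, initial_seq):
    """segments: iterable of (seq_number, payload_bytes), possibly out of order."""
    buffered = {seq: data for seq, data in segments}
    stream = bytearray()
    next_seq = initial_seq
    # Deliver contiguous bytes; anything past a gap stays buffered.
    while next_seq in buffered:
        data = buffered.pop(next_seq)
        stream += data
        next_seq += len(data)
    return bytes(stream), next_seq   # in-order stream and next expected sequence

if __name__ == "__main__":
    segs = [(1000, b"HELLO "), (1012, b"MESSAGE"), (1006, b"WORLD ")]
    print(reassemble(segs, 1000))    # (b'HELLO WORLD MESSAGE', 1019)
```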



FIGS. 4 and 5 show conventional servers. In FIG. 4, each type of traffic has its respective subsystem. For example, the storage subsystem has its own Ethernet connector, its own storage HBA and its own driver. The conventional server may even include one or more proprietary network interfaces. FIG. 5 shows another conventional server in which a layer 4/layer 5 (L4/L5) Ethernet switch is employed to reduce the number of Ethernet connectors. However, the conventional server still employs separate adapters and network interface cards (NICs). Furthermore, the conventional server may still employ a proprietary network interface which cannot be coupled to the L4/L5 Ethernet switch.


Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with some aspects of the present invention as set forth in the remainder of the present application with reference to the drawings.


BRIEF SUMMARY OF THE INVENTION

Aspects of the present invention may be found in, for example, systems and methods that provide network interfacing. In one embodiment, the present invention may provide a data center. The data center may include, for example, a first tier, a second tier and a third tier. The first tier may include, for example, a first server. The second tier may include, for example, a second server. The third tier may include, for example, a third server. At least one of the first server, the second server and the third server may handle a plurality of different traffic types over a single fabric.


In another embodiment, the present invention may provide a server. The server may include, for example, an integrated chip and an Ethernet connector. The Ethernet connector may be coupled to the integrated chip. The Ethernet connector and the integrated chip may handle, for example, a plurality of different types of traffic.


In yet another embodiment, the present invention may provide a method for communicating with a server. The method may include, for example, one or more of the following: using a single fabric for a plurality of different types of traffic; and handling the plurality of different types of traffic via a single layer 2 (L2) connector of the server.


In yet still another embodiment, the present invention may provide a method for communicating in a data center. The method may include, for example, one or more of the following: accessing a storage system over a single fabric; accessing a cluster over the single fabric; and accessing a network over the single fabric.


These and other advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.





BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS


FIG. 1 shows a general arrangement for a Data Center in three tiers.



FIG. 2 shows conventional host bus adapter (HBA) arrangements.



FIG. 3 shows a conventional HBA arrangement.



FIG. 4 shows a conventional server.



FIG. 5 shows another conventional server.



FIG. 6 shows a representation illustrating an embodiment of a Data Center according to the present invention.



FIG. 7 shows a representation illustrating an embodiment of a converged network controller (CNC) architecture and a host system according to the present invention.



FIG. 8 shows a representation illustrating an embodiment of a remote-direct-memory-access network interface card interface (RI) according to the present invention.



FIG. 9 shows a representation illustrating an embodiment of a server according to the present invention.



FIG. 10 shows a representation illustrating an embodiment of a server blade according to the present invention.



FIG. 11 shows a representation illustrating an embodiment of a TCP offload engine during receive according to the present invention.



FIG. 12 shows a representation illustrating an embodiment of a TCP offload engine during transmit according to the present invention.



FIG. 13 shows an embodiment of a method for storing and fetching context information according to the present invention.



FIG. 14 shows a representation illustrating an embodiment of a CNC software interface according to the present invention.



FIG. 15 shows a representation illustrating an embodiment of a CNC kernel remote-direct-memory-access (RDMA) software interface according to the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Aspects of the present invention may be found in, for example, systems and methods that provide network interfacing. Some embodiments according to the present invention provide a device that can handle all the communication needs of a computer (e.g., a server, a desktop, etc.). The device may use protocols that run on transmission control protocol/Internet protocol (TCP/IP) such as, for example, TCP/IP/Ethernet. Storage traffic may be handled using an Internet small computer system interface (iSCSI) protocol which relies on TCP as the transport and may employ a TCP offload engine to accelerate its operation. Clustering traffic may be handled using a remote direct memory access (RDMA) protocol that runs on top of TCP. Clustering may be combined into the same device as TCP offload. Further convergence may be achieved if the iSCSI protocol uses the RDMA fabric.


Some embodiments according to the present invention may provide for the convergence of three fabrics into a single TCP/IP/Ethernet-based fabric. The present invention also contemplates the convergence of more or fewer than three fabrics into a single TCP/IP/Ethernet-based fabric. One device may be placed on a motherboard and may support, for example, local area network (LAN), storage area network (SAN), network attached storage (NAS) and Cluster/Interprocess Communication (IPC). The device may allow for flexible resource allocation among the different fabrics and may allow for an implementation in which space is limited (e.g., a Server Blade environment). For example, a single device may replace three subsystems. Technology may be implemented using a single back plane to convey all types of traffic instead of three different protocols, each requiring its own dedicated lanes. Such technology may reduce cost and complexity as well as save on space.


Some embodiments according to the present invention provide an architectural approach (e.g., a multiple-in-one approach) in which storage, clustering and network requirements that need or otherwise would benefit from hardware acceleration are identified and implemented in hardware. Some aspects of complex processing may still be handled by the host. The multiple-in-one device may provide savings in silicon, processing power and memory requirements. Furthermore, the cost of each implementation and space used by each implementation may be substantially reduced such that features and functions may be combined on a single chip.


Some embodiments according to the present invention provide for a flow-through network interface card (NIC). The flow-through NIC may be optimized to minimize the resources used to handle different traffic types and different interfaces. For example, an iSCSI state may be kept mainly on a host memory. The host memory may be used, for example, for buffering of incomplete or uncommitted TCP, iSCSI or clustering data. Information tracking this data may be loaded into the single chip as needed. Consequently, the program on the adapter may be substantially smaller and use less central processing unit (CPU) power and memory.
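
One way to picture the flow-through idea, purely as an illustration and not as the patent's implementation, is a small on-chip context table backed by host memory: per-connection tracking state lives on the host and is fetched into the chip only when needed, with evicted entries written back. The table size and the eviction policy below are assumptions.

```python
# Illustrative sketch of an on-chip context cache backed by host memory.
from collections import OrderedDict

class ContextCache:
    def __init__(self, on_chip_slots, host_memory):
        self.on_chip = OrderedDict()    # connection id -> context, in LRU order
        self.slots = on_chip_slots
        self.host_memory = host_memory  # full per-connection state kept by the host

    def lookup(self, conn_id):
        if conn_id in self.on_chip:
            self.on_chip.move_to_end(conn_id)        # recently used
            return self.on_chip[conn_id]
        if len(self.on_chip) == self.slots:          # evict least recently used
            victim, ctx = self.on_chip.popitem(last=False)
            self.host_memory[victim] = ctx           # write state back to the host
        ctx = self.host_memory[conn_id]              # fetch tracking info on demand
        self.on_chip[conn_id] = ctx
        return ctx

if __name__ == "__main__":
    host = {1: {"seq": 100}, 2: {"seq": 200}, 3: {"seq": 300}}
    cache = ContextCache(on_chip_slots=2, host_memory=host)
    cache.lookup(1); cache.lookup(2); cache.lookup(3)   # evicts connection 1
    print(list(cache.on_chip))                          # [2, 3]
```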


Some embodiments according to the present invention may provide that some time-consuming mechanisms (e.g., per byte processing) may be performed in hardware. Some examples of time-consuming processes include, for example, header-data separation, zero-copy service to application buffers, data integrity checks and digests, and notification optimizations.


Some embodiments according to the present invention may provide for mode co-existence. On-chip resources (e.g., a context memory) that are used to keep the state information may be used with substantial flexibility based upon, for example, the needs of the host. The host may control the number of connections used for each communication type. The number of bytes used to store the context varies by connection type (e.g., TCP, TCP and IPSec, iSCSI on TCP, iSCSI on RDMA, and RDMA) and by the number of outstanding activities (e.g., the number of windows/regions, RDMA Reads, Commands, etc.). The host may control the context size per connection, thereby further optimizing the device utility. The device may also be fully compatible with existing LAN controllers. LAN traffic may be supported concurrently with TCP offload, iSCSI and RDMA traffic.
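
As a rough illustration of the pooled, per-connection context allocation described above (the byte counts and type names are assumptions, not values from the patent), a single shared pool can be drawn down by each connection according to its type and the number of outstanding objects it carries:

```python
# Illustrative sketch: one shared context pool instead of a dedicated resource per function.
CONTEXT_BYTES = {            # hypothetical per-type base context sizes
    "tcp": 256,
    "tcp_ipsec": 384,
    "iscsi_on_tcp": 512,
    "iscsi_on_rdma": 640,
    "rdma": 576,
}

class ContextPool:
    def __init__(self, total_bytes):
        self.free = total_bytes
        self.connections = {}

    def open(self, conn_id, conn_type, extra_objects=0, bytes_per_object=32):
        # Context size varies by connection type and by outstanding activity
        # (e.g., memory windows/regions, RDMA Reads, Commands).
        need = CONTEXT_BYTES[conn_type] + extra_objects * bytes_per_object
        if need > self.free:
            raise MemoryError("context pool exhausted; host must close or shrink connections")
        self.free -= need
        self.connections[conn_id] = need
        return need

    def close(self, conn_id):
        self.free += self.connections.pop(conn_id)

if __name__ == "__main__":
    pool = ContextPool(total_bytes=64 * 1024)
    pool.open(1, "iscsi_on_tcp")
    pool.open(2, "rdma", extra_objects=8)   # e.g., 8 memory windows/regions
    print(pool.free)
```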



FIG. 6 shows a representation illustrating an embodiment of a Data Center according to the present invention. The Data Center is illustrated as a three-tier architecture; however, the present invention also contemplates architectures with more or fewer than three tiers. In each tier, a server is shown with a layer 2/layer 4/layer 5 (L2/L4/L5) adapter. The single adapter may handle, for example, network traffic, storage traffic, cluster traffic and management traffic. In one embodiment according to the present invention, the single fabric may be based on TCP/IP. As a consequence of using a single L2/L4/L5 adapter for a particular server or server blade, the particular server or server blade may have a single IP address.



FIG. 7 shows a representation illustrating an embodiment of a converged network controller (CNC) architecture and a host system according to the present invention. The CNC architecture may be adapted to provide a flow-through NIC. In one embodiment, the CNC architecture provides a TCP enabled Ethernet controller (TEEC) that provides TCP offload services. Hardware, firmware and software may be added to provide layer 5 (L5) functionality. Protocols such as iSCSI and RDMA may be considered L5 technologies. Unlike conventional host bus adapter (HBA) architectures, the CNC architecture may provide for a different functionality split according to some embodiments of the present invention.


iSCSI may provide, for example, control functions and data transfer functions. The control functions may include, for example, discovery, security, management, login and error recovery. The data transfer portion may build iSCSI protocol data units (PDUs) from the SCSI CDBs it gets from the operating system (OS) and may submit them to the iSCSI peer for execution. An iSCSI session might include multiple TCP connections, each carrying commands, data and status information. For each connection, iSCSI may keep state information that is updated by iSCSI PDUs transmitted or received. The CNC architecture may keep all of this data on the host, thereby saving the costs and complexity of running it on the CNC. This also may overcome the limitations imposed by the limited memory available on a conventional HBA. The software interface exposed to the OS may be similar or identical to that of a conventional HBA. For example, the CNC may support the same SCSI miniport or Storport interface as in a Microsoft OS. The CNC may partition the work between the SCSI miniport running on the host and the CNC hardware differently from a conventional HBA.
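
The host-resident iSCSI state can be sketched as follows. This is a minimal illustration, not the CNC's actual data layout; the field names mirror common iSCSI sequence-number bookkeeping and are assumptions.

```python
# Illustrative sketch: an iSCSI session spanning several TCP connections, with
# per-connection state kept on the host and updated as PDUs are sent or received.
from dataclasses import dataclass, field

@dataclass
class ConnectionState:
    exp_stat_sn: int = 0        # next status sequence number expected on this connection
    pdus_sent: int = 0
    pdus_received: int = 0

@dataclass
class IscsiSession:
    cmd_sn: int = 0             # session-wide command sequence number
    connections: dict = field(default_factory=dict)   # cid -> ConnectionState

    def add_connection(self, cid: int):
        self.connections[cid] = ConnectionState()

    def on_pdu_received(self, cid: int, stat_sn: int):
        conn = self.connections[cid]
        conn.pdus_received += 1
        conn.exp_stat_sn = stat_sn + 1   # state updated by the PDU just received

if __name__ == "__main__":
    session = IscsiSession()
    session.add_connection(0)
    session.on_pdu_received(0, stat_sn=17)
    print(session.connections[0].exp_stat_sn)   # 18
```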


An embodiment of a hardware interface to the CNC for iSCSI operations according to the present invention is set forth below.


During transmission, the host may get the SCSI CDB and the iSCSI context for a connection and may then construct an iSCSI command with or without data. The host may transfer to the CNC an iSCSI PDU without the Digest(s) and the marker. A separate header and payload digest may be used. A cyclic redundancy check (CRC) such as CRC32c may be used. Specific fields in the PDU may carry the CRC results. The fields may also provide for a finite-interval marker (FIM) or other types of markers. The marker may be a 32-bit construct that may be placed in the TCP byte stream at a predetermined interval that is possibly negotiated during login. The CNC may construct TCP segments that carry the iSCSI PDUs, may compute the CRC, may insert the CRC in the corresponding fields and may insert the marker. Since the overhead of constructing an iSCSI PDU may be limited as compared with per-byte operations or with squeezing a marker inside the data, which may necessitate a copy or break of the data to allow for the marker, these operations may be moved into the CNC. Via a direct memory access (DMA) engine, the iSCSI PDU may be placed in the CNC in pieces that meet the TCP maximum transmission unit (MTU). If the PDU is larger than the TCP MTU, then the CNC may chop the PDU into MTU-size segments. Each section of the PDU may then be encapsulated into a TCP segment. The CNC may account for the marker (e.g., the FIM) and the locations of the header and data digests and may insert them in place for all the segments that combined form the iSCSI PDU.
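
As a simplified sketch of the transmit-side split described above (illustration only, not the patent's hardware), the per-byte CRC32c digest computation and the chopping of an oversized PDU into MSS-sized TCP payloads can be expressed as follows; real iSCSI digest placement, padding and FIM marker insertion per RFC 3720 are omitted for brevity.

```python
# Illustrative sketch of CRC32c digest computation and MTU/MSS segmentation.
def crc32c(data: bytes) -> int:
    """Bitwise CRC32c (Castagnoli polynomial, reflected form 0x82F63B78)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def segment_pdu(pdu: bytes, mss: int):
    """Chop a PDU larger than the TCP MSS into MSS-sized TCP payloads."""
    return [pdu[i:i + mss] for i in range(0, len(pdu), mss)]

if __name__ == "__main__":
    header, data = bytes(48), b"x" * 3000           # dummy 48-byte header plus data
    print(hex(crc32c(header)), hex(crc32c(data)))   # header and data digests
    print([len(s) for s in segment_pdu(header + data, mss=1460)])   # [1460, 1460, 128]
```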


Some embodiments according to the present invention may benefit from the construction of the iSCSI PDU by the host instead of on an HBA, since the host CPU may be much faster than an embedded CPU on an HBA, may have more memory resources and may be constrained by fewer limitations in constructing the iSCSI PDU at wire speed or faster with low CPU utilization. Consequently, the CNC may be leanly designed and may focus on data transfer acceleration rather than on control.


For a SCSI Write command encapsulated into an iSCSI command, a driver may keep an initiator task tag (ITT), a command sequence number (CmdSN), a buffer tag from the SCSI command and a host location for the data. The driver may use the information to program the CNC such that it may be ready for an incoming R2T. The iSCSI target may reply with an R2T for parts of the data or for the whole buffer. With possibly no software assistance, the CNC may automatically fetch the requested data from the host memory for transmission.


For a SCSI read command encapsulated into an iSCSI command, a driver may keep the ITT, the CmdSN, the buffer tag from the SCSI command and the host location for the data. The driver may use the information to program the CNC such that it may be ready for an incoming DATA_IN.
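
A driver-side sketch of this bookkeeping might look like the following; it is an illustration under assumed field names, not the patent's programming interface. The host records the per-command state keyed by ITT so that an incoming R2T (or DATA_IN) can be resolved to a host buffer address without software assistance.

```python
# Illustrative sketch of the per-command table the driver programs into the CNC.
from dataclasses import dataclass

@dataclass
class PendingCommand:
    itt: int            # initiator task tag
    cmd_sn: int         # command sequence number
    buffer_tag: int     # buffer tag from the SCSI command
    host_addr: int      # host memory location for the data
    length: int
    is_write: bool      # True: expect R2T; False: expect DATA_IN

pending = {}            # keyed by ITT, consulted when an R2T or DATA_IN arrives

def program_cnc(cmd: PendingCommand):
    pending[cmd.itt] = cmd   # in hardware this would be written to the CNC's table

def on_r2t(itt, buffer_offset, desired_length):
    cmd = pending[itt]
    # The CNC would DMA `desired_length` bytes starting at this host address and
    # build a DATA_OUT PDU in response, with possibly no software assistance.
    return cmd.host_addr + buffer_offset, desired_length

if __name__ == "__main__":
    program_cnc(PendingCommand(itt=7, cmd_sn=42, buffer_tag=3,
                               host_addr=0x10_0000, length=8192, is_write=True))
    print(hex(on_r2t(7, buffer_offset=4096, desired_length=2048)[0]))   # 0x101000
```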


During reception, as TCP segments are received for a connection that is an iSCSI connection, the CNC may keep track of iSCSI PDU boundaries. In keeping track of the iSCSI PDU boundaries, the CNC may process iSCSI PDU headers or markers (e.g., FIMs) and may receive assistance from drivers. When iSCSI traffic flows in order, the CNC may process one iSCSI PDU header after another to get the type of the PDU and its length. The marker, if used, may be placed in known intervals in the TCP sequence number. If the CNC is looking for the beginning of the next PDU in the TCP byte stream, it may get it from the marker. If the marker is not used, the driver may be of some assistance with out-of-order TCP segments. The driver may re-order the TCP data, may process the iSCSI PDU headers and may then feed the CNC with the next expected TCP sequence number for the next iSCSI PDU.
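
A simplified sketch of walking iSCSI PDU boundaries in an in-order TCP byte stream is shown below (illustration only): read each 48-byte Basic Header Segment, take the data segment length from it, and jump to the next header. Additional header segments, digests, padding and FIM markers are ignored here for brevity.

```python
# Illustrative sketch of tracking iSCSI PDU boundaries in an in-order byte stream.
BHS_LEN = 48   # iSCSI Basic Header Segment length

def pdu_boundaries(stream: bytes):
    """Yield (offset, opcode, data_len) for each complete PDU in the stream."""
    off = 0
    while off + BHS_LEN <= len(stream):
        bhs = stream[off:off + BHS_LEN]
        opcode = bhs[0] & 0x3F
        data_len = int.from_bytes(bhs[5:8], "big")   # DataSegmentLength field
        total = BHS_LEN + data_len
        if off + total > len(stream):
            break                                    # PDU not fully received yet
        yield off, opcode, data_len
        off += total

if __name__ == "__main__":
    hdr = bytes([0x05]) + bytes(4) + (11).to_bytes(3, "big") + bytes(40)
    print(list(pdu_boundaries(hdr + b"hello world")))   # [(0, 5, 11)]
```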


When the iSCSI PDU boundaries are known, the CNC may locate the markers (e.g., the FIM, if used) and may remove the markers from the data stream. The marker may not be part of the digest computation. The CNC may compute the header and data digests (if used) and compare them to the values found in the incoming PDU. In case of an error, the CNC may flag it and may drop the PDU or may pass it to the iSCSI software for further processing and possible recovery.


For DATA_IN, the CNC may separate the iSCSI PDU header and data. The CNC may use the ITT and the buffer offset to look into a table built when a SCSI Read command was sent. The data portion of the PDU may be stored in a designated buffer based upon, for example, a look-up value from the table. The header may be passed to the driver for further processing such as, for example, state updating.


The CNC may receive an R2T iSCSI command. Using the ITT, the buffer offset and a desired data transfer length, the CNC may fetch data from the host and construct an iSCSI DATA_OUT PDU in response.


The CNC may integrate complete RDMA capabilities. RDMA may be used to move data between two machines with minimal software overhead and minimal latency. RDMA may be used, for example, for IPC and for latency sensitive applications. Using RDMA, data transfer may be accelerated and may be separated from the control plane. It may accelerate any application without having to add any application knowledge to the CNC. For example, in support of iSCSI, the CNC might have to parse the iSCSI PDU, keep iSCSI specific state information and follow the iSCSI protocol for generating some actions such as, for example, a DATA_OUT in response to an R2T. RDMA may reduce or eliminate the need for additional application knowledge. Thus, the CNC may accelerate many applications over its RDMA service.


An RDMA NIC (RNIC) may support a marker-based upper layer protocol data unit (ULPDU) aligned (MPA) framing protocol such as, for example, MPA/direct data placement (DDP) as well as RDMA. The RNIC may support such protocols while exposing queue interfaces to the software as illustrated in FIG. 8. FIG. 8 shows a representation illustrating an embodiment of an RNIC interface (RI) according to the present invention. The RI may include, for example, the RNIC and the RNIC driver and library. In the illustrated queue pair (QP) model, each queue (e.g., a send queue (SQ), a receive queue (RQ) and a completion queue (CQ)) may have work queue elements (WQEs) or completion queue elements (CQEs) with a producer/consumer index. The CNC may process each WQE and may provide a CQE per the RDMA protocol. The CNC implementation may also be quite efficient with respect to the amount of memory and state kept on-chip.
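
The queue-pair model can be illustrated with a small ring abstraction; the following sketch is an assumption-laden illustration, not the RNIC's API. Software posts work queue elements to the send queue and the device reports completions on the completion queue, each ring tracked by producer and consumer indexes.

```python
# Illustrative sketch of the QP model: rings with producer/consumer indexes.
class Ring:
    def __init__(self, depth):
        self.slots = [None] * depth
        self.producer = 0          # advanced by the side that posts entries
        self.consumer = 0          # advanced by the side that consumes entries

    def post(self, entry):
        if self.producer - self.consumer == len(self.slots):
            raise RuntimeError("queue full")
        self.slots[self.producer % len(self.slots)] = entry
        self.producer += 1

    def poll(self):
        if self.consumer == self.producer:
            return None            # nothing outstanding
        entry = self.slots[self.consumer % len(self.slots)]
        self.consumer += 1
        return entry

class QueuePair:
    def __init__(self, depth=64):
        self.sq = Ring(depth)      # send queue: WQEs posted by software
        self.rq = Ring(depth)      # receive queue: buffers posted by software
        self.cq = Ring(depth)      # completion queue: CQEs produced by the device

def rnic_process(qp):
    """Model the device consuming one WQE and producing the matching CQE."""
    wqe = qp.sq.poll()
    if wqe is not None:
        qp.cq.post({"wqe": wqe, "status": "success"})

if __name__ == "__main__":
    qp = QueuePair()
    qp.sq.post({"opcode": "RDMA_WRITE", "length": 4096})
    rnic_process(qp)
    print(qp.cq.poll())
```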


The CNC may support multiple types of communications concurrently. For example, the CNC may support Ethernet traffic, TCP/IP traffic, iSCSI traffic, kernel RDMA, user-space RDMA and management traffic as illustrated in FIG. 9. FIG. 9 shows a representation illustrating an embodiment of a server (or client) according to the present invention. The server is shown with a single Ethernet connector coupled to a unified controller. The unified controller may allow for the sharing of particular components such as, for example, an L2 NIC and a TCP/IP processor. The software of the server includes, for example, a unified driver which provides the drivers for the multiple types of communication. The data may also flow along a unified path through the unified driver to the various services.



FIGS. 14 and 15 show embodiments of support by the CNC of different types of traffic according to the present invention. For example, FIG. 14 shows an embodiment of a CNC software interface according to the present invention. In another example, FIG. 15 shows an embodiment of a CNC kernel RDMA software interface according to the present invention.


The CNC may be configured to carry different traffic types on each TCP/IP connection. For every TCP/IP connection supported, the CNC may keep context information as illustrated in FIG. 13. The context information may be dynamically allocated to support any mix of traffic and may be flexible enough to allow for different amounts of resources even among connections of the same type. For example, some RDMA connections may be supported by many memory windows/regions while other RDMA connections may be supported by only a few memory windows/regions. The context information may be adapted to best serve each connection.


The CNC may integrate communication technologies that were traditionally delivered in separate integrated circuits and typically on separate adapters. The CNC may provide a TCP offload engine. FIGS. 11 and 12 show representations illustrating an embodiment of a TCP offload engine during receive and transmit, respectively, according to the present invention. The CNC may focus on data transfer in the hardware and connection set-up and tear-down in the software. The CNC may also provide iSCSI acceleration in hardware with minimal hardware that may deal with intensive per-byte operations or with accelerations (e.g., performance critical accelerations, R2T). In addition, the CNC may provide full-functionality RDMA with a minimal memory footprint and may exhibit lower cost than other proprietary solutions. A software unified driver architecture may manage the hardware resources and may allocate them to different communications mechanisms. The CNC approach also provides for highly versatile mapping of received frame payloads to a set of different types of host buffer structures (e.g., physical address, linked lists and virtual addressing). The CNC approach may also allow for simultaneous operation of all of the communication types and for dynamic resource allocation for them.


One or more embodiments according to the present invention may have one or more of the advantages as set forth below.


Some embodiments according to the present invention may provide a unified data path and a unified control path. A special block for each may be provided.


Some embodiments according to the present invention may provide multiple functions supported by a single IP address in a hardware accelerated environment.


Some embodiments according to the present invention may provide an efficient approach toward context memory through, for example, a flexible allocation of limited hardware resources for the various protocols in a hardware accelerated TCP offload engine. In at least one embodiment, the memory may be pooled instead of providing a dedicated resource per function.


Some embodiments according to the present invention may provide a single TCP stack with hardware acceleration that supports multiple protocols.


Some embodiments according to the present invention may provide acceleration of converged network traffic that may allow for the elimination of multiple deep packet lookups and of a series of dedicated ICs that process each of the protocols separately.


Some embodiments according to the present invention may provide for a low cost acceleration of converged network traffic by a single integrated circuit into multiple host software interfaces and may provide multiple distinct existing services.


Some embodiments according to the present invention may provide for a low cost acceleration of converged network traffic by a single integrated circuit into a single bus interface (e.g., peripheral component interface (PCI)) on a host hardware and may provide multiple distinct existing services. Multiple separate bus slots may be eliminated and low cost system chipsets may be allowed.


Some embodiments according to the present invention may provide for a single chip that may not need external memory or a physical interface and that may lower the cost and footprint to allow penetration into price sensitive markets. The single chip concept may allow for substantially higher volume than existing traditional designs.


Some embodiments according to the present invention may provide higher density servers via, for example, server blades adapted for the CNC approach, by converging all of the communications interfaces for the server on one connector. All of the server connectivity may be funneled through one connection on the back plane instead of multiple connections. The minimal footprint of the CNC may provide benefits especially in a space constrained environment such as servers (e.g., Server Blade servers). FIG. 10 shows a representation illustrating an embodiment of a server blade according to the present invention.


Some embodiments according to the present invention may eliminate the need for a plurality of registered jack-45 (RJ-45) connectors, thereby saving the cost of the connectors and the cabling along with alleviating the need to run multiple, twisted-pair cables in a physically constrained environment.


Some embodiments according to the present invention may provide functionality and integration, for example, by supporting all of the communication needs of the server. The need for separate IPC adapters and separate storage adapters may be reduced or eliminated. By using a single chip, limited real estate and one connector in a server, communications cost may be substantially reduced.


Some embodiments according to the present invention may provide high density servers by allowing the removal of functionality from each server. Density may be increased, for example, by eliminating the need for a hard disk or any storage adapter on each server, or the need for a separate KVM for each server.


Some embodiments according to the present invention may provide high density servers with minimal power consumption by using smaller power supplies and by minimizing the need for cooling that may allow for smaller mechanical form factors.


Some embodiments according to the present invention may provide for low cost servers with low cost CPUs that may deliver the same performance as may be expected from high cost CPUs with non-accelerated network controllers.


Some embodiments according to the present invention may provide for the integration of server management. The integration of server management may eliminate, for example, the need for a dedicated connector and may save the cost for a three-way switch typically used to split the management traffic from the rest of the communication traffic.


Some embodiments according to the present invention may replace the functionality provided by four or more separate adapters and may eliminate the need for a dedicated L4/L5 switch in front of them.



Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.


The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.


While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.

Claims
  • 1. A communication system, comprising: a first tier comprising a first server, the first server comprising a first single integrated convergent network controller (ICNC) chip;a second tier coupled to the first tier via a single fabric coupled to a single connector, the second tier comprising a second server, the second server comprising a second single ICNC chip, wherein the single fabric is operable to communicate utilizing a protocol of a group comprising TCP/IP and Ethernet;a third tier coupled to the second tier via the single fabric coupled to the single connector, the third tier comprising a third server, the third server comprising a third single ICNC chip; andwherein the first server, the second server and the third server process, respectively via the first single ICNC chip, the second single ICNC chip and the third single ICNC chip, a plurality of different traffic types concurrently over the single fabric that is coupled to the single connector,wherein each of the single first, second, and third ICNC chip comprises a layer 2/layer 4/layer 5 (L2/L4/L5) adapter, and at least one of the first, second, and third ICNC chips processes said plurality of different traffic types, wherein said plurality of different traffic types comprises: network traffic, storage traffic, interprocess communication (IPC) traffic, and cluster traffic over the single fabric, wherein the network traffic comprises: Internet or Ethernet traffic, and wherein the storage traffic comprises traffic from storage devices accessible via a network.
  • 2. The communication system according to claim 1, wherein the first server processes via the first single ICNC chip, at least one traffic type, said traffic types comprising: said network traffic and a direct attached storage (DAS) traffic over the single fabric.
  • 3. The communication system according to claim 1, wherein the second server processes via the second single ICNC chip at least two of said traffic types, said traffic types comprising: said network traffic, said storage traffic, said interprocess communication (IPC) traffic, and said cluster traffic over the single fabric.
  • 4. The communication system according to claim 1, wherein the second single ICNC chip of the second server processes at least two of said traffic types, said traffic types comprising: said network traffic, said storage traffic, said interprocess communication (IPC) traffic, and said cluster traffic over the single fabric.
  • 5. The communication system according to claim 4, wherein the storage traffic comprises traffic from a redundant-array-of-independent-disks (RAID) configuration.
  • 6. The communication system according to claim 1, wherein the second tier comprises an application tier.
  • 7. The communication system according to claim 1, wherein the third server processes via the third single ICNC chip at least two of said traffic types, said traffic types comprising: said network traffic, said storage traffic, said interprocess communication (IPC) traffic, and said cluster traffic over the single fabric.
  • 8. The communication system according to claim 1, wherein the third single ICNC chip of the third server processes at least two of said traffic types, said traffic types comprising: said network traffic, said storage traffic, said interprocess communication (IPC) traffic, and said cluster traffic over the single fabric.
  • 9. The communication system according to claim 1, wherein the single fabric utilizes an OSI transport layer and/or network layer protocol.
  • 10. The communication system according to claim 9, wherein the OSI transport layer and/or network layer protocol comprises said transmission control protocol/Internet protocol (TCP/IP).
  • 11. The communication system according to claim 1, wherein one or more of the first server, the second server and/or the third server uses an Internet small computer system interface (iSCSI) protocol in communicating with said storage device over the single fabric.
  • 12. The communication system according to claim 11, wherein the iSCSI protocol runs on top of said TCP/IP.
  • 13. The communication system according to claim 11, wherein the iSCSI protocol runs on top of a remote direct memory access protocol (RDMAP).
  • 14. The communication system according to claim 1, wherein one or more of the first server, the second server and/or the third server uses a RDMAP to process said interprocess communication (IPC) traffic.
  • 15. The communication system according to claim 1, wherein said single ICNC chip comprises a single OSI Physical Layer (PHY) coupled between said single connector and a single Media Access Controller (MAC) for handling said plurality of different types of traffic for said single ICNC chip.
  • 16. The communication system according to claim 1, wherein said single ICNC chip comprises a single frame parser for identifying each of said plurality of different types of traffic.
  • 17. The communication system according to claim 16, wherein said frame parser parses incoming frames of said plurality of different types of traffic into respective headers and data packets for subsequent data processing by the single ICNC chip.
  • 18. The communication system according to claim 1, wherein said single fabric comprises a single backplane for transporting said plurality of different types of traffic to the plurality of servers.
  • 19. A method for communication, the method comprising: routing a plurality of different types of traffic for a plurality of servers via a single fabric comprising a single connector, wherein the single fabric is operable to communicate utilizing a protocol of a group comprising TCP/IP and Ethernet, wherein each of said plurality of servers each comprises a single integrated convergent network controller (ICNC) chip; andconcurrently processing the plurality of different types of traffic for the plurality of servers, which is routed via the single fabric and the single connector, utilizing the single ICNC chip within each of the plurality of servers;wherein the single ICNC chip comprises a layer 2/layer 4/layer 5 (L2/L4/L5) adapter, and the single ICNC chip processes said plurality of different types of traffic, wherein said plurality of different types of traffic comprises: network traffic, storage traffic, interprocess communication (IPC) traffic, and cluster traffic over the single fabric, wherein the network traffic comprises: Internet or Ethernet traffic, wherein the storage traffic comprises traffic from storage devices accessible via a network.
  • 20. The method according to claim 19, wherein the single fabric utilizes an OSI transport layer protocol and/or network layer protocol-based fabric.
  • 21. The method according to claim 19, wherein said single ICNC chip comprises a single OSI Physical Layer (PHY) coupled between said single connector and a single Media Access Controller (MAC) for handling said plurality of different types of traffic for said single ICNC chip.
  • 22. The method according to claim 19, wherein said single ICNC chip comprises a single frame parser for identifying each of said plurality of different types of traffic.
  • 23. The method according to claim 22, wherein said frame parser parses incoming frames of said plurality of different types of traffic into respective headers and data packets for subsequent data processing by the single ICNC chip.
  • 24. The method according to claim 19, wherein said single fabric comprises a single backplane for transporting said plurality of different types of traffic to the plurality of servers.
  • 25. The communication system according to claim 1, wherein said single connector comprises an OSI Layer 2 (L2) connector.
  • 26. The method according to claim 19, wherein said single connector comprises an OSI Layer 2 (L2) connector.
  • 27. A system, comprising: an integrated convergent network controller (ICNC) chip for use by a server of a first tier of a multi-tier system, the ICNC chip comprising:a layer 2/layer 4/layer 5(L2/L4/L5) adapter; anda module operable to process traffic,wherein the first tier is coupled to a second tier and a third tier via a single fabric and a single connector, wherein the single fabric is operable to facilitate communication by utilizing a protocol of a group comprising TCP/IP and Ethernet,wherein a plurality of different traffic types are processed concurrently over the single fabric, wherein said plurality of different traffic types comprises: network traffic, storage traffic, interprocess communication (IPC) traffic, and cluster traffic over the single fabric, wherein the network traffic comprises: Internet or Ethernet traffic, and wherein the storage traffic comprises traffic from a storage device accessible via a network.
  • 28. The system according to claim 27, wherein the module processes at least one traffic type in a group comprising: said network traffic and a direct attached storage (DAS) traffic over the single fabric.
  • 29. The system according to claim 27, wherein the module processes at least two of said traffic types, said traffic types comprising: said network traffic, said storage traffic, said interprocess communication (IPC) traffic, and said cluster traffic over the single fabric.
  • 30. The system according to claim 29, wherein the storage traffic comprises traffic from a redundant-array-of-independent-disks (RAID) configuration.
  • 31. The system according to claim 27, wherein the first tier comprises an application tier.
  • 32. The system according to claim 27, wherein the single fabric utilizes an OSI transport layer and/or network layer protocol.
  • 33. The system according to claim 32, wherein the OSI transport layer and/or network layer protocol comprises said transmission control protocol/Internet protocol (TCP/IP).
  • 34. The system according to claim 27, wherein storage traffic is communicated using an Internet small computer system interface (iSCSI) protocol over the single fabric.
  • 35. The system according to claim 34, wherein the iSCSI protocol runs on top of said TCP/IP.
  • 36. The system according to claim 34, wherein the iSCSI protocol runs on top of a remote direct memory access protocol (RDMAP).
  • 37. The system according to claim 27, wherein interprocess communication IPC traffic is processed using a RDMAP.
  • 38. The system according to claim 27, wherein said ICNC chip comprises a single OSI Physical Layer (PHY) coupled between said single connector and a single Media Access Controller (MAC) for processing traffic.
  • 39. The system according to claim 27, wherein said ICNC chip comprises a single frame parser for identifying each of said plurality of different types of traffic.
  • 40. The communication system according to claim 39, wherein said frame parser parses incoming frames of said plurality of different types of traffic into respective headers and data packets for subsequent data processing by the ICNC chip.
  • 41. The system according to claim 27, wherein said single fabric comprises a single backplane for transporting said plurality of different types of traffic to a plurality of servers.
CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY REFERENCE

This application is a divisional of U.S. patent application Ser. No. 10/652,330, filed on Aug. 29, 2003, which claims priority to and claims benefit from U.S. Provisional Patent Application Ser. No. 60/477,279, entitled “System and Method for Network Interfacing in a Multiple Network Environment” and filed on Jun. 10, 2003; U.S. application Ser. No. 10/652,327, entitled “System and Method for Network Interfacing in a Multiple Network Environment” and filed on Aug. 29, 2003; U.S. Provisional Patent Application Ser. No. 60/478,106, entitled “System and Method for Network Interfacing” and filed on Jun. 11, 2003; U.S. application Ser. No. 10/652,330, entitled “System and Method for Network Interfacing” and filed on Aug. 29, 2003; U.S. Provisional Patent Application Ser. No. 60/408,617, entitled “System and Method for TCP/IP Offload” and filed on Sep. 6, 2002; U.S. Provisional Patent Application Ser. No. 60/407,165, entitled “System and Method for TCP Offload” and filed on Aug. 30, 2002; U.S. Provisional Patent Application Ser. No. 60/456,265, entitled “System and Method for TCP Offload” and filed on Mar. 30, 2003; U.S. patent application Ser. No. 10/652,267 entitled “System and Method for TCP Offload” and filed on Aug. 29, 2003, which is issued to U.S. Pat. No. 7,346,701 on Mar. 18, 2008; U.S. Provisional Patent Application Ser. No. 60/456,260, entitled “System and Method for Handling Out-of-Order Frames” and filed on Mar. 20, 2003; U.S. patent application Ser. No. 10/651,459, entitled “System and Method for Handling Out-of-Order Frames” and filed on Aug. 29, 2003, which is issued to U.S. Pat. No. 7,411,959 on Aug. 12, 2008; U.S. Provisional Patent Application Ser. No. 60/410,022, entitled “System and Method for TCP Offloading and Uploading” and filed on Sep. 11, 2002; U.S. patent application Ser. No. 10/298,817, entitled “System and Method for TCP Offloading and Uploading” and filed on Nov. 18, 2002; U.S. Provisional Patent Application Ser. No. 60/411,294, entitled “System and Method for Handling Frames in Multiple Stack Environments” and filed on Sep. 17, 2002; U.S. patent application Ser. No. 10/302,474, entitled “System and Method for Handling Frames in Multiple Stack Environments” and filed on Nov. 21, 2002, which is issued to U.S. Pat. No. 7,426,579 on Sep. 16, 2008; U.S. Provisional Patent Application Ser. No. 60/408,207, entitled “System and Method for Fault Tolerant TCP Offload” and filed on Sep. 4, 2002; U.S. patent application Ser. No. 10/337,029, entitled “System and Method for Fault Tolerant TCP Offload” and filed on Jan. 6, 2003, which is issued to U.S. Pat. No. 7,224,692 on May 29, 2007; U.S. Provisional Patent Application Ser. No. 60/405,539, entitled “Remote Direct Memory Access over TCP/IP using Generic Buffers for Non-posting TCP” and filed on Aug. 23, 2002; U.S. patent application Ser. No. 10/644,205, entitled “Method and System for TCP/IP Using Generic Buffers for Non-Posting TCP Applications” and filed on Aug. 20, 2003, which is issued to U.S. Pat. No. 7,457,845 on Nov. 25, 2008; U.S. Provisional Patent Application Ser. No. 60/398,663, entitled “Dual TCP/IP Stacks Connection Management for Winsock Direct (WSD)” and filed on Jul. 26, 2002; U.S. Patent application Ser. No. 10/336,983, entitled “System and Method for Managing Multiple Stack Environments” and filed on Jan. 6, 2003 now U.S. Pat. No. 7,647,414; U.S. Provisional Patent Application Ser. No. 60/434,503, entitled “System and Method for Handling Multiple Stack Environments” and filed on Dec. 18, 2002; U.S. 
Provisional Patent Application Ser. No. 60/403,817, entitled “One Shot RDMA Having Only a 2 Bit State” and filed on Aug. 14, 2002; U.S. patent application Ser. No. 10/642,023, entitled “One Shot RDMA Having a 2-Bit State” and filed on Aug. 14, 2003, which is issued to U.S. Pat. No. 7,398,300 on Jul. 8, 2008; U.S. Provisional Patent Application Ser. No. 60/404,709, entitled “Optimizing RDMA for Storage Applications” and filed on Aug. 19, 2002; U.S. patent application Ser. No. 10/643,331 entitled “System and Method for Transferring Data Over a Remote Direct Memory Access (RDMA) Network” and filed on Aug. 19, 2003; U.S. Provisional Patent Application Ser. No. 60/419,354, entitled “System and Method for Statistical Provisioning” and filed on Oct. 18, 2002; U.S. patent application Ser. No. 10/688,392 entitled “System and Method for Received Queue Provisioning” and filed on Oct. 17, 2003, which is issued to U.S. Pat. No. 7,508,837 on Mar. 24, 2009; U.S. Provisional Patent Application Ser. No. 60/420,901, entitled “System and Method for Statistical Provisioning” and filed on Oct. 24, 2002; U.S. Provisional Patent Application Ser. No. 60/439,951, entitled “System and Method for Statistical Provisioning” and filed on Jan. 14, 2003; U.S. patent application Ser. No. 10/688,373 entitled “System and Method for Receive Queue Provisioning” and filed on Oct. 17, 2003, which is issued to U.S. Pat. No. 7,430,211 on Sep. 30, 2008; U.S. Provisional Patent Application Ser. No. 60/442,360, entitled “System and Method for Statistical Provisioning” and filed on Jan. 24, 2003; U.S. Provisional Patent Application Ser. No. 60/425,959, entitled “Joint Memory Management for User Space and Storage” and filed on Nov. 12, 2002; U.S. patent application Ser. No. 10/704,891, entitled “System and Method for Managing Memory ” and filed on Nov. 10, 2003; U.S. Provisional Patent Application Ser. No. 60/456,266, entitled “Self-Describing Transport Protocol Segments” and filed on Mar. 20, 2003; U.S. patent application Ser. No. 10/803,719 entitled “Self-Describing Transport Protocol Segments” and filed on Mar. 18, 2004, which is issued to U.S. Pat. No. 7,385,974 on Jun. 10, 2008; U.S. Provisional Patent Application Ser. No. 60/437,887, entitled “Header Alignment and Complete PDU” and filed on Jan. 2, 2003; U.S. patent application Ser. No. 10/751,732 entitled “System and Method for Handling Transport Protocol Segments ” and filed on Jan. 2, 2004; U.S. Provisional Patent Application Ser. No. 60/456,322, entitled “System and Method for Handling Transport Protocol Segments” and filed on Mar. 20, 2003; and U.S. patent application Ser. No. 10/230,643, entitled “System and Method for Identifying Upper Layer Protocol Message Boundaries” and filed on Aug. 29, 2002, which is issued to U.S. Pat. No. 7,295,555 on Nov. 13, 2007. The above-referenced United States patent applications are hereby incorporated herein by reference in their entirety.

US Referenced Citations (438)
Number Name Date Kind
4333020 Maeder Jun 1982 A
4395774 Rapp Jul 1983 A
4433378 Leger Feb 1984 A
4445051 Elmasry Apr 1984 A
4449248 Leslie May 1984 A
4463424 Mattson Jul 1984 A
4519068 Krebs May 1985 A
4545023 Mizzi Oct 1985 A
4590550 Eilert May 1986 A
4599526 Paski Jul 1986 A
4649293 Ducourant Mar 1987 A
4680787 Marry Jul 1987 A
4717838 Brehmer Jan 1988 A
4721866 Chi Jan 1988 A
4727309 Vajdic Feb 1988 A
4737975 Shafer Apr 1988 A
4760571 Schwarz Jul 1988 A
4761822 Maile Aug 1988 A
4777657 Gillaspie Oct 1988 A
4791324 Hodapp Dec 1988 A
4794649 Fujiwara Dec 1988 A
4804954 Macnak Feb 1989 A
4806796 Bushey Feb 1989 A
4807282 Kazan Feb 1989 A
4817054 Banerjee Mar 1989 A
4817115 Campo Mar 1989 A
4821034 Anderson Apr 1989 A
4850009 Zook Jul 1989 A
4890832 Komaki Jan 1990 A
4894792 Mitchell Jan 1990 A
4916441 Gombrich Apr 1990 A
4964121 Moore Oct 1990 A
4969206 Desrochers Nov 1990 A
4970406 Fitzpatrick Nov 1990 A
4977611 Maru Dec 1990 A
4995099 Davis Feb 1991 A
5008879 Fischer Apr 1991 A
5025486 Klughart Jun 1991 A
5029183 Tymes Jul 1991 A
5031231 Miyazaki Jul 1991 A
5033109 Kawano Jul 1991 A
5041740 Smith Aug 1991 A
5055659 Hendrick Oct 1991 A
5055660 Bertagna Oct 1991 A
5079452 Lain Jan 1992 A
5081402 Koleda Jan 1992 A
5087099 Stolarczyk Feb 1992 A
5115151 Hull May 1992 A
5117501 Childress May 1992 A
5119502 Kallin Jun 1992 A
5121408 Cai Jun 1992 A
5122689 Barre Jun 1992 A
5123029 Bantz Jun 1992 A
5128938 Borras Jul 1992 A
5134347 Koleda Jul 1992 A
5142573 Umezawa Aug 1992 A
5149992 Allstot Sep 1992 A
5150361 Wieczorek Sep 1992 A
5152006 Klaus Sep 1992 A
5153878 Krebs Oct 1992 A
5162674 Allstot Nov 1992 A
5175870 Mabey Dec 1992 A
5177378 Nagasawa Jan 1993 A
5179721 Comroe Jan 1993 A
5181200 Harrison Jan 1993 A
5196805 Beckwith Mar 1993 A
5216295 Hoang Jun 1993 A
5230084 Nguyen Jul 1993 A
5239662 Danielson Aug 1993 A
5241542 Natarajan Aug 1993 A
5241691 Owen Aug 1993 A
5247656 Kabuo Sep 1993 A
5249220 Moskowitz Sep 1993 A
5249302 Metroka Sep 1993 A
5265238 Canova Nov 1993 A
5265270 Stengel Nov 1993 A
5274666 Dowdell Dec 1993 A
5276680 Messenger Jan 1994 A
5278831 Mabey Jan 1994 A
5289055 Razavi Feb 1994 A
5289469 Tanaka Feb 1994 A
5291516 Dixon Mar 1994 A
5293639 Wilson Mar 1994 A
5296849 Ide Mar 1994 A
5297144 Gilbert Mar 1994 A
5301196 Ewen Apr 1994 A
5304869 Greason Apr 1994 A
5315591 Brent May 1994 A
5323392 Ishii Jun 1994 A
5329192 Wu Jul 1994 A
5331509 Kikinis Jul 1994 A
5345449 Buckingham Sep 1994 A
5349649 Iijima Sep 1994 A
5355453 Row Oct 1994 A
5361397 Wright Nov 1994 A
5363121 Freund Nov 1994 A
5373149 Rasmussen Dec 1994 A
5373506 Tayloe Dec 1994 A
5390206 Rein Feb 1995 A
5392023 D'Avello Feb 1995 A
5406615 Miller Apr 1995 A
5406643 Burke Apr 1995 A
5418837 Johansson May 1995 A
5420529 Guay May 1995 A
5423002 Hart Jun 1995 A
5426637 Derby Jun 1995 A
5428636 Meier Jun 1995 A
5430845 Rimmer Jul 1995 A
5432932 Chen Jul 1995 A
5434518 Sinh Jul 1995 A
5437329 Brooks Aug 1995 A
5440560 Rypinski Aug 1995 A
5455527 Murphy Oct 1995 A
5457412 Tamba Oct 1995 A
5459412 Mentzer Oct 1995 A
5465081 Todd Nov 1995 A
5473607 Hausman Dec 1995 A
5481265 Russell Jan 1996 A
5481562 Pearson Jan 1996 A
5488319 Lo Jan 1996 A
5502719 Grant Mar 1996 A
5510734 Sone Apr 1996 A
5510748 Erhart Apr 1996 A
5519695 Purohit May 1996 A
5521530 Yao May 1996 A
5533029 Gardner Jul 1996 A
5535373 Olnowich Jul 1996 A
5544222 Robinson Aug 1996 A
5548230 Gerson Aug 1996 A
5548238 Zhang Aug 1996 A
5550491 Furuta Aug 1996 A
5576644 Pelella Nov 1996 A
5579487 Meyerson Nov 1996 A
5583456 Kimura Dec 1996 A
5583859 Feldmeier Dec 1996 A
5584048 Wieczorek Dec 1996 A
5600267 Wong Feb 1997 A
5603051 Ezzet Feb 1997 A
5606268 Van Brunt Feb 1997 A
5619497 Gallagher Apr 1997 A
5619650 Bach Apr 1997 A
5625308 Matsumoto Apr 1997 A
5628055 Stein May 1997 A
5630061 Richter May 1997 A
5640356 Gibbs Jun 1997 A
5640399 Rostoker Jun 1997 A
5668809 Rostoker Sep 1997 A
5675584 Jeong Oct 1997 A
5675585 Bonnot Oct 1997 A
5680038 Fiedler Oct 1997 A
5680633 Koenck Oct 1997 A
5689644 Chou Nov 1997 A
5724361 Fiedler Mar 1998 A
5726588 Fiedler Mar 1998 A
5732346 Lazaridis Mar 1998 A
5740366 Mahany Apr 1998 A
5742604 Edsall Apr 1998 A
5744366 Kricka Apr 1998 A
5744999 Kim Apr 1998 A
5748631 Bergantino May 1998 A
5754549 DeFoster May 1998 A
5767699 Bosnyak Jun 1998 A
5778414 Winter Jul 1998 A
5796727 Harrison Aug 1998 A
5798658 Werking Aug 1998 A
5802258 Chen Sep 1998 A
5802287 Rostoker Sep 1998 A
5802465 Hamalainen Sep 1998 A
5802576 Tzeng Sep 1998 A
5805927 Bowes Sep 1998 A
5821809 Boerstler Oct 1998 A
5826027 Pedersen Oct 1998 A
5828653 Goss Oct 1998 A
5829025 Mittal Oct 1998 A
5831985 Sandorfi Nov 1998 A
5839051 Grimmett Nov 1998 A
5844437 Asazawa Dec 1998 A
5848251 Lomelino Dec 1998 A
5859669 Prentice Jan 1999 A
5861881 Freeman Jan 1999 A
5875465 Kilpatrick Feb 1999 A
5877642 Hiroyuki Mar 1999 A
5887146 Baxter Mar 1999 A
5887187 Rostoker Mar 1999 A
5892382 Ueda Apr 1999 A
5892922 Lorenz Apr 1999 A
5893150 Hagersten Apr 1999 A
5893153 Tzeng Apr 1999 A
5903176 Westgate May 1999 A
5905386 Gerson May 1999 A
5908468 Hartmann Jun 1999 A
5909127 Pearson Jun 1999 A
5909686 Muller Jun 1999 A
5914955 Rostoker Jun 1999 A
5937169 Connery Aug 1999 A
5940771 Gollnick Aug 1999 A
5945847 Ransijn Aug 1999 A
5945858 Sato Aug 1999 A
5945863 Coy Aug 1999 A
5961631 Devereux Oct 1999 A
5969556 Hayakawa Oct 1999 A
5974508 Maheshwari Oct 1999 A
5977800 Iravani Nov 1999 A
5978379 Chan Nov 1999 A
5978849 Khanna Nov 1999 A
5987507 Creedon Nov 1999 A
6002279 Evans Dec 1999 A
6008670 Pace Dec 1999 A
6014041 Somasekhar Jan 2000 A
6014705 Koenck Jan 2000 A
6025746 So Feb 2000 A
6026075 Linville Feb 2000 A
6028454 Elmasry Feb 2000 A
6037841 Tanji Mar 2000 A
6037842 Bryan Mar 2000 A
6038254 Ferraiolo Mar 2000 A
6049528 Hendel et al. Apr 2000 A
6061351 Erimli May 2000 A
6061747 Ducaroir May 2000 A
6064626 Stevens May 2000 A
6081162 Johnson Jun 2000 A
6094074 Chi Jul 2000 A
6098064 Pirolli Aug 2000 A
6104214 Ueda Aug 2000 A
6111425 Bertin Aug 2000 A
6111859 Godfrey Aug 2000 A
6114843 Olah Sep 2000 A
6118776 Berman Sep 2000 A
6122667 Chung Sep 2000 A
6141705 Anand Oct 2000 A
6151662 Christie Nov 2000 A
6157623 Kerstein Dec 2000 A
6178159 He Jan 2001 B1
6185185 Bass Feb 2001 B1
6188339 Hasegawa Feb 2001 B1
6194950 Kibar Feb 2001 B1
6202125 Patterson Mar 2001 B1
6202129 Palanca Mar 2001 B1
6209020 Angle Mar 2001 B1
6215497 Leung Apr 2001 B1
6218878 Ueno Apr 2001 B1
6222380 Gerowitz Apr 2001 B1
6223239 Olarig Apr 2001 B1
6223270 Chesson Apr 2001 B1
6226680 Boucher May 2001 B1
6232844 Talaga May 2001 B1
6243386 Chan Jun 2001 B1
6247060 Boucher Jun 2001 B1
6253334 Amdahl et al. Jun 2001 B1
6259312 Murtojarvi Jul 2001 B1
6265898 Bellaouar Jul 2001 B1
6266797 Godfrey Jul 2001 B1
6269427 Kuttanna Jul 2001 B1
6279035 Brown Aug 2001 B1
6310501 Yamashita Oct 2001 B1
6324181 Wong Nov 2001 B1
6332179 Okpisz Dec 2001 B1
6334153 Boucher Dec 2001 B2
6345301 Burns Feb 2002 B1
6349098 Parruck Feb 2002 B1
6349365 McBride Feb 2002 B1
6356944 McCarty Mar 2002 B1
6363011 Hirose Mar 2002 B1
6366583 Rowett Apr 2002 B2
6373846 Daniel Apr 2002 B1
6374311 Mahany Apr 2002 B1
6385201 Iwata May 2002 B1
6389479 Boucher May 2002 B1
6396832 Kranzler May 2002 B1
6396840 Rose May 2002 B1
6424194 Hairapetian Jul 2002 B1
6424624 Galand Jul 2002 B1
6427171 Craft Jul 2002 B1
6427173 Boucher Jul 2002 B1
6434620 Boucher Aug 2002 B1
6438651 Slane Aug 2002 B1
6446109 Gupta Sep 2002 B2
6459681 Oliva Oct 2002 B1
6463092 Kim Oct 2002 B1
6470029 Shimizu Oct 2002 B1
6484224 Robins Nov 2002 B1
6496479 Shionozaki Dec 2002 B1
6649343 Hirota Dec 2002 B1
6529963 Fredin et al. Mar 2003 B1
6535518 Hu et al. Mar 2003 B1
6538486 Chen Mar 2003 B1
6564267 Lindsay May 2003 B1
6597689 Chiu Jul 2003 B1
6597956 Aziz et al. Jul 2003 B1
6606321 Natanson Aug 2003 B1
6614791 Luciani Sep 2003 B1
6614796 Black Sep 2003 B1
6631351 Ramachandran Oct 2003 B1
6633936 Keller Oct 2003 B1
6636947 Neal Oct 2003 B1
6658599 Linam Dec 2003 B1
6665759 Dawkins Dec 2003 B2
6675200 Cheriton et al. Jan 2004 B1
6681283 Radhika Jan 2004 B1
6697868 Craft Feb 2004 B2
6744782 Itakura Jun 2004 B1
6757291 Hu Jun 2004 B1
6757746 Boucher Jun 2004 B2
6765901 Johnson Jul 2004 B1
6766389 Hayter Jul 2004 B2
6788686 Khotimsky Sep 2004 B1
6788704 Lindsay Sep 2004 B1
6807581 Starr et al. Oct 2004 B1
6816932 Cho Nov 2004 B2
6845403 Chadalapaka Jan 2005 B2
6850521 Kadambi Feb 2005 B1
6859435 Lee Feb 2005 B1
6862296 Desai Mar 2005 B1
6865158 Iwamoto Mar 2005 B2
6874054 Clayton Mar 2005 B2
6897697 Yin May 2005 B2
6904519 Anand Jun 2005 B2
6911855 Yin Jun 2005 B2
6912603 Kanazashi Jun 2005 B2
6927606 Kocaman Aug 2005 B2
6937080 Hairapetian Aug 2005 B2
6938092 Burns Aug 2005 B2
6971006 Krishna Nov 2005 B2
6975629 Welin Dec 2005 B2
6976205 Ziai Dec 2005 B1
6982583 Yin Jan 2006 B2
6988150 Matters et al. Jan 2006 B2
7007103 Pinkerton Feb 2006 B2
7009985 Black Mar 2006 B2
7010607 Bunton Mar 2006 B1
7103888 Cayton et al. Sep 2006 B1
7142540 Hendel et al. Nov 2006 B2
7149819 Pettey Dec 2006 B2
7181531 Pinkerton Feb 2007 B2
7185266 Blightman Feb 2007 B2
7194519 Muhlestein et al. Mar 2007 B1
7212534 Kadambi May 2007 B2
7346701 Elzur Mar 2008 B2
7362769 Black Apr 2008 B2
7366190 Black Apr 2008 B2
7376755 Pandya May 2008 B2
7382790 Warren Jun 2008 B2
7385972 Black Jun 2008 B2
7397788 Mies Jul 2008 B2
7397800 Elzur Jul 2008 B2
7400639 Madukkarumukumana Jul 2008 B2
7411959 Elzur Aug 2008 B2
7430171 Black Sep 2008 B2
7472156 Philbrick Dec 2008 B2
7515612 Thompson Apr 2009 B1
7586850 Warren Sep 2009 B2
7644188 Vlodavsky Jan 2010 B2
20010023460 Boucher et al. Sep 2001 A1
20010026553 Gallant Oct 2001 A1
20010037397 Boucher Nov 2001 A1
20010037406 Philbrick et al. Nov 2001 A1
20010049740 Karpoff Dec 2001 A1
20020059451 Haviv May 2002 A1
20020062333 Anand May 2002 A1
20020065924 Barrall et al. May 2002 A1
20020069245 Kim Jun 2002 A1
20020078265 Frazier Jun 2002 A1
20020085562 Hufferd Jul 2002 A1
20020089927 Fischer Jul 2002 A1
20020095519 Philbrick Jul 2002 A1
20020103988 Dornier Aug 2002 A1
20020120763 Miloushev et al. Aug 2002 A1
20020130692 Hairapetian Sep 2002 A1
20020174253 Hayter Nov 2002 A1
20020190770 Yin Dec 2002 A1
20020194400 Porterfield Dec 2002 A1
20020198927 Craddock et al. Dec 2002 A1
20020198934 Kistler et al. Dec 2002 A1
20030001646 Hairapetian Jan 2003 A1
20030016628 Kadambi Jan 2003 A1
20030021229 Kadambi Jan 2003 A1
20030038809 Peng Feb 2003 A1
20030046330 Hayes Mar 2003 A1
20030046396 Richter et al. Mar 2003 A1
20030046418 Raval Mar 2003 A1
20030051128 Rodriguez Mar 2003 A1
20030061505 Sperry Mar 2003 A1
20030067337 Yin Apr 2003 A1
20030079033 Craft Apr 2003 A1
20030084185 Pinkerton May 2003 A1
20030105977 Brabson Jun 2003 A1
20030107996 Black Jun 2003 A1
20030108050 Black Jun 2003 A1
20030108058 Black Jun 2003 A1
20030108060 Black Jun 2003 A1
20030108061 Black Jun 2003 A1
20030118040 Black Jun 2003 A1
20030140124 Burns Jul 2003 A1
20030169753 Black Sep 2003 A1
20030172342 Elzur Sep 2003 A1
20030174720 Black Sep 2003 A1
20030174721 Black Sep 2003 A1
20030174722 Black Sep 2003 A1
20030198251 Black Oct 2003 A1
20030204631 Pinkerton Oct 2003 A1
20030204634 Pinkerton Oct 2003 A1
20040010674 Boyd et al. Jan 2004 A1
20040019652 Freimuth Jan 2004 A1
20040042458 Elzur Mar 2004 A1
20040042464 Elzur Mar 2004 A1
20040042483 Elzur Mar 2004 A1
20040042487 Ossman Mar 2004 A1
20040044798 Elzur Mar 2004 A1
20040062245 Sharp Apr 2004 A1
20040062267 Minami et al. Apr 2004 A1
20040062275 Siddabathuni Apr 2004 A1
20040081186 Warren Apr 2004 A1
20040085972 Warren May 2004 A1
20040085994 Warren May 2004 A1
20040093411 Elzur May 2004 A1
20040133713 Elzur Jul 2004 A1
20040213205 Li et al. Oct 2004 A1
20040227544 Yin Nov 2004 A1
20050027911 Hayter Feb 2005 A1
20050160139 Boucher Jul 2005 A1
20050165980 Clayton Jul 2005 A1
20050184765 Hairapetian Aug 2005 A1
20050185654 Zadikian Aug 2005 A1
20050216597 Shah Sep 2005 A1
20050278459 Boucher Dec 2005 A1
20060165115 Warren Jul 2006 A1
20060176094 Hairapetian Aug 2006 A1
20070170966 Hairapetian Jul 2007 A1
20070171914 Kadambi Jul 2007 A1
20070237163 Kadambi Oct 2007 A1
20080025315 Elzur Jan 2008 A1
20080095182 Elzur Apr 2008 A1
20080151922 Elzur Jun 2008 A1
20080205421 Black Aug 2008 A1
20080276018 Hayter Nov 2008 A1
20080298369 Elzur Dec 2008 A1
20090074408 Black Mar 2009 A1
20090128380 Hairapetian May 2009 A1
Foreign Referenced Citations (21)
Number Date Country
0465090 Apr 1996 EP
0692892 Apr 2003 EP
1345382 Sep 2003 EP
1357721 Oct 2003 EP
1460804 Sep 2004 EP
1460805 Sep 2004 EP
1460806 Sep 2004 EP
1206075 Nov 2007 EP
1537695 Feb 2009 EP
2725573 Nov 1994 FR
19940012105 Apr 1996 FR
1188301 Jul 1989 JP
6232872 Aug 1994 JP
9006691 Jan 1997 JP
11243420 Sep 1999 JP
2001045092 Feb 2001 JP
2001313717 Nov 2001 JP
WO9900948 Jan 1999 WO
WO0056013 Sep 2000 WO
WO0235784 May 2002 WO
WO03079612 Sep 2003 WO
Related Publications (1)
Number Date Country
20090254647 A1 Oct 2009 US
Provisional Applications (22)
Number Date Country
60477279 Jun 2003 US
60478106 Jun 2003 US
60408617 Sep 2002 US
60407165 Aug 2002 US
60456265 Mar 2003 US
60456260 Mar 2003 US
60410022 Sep 2002 US
60411294 Sep 2002 US
60408207 Sep 2002 US
60405539 Aug 2002 US
60398663 Jul 2002 US
60434503 Dec 2002 US
60403817 Aug 2002 US
60404709 Aug 2002 US
60419354 Oct 2002 US
60420901 Oct 2002 US
60439951 Jan 2003 US
60442360 Jan 2003 US
60425959 Nov 2002 US
60456266 Mar 2003 US
60437887 Jan 2003 US
60456322 Mar 2003 US
Divisions (17)
Number Date Country
Parent 10652330 Aug 2003 US
Child 12480637 US
Parent 10652327 Aug 2003 US
Child 10652330 US
Parent 10652267 Aug 2003 US
Child 10652327 US
Parent 10651459 Aug 2003 US
Child 10652267 US
Parent 10298817 Nov 2002 US
Child 10651459 US
Parent 10302474 Nov 2002 US
Child 10298817 US
Parent 10337029 Jan 2003 US
Child 10302474 US
Parent 10644205 Aug 2003 US
Child 10337029 US
Parent 10336983 Jan 2003 US
Child 10644205 US
Parent 10642023 Aug 2003 US
Child 10336983 US
Parent 10643331 Aug 2003 US
Child 10642023 US
Parent 10688392 Oct 2003 US
Child 10643331 US
Parent 10688373 Oct 2003 US
Child 10688392 US
Parent 10704891 Nov 2003 US
Child 10688373 US
Parent 10803719 Mar 2004 US
Child 10704891 US
Parent 10751732 Jan 2004 US
Child 10803719 US
Parent 10230643 Aug 2002 US
Child 10751732 US